diff --git a/.astro/settings.json b/.astro/settings.json
index ceb2ed8a8..ebb445cdf 100644
--- a/.astro/settings.json
+++ b/.astro/settings.json
@@ -3,6 +3,6 @@
"enabled": false
},
"_variables": {
- "lastUpdateCheck": 1728296475293
+ "lastUpdateCheck": 1731065649795
}
}
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index af5f0247e..81d7bf8f8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
.idea
.temp
+.astro
# build output
dist/
diff --git a/contributing.md b/contributing.md
index 1bd29b362..3c3766abb 100644
--- a/contributing.md
+++ b/contributing.md
@@ -1,71 +1,18 @@
-# Contribution
+# ✨ Contribution Guidelines ✨
First of all, thank you for considering to contribute. Please look at the details below:
-- [Hacktoberfest Contributions](#hacktoberfest-contributions)
- [New Roadmaps](#new-roadmaps)
- [Existing Roadmaps](#existing-roadmaps)
- [Adding Projects](#adding-projects)
- [Adding Content](#adding-content)
- [Guidelines](#guidelines)
-
-## Hacktoberfest Contributions
-
-We are taking part in [Hacktoberfest 11](https://hacktoberfest.com/)!
-
-Before you start to contribute to our project in order to satisfy [Hacktoberfest requirements](https://hacktoberfest.com/participation/#contributors), please bare in mind the following:
-
-* There is not a Hacktoberfest t-shirt this year [(see their FAQ)](https://hacktoberfest.com/participation/#faq).
-* There is not an infinite opportunity to contribute to the roadmap.sh project.
-
-### Hacktoberfest Specific Contribution rules
-
-As Hacktoberfest attracts a lot of contributors (which is awesome), it does require a more rigid and strictly enforced set of guidelines than the average contribution.
-
-These are as follows:
-
-1. No single file contributions, please contribute to a minimum of two.
-
-Whilst single file contributions, such as adding one link to a single topic, is perfectly fine outside of hacktoberfest, this can (and probably will) result it an easy 4 pull requests for everyone and we will just become a Hacktoberfest farming project.
-
-***Note: If you contribute the entire contents of a topic i.e. the topic has 0 copy and 0 links, this will count.***
-
-2. Typo fixes will not count (by themselves).
-
-Whilst fixing typos is a great thing to do, lets bundle them in with actual contributions if we see them!
-
-3. The same basic rules apply.
-
-- Content must be in English.
-- Maximum of 8 links per topic.
-- Follow the below style guide for content.
-
-Here is an example of a **fully complete** topic:
-
-```markdown
-# Redis
-
-Redis is an open-source, in-memory data structure store known for its speed and versatility. It supports various data types, including strings, lists, sets, hashes, and sorted sets, and provides functionalities such as caching, session management, real-time analytics, and message brokering. Redis operates as a key-value store, allowing for rapid read and write operations, and is often used to enhance performance and scalability in applications. It supports persistence options to save data to disk, replication for high availability, and clustering for horizontal scaling. Redis is widely used for scenarios requiring low-latency access to data and high-throughput performance.
-
-Learn more from the following resources:
-
-[@official@Link 1](https:/google.com)
-[@article@Link 2](https:/google.com)
-[@article@Link 3](https:/google.com)
-[@course@Link 4](https:/google.com)
-[@course@Link 5](https:/google.com)
-[@video@Link 6](https:/google.com)
-[@video@Link 7](https:/google.com)
-[@video@Link 8](https:/google.com)
-```
-
-Contributions to the project that meet these requirements will be given the label `hacktoberfest-accepted` and merged, contributions that do not meet the requirements will simply be closed.
-
-Any attempts at spam PRs will be given the `spam` tag. If you recieve 2 `spam` tags against you, you will be [disqualified from Hacktoberfest](https://hacktoberfest.com/participation/#spam).
+- [Good vs. Not So Good Contributions](#good-vs-not-so-good-contributions)
## New Roadmaps
For new roadmaps, you can either:
+
- Submit a roadmap by providing [a textual roadmap similar to this roadmap](https://gist.github.com/kamranahmedse/98758d2c73799b3a6ce17385e4c548a5) in an [issue](https://github.com/kamranahmedse/developer-roadmap/issues).
- Create an interactive roadmap yourself using [our roadmap editor](https://draw.roadmap.sh/) & submit the link to that roadmap in an [issue](https://github.com/kamranahmedse/developer-roadmap/issues).
@@ -73,10 +20,10 @@ For new roadmaps, you can either:
For the existing roadmaps, please follow the details listed for the nature of contribution:
-- **Fixing Typos** — Make your changes in the [roadmap JSON file](https://github.com/kamranahmedse/developer-roadmap/tree/master/src/data/roadmaps) and submit a [PR](https://github.com/kamranahmedse/developer-roadmap/pulls).
+- **Fixing Typos** — Make your changes in the [roadmap markdown file](https://github.com/kamranahmedse/developer-roadmap/tree/master/src/data/roadmaps) and submit a [PR](https://github.com/kamranahmedse/developer-roadmap/pulls).
- **Adding or Removing Nodes** — Please open an [issue](https://github.com/kamranahmedse/developer-roadmap/issues) with your suggestion.
-**Note:** Please note that our goal is not to have the biggest list of items . Our goal is to list items or skills most relevant today.
+**Note:** Please note that our goal is **not to have the biggest list of items**. Our goal is to list items or skills most relevant today.
## Adding Projects
@@ -84,7 +31,7 @@ If you have a project idea that you think we should add to the roadmap, feel fre
The detailed format for the issue should be as follows:
-```
+```md
## What is this project about?
(Add an introduction to the project.)
@@ -112,14 +59,14 @@ Find [the content directory inside the relevant roadmap](https://github.com/kamr
Please adhere to the following style when adding content to a topic:
-```
+```md
# Topic Title
(Content)
Visit the following resources to learn more:
-- [@type@Description of link](Link)
+- [@type@Title/Description of Link](Link)
```
`@type@` must be one of the following and describe the type of content you are adding:
@@ -131,19 +78,19 @@ Visit the following resources to learn more:
- `@podcast@`
- `@video@`
-It's important to add a valid type, this will help us categorize the content and display it properly on the roadmap.
+It's important to add a valid type, as this will help us categorize the content and display it properly on the roadmap. Links should be ordered by type, following the order of the type list above.
## Guidelines
-- Please don't use the project for self-promotion!
+- Please don't use the project for self-promotion!
We believe this project is a valuable asset to the developer community, and it includes numerous helpful resources. We kindly ask you to avoid submitting pull requests for the sole purpose of self-promotion. We appreciate contributions that genuinely add value, such as guides from maintainers of well-known frameworks, and will consider accepting these even if they're self authored. Thank you for your understanding and cooperation!
-- Adding everything available out there is not the goal!
+- Adding everything available out there is not the goal!
The roadmaps represent the skillset most valuable today, i.e., if you were to enter any of the listed fields today, what would you learn? There might be things that are of-course being used today, but prioritize the things that are most in demand today, e.g., agree that lots of people are using angular.js today, but you wouldn't want to learn that instead of React, Angular, or Vue. Use your critical thinking to filter out non-essential stuff. Give honest arguments for why the resource should be included.
-- Do not add things you have not evaluated personally!
+- Do not add things you have not evaluated personally!
Use your critical thinking to filter out non-essential stuff. Give honest arguments for why the resource should be included. Have you read this book? Can you give a short article?
@@ -151,26 +98,31 @@ It's important to add a valid type, this will help us categorize the content and
If you are planning to contribute by adding content to the roadmaps, I recommend you to clone the repository, add content to the [content directory of the roadmap](./src/data/roadmaps/) and create a single PR to make it easier for me to review and merge the PR.
-- Write meaningful commit messages
+- Write meaningful commit messages
Meaningful commit messages help speed up the review process as well as help other contributors gain a good overview of the repositories commit history without having to dive into every commit.
- Look at the existing issues/pull requests before opening new ones
-### Good vs. Not So Good Contributions
+## Good vs. Not So Good Contributions
Good
- - New Roadmaps.
- - Engaging, fresh content links.
- - Typos and grammatical fixes.
- - Content copy in topics that do not have any (or minimal copy exists).
+- New Roadmaps.
+- Engaging and fresh content links.
+- Typos and grammatical fixes.
+- Enhanced Existing Content.
+- Content copy in topics that do not have any (or minimal copy exists).
Not So Good
- - Adding whitespace that doesn't add to the readability of the content.
- - Rewriting content in a way that doesn't add any value.
- - Non-English content.
- - PR's that don't follow our style guide, have no description, and a default title.
- - Links to your own blog articles.
+- Adding whitespace that doesn't add to the readability of the content.
+- Rewriting content in a way that doesn't add any value.
+- Non-English content.
+- PRs that don't follow our style guide, have no description, and use a default title.
+- Links to your own blog articles.
+
+***
+
+Have a look at the [License](./license) file.
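
The `- [@type@Title/Description of Link](Link)` format described in the style guide above is strict enough to validate mechanically. Below is a small TypeScript sketch of how such resource lines could be parsed; it is a hypothetical helper for illustration, not code from this repository, and the type list is limited to the types visible in this guide.

```ts
// Hypothetical helper (not part of this repository): parses resource lines in the
// "- [@type@Title/Description of Link](Link)" style from the contribution guide above.
const VALID_TYPES = ['official', 'article', 'course', 'podcast', 'video'] as const; // partial list, per the guide

type ResourceLink = { type: (typeof VALID_TYPES)[number]; title: string; url: string };

function parseResourceLine(line: string): ResourceLink | null {
  // Capture the type between the @ markers, the title text, and the URL.
  const match = line.trim().match(/^- \[@(\w+)@(.+)\]\((\S+)\)$/);
  if (!match) return null;
  const [, type, title, url] = match;
  if (!(VALID_TYPES as readonly string[]).includes(type)) return null;
  return { type: type as ResourceLink['type'], title, url };
}

// parseResourceLine('- [@video@What are Word Embeddings](https://www.youtube.com/watch?v=wgfSDrqYMJ4)')
// => { type: 'video', title: 'What are Word Embeddings', url: 'https://www.youtube.com/watch?v=wgfSDrqYMJ4' }
```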
diff --git a/license b/license
index 0cdd8ff0f..4dcb35a46 100644
--- a/license
+++ b/license
@@ -1,7 +1,6 @@
Everything including text and images in this project are protected by the copyright laws.
You are allowed to use this material for personal use but are not allowed to use it for
-any other purpose including publishing the images, the project files or the content in the
-images in any form either digital, non-digital, textual, graphical or written formats.
+any other purpose including publishing the images, the project files or the content in the images in any form either digital, non-digital, textual, graphical or written formats.
You are allowed to share the links to the repository or the website roadmap.sh but not
the content for any sort of usage that involves the content of this repository taken out
of the repository and be shared from any other medium including but not limited to blog
@@ -9,7 +8,7 @@ posts, articles, newsletters, you must get prior consent from the understated. T
conditions do not apply to the readonly GitHub forks created using the Fork button on
GitHub with the whole purpose of contributing to the project.
-Copyright © 2023 Kamran Ahmed
+Copyright © 2017 - Present. Kamran Ahmed
Please note that I am really flexible with allowing the usage of the content in this
repository. If you reach out to me with a brief detail of why and how you would like
diff --git a/package.json b/package.json
index dc5ef9fb1..28b522487 100644
--- a/package.json
+++ b/package.json
@@ -37,6 +37,7 @@
"@nanostores/react": "^0.8.0",
"@napi-rs/image": "^1.9.2",
"@resvg/resvg-js": "^2.6.2",
+ "@tanstack/react-query": "^5.59.16",
"@types/react": "^18.3.11",
"@types/react-dom": "^18.3.1",
"astro": "^4.16.1",
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
index ac1af87ab..c65650762 100644
--- a/pnpm-lock.yaml
+++ b/pnpm-lock.yaml
@@ -32,6 +32,9 @@ importers:
'@resvg/resvg-js':
specifier: ^2.6.2
version: 2.6.2
+ '@tanstack/react-query':
+ specifier: ^5.59.16
+ version: 5.59.16(react@18.3.1)
'@types/react':
specifier: ^18.3.11
version: 18.3.11
@@ -1198,6 +1201,14 @@ packages:
peerDependencies:
tailwindcss: '>=3.0.0 || insiders || >=4.0.0-alpha.20'
+ '@tanstack/query-core@5.59.16':
+ resolution: {integrity: sha512-crHn+G3ltqb5JG0oUv6q+PMz1m1YkjpASrXTU+sYWW9pLk0t2GybUHNRqYPZWhxgjPaVGC4yp92gSFEJgYEsPw==}
+
+ '@tanstack/react-query@5.59.16':
+ resolution: {integrity: sha512-MuyWheG47h6ERd4PKQ6V8gDyBu3ThNG22e1fRVwvq6ap3EqsFhyuxCAwhNP/03m/mLg+DAb0upgbPaX6VB+CkQ==}
+ peerDependencies:
+ react: ^18 || ^19
+
'@tybys/wasm-util@0.9.0':
resolution: {integrity: sha512-6+7nlbMVX/PVDCwaIQ8nTOPveOcFLSt8GcXdx8hD0bt39uWxYT88uXzqTd4fTvqta7oeUJqudepapKNt2DYJFw==}
@@ -4278,6 +4289,13 @@ snapshots:
postcss-selector-parser: 6.0.10
tailwindcss: 3.4.13
+ '@tanstack/query-core@5.59.16': {}
+
+ '@tanstack/react-query@5.59.16(react@18.3.1)':
+ dependencies:
+ '@tanstack/query-core': 5.59.16
+ react: 18.3.1
+
'@tybys/wasm-util@0.9.0':
dependencies:
tslib: 2.7.0
diff --git a/public/pdfs/roadmaps/engineering-manager.pdf b/public/pdfs/roadmaps/engineering-manager.pdf
new file mode 100644
index 000000000..2b7a43d94
Binary files /dev/null and b/public/pdfs/roadmaps/engineering-manager.pdf differ
diff --git a/public/roadmap-content/ai-engineer.json b/public/roadmap-content/ai-engineer.json
index bab60f520..c66d83134 100644
--- a/public/roadmap-content/ai-engineer.json
+++ b/public/roadmap-content/ai-engineer.json
@@ -1,73 +1,256 @@
{
"_hYN0gEi9BL24nptEtXWU": {
"title": "Introduction",
- "description": "",
+ "description": "AI Engineering is the process of designing and implementing AI systems using pre-trained models and existing AI tools to solve practical problems. AI Engineers focus on applying AI in real-world scenarios, improving user experiences, and automating tasks, without developing new models from scratch. They work to ensure AI systems are efficient, scalable, and can be seamlessly integrated into business applications, distinguishing their role from AI Researchers and ML Engineers, who concentrate more on creating new models or advancing AI theory.",
"links": []
},
"GN6SnI7RXIeW8JeD-qORW": {
"title": "What is an AI Engineer?",
- "description": "",
- "links": []
+ "description": "AI engineers are professionals who specialize in designing, developing, and implementing artificial intelligence (AI) systems. Their work is essential in various industries, as they create applications that enable machines to perform tasks that typically require human intelligence, such as problem-solving, learning, and decision-making.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "AI For Everyone",
+ "url": "https://www.coursera.org/learn/ai-for-everyone",
+ "type": "course"
+ },
+ {
+ "title": "How to Become an AI Engineer: Duties, Skills, and Salary",
+ "url": "https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/how-to-become-an-ai-engineer",
+ "type": "article"
+ },
+ {
+ "title": "AI engineers: What they do and how to become one",
+ "url": "https://www.techtarget.com/whatis/feature/How-to-become-an-artificial-intelligence-engineer",
+ "type": "article"
+ }
+ ]
},
"jSZ1LhPdhlkW-9QJhIvFs": {
"title": "AI Engineer vs ML Engineer",
- "description": "",
- "links": []
+ "description": "An AI Engineer uses pre-trained models and existing AI tools to improve user experiences. They focus on applying AI in practical ways, without building models from scratch. This is different from AI Researchers and ML Engineers, who focus more on creating new models or developing AI theory.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What does an AI Engineer do?",
+ "url": "https://www.codecademy.com/resources/blog/what-does-an-ai-engineer-do/",
+ "type": "article"
+ },
+ {
+ "title": "What is an ML Engineer?",
+ "url": "https://www.coursera.org/articles/what-is-machine-learning-engineer",
+ "type": "article"
+ },
+ {
+ "title": "AI vs ML",
+ "url": "https://www.youtube.com/watch?v=4RixMPF4xis",
+ "type": "video"
+ }
+ ]
},
"wf2BSyUekr1S1q6l8kyq6": {
"title": "LLMs",
- "description": "",
- "links": []
+ "description": "LLMs, or Large Language Models, are advanced AI models trained on vast datasets to understand and generate human-like text. They can perform a wide range of natural language processing tasks, such as text generation, translation, summarization, and question answering. Examples include GPT-4, BERT, and T5. LLMs are capable of understanding context, handling complex queries, and generating coherent responses, making them useful for applications like chatbots, content creation, and automated support. However, they require significant computational resources and may carry biases from their training data.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is a large language model (LLM)?",
+ "url": "https://www.cloudflare.com/en-gb/learning/ai/what-is-large-language-model/",
+ "type": "article"
+ },
+ {
+        "title": "How Large Language Models Work",
+ "url": "https://www.youtube.com/watch?v=5sLYAQS9sWQ",
+ "type": "video"
+ },
+ {
+ "title": "Large Language Models (LLMs) - Everything You NEED To Know",
+ "url": "https://www.youtube.com/watch?v=osKyvYJ3PRM",
+ "type": "video"
+ }
+ ]
},
"KWjD4xEPhOOYS51dvRLd2": {
"title": "Inference",
- "description": "",
- "links": []
+ "description": "In artificial intelligence (AI), inference refers to the process where a trained machine learning model makes predictions or draws conclusions from new, unseen data. Unlike training, inference involves the model applying what it has learned to make decisions without needing examples of the exact result. In essence, inference is the AI model actively functioning. For example, a self-driving car recognizing a stop sign on a road it has never encountered before demonstrates inference. The model identifies the stop sign in a new setting, using its learned knowledge to make a decision in real-time.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Inference vs Training",
+ "url": "https://www.cloudflare.com/learning/ai/inference-vs-training/",
+ "type": "article"
+ },
+ {
+ "title": "What is Machine Learning Inference?",
+ "url": "https://hazelcast.com/glossary/machine-learning-inference/",
+ "type": "article"
+ },
+ {
+ "title": "What is Machine Learning Inference? An Introduction to Inference Approaches",
+ "url": "https://www.datacamp.com/blog/what-is-machine-learning-inference",
+ "type": "article"
+ }
+ ]
},
"xostGgoaYkqMO28iN2gx8": {
"title": "Training",
- "description": "",
- "links": []
+ "description": "Training refers to the process of teaching a machine learning model to recognize patterns and make predictions by exposing it to a dataset. During training, the model learns from the data by adjusting its internal parameters to minimize errors between its predictions and the actual outcomes. This process involves iteratively feeding the model with input data, comparing its outputs to the correct answers, and refining its predictions through techniques like gradient descent. The goal is to enable the model to generalize well so that it can make accurate predictions on new, unseen data.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Model Training?",
+ "url": "https://oden.io/glossary/model-training/",
+ "type": "article"
+ },
+ {
+ "title": "Machine learning model training: What it is and why it’s important",
+ "url": "https://domino.ai/blog/what-is-machine-learning-model-training",
+ "type": "article"
+ },
+ {
+ "title": "Training ML Models - Amazon",
+ "url": "https://docs.aws.amazon.com/machine-learning/latest/dg/training-ml-models.html",
+ "type": "article"
+ }
+ ]
},
"XyEp6jnBSpCxMGwALnYfT": {
"title": "Embeddings",
- "description": "",
- "links": []
+ "description": "Embeddings are dense, continuous vector representations of data, such as words, sentences, or images, in a lower-dimensional space. They capture the semantic relationships and patterns in the data, where similar items are placed closer together in the vector space. In machine learning, embeddings are used to convert complex data into numerical form that models can process more easily. For example, word embeddings represent words based on their meanings and contexts, allowing models to understand relationships like synonyms or analogies. Embeddings are widely used in tasks like natural language processing, recommendation systems, and image recognition to improve model performance and efficiency.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What are embeddings in machine learning?",
+ "url": "https://www.cloudflare.com/en-gb/learning/ai/what-are-embeddings/",
+ "type": "article"
+ },
+ {
+ "title": "What is embedding?",
+ "url": "https://www.ibm.com/topics/embedding",
+ "type": "article"
+ },
+ {
+ "title": "What are Word Embeddings",
+ "url": "https://www.youtube.com/watch?v=wgfSDrqYMJ4",
+ "type": "video"
+ }
+ ]
},
"LnQ2AatMWpExUHcZhDIPd": {
"title": "Vector Databases",
- "description": "",
- "links": []
+ "description": "Vector databases are specialized systems designed to store, index, and retrieve high-dimensional vectors, often used as embeddings that represent data like text, images, or audio. Unlike traditional databases that handle structured data, vector databases excel at managing unstructured data by enabling fast similarity searches, where vectors are compared to find those that are most similar to a query. This makes them essential for tasks like semantic search, recommendation systems, and content discovery, where understanding relationships between items is crucial. Vector databases use indexing techniques such as approximate nearest neighbor (ANN) search to efficiently handle large datasets, ensuring quick and accurate retrieval even at scale.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Vector Databases",
+ "url": "https://developers.cloudflare.com/vectorize/reference/what-is-a-vector-database/",
+ "type": "article"
+ },
+ {
+ "title": "What are Vector Databases?",
+ "url": "https://www.mongodb.com/resources/basics/databases/vector-databases",
+ "type": "article"
+ }
+ ]
},
"9JwWIK0Z2MK8-6EQQJsCO": {
"title": "RAG",
- "description": "",
- "links": []
+ "description": "Retrieval-Augmented Generation (RAG) is an AI approach that combines information retrieval with language generation to create more accurate, contextually relevant outputs. It works by first retrieving relevant data from a knowledge base or external source, then using a language model to generate a response based on that information. This method enhances the accuracy of generative models by grounding their outputs in real-world data, making RAG ideal for tasks like question answering, summarization, and chatbots that require reliable, up-to-date information.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Retrieval Augmented Generation (RAG)?",
+ "url": "https://www.datacamp.com/blog/what-is-retrieval-augmented-generation-rag",
+ "type": "article"
+ },
+ {
+ "title": "What is Retrieval-Augmented Generation? Google",
+ "url": "https://cloud.google.com/use-cases/retrieval-augmented-generation",
+ "type": "article"
+ },
+ {
+ "title": "What is Retrieval-Augmented Generation? IBM",
+ "url": "https://www.youtube.com/watch?v=T-D1OfcDW1M",
+ "type": "video"
+ }
+ ]
},
"Dc15ayFlzqMF24RqIF_-X": {
"title": "Prompt Engineering",
- "description": "",
- "links": []
+ "description": "Prompt engineering is the process of crafting effective inputs (prompts) to guide AI models, like GPT, to generate desired outputs. It involves strategically designing prompts to optimize the model’s performance by providing clear instructions, context, and examples. Effective prompt engineering can improve the quality, relevance, and accuracy of responses, making it essential for applications like chatbots, content generation, and automated support. By refining prompts, developers can better control the model’s behavior, reduce ambiguity, and achieve more consistent results, enhancing the overall effectiveness of AI-driven systems.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Prompt Engineering Roadmap",
+ "url": "https://roadmap.sh/prompt-engineering",
+ "type": "article"
+ },
+ {
+ "title": "What is Prompt Engineering?",
+ "url": "https://www.youtube.com/watch?v=nf1e-55KKbg",
+ "type": "video"
+ }
+ ]
},
"9XCxilAQ7FRet7lHQr1gE": {
"title": "AI Agents",
- "description": "In AI engineering, \"agents\" refer to autonomous systems or components that can perceive their environment, make decisions, and take actions to achieve specific goals. Agents often interact with external systems, users, or other agents to carry out complex tasks. They can vary in complexity, from simple rule-based bots to sophisticated AI-powered agents that leverage machine learning models, natural language processing, and reinforcement learning.\n\nVisit the following resources to learn more:\n\n\\-[@article@Building an AI Agent Tutorial - LangChain](https://python.langchain.com/docs/tutorials/agents/) -[@article@Ai agents and their types](https://play.ht/blog/ai-agents-use-cases/) -[@video@The Complete Guide to Building AI Agents for Beginners](https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX)",
- "links": []
+ "description": "In AI engineering, \"agents\" refer to autonomous systems or components that can perceive their environment, make decisions, and take actions to achieve specific goals. Agents often interact with external systems, users, or other agents to carry out complex tasks. They can vary in complexity, from simple rule-based bots to sophisticated AI-powered agents that leverage machine learning models, natural language processing, and reinforcement learning.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Building an AI Agent Tutorial - LangChain",
+ "url": "https://python.langchain.com/docs/tutorials/agents/",
+ "type": "article"
+ },
+ {
+        "title": "AI agents and their types",
+ "url": "https://play.ht/blog/ai-agents-use-cases/",
+ "type": "article"
+ },
+ {
+ "title": "The Complete Guide to Building AI Agents for Beginners",
+ "url": "https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX",
+ "type": "video"
+ }
+ ]
},
"5QdihE1lLpMc3DFrGy46M": {
"title": "AI vs AGI",
- "description": "",
- "links": []
+ "description": "AI (Artificial Intelligence) refers to systems designed to perform specific tasks by mimicking aspects of human intelligence, such as pattern recognition, decision-making, and language processing. These systems, known as \"narrow AI,\" are highly specialized, excelling in defined areas like image classification or recommendation algorithms but lacking broader cognitive abilities. In contrast, AGI (Artificial General Intelligence) represents a theoretical form of intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a human-like level. AGI would have the capacity for abstract thinking, reasoning, and adaptability similar to human cognitive abilities, making it far more versatile than today’s AI systems. While current AI technology is powerful, AGI remains a distant goal and presents complex challenges in safety, ethics, and technical feasibility.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is AGI?",
+ "url": "https://aws.amazon.com/what-is/artificial-general-intelligence/",
+ "type": "article"
+ },
+ {
+ "title": "The crucial difference between AI and AGI",
+ "url": "https://www.forbes.com/sites/bernardmarr/2024/05/20/the-crucial-difference-between-ai-and-agi/",
+ "type": "article"
+ }
+ ]
},
"qJVgKe9uBvXc-YPfvX_Y7": {
"title": "Impact on Product Development",
- "description": "",
- "links": []
+ "description": "AI engineering transforms product development by automating tasks, enhancing data-driven decision-making, and enabling the creation of smarter, more personalized products. It speeds up design cycles, optimizes processes, and allows for predictive maintenance, quality control, and efficient resource management. By integrating AI, companies can innovate faster, reduce costs, and improve user experiences, giving them a competitive edge in the market.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "AI in Product Development: Netflix, BMW, and PepsiCo",
+ "url": "https://www.virtasant.com/ai-today/ai-in-product-development-netflix-bmw#:~:text=AI%20can%20help%20make%20product,and%20gain%20a%20competitive%20edge.",
+ "type": "article"
+ },
+ {
+ "title": "AI Product Development: Why Are Founders So Fascinated By The Potential?",
+ "url": "https://www.techmagic.co/blog/ai-product-development/",
+ "type": "article"
+ }
+ ]
},
"K9EiuFgPBFgeRxY4wxAmb": {
"title": "Roles and Responsiblities",
- "description": "",
- "links": []
+ "description": "AI Engineers are responsible for designing, developing, and deploying AI systems that solve real-world problems. Their roles include building machine learning models, implementing data processing pipelines, and integrating AI solutions into existing software or platforms. They work on tasks like data collection, cleaning, and labeling, as well as model training, testing, and optimization to ensure high performance and accuracy. AI Engineers also focus on scaling models for production use, monitoring their performance, and troubleshooting issues. Additionally, they collaborate with data scientists, software developers, and other stakeholders to align AI projects with business goals, ensuring that solutions are reliable, efficient, and ethically sound.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "AI Engineer Job Description",
+ "url": "https://resources.workable.com/ai-engineer-job-description",
+ "type": "article"
+ },
+ {
+ "title": "How To Become an AI Engineer (Plus Job Duties and Skills)",
+ "url": "https://www.indeed.com/career-advice/finding-a-job/ai-engineer",
+ "type": "article"
+ }
+ ]
},
"d7fzv_ft12EopsQdmEsel": {
"title": "Pre-trained Models",
@@ -82,263 +265,792 @@
},
"1Ga6DbOPc6Crz7ilsZMYy": {
"title": "Benefits of Pre-trained Models",
- "description": "",
- "links": []
+ "description": "Pre-trained models offer several benefits in AI engineering by significantly reducing development time and computational resources because these models are trained on large datasets and can be fine-tuned for specific tasks, which enables quicker deployment and better performance with less data. They help overcome the challenge of needing vast amounts of labeled data and computational power for training from scratch. Additionally, pre-trained models often demonstrate improved accuracy, generalization, and robustness across different tasks, making them ideal for applications in natural language processing, computer vision, and other AI domains.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Why Pre-Trained Models Matter For Machine Learning",
+ "url": "https://www.ahead.com/resources/why-pre-trained-models-matter-for-machine-learning/",
+ "type": "article"
+ },
+ {
+ "title": "Why You Should Use Pre-Trained Models Versus Building Your Own",
+ "url": "https://cohere.com/blog/pre-trained-vs-in-house-nlp-models",
+ "type": "article"
+ }
+ ]
},
"MXqbQGhNM3xpXlMC2ib_6": {
"title": "Limitations and Considerations",
- "description": "",
- "links": []
+ "description": "Pre-trained models, while powerful, come with several limitations and considerations. They may carry biases present in the training data, leading to unintended or discriminatory outcomes, these models are also typically trained on general data, so they might not perform well on niche or domain-specific tasks without further fine-tuning. Another concern is the \"black-box\" nature of many pre-trained models, which can make their decision-making processes hard to interpret and explain.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Pretrained Topic Models: Advantages and Limitation",
+ "url": "https://www.kaggle.com/code/amalsalilan/pretrained-topic-models-advantages-and-limitation",
+ "type": "article"
+ },
+ {
+ "title": "Should You Use Open Source Large Language Models?",
+ "url": "https://www.youtube.com/watch?v=y9k-U9AuDeM",
+ "type": "video"
+ }
+ ]
},
"2WbVpRLqwi3Oeqk1JPui4": {
"title": "Open AI Models",
- "description": "",
- "links": []
+ "description": "OpenAI provides a variety of models designed for diverse tasks. GPT models like GPT-3 and GPT-4 handle text generation, conversation, and translation, offering context-aware responses, while Codex specializes in generating and debugging code across multiple languages. DALL-E creates images from text descriptions, supporting applications in design and content creation, and Whisper is a speech recognition model that converts spoken language to text for transcription and voice-to-text tasks.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Models Overview",
+ "url": "https://platform.openai.com/docs/models",
+ "type": "article"
+ },
+ {
+ "title": "OpenAI’s new “deep-thinking” o1 model crushes coding benchmarks",
+ "url": "https://www.youtube.com/watch?v=6xlPJiNpCVw",
+ "type": "video"
+ }
+ ]
},
"vvpYkmycH0_W030E-L12f": {
"title": "Capabilities / Context Length",
- "description": "",
- "links": []
+ "description": "A key aspect of the OpenAI models is their context length, which refers to the amount of input text the model can process at once. Earlier models like GPT-3 had a context length of up to 4,096 tokens (words or word pieces), while more recent models like GPT-4 can handle significantly larger context lengths, some supporting up to 32,768 tokens. This extended context length enables the models to handle more complex tasks, such as maintaining long conversations or processing lengthy documents, which enhances their utility in real-world applications like legal document analysis or code generation.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Managing Context",
+ "url": "https://platform.openai.com/docs/guides/text-generation/managing-context-for-text-generation",
+ "type": "article"
+ },
+ {
+ "title": "Capabilities",
+ "url": "https://platform.openai.com/docs/guides/text-generation",
+ "type": "article"
+ }
+ ]
},
"LbB2PeytxRSuU07Bk0KlJ": {
"title": "Cut-off Dates / Knowledge",
- "description": "",
- "links": []
+ "description": "OpenAI models, such as GPT-3.5 and GPT-4, have a knowledge cutoff date, which refers to the last point in time when the model was trained on data. For instance, as of the current version of GPT-4, the knowledge cutoff is October 2023. This means the model does not have awareness or knowledge of events, advancements, or data that occurred after that date. Consequently, the model may lack information on more recent developments, research, or real-time events unless explicitly updated in future versions. This limitation is important to consider when using the models for time-sensitive tasks or inquiries involving recent knowledge.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Knowledge Cutoff Dates of all LLMs explained",
+ "url": "https://otterly.ai/blog/knowledge-cutoff/",
+ "type": "article"
+ },
+ {
+ "title": "Knowledge Cutoff Dates For ChatGPT, Meta Ai, Copilot, Gemini, Claude",
+ "url": "https://computercity.com/artificial-intelligence/knowledge-cutoff-dates-llms",
+ "type": "article"
+ }
+ ]
},
"hy6EyKiNxk1x84J63dhez": {
"title": "Anthropic's Claude",
- "description": "",
- "links": []
+ "description": "Anthropic's Claude is an AI language model designed to facilitate safe and scalable AI systems. Named after Claude Shannon, the father of information theory, Claude focuses on responsible AI use, emphasizing safety, alignment with human intentions, and minimizing harmful outputs. Built as a competitor to models like OpenAI's GPT, Claude is designed to handle natural language tasks such as generating text, answering questions, and supporting conversations, with a strong focus on aligning AI behavior with user goals while maintaining transparency and avoiding harmful biases.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Claude Website",
+ "url": "https://claude.ai",
+ "type": "article"
+ },
+ {
+ "title": "How To Use Claude Pro For Beginners",
+ "url": "https://www.youtube.com/watch?v=J3X_JWQkvo8",
+ "type": "video"
+ }
+ ]
},
"oe8E6ZIQWuYvHVbYJHUc1": {
"title": "Google's Gemini",
- "description": "",
- "links": []
+ "description": "Google Gemini is an advanced AI model by Google DeepMind, designed to integrate natural language processing with multimodal capabilities, enabling it to understand and generate not just text but also images, videos, and other data types. It combines generative AI with reasoning skills, making it effective for complex tasks requiring logical analysis and contextual understanding. Built on Google's extensive knowledge base and infrastructure, Gemini aims to offer high accuracy, efficiency, and safety, positioning it as a competitor to models like OpenAI's GPT-4.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Google Gemini",
+ "url": "https://workspace.google.com/solutions/ai/",
+ "type": "article"
+ },
+ {
+ "title": "Welcome to the Gemini era",
+ "url": "https://www.youtube.com/watch?v=_fuimO6ErKI",
+ "type": "video"
+ }
+ ]
},
"3PQVZbcr4neNMRr6CuNzS": {
"title": "Azure AI",
- "description": "",
- "links": []
+ "description": "Azure AI is a suite of AI services and tools provided by Microsoft through its Azure cloud platform. It includes pre-built AI models for natural language processing, computer vision, and speech, as well as tools for developing custom machine learning models using services like Azure Machine Learning. Azure AI enables developers to integrate AI capabilities into applications with APIs for tasks like sentiment analysis, image recognition, and language translation. It also supports responsible AI development with features for model monitoring, explainability, and fairness, aiming to make AI accessible, scalable, and secure across industries.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Azure AI",
+ "url": "https://azure.microsoft.com/en-gb/solutions/ai",
+ "type": "article"
+ },
+ {
+ "title": "How to Choose the Right Models for Your Apps",
+ "url": "https://www.youtube.com/watch?v=sx_uGylH8eg",
+ "type": "video"
+ }
+ ]
},
"OkYO-aSPiuVYuLXHswBCn": {
"title": "AWS Sagemaker",
- "description": "",
- "links": []
+ "description": "AWS SageMaker is a fully managed machine learning service from Amazon Web Services that enables developers and data scientists to build, train, and deploy machine learning models at scale. It provides an integrated development environment, simplifying the entire ML workflow, from data preparation and model development to training, tuning, and inference. SageMaker supports popular ML frameworks like TensorFlow, PyTorch, and Scikit-learn, and offers features like automated model tuning, model monitoring, and one-click deployment. It's designed to make machine learning more accessible and scalable, even for large enterprise applications.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "AWS SageMaker",
+ "url": "https://aws.amazon.com/sagemaker/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to Amazon SageMaker",
+ "url": "https://www.youtube.com/watch?v=Qv_Tr_BCFCQ",
+ "type": "video"
+ }
+ ]
},
"8XjkRqHOdyH-DbXHYiBEt": {
"title": "Hugging Face Models",
- "description": "",
- "links": []
+ "description": "Hugging Face models are a collection of pre-trained machine learning models available through the Hugging Face platform, covering a wide range of tasks like natural language processing, computer vision, and audio processing. The platform includes models for tasks such as text classification, translation, summarization, question answering, and more, with popular models like BERT, GPT, T5, and CLIP. Hugging Face provides easy-to-use tools and APIs that allow developers to access, fine-tune, and deploy these models, fostering a collaborative community where users can share, modify, and contribute models to improve AI research and application development.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Hugging Face Models",
+ "url": "https://huggingface.co/models",
+ "type": "article"
+ }
+ ]
},
"n-Ud2dXkqIzK37jlKItN4": {
"title": "Mistral AI",
- "description": "",
- "links": []
+ "description": "Mistral AI is a company focused on developing open-weight, large language models (LLMs) to provide high-performance AI solutions. Mistral aims to create models that are both efficient and versatile, making them suitable for a wide range of natural language processing tasks, including text generation, translation, and summarization. By releasing open-weight models, Mistral promotes transparency and accessibility, allowing developers to customize and deploy AI solutions more flexibly compared to proprietary models.\n\nLearn more from the resources:",
+ "links": [
+ {
+        "title": "Mistral AI Website",
+ "url": "https://mistral.ai/",
+ "type": "article"
+ },
+ {
+ "title": "Mistral AI: The Gen AI Start-up you did not know existed",
+ "url": "https://www.youtube.com/watch?v=vzrRGd18tAg",
+ "type": "video"
+ }
+ ]
},
"a7qsvoauFe5u953I699ps": {
"title": "Cohere",
- "description": "",
- "links": []
+ "description": "Cohere is an AI platform that specializes in natural language processing (NLP) by providing large language models designed to help developers build and deploy text-based applications. Cohere’s models are used for tasks such as text classification, language generation, semantic search, and sentiment analysis. Unlike some other providers, Cohere emphasizes simplicity and scalability, offering an easy-to-use API that allows developers to fine-tune models on custom data for specific use cases. Additionally, Cohere provides robust multilingual support and focuses on ensuring that its NLP solutions are both accessible and enterprise-ready, catering to a wide range of industries.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Cohere Website",
+ "url": "https://cohere.com/",
+ "type": "article"
+ },
+ {
+ "title": "What Does Cohere Do?",
+ "url": "https://medium.com/geekculture/what-does-cohere-do-cdadf6d70435",
+ "type": "article"
+ }
+ ]
},
"5ShWZl1QUqPwO-NRGN85V": {
"title": "OpenAI Models",
- "description": "",
- "links": []
+ "description": "OpenAI provides a variety of models designed for diverse tasks. GPT models like GPT-3 and GPT-4 handle text generation, conversation, and translation, offering context-aware responses, while Codex specializes in generating and debugging code across multiple languages. DALL-E creates images from text descriptions, supporting applications in design and content creation, and Whisper is a speech recognition model that converts spoken language to text for transcription and voice-to-text tasks.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Models Overview",
+ "url": "https://platform.openai.com/docs/models",
+ "type": "article"
+ }
+ ]
},
"zdeuA4GbdBl2DwKgiOA4G": {
"title": "OpenAI API",
- "description": "",
+ "description": "The OpenAI API provides access to powerful AI models like GPT, Codex, DALL-E, and Whisper, enabling developers to integrate capabilities such as text generation, code assistance, image creation, and speech recognition into their applications via a simple, scalable interface.",
"links": []
},
"_bPTciEA1GT1JwfXim19z": {
"title": "Chat Completions API",
- "description": "",
- "links": []
+ "description": "The OpenAI Chat Completions API is a powerful interface that allows developers to integrate conversational AI into applications by utilizing models like GPT-3.5 and GPT-4. It is designed to manage multi-turn conversations, keeping context across interactions, making it ideal for chatbots, virtual assistants, and interactive AI systems. With the API, users can structure conversations by providing messages in a specific format, where each message has a role (e.g., \"system\" to guide the model, \"user\" for input, and \"assistant\" for responses).\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Create Chat Completions",
+ "url": "https://platform.openai.com/docs/api-reference/chat/create",
+ "type": "article"
+ },
+ {
+        "title": "Getting Started with OpenAI's Chat Completions API in 2024",
+ "url": "https://medium.com/the-ai-archives/getting-started-with-openais-chat-completions-api-in-2024-462aae00bf0a",
+ "type": "article"
+ }
+ ]
},
"9-5DYeOnKJq9XvEMWP45A": {
"title": "Writing Prompts",
- "description": "",
- "links": []
+ "description": "Prompts for the OpenAI API are carefully crafted inputs designed to guide the language model in generating specific, high-quality content. These prompts can be used to direct the model to create stories, articles, dialogue, or even detailed responses on particular topics. Effective prompts set clear expectations by providing context, specifying the format, or including examples, such as \"Write a short sci-fi story about a future where humans can communicate with animals,\" or \"Generate a detailed summary of the key benefits of using renewable energy.\" Well-designed prompts help ensure that the API produces coherent, relevant, and creative outputs, making it easier to achieve desired results across various applications.\n\nLearn more from the following resources:",
+ "links": [
+ {
+        "title": "Prompt Engineering Roadmap",
+ "url": "https://roadmap.sh/prompt-engineering",
+ "type": "article"
+ },
+ {
+ "title": "How to write AI prompts",
+ "url": "https://www.descript.com/blog/article/how-to-write-ai-prompts",
+ "type": "article"
+ },
+ {
+ "title": "Prompt Engineering Guide",
+ "url": "https://www.promptingguide.ai/",
+ "type": "article"
+ }
+ ]
},
"nyBgEHvUhwF-NANMwkRJW": {
"title": "Open AI Playground",
- "description": "",
- "links": []
+ "description": "The OpenAI Playground is an interactive web interface that allows users to experiment with OpenAI's language models, such as GPT-3 and GPT-4, without needing to write code. It provides a user-friendly environment where you can input prompts, adjust parameters like temperature and token limits, and see how the models generate responses in real-time. The Playground helps users test different use cases, from text generation to question answering, and refine prompts for better outputs. It's a valuable tool for exploring the capabilities of OpenAI models, prototyping ideas, and understanding how the models behave before integrating them into applications.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Playground",
+ "url": "https://platform.openai.com/playground/chat",
+ "type": "article"
+ },
+ {
+ "title": "How to Use OpenAi Playground Like a Pro",
+ "url": "https://www.youtube.com/watch?v=PLxpvtODiqs",
+ "type": "video"
+ }
+ ]
},
"15XOFdVp0IC-kLYPXUJWh": {
"title": "Fine-tuning",
- "description": "",
- "links": []
+ "description": "Fine-tuning the OpenAI API involves adapting pre-trained models, such as GPT, to specific use cases by training them on custom datasets. This process allows you to refine the model's behavior and improve its performance on specialized tasks, like generating domain-specific text or following particular patterns. By providing labeled examples of the desired input-output pairs, you guide the model to better understand and predict the appropriate responses for your use case.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Fine-tuning Documentation",
+ "url": "https://platform.openai.com/docs/guides/fine-tuning",
+ "type": "article"
+ },
+ {
+ "title": "Fine-tuning ChatGPT with OpenAI Tutorial",
+ "url": "https://www.youtube.com/watch?v=VVKcSf6r3CM",
+ "type": "video"
+ }
+ ]
},
"qzvp6YxWDiGakA2mtspfh": {
"title": "Maximum Tokens",
- "description": "",
- "links": []
+ "description": "The OpenAI API has different maximum token limits depending on the model being used. For instance, GPT-3 has a limit of 4,096 tokens, while GPT-4 can support larger inputs, with some versions allowing up to 8,192 tokens, and extended versions reaching up to 32,768 tokens. Tokens include both the input text and the generated output, so longer inputs mean less space for responses. Managing token limits is crucial to ensure the model can handle the entire input and still generate a complete response, especially for tasks involving lengthy documents or multi-turn conversations.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Maximum Tokens",
+ "url": "https://platform.openai.com/docs/guides/rate-limits",
+ "type": "article"
+ },
+ {
+ "title": "The Ins and Outs of GPT Token Limits",
+ "url": "https://www.supernormal.com/blog/gpt-token-limits",
+ "type": "article"
+ }
+ ]
},
"FjV3oD7G2Ocq5HhUC17iH": {
"title": "Token Counting",
- "description": "",
- "links": []
+ "description": "Token counting refers to tracking the number of tokens processed during interactions with language models, including both input and output text. Tokens are units of text that can be as short as a single character or as long as a word, and models like GPT process text by splitting it into these tokens. Knowing how many tokens are used is crucial because the API has token limits (e.g., 4,096 for GPT-3 and up to 32,768 for some versions of GPT-4), and costs are typically calculated based on the total number of tokens processed.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Tokenizer Tool",
+ "url": "https://platform.openai.com/tokenizer",
+ "type": "article"
+ },
+ {
+ "title": "How to count tokens with Tiktoken",
+ "url": "https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken",
+ "type": "article"
+ }
+ ]
},
"DZPM9zjCbYYWBPLmQImxQ": {
"title": "Pricing Considerations",
- "description": "",
- "links": []
+ "description": "When using the OpenAI API, pricing considerations depend on factors like the model type, usage volume, and specific features utilized. Different models, such as GPT-3.5, GPT-4, or DALL-E, have varying cost structures based on the complexity of the model and the number of tokens processed (inputs and outputs). For cost efficiency, you should optimize prompt design, monitor usage, and consider rate limits or volume discounts offered by OpenAI for high usage.",
+ "links": [
+ {
+ "title": "OpenAI API Pricing",
+ "url": "https://openai.com/api/pricing/",
+ "type": "article"
+ }
+ ]
},
"8ndKHDJgL_gYwaXC7XMer": {
"title": "AI Safety and Ethics",
- "description": "",
- "links": []
+ "description": "AI safety and ethics involve establishing guidelines and best practices to ensure that artificial intelligence systems are developed, deployed, and used in a manner that prioritizes human well-being, fairness, and transparency. This includes addressing risks such as bias, privacy violations, unintended consequences, and ensuring that AI operates reliably and predictably, even in complex environments. Ethical considerations focus on promoting accountability, avoiding discrimination, and aligning AI systems with human values and societal norms. Frameworks like explainability, human-in-the-loop design, and robust monitoring are often used to build systems that not only achieve technical objectives but also uphold ethical standards and mitigate potential harms.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Understanding artificial intelligence ethics and safety",
+ "url": "https://www.turing.ac.uk/news/publications/understanding-artificial-intelligence-ethics-and-safety",
+ "type": "article"
+ },
+ {
+ "title": "What is AI Ethics?",
+ "url": "https://www.youtube.com/watch?v=aGwYtUzMQUk",
+ "type": "video"
+ }
+ ]
},
"cUyLT6ctYQ1pgmodCKREq": {
"title": "Prompt Injection Attacks",
- "description": "",
- "links": []
+ "description": "Prompt injection attacks are a type of security vulnerability where malicious inputs are crafted to manipulate or exploit AI models, like language models, to produce unintended or harmful outputs. These attacks involve injecting deceptive or adversarial content into the prompt to bypass filters, extract confidential information, or make the model respond in ways it shouldn't. For instance, a prompt injection could trick a model into revealing sensitive data or generating inappropriate responses by altering its expected behavior.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Prompt Injection in LLMs",
+ "url": "https://www.promptingguide.ai/prompts/adversarial-prompting/prompt-injection",
+ "type": "article"
+ },
+ {
+ "title": "What is a prompt injection attack?",
+ "url": "https://www.wiz.io/academy/prompt-injection-attack",
+ "type": "article"
+ }
+ ]
},
"lhIU0ulpvDAn1Xc3ooYz_": {
"title": "Bias and Fareness",
- "description": "",
- "links": []
+ "description": "Bias and fairness in AI refer to the challenges of ensuring that machine learning models do not produce discriminatory or skewed outcomes. Bias can arise from imbalanced training data, flawed assumptions, or biased algorithms, leading to unfair treatment of certain groups based on race, gender, or other factors. Fairness aims to address these issues by developing techniques to detect, mitigate, and prevent biases in AI systems. Ensuring fairness involves improving data diversity, applying fairness constraints during model training, and continuously monitoring models in production to avoid unintended consequences, promoting ethical and equitable AI use.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What Do We Do About the Biases in AI?",
+ "url": "https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai",
+ "type": "article"
+ },
+ {
+ "title": "AI Bias - What Is It and How to Avoid It?",
+ "url": "https://levity.ai/blog/ai-bias-how-to-avoid",
+ "type": "article"
+ },
+ {
+ "title": "What about fairness, bias and discrimination?",
+ "url": "https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-about-fairness-bias-and-discrimination/",
+ "type": "article"
+ }
+ ]
},
"sWBT-j2cRuFqRFYtV_5TK": {
"title": "Security and Privacy Concerns",
- "description": "",
- "links": []
+ "description": "Security and privacy concerns in AI revolve around the protection of data and the responsible use of models. Key issues include ensuring that sensitive data, such as personal information, is handled securely during collection, processing, and storage, to prevent unauthorized access and breaches. AI models can also inadvertently expose sensitive data if not properly designed, leading to privacy risks through data leakage or misuse. Additionally, there are concerns about model bias, data misuse, and ensuring transparency in how AI decisions are made.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Examining Privacy Risks in AI Systems",
+ "url": "https://transcend.io/blog/ai-and-privacy",
+ "type": "article"
+ },
+ {
+ "title": "AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED",
+ "url": "https://www.youtube.com/watch?v=eXdVDhOGqoE",
+ "type": "video"
+ }
+ ]
},
"Pt-AJmSJrOxKvolb5_HEv": {
"title": "Conducting adversarial testing",
- "description": "",
- "links": []
+ "description": "Adversarial testing involves intentionally exposing machine learning models to deceptive, perturbed, or carefully crafted inputs to evaluate their robustness and identify vulnerabilities. The goal is to simulate potential attacks or edge cases where the model might fail, such as subtle manipulations in images, text, or data that cause the model to misclassify or produce incorrect outputs. This type of testing helps to improve model resilience, particularly in sensitive applications like cybersecurity, autonomous systems, and finance.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Adversarial Testing for Generative AI",
+ "url": "https://developers.google.com/machine-learning/resources/adv-testing",
+ "type": "article"
+ },
+ {
+ "title": "Adversarial Testing: Definition, Examples and Resources",
+ "url": "https://www.leapwork.com/blog/adversarial-testing",
+ "type": "article"
+ }
+ ]
},
"ljZLa3yjQpegiZWwtnn_q": {
"title": "OpenAI Moderation API",
- "description": "",
- "links": []
+ "description": "The OpenAI Moderation API helps detect and filter harmful content by analyzing text for issues like hate speech, violence, self-harm, and adult content. It uses machine learning models to identify inappropriate or unsafe language, allowing developers to create safer online environments and maintain community guidelines. The API is designed to be integrated into applications, websites, and platforms, providing real-time content moderation to reduce the spread of harmful or offensive material.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Moderation",
+ "url": "https://platform.openai.com/docs/guides/moderation",
+ "type": "article"
+ },
+ {
+ "title": "How to user the moderation API",
+ "url": "https://cookbook.openai.com/examples/how_to_use_moderation",
+ "type": "article"
+ }
+ ]
},
"4Q5x2VCXedAWISBXUIyin": {
"title": "Adding end-user IDs in prompts",
- "description": "Sending end-user IDs in your requests can be a useful tool to help OpenAI monitor and detect abuse. This allows OpenAI to provide your team with more actionable feedback in the event that we detect any policy violations in your application.\n\nVisit the following resources to learn more:\n\n\\-[@official@Sending end-user IDs - OpenAi](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids)",
- "links": []
+ "description": "Sending end-user IDs in your requests can be a useful tool to help OpenAI monitor and detect abuse. This allows OpenAI to provide your team with more actionable feedback in the event that we detect any policy violations in your application.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Sending end-user IDs - OpenAi",
+ "url": "https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids",
+ "type": "article"
+ }
+ ]
},
"qmx6OHqx4_0JXVIv8dASp": {
"title": "Robust prompt engineering",
- "description": "",
- "links": []
+ "description": "Robust prompt engineering involves carefully crafting inputs to guide AI models toward producing accurate, relevant, and reliable outputs. It focuses on minimizing ambiguity and maximizing clarity by providing specific instructions, examples, or structured formats. Effective prompts anticipate potential issues, such as misinterpretation or inappropriate responses, and address them through testing and refinement. This approach enhances the consistency and quality of the model's behavior, making it especially useful for complex tasks like multi-step reasoning, content generation, and interactive systems.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Building Robust Prompt Engineering Capability",
+ "url": "https://aimresearch.co/product/building-robust-prompt-engineering-capability",
+ "type": "article"
+ },
+ {
+ "title": "Effective Prompt Engineering: A Comprehensive Guide",
+ "url": "https://medium.com/@nmurugs/effective-prompt-engineering-a-comprehensive-guide-803160c571ed",
+ "type": "article"
+ }
+ ]
},
"t1SObMWkDZ1cKqNNlcd9L": {
"title": "Know your Customers / Usecases",
- "description": "",
- "links": []
+ "description": "To know your customer means deeply understanding the needs, behaviors, and expectations of your target users. This ensures the tools you create are tailored precisely for their intended purpose, while also being designed to prevent misuse or unintended applications. By clearly defining the tool’s functionality and boundaries, you can align its features with the users’ goals while incorporating safeguards that limit its use in contexts it wasn’t designed for. This approach enhances both the tool’s effectiveness and safety, reducing the risk of improper use.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Assigning Roles",
+ "url": "https://learnprompting.org/docs/basics/roles",
+ "type": "article"
+ }
+ ]
},
"ONLDyczNacGVZGojYyJrU": {
"title": "Constraining outputs and inputs",
- "description": "",
- "links": []
+ "description": "Constraining outputs and inputs in AI models refers to implementing limits or rules that guide both the data the model processes (inputs) and the results it generates (outputs). Input constraints ensure that only valid, clean, and well-formed data enters the model, which helps to reduce errors and improve performance. This can include setting data type restrictions, value ranges, or specific formats. Output constraints, on the other hand, ensure that the model produces appropriate, safe, and relevant results, often by limiting output length, specifying answer formats, or applying filters to avoid harmful or biased responses. These constraints are crucial for improving model safety, alignment, and utility in practical applications.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Preventing Prompt Injection",
+ "url": "https://learnprompting.org/docs/prompt_hacking/defensive_measures/introduction",
+ "type": "article"
+ },
+ {
+ "title": "Introducing Structured Outputs in the API - OpenAI",
+ "url": "https://openai.com/index/introducing-structured-outputs-in-the-api/",
+ "type": "article"
+ }
+ ]
},
"a_3SabylVqzzOyw3tZN5f": {
"title": "OpenSource AI",
- "description": "",
- "links": []
+ "description": "Open-source AI refers to AI models, tools, and frameworks that are freely available for anyone to use, modify, and distribute. Examples include TensorFlow, PyTorch, and models like BERT and Stable Diffusion. Open-source AI fosters transparency, collaboration, and innovation by allowing developers to inspect code, adapt models for specific needs, and contribute improvements. This approach accelerates the development of AI technologies, enabling faster experimentation and reducing dependency on proprietary solutions.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Open Source AI Is the Path Forward",
+ "url": "https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/",
+ "type": "article"
+ },
+ {
+ "title": "Should You Use Open Source Large Language Models?",
+ "url": "https://www.youtube.com/watch?v=y9k-U9AuDeM",
+ "type": "video"
+ }
+ ]
},
"RBwGsq9DngUsl8PrrCbqx": {
"title": "Open vs Closed Source Models",
- "description": "",
- "links": []
+ "description": "Open-source models are freely available for customization and collaboration, promoting transparency and flexibility, while closed-source models are proprietary, offering ease of use but limiting modification and transparency.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI vs. open-source LLM",
+ "url": "https://ubiops.com/openai-vs-open-source-llm/",
+ "type": "article"
+ },
+ {
+ "title": "AI360 | Open-Source vs Closed-Source LLMs",
+ "url": "https://www.youtube.com/watch?v=710PDpuLwOc",
+ "type": "video"
+ }
+ ]
},
"97eu-XxYUH9pYbD_KjAtA": {
"title": "Popular Open Source Models",
- "description": "",
- "links": []
+ "description": "Open-source large language models (LLMs) are models whose source code and architecture are publicly available for use, modification, and distribution. They are built using machine learning algorithms that process and generate human-like text, and being open-source, they promote transparency, innovation, and community collaboration in their development and application.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "The best large language models (LLMs) in 2024",
+ "url": "https://zapier.com/blog/best-llm/",
+ "type": "article"
+ },
+ {
+ "title": "8 Top Open-Source LLMs for 2024 and Their Uses",
+ "url": "https://www.datacamp.com/blog/top-open-source-llms",
+ "type": "article"
+ }
+ ]
},
"v99C5Bml2a6148LCJ9gy9": {
"title": "Hugging Face",
- "description": "",
- "links": []
+ "description": "Hugging Face is a leading AI company and open-source platform that provides tools, models, and libraries for natural language processing (NLP), computer vision, and other machine learning tasks. It is best known for its \"Transformers\" library, which simplifies the use of pre-trained models like BERT, GPT, T5, and CLIP, making them accessible for tasks such as text classification, translation, summarization, and image recognition.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Hugging Face Official Video Course",
+ "url": "https://www.youtube.com/watch?v=00GKzGyWFEs&list=PLo2EIpI_JMQvWfQndUesu0nPBAtZ9gP1o",
+ "type": "course"
+ },
+ {
+ "title": "Hugging Face Website",
+ "url": "https://huggingface.co",
+ "type": "article"
+ },
+ {
+ "title": "What is Hugging Face? - Machine Learning Hub Explained",
+ "url": "https://www.youtube.com/watch?v=1AUjKfpRZVo",
+ "type": "video"
+ }
+ ]
},
"YLOdOvLXa5Fa7_mmuvKEi": {
"title": "Hugging Face Hub",
- "description": "",
- "links": []
+ "description": "The Hugging Face Hub is a comprehensive platform that hosts over 900,000 machine learning models, 200,000 datasets, and 300,000 demo applications, facilitating collaboration and sharing within the AI community. It serves as a central repository where users can discover, upload, and experiment with various models and datasets across multiple domains, including natural language processing, computer vision, and audio tasks. It also supports version control.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "nlp-official",
+ "url": "https://huggingface.co/learn/nlp-course/en/chapter4/1",
+ "type": "course"
+ },
+ {
+ "title": "Documentation",
+ "url": "https://huggingface.co/docs/hub/en/index",
+ "type": "article"
+ }
+ ]
},
"YKIPOiSj_FNtg0h8uaSMq": {
"title": "Hugging Face Tasks",
- "description": "",
- "links": []
+ "description": "Hugging Face supports text classification, named entity recognition, question answering, summarization, and translation. It also extends to multimodal tasks that involve both text and images, such as visual question answering (VQA) and image-text matching. Each task is done by various pre-trained models that can be easily accessed and fine-tuned through the Hugging Face library.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Task and Model",
+ "url": "https://huggingface.co/learn/computer-vision-course/en/unit4/multimodal-models/tasks-models-part1",
+ "type": "article"
+ },
+ {
+ "title": "Task Summary",
+ "url": "https://huggingface.co/docs/transformers/v4.14.1/en/task_summary",
+ "type": "article"
+ },
+ {
+ "title": "Task Manager",
+ "url": "https://huggingface.co/docs/optimum/en/exporters/task_manager",
+ "type": "article"
+ }
+ ]
},
"3kRTzlLNBnXdTsAEXVu_M": {
"title": "Inference SDK",
- "description": "",
- "links": []
+ "description": "The Hugging Face Inference SDK is a powerful tool that allows developers to easily integrate and run inference on large language models hosted on the Hugging Face Hub. By using the `InferenceClient`, users can make API calls to various models for tasks such as text generation, image creation, and more. The SDK supports both synchronous and asynchronous operations thus compatible with existing workflows.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Inference",
+ "url": "https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client",
+ "type": "article"
+ },
+ {
+ "title": "Endpoint Setup",
+ "url": "https://www.npmjs.com/package/@huggingface/inference",
+ "type": "article"
+ }
+ ]
},
"bGLrbpxKgENe2xS1eQtdh": {
"title": "Transformers.js",
- "description": "",
- "links": []
+ "description": "Transformers.js is a JavaScript library that enables transformer models, like those from Hugging Face, to run directly in the browser or Node.js, without needing cloud services. It supports tasks such as text generation, sentiment analysis, and translation within web apps or server-side scripts. Using WebAssembly (Wasm) and efficient JavaScript, Transformers.js offers powerful NLP capabilities with low latency, enhanced privacy, and offline functionality, making it ideal for real-time, interactive applications where local processing is essential for performance and security.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Transformers.js on Hugging Face",
+ "url": "https://huggingface.co/docs/transformers.js/en/index",
+ "type": "article"
+ },
+ {
+ "title": "How Transformer.js Can Help You Create Smarter AI In Your Browser",
+ "url": "https://www.youtube.com/watch?v=MNJHu9zjpqg",
+ "type": "video"
+ }
+ ]
},
"rTT2UnvqFO3GH6ThPLEjO": {
"title": "Ollama",
- "description": "",
- "links": []
+ "description": "Ollama is a platform that offers large language models (LLMs) designed to run locally on personal devices, enabling AI functionality without relying on cloud services. It focuses on privacy, performance, and ease of use by allowing users to deploy models directly on laptops, desktops, or edge devices, providing fast, offline AI capabilities. With tools like the Ollama SDK, developers can integrate these models into their applications for tasks such as text generation, summarization, and more, benefiting from reduced latency, greater data control, and seamless local processing.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Ollama Website",
+ "url": "https://ollama.com/",
+ "type": "article"
+ },
+ {
+ "title": "Ollama: Easily run LLMs locally",
+ "url": "https://klu.ai/glossary/ollama",
+ "type": "article"
+ }
+ ]
},
"ro3vY_sp6xMQ-hfzO-rc1": {
"title": "Ollama Models",
- "description": "",
- "links": []
+ "description": "Ollama provides a collection of large language models (LLMs) designed to run locally on personal devices, enabling privacy-focused and efficient AI applications without relying on cloud services. These models can perform tasks like text generation, translation, summarization, and question answering, similar to popular models like GPT. Ollama emphasizes ease of use, offering models that are optimized for lower resource consumption, making it possible to deploy AI capabilities directly on laptops or edge devices.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Ollama Model Library",
+ "url": "https://ollama.com/library",
+ "type": "article"
+ },
+ {
+ "title": "What are the different types of models? Ollama Course",
+ "url": "https://www.youtube.com/watch?v=f4tXwCNP1Ac",
+ "type": "video"
+ }
+ ]
},
"TsG_I7FL-cOCSw8gvZH3r": {
"title": "Ollama SDK",
- "description": "",
- "links": []
+ "description": "The Ollama SDK is a community-driven tool that allows developers to integrate and run large language models (LLMs) locally through a simple API. Enabling users to easily import the Ollama provider and create customized instances for various models, such as Llama 2 and Mistral. The SDK supports functionalities like `text generation` and `embeddings`, making it versatile for applications ranging from `chatbots` to `content generation`. Also Ollama SDK enhances privacy and control over data while offering seamless integration with existing workflows.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "SDK Provider",
+ "url": "https://sdk.vercel.ai/providers/community-providers/ollama",
+ "type": "article"
+ },
+ {
+ "title": "Beginner's Guide",
+ "url": "https://dev.to/jayantaadhikary/using-the-ollama-api-to-run-llms-and-generate-responses-locally-18b7",
+ "type": "article"
+ },
+ {
+ "title": "Setup",
+ "url": "https://klu.ai/glossary/ollama",
+ "type": "article"
+ }
+ ]
},
"--ig0Ume_BnXb9K2U7HJN": {
"title": "What are Embeddings",
- "description": "",
+ "description": "Embeddings are dense, numerical vector representations of data, such as words, sentences, images, or audio, that capture their semantic meaning and relationships. By converting data into fixed-length vectors, embeddings allow machine learning models to process and understand the data more effectively. For example, word embeddings represent similar words with similar vectors, enabling tasks like semantic search, recommendation systems, and clustering. Embeddings make it easier to compare, search, and analyze complex, unstructured data by mapping similar items close together in a high-dimensional space.",
"links": []
},
"eMfcyBxnMY_l_5-8eg6sD": {
"title": "Semantic Search",
- "description": "",
- "links": []
+ "description": "Embeddings are used for semantic search by converting text, such as queries and documents, into high-dimensional vectors that capture the underlying meaning and context, rather than just exact words. These embeddings represent the semantic relationships between words or phrases, allowing the system to understand the query’s intent and retrieve relevant information, even if the exact terms don’t match.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is semantic search?",
+ "url": "https://www.elastic.co/what-is/semantic-search",
+ "type": "article"
+ },
+ {
+ "title": "What is Semantic Search? Cohere",
+ "url": "https://www.youtube.com/watch?v=fFt4kR4ntAA",
+ "type": "video"
+ }
+ ]
},
"HQe9GKy3p0kTUPxojIfSF": {
- "title": "Recommendation Systems",
- "description": "",
- "links": []
+ "title": "Recommendation Systems",
+ "description": "In the context of embeddings, recommendation systems use vector representations to capture similarities between items, such as products or content. By converting items and user preferences into embeddings, these systems can measure how closely related different items are based on vector proximity, allowing them to recommend similar products or content based on a user's past interactions. This approach improves recommendation accuracy and efficiency by enabling meaningful, scalable comparisons of complex data.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What role does AI play in recommendation systems and engines?",
+ "url": "https://www.algolia.com/blog/ai/what-role-does-ai-play-in-recommendation-systems-and-engines/",
+ "type": "article"
+ },
+ {
+ "title": "What is a recommendation engine?",
+ "url": "https://www.ibm.com/think/topics/recommendation-engine",
+ "type": "article"
+ }
+ ]
},
"AglWJ7gb9rTT2rMkstxtk": {
"title": "Anomaly Detection",
- "description": "",
- "links": []
+ "description": "Anomaly detection with embeddings works by transforming data, such as text, images, or time-series data, into vector representations that capture their patterns and relationships. In this high-dimensional space, similar data points are positioned close together, while anomalies stand out as those that deviate significantly from the typical distribution. This approach is highly effective for detecting outliers in tasks like fraud detection, network security, and quality control.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Anomoly in Embeddings",
+ "url": "https://ai.google.dev/gemini-api/tutorials/anomaly_detection",
+ "type": "article"
+ }
+ ]
},
"06Xta-OqSci05nV2QMFdF": {
"title": "Data Classification",
- "description": "",
- "links": []
+ "description": "Once data is embedded, a classification algorithm, such as a neural network or a logistic regression model, can be trained on these embeddings to classify the data into different categories. The advantage of using embeddings is that they capture underlying relationships and similarities between data points, even if the raw data is complex or high-dimensional, improving classification accuracy in tasks like text classification, image categorization, and recommendation systems.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Text Embeddings, Classification, and Semantic Search (w/ Python Code)",
+ "url": "https://www.youtube.com/watch?v=sNa_uiqSlJo",
+ "type": "video"
+ }
+ ]
},
"l6priWeJhbdUD5tJ7uHyG": {
"title": "Open AI Embeddings API",
- "description": "",
- "links": []
+ "description": "The OpenAI Embeddings API allows developers to generate dense vector representations of text, which capture semantic meaning and relationships. These embeddings can be used for various tasks, such as semantic search, recommendation systems, and clustering, by enabling the comparison of text based on similarity in vector space. The API supports easy integration and scalability, making it possible to handle large datasets and perform tasks like finding similar documents, organizing content, or building recommendation engines. Learn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Embeddings API",
+ "url": "https://platform.openai.com/docs/api-reference/embeddings/create",
+ "type": "article"
+ },
+ {
+ "title": "Master OpenAI EMBEDDING API",
+ "url": "https://www.youtube.com/watch?v=9oCS-VQupoc",
+ "type": "video"
+ }
+ ]
},
"y0qD5Kb4Pf-ymIwW-tvhX": {
"title": "Open AI Embedding Models",
- "description": "",
- "links": []
+ "description": "OpenAI's embedding models convert text into dense vector representations that capture semantic meaning, allowing for efficient similarity searches, clustering, and recommendations. These models are commonly used for tasks like semantic search, where similar phrases are mapped to nearby points in a vector space, and for building recommendation systems by comparing embeddings to find related content. OpenAI's embedding models offer versatility, supporting a range of applications from document retrieval to content classification, and can be easily integrated through the OpenAI API for scalable and efficient deployment.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Embedding Models",
+ "url": "https://platform.openai.com/docs/guides/embeddings/embedding-models",
+ "type": "article"
+ },
+ {
+ "title": "OpenAI Embeddings Explained in 5 Minutes",
+ "url": "https://www.youtube.com/watch?v=8kJStTRuMcs",
+ "type": "video"
+ }
+ ]
},
"4GArjDYipit4SLqKZAWDf": {
"title": "Pricing Considerations",
- "description": "",
- "links": []
+ "description": "The pricing for the OpenAI Embedding API is based on the number of tokens processed and the specific embedding model used. Costs are determined by the total tokens needed to generate embeddings, so longer texts will result in higher charges. To manage costs, developers can optimize by shortening inputs or batching requests. Additionally, selecting the right embedding model for your performance and budget requirements, along with monitoring token usage, can help control expenses.",
+ "links": [
+ {
+ "title": "OpenAI API Pricing",
+ "url": "https://openai.com/api/pricing/",
+ "type": "article"
+ }
+ ]
},
"apVYIV4EyejPft25oAvdI": {
"title": "Open-Source Embeddings",
- "description": "",
+ "description": "Open-source embeddings are pre-trained vector representations of data, usually text, that are freely available for use and modification. These embeddings capture semantic meanings, making them useful for tasks like semantic search, text classification, and clustering. Examples include Word2Vec, GloVe, and FastText, which represent words as vectors based on their context in large corpora, and more advanced models like Sentence-BERT and CLIP that provide embeddings for sentences and images. Open-source embeddings allow developers to leverage pre-trained models without starting from scratch, enabling faster development and experimentation in natural language processing and other AI applications.",
"links": []
},
"ZV_V6sqOnRodgaw4mzokC": {
"title": "Sentence Transformers",
- "description": "",
- "links": []
+ "description": "Sentence Transformers are a type of model designed to generate high-quality embeddings for sentences, allowing them to capture the semantic meaning of text. Unlike traditional word embeddings, which represent individual words, Sentence Transformers understand the context of entire sentences, making them ideal for tasks that require semantic similarity, such as sentence clustering, semantic search, and paraphrase detection. Built on top of transformer models like BERT and RoBERTa, they convert sentences into dense vectors, where similar sentences are placed closer together in vector space.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is BERT?",
+ "url": "https://h2o.ai/wiki/bert/",
+ "type": "article"
+ },
+ {
+ "title": "SentenceTransformers Documentation",
+ "url": "https://sbert.net/",
+ "type": "article"
+ },
+ {
+ "title": "Using Sentence Transformers at Hugging Face",
+ "url": "https://huggingface.co/docs/hub/sentence-transformers",
+ "type": "article"
+ }
+ ]
},
"dLEg4IA3F5jgc44Bst9if": {
"title": "Models on Hugging Face",
@@ -347,233 +1059,755 @@
},
"tt9u3oFlsjEMfPyojuqpc": {
"title": "Vector Databases",
- "description": "",
+ "description": "Vector databases are systems specialized in storing, indexing, and retrieving high-dimensional vectors, often used as embeddings for data like text, images, or audio. Unlike traditional databases, they excel at managing unstructured data by enabling fast similarity searches, where vectors are compared to find the closest matches. This makes them essential for tasks like semantic search, recommendation systems, and content discovery. Using techniques like approximate nearest neighbor (ANN) search, vector databases handle large datasets efficiently, ensuring quick and accurate retrieval even at scale.",
"links": []
},
"WcjX6p-V-Rdd77EL8Ega9": {
"title": "Purpose and Functionality",
- "description": "",
- "links": []
+ "description": "A vector database is designed to store, manage, and retrieve high-dimensional vectors (embeddings) generated by AI models. Its primary purpose is to perform fast and efficient similarity searches, enabling applications to find data points that are semantically or visually similar to a given query. Unlike traditional databases, which handle structured data, vector databases excel at managing unstructured data like text, images, and audio by converting them into dense vector representations. They use indexing techniques, such as approximate nearest neighbor (ANN) algorithms, to quickly search large datasets and return relevant results. Vector databases are essential for applications like recommendation systems, semantic search, and content discovery, where understanding and retrieving similar items is crucial.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is a Vector Database? Top 12 Use Cases",
+ "url": "https://lakefs.io/blog/what-is-vector-databases/",
+ "type": "article"
+ },
+ {
+ "title": "Vector Databases: Intro, Use Cases",
+ "url": "https://www.v7labs.com/blog/vector-databases",
+ "type": "article"
+ }
+ ]
},
"dSd2C9lNl-ymmCRT9_ZC3": {
"title": "Chroma",
- "description": "Chroma is an open-source vector database and AI-native embedding database designed to handle and store large-scale embeddings and semantic vectors. It is used in applications that require fast, efficient similarity searches, such as natural language processing (NLP), machine learning (ML), and AI systems dealing with text, images, and other high-dimensional data.\n\nVisit the following resources to learn more:\n\n\\-[@official@Chroma](https://www.trychroma.com/) -[@article@Chroma Tutorials](https://lablab.ai/tech/chroma) -[@video@Chroma - Chroma - Vector Database for LLM Applications](https://youtu.be/Qs_y0lTJAp0?si=Z2-eSmhf6PKrEKCW)",
- "links": []
+ "description": "Chroma is an open-source vector database and AI-native embedding database designed to handle and store large-scale embeddings and semantic vectors. It is used in applications that require fast, efficient similarity searches, such as natural language processing (NLP), machine learning (ML), and AI systems dealing with text, images, and other high-dimensional data.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Chroma",
+ "url": "https://www.trychroma.com/",
+ "type": "article"
+ },
+ {
+ "title": "Chroma Tutorials",
+ "url": "https://lablab.ai/tech/chroma",
+ "type": "article"
+ },
+ {
+ "title": "Chroma - Chroma - Vector Database for LLM Applications",
+ "url": "https://youtu.be/Qs_y0lTJAp0?si=Z2-eSmhf6PKrEKCW",
+ "type": "video"
+ }
+ ]
},
"_Cf7S1DCvX7p1_3-tP3C3": {
"title": "Pinecone",
- "description": "",
- "links": []
+ "description": "Pinecone is a managed vector database designed for efficient similarity search and real-time retrieval of high-dimensional data, such as embeddings. It allows developers to store, index, and query vector representations, making it easy to build applications like recommendation systems, semantic search, and AI-driven content discovery. Pinecone is scalable, handles large datasets, and provides fast, low-latency searches using optimized indexing techniques.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Pinecone Website",
+ "url": "https://www.pinecone.io",
+ "type": "article"
+ },
+ {
+ "title": "Everything you need to know about Pinecone",
+ "url": "https://www.packtpub.com/article-hub/everything-you-need-to-know-about-pinecone-a-vector-database?srsltid=AfmBOorXsy9WImpULoLjd-42ERvTzj3pQb7C2EFgamWlRobyGJVZKKdz",
+ "type": "article"
+ },
+ {
+ "title": "Introducing Pinecone Serverless",
+ "url": "https://www.youtube.com/watch?v=iCuR6ihHQgc",
+ "type": "video"
+ }
+ ]
},
"VgUnrZGKVjAAO4n_llq5-": {
"title": "Weaviate",
- "description": "",
- "links": []
+ "description": "Weaviate is an open-source vector database that allows users to store, search, and manage high-dimensional vectors, often used for tasks like semantic search and recommendation systems. It enables efficient similarity searches by converting data (like text, images, or audio) into embeddings and indexing them for fast retrieval. Weaviate also supports integrating external data sources and schemas, making it easy to combine structured and unstructured data.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Weaviate Website",
+ "url": "https://weaviate.io/",
+ "type": "article"
+ },
+ {
+ "title": "Advanced AI Agents with RAG",
+ "url": "https://www.youtube.com/watch?v=UoowC-hsaf0&list=PLTL2JUbrY6tVmVxY12e6vRDmY-maAXzR1",
+ "type": "video"
+ }
+ ]
},
"JurLbOO1Z8r6C3yUqRNwf": {
"title": "FAISS",
- "description": "",
- "links": []
+ "description": "FAISS (Facebook AI Similarity Search) is a library developed by Facebook AI for efficient similarity search and clustering of dense vectors, particularly useful for large-scale datasets. It is optimized to handle embeddings (vector representations) and enables fast nearest neighbor search, allowing you to retrieve similar items from a large collection of vectors based on distance or similarity metrics like cosine similarity or Euclidean distance. FAISS is widely used in applications such as image and text retrieval, recommendation systems, and large-scale search systems where embeddings are used to represent items. It offers several indexing methods and can scale to billions of vectors, making it a powerful tool for handling real-time, large-scale similarity search problems efficiently.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "FAISS",
+ "url": "https://ai.meta.com/tools/faiss/",
+ "type": "article"
+ },
+ {
+ "title": "What Is Faiss (Facebook AI Similarity Search)?",
+ "url": "https://www.datacamp.com/blog/faiss-facebook-ai-similarity-search",
+ "type": "article"
+ },
+ {
+ "title": "FAISS Vector Library with LangChain and OpenAI",
+ "url": "https://www.youtube.com/watch?v=ZCSsIkyCZk4",
+ "type": "video"
+ }
+ ]
},
"rjaCNT3Li45kwu2gXckke": {
"title": "LanceDB",
- "description": "",
- "links": []
+ "description": "LanceDB is a vector database designed for efficient storage, retrieval, and management of embeddings. It enables users to perform fast similarity searches, particularly useful in applications like recommendation systems, semantic search, and AI-driven content retrieval. LanceDB focuses on scalability and speed, allowing large-scale datasets of embeddings to be indexed and queried quickly, which is essential for real-time AI applications. It integrates well with machine learning workflows, making it easier to deploy models that rely on vector-based data processing, and helps manage the complexities of handling high-dimensional vector data efficiently.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "LanceDB on GitHub",
+ "url": "https://github.com/lancedb/lancedb",
+ "type": "opensource"
+ },
+ {
+ "title": "LanceDB Website",
+ "url": "https://lancedb.com/",
+ "type": "article"
+ }
+ ]
},
"DwOAL5mOBgBiw-EQpAzQl": {
"title": "Qdrant",
- "description": "",
- "links": []
+ "description": "Qdrant is an open-source vector database designed for efficient similarity search and real-time data retrieval. It specializes in storing and indexing high-dimensional vectors (embeddings) to enable fast and accurate searches across large datasets. Qdrant is particularly suited for applications like recommendation systems, semantic search, and AI-driven content discovery, where finding similar items quickly is essential. It supports advanced filtering, scalable indexing, and real-time updates, making it easy to integrate into machine learning workflows.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Qdrant on GitHub",
+ "url": "https://github.com/qdrant/qdrant",
+ "type": "opensource"
+ },
+ {
+ "title": "Qdrant Website",
+ "url": "https://qdrant.tech/",
+ "type": "article"
+ },
+ {
+ "title": "Getting started with Qdrant",
+ "url": "https://www.youtube.com/watch?v=LRcZ9pbGnno",
+ "type": "video"
+ }
+ ]
},
"9kT7EEQsbeD2WDdN9ADx7": {
"title": "Supabase",
- "description": "",
- "links": []
+ "description": "Supabase Vector is an extension of the Supabase platform, specifically designed for AI and machine learning applications that require vector operations. It leverages PostgreSQL's pgvector extension to provide efficient vector storage and similarity search capabilities. This makes Supabase Vector particularly useful for applications involving embeddings, semantic search, and recommendation systems. With Supabase Vector, developers can store and query high-dimensional vector data alongside regular relational data, all within the same PostgreSQL database.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Supabase Vector website",
+ "url": "https://supabase.com/vector",
+ "type": "article"
+ },
+ {
+ "title": "Supabase Vector: The Postgres Vector database",
+ "url": "https://www.youtube.com/watch?v=MDxEXKkxf2Q",
+ "type": "video"
+ }
+ ]
},
"j6bkm0VUgLkHdMDDJFiMC": {
"title": "MongoDB Atlas",
- "description": "",
- "links": []
+ "description": "MongoDB Atlas, traditionally known for its document database capabilities, now includes vector search functionality, making it a strong option as a vector database. This feature allows developers to store and query high-dimensional vector data alongside regular document data. With Atlas’s vector search, users can perform similarity searches on embeddings of text, images, or other complex data, making it ideal for AI and machine learning applications like recommendation systems, image similarity search, and natural language processing tasks. The seamless integration of vector search within the MongoDB ecosystem allows developers to leverage familiar tools and interfaces while benefiting from advanced vector-based operations for sophisticated data analysis and retrieval.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Vector Search in MongoDB Atlas",
+ "url": "https://www.mongodb.com/products/platform/atlas-vector-search",
+ "type": "article"
+ }
+ ]
},
"5TQnO9B4_LTHwqjI7iHB1": {
"title": "Indexing Embeddings",
- "description": "",
- "links": []
+ "description": "Embeddings are stored in a vector database by first converting data, such as text, images, or audio, into high-dimensional vectors using machine learning models. These vectors, also called embeddings, capture the semantic relationships and patterns within the data. Once generated, each embedding is indexed in the vector database along with its associated metadata, such as the original data (e.g., text or image) or an identifier. The vector database then organizes these embeddings to support efficient similarity searches, typically using techniques like approximate nearest neighbor (ANN) search.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Indexing & Embeddings",
+ "url": "https://docs.llamaindex.ai/en/stable/understanding/indexing/indexing/",
+ "type": "article"
+ },
+ {
+ "title": "Vector Databases simply explained! (Embeddings & Indexes)",
+ "url": "https://www.youtube.com/watch?v=dN0lsF2cvm4",
+ "type": "video"
+ }
+ ]
},
"ZcbRPtgaptqKqWBgRrEBU": {
"title": "Performing Similarity Search",
- "description": "",
+ "description": "In a similarity search, the process begins by converting the user’s query (such as a piece of text or an image) into an embedding—a vector representation that captures the query’s semantic meaning. This embedding is generated using a pre-trained model, such as BERT for text or a neural network for images. Once the query is converted into a vector, it is compared to the embeddings stored in the vector database.",
"links": []
},
"lVhWhZGR558O-ljHobxIi": {
"title": "RAG & Implementation",
- "description": "",
+ "description": "Retrieval-Augmented Generation (RAG) combines information retrieval with language generation to produce more accurate, context-aware responses. It uses two components: a retriever, which searches a database to find relevant information, and a generator, which crafts a response based on the retrieved data. Implementing RAG involves using a retrieval model (e.g., embeddings and vector search) alongside a generative language model (like GPT). The process starts by converting a query into embeddings, retrieving relevant documents from a vector database, and feeding them to the language model, which then generates a coherent, informed response. This approach grounds outputs in real-world data, resulting in more reliable and detailed answers.",
"links": []
},
"GCn4LGNEtPI0NWYAZCRE-": {
"title": "RAG Usecases",
- "description": "",
- "links": []
+ "description": "Retrieval-Augmented Generation (RAG) enhances applications like chatbots, customer support, and content summarization by combining information retrieval with language generation. It retrieves relevant data from a knowledge base and uses it to generate accurate, context-aware responses, making it ideal for tasks such as question answering, document generation, and semantic search. RAG’s ability to ground outputs in real-world information leads to more reliable and informative results, improving user experience across various domains.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Retrieval augmented generation use cases: Transforming data into insights",
+ "url": "https://www.glean.com/blog/retrieval-augmented-generation-use-cases",
+ "type": "article"
+ },
+ {
+ "title": "Retrieval Augmented Generation (RAG) – 5 Use Cases",
+ "url": "https://theblue.ai/blog/rag-news/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to RAG",
+ "url": "https://www.youtube.com/watch?v=LmiFeXH-kq8&list=PL-pTHQz4RcBbz78Z5QXsZhe9rHuCs1Jw-",
+ "type": "video"
+ }
+ ]
},
"qlBEXrbV88e_wAGRwO9hW": {
"title": "RAG vs Fine-tuning",
- "description": "",
- "links": []
+ "description": "RAG (Retrieval-Augmented Generation) and fine-tuning are two approaches to enhancing language models, but they differ in methodology and use cases. Fine-tuning involves training a pre-trained model on a specific dataset to adapt it to a particular task, making it more accurate for that context but limited to the knowledge present in the training data. RAG, on the other hand, combines real-time information retrieval with generation, enabling the model to access up-to-date external data and produce contextually relevant responses. While fine-tuning is ideal for specialized, static tasks, RAG is better suited for dynamic tasks that require real-time, fact-based responses.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "RAG vs Fine Tuning: How to Choose the Right Method",
+ "url": "https://www.montecarlodata.com/blog-rag-vs-fine-tuning/",
+ "type": "article"
+ },
+ {
+ "title": "RAG vs Finetuning — Which Is the Best Tool to Boost Your LLM Application?",
+ "url": "https://towardsdatascience.com/rag-vs-finetuning-which-is-the-best-tool-to-boost-your-llm-application-94654b1eaba7",
+ "type": "article"
+ },
+ {
+ "title": "RAG vs Fine-tuning",
+ "url": "https://www.youtube.com/watch?v=00Q0G84kq3M",
+ "type": "video"
+ }
+ ]
},
"mX987wiZF7p3V_gExrPeX": {
"title": "Chunking",
- "description": "",
- "links": []
+ "description": "The chunking step in Retrieval-Augmented Generation (RAG) involves breaking down large documents or data sources into smaller, manageable chunks. This is done to ensure that the retriever can efficiently search through large volumes of data while staying within the token or input limits of the model. Each chunk, typically a paragraph or section, is converted into an embedding, and these embeddings are stored in a vector database. When a query is made, the retriever searches for the most relevant chunks rather than the entire document, enabling faster and more accurate retrieval.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Understanding LangChain's RecursiveCharacterTextSplitter",
+ "url": "https://dev.to/eteimz/understanding-langchains-recursivecharactertextsplitter-2846",
+ "type": "article"
+ },
+ {
+ "title": "Chunking Strategies for LLM Applications",
+ "url": "https://www.pinecone.io/learn/chunking-strategies/",
+ "type": "article"
+ },
+ {
+ "title": "A Guide to Chunking Strategies for Retrieval Augmented Generation",
+ "url": "https://zilliz.com/learn/guide-to-chunking-strategies-for-rag",
+ "type": "article"
+ }
+ ]
},
"grTcbzT7jKk_sIUwOTZTD": {
"title": "Embedding",
- "description": "",
- "links": []
+ "description": "In Retrieval-Augmented Generation (RAG), embeddings are essential for linking information retrieval with natural language generation. Embeddings represent both the user query and documents as dense vectors in a shared space, enabling the system to retrieve relevant information based on similarity. This retrieved information is then fed into a generative model, such as GPT, to produce contextually informed and accurate responses. By using embeddings, RAG enhances the model's ability to generate content grounded in external knowledge, making it effective for tasks like question answering and summarization.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Understanding the role of embeddings in RAG LLMs",
+ "url": "https://www.aporia.com/learn/understanding-the-role-of-embeddings-in-rag-llms/",
+ "type": "article"
+ },
+ {
+ "title": "Mastering RAG: How to Select an Embedding Model",
+ "url": "https://www.rungalileo.io/blog/mastering-rag-how-to-select-an-embedding-model",
+ "type": "article"
+ }
+ ]
},
"zZA1FBhf1y4kCoUZ-hM4H": {
"title": "Vector Database",
- "description": "",
- "links": []
+ "description": "When implementing Retrieval-Augmented Generation (RAG), a vector database is used to store and efficiently retrieve embeddings, which are vector representations of data like documents, images, or other knowledge sources. During the RAG process, when a query is made, the system converts it into an embedding and searches the vector database for the most relevant, similar embeddings (e.g., related documents or snippets). These retrieved pieces of information are then fed to a generative model, which uses them to produce a more accurate, context-aware response.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to Implement Graph RAG Using Knowledge Graphs and Vector Databases",
+ "url": "https://towardsdatascience.com/how-to-implement-graph-rag-using-knowledge-graphs-and-vector-databases-60bb69a22759",
+ "type": "article"
+ },
+ {
+ "title": "Retrieval Augmented Generation (RAG) with vector databases: Expanding AI Capabilities",
+ "url": "https://objectbox.io/retrieval-augmented-generation-rag-with-vector-databases-expanding-ai-capabilities/",
+ "type": "article"
+ }
+ ]
},
"OCGCzHQM2LQyUWmiqe6E0": {
"title": "Retrieval Process",
- "description": "",
- "links": []
+ "description": "The retrieval process in Retrieval-Augmented Generation (RAG) involves finding relevant information from a large dataset or knowledge base to support the generation of accurate, context-aware responses. When a query is received, the system first converts it into a vector (embedding) and uses this vector to search a database of pre-indexed embeddings, identifying the most similar or relevant data points. Techniques like approximate nearest neighbor (ANN) search are often used to speed up this process.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Retrieval-Augmented Generation (RAG)?",
+ "url": "https://cloud.google.com/use-cases/retrieval-augmented-generation",
+ "type": "article"
+ },
+ {
+ "title": "What Is Retrieval-Augmented Generation, aka RAG?",
+ "url": "https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/",
+ "type": "article"
+ }
+ ]
},
"2jJnS9vRYhaS69d6OxrMh": {
"title": "Generation",
- "description": "",
- "links": []
+ "description": "Generation refers to the process where a generative language model, such as GPT, creates a response based on the information retrieved during the retrieval phase. After relevant documents or data snippets are identified using embeddings, they are passed to the generative model, which uses this information to produce coherent, context-aware, and informative responses. The retrieved content helps the model stay grounded and factual, enhancing its ability to answer questions, provide summaries, or engage in dialogue by combining retrieved knowledge with its natural language generation capabilities. This synergy between retrieval and generation makes RAG systems effective for tasks that require detailed, accurate, and contextually relevant outputs.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is RAG (Retrieval-Augmented Generation)?",
+ "url": "https://aws.amazon.com/what-is/retrieval-augmented-generation/",
+ "type": "article"
+ },
+ {
+ "title": "Retrieval Augmented Generation (RAG) Explained in 8 Minutes!",
+ "url": "https://www.youtube.com/watch?v=HREbdmOSQ18",
+ "type": "video"
+ }
+ ]
},
"WZVW8FQu6LyspSKm1C_sl": {
"title": "Using SDKs Directly",
- "description": "",
- "links": []
+ "description": "While tools like Langchain and LlamaIndex make it easy to implement RAG, you don't have to necessarily learn and use them. If you know about the different steps of implementing RAG you can simply do it all yourself e.g. do the chunking using @langchain/textsplitters package, create embeddings using any LLM e.g. use OpenAI Embedding API through their SDK, save the embeddings to any vector database e.g. if you are using Supabase Vector DB, you can use their SDK and similarly you can use the relevant SDKs for the rest of the steps as well.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Langchain Text Splitter Package",
+ "url": "https://www.npmjs.com/package/@langchain/textsplitters",
+ "type": "article"
+ },
+ {
+ "title": "OpenAI Embedding API",
+ "url": "https://platform.openai.com/docs/guides/embeddings",
+ "type": "article"
+ },
+ {
+ "title": "Supabase AI & Vector Documentation",
+ "url": "https://supabase.com/docs/guides/ai",
+ "type": "article"
+ }
+ ]
},
"ebXXEhNRROjbbof-Gym4p": {
"title": "Langchain",
- "description": "",
- "links": []
+ "description": "LangChain is a development framework that simplifies building applications powered by language models, enabling seamless integration of multiple AI models and data sources. It focuses on creating chains, or sequences, of operations where language models can interact with databases, APIs, and other models to perform complex tasks. LangChain offers tools for prompt management, data retrieval, and workflow orchestration, making it easier to develop robust, scalable applications like chatbots, automated data analysis, and multi-step reasoning systems.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "LangChain Website",
+ "url": "https://www.langchain.com/",
+ "type": "article"
+ },
+ {
+ "title": "What is LangChain?",
+ "url": "https://www.youtube.com/watch?v=1bUy-1hGZpI",
+ "type": "video"
+ }
+ ]
},
"d0ontCII8KI8wfP-8Y45R": {
"title": "Llama Index",
- "description": "",
- "links": []
+ "description": "LlamaIndex, formerly known as GPT Index, is a tool designed to facilitate the integration of large language models (LLMs) with structured and unstructured data sources. It acts as a data framework that helps developers build retrieval-augmented generation (RAG) applications by indexing various types of data, such as documents, databases, and APIs, enabling LLMs to query and retrieve relevant information efficiently.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "llamaindex Website",
+ "url": "https://docs.llamaindex.ai/en/stable/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to LlamaIndex with Python (2024)",
+ "url": "https://www.youtube.com/watch?v=cCyYGYyCka4",
+ "type": "video"
+ }
+ ]
},
"eOqCBgBTKM8CmY3nsWjre": {
"title": "Open AI Assistant API",
- "description": "",
- "links": []
+ "description": "The OpenAI Assistant API enables developers to create advanced conversational systems using models like GPT-4. It supports multi-turn conversations, allowing the AI to maintain context across exchanges, which is ideal for chatbots, virtual assistants, and interactive applications. Developers can customize interactions by defining roles, such as system, user, and assistant, to guide the assistant's behavior. With features like temperature control, token limits, and stop sequences, the API offers flexibility to ensure responses are relevant, safe, and tailored to specific use cases.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Assistants API – Course for Beginners",
+ "url": "https://www.youtube.com/watch?v=qHPonmSX4Ms",
+ "type": "course"
+ },
+ {
+ "title": "Assistants API",
+ "url": "https://platform.openai.com/docs/assistants/overview",
+ "type": "article"
+ }
+ ]
},
"c0RPhpD00VIUgF4HJgN2T": {
"title": "Replicate",
- "description": "",
- "links": []
+ "description": "Replicate is a platform that allows developers to run machine learning models in the cloud without needing to manage infrastructure. It provides a simple API for deploying and scaling models, making it easy to integrate AI capabilities like image generation, text processing, and more into applications. Users can select from a library of pre-trained models or deploy their own, with the platform handling tasks like scaling, monitoring, and versioning.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Replicate Website",
+ "url": "https://replicate.com/",
+ "type": "article"
+ },
+ {
+ "title": "Replicate.com Beginners Tutorial",
+ "url": "https://www.youtube.com/watch?v=y0_GE5ErqY8",
+ "type": "video"
+ }
+ ]
},
"AeHkNU-uJ_gBdo5-xdpEu": {
"title": "AI Agents",
- "description": "",
- "links": []
+ "description": "In AI engineering, \"agents\" refer to autonomous systems or components that can perceive their environment, make decisions, and take actions to achieve specific goals. Agents often interact with external systems, users, or other agents to carry out complex tasks. They can vary in complexity, from simple rule-based bots to sophisticated AI-powered agents that leverage machine learning models, natural language processing, and reinforcement learning.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Building an AI Agent Tutorial - LangChain",
+ "url": "https://python.langchain.com/docs/tutorials/agents/",
+ "type": "article"
+ },
+ {
+ "title": "Ai agents and their types",
+ "url": "https://play.ht/blog/ai-agents-use-cases/",
+ "type": "article"
+ },
+ {
+ "title": "The Complete Guide to Building AI Agents for Beginners",
+ "url": "https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX",
+ "type": "video"
+ }
+ ]
},
"778HsQzTuJ_3c9OSn5DmH": {
"title": "Agents Usecases",
- "description": "AI Agents have a variety of usecases ranging from customer support, workflow automation, cybersecurity, finance, marketing and sales, and more.\n\nVisit the following resources to learn more:\n\n* [@article@Top 15 Use Cases Of AI Agents In Business](https://www.ampcome.com/post/15-use-cases-of-ai-agents-in-business) -[@article@A Brief Guide on AI Agents: Benefits and Use Cases](https://www.codica.com/blog/brief-guide-on-ai-agents/) -[@video@The Complete Guide to Building AI Agents for Beginners](https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX)",
- "links": []
+ "description": "AI Agents have a variety of usecases ranging from customer support, workflow automation, cybersecurity, finance, marketing and sales, and more.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Top 15 Use Cases Of AI Agents In Business",
+ "url": "https://www.ampcome.com/post/15-use-cases-of-ai-agents-in-business",
+ "type": "article"
+ },
+ {
+ "title": "A Brief Guide on AI Agents: Benefits and Use Cases",
+ "url": "https://www.codica.com/blog/brief-guide-on-ai-agents/",
+ "type": "article"
+ },
+ {
+ "title": "The Complete Guide to Building AI Agents for Beginners",
+ "url": "https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX",
+ "type": "video"
+ }
+ ]
},
"voDKcKvXtyLzeZdx2g3Qn": {
"title": "ReAct Prompting",
- "description": "",
- "links": []
+ "description": "ReAct prompting is a technique that combines reasoning and action by guiding language models to think through a problem step-by-step and then take specific actions based on the reasoning. It encourages the model to break down tasks into logical steps (reasoning) and perform operations, such as calling APIs or retrieving information (actions), to reach a solution. This approach helps in scenarios where the model needs to process complex queries, interact with external systems, or handle tasks requiring a sequence of actions, improving the model's ability to provide accurate and context-aware responses.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "ReAct Prompting",
+ "url": "https://www.promptingguide.ai/techniques/react",
+ "type": "article"
+ },
+ {
+ "title": "ReAct Prompting: How We Prompt for High-Quality Results from LLMs",
+ "url": "https://www.width.ai/post/react-prompting",
+ "type": "article"
+ }
+ ]
},
"6xaRB34_g0HGt-y1dGYXR": {
"title": "Manual Implementation",
- "description": "",
- "links": []
+ "description": "Services like [Open AI functions](https://platform.openai.com/docs/guides/function-calling) and Tools or [Vercel's AI SDK](https://sdk.vercel.ai/docs/foundations/tools) make it really easy to make SDK agents however it is a good idea to learn how these tools work under the hood. You can also create fully custom implementation of agents using by implementing custom loop.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Function Calling",
+ "url": "https://platform.openai.com/docs/guides/function-calling",
+ "type": "article"
+ },
+ {
+ "title": "Vercel AI SDK",
+ "url": "https://sdk.vercel.ai/docs/foundations/tools",
+ "type": "article"
+ }
+ ]
},
"Sm0Ne5Nx72hcZCdAcC0C2": {
"title": "OpenAI Functions / Tools",
- "description": "",
- "links": []
+ "description": "OpenAI Functions, also known as tools, enable developers to extend the capabilities of language models by integrating external APIs and functionalities, allowing the models to perform specific actions, fetch real-time data, or interact with other software systems. This feature enhances the model's utility by bridging it with services like web searches, databases, and custom business applications, enabling more dynamic and task-oriented responses.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Function Calling",
+ "url": "https://platform.openai.com/docs/guides/function-calling",
+ "type": "article"
+ },
+ {
+ "title": "How does OpenAI Function Calling work?",
+ "url": "https://www.youtube.com/watch?v=Qor2VZoBib0",
+ "type": "video"
+ }
+ ]
},
"mbp2NoL-VZ5hZIIblNBXt": {
"title": "OpenAI Assistant API",
- "description": "",
- "links": []
+ "description": "The OpenAI Assistant API enables developers to create advanced conversational systems using models like GPT-4. It supports multi-turn conversations, allowing the AI to maintain context across exchanges, which is ideal for chatbots, virtual assistants, and interactive applications. Developers can customize interactions by defining roles, such as system, user, and assistant, to guide the assistant's behavior. With features like temperature control, token limits, and stop sequences, the API offers flexibility to ensure responses are relevant, safe, and tailored to specific use cases.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Assistants API – Course for Beginners",
+ "url": "https://www.youtube.com/watch?v=qHPonmSX4Ms",
+ "type": "course"
+ },
+ {
+ "title": "Assistants API",
+ "url": "https://platform.openai.com/docs/assistants/overview",
+ "type": "article"
+ }
+ ]
},
"W7cKPt_UxcUgwp8J6hS4p": {
"title": "Multimodal AI",
- "description": "",
- "links": []
+ "description": "Multimodal AI is an approach that combines and processes data from multiple sources, such as text, images, audio, and video, to understand and generate responses. By integrating different data types, it enables more comprehensive and accurate AI systems, allowing for tasks like visual question answering, interactive virtual assistants, and enhanced content understanding. This capability helps create richer, more context-aware applications that can analyze and respond to complex, real-world scenarios.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "A Multimodal World - Hugging Face",
+ "url": "https://huggingface.co/learn/computer-vision-course/en/unit4/multimodal-models/a_multimodal_world",
+ "type": "article"
+ },
+ {
+ "title": "Multimodal AI - Google",
+ "url": "https://cloud.google.com/use-cases/multimodal-ai?hl=en",
+ "type": "article"
+ },
+ {
+ "title": "What Is Multimodal AI? A Complete Introduction",
+ "url": "https://www.splunk.com/en_us/blog/learn/multimodal-ai.html",
+ "type": "article"
+ }
+ ]
},
"sGR9qcro68KrzM8qWxcH8": {
"title": "Multimodal AI Usecases",
- "description": "",
- "links": []
+ "description": "Multimodal AI powers applications like visual question answering, content moderation, and enhanced search engines. It drives smarter virtual assistants and interactive AR apps, combining text, images, and audio for richer, more intuitive user experiences across e-commerce, accessibility, and entertainment.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Hugging Face Multimodal Models",
+ "url": "https://huggingface.co/learn/computer-vision-course/en/unit4/multimodal-models/a_multimodal_world",
+ "type": "article"
+ }
+ ]
},
"fzVq4hGoa2gdbIzoyY1Zp": {
"title": "Image Understanding",
- "description": "",
- "links": []
+ "description": "Multimodal AI enhances image understanding by integrating visual data with other types of information, such as text or audio. By combining these inputs, AI models can interpret images more comprehensively, recognizing objects, scenes, and actions, while also understanding context and related concepts. For example, an AI system could analyze an image and generate descriptive captions, or provide explanations based on both visual content and accompanying text.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Low or high fidelity image understanding - OpenAI",
+ "url": "https://platform.openai.com/docs/guides/vision/low-or-high-fidelity-image-understanding",
+ "type": "article"
+ }
+ ]
},
"49BWxYVFpIgZCCqsikH7l": {
"title": "Image Generation",
- "description": "",
- "links": []
+ "description": "Image generation is a process in artificial intelligence where models create new images based on input prompts or existing data. It involves using generative models like GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), or more recently, transformer-based models like DALL-E and Stable Diffusion.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "DALL-E Website",
+ "url": "https://openai.com/index/dall-e-2/",
+ "type": "article"
+ },
+ {
+ "title": "How DALL-E 2 Actually Works",
+ "url": "https://www.assemblyai.com/blog/how-dall-e-2-actually-works/",
+ "type": "article"
+ },
+ {
+ "title": "How AI Image Generators Work (Stable Diffusion / Dall-E)",
+ "url": "https://www.youtube.com/watch?v=1CIpzeNxIhU",
+ "type": "video"
+ }
+ ]
},
"TxaZCtTCTUfwCxAJ2pmND": {
"title": "Video Understanding",
- "description": "",
+ "description": "Video understanding with multimodal AI involves analyzing and interpreting both visual and audio content to provide a more comprehensive understanding of videos. Common use cases include video summarization, where AI extracts key scenes and generates summaries; content moderation, where the system detects inappropriate visuals or audio; and video indexing for easier search and retrieval of specific moments within a video. Other applications include enhancing video-based recommendations, security surveillance, and interactive entertainment, where video and audio are processed together for real-time user interaction.",
"links": []
},
"mxQYB820447DC6kogyZIL": {
"title": "Audio Processing",
- "description": "",
- "links": []
+ "description": "Audio processing in multimodal AI enables a wide range of use cases by combining sound with other data types, such as text, images, or video, to create more context-aware systems. Use cases include speech recognition paired with real-time transcription and visual analysis in meetings or video conferencing tools, voice-controlled virtual assistants that can interpret commands in conjunction with on-screen visuals, and multimedia content analysis where audio and visual elements are analyzed together for tasks like content moderation or video indexing.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "The State of Audio Processing",
+ "url": "https://appwrite.io/blog/post/state-of-audio-processing",
+ "type": "article"
+ },
+ {
+ "title": "Audio Signal Processing for Machine Learning",
+ "url": "https://www.youtube.com/watch?v=iCwMQJnKk2c",
+ "type": "video"
+ }
+ ]
},
"GCERpLz5BcRtWPpv-asUz": {
"title": "Text-to-Speech",
- "description": "",
- "links": []
+ "description": "In the context of multimodal AI, text-to-speech (TTS) technology converts written text into natural-sounding spoken language, allowing AI systems to communicate verbally. When integrated with other modalities, such as visual or interactive elements, TTS can enhance user experiences in applications like virtual assistants, educational tools, and accessibility features. For example, a multimodal AI could read aloud text from an on-screen document while highlighting relevant sections, or narrate information about objects recognized in an image. By combining TTS with other forms of data processing, multimodal AI creates more engaging, accessible, and interactive systems for users.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Text-to-Speech?",
+ "url": "https://aws.amazon.com/polly/what-is-text-to-speech/",
+ "type": "article"
+ },
+ {
+ "title": "From Text to Speech: The Evolution of Synthetic Voices",
+ "url": "https://ignitetech.ai/about/blogs/text-speech-evolution-synthetic-voices",
+ "type": "article"
+ }
+ ]
},
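As a rough illustration of the TTS entry above, the sketch below uses OpenAI's speech endpoint via the `openai` Python SDK. The model and voice names are examples, and how the binary response is read back can vary between SDK versions, so treat the `.read()` call as an assumption to check against your version's docs.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

speech = client.audio.speech.create(
    model="tts-1",   # example model name
    voice="alloy",   # example voice
    input="Multimodal systems can also talk back to their users.",
)

# Assumption: the binary response object exposes .read(); check your SDK version's docs.
with open("hello.mp3", "wb") as f:
    f.write(speech.read())
```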
"jQX10XKd_QM5wdQweEkVJ": {
"title": "Speech-to-Text",
- "description": "",
- "links": []
+ "description": "In the context of multimodal AI, speech-to-text technology converts spoken language into written text, enabling seamless integration with other data types like images and text. This allows AI systems to process audio input and combine it with visual or textual information, enhancing applications such as virtual assistants, interactive chatbots, and multimedia content analysis. For example, a multimodal AI can transcribe a video’s audio while simultaneously analyzing on-screen visuals and text, providing richer and more context-aware insights.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is speech to text? Amazon",
+ "url": "https://aws.amazon.com/what-is/speech-to-text/",
+ "type": "article"
+ },
+ {
+ "title": "Turn speech into text using Google AI",
+ "url": "https://cloud.google.com/speech-to-text",
+ "type": "article"
+ },
+ {
+ "title": "How is Speech to Text Used? ",
+ "url": "https://h2o.ai/wiki/speech-to-text/",
+ "type": "article"
+ }
+ ]
},
"CRrqa-dBw1LlOwVbrZhjK": {
"title": "OpenAI Vision API",
- "description": "",
- "links": []
+ "description": "The OpenAI Vision API enables models to analyze and understand images, allowing them to identify objects, recognize text, and interpret visual content. It integrates image processing with natural language capabilities, enabling tasks like visual question answering, image captioning, and extracting information from photos. This API can be used for applications in accessibility, content moderation, and automation, providing a seamless way to combine visual understanding with text-based interactions.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Vision",
+ "url": "https://platform.openai.com/docs/guides/vision",
+ "type": "article"
+ },
+ {
+ "title": "OpenAI Vision API Crash Course",
+ "url": "https://www.youtube.com/watch?v=ZjkS11DSeEk",
+ "type": "video"
+ }
+ ]
},
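For context on the Vision API entry above, image understanding with a vision-capable chat model generally looks like the sketch below (official `openai` Python SDK); the model name and image URL are placeholders.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```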
"LKFwwjtcawJ4Z12X102Cb": {
"title": "DALL-E API",
- "description": "",
- "links": []
+ "description": "The DALL-E API is a tool provided by OpenAI that allows developers to integrate the DALL-E image generation model into applications. DALL-E is an AI model designed to generate images from textual descriptions, capable of producing highly detailed and creative visuals. The API enables users to provide a descriptive prompt, and the model generates corresponding images, opening up possibilities in fields like design, advertising, content creation, and art.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "OpenAI Image Generation",
+ "url": "https://platform.openai.com/docs/guides/images",
+ "type": "article"
+ },
+ {
+ "title": "DALL E API - Introduction (Generative AI Pictures from OpenAI)",
+ "url": "https://www.youtube.com/watch?v=Zr6vAWwjHN0",
+ "type": "video"
+ }
+ ]
},
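A minimal sketch of calling the image generation endpoint described above with the `openai` Python SDK; the model name, prompt and size are illustrative.

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # example model name
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```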
"OTBd6cPUayKaAM-fLWdSt": {
"title": "Whisper API",
- "description": "",
- "links": []
+ "description": "The Whisper API by OpenAI enables developers to integrate speech-to-text capabilities into their applications. It uses OpenAI's Whisper model, a powerful speech recognition system, to convert spoken language into accurate, readable text. The API supports multiple languages and can handle various accents, making it ideal for tasks like transcription, voice commands, and automated captions. With the ability to process audio in real time or from pre-recorded files, the Whisper API simplifies adding robust speech recognition features to applications, enhancing accessibility and enabling new interactive experiences.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Whisper on GitHub",
+ "url": "https://github.com/openai/whisper",
+ "type": "opensource"
+ },
+ {
+ "title": "OpenAI Whisper",
+ "url": "https://openai.com/index/whisper/",
+ "type": "article"
+ }
+ ]
},
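Transcription with the Whisper API described above typically looks like this sketch (openai Python SDK); the file name is a placeholder, and language and response format can be configured further.

```python
from openai import OpenAI

client = OpenAI()

# Transcribe a local audio file; the filename is a placeholder.
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)
```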
"EIDbwbdolR_qsNKVDla6V": {
"title": "Hugging Face Models",
- "description": "",
- "links": []
+ "description": "Hugging Face models are a collection of pre-trained machine learning models available through the Hugging Face platform, covering a wide range of tasks like natural language processing, computer vision, and audio processing. The platform includes models for tasks such as text classification, translation, summarization, question answering, and more, with popular models like BERT, GPT, T5, and CLIP. Hugging Face provides easy-to-use tools and APIs that allow developers to access, fine-tune, and deploy these models, fostering a collaborative community where users can share, modify, and contribute models to improve AI research and application development.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Hugging Face Models",
+ "url": "https://huggingface.co/models",
+ "type": "article"
+ },
+ {
+ "title": "How to Use Pretrained Models from Hugging Face in a Few Lines of Code",
+ "url": "https://www.youtube.com/watch?v=ntz160EnWIc",
+ "type": "video"
+ }
+ ]
},
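A quick sketch of pulling a pretrained Hugging Face model through the `transformers` pipeline API mentioned above; the default sentiment model is downloaded on first run, so exact outputs may differ between versions.

```python
from transformers import pipeline

# Downloads a small default sentiment model on first run.
classifier = pipeline("sentiment-analysis")

print(classifier("Hugging Face makes shipping ML models much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```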
"j9zD3pHysB1CBhLfLjhpD": {
"title": "LangChain for Multimodal Apps",
- "description": "",
- "links": []
+ "description": "LangChain is a framework designed to build applications that integrate multiple AI models, especially those focusing on language understanding, generation, and multimodal capabilities. For multimodal apps, LangChain facilitates seamless interaction between text, image, and even audio models, enabling developers to create complex workflows that can process and analyze different types of data.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "LangChain Website",
+ "url": "https://www.langchain.com/",
+ "type": "article"
+ },
+ {
+ "title": "Build a Multimodal GenAI App with LangChain and Gemini LLMs",
+ "url": "https://www.youtube.com/watch?v=bToMzuiOMhg",
+ "type": "video"
+ }
+ ]
},
"akQTCKuPRRelj2GORqvsh": {
"title": "LlamaIndex for Multimodal Apps",
- "description": "",
- "links": []
+ "description": "LlamaIndex enables multi-modal apps by linking language models (LLMs) to diverse data sources, including text and images. It indexes and retrieves information across formats, allowing LLMs to process and integrate data from multiple modalities. This supports applications like visual question answering, content summarization, and interactive systems by providing structured, context-aware inputs from various content types.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "LlamaIndex Multy-modal",
+ "url": "https://docs.llamaindex.ai/en/stable/use_cases/multimodal/",
+ "type": "article"
+ },
+ {
+ "title": "Multi-modal Retrieval Augmented Generation with LlamaIndex",
+ "url": "https://www.youtube.com/watch?v=35RlrrgYDyU",
+ "type": "video"
+ }
+ ]
},
"NYge7PNtfI-y6QWefXJ4d": {
"title": "Development Tools",
- "description": "",
- "links": []
+ "description": "AI has given rise to a collection of AI powered development tools of various different varieties. We have IDEs like Cursor that has AI baked into it, live context capturing tools such as Pieces and a variety of brower based tools like V0, Claude and more.",
+ "links": [
+ {
+ "title": "v0 Website",
+ "url": "https://v0.dev",
+ "type": "article"
+ },
+ {
+ "title": "Aider - AI Pair Programming in Terminal",
+ "url": "https://github.com/Aider-AI/aider",
+ "type": "article"
+ },
+ {
+ "title": "Replit AI",
+ "url": "https://replit.com/ai",
+ "type": "article"
+ },
+ {
+ "title": "Pieces Website",
+ "url": "https://pieces.app",
+ "type": "article"
+ }
+ ]
},
"XcKeQfpTA5ITgdX51I4y-": {
"title": "AI Code Editors",
@@ -584,6 +1818,11 @@
"url": "https://www.cursor.com/",
"type": "website"
},
+ {
+ "title": "PearAI - The Open Source, Extendable AI Code Editor",
+ "url": "https://trypear.ai/",
+ "type": "website"
+ },
{
"title": "Bolt - Prompt, run, edit, and deploy full-stack web apps",
"url": "https://bolt.new",
@@ -603,7 +1842,28 @@
},
"TifVhqFm1zXNssA8QR3SM": {
"title": "Code Completion Tools",
- "description": "",
- "links": []
+ "description": "Code completion tools are AI-powered development assistants designed to enhance productivity by automatically suggesting code snippets, functions, and entire blocks of code as developers type. These tools, such as GitHub Copilot and Tabnine, leverage machine learning models trained on vast code repositories to predict and generate contextually relevant code. They help reduce repetitive coding tasks, minimize errors, and accelerate the development process by offering real-time, intelligent suggestions.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "GitHub Copilot",
+ "url": "https://github.com/features/copilot",
+ "type": "article"
+ },
+ {
+ "title": "Codeium",
+ "url": "https://codeium.com/",
+ "type": "article"
+ },
+ {
+ "title": "Supermaven",
+ "url": "https://supermaven.com/",
+ "type": "article"
+ },
+ {
+ "title": "Tabnine",
+ "url": "https://www.tabnine.com/",
+ "type": "article"
+ }
+ ]
}
}
\ No newline at end of file
diff --git a/public/roadmap-content/api-design.json b/public/roadmap-content/api-design.json
index 1e2f61572..d60196624 100644
--- a/public/roadmap-content/api-design.json
+++ b/public/roadmap-content/api-design.json
@@ -9,7 +9,17 @@
"description": "APIs, or Application Programming Interfaces, provide a manner in which software applications communicate with each other. They abstract the complexity of applications to allow developers to use only the essentials of the software they are working with. They define the methods and data formats an application should use in order to perform tasks, like sending, retrieving, or modifying data. Understanding APIs is integral to mastering modern software development, primarily because they allow applications to exchange data and functionality with ease, thus enabling integration and convergence of technological services. Therefore, a solid understanding of what APIs are forms the basic cornerstone of API design.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "What is an API?",
+ "title": "Getting Started with APIs - Postman",
+ "url": "https://www.postman.com/what-is-an-api/",
+ "type": "article"
+ },
+ {
+ "title": "API - IBM",
+ "url": "https://www.ibm.com/topics/api",
+ "type": "article"
+ },
+ {
+ "title": "What is an API? - AWS",
"url": "https://aws.amazon.com/what-is/api/",
"type": "article"
},
@@ -61,7 +71,7 @@
"type": "article"
},
{
- "title": "HTTP: 1.0 vs. 1.1 vs 2.0 vs. 3.0",
+ "title": "HTTP: 1.0 vs 1.1 vs 2.0 vs 3.0",
"url": "https://www.baeldung.com/cs/http-versions",
"type": "article"
}
@@ -72,7 +82,7 @@
"description": "HTTP (Hypertext Transfer Protocol) Methods play a significant role in API design. They define the type of request a client can make to a server, providing the framework for interaction between client and server. Understanding HTTP methods is paramount to creating a robust and effective API. Some of the common HTTP methods used in API design include GET, POST, PUT, DELETE, and PATCH. Each of these methods signifies a different type of request, allowing for various interactions with your API endpoints. This in turn creates a more dynamic, functional, and user-friendly API.\n\nLearn more from the following resources:",
"links": [
{
- "title": "HTTP request methods",
+ "title": "HTTP Methods - MDN",
"url": "https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods",
"type": "article"
},
@@ -82,7 +92,7 @@
"type": "article"
},
{
- "title": "What are HTTP Methods?",
+ "title": "What are HTTP Methods? - Postman",
"url": "https://blog.postman.com/what-are-http-methods/",
"type": "article"
}
@@ -124,12 +134,12 @@
"type": "article"
},
{
- "title": "What are HTTP headers?",
+ "title": "What are HTTP Headers?",
"url": "https://blog.postman.com/what-are-http-headers/",
"type": "article"
},
{
- "title": "What are HTTP Headers & Understand different types of HTTP headers",
+ "title": "What are HTTP Headers & Types of HTTP headers",
"url": "https://requestly.com/blog/what-are-http-headers-understand-different-types-of-http-headers/",
"type": "article"
}
@@ -150,7 +160,7 @@
"type": "article"
},
{
- "title": "Path parameters",
+ "title": "Path Parameters",
"url": "https://help.iot-x.com/api/how-to-use-the-api/parameters/path-parameters",
"type": "article"
}
@@ -166,7 +176,7 @@
"type": "article"
},
{
- "title": "Cookes - Mozilla",
+ "title": "Cookies - Mozilla",
"url": "https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/cookies",
"type": "article"
}
@@ -182,7 +192,7 @@
"type": "article"
},
{
- "title": "Content Negotiation in practice",
+ "title": "Content Negotiation in Practice",
"url": "https://softwaremill.com/content-negotiation-in-practice/",
"type": "article"
}
@@ -258,8 +268,19 @@
},
"o8i093VQv-T5Qf1yGqU0R": {
"title": "Different API Styles",
- "description": "Application Programming Interface (API) design isn't a one-size-fits-all endeavor. APIs can be structured in various styles, each with its own unique characteristics, advantages, and use cases. Early identification of the appropriate API style is crucial in ensuring a functional, efficient and seamless end-user experience. Commonly used API styles include REST, SOAP, GraphQL, and gRPC. Understanding these diverse API styles would help in making better design choices, fostering efficient overall system architecture, and promoting an intuitive and easy-to-use application.",
- "links": []
+ "description": "Application Programming Interface (API) design isn't a one-size-fits-all endeavor. APIs can be structured in various styles, each with its own unique characteristics, advantages, and use cases. Early identification of the appropriate API style is crucial in ensuring a functional, efficient and seamless end-user experience. Commonly used API styles include REST, SOAP, GraphQL, and gRPC. Understanding these diverse API styles would help in making better design choices, fostering efficient overall system architecture, and promoting an intuitive and easy-to-use application.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "API Styles",
+ "url": "https://www.redhat.com/architect/api-styles",
+ "type": "article"
+ },
+ {
+ "title": "Top API Styles",
+ "url": "https://www.youtube.com/watch?v=4vLxWqE94l4",
+ "type": "video"
+ }
+ ]
},
"BvwdASMvuNQ9DNgzdSZ4o": {
"title": "RESTful APIs",
@@ -287,45 +308,30 @@
"description": "Simple JSON (JavaScript Object Notation) APIs are a popular form of API or \"Application Programming Interface\" which utilise JSON to exchange data between servers and web applications. This method has gained prominence mainly for its simplicity, light weight, and easy readability. In the context of API design, a well-structured JSON API allows developers to efficiently interact with the backend and retrieve only the data they need in a consistent and comprehensible manner. From reducing redundant data to enabling quick parsing, Simple JSON APIs provide numerous benefits to improve the overall performance of applications. Designing a good JSON API requires careful planning, sound knowledge of HTTP methods, endpoints, error handling mechanisms, and most importantly, a clear understanding of the application's data requirements.\n\nLearn more from the following resources:",
"links": [
{
- "title": "A specification for building JSON APIs",
+ "title": "Specification for Building JSON APIs",
"url": "https://github.com/json-api/json-api",
"type": "opensource"
},
{
- "title": "JSON API: Explained in 4 minutes (+ EXAMPLES)",
+ "title": "JSON API: Explained in 4 Minutes",
"url": "https://www.youtube.com/watch?v=N-4prIh7t38",
"type": "video"
}
]
},
"Wwd-0PjrtViMFWxRGaQey": {
- "title": "gRPC APIs",
- "description": "gRPC is a platform agnostic serialization protocol that is used to communicate between services. Designed by Google in 2015, it is a modern alternative to REST APIs. It is a binary protocol that uses HTTP/2 as a transport layer. It is a high performance, open source, general-purpose RPC framework that puts mobile and HTTP/2 first.\n\nIt's main use case is for communication between two different languages within the same application. You can use Python to communicate with Go, or Java to communicate with C#.\n\ngRPC uses the protocol buffer language to define the structure of the data that is\n\nVisit the following resources to learn more:",
+ "title": "SOAP APIs",
+ "description": "SOAP (Simple Object Access Protocol) APIs are a standard communication protocol system that permits programs that run on different operating systems (like Linux and Windows) to communicate using Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML). In the context of API Design, SOAP APIs offer a robust and well-defined process for interaction between various software applications, mostly over a network. They are highly extensible, versatile and support a wide range of communications protocols. Despite being more complex compared to other API types like REST, SOAP APIs ensure high reliability and security, making them the choice for certain business-focused, high-transaction applications.\n\nLearn more from the following resources:",
"links": [
{
- "title": "gRPC Website",
- "url": "https://grpc.io/",
+ "title": "What are SOAP APIs?",
+ "url": "https://www.indeed.com/career-advice/career-development/what-is-soap-api",
"type": "article"
},
{
- "title": "gRPC Introduction",
- "url": "https://grpc.io/docs/what-is-grpc/introduction/",
+ "title": "SOAP vs REST 101: Understand The Differences",
+ "url": "https://www.soapui.org/learn/api/soap-vs-rest-api/",
"type": "article"
- },
- {
- "title": "gRPC Core Concepts",
- "url": "https://grpc.io/docs/what-is-grpc/core-concepts/",
- "type": "article"
- },
- {
- "title": "Explore top posts about gRPC",
- "url": "https://app.daily.dev/tags/grpc?ref=roadmapsh",
- "type": "article"
- },
- {
- "title": "Stephane Maarek - gRPC Introduction",
- "url": "https://youtu.be/XRXTsQwyZSU",
- "type": "video"
}
]
},
@@ -338,13 +344,18 @@
"url": "https://github.com/graphql-kit/graphql-apis",
"type": "opensource"
},
+ {
+ "title": "Visit Dedicated GraphQL Roadmap",
+ "url": "https://roadmap.sh/graphql",
+ "type": "article"
+ },
{
"title": "GraphQL Website",
"url": "https://graphql.org/",
"type": "article"
},
{
- "title": "GraphQL explained in 100 seconds",
+ "title": "GraphQL Explained in 100 Seconds",
"url": "https://www.youtube.com/watch?v=eIQh02xuVw4",
"type": "video"
}
@@ -355,12 +366,12 @@
"description": "Building JSON/RESTful APIs involves designing and implementing APIs that adhere to the architectural constraints of Representational State Transfer (REST). These APIs use JSON (JavaScript Object Notation) as a format for information interchange, due to its lightweight, easy-to-understand, and universally accepted nature. A well-designed RESTful API, utilizing JSON, is key in developing applications that are scalable, maintainable, and easily integrated with other systems. This design approach enables the resources on a server to be accessed and manipulated using standard HTTP protocols, facilitating communication between different services and systems. Furthermore, it enables client-server interactions to be stateless, meaning each request from a client must contain all the information needed by the server to understand and process the request.\n\nLearn more from the following resources:",
"links": [
{
- "title": "A specification for building APIs in JSON",
+ "title": "Specification for Building APIs in JSON",
"url": "https://jsonapi.org/",
"type": "article"
},
{
- "title": "How to make a REST API",
+ "title": "How to Make a RESTful API",
"url": "https://www.integrate.io/blog/how-to-make-a-rest-api/",
"type": "article"
},
@@ -392,7 +403,7 @@
"description": "URI (Uniform Resource Identifier) is a string of characters used to identify a name or a resource on the Internet. Designing URIs carefully is a crucial part of creating a smooth API interface that is easy to understand, remember and use. Good URI design ensures that related resources are grouped together in a logical manner and can greatly impact the usability and maintainability of an API. It involves crafting standardised, intuitive HTTP paths that take advantage of the hierarchical nature of URLs to provide a better structure to the API. This hierarchy can then be used to expand the API over time without breaking existing clients' functionality.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Guidelines for URI design",
+ "title": "Guidelines for URI Design",
"url": "https://css-tricks.com/guidelines-for-uri-design/",
"type": "article"
},
@@ -408,12 +419,12 @@
"description": "API Versioning is a critical component of API Design and Management. As the APIs evolve over time to meet the new business requirements and functionality enhancements, it is crucial to manage the changes in a way that doesn't break the existing client applications. This calls for effective versioning strategies in API design. There are different versioning strategies like URI versioning, Request Header versioning, and Media Type versioning which are adopted based on the ease of implementation, client compatibility, and accessibility. Understanding each strategy and its pros and cons can lead to better API Design and maintainability.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is API versioning?",
+ "title": "What is API Versioning?",
"url": "https://www.postman.com/api-platform/api-versioning/",
"type": "article"
},
{
- "title": "4 API versioning best practices",
+ "title": "API Versioning Best Practices",
"url": "https://kodekloud.com/blog/api-versioning-best-practices/",
"type": "article"
},
@@ -513,7 +524,7 @@
"description": "Error Handling is a crucial aspect of API design that ensures the stability, usability, and reliability of the API in production. APIs are designed to help systems communicate with each other. However, there can be instances where these systems might encounter exceptions or errors. The process of predicting, catching, and managing these error occurrences is what we refer to as 'Error Handling'. In the context of API Design, it involves defining and implementing specific strategies to detect, manage and inform consumers of any exception or error that occurs while executing requests. Configuring this appropriately provides a more robust and seamless communication experience, enabling developers to debug and rectify issues more efficiently.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Best practices for API error handling",
+ "title": "Best Practices for API Error Handling",
"url": "https://blog.postman.com/best-practices-for-api-error-handling/",
"type": "article"
},
@@ -563,13 +574,24 @@
},
"cQnQ9v3mH27MGNwetz3JW": {
"title": "Authentication Methods",
- "description": "Application Programming Interfaces (APIs) are critical components in software development that allow different software systems to communicate and share functionality. To ensure secure communication, it's essential to authenticate the parties involved in the API transactions. The authentication process confirms the identity of the API user. There are numerous authentication methods available when designing an API, each with its own pros and cons. This includes Basic Authentication, API Key Authentication, OAuth, and JWT among others. Understanding these different methods and their best use cases is fundamental to designing secure and effective APIs.",
- "links": []
+ "description": "Application Programming Interfaces (APIs) are critical components in software development that allow different software systems to communicate and share functionality. To ensure secure communication, it's essential to authenticate the parties involved in the API transactions. The authentication process confirms the identity of the API user. There are numerous authentication methods available when designing an API, each with its own pros and cons. This includes Basic Authentication, API Key Authentication, OAuth, and JWT among others. Understanding these different methods and their best use cases is fundamental to designing secure and effective APIs.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "API Authentication",
+ "url": "https://www.postman.com/api-platform/api-authentication/",
+ "type": "article"
+ }
+ ]
},
"0FzHERK5AeYL5wv1FBJbH": {
"title": "Basic Auth",
"description": "Basic Auth, short for Basic Authentication, is a simple method often used in API design for handling user authentication. In this method, client credentials, consisting of a username and password pair, are passed to the API server in a field in the HTTP header. The server then verifies these credentials before granting access to protected resources. Although Basic Auth is straightforward to implement, it is less secure compared to more advanced methods since it involves transmitting credentials in an encoded, but not encrypted, format. It is often used in cases where simplicity is paramount, or High security levels are not required.\n\nLearn more from the following resources:",
"links": [
+ {
+ "title": "Basic Authentication",
+ "url": "https://roadmap.sh/guides/basic-authentication",
+ "type": "article"
+ },
{
"title": "Basic Auth Generation Header",
"url": "https://www.debugbear.com/basic-auth-header-generator",
@@ -591,6 +613,11 @@
"title": "Token Based Auth",
"description": "Token-based authentication is a crucial aspect of API design. It involves providing the user with a token that validates their identity after they have successfully logged in. Once the token is obtained, users can use it to access resources and services provided by the API. This token is usually passed in the headers of subsequent HTTP requests done by the client. One key advantage of token-based auth is that tokens can be created and checked by the server without storing them persistently, which can help to scale applications more easily. This authentication method enhances the security and scalability of web applications and it is mainly used in modern API strategies, including RESTful APIs.\n\nLearn more from the following resources:",
"links": [
+ {
+ "title": "Token Based Authentication",
+ "url": "https://roadmap.sh/guides/token-authentication",
+ "type": "article"
+ },
{
"title": "What Is Token-Based Authentication?",
"url": "https://www.okta.com/uk/identity-101/what-is-token-based-authentication/",
@@ -600,11 +627,6 @@
"title": "Session vs Token Authentication in 100 Seconds",
"url": "https://www.youtube.com/watch?v=UBUNrFtufWo",
"type": "video"
- },
- {
- "title": "Token based auth",
- "url": "https://www.youtube.com/watch?v=woNZJMSNbuo",
- "type": "video"
}
]
},
@@ -612,6 +634,11 @@
"title": "JWT ",
"description": "JSON Web Tokens, or JWT, are a popular and secure method of transferring information between two parties in the domain of API design. As a compact, URL-safe means of representing claims to be transferred between two parties, they play a vital role in security and authorization in modern APIs. By encoding these claims, the information can be verified and trusted with a digital signature - ensuring that the API end-points can handle requests in a secure and reliable way. JWT is a relatively lightweight and scalable method that brings improved authentication and information exchange processes in API design.\n\nLearn more from the following resources:",
"links": [
+ {
+ "title": "JWT Authentication",
+ "url": "https://roadmap.sh/guides/jwt-authentication",
+ "type": "article"
+ },
{
"title": "Introduction to JSON Web Tokens",
"url": "https://jwt.io/introduction",
@@ -633,6 +660,11 @@
"title": "OAuth 2.0",
"description": "OAuth 2.0 is an authorization framework that allows applications to obtain limited access to user accounts on an HTTP service, such as Facebook, GitHub, DigitalOcean, and others. It works by delegating user authentication to the service that hosts the user account and authorizing third-party applications to access the user account. OAuth 2.0 defines four roles: resource owner, client, resource server and authorization server. With regards to API design, OAuth 2.0 can be used to protect API endpoints by ensuring that the client applications having valid access tokens can only interact with the API. It provides detailed workflow processes and a set of protocols for the client application to get authorization to access resources.\n\nLearn more from the following resources:",
"links": [
+ {
+ "title": "OAuth",
+ "url": "https://roadmap.sh/guides/oauth",
+ "type": "article"
+ },
{
"title": "OAuth Website",
"url": "https://oauth.net/2/",
@@ -655,7 +687,7 @@
"description": "Application Programming Interfaces (APIs) are critical for building software applications. Among several key considerations during API design, one is deciding how to implement authentication and security. Session Based Authentication is one popular way to apply security in API design.\n\nThis method revolves around the server creating a session for the user after they successfully log in, associating it with a session identifier. This Session ID is then stored client-side within a cookie. On subsequent requests, the server validates the Session ID before processing the API call. The server will destroy the session after the user logs out, thereby invalidating the Session ID.\n\nUnderstanding Session Based Authentication is crucial for secure API design, especially in scenarios where security is a top priority or in legacy systems where this method is prevalent.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Session Based Authentication - Roadmap.sh",
+ "title": "Session Based Authentication",
"url": "https://roadmap.sh/guides/session-based-authentication",
"type": "article"
},
@@ -673,8 +705,14 @@
},
"nHbn8_sMY7J8o6ckbD-ER": {
"title": "Authorization Methods",
- "description": "In API design, authorization methods play a crucial role in ensuring the security and integrity of data transactions. They are the mechanisms through which an API identifies and validates a user, system, or application before granting them access to specific resources. These methods include Basic Authentication, OAuth, Token-based authentication, JSON Web Tokens (JWT), and API Key based, among others. So, understanding these methods enhances the ability to design APIs that effectively protect resources while allowing necessary access. Each method has its own pros and cons, usage scenarios and security features that make them more suitable for certain situations rather than others.",
- "links": []
+ "description": "In API design, authorization methods play a crucial role in ensuring the security and integrity of data transactions. They are the mechanisms through which an API identifies and validates a user, system, or application before granting them access to specific resources. These methods include Basic Authentication, OAuth, Token-based authentication, JSON Web Tokens (JWT), and API Key based, among others. So, understanding these methods enhances the ability to design APIs that effectively protect resources while allowing necessary access. Each method has its own pros and cons, usage scenarios and security features that make them more suitable for certain situations rather than others.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "API Authorization Methods",
+ "url": "https://konghq.com/blog/engineering/common-api-authentication-methods",
+ "type": "article"
+ }
+ ]
},
"wFsbmMi5Ey9UyDADdbdPW": {
"title": "Role Based Access Control (RBAC)",
@@ -686,12 +724,12 @@
"type": "article"
},
{
- "title": "What is role-based access control (RBAC)?",
+ "title": "What is Role-based Access Control (RBAC)?",
"url": "https://www.redhat.com/en/topics/security/what-is-role-based-access-control",
"type": "article"
},
{
- "title": "Role-based access control (RBAC) vs. Attribute-based access control (ABAC)",
+ "title": "Role-based Access Control (RBAC) vs. Attribute-based Access Control (ABAC)",
"url": "https://www.youtube.com/watch?v=rvZ35YW4t5k",
"type": "video"
}
@@ -718,12 +756,12 @@
"description": "API keys and management is an integral part of API design. An API key is a unique identifier used to authenticate a user, developer, or calling program to an API. This ensures security and control over API endpoints, as only those with a valid API key can make requests. API Management, on the other hand, refers to the practices and tools that enable an organization to govern and monitor its API usage. It involves all the aspects of managing APIs including design, deployment, documentation, security, versioning, and analytics. Both elements play crucial roles in securing and organizing API access for efficient and controlled data sharing and communication.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is API key management?",
+ "title": "What is API Key Management?",
"url": "https://www.akeyless.io/secrets-management-glossary/api-key-management/",
"type": "article"
},
{
- "title": "API Key Management | Definition and Best Practices",
+ "title": "API Key Management - Definition and Best Practices",
"url": "https://infisical.com/blog/api-key-management",
"type": "article"
}
@@ -760,13 +798,13 @@
"type": "article"
},
{
- "title": "What is Swagger?",
- "url": "https://blog.hubspot.com/website/what-is-swagger",
+ "title": "OpenAPI Inititive",
+ "url": "https://www.openapis.org/",
"type": "article"
},
{
- "title": "OpenAPI Inititive",
- "url": "https://www.openapis.org/",
+ "title": "What is Swagger?",
+ "url": "https://blog.hubspot.com/website/what-is-swagger",
"type": "article"
}
]
@@ -781,7 +819,12 @@
"type": "article"
},
{
- "title": "Postman Api Testing Tutorial for beginners",
+ "title": "Postman Docs",
+ "url": "https://www.postman.com/api-documentation-tool/",
+ "type": "article"
+ },
+ {
+ "title": "Postman Tutorial for Beginners",
"url": "https://www.youtube.com/watch?v=MFxk5BZulVU",
"type": "video"
}
@@ -792,12 +835,12 @@
"description": "[Readme.com](http://Readme.com) is an invaluable tool in the realm of API Design, renowned for providing a collaborative platform for creating beautiful, dynamic and intuitive documentation. It's a tool which aids developers in outlining clear, comprehensive documentation for their API interfaces. The API documentation created with [Readme.com](http://Readme.com) is not just about the presentation of information, but enhances the reader's understanding by making it interactive. This interactive approach encourages practical learning and offers insights into how the API will behave under different circumstances. With [Readme.com](http://Readme.com), developers can create a user-focused documentation environment that streamlines the learning process and makes their APIs easier to consume and implement.\n\nLearn more from the following resources:",
"links": [
{
- "title": "readmeio",
- "url": "https://github.com/readmeio",
+ "title": "ReadMe",
+ "url": "https://github.com/orgs/readmeio/repositories?type=source",
"type": "opensource"
},
{
- "title": "readme.com",
+ "title": "ReadMe Website",
"url": "https://readme.com",
"type": "article"
}
@@ -856,12 +899,12 @@
"description": "API design has rapidly emerged as a vital component of software development. When designing an API, it is crucial to follow best practices to ensure optimization, scalability, and efficiency. The best practices in API design revolve around principles such as simplicity, consistency, security, and proper documentation among others. These practices not only smoothens the development process but also makes the API more user-friendly, stable, and easily maintainable. Thus, following the best practices in API design is not an option but rather a must for developers and organizations looking to create APIs that last longer and perform better.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Best practices for REST API design",
+ "title": "Best Practices for REST API Design",
"url": "https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/",
"type": "article"
},
{
- "title": "Best practices in API design",
+ "title": "Best Practices in API Design",
"url": "https://swagger.io/resources/articles/best-practices-in-api-design/",
"type": "article"
}
@@ -893,7 +936,7 @@
"type": "article"
},
{
- "title": "How does API monitoring improve API performance?",
+ "title": "How does API Monitoring Improves API Performance?",
"url": "https://tyk.io/blog/api-product-metrics-what-you-need-to-know/",
"type": "article"
}
@@ -909,7 +952,7 @@
"type": "article"
},
{
- "title": "Using caching strategies to improve API performance",
+ "title": "Using Caching Strategies to Improve API Performance",
"url": "https://www.lonti.com/blog/using-caching-strategies-to-improve-api-performance",
"type": "article"
},
@@ -925,17 +968,22 @@
"description": "Load Balancing plays a crucial role in the domain of API Design. It primarily revolves around evenly and efficiently distributing network traffic across a group of backend servers, also known as a server farm or server pool. When it comes to API design, implementing load balancing algorithms is of immense importance to ensure that no single server bears too much demand. This allows for high availability and reliability by rerouting the traffic in case of server failure, effectively enhancing application performance and contributing to a positive user experience. Therefore, it's a vital tactic in ensuring the scalability and robustness of system architectures which heavily rely on API interactions.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is load balancing?",
+ "title": "What is Load Balancing?",
"url": "https://www.cloudflare.com/en-gb/learning/performance/what-is-load-balancing/",
"type": "article"
},
+ {
+ "title": "Load Balancers in API",
+ "url": "https://learn.microsoft.com/en-us/rest/api/load-balancer/",
+ "type": "article"
+ },
{
"title": "API Gateway vs Load Balancer: Which is Right for Your Application?",
"url": "https://konghq.com/blog/engineering/api-gateway-vs-load-balancer",
"type": "article"
},
{
- "title": "What is a load balancer?",
+ "title": "What is a Load Balancer?",
"url": "https://www.youtube.com/watch?v=sCR3SAVdyCc",
"type": "video"
}
@@ -967,7 +1015,7 @@
"description": "Profiling and monitoring are critical aspects of API design and implementation. Profiling, in this context, refers to the process of analyzing the behavior of your API in order to understand various performance metrics including response times, request rates, error rates, and the overall health and functionality of your API. On the other hand, monitoring is the ongoing process of checking the status of your API to ensure it's functioning as expected while also providing an early warning system for potential issues and improvements. Together, profiling and monitoring your API can lead to a more reliable, efficient, and high-performing service.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Monitor health and performance of your APIs",
+ "title": "Monitor Health and Performance of your APIs",
"url": "https://learning.postman.com/docs/monitoring-your-api/intro-monitors/",
"type": "article"
},
@@ -1004,7 +1052,7 @@
"type": "article"
},
{
- "title": "API Integration Patterns",
+ "title": "API Integration Patterns - Devoteam",
"url": "https://uk.devoteam.com/expert-view/api-integration-patterns/",
"type": "article"
}
@@ -1036,12 +1084,12 @@
"description": "Event-driven architecture (EDA) is a software design concept that revolves around the production, interpretation, and consumption of events. With regards to API design, EDA grants systems the flexibility to decentralize analytics, microservices, and operations, thus promoting real-time information sharing and reaction. Event-driven APIs prioritize asynchronous communication, allowing applications to stay responsive even when tackling heavy data loads. For an effective API, adhering to EDA provides data reliability, maturity with a scalable structure, and efficient real-time data processing capabilities.\n\nLearn more form the following resources:",
"links": [
{
- "title": "Event-driven architecture style",
+ "title": "Event Driven Architecture Style",
"url": "https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/event-driven",
"type": "article"
},
{
- "title": "Event-driven architecture",
+ "title": "Event-driven Architecture",
"url": "https://aws.amazon.com/event-driven-architecture/",
"type": "article"
},
@@ -1083,7 +1131,7 @@
"type": "article"
},
{
- "title": "Microservices explained in 5 minutes",
+ "title": "Microservices Explained in 5 Minutes",
"url": "https://www.youtube.com/watch?v=lL_j7ilk7rc",
"type": "video"
}
@@ -1094,12 +1142,12 @@
"description": "Messaging Queues play a fundamental role in API design, particularly in creating robust, decoupled, and efficient systems. These queues act like a buffer, storing messages or data sent from a sender (producer), allowing a receiver (consumer) to retrieve and process them at its own pace. In the context of API design, this concept enables developers to handle high-volume data processing requirements, providing an asynchronous communication protocol between multiple services. The benefits of messaging queues in API design include better system scalability, fault tolerance, and increased overall system resiliency.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is a message queue?",
+ "title": "What is a Message Queue?",
"url": "https://aws.amazon.com/message-queue/",
"type": "article"
},
{
- "title": "REST API message queues explained",
+ "title": "REST API Message Queues Explained",
"url": "https://www.youtube.com/watch?v=2idPgA6IN_Q",
"type": "video"
}
@@ -1126,12 +1174,12 @@
"description": "Batch Processing refers to the method of handling bulk data requests in API design. Here, multiple API requests are packed and processed as a single group or 'batch'. Instead of making numerous individual API calls, a user can make one batch request with numerous operations. This approach can increase performance and efficiency by reducing the overhead of establishing and closing multiple connections. The concept of 'batch processing' in API design is particularly useful in data-intensive applications or systems where the need for processing high volumes of data is prevalent.\n\nLearn more from the following resources:",
"links": [
{
- "title": "API design guidance: bulk vs batch import",
+ "title": "API Design Guidance: Bulk vs Batch Import",
"url": "https://tyk.io/blog/api-design-guidance-bulk-and-batch-import/",
"type": "article"
},
{
- "title": "Stream vs Batch processing explained with examples",
+ "title": "Stream vs Batch Processing Explained with Examples",
"url": "https://www.youtube.com/watch?v=1xgBQTF24mU",
"type": "video"
}
@@ -1139,7 +1187,7 @@
},
"H22jAI2W5QLL-b1rq-c56": {
"title": "Rabbit MQ",
- "description": "RabbitMQ is an open-source message-broker software/system that plays a crucial role in API design, specifically in facilitating effective and efficient inter-process communication. It implements the Advanced Message Queuing Protocol (AMQP) to enable secure and reliable data transmission in various formats such as text, binary, or serialized objects.\n\nIn API design, RabbitMQ comes in handy in decoupling application processes for scalability and robustness, whilst ensuring that data delivery occurs safely and seamlessly. It introduces queuing as a way of handling multiple users or service calls at once hence enhancing responsiveness and performance of APIs. Its queue system elegantly digests API request loads, allowing services to evenly process data while preventing overloading.\n\nLearn more from the following resources:",
+ "description": "RabbitMQ is an open-source message-broker software/system that plays a crucial role in API design, specifically in facilitating effective and efficient inter-process communication. It implements the Advanced Message Queuing Protocol (AMQP) to enable secure and reliable data transmission in various formats such as text, binary, or serialized objects. RabbitMQ comes in handy in decoupling application processes for scalability and robustness, whilst ensuring that data delivery occurs safely and seamlessly. It introduces queuing as a way of handling multiple users or service calls at once hence enhancing responsiveness and performance of APIs. Its queue system elegantly digests API request loads, allowing services to evenly process data while preventing overloading.\n\nLearn more from the following resources:",
"links": [
{
"title": "RabbitMQ Website",
@@ -1179,7 +1227,7 @@
"description": "API Testing refers to the process of checking the functionality, reliability, performance, and security of Application Programming Interfaces (APIs). It plays a crucial role in API design as it ensures that the APIs work correctly and as expected. This kind of testing does not require a user interface and mainly focuses on the business logic layer of the software architecture. API Testing is integral to guarantee that the data communication and responses between different software systems are error-free and streamlined.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is API testing?",
+ "title": "What is API Testing?",
"url": "https://www.postman.com/api-platform/api-testing/",
"type": "article"
},
@@ -1224,7 +1272,7 @@
},
"6lm3wy9WTAERTqXCn6pFt": {
"title": "Functional Testing",
- "description": "Functional testing in the context of API design involves validating the endpoints and key-value pairs of an API. It ensures the server response works as expected and assesses the functionality of the API -- whether it is performing all the intended functions correctly. Various approaches like testing request-response pairs, error codes, and data accuracy are used. Functional testing can provide invaluable insights into how well an API meets the specified requirements and whether it is ready for integration into applications.\n\nLearn more from the following resources:",
+ "description": "Functional testing in the context of API design involves validating the endpoints and key-value pairs of an API. It ensures the server response works as expected and assesses the functionality of the API whether it is performing all the intended functions correctly. Various approaches like testing request-response pairs, error codes, and data accuracy are used. Functional testing can provide invaluable insights into how well an API meets the specified requirements and whether it is ready for integration into applications.\n\nLearn more from the following resources:",
"links": [
{
"title": "API Functional Testing – Why Is It Important And How to Test",
@@ -1243,17 +1291,17 @@
"description": "Load testing is a crucial aspect of API design that ensures reliability, efficiency and performance under varying loads. It primarily focuses on identifying the maximum capacity of the API in terms of the volume of requests it can handle and its subsequent behavior when this threshold is reached or overloaded. By simulating varying degrees of user load, developers can identify and rectify bottlenecks or breakdown points in the system, hence enhancing overall API resilience.\n\nLearn more from the following resources:",
"links": [
{
- "title": "API load testing - a beginners guide",
+ "title": "API Load Testing - Beginners Guide",
"url": "https://grafana.com/blog/2024/01/30/api-load-testing/",
"type": "article"
},
{
- "title": "Test your API’s performance by simulating real-world traffic",
+ "title": "Test Your API’s Performance by Simulating Real-world Traffic",
"url": "https://blog.postman.com/postman-api-performance-testing/",
"type": "article"
},
{
- "title": "Load testing your API's",
+ "title": "Load Testing API's",
"url": "https://www.youtube.com/watch?v=a5hWE4hMOoY",
"type": "video"
}
@@ -1285,12 +1333,12 @@
"description": "Contract Testing is a critical aspect of maintaining a robust and reliable API infrastructure. In the realm of API design, Contract Testing refers to the method of ensuring that APIs work as anticipated and that changes to them do not break their intended functionality. This approach validates the interaction between two different systems, typically consumer and provider ( API), ensuring they comply with their agreed-upon contract. By defining clear and concise contracts for our APIs, developers can avoid common deployment issues and enhance system integration processes.\n\nLearn more from the following resources:",
"links": [
{
- "title": "A complete guide to Contract Testing",
+ "title": "Complete Guide to Contract Testing",
"url": "https://testsigma.com/blog/api-contract-testing/",
"type": "article"
},
{
- "title": "Get started with API Contract Testing",
+ "title": "Geting Started with API Contract Testing",
"url": "https://saucelabs.com/resources/blog/getting-started-with-api-contract-testing",
"type": "article"
},
@@ -1303,7 +1351,7 @@
},
"XD1vDtrRQFbLyKJaD1AlA": {
"title": "Error Handling / Retries",
- "description": "When creating effective API designs, addressing Error Handling and Retries forms an essential facet. This is primarily due to the fact that APIs aren't always error-free and instances of network hiccups or input inaccuracies from users can occur. Without robust error handling, such occurrences can easily lead to catastrophic application failure or unsatisfactory user experiences.\n\nIn this context, error handling can refer to validating inputs, managing exceptions, and returning appropriate error message or status codes to the user. Meanwhile, the concept of retries comes into play to ensure maximum request success amidst transient failures. Through correctly implemented retries, an API can repeatedly attempt to execute a request until it is successful, thus ensuring seamless operation. The criteria and mechanisms of retries, including the count, delay, and conditions for retries, are crucial aspects to solidify during the API design.\n\nLearn more from the following resources:",
+ "description": "When creating effective API designs, addressing Error Handling and Retries forms an essential facet. This is primarily due to the fact that APIs aren't always error-free and instances of network hiccups or input inaccuracies from users can occur. Without robust error handling, such occurrences can easily lead to catastrophic application failure or unsatisfactory user experiences. Error handling can refer to validating inputs, managing exceptions, and returning appropriate error message or status codes to the user. Meanwhile, the concept of retries comes into play to ensure maximum request success amidst transient failures. Through correctly implemented retries, an API can repeatedly attempt to execute a request until it is successful, thus ensuring seamless operation.\n\nLearn more from the following resources:",
"links": [
{
"title": "How To Improve Your Backend By Adding Retries to Your API Calls",
@@ -1311,7 +1359,7 @@
"type": "article"
},
{
- "title": "How to make resilient web applications with retries",
+ "title": "How to Make Resilient Web Applications with Retries",
"url": "https://www.youtube.com/watch?v=Gly94hp3Eec",
"type": "video"
}
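Here is a minimal TypeScript sketch of the retry mechanics described above, assuming a simple policy of retrying only network failures and 5xx responses with exponential backoff and jitter; the attempt count and base delay are placeholder values to tune per API.

```ts
// Minimal retry sketch with exponential backoff and jitter. The retry count,
// base delay, and the choice to retry only on 5xx/network errors are
// illustrative assumptions.
async function fetchWithRetry(url: string, maxAttempts = 3, baseDelayMs = 200): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url);
      if (res.status < 500) return res; // 2xx/3xx/4xx: do not retry client errors
      lastError = new Error(`server error: HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: worth retrying
    }
    if (attempt < maxAttempts) {
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100; // backoff + jitter
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: fetchWithRetry("https://api.example.com/orders").then((r) => r.json());
```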
@@ -1343,12 +1391,12 @@
"type": "article"
},
{
- "title": "What are websockets?",
+ "title": "What are Web Sockets?",
"url": "https://www.pubnub.com/guides/websockets/",
"type": "article"
},
{
- "title": "How web sockets work",
+ "title": "How Web Sockets Work",
"url": "https://www.youtube.com/watch?v=pnj3Jbho5Ck",
"type": "video"
}
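To complement the WebSocket resources above, here is a minimal client sketch using the standard WebSocket API (available in browsers, with recent Node.js versions exposing a compatible global). The echo URL and the subscribe message shape are placeholder assumptions; the point is that the connection is a persistent, full-duplex channel the server can push to at any time.

```ts
// Minimal WebSocket client sketch using the standard WebSocket API.
// The URL and message payloads below are placeholders.
const socket = new WebSocket("wss://echo.example.com");

socket.onopen = () => {
  // The connection stays open: either side can send at any time.
  socket.send(JSON.stringify({ type: "subscribe", channel: "orders" }));
};

socket.onmessage = (event: MessageEvent) => {
  console.log("server pushed:", event.data);
};

socket.onclose = (event: CloseEvent) => {
  console.log(`connection closed (code ${event.code})`);
};

socket.onerror = () => {
  console.error("websocket error");
};
```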
@@ -1372,20 +1420,20 @@
},
"yvdfoly5WHHTq2Puss355": {
"title": "Standards and Compliance",
- "description": "When designing APIs, it's crucial to consider the concept of standards and compliance. Standards represent the set of rules and best practices that guide developers to create well-structured and easily maintainable APIs. They can range from the proper structure of the endpoints, the standardization of error responses, to naming conventions, and the usage of HTTP verbs.\n\nCompliance on the other hand, emphasizes on meeting protocol requirements or standards such as REST or SOAP. Furthermore, operating within regulated industries can also necessitate certain compliance measures like GDPR, HIPAA and others. Compliance in API Design ensures interoperability and safety of data transmission between systems.\n\nIn essence, Standards and Compliance in API Design contributes towards building more secure, robust, and efficient APIs that are user-friendly and universally understandable.\n\nLearn more from the following resources:",
+ "description": "When designing APIs, it's crucial to consider the concept of standards and compliance. Standards represent the set of rules and best practices that guide developers to create well-structured and easily maintainable APIs. They can range from the proper structure of the endpoints, the standardization of error responses, to naming conventions, and the usage of HTTP verbs. Compliance on the other hand, emphasizes on meeting protocol requirements or standards such as REST or SOAP. Furthermore, operating within regulated industries can also necessitate certain compliance measures like GDPR, HIPAA and others. Compliance in API Design ensures interoperability and safety of data transmission between systems.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is API compliance?",
+ "title": "What is API Compliance?",
"url": "https://tyk.io/learning-center/api-compliance/",
"type": "article"
},
{
- "title": "What is API compliance and why is it important?",
+ "title": "What is API Compliance and Why is it important?",
"url": "https://www.traceable.ai/blog-post/achieve-api-compliance",
"type": "article"
},
{
- "title": "REST API standards",
+ "title": "REST API Standards",
"url": "https://www.integrate.io/blog/rest-api-standards/",
"type": "article"
}
@@ -1412,17 +1460,17 @@
"description": "API Lifecycle Management is a crucial aspect in API design that oversees the process of creating, managing, and retiring APIs. This involves various stages from initial planning, designing, testing, deployment, to eventual retirement of the API. Proper lifecycle management ensures that an API meets the requirements, is reliable, and that it evolves with the needs of end users and developers. Moreover, it helps in maintaining the security, performance, and accessibility of the API throughout its lifetime. This comprehensive approach enables organizations to make the most of their APIs, mitigate issues, and facilitate successful digital transformation.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What is the API lifecycle?",
+ "title": "What is the API Lifecycle?",
"url": "https://www.postman.com/api-platform/api-lifecycle/",
"type": "article"
},
{
- "title": "What is API lifescycle management?",
+ "title": "What is API Lifecycle Management?",
"url": "https://swagger.io/blog/api-strategy/what-is-api-lifecycle-management/",
"type": "article"
},
{
- "title": "Day in the lifecycle of an API",
+ "title": "Day in the Lifecycle of an API",
"url": "https://www.youtube.com/watch?v=VxY_cz0VQXE",
"type": "video"
}
@@ -1491,5 +1539,10 @@
"type": "article"
}
]
+ },
+ "grpc-apis@1DrqtOwxCuFtWQXQ6ZALp.md": {
+ "title": "gRPC APIs",
+ "description": "",
+ "links": []
}
}
\ No newline at end of file
diff --git a/public/roadmap-content/backend.json b/public/roadmap-content/backend.json
index c2e27752c..8c7df1fe5 100644
--- a/public/roadmap-content/backend.json
+++ b/public/roadmap-content/backend.json
@@ -295,7 +295,7 @@
},
{
"title": "Explore top posts about C#",
- "url": "https://app.daily.dev/tags/c#?ref=roadmapsh",
+ "url": "https://app.daily.dev/tags/csharp?ref=roadmapsh",
"type": "article"
},
{
@@ -1310,7 +1310,12 @@
"description": "Unit testing is a software testing method where individual components or units of a program are tested in isolation to ensure they function correctly. This approach focuses on verifying the smallest testable parts of an application, such as functions or methods, by executing them with predefined inputs and comparing the results to expected outcomes. Unit tests are typically automated and written by developers during the coding phase to catch bugs early, facilitate code refactoring, and ensure that each unit of code performs as intended. By isolating and testing each component, unit testing helps improve code reliability and maintainability.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Unit Testing Tutorial",
+ "title": "Unit testing",
+ "url": "https://en.wikipedia.org/wiki/Unit_testing",
+ "type": "article"
+ },
+ {
+ "title": "What is Unit Testing?",
"url": "https://www.guru99.com/unit-testing-guide.html",
"type": "article"
},
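As a small illustration of the unit-testing description above, the sketch below uses Node's built-in `node:test` runner and `node:assert`; the `applyDiscount` function is a made-up unit under test, and any test framework works the same way: predefined inputs, expected outputs, and isolation from other components.

```ts
// Minimal unit-test sketch using Node's built-in test runner (node:test).
// The `applyDiscount` function is a made-up unit under test.
import { test } from "node:test";
import assert from "node:assert/strict";

export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError("percent must be 0-100");
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

test("applies a 10% discount", () => {
  assert.equal(applyDiscount(200, 10), 180);
});

test("rejects invalid percentages", () => {
  assert.throws(() => applyDiscount(100, 150), RangeError);
});
```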
@@ -1695,6 +1700,16 @@
"url": "https://app.daily.dev/tags/kafka?ref=roadmapsh",
"type": "article"
},
+ {
+ "title": "Apache Kafka Streams",
+ "url": "https://docs.confluent.io/platform/current/streams/concepts.html",
+ "type": "article"
+ },
+ {
+ "title": "Kafka Streams Confluent",
+ "url": "https://kafka.apache.org/documentation/streams/",
+ "type": "article"
+ },
{
"title": "Apache Kafka Fundamentals",
"url": "https://www.youtube.com/watch?v=B5j3uNBH8X4",
diff --git a/public/roadmap-content/blockchain.json b/public/roadmap-content/blockchain.json
index a48931edf..36b1e60cc 100644
--- a/public/roadmap-content/blockchain.json
+++ b/public/roadmap-content/blockchain.json
@@ -81,12 +81,12 @@
"type": "article"
},
{
- "title": "Bitcoin blockchain transactions | Bitcoin Developer",
+ "title": "Bitcoin Blockchain Transactions",
"url": "https://developer.bitcoin.org/reference/transactions.html",
"type": "article"
},
{
- "title": "Ethereum blockchain transactions | ethereum.org",
+ "title": "Ethereum Blockchain Transactions",
"url": "https://ethereum.org/en/developers/docs/transactions/",
"type": "article"
},
@@ -96,7 +96,7 @@
"type": "article"
},
{
- "title": "How Bitcoin blockchain actually work (Video)",
+ "title": "How Bitcoin Blockchain Actually",
"url": "https://www.youtube.com/watch?v=bBC-nXj3Ng4",
"type": "video"
}
@@ -117,7 +117,7 @@
"type": "article"
},
{
- "title": "Ethereum blockchain transactions | ethereum.org",
+ "title": "Ethereum Blockchain Transactions",
"url": "https://ethereum.org/en/developers/docs/transactions/",
"type": "article"
},
@@ -138,7 +138,7 @@
"type": "article"
},
{
- "title": "What is decentralization?",
+ "title": "What is Decentralization?",
"url": "https://aws.amazon.com/blockchain/decentralization-in-blockchain/",
"type": "article"
},
@@ -148,7 +148,7 @@
"type": "article"
},
{
- "title": "How does a blockchain work?",
+ "title": "How does a Blockchain Work?",
"url": "https://youtu.be/SSo_EIwHSd4",
"type": "video"
},
@@ -164,7 +164,12 @@
"description": "In blockchain, decentralization refers to the transfer of control and decision-making from a centralized entity (individual, organization, or group thereof) to a distributed network. Decentralized networks strive to reduce the level of trust that participants must place in one another, and deter their ability to exert authority or control over one another in ways that degrade the functionality of the network.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "What is decentralization?",
+ "title": "Decentralization in Blockchain",
+ "url": "https://www.investopedia.com/decentralized-finance-defi-5113835",
+ "type": "article"
+ },
+ {
+ "title": "What is Decentralization?",
"url": "https://aws.amazon.com/blockchain/decentralization-in-blockchain/",
"type": "article"
},
@@ -187,13 +192,18 @@
},
"bA4V_9AbV3uQi3qrtLWk0": {
"title": "General Blockchain Knowledge",
- "description": "Visit the following resources to learn more:",
+ "description": "A blockchain is a decentralized, distributed ledger technology that records transactions across many computers in such a way that the registered transactions cannot be altered retroactively. This technology is the backbone of cryptocurrencies like Bitcoin and Ethereum, but its applications extend far beyond digital currencies.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "The Complete Course On Understanding Blockchain Technology",
"url": "https://www.udemy.com/course/understanding-blockchain-technology/",
"type": "course"
},
+ {
+ "title": "What is a Blockchain?",
+ "url": "https://www.wired.com/story/guide-blockchain/",
+ "type": "article"
+ },
{
"title": "Explore top posts about Blockchain",
"url": "https://app.daily.dev/tags/blockchain?ref=roadmapsh",
@@ -318,7 +328,7 @@
},
"e_I-4Q6_qIW09Hcn-pgKm": {
"title": "Cryptography",
- "description": "Cryptography, or cryptology, is the practice and study of techniques for secure communication in the presence of adversarial behavior.\n\nVisit the following resources to learn more:",
+ "description": "Cryptography, or cryptology, is the practice and study of techniques for secure communication in the presence of adversarial behavior. Cryptography is the technique of protecting information and communications by using codes, so that only those intended to receive the information can read and process it. It involves various algorithms and protocols to secure communication by converting plain text into unreadable formats, making it incomprehensible to unauthorized parties.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Cryptography",
@@ -336,7 +346,7 @@
"type": "article"
},
{
- "title": "Asymmetric Encryption - Simply explained",
+ "title": "Asymmetric Encryption - Simply Explained",
"url": "https://youtu.be/AQDCe585Lnc",
"type": "video"
},
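To ground the cryptography description above, here is a short TypeScript sketch using Node's built-in `node:crypto` module: a SHA-256 hash for integrity and AES-256-GCM symmetric encryption for confidentiality. The keys are generated on the fly purely for illustration; real systems need careful key management, and blockchains additionally rely heavily on asymmetric (public-key) cryptography not shown here.

```ts
// Minimal sketch of two cryptography building blocks using Node's built-in
// crypto module: a SHA-256 hash (integrity) and AES-256-GCM symmetric
// encryption (confidentiality). Keys here are throwaway, for illustration only.
import { createHash, createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hashing: a one-way fingerprint of the data.
const digest = createHash("sha256").update("hello blockchain").digest("hex");
console.log("sha256:", digest);

// Symmetric encryption: the same key encrypts and decrypts.
const key = randomBytes(32); // 256-bit key
const iv = randomBytes(12);  // GCM nonce
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("secret message", "utf8"), cipher.final()]);
const authTag = cipher.getAuthTag();

const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
console.log("decrypted:", plaintext); // "secret message"
```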
@@ -398,11 +408,6 @@
"title": "Solana",
"description": "Solana is a public blockchain platform with smart contract functionality. Its native cryptocurrency is SOL.\n\nVisit the following resources to learn more:",
"links": [
- {
- "title": "What is Solana, and how does it work?",
- "url": "https://cointelegraph.com/news/what-is-solana-and-how-does-it-work",
- "type": "article"
- },
{
"title": "Beginners Guide To Solana",
"url": "https://solana.com/news/getting-started-with-solana-development",
@@ -424,8 +429,8 @@
"type": "article"
},
{
- "title": "Start Building Solana!",
- "url": "https://beta.solpg.io/?utm_source=solana.com",
+ "title": "What is Solana, and How does it work?",
+ "url": "https://cointelegraph.com/news/what-is-solana-and-how-does-it-work",
"type": "article"
},
{
@@ -440,12 +445,7 @@
"description": "TON is a fully decentralized layer-1 blockchain designed by Telegram to onboard billions of users. It boasts ultra-fast transactions, tiny fees, easy-to-use apps, and is environmentally friendly.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "TON Telegram integration highlights synergy of blockchain community",
- "url": "https://cointelegraph.com/news/ton-telegram-integration-highlights-synergy-of-blockchain-community",
- "type": "article"
- },
- {
- "title": "Start building on The Open Network",
+ "title": "Start Building on The Open Network",
"url": "https://ton.org/dev",
"type": "article"
},
@@ -455,7 +455,7 @@
"type": "article"
},
{
- "title": "Blockchain analysis",
+ "title": "Blockchain Analysis",
"url": "https://ton.org/analysis",
"type": "article"
}
@@ -463,11 +463,16 @@
},
"tSJyp46rkJcOtDqVpJX1s": {
"title": "EVM-Based",
- "description": "The Ethereum Virtual Machine (EVM) is a dedicated software virtual stack that executes smart contract bytecode and is integrated into each Ethereum node. Simply said, EVM is a software framework that allows developers to construct Ethereum-based decentralized applications (DApps). All Ethereum accounts and smart contracts are stored on this virtual computer.\n\nMany blockchains have forked the Ethereum blockchain and added functionality on top, these blockchains are referred to as EVM-based blockchains.\n\nVisit the following resources to learn more:",
+ "description": "The Ethereum Virtual Machine (EVM) is a dedicated software virtual stack that executes smart contract bytecode and is integrated into each Ethereum node. Simply said, EVM is a software framework that allows developers to construct Ethereum-based decentralized applications (DApps). All Ethereum accounts and smart contracts are stored on this virtual computer. Many blockchains have forked the Ethereum blockchain and added functionality on top, these blockchains are referred to as EVM-based blockchains.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "EVM - Ethereum Virtual Machine",
+ "url": "https://ethereum.org/en/developers/docs/evm/",
+ "type": "article"
+ },
{
"title": "What is Ethereum Virtual Machine?",
- "url": "https://moralis.io/evm-explained-what-is-ethereum-virtual-machine/",
+ "url": "https://astrodev.hashnode.dev/blockchain-ethereum-evm",
"type": "article"
},
{
@@ -500,20 +505,20 @@
},
"JLXIbP-y8C2YktIk3R12m": {
"title": "Ethereum",
- "description": "Ethereum is a programmable blockchain platform with the capacity to support smart contracts, dapps (decentralized apps), and other DeFi projects. The Ethereum native token is the Ether (ETH), and it’s used to fuel operations on the blockchain.\n\nThe Ethereum platform launched in 2015, and it’s now the second largest form of crypto next to Bitcoin (BTC).\n\nVisit the following resources to learn more:",
+ "description": "Ethereum is a programmable blockchain platform with the capacity to support smart contracts, dapps (decentralized apps), and other DeFi projects. The Ethereum native token is the Ether (ETH), and it’s used to fuel operations on the blockchain. The Ethereum platform launched in 2015, and it’s now the second largest form of crypto next to Bitcoin (BTC).\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Ethereum whitepaper",
- "url": "https://ethereum.org/en/whitepaper/",
+ "title": "Introduction to Ethereum",
+ "url": "https://ethereum.org/en/developers/docs/intro-to-ethereum/",
"type": "article"
},
{
- "title": "Intro to Ethereum",
- "url": "https://ethereum.org/en/developers/docs/intro-to-ethereum/",
+ "title": "Ethereum Whitepaper",
+ "url": "https://ethereum.org/en/whitepaper/",
"type": "article"
},
{
- "title": "A gentle introduction to Ethereum",
+ "title": "A Gentle Introduction to Ethereum",
"url": "https://bitsonblocks.net/2016/10/02/gentle-introduction-ethereum/",
"type": "article"
},
@@ -526,8 +531,13 @@
},
"JNilHFQnnVDOz-Gz6eNo5": {
"title": "Polygon",
- "description": "Polygon, formerly known as the Matic Network, is a protocol that allows anyone to create and exchange value, powered by zero-knowledge technology. Polygon provides multiple solutions including",
+ "description": "Polygon, formerly known as the Matic Network, is a protocol that allows anyone to create and exchange value, powered by zero-knowledge technology. Polygon provides multiple solutions including Polygon zkEVM, Polygon PoS, Polygon CDK, and Polygon ID.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Introduction to Polygon",
+ "url": "https://wiki.polygon.technology/",
+ "type": "article"
+ },
{
"title": "Polygon zkEVM",
"url": "https://polygon.technology/polygon-zkevm",
@@ -548,11 +558,6 @@
"url": "https://polygon.technology/polygon-id",
"type": "article"
},
- {
- "title": "Introduction to Polygon",
- "url": "https://wiki.polygon.technology/",
- "type": "article"
- },
{
"title": "Polygon POL whitepaper",
"url": "https://polygon.technology/papers/pol-whitepaper",
@@ -565,13 +570,18 @@
"description": "Binance Smart Chain (also known as BNB Chain) is a blockchain project initiated by Binance as a central piece of their cryptocurrency exchange, which is the largest exchange in the world in terms of daily trading volume of cryptocurrencies.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Binance whitepaper",
- "url": "https://www.exodus.com/assets/docs/binance-coin-whitepaper.pdf",
+ "title": "BNB Chain",
+ "url": "https://www.binance.com/en/blog/all/bnb-chain-blockchain-for-exchanging-the-world-304219301536473088",
"type": "article"
},
{
- "title": "BNB Chain overview",
- "url": "https://www.binance.com/en/blog/all/bnb-chain-blockchain-for-exchanging-the-world-304219301536473088",
+ "title": "Binance Website",
+ "url": "https://www.binance.com/en",
+ "type": "article"
+ },
+ {
+ "title": "Binance Whitepaper",
+ "url": "https://www.exodus.com/assets/docs/binance-coin-whitepaper.pdf",
"type": "article"
},
{
@@ -586,13 +596,18 @@
"description": "Gnosis is a blockchain based on Ethereum, which changed the consensus model to PoS to solve major issues on the Ethereum mainnet. While the platform solves problems surrounding transaction fees and speed, it also means that the Gnosis chain is less decentralized, as it is somewhat reliant on the Ethereum chain.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Gnosis whitepaper",
- "url": "https://blockchainlab.com/pdf/gnosis_whitepaper.pdf",
+ "title": "Gnosis Chain",
+ "url": "https://www.gnosischain.com/",
+ "type": "article"
+ },
+ {
+ "title": "Gnosis Docs",
+ "url": "https://www.docs.gnosischain.com/",
"type": "article"
},
{
- "title": "Gnosis overview",
- "url": "https://developers.gnosischain.com/#gnosis-chain",
+ "title": "Gnosis Whitepaper",
+ "url": "https://blockchainlab.com/pdf/gnosis_whitepaper.pdf",
"type": "article"
}
]
@@ -602,13 +617,13 @@
"description": "Huobi's ECO Chain (also known as HECO) is a public blockchain that provides developers with a low-cost onchain environment for running decentralized apps (dApps) of smart contracts and storing digital assets.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Huobi Eco Chain whitepaper",
- "url": "https://www.hecochain.com/developer.133bd45.pdf",
+ "title": "Introduction to HECO Chain",
+ "url": "https://docs.hecochain.com/#/",
"type": "article"
},
{
- "title": "Introduction to HECO Chain",
- "url": "https://docs.hecochain.com/#/",
+ "title": "Huobi Eco Chain whitepaper",
+ "url": "https://www.hecochain.com/developer.133bd45.pdf",
"type": "article"
}
]
@@ -618,13 +633,18 @@
"description": "Avalanche describes itself as an “open, programmable smart contracts platform for decentralized applications.” What does that mean? Like many other decentralized protocols, Avalanche has its own token called AVAX, which is used to pay transaction fees and can be staked to secure the network.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Avalanche whitepaper",
- "url": "https://assets.website-files.com/5d80307810123f5ffbb34d6e/6008d7bbf8b10d1eb01e7e16_Avalanche%20Platform%20Whitepaper.pdf",
+ "title": "Avalanche",
+ "url": "https://www.avax.network/",
"type": "article"
},
{
- "title": "Avalanche official website",
- "url": "https://www.avax.network/",
+ "title": "Getting Started with Avalanche",
+ "url": "https://www.avax.network/developers",
+ "type": "article"
+ },
+ {
+ "title": "Avalanche Whitepaper",
+ "url": "https://assets.website-files.com/5d80307810123f5ffbb34d6e/6008d7bbf8b10d1eb01e7e16_Avalanche%20Platform%20Whitepaper.pdf",
"type": "article"
}
]
@@ -634,23 +654,28 @@
"description": "Fantom is a decentralized, open-source smart contract platform that supports decentralized applications (dApps) and digital assets. It's one of many blockchain networks built as a faster, more efficient alternative to Ethereum, it uses the proof-of-stake consensus mechanism.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Fantom whitepaper",
- "url": "https://arxiv.org/pdf/1810.10360.pdf",
+ "title": "Fantom Overview",
+ "url": "https://docs.fantom.foundation/",
"type": "article"
},
{
- "title": "Fantom overview",
- "url": "https://docs.fantom.foundation/",
+ "title": "Fantom Whitepaper",
+ "url": "https://arxiv.org/pdf/1810.10360.pdf",
"type": "article"
}
]
},
"VVbvueVMJKLUoJYhbJB1z": {
"title": "Moonbeam / Moonriver",
- "description": "Moonbeam is a Polkadot network parachain that promises cross-chain interoperability between the Ethereum and Polkadot . More specifically, Moonbeam is a smart contract platform that enables developers to move dApps between the two networks without having to rewrite code or redeploy infrastructure.\n\nMoonriver is an incentivized testnet. It enables developers to create, test, and adjust their protocols prior to launching on Moonbeam. Moonbeam is the mainnet of the ecosystem.\n\nVisit the following resources to learn more:",
+ "description": "Moonbeam is a Polkadot network parachain that promises cross-chain interoperability between the Ethereum and Polkadot . More specifically, Moonbeam is a smart contract platform that enables developers to move dApps between the two networks without having to rewrite code or redeploy infrastructure. Moonriver is an incentivized testnet. It enables developers to create, test, and adjust their protocols prior to launching on Moonbeam. Moonbeam is the mainnet of the ecosystem.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "About Moonbream",
+ "title": "Moonbeam",
+ "url": "https://moonbeam.network/",
+ "type": "article"
+ },
+ {
+ "title": "About Moonbeam",
"url": "https://docs.moonbeam.network/learn/platform/networks/moonbeam/",
"type": "article"
},
@@ -666,33 +691,38 @@
"description": "Everscale is a layer-1 PoS blockchain network of the 5th generation. It is one of the most technologically advanced blockchain networks, and that is not a marketing exaggeration. Everscale incorporates all the blockchain innovations and concepts of recent years. Its versatility helps it develop as a decentralized hub for many blockchains and resource-demanding applications such as GameFi, DeFi, micro-transactions, real-time bidding, etc.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Everscale site",
+ "title": "Everscale",
"url": "https://everscale.network",
"type": "article"
},
{
- "title": "Everscale Whitepaper",
- "url": "https://everscale.network/docs/everscale-whitepaper.pdf",
+ "title": "Everscale Documentation",
+ "url": "https://docs.everscale.network/",
"type": "article"
},
{
- "title": "Documentation",
- "url": "https://docs.everscale.network/",
+ "title": "Everscale Guide",
+ "url": "https://everscale.guide/",
"type": "article"
},
{
- "title": "Guide",
- "url": "https://everscale.guide/",
+ "title": "Everscale - Getting Started",
+ "url": "https://everscale.network/getting-started",
+ "type": "article"
+ },
+ {
+ "title": "Everscale Whitepaper",
+ "url": "https://everscale.network/docs/everscale-whitepaper.pdf",
"type": "article"
}
]
},
"5MGtl00EEZdSnJdrNYPJ7": {
"title": "Gosh",
- "description": "Gosh is a development platform that is purpose-built for securing the software supply chain and extracting the value locked in projects. It is the first blockchain-based platform for software development, which allows developers and businesses to create products in a familiar, straightforward, and safe way.\n\nOn Gosh, every operation, commit, and transaction is trustless, traceable, and transparent. This means that developers can build composable, censorship-resistant repositories, and monetize their open source projects by turning them into a DAO.\n\nGosh is built on cryptography, decentralization, and consensus, which means that repositories have no owner and are managed in a decentralized way. Developers can use Gosh like they use Git and turn any Gosh repository into a DAO and configure it to suit their needs. They can also fund their DAO and use DeFi applications to incentivize code security.\n\nWith Gosh, builds are no longer at risk. From source code on Gosh to Docker container, developers can be sure that their build is safe. Mission-critical applications can also write their scripts as formally verified smart contracts to get rid of holes in the CI/CD process.\n\nVisit the following resources to learn more:",
+ "description": "Gosh is a development platform that is purpose-built for securing the software supply chain and extracting the value locked in projects. It is the first blockchain-based platform for software development, which allows developers and businesses to create products in a familiar, straightforward, and safe way. On Gosh, every operation, commit, and transaction is trustless, traceable, and transparent. This means that developers can build composable, censorship-resistant repositories, and monetize their open source projects by turning them into a DAO.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Gosh site",
+ "title": "Gosh",
"url": "https://gosh.sh/",
"type": "article"
},
@@ -723,12 +753,7 @@
"description": "TON is a fully decentralized layer-1 blockchain designed by Telegram to onboard billions of users. It boasts ultra-fast transactions, tiny fees, easy-to-use apps, and is environmentally friendly.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "TON Telegram integration highlights synergy of blockchain community",
- "url": "https://cointelegraph.com/news/ton-telegram-integration-highlights-synergy-of-blockchain-community",
- "type": "article"
- },
- {
- "title": "Start building on The Open Network",
+ "title": "Start Building on The Open Network",
"url": "https://ton.org/dev",
"type": "article"
},
@@ -738,7 +763,7 @@
"type": "article"
},
{
- "title": "Blockchain analysis",
+ "title": "Blockchain Analysis",
"url": "https://ton.org/analysis",
"type": "article"
}
@@ -746,25 +771,25 @@
},
"3HCpgWWPIkhK3gPRJuJQf": {
"title": "Venom",
- "description": "The Venom Foundation is the first crypto foundation licensed in UAE's ADGM and is set to launch its blockchain platform soon. The platform uses asynchronous blockchain technology of dynamical sharding, which enables boundless scalability, higher security guarantees with decentralization, and manages the gross data transaction flows without faltering by increasing fees and transaction times. The foundation aims to develop and support a self-sufficient blockchain ecosystem with non-custodial wallet options, transparent transaction histories, interchain transactions, staking on validator nodes, and a native decentralized exchange, among others.\n\nThe MENA region entrepreneurs are considered pioneers in global crypto trend adoption, and the foundation's customizable approach is well-suited to bridging different dimensions of market participants. The platform has a panel of industry leaders and seasoned investors, and the project is generating attention in MENA due to its transactional management possibilities, higher security, and inbound governmental database projects. The foundation will work with ecosystem participants to offer new products such as NFT marketplace, derivative exchange, fiat-backed stablecoin, and others to come with the potential to become a bridge towards wide adoption of CBDC in the UAE, other MENA countries and globally.\n\nVisit the following resources to learn more:",
+ "description": "The Venom Foundation is the first crypto foundation licensed in UAE's ADGM and is set to launch its blockchain platform soon. The platform uses asynchronous blockchain technology of dynamical sharding, which enables boundless scalability, higher security guarantees with decentralization, and manages the gross data transaction flows without faltering by increasing fees and transaction times. The foundation aims to develop and support a self-sufficient blockchain ecosystem with non-custodial wallet options, transparent transaction histories, interchain transactions, staking on validator nodes, and a native decentralized exchange, among others.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Venom site",
+ "title": "Venom",
"url": "https://venom.foundation",
"type": "article"
},
{
- "title": "Venom whitepaper",
- "url": "https://venom.foundation/Venom_Whitepaper.pdf",
+ "title": "Venom Documentation",
+ "url": "https://docs.venom.foundation/",
"type": "article"
},
{
- "title": "Venom Documentation",
- "url": "https://docs.venom.foundation/",
+ "title": "Venom Whitepaper",
+ "url": "https://venom.foundation/Venom_Whitepaper.pdf",
"type": "article"
},
{
- "title": "Explore Grants",
+ "title": "Venom Explore Grants",
"url": "https://venom.foundation/#explore_grants",
"type": "article"
}
@@ -777,7 +802,7 @@
},
"i_Dw3kUZ7qKPG-tk-sFPf": {
"title": "L2 Blockchains",
- "description": "Layer-2 refers to a network or technology that operates on top of an underlying blockchain protocol to improve its scalability and efficiency.\n\nThis category of scaling solutions entails shifting a portion of Ethereum's transactional burden to an adjacent system architecture, which then handles the brunt of the network’s processing and only subsequently reports back to Ethereum to finalize its results.\n\nVisit the following resources to learn more:",
+ "description": "Layer-2 refers to a network or technology that operates on top of an underlying blockchain protocol to improve its scalability and efficiency. This category of scaling solutions entails shifting a portion of Ethereum's transactional burden to an adjacent system architecture, which then handles the brunt of the network’s processing and only subsequently reports back to Ethereum to finalize its results.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Layer-1 and Layer-2 Blockchain Scaling Solutions",
@@ -806,21 +831,31 @@
"description": "Arbitrum aims to reduce transaction fees and congestion by moving as much computation and data storage off of Ethereum's main blockchain (layer 1) as it can. Storing data off of Ethereum's blockchain is known as Layer 2 scaling solutions.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Arbitrum whitepaper",
- "url": "https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-kalodner.pdf",
+ "title": "Arbitrum - The Future of Ethereum",
+ "url": "https://arbitrum.io/",
+ "type": "article"
+ },
+ {
+ "title": "Getting Started with Arbitrum",
+ "url": "https://docs.arbitrum.io/welcome/get-started",
"type": "article"
},
{
- "title": "Inside Arbitrum",
- "url": "https://developer.offchainlabs.com/docs/Inside_Arbitrum",
+ "title": "Arbitrum Whitepaper",
+ "url": "https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-kalodner.pdf",
"type": "article"
}
]
},
"Ib9STGxQa8yeoB-GFeGDE": {
"title": "Moonbeam / Moonriver",
- "description": "Moonbeam is a Polkadot network parachain that promises cross-chain interoperability between the Ethereum and Polkadot . More specifically, Moonbeam is a smart contract platform that enables developers to move dApps between the two networks without having to rewrite code or redeploy infrastructure.\n\nMoonriver is an incentivized testnet. It enables developers to create, test, and adjust their protocols prior to launching on Moonbeam. Moonbeam is the mainnet of the ecosystem.\n\nVisit the following resources to learn more:",
+ "description": "Moonbeam is a Polkadot network parachain that promises cross-chain interoperability between the Ethereum and Polkadot . More specifically, Moonbeam is a smart contract platform that enables developers to move dApps between the two networks without having to rewrite code or redeploy infrastructure. Moonriver is an incentivized testnet. It enables developers to create, test, and adjust their protocols prior to launching on Moonbeam. Moonbeam is the mainnet of the ecosystem.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Moonbeam",
+ "url": "https://moonbeam.network/",
+ "type": "article"
+ },
{
"title": "About Moonbeam",
"url": "https://docs.moonbeam.network/learn/platform/networks/moonbeam/",
@@ -838,12 +873,12 @@
"description": "TVM-based blockchain is a type of blockchain that uses the Telegram Open Network Virtual Machine (TVM) for executing smart contracts. This allows for fast and efficient execution of smart contracts and enables developers to create decentralized applications.\n\nBoC stands for Bag of Cells, and it refers to the data structure used in the TVM-based blockchain to store all the information related to a smart contract. This includes the code of the contract, its state, and other relevant data. The Bag of Cells is a highly efficient data structure that allows for fast and secure storage of smart contract data.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Original specification",
+ "title": "Original Specification",
"url": "https://ton.org/tvm.pdf",
"type": "article"
},
{
- "title": "Everscale VM specification",
+ "title": "Everscale VM Specification",
"url": "https://docs.everscale.network/tvm.pdf",
"type": "article"
}
@@ -880,7 +915,7 @@
"type": "article"
},
{
- "title": "A complete guide to understand hybrid smart contracts",
+ "title": "Guide to Hybrid Smart Contracts",
"url": "https://www.leewayhertz.com/hybrid-smart-contracts/",
"type": "article"
}
@@ -932,6 +967,11 @@
"title": "Smart Contracts",
"description": "A smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document legally relevant events and actions according to the terms of a contract or an agreement.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Smart Contracts",
+ "url": "https://www.ibm.com/topics/smart-contracts",
+ "type": "article"
+ },
{
"title": "What Are Smart Contracts and How Do They Work?",
"url": "https://chain.link/education/smart-contracts",
@@ -943,7 +983,7 @@
"type": "article"
},
{
- "title": "Smart contracts - Simply Explained",
+ "title": "Smart Contracts - Simply Explained",
"url": "https://youtu.be/ZE2HxTmxfrI",
"type": "video"
}
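As a rough sketch of how an off-chain application interacts with a deployed smart contract, the TypeScript example below assumes the ethers v6 library and uses placeholder RPC and contract addresses; it reads a token balance through a human-readable ABI fragment, a read-only call that costs no gas.

```ts
// Minimal sketch of reading state from a deployed smart contract, assuming
// the ethers v6 library. The RPC URL and addresses below are placeholders.
import { Contract, JsonRpcProvider, formatUnits } from "ethers";

const provider = new JsonRpcProvider("https://rpc.example.com"); // hypothetical RPC endpoint
const erc20Abi = ["function balanceOf(address owner) view returns (uint256)"];
const token = new Contract("0x0000000000000000000000000000000000000000", erc20Abi, provider);

async function main(): Promise<void> {
  // A read-only ("view") call executes on a node and costs no gas.
  const balance: bigint = await token.balanceOf("0x0000000000000000000000000000000000000001");
  console.log("balance:", formatUnits(balance, 18));
}

main().catch(console.error);
```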
@@ -951,7 +991,7 @@
},
"chaIKoE1uE8rpZLkDSfV-": {
"title": "Solidity",
- "description": "Solidity is an object-oriented programming language created specifically by Ethereum Network team for constructing smart contracts on various blockchain platforms, most notably, Ethereum.\n\n* It's used to create smart contracts that implements business logic and generate a chain of transaction records in the blockchain system.\n* It acts as a tool for creating machine-level code and compiling it on the Ethereum Vitural Machine (EVM).\n\nLike any other programming languages, Solidity also has variables, functions, classes, arithmetic operations, string manipulation, and many more.\n\nVisit the following resources to learn more:",
+ "description": "Solidity is an object-oriented programming language created specifically by Ethereum Network team for constructing smart contracts on various blockchain platforms, most notably, Ethereum. It's used to create smart contracts that implements business logic and generate a chain of transaction records in the blockchain system. It acts as a tool for creating machine-level code and compiling it on the Ethereum Virtual Machine (EVM).\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Solidity Programming Language",
@@ -1010,6 +1050,11 @@
"url": "https://www.rust-lang.org/",
"type": "article"
},
+ {
+ "title": "Learn Rust",
+ "url": "https://www.rust-lang.org/learn",
+ "type": "article"
+ },
{
"title": "How to write and deploy a smart contract in Rust",
"url": "https://docs.near.org/tutorials/nfts/introduction",
@@ -1047,13 +1092,18 @@
"title": "Integration Tests",
"description": "Integration tests validate interactions between multiple components. For smart contract testing this can mean interactions between different components of a single contract, or across multiple contracts.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Blockchain Testing Guide",
+ "url": "https://blog.logrocket.com/complete-guide-blockchain-testing/",
+ "type": "article"
+ },
{
"title": "Explore top posts about Testing",
"url": "https://app.daily.dev/tags/testing?ref=roadmapsh",
"type": "article"
},
{
- "title": "Unit tests vs integration tests | Smart contract testing course",
+ "title": "Unit Tests vs Integration Tests",
"url": "https://youtu.be/GxnX9k8i0zM",
"type": "video"
}
@@ -1090,7 +1140,7 @@
"type": "article"
},
{
- "title": "Deploying and interacting with smart contracts",
+ "title": "Deploying and Interacting with Smart Contracts",
"url": "https://docs.openzeppelin.com/learn/deploying-and-interacting",
"type": "article"
},
@@ -1145,10 +1195,10 @@
},
"bjUuL7WALETzgFxL6-ivU": {
"title": "ERC Tokens",
- "description": "An ‘Ethereum Request for Comments’ (ERC) is a document that programmers use to write smart contracts on Ethereum Blockchain. They describe rules in these documents that Ethereum-based tokens must comply with.\n\nWhile there are several Ethereum standards. These ERC Ethereum standards are the most well-known and popular: ERC-20, ERC-721, ERC-1155, and ERC-777.\n\nVisit the following resources to learn more:",
+ "description": "An ‘Ethereum Request for Comments’ (ERC) is a document that programmers use to write smart contracts on Ethereum Blockchain. They describe rules in these documents that Ethereum-based tokens must comply with. While there are several Ethereum standards. These ERC Ethereum standards are the most well-known and popular: ERC-20, ERC-721, ERC-1155, and ERC-777.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "What are Ethereum request for comments (ERC) Standards",
+ "title": "What are Ethereum Request for Comments (ERC) Standards",
"url": "https://dev.to/envoy_/ks-what-are-ethereum-request-for-comments-erc-standards-5f80",
"type": "article"
},
@@ -1174,13 +1224,13 @@
"description": "A cryptocurrency wallet is a device, physical medium, program, or service which stores the public and/or private keys for cryptocurrency transactions. In addition to this basic function of storing the keys, a cryptocurrency wallet more often also offers the functionality of encrypting and/or signing information.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "What is a crypto wallet?",
- "url": "https://www.coinbase.com/learn/crypto-basics/what-is-a-crypto-wallet",
+ "title": "What is a Crypto Wallet?: A Beginner’s Guide",
+ "url": "https://crypto.com/university/crypto-wallets",
"type": "article"
},
{
- "title": "What is a Crypto Wallet? A Beginner’s Guide",
- "url": "https://crypto.com/university/crypto-wallets",
+ "title": "Crypto Wallet? What is it?",
+ "url": "https://www.coinbase.com/learn/crypto-basics/what-is-a-crypto-wallet",
"type": "article"
},
{
@@ -1237,13 +1287,13 @@
"description": "Decentralized storage is where data is stored on a decentralized network across multiple locations by users or groups who are incentivized to join, store, and keep data accessible. The servers used are hosted by people, rather than a single company. Anyone is free to join, they are kept honest due to smart contracts, and they are incentivized to participate via tokens.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "What Is Decentralized Storage?",
- "url": "https://medium.com/@ppio/what-is-decentralized-storage-9c4b761942e2",
+ "title": "Decentralized Storage",
+ "url": "https://ethereum.org/en/developers/docs/storage/",
"type": "article"
},
{
- "title": "Decentralized Storage",
- "url": "https://ethereum.org/en/developers/docs/storage/",
+ "title": "What Is Decentralized Storage?",
+ "url": "https://medium.com/@ppio/what-is-decentralized-storage-9c4b761942e2",
"type": "article"
},
{
@@ -1278,6 +1328,11 @@
"title": "Hardhat",
"description": "Hardhat is an Ethereum development environment. It allows users to compile contracts and run them on a development network. Get Solidity stack traces, console.log and more.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Hardhat",
+ "url": "https://hardhat.org/",
+ "type": "article"
+ },
{
"title": "Hardhat Overview",
"url": "https://hardhat.org/hardhat-runner/docs/getting-started#overview",
@@ -1316,12 +1371,17 @@
"description": "A development environment, testing framework, and asset pipeline for blockchains using the Ethereum Virtual Machine (EVM), aiming to make life as a developer easier.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Truffle Overview",
+ "title": "Truffle Documentation",
"url": "https://trufflesuite.com/docs/truffle/",
"type": "article"
},
{
- "title": "Truffle Tutorial for Beginners | Compile, Test & Deploy Smart contracts to any EVM Blockchain",
+ "title": "Ultimate Guide to Truffle",
+ "url": "https://archive.trufflesuite.com/guides/ultimate-guide-to-truffle-the-gateway-to-full-stack-blockchain-development/",
+ "type": "article"
+ },
+ {
+ "title": "Truffle Tutorial for Beginners",
"url": "https://youtu.be/62f757RVEvU",
"type": "video"
}
@@ -1337,7 +1397,7 @@
"type": "article"
},
{
- "title": "Intro to Foundry",
+ "title": "Introduction to Foundry",
"url": "https://youtu.be/fNMfMxGxeag",
"type": "video"
}
@@ -1382,7 +1442,7 @@
},
"wypJdjTW4jHm9FCqv7Lhb": {
"title": "Fuzz Testing & Static Analysis",
- "description": "Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a smart contract.\n\nStatic analysis is the analysis of smart contracts performed without executing them.\n\nVisit the following resources to learn more:",
+ "description": "Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a smart contract. Static analysis is the analysis of smart contracts performed without executing them.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Getting Started with Smart Contract Fuzzing",
@@ -1390,7 +1450,7 @@
"type": "article"
},
{
- "title": "Solidity smart contract Static Code Analysis",
+ "title": "Solidity Smart Contract Static Code Analysis",
"url": "https://lightrains.com/blogs/solidity-static-analysis-tools/#static-code-analysis",
"type": "article"
},
@@ -1400,7 +1460,7 @@
"type": "article"
},
{
- "title": "Smart contract Fuzzing",
+ "title": "Smart Contract Fuzzing",
"url": "https://youtu.be/LRyyNzrqgOc",
"type": "video"
}
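To illustrate the fuzzing idea above without tying it to any particular tool (Echidna and the linked resources cover real workflows), here is a hand-rolled TypeScript sketch: it throws thousands of random, sometimes invalid inputs at a made-up `transfer` function and checks an invariant that must always hold, namely that balances stay non-negative and their sum is conserved.

```ts
// Minimal hand-rolled fuzzing sketch: feed many random inputs into a pure
// function that mimics a token transfer and check an invariant that must
// always hold. The `transfer` function is a made-up unit under test.
function transfer(balances: [number, number], amount: number): [number, number] {
  const [from, to] = balances;
  if (amount < 0 || amount > from) return [from, to]; // reject invalid transfers
  return [from - amount, to + amount];
}

function fuzzTransfer(iterations: number): void {
  for (let i = 0; i < iterations; i++) {
    const from = Math.floor(Math.random() * 1_000_000);
    const to = Math.floor(Math.random() * 1_000_000);
    // Deliberately include negative and out-of-range amounts.
    const amount = Math.floor(Math.random() * 2_000_000) - 500_000;
    const [newFrom, newTo] = transfer([from, to], amount);
    const conserved = newFrom + newTo === from + to;
    const nonNegative = newFrom >= 0 && newTo >= 0;
    if (!conserved || !nonNegative) {
      throw new Error(`invariant violated for amount=${amount}, balances=[${from}, ${to}]`);
    }
  }
  console.log(`invariant held for ${iterations} random inputs`);
}

fuzzTransfer(10_000);
```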
@@ -1416,7 +1476,7 @@
"type": "opensource"
},
{
- "title": "Solidity Security: Comprehensive list of known attack vectors and common anti-patterns",
+ "title": "Solidity Security Checkpoints",
"url": "https://blog.sigmaprime.io/solidity-security.html",
"type": "article"
},
@@ -1445,8 +1505,19 @@
},
"n3pipnNb76aaQeUwrDLk_": {
"title": "Tools",
- "description": "Blockchain and smart contract technology is fairly new, therefore, you should expect constant changes in the security landscape, as new bugs and security risks are discovered, and new best practices are developed. Keeping track of this constantly moving landscape proves difficult, so using tools to aid this mission is important. The cost of failing to properly secure smart contracts can be high, and because change can be difficult, we must make use of these tools.",
- "links": []
+ "description": "Blockchain and smart contract technology is fairly new, therefore, you should expect constant changes in the security landscape, as new bugs and security risks are discovered, and new best practices are developed. Keeping track of this constantly moving landscape proves difficult, so using tools to aid this mission is important. The cost of failing to properly secure smart contracts can be high, and because change can be difficult, we must make use of these tools.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Best Blockchain Tools",
+ "url": "https://101blockchains.com/best-blockchain-tools/",
+ "type": "article"
+ },
+ {
+ "title": "Top 10 Tools for Blockchain Development",
+ "url": "https://www.blockchain-council.org/blockchain/top-10-tools-for-blockchain-development/",
+ "type": "article"
+ }
+ ]
},
"YA3-7EZBRW-T-8HuVI7lk": {
"title": "Slither",
@@ -1454,8 +1525,13 @@
"links": [
{
"title": "Slither, the Solidity source analyzer",
- "url": "https://github.com/crytic/slither/blob/master/README.md",
+ "url": "https://github.com/crytic/slither",
"type": "opensource"
+ },
+ {
+ "title": "Slither Framework",
+ "url": "https://blog.trailofbits.com/2018/10/19/slither-a-solidity-static-analysis-framework/",
+ "type": "article"
}
]
},
@@ -1475,7 +1551,7 @@
"description": "MythX is a comprehensive smart contract security analysis tools developed by Consensys. It allows users to detect security vulnerabilities in Ethereum smart contracts throughout the development life cycle as well as analyze Solidity dapps for security holes and known smart contract vulnerabilities.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "MythX Official Site",
+ "title": "MythX",
"url": "https://mythx.io/",
"type": "article"
},
@@ -1492,15 +1568,26 @@
"links": [
{
"title": "Echidna: A Fast Smart Contract Fuzzer",
- "url": "https://github.com/crytic/echidna/blob/master/README.md",
+ "url": "https://github.com/crytic/echidna/",
"type": "opensource"
+ },
+ {
+ "title": "Echidna - Smart Contracts",
+ "url": "https://secure-contracts.com/program-analysis/echidna/index.html",
+ "type": "article"
}
]
},
"fbESHQGYqxKRi-5DW8TY3": {
"title": "Management Platforms",
- "description": "Managing smart contracts in a production environment (mainnet) can prove difficult as users must keep track of different versions, blockchains, deployments, etc. Using a tool for this process eliminates a lot of the risk that comes with manual tracking.",
- "links": []
+ "description": "Managing smart contracts in a production environment (mainnet) can prove difficult as users must keep track of different versions, blockchains, deployments, etc. Using a tool for this process eliminates a lot of the risk that comes with manual tracking.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What is a Blockchain Platform?",
+ "url": "https://www.bitdegree.org/crypto/tutorials/blockchain-platform",
+ "type": "article"
+ }
+ ]
},
"qox-x_q-Q7aWcNFWD7RkT": {
"title": "OpenZeppelin",
@@ -1536,8 +1623,23 @@
},
"gpS5CckcQZX3TMFQ2jtIL": {
"title": "Git",
- "description": "[Git](https://git-scm.com/) is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.\n\nVisit the following resources to learn more:",
+ "description": "Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Visit Dedicated Git Roadmap",
+ "url": "https://roadmap.sh/git-github",
+ "type": "article"
+ },
+ {
+ "title": "Git",
+ "url": "https://git-scm.com/",
+ "type": "article"
+ },
+ {
+ "title": "Git Documentation",
+ "url": "https://git-scm.com/doc",
+ "type": "article"
+ },
{
"title": "Learn Git with Tutorials, News and Tips - Atlassian",
"url": "https://www.atlassian.com/git",
@@ -1567,7 +1669,7 @@
{
"title": "GitHub",
"url": "https://github.com/features/",
- "type": "opensource"
+ "type": "article"
},
{
"title": "GitLab",
@@ -1578,11 +1680,6 @@
"title": "BitBucket",
"url": "https://bitbucket.org/product/guides/getting-started/overview",
"type": "article"
- },
- {
- "title": "How to choose the best source code repository",
- "url": "https://bitbucket.org/product/code-repository",
- "type": "article"
}
]
},
@@ -1591,18 +1688,18 @@
"description": "GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "GitHub Website",
- "url": "https://github.com",
- "type": "opensource"
+ "title": "Visit Dedicated Github Roadmap",
+ "url": "https://roadmap.sh/git-github",
+ "type": "article"
},
{
- "title": "GitHub Documentation",
- "url": "https://docs.github.com/en/get-started/quickstart",
+ "title": "GitHub",
+ "url": "https://github.com",
"type": "article"
},
{
- "title": "How to Use Git in a Professional Dev Team",
- "url": "https://ooloo.io/project/github-flow",
+ "title": "GitHub Documentation",
+ "url": "https://docs.github.com/en/get-started/quickstart",
"type": "article"
},
{
@@ -1624,11 +1721,6 @@
"title": "Git and GitHub for Beginners",
"url": "https://www.youtube.com/watch?v=RGOj5yH7evk",
"type": "video"
- },
- {
- "title": "Git and GitHub - CS50 Beyond 2019",
- "url": "https://www.youtube.com/watch?v=eulnSXkhE7I",
- "type": "video"
}
]
},
@@ -1639,7 +1731,7 @@
{
"title": "GitLab Website",
"url": "https://gitlab.com/",
- "type": "opensource"
+ "type": "article"
},
{
"title": "GitLab Documentation",
@@ -1655,7 +1747,7 @@
},
"TMPB62h9LGIA0pMmjfUun": {
"title": "Bitbucket",
- "description": "Bitbucket is a Git based hosting and source code repository service that is Atlassian's alternative to other products like GitHub, GitLab etc\n\nBitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (customer's on-premise) or Bitbucket Data Centre (number of servers in customers on-premise or cloud environment)\n\nVisit the following resources to learn more:",
+ "description": "Bitbucket is a Git based hosting and source code repository service that is Atlassian's alternative to other products like GitHub, GitLab etc. Bitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (customer's on-premise) or Bitbucket Data Centre (number of servers in customers on-premise or cloud environment)\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Bitbucket Website",
@@ -1663,12 +1755,12 @@
"type": "article"
},
{
- "title": "A brief overview of Bitbucket",
+ "title": "Overview of Bitbucket",
"url": "https://bitbucket.org/product/guides/getting-started/overview#a-brief-overview-of-bitbucket",
"type": "article"
},
{
- "title": "Getting started with Bitbucket",
+ "title": "Getting Started with Bitbucket",
"url": "https://bitbucket.org/product/guides/basics/bitbucket-interface",
"type": "article"
},
@@ -1682,11 +1774,6 @@
"url": "https://app.daily.dev/tags/bitbucket?ref=roadmapsh",
"type": "article"
},
- {
- "title": "Bitbucket tutorial | How to use Bitbucket Cloud",
- "url": "https://www.youtube.com/watch?v=M44nEyd_5To",
- "type": "video"
- },
{
"title": "Bitbucket Tutorial | Bitbucket for Beginners",
"url": "https://www.youtube.com/watch?v=i5T-DB8tb4A",
@@ -1777,13 +1864,18 @@
"title": "NFTs",
"description": "A non-fungible token (NFT) is a financial security consisting of digital data stored in a blockchain, a form of distributed ledger. The ownership of an NFT is recorded in the blockchain, and can be transferred by the owner, allowing NFTs to be sold and traded.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "What are NFTs?",
+ "url": "https://www.coindesk.com/learn/what-are-nfts-and-how-do-they-work/",
+ "type": "article"
+ },
{
"title": "Non-Fungible Token (NFT)",
"url": "https://www.investopedia.com/non-fungible-tokens-nft-5115211",
"type": "article"
},
{
- "title": "NFTs, explained",
+ "title": "NFTs Explained",
"url": "https://www.theverge.com/22310188/nft-explainer-what-is-blockchain-crypto-art-faq",
"type": "article"
},
@@ -1804,7 +1896,7 @@
"description": "Blockchain technology has the ability to eliminate all the tolls exacted by centralized organization when transferring payments.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "How does blockchain impact global payments and remittances?",
+ "title": "How does Blockchain Impact Global Payments and Remittances?",
"url": "https://consensys.net/blockchain-use-cases/finance/#payments",
"type": "article"
},
@@ -1852,7 +1944,7 @@
"description": "Alchemy is a developer platform that empowers companies to build scalable and reliable decentralized applications without the hassle of managing blockchain infrastructure in-house.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Alchemy official site",
+ "title": "Alchemy",
"url": "https://www.alchemy.com/",
"type": "article"
}
@@ -1863,9 +1955,14 @@
"description": "Infura provides the tools and infrastructure that allow developers to easily take their blockchain application from testing to scaled deployment - with simple, reliable access to Ethereum and IPFS.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Infura official site",
+ "title": "Infura",
"url": "https://infura.io/",
"type": "article"
+ },
+ {
+ "title": "Infura Documentation",
+ "url": "https://docs.infura.io/api",
+ "type": "article"
}
]
},
@@ -1874,7 +1971,7 @@
"description": "Moralis provides a single workflow for building high performance dapps. Fully compatible with your favorite web3 tools and services.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Moralis official site",
+ "title": "Moralis",
"url": "https://moralis.io/",
"type": "article"
},
@@ -1890,7 +1987,7 @@
"description": "QuickNode is a Web3 developer platform used to build and scale blockchain applications.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Quicknode official site",
+ "title": "Quicknode",
"url": "https://www.quicknode.com/",
"type": "article"
}
@@ -1898,21 +1995,32 @@
},
"NK02dunI3i6C6z7krENCC": {
"title": "Supporting Languages",
- "description": "While the bulk of the logic in blockchain applications is handled by smart contracts, all the surrounding services that support those smart contracts (frontend, monitoring, etc.) are often written in other languages.",
- "links": []
+ "description": "While the bulk of the logic in blockchain applications is handled by smart contracts, all the surrounding services that support those smart contracts (frontend, monitoring, etc.) are often written in other languages.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Programming Languages for Smart Contracts",
+ "url": "https://blog.logrocket.com/smart-contract-programming-languages/",
+ "type": "article"
+ },
+ {
+ "title": "Top Programming Languages for Blockchains",
+ "url": "https://www.codecademy.com/resources/blog/programming-languages-blockchain-development/",
+ "type": "article"
+ }
+ ]
},
"fF06XiQV4CPEJnt_ESOvv": {
"title": "JavaScript",
"description": "JavaScript, often abbreviated JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. It lets us add interactivity to pages e.g. you might have seen sliders, alerts, click interactions, and popups etc on different websites -- all of that is built using JavaScript. Apart from being used in the browser, it is also used in other non-browser environments as well such as Node.js for writing server-side code in JavaScript, Electron for writing desktop applications, React Native for mobile applications and so on.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "You Dont Know JS Yet (book series) ",
+ "title": "You Dont Know JS Yet",
"url": "https://github.com/getify/You-Dont-Know-JS",
"type": "opensource"
},
{
- "title": "W3Schools – JavaScript Tutorial",
- "url": "https://www.w3schools.com/js/",
+ "title": "Visit Dedicated JavaScript Roadmap",
+ "url": "https://roadmap.sh/javascript",
"type": "article"
},
{
@@ -1935,11 +2043,6 @@
"url": "https://youtu.be/hdI2bqOjy3c",
"type": "video"
},
- {
- "title": "Node.js Crash Course",
- "url": "https://www.youtube.com/watch?v=fBNz5xF-Kx4",
- "type": "video"
- },
{
"title": "Node.js Tutorial for Beginners",
"url": "https://www.youtube.com/watch?v=TlB_eWDSMt4",
@@ -1956,18 +2059,23 @@
"url": "https://roadmap.sh/python",
"type": "article"
},
+ {
+ "title": "Python Getting Started",
+ "url": "https://www.python.org/about/gettingstarted/",
+ "type": "article"
+ },
{
"title": "Python Website",
"url": "https://www.python.org/",
"type": "article"
},
{
- "title": "Python Getting Started",
- "url": "https://www.python.org/about/gettingstarted/",
+ "title": "Python Documentation",
+ "url": "https://www.docs.python.org/3",
"type": "article"
},
{
- "title": "W3Schools - Python Tutorial ",
+ "title": "W3Schools - Python Tutorial",
"url": "https://www.w3schools.com/python/",
"type": "article"
},
@@ -1976,11 +2084,6 @@
"url": "https://ehmatthes.github.io/pcc/",
"type": "article"
},
- {
- "title": "Automate the Boring Stuff",
- "url": "https://automatetheboringstuff.com/",
- "type": "article"
- },
{
"title": "Explore top posts about Python",
"url": "https://app.daily.dev/tags/python?ref=roadmapsh",
@@ -2013,7 +2116,7 @@
"type": "article"
},
{
- "title": "W3Schools Go Tutorial ",
+ "title": "W3Schools Go Tutorial",
"url": "https://www.w3schools.com/go/",
"type": "article"
},
@@ -2061,12 +2164,12 @@
},
{
"title": "React Website",
- "url": "https://reactjs.org/",
+ "url": "https://react.dev/",
"type": "article"
},
{
- "title": "Official Getting Started",
- "url": "https://reactjs.org/tutorial/tutorial.html",
+ "title": "Getting Started with React",
+ "url": "https://react.dev/learn",
"type": "article"
},
{
@@ -2101,8 +2204,13 @@
"type": "article"
},
{
- "title": "Official - Getting started with Angular",
- "url": "https://angular.io/start",
+ "title": "Angular",
+ "url": "https://angular.dev/",
+ "type": "article"
+ },
+ {
+ "title": "Getting Started with Angular",
+ "url": "https://angular.dev/overview",
"type": "article"
},
{
@@ -2127,7 +2235,7 @@
"type": "article"
},
{
- "title": "Official Getting Started",
+ "title": "Vue.js Guide",
"url": "https://vuejs.org/v2/guide/",
"type": "article"
},
@@ -2150,7 +2258,7 @@
},
"-7Bq2ktD0nt7of9liuCDL": {
"title": "Testing",
- "description": "A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.\n\nLike traditional software, testing dApps involves testing the entire stack that makes up the dApp (backend, frontend, db, etc.).\n\nVisit the following resources to learn more:",
+ "description": "A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What is Software Testing?",
@@ -2197,8 +2305,14 @@
},
"XvVpnlYhT_yOsvjAvwZpr": {
"title": "Maintenance",
- "description": "dApps can be harder to maintain because the code and data published to the blockchain is harder to modify. It’s hard for developers to make updates to their dapps (or the underlying data stored by a dapp) once they are deployed, even if bugs or security risks are identified in an old version.",
- "links": []
+ "description": "dApps can be harder to maintain because the code and data published to the blockchain is harder to modify. It’s hard for developers to make updates to their dapps (or the underlying data stored by a dapp) once they are deployed, even if bugs or security risks are identified in an old version.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Blockchain Maintenance",
+ "url": "https://imiblockchain.com/blockchain-coding/maintenance/",
+ "type": "article"
+ }
+ ]
},
"B6GGTUbzEaIz5yu32WrAq": {
"title": "Architecture",
@@ -2252,9 +2366,19 @@
"description": "You don't need to write every smart contract in your project from scratch. There are many open source smart contract libraries available that provide reusable building blocks for your project that can save you from having to reinvent the wheel.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Viem library with great TypeScript support",
+ "title": "Viem Library",
"url": "https://viem.sh",
"type": "article"
+ },
+ {
+ "title": "Client Libraries",
+ "url": "https://docs.waves.tech/en/building-apps/waves-api-and-sdk/client-libraries/",
+ "type": "article"
+ },
+ {
+ "title": "Smart Contract Libraries",
+ "url": "https://ethereum.org/en/developers/docs/smart-contracts/libraries/",
+ "type": "article"
}
]
},
@@ -2279,7 +2403,7 @@
"description": "web3.js is a collection of libraries that allow you to interact with a local or remote ethereum node using HTTP, IPC or WebSocket.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "web3.js Documentation",
+ "title": "Web3.js Documentation",
"url": "https://web3js.readthedocs.io/",
"type": "article"
},
@@ -2296,9 +2420,19 @@
"links": [
{
"title": "Moralis SDK",
- "url": "https://github.com/MoralisWeb3/Moralis-JS-SDK/blob/main/README.md",
+ "url": "https://github.com/MoralisWeb3/Moralis-JS-SDK",
"type": "opensource"
},
+ {
+ "title": "Moralis",
+ "url": "https://moralis.com/",
+ "type": "article"
+ },
+ {
+ "title": "Moralis Docs",
+ "url": "https://docs.moralis.com/",
+ "type": "article"
+ },
{
"title": "Explore top posts about Moralis",
"url": "https://app.daily.dev/tags/moralis?ref=roadmapsh",
@@ -2308,13 +2442,29 @@
},
"CoYEwHNNmrQ0i0sSQTcB7": {
"title": "Client Nodes",
- "description": "A blockchain is a distributed network of computers (known as nodes) running software that can verify blocks and transaction data. The software application, known as a client, must be run on your computer to turn it into a blockchain node.",
- "links": []
+ "description": "A blockchain is a distributed network of computers (known as nodes) running software that can verify blocks and transaction data. The software application, known as a client, must be run on your computer to turn it into a blockchain node.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Nodes and Clients",
+ "url": "https://ethereum.org/en/developers/docs/nodes-and-clients/",
+ "type": "article"
+ },
+ {
+ "title": "Ethereum Nodes",
+ "url": "https://www.coindesk.com/learn/ethereum-nodes-and-clients-a-complete-guide/",
+ "type": "article"
+ }
+ ]
},
"DBRaXtwvdq2UGE8rVCmI1": {
"title": "Geth",
"description": "Go Ethereum (Geth) is one of the three original implementations (along with C++ and Python) of the Ethereum protocol. It is written in Go, fully open source and licensed under the GNU LGPL v3.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Geth",
+ "url": "https://geth.ethereum.org/",
+ "type": "article"
+ },
{
"title": "Geth Documentation",
"url": "https://geth.ethereum.org/docs/",
@@ -2330,6 +2480,16 @@
"title": "Besu Ethereum Client",
"url": "https://github.com/hyperledger/besu",
"type": "opensource"
+ },
+ {
+ "title": "Besu Homepage",
+ "url": "https://www.lfdecentralizedtrust.org/projects/besu",
+ "type": "article"
+ },
+ {
+ "title": "Hyperledger Besu",
+ "url": "https://youtu.be/gF__bwiG66g",
+ "type": "video"
}
]
},
@@ -2337,9 +2497,14 @@
"title": "Nethermind",
"description": "Nethermind is a high-performance, highly configurable full Ethereum protocol client built on .NET that runs on Linux, Windows, and macOS, and supports Clique, Aura, Ethash, and Proof-of-Stake consensus algorithms.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Nethermind",
+ "url": "https://www.nethermind.io/",
+ "type": "article"
+ },
{
"title": "Nethermind Documentation",
- "url": "https://docs.nethermind.io/nethermind/",
+ "url": "https://docs.nethermind.io/",
"type": "article"
}
]
@@ -2357,8 +2522,14 @@
},
"bTdRKEiIUmu1pnp8UbJK9": {
"title": "Building for Scale",
- "description": "Due to the limited number of transactions-per-second (TPS) built-in to blockchains, a number of alternative mechanism and technologies have emerged to aid the scaling of blockchain dApps.",
- "links": []
+ "description": "Due to the limited number of transactions-per-second (TPS) built-in to blockchains, a number of alternative mechanism and technologies have emerged to aid the scaling of blockchain dApps.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Blockchain Scalability",
+ "url": "https://medium.com/iovlabs-innovation-stories/blockchain-scalability-4dce74382930",
+ "type": "article"
+ }
+ ]
},
"5T5c3SrFfMZLEKAzxJ-_S": {
"title": "State & Payment Channels",
@@ -2378,7 +2549,7 @@
},
"ti6-LSK52dTCLVdxArp9q": {
"title": "Optimistic Rollups & Fraud Proofs",
- "description": "Optimistic rollups are a layer 2 (L2) construction that improves throughput and latency on Ethereum’s base layer by moving computation and data storage off-chain. An optimistic rollup processes transactions outside of Ethereum Mainnet, reducing congestion on the base layer and improving scalability.\n\nOptimistic rollups allow anyone to publish blocks without providing proofs of validity. However, to ensure the chain remains safe, optimistic rollups specify a time window during which anyone can dispute a state transition.\n\nVisit the following resources to learn more:",
+ "description": "Optimistic rollups are a layer 2 (L2) construction that improves throughput and latency on Ethereum’s base layer by moving computation and data storage off-chain. An optimistic rollup processes transactions outside of Ethereum Mainnet, reducing congestion on the base layer and improving scalability. Optimistic rollups allow anyone to publish blocks without providing proofs of validity. However, to ensure the chain remains safe, optimistic rollups specify a time window during which anyone can dispute a state transition.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "How Do Optimistic Rollups Work (The Complete Guide)",
@@ -2480,7 +2651,7 @@
},
"ecT4W5z8Vq9pXjnuhMdpl": {
"title": "Why it matters?",
- "description": "The nature of blockchain allows for trustless systems to be built on top of it. Users don’t rely on a centralized group of people, such as a bank, to make decisions and allow transactions to flow through. Because the system is decentralized, users know that transactions will never be denied for non-custodial reasons.\n\nThis decentralization enables use-cases that were previously impossible, such as parametric insurance, decentralized finance, and decentralized organizations (DAOs), among a few. This allows developers to build products that provide immediate value without having to go through a bureaucratic process of applications, approvals, and general red tape.\n\nVisit the following resources to learn more:",
+ "description": "The nature of blockchain allows for trustless systems to be built on top of it. Users don’t rely on a centralized group of people, such as a bank, to make decisions and allow transactions to flow through. Because the system is decentralized, users know that transactions will never be denied for non-custodial reasons. This decentralization enables use-cases that were previously impossible, such as parametric insurance, decentralized finance, and decentralized organizations (DAOs), among a few. This allows developers to build products that provide immediate value without having to go through a bureaucratic process of applications, approvals, and general red tape.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Why Blockchain?",
diff --git a/public/roadmap-content/computer-science.json b/public/roadmap-content/computer-science.json
index abc9fbd82..3e01868a3 100644
--- a/public/roadmap-content/computer-science.json
+++ b/public/roadmap-content/computer-science.json
@@ -3966,6 +3966,11 @@
"title": "Operating Systems and System Programming",
"url": "https://archive.org/details/ucberkeley-webcast-PL-XXv-cvA_iBDyz-ba4yDskqMDY6A1w_c",
"type": "article"
+ },
+ {
+ "title": "25 hour Operating Systems Course - freecodecamp",
+ "url": "https://youtu.be/yK1uBHPdp30?si=gGPdK7cM4KlP6Qq0",
+ "type": "video"
}
]
},
@@ -4040,8 +4045,29 @@
},
"Ge2nagN86ofa2y-yYR1lv": {
"title": "Scheduling Algorithms",
- "description": "CPU Scheduling is the process of selecting a process from the ready queue and allocating the CPU to it. The selection of a process is based on a particular scheduling algorithm. The scheduling algorithm is chosen depending on the type of system and the requirements of the processes.\n\nHere is the list of some of the most commonly used scheduling algorithms:\n\n* **First Come First Serve (FCFS):** The process that arrives first is allocated the CPU first. It is a non-preemptive algorithm.\n* **Shortest Job First (SJF):** The process with the smallest execution time is allocated the CPU first. It is a non-preemptive algorithm.\n* **Shortest Remaining Time First (SRTF):** The process with the smallest remaining execution time is allocated the CPU first. It is a preemptive algorithm.\n* **Round Robin (RR):** The process is allocated the CPU for a fixed time slice. The time slice is usually 10 milliseconds. It is a preemptive algorithm.\n* **Priority Scheduling:** The process with the highest priority is allocated the CPU first. It is a preemptive algorithm.\n* **Multi-level Queue Scheduling:** The processes are divided into different queues based on their priority. The process with the highest priority is allocated the CPU first. It is a preemptive algorithm.\n* **Multi-level Feedback Queue Scheduling:** The processes are divided into different queues based on their priority. The process with the highest priority is allocated the CPU first. If a process is preempted, it is moved to the next queue. It is a preemptive algorithm.\n* **Lottery Scheduling:** The process is allocated the CPU based on a lottery system. It is a preemptive algorithm.\n* **Multilevel Feedback Queue Scheduling:** The processes are divided into different queues based on their priority. The process with the highest priority is allocated the CPU first. If a process is preempted, it is moved to the next queue. It is a preemptive algorithm.",
- "links": []
+ "description": "CPU Scheduling is the process of selecting a process from the ready queue and allocating the CPU to it. The selection of a process is based on a particular scheduling algorithm. The scheduling algorithm is chosen depending on the type of system and the requirements of the processes.\n\nHere is the list of some of the most commonly used scheduling algorithms:\n\n* **First Come First Serve (FCFS):** The process that arrives first is allocated the CPU first. It is a non-preemptive algorithm.\n* **Shortest Job First (SJF):** The process with the smallest execution time is allocated the CPU first. It is a non-preemptive algorithm.\n* **Shortest Remaining Time First (SRTF):** The process with the smallest remaining execution time is allocated the CPU first. It is a preemptive algorithm.\n* **Round Robin (RR):** The process is allocated the CPU for a fixed time slice. The time slice is usually 10 milliseconds. It is a preemptive algorithm.\n* **Priority Scheduling:** The process with the highest priority is allocated the CPU first. It is a preemptive algorithm.\n* **Multi-level Queue Scheduling:** The processes are divided into different queues based on their priority. The process with the highest priority is allocated the CPU first. It is a preemptive algorithm.\n* **Multi-level Feedback Queue Scheduling:** The processes are divided into different queues based on their priority. The process with the highest priority is allocated the CPU first. If a process is preempted, it is moved to the next queue. It is a preemptive algorithm.\n* **Highest Response Ratio Next(HRRN):** CPU is allotted to the next process which has the highest response ratio and not to the process having less burst time. It is a Non-Preemptive algorithm.\n* **Lottery Scheduling:** The process is allocated the CPU based on a lottery system. It is a preemptive algorithm.\n\nVisit the following resources to learn more :",
+ "links": [
+ {
+ "title": "CPU Scheduling in Operating Systems - geeksforgeeks",
+ "url": "https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/",
+ "type": "article"
+ },
+ {
+ "title": "Lottery Scheduling for Operating Systems - geeksforgeeks",
+ "url": "https://www.geeksforgeeks.org/lottery-process-scheduling-in-operating-system/",
+ "type": "article"
+ },
+ {
+ "title": "Program for Round Robin Scheduling for the same Arrival time - geeksforgeeks",
+ "url": "https://www.geeksforgeeks.org/program-for-round-robin-scheduling-for-the-same-arrival-time/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to CPU Scheduling",
+ "url": "https://youtu.be/EWkQl0n0w5M?si=Lb-PxN_t-rDfn4JL",
+ "type": "video"
+ }
+ ]
},
"cpQvB0qMDL3-NWret7oeA": {
"title": "CPU Interrupts",
diff --git a/public/roadmap-content/cyber-security.json b/public/roadmap-content/cyber-security.json
index 208364f42..0c167dbe3 100644
--- a/public/roadmap-content/cyber-security.json
+++ b/public/roadmap-content/cyber-security.json
@@ -78,7 +78,7 @@
"type": "article"
},
{
- "title": "Libra Office",
+ "title": "LibreOffice",
"url": "https://www.libreoffice.org/",
"type": "article"
}
@@ -811,13 +811,8 @@
"type": "article"
},
{
- "title": "Lets subnet your home network!",
- "url": "https://www.youtube.com/watch?v=mJ_5qeqGOaI&list=PLIhvC56v63IKrRHh3gvZZBAGvsvOhwrRF&index=6",
- "type": "video"
- },
- {
- "title": "Subnetting for hackers",
- "url": "https://www.youtube.com/watch?v=o0dZFcIFIAw",
+ "title": "Subnetting",
+ "url": "https://www.youtube.com/playlist?list=PLIhvC56v63IKrRHh3gvZZBAGvsvOhwrRF",
"type": "video"
}
]
@@ -1058,7 +1053,7 @@
},
"lwSFIbIX-xOZ0QK2sGFb1": {
"title": "Router",
- "description": "Amazon Simple Storage Service (S3) is a scalable, object-based cloud storage service provided by AWS. It allows users to store and retrieve large amounts of data, such as files, backups, or media content, with high durability and availability. S3 is designed for flexibility, enabling users to access data from anywhere via the internet while offering security features like encryption and access controls. It is widely used for data storage, content distribution, disaster recovery, and big data analytics, providing cost-effective, scalable storage for a variety of applications.\n\nLearn more from the following resources:",
+ "description": "A router is a networking device that directs data packets between different networks, ensuring they reach their destination. It operates at the network layer (Layer 3) of the OSI model and forwards data based on the IP addresses of the source and destination. Routers are essential for connecting devices to the internet or linking multiple networks together. They maintain a routing table to decide the best path for data and can dynamically update routes using protocols like RIP, OSPF, or BGP. Routers also handle Network Address Translation (NAT), allowing multiple devices to share a single public IP address. Many modern routers offer Wi-Fi for wireless connectivity and include basic firewall security to protect the network from threats.\n\nLearn more from the following resources:",
"links": [
{
"title": "What is a Router",
@@ -1069,6 +1064,16 @@
"title": "What is a router and how does it work?",
"url": "https://www.youtube.com/watch?v=UIJzHLpG9bM",
"type": "video"
+ },
+ {
+ "title": "Everything Routers do",
+ "url": "https://youtu.be/AzXys5kxpAM?si=nEsCH6jG2Lj6Ua8N",
+ "type": "video"
+ },
+ {
+ "title": "How Routers forward Packets?",
+ "url": "https://youtu.be/Ep-x_6kggKA?si=II5xBPoXjYEjLvWX",
+ "type": "video"
}
]
},
@@ -1393,11 +1398,16 @@
},
"LKK1A5-xawA7yCIAWHS8P": {
"title": "SSL / TLS",
- "description": "Single Sign-On (SSO) is an authentication method that allows users to access multiple applications or systems with one set of login credentials. It enables users to log in once and gain access to various connected systems without re-entering credentials. SSO enhances user experience by reducing password fatigue, streamlines access management for IT departments, and can improve security by centralizing authentication controls. It typically uses protocols like SAML, OAuth, or OpenID Connect to securely share authentication information across different domains. While SSO offers convenience and can strengthen security when implemented correctly, it also presents a single point of failure if compromised, making robust security measures for the SSO system critical.\n\nLearn more from the following resources:",
+ "description": "Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols used to provide security in internet communications. These protocols encrypt the data that is transmitted over the web, so anyone who tries to intercept packets will not be able to interpret the data. One difference that is important to know is that SSL is now deprecated due to security flaws, and most modern web browsers no longer support it. But TLS is still secure and widely supported, so preferably use TLS.\n\nLearn more from the following resources:",
"links": [
{
- "title": "What’s the Difference Between SSL and TLS?",
- "url": "https://aws.amazon.com/compare/the-difference-between-ssl-and-tls/",
+ "title": "What is SSL? | SSL definition",
+ "url": "https://www.cloudflare.com/en-gb/learning/ssl/what-is-ssl/",
+ "type": "article"
+ },
+ {
+ "title": "TLS Basics",
+ "url": "https://www.internetsociety.org/deploy360/tls/basics/",
"type": "article"
},
{
@@ -1795,7 +1805,7 @@
"type": "article"
},
{
- "title": "",
+ "title": "What is LDAP",
"url": "https://www.youtube.com/watch?v=vy3e6ekuqqg",
"type": "video"
}
diff --git a/public/roadmap-content/devops.json b/public/roadmap-content/devops.json
index 7334ba67e..2873d9d0c 100644
--- a/public/roadmap-content/devops.json
+++ b/public/roadmap-content/devops.json
@@ -1509,11 +1509,6 @@
"title": "White / Grey Listing",
"description": "Whitelisting involves creating a list of trusted entities (such as IP addresses, email addresses, or applications) that are explicitly allowed to access a system or send messages. Anything not on the whitelist is denied by default. Whitelisting offers a high level of security by limiting access to only known and approved entities, but it can be inflexible and require frequent updates to accommodate legitimate changes. Greylisting is a more flexible approach used primarily in email filtering. When an email is received from an unknown sender, the server temporarily rejects it with a \"try again later\" response. Legitimate mail servers will retry sending the email after a short delay, while spammers, which often do not retry, are blocked. This method reduces spam by taking advantage of the fact that spammers usually do not follow retry mechanisms. Greylisting can be less intrusive than whitelisting, but it may introduce slight delays in email delivery for first-time senders.\n\nVisit the following resources to learn more:",
"links": [
- {
- "title": "Basic Introduction to whitelisting",
- "url": "https://www.cblohm.com/blog/education-marketing-trends/what-is-email-whitelisting/",
- "type": "article"
- },
{
"title": "Detailed Introduction to greylisting",
"url": "https://en.wikipedia.org/wiki/Greylisting_(email)",
@@ -2659,31 +2654,21 @@
},
"Yq8kVoRf20aL_o4VZU5--": {
"title": "Container Orchestration",
- "description": "Containers are a construct in which cgroups, namespaces, and chroot are used to fully encapsulate and isolate a process. This encapsulated process, called a container image, shares the kernel of the host with other containers, allowing containers to be significantly smaller and faster than virtual machines, These images are designed for portability, allowing for full local testing of a static image, and easy deployment to a container management platform.\n\nVisit the following resources to learn more:",
+ "description": "Container orchestration is the process of managing and automating the lifecycle of containers, including their deployment, scaling, and networking across multiple hosts. It is a critical technology for running complex containerized applications in production environments.\n\nBy leveraging tools like Kubernetes, Docker Swarm, and Apache Mesos, organizations can ensure high availability, scalability, and reliability for their applications. Container orchestration simplifies operations by automating routine tasks and providing a robust foundation for microservices, cloud-native development, and DevOps practices.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "What are Containers?",
- "url": "https://cloud.google.com/learn/what-are-containers",
+ "title": "What is Container Orchestration?",
+ "url": "https://www.redhat.com/en/topics/containers/what-is-container-orchestration",
"type": "article"
},
{
- "title": "What is a Container?",
- "url": "https://www.docker.com/resources/what-container/",
+ "title": "What is Kubernetes?",
+ "url": "https://kubernetes.io/docs/tutorials/kubernetes-basics/",
"type": "article"
},
{
- "title": "Articles about Containers - The New Stack",
- "url": "https://thenewstack.io/category/containers/",
- "type": "article"
- },
- {
- "title": "Explore top posts about Containers",
- "url": "https://app.daily.dev/tags/containers?ref=roadmapsh",
- "type": "article"
- },
- {
- "title": "What are Containers?",
- "url": "https://www.youtube.com/playlist?list=PLawsLZMfND4nz-WDBZIj8-nbzGFD4S9oz",
+ "title": "Introduction to Kubernetes",
+ "url": "https://www.youtube.com/watch?v=PH-2FfFD2PU",
"type": "video"
}
]
diff --git a/public/roadmap-content/devrel.json b/public/roadmap-content/devrel.json
index d0d814725..ee8bc8233 100644
--- a/public/roadmap-content/devrel.json
+++ b/public/roadmap-content/devrel.json
@@ -227,8 +227,24 @@
},
"c0w241EL0Kh4ek76IgsEs": {
"title": "Blog Posts",
- "description": "",
- "links": []
+ "description": "Writing blog posts is about creating valuable, informative content that resonates with developers by addressing their interests, challenges, and learning needs. Effective blog posts should be well-structured, beginning with a clear introduction that outlines the problem or topic, followed by detailed, actionable insights, code examples, or step-by-step guides that help readers understand and apply the concepts. It’s essential to write in a clear, engaging tone that balances technical accuracy with readability, making sure to anticipate common questions and challenges developers might have.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to Write an SEO Blog Post: 11 Key Tips",
+ "url": "https://www.semrush.com/blog/seo-blog-post/",
+ "type": "article"
+ },
+ {
+ "title": "How to Write an Awesome Blog Post in 5 Steps",
+ "url": "https://www.wordstream.com/blog/ws/2015/02/09/how-to-write-a-blog-post",
+ "type": "article"
+ },
+ {
+ "title": "How to Write a PERFECT Blog Post in 2024",
+ "url": "https://www.youtube.com/watch?v=HoT9naGLgNk",
+ "type": "video"
+ }
+ ]
},
"X0xUzEP0S6SyspvqyoDDk": {
"title": "Technical Documentation",
@@ -288,7 +304,7 @@
},
"iKYmUvWFT_C0wnO0iB6gM": {
"title": "Engaging Audience",
- "description": "",
+ "description": "Engaging the audience in public speaking involves capturing and maintaining their attention through dynamic delivery, relatable content, and interactive elements. Effective speakers use storytelling, humor, and real-life examples to make complex ideas more accessible and memorable. Varying tone, pace, and body language helps keep the presentation lively and prevents monotony. Additionally, asking questions, encouraging participation, and inviting feedback can make the audience feel involved and connected. Engaging speakers not only convey information but also create an experience that resonates, making it easier for the audience to absorb and remember key points.",
"links": []
},
"VTGsmk3p4RVXiNhDmx2l8": {
@@ -298,8 +314,19 @@
},
"LixiZj3-QcmQgGAqaaDr6": {
"title": "Contrast Principle",
- "description": "",
- "links": []
+ "description": "The contrast principle is a psychological concept where the perception of something is influenced by what came before it. In practical terms, it means that when two items are presented one after the other, the differences between them seem more pronounced. For example, if a developer sees a complex solution first, a simpler one that follows will appear even easier than it would on its own. This principle is often used in marketing, presentations, and negotiations to shape how choices are perceived, making certain options more attractive by strategically setting up contrasts.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to have influence",
+ "url": "https://www.ethosdebate.com/influence-part-2-contrast-principle/",
+ "type": "article"
+ },
+ {
+ "title": "Psychology of perceptual contrast",
+ "url": "https://www.linkedin.com/pulse/psychology-perceptual-contrast-devender-kumar/",
+ "type": "article"
+ }
+ ]
},
"tbIAEStaoVWnEWbdk7EGc": {
"title": "Handouts",
@@ -318,18 +345,46 @@
},
"UdUDngq425NYSvIuOd7St": {
"title": "Active Listening",
- "description": "",
- "links": []
+ "description": "Active listening in developer relations is about genuinely engaging with the developer community to understand their needs, challenges, and feedback. It involves more than just hearing what is said; it requires attention to verbal cues, context, and non-verbal signals, as well as asking clarifying questions and reflecting on what has been shared. By actively listening, developer advocates can build trust, foster open communication, and gain insights that help shape better products, documentation, and community experiences, ultimately creating a more supportive and responsive environment for developers.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is active listening?",
+ "url": "https://hbr.org/2024/01/what-is-active-listening",
+ "type": "article"
+ },
+ {
+ "title": "7 Active Listening Techniques For Better Communication",
+ "url": "https://www.verywellmind.com/what-is-active-listening-3024343",
+ "type": "article"
+ }
+ ]
},
"jyScVS-sYMcZcH3hOwbMK": {
"title": "Anticipate Questions",
- "description": "",
- "links": []
+ "description": "When giving talks, especially at developer conferences or events, its important to anticipate the audience asking questions at the end of your talk. Being prepared to handle common questions related to your topic can help you with confidence and show that you're a subject matter expert when you answer them correctly. It's important however not to lie or give incorrect answers so make sure that if you don't know the answer, you're honest about it.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to handle questions during a presentation",
+ "url": "https://www.secondnature.com.au/blog/how-to-handle-questions-during-a-presentation",
+ "type": "article"
+ }
+ ]
},
"rhs6QwxZ7PZthLfi38FJn": {
"title": "Be Concise",
- "description": "",
- "links": []
+ "description": "Being concise during a Q&A means delivering clear, direct answers that address the core of the question without unnecessary detail or digression. This approach respects the time of both the person asking and the audience, ensuring that key information is communicated effectively.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to handle a Q&A",
+ "url": "https://anthonysanni.com/blog/3-common-difficult-q-session-questions-turn-advantage",
+ "type": "article"
+ },
+ {
+ "title": "How to answer any presentation question",
+ "url": "https://www.youtube.com/watch?v=lfiNFNTwFGU",
+ "type": "video"
+ }
+ ]
},
"VSOdD9KKF_Qz8nbRdHNo3": {
"title": "Managing Difficult Questions",
@@ -354,7 +409,7 @@
},
"C2w8R4tNy2lOhhWU9l32s": {
"title": "Event Participation",
- "description": "",
+ "description": "Event participation involves encouraging members to actively join and contribute to events such as workshops, webinars, hackathons, or meetups. Effective participation starts with understanding the community’s interests and creating events that provide value, whether through learning opportunities, networking, or problem-solving. To boost attendance, clear communication, easy registration processes, and engaging promotions are essential. During the event, interactive elements like Q&A sessions, polls, and group activities help keep participants involved and foster a sense of community.",
"links": []
},
"gvMbo22eRxqOzszc_w4Gz": {
@@ -447,7 +502,7 @@
},
"sUEZHmKxtjO9gXKJoOdbF": {
"title": "APIs & SDKs",
- "description": "",
+ "description": "APIs (Application Programming Interfaces) and SDKs (Software Development Kits) are essential tools in developer relations as they provide the building blocks for developers to integrate and extend functionality within their own applications. An API allows developers to interact with a service or platform through a defined set of rules and endpoints, enabling data exchange and functionality use without needing to understand the underlying code. SDKs go a step further by offering a collection of pre-built libraries, tools, and documentation that simplify the process of developing with an API, reducing the time and effort needed to create robust integrations. Together, APIs and SDKs empower developers to quickly build and innovate, driving adoption and engagement with a platform.",
"links": []
},
"pqp9FLRJRDDEnni72KHmv": {
@@ -457,8 +512,24 @@
},
"h6R3Vyq0U8t8WL3G5xC2l": {
"title": "Building SDKs",
- "description": "",
- "links": []
+ "description": "Building SDKs (Software Development Kits) involves creating a set of tools, libraries, and documentation that help developers easily integrate and interact with a platform or service. A well-designed SDK abstracts the complexities of an API, providing a streamlined, user-friendly interface that enables developers to focus on building their applications rather than handling low-level technical details. It should be intuitive, well-documented, and robust, offering clear examples, error handling, and flexibility to suit different use cases.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to build an SDK from scratch: Tutorial & best practices",
+ "url": "https://liblab.com/blog/how-to-build-an-sdk/",
+ "type": "article"
+ },
+ {
+ "title": "Guiding Principles for Building SDKs",
+ "url": "https://auth0.com/blog/guiding-principles-for-building-sdks/",
+ "type": "article"
+ },
+ {
+ "title": "API vs SDK",
+ "url": "https://www.youtube.com/watch?v=kG-fLp9BTRo",
+ "type": "video"
+ }
+ ]
},
"7Q6_tdRaeb8BgreG8Mw-a": {
"title": "Understanding APIs",
@@ -472,13 +543,40 @@
},
"a-i1mgF3VAxbbpA1gMWyK": {
"title": "Git",
- "description": "",
- "links": []
+ "description": "Git is a distributed version control system designed to handle projects of any size with speed and efficiency. Created by Linus Torvalds in 2005, it tracks changes in source code during software development, allowing multiple developers to work together on non-linear development. Git maintains a complete history of all changes, enabling easy rollbacks and comparisons between versions. Its distributed nature means each developer has a full copy of the repository, allowing for offline work and backup. Git’s key features include branching and merging capabilities, staging area for commits, and support for collaborative workflows like pull requests. Its speed, flexibility, and robust branching and merging capabilities have made it the most widely used version control system in software development, particularly for open-source projects and team collaborations.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Git?",
+ "url": "https://www.atlassian.com/git/tutorials/what-is-git",
+ "type": "article"
+ },
+ {
+ "title": "What is Git? Our beginner’s guide to version control",
+ "url": "https://github.blog/developer-skills/programming-languages-and-frameworks/what-is-git-our-beginners-guide-to-version-control/",
+ "type": "article"
+ },
+ {
+ "title": "What is Git? Explained in 2 Minutes!",
+ "url": "https://www.youtube.com/watch?v=2ReR1YJrNOM",
+ "type": "video"
+ }
+ ]
},
"8O1AgUKXe35kdiYD02dyt": {
"title": "GitHub",
- "description": "",
- "links": []
+ "description": "GitHub is a web-based platform for version control and collaboration using Git. Owned by Microsoft, it provides hosting for software development and offers features beyond basic Git functionality. GitHub includes tools for project management, code review, and social coding. Key features include repositories for storing code, pull requests for proposing and reviewing changes, issues for tracking bugs and tasks, and actions for automating workflows. It supports both public and private repositories, making it popular for open-source projects and private development. GitHub’s collaborative features, like forking repositories and inline code comments, facilitate team development and community contributions. With its extensive integrations and large user base, GitHub has become a central hub for developers, serving as a portfolio, collaboration platform, and deployment tool for software projects of all sizes.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "GitHub Website",
+ "url": "https://github.com",
+ "type": "article"
+ },
+ {
+ "title": "How to Use GitHub",
+ "url": "https://www.youtube.com/watch?v=v_1iqtOnUMg",
+ "type": "video"
+ }
+ ]
},
"J2WunUJBzYw_D5cQH_pnH": {
"title": "Managing Discussions",
@@ -538,13 +636,35 @@
},
"4ZvzY_xGO5BZOmfqj0TTq": {
"title": "Community Guidelines",
- "description": "",
- "links": []
+ "description": "Community guidelines serve as the cornerstone of a thriving developer community, establishing essential rules and principles that govern interactions among members. By setting clear expectations for behavior and promoting respectful communication, these guidelines create a safe and inclusive environment where all participants can flourish. Covering a range of crucial topics such as code of conduct, content moderation, intellectual property rights, and dispute resolution, well-crafted guidelines provide a comprehensive framework for community engagement. Through the implementation and enforcement of these standards, communities can effectively foster collaboration, encourage knowledge sharing, and maintain a positive atmosphere. Ultimately, this supportive environment nurtures the growth and engagement of developers, contributing to the overall success and sustainability of the community.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Community Guidelines: How to Write and Enforce Them",
+ "url": "https://www.commsor.com/post/community-guidelines",
+ "type": "article"
+ },
+ {
+ "title": "Community Guidelines Mastery: From Creation to Enforcement",
+ "url": "https://bettermode.com/blog/community-guidelines",
+ "type": "article"
+ }
+ ]
},
"er9ukuBvY-F4F8S1qbbjU": {
"title": "Code of Conduct",
- "description": "",
- "links": []
+ "description": "Writing a code of conduct for developer communities involves setting clear guidelines that promote a safe, inclusive, and respectful environment for all participants. It should define acceptable behavior, outline unacceptable actions (such as harassment, discrimination, or disruptive conduct), and provide examples where necessary to clarify expectations. A well-crafted code of conduct also includes clear procedures for reporting violations, ensuring that community members know how to raise concerns and what they can expect in terms of response. Additionally, it should outline the consequences for breaking the rules, emphasizing that the guidelines are actively enforced to maintain a positive and welcoming community culture. The tone should be firm yet approachable, reflecting the commitment to fairness, safety, and inclusivity.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Guide to writing a Code of Conduct",
+ "url": "https://projectinclude.org/writing_cocs",
+ "type": "article"
+ },
+ {
+ "title": "Code of Conduct meaning and template",
+ "url": "https://humaans.io/hr-glossary/code-of-conduct",
+ "type": "article"
+ }
+ ]
},
"8I59U-nnkhQv8ldRuqQlb": {
"title": "Rules and Policies",
@@ -553,8 +673,19 @@
},
"-6cf3RT4-cbwvLYIkCosF": {
"title": "Community Management",
- "description": "",
- "links": []
+ "description": "Community management is a critical aspect of developer relations that involves nurturing and overseeing a thriving ecosystem of developers, users, and stakeholders. It encompasses a wide range of activities, including facilitating discussions, moderating content, organizing events, and implementing strategies to foster engagement and growth. Effective community managers act as bridges between the company and its users, ensuring that community needs are addressed while aligning with organizational goals. They cultivate a positive and inclusive environment where members can share knowledge, collaborate on projects, and provide valuable feedback, ultimately contributing to the success and evolution of the product or platform.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "The Ultimate Guide to Community Management",
+ "url": "https://blog.hubspot.com/marketing/community-management-expert-advice",
+ "type": "article"
+ },
+ {
+ "title": "Online Community Management Tactics",
+ "url": "https://www.higherlogic.com/blog/online-community-management-guide/",
+ "type": "article"
+ }
+ ]
},
"d_dKF87OnRWoWj3Bf1uFf": {
"title": "Moderation",
@@ -563,13 +694,35 @@
},
"8ls5kQvDgvwLbIrwYg1OL": {
"title": "Conflict Resolution",
- "description": "",
- "links": []
+ "description": "Conflict resolution in communities is a crucial skill for Developer Relations professionals, as it involves navigating and mediating disagreements that inevitably arise in diverse and passionate groups. This process requires a delicate balance of empathy, objectivity, and clear communication to address issues ranging from technical disputes to interpersonal conflicts. By fostering an environment of mutual respect and understanding, effective conflict resolution not only resolves immediate problems but also strengthens community bonds, encourages healthy debate, and promotes a culture of collaboration. Ultimately, the ability to skillfully manage conflicts contributes to the overall health and growth of the developer community, ensuring that differences of opinion become opportunities for learning and innovation rather than sources of division.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "5 Conflict Resolution Strategies",
+ "url": "https://www.indeed.com/career-advice/career-development/conflict-resolution-strategies",
+ "type": "article"
+ },
+ {
+ "title": "14 Effective Conflict Resolution Techniques",
+ "url": "https://www.youtube.com/watch?v=v4sby5j4dTY",
+ "type": "video"
+ }
+ ]
},
"6yLt4Ia52Jke9i5kJQvAC": {
"title": "Encouraging Participation",
- "description": "",
- "links": []
+ "description": "Encouraging participation involves creating an environment where members feel welcomed, valued, and motivated to engage. This can be achieved by initiating discussions, asking open-ended questions, and hosting events like webinars, AMAs (Ask Me Anything), or contests that encourage sharing and collaboration. Recognizing and rewarding contributions, whether through badges, shout-outs, or exclusive content, helps foster a sense of belonging and appreciation. It’s also important to actively respond to posts, provide guidance, and make sure the community feels heard. By building a supportive, interactive space, community managers can drive more meaningful participation and strengthen overall engagement.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "12 tips to encourage activity in an online community",
+ "url": "https://www.yunits.com/en/blogs/12-tips-to-encourage-activity-in-an-online-community/",
+ "type": "article"
+ },
+ {
+ "title": "How To Increase Community Engagement",
+ "url": "https://www.aluminati.net/how-to-increase-community-engagement/",
+ "type": "article"
+ }
+ ]
},
"Nta8pUncwNQxJlqF6h1AT": {
"title": "Recognition Programs",
@@ -578,8 +731,19 @@
},
"usorG1GkkvGAZ0h_AGHVk": {
"title": "Event Management",
- "description": "",
- "links": []
+ "description": "Event management involves planning, organizing, and executing events, whether virtual or in-person, to create meaningful experiences for participants. It requires careful coordination of logistics, from selecting venues and scheduling to arranging speakers, promotions, and attendee engagement. Effective event management starts with clear objectives and a detailed plan, covering everything from budgeting and marketing to technical support and post-event follow-ups. Engaging content, interactive sessions, and smooth operations are essential to ensuring a positive experience, encouraging participation, and meeting the event’s goals.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "A step-by-step guide to organising unforgettable professional events for developers",
+ "url": "https://weezevent.com/en-gb/blog/organising-events-for-developers/",
+ "type": "article"
+ },
+ {
+ "title": "Anatomy of a Developer Conference",
+ "url": "https://shiloh-events.com/anatomy-of-a-developer-conference/",
+ "type": "article"
+ }
+ ]
},
"RQk3uOikjQYRyTu7vuAG7": {
"title": "Planning",
@@ -593,8 +757,19 @@
},
"1m1keusP-PTjEwy0dCJJL": {
"title": "Execution",
- "description": "",
- "links": []
+ "description": "In the context of event management within developer advocacy or developer relations, execution refers to the process of effectively implementing and delivering events such as hackathons, conferences, webinars, or meetups. It involves coordinating logistics, ensuring that speakers, content, and schedules align with the target audience's interests, and driving engagement throughout the event. Strong execution ensures seamless operations, from setup to follow-up, with an emphasis on providing value to attendees and fostering deeper connections within the developer community.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to run a successful hackathon",
+ "url": "https://hackathon.guide/",
+ "type": "article"
+ },
+ {
+ "title": "So you want to run an online event",
+ "url": "https://www.youtube.com/watch?v=56rvtjZ9x3g",
+ "type": "video"
+ }
+ ]
},
"kmcOYDvu1vq7AQPllZvv0": {
"title": "Post Event Followup",
@@ -608,12 +783,23 @@
},
"oWXfov-mOF47d7Vffyp3t": {
"title": "Feedback Collection",
- "description": "",
- "links": []
+ "description": "Feedback collection is the process of gathering insights and opinions from attendees, participants, and stakeholders to assess the event's effectiveness and identify areas for improvement. This can be done through surveys, polls, one-on-one conversations, or post-event interviews. For developer advocacy events, feedback collection focuses on understanding the value provided to the developer audience, the relevance of content, speaker quality, and logistical aspects such as venue or virtual platform experience. Analyzing this feedback helps refine future events, tailor content more effectively, and enhance overall engagement with the developer community.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "7 Tips to Collect Effective Event Feedback",
+ "url": "https://daily.dev/blog/7-tips-to-collect-effective-event-feedback",
+ "type": "article"
+ },
+ {
+ "title": "Post-event survey questions you should ask after any event",
+ "url": "https://uk.surveymonkey.com/mp/post-event-survey-questions/",
+ "type": "article"
+ }
+ ]
},
"B1IdobUaGeBLI2CgsFg8H": {
"title": "Blogging",
- "description": "",
+ "description": "Blogging is a strategic way to share knowledge, build credibility, and engage with the developer community. It involves writing content that educates, informs, or solves specific problems that developers face, such as tutorials, best practices, case studies, and product updates. Effective blogging should be both informative and approachable, using clear language, relevant examples, and practical insights that developers can easily apply.",
"links": []
},
"uzMfR6Yd9Jvjn8i5RpC1Q": {
@@ -638,13 +824,35 @@
},
"nlzI2fG3SwC5Q42qXcXPX": {
"title": "Cross-Promotion",
- "description": "",
- "links": []
+ "description": "Cross-promotion in the context of guest blogging involves leveraging both the host’s and guest’s platforms to maximize reach and engagement. When a guest blog post is published, both parties actively share and promote it across their social media channels, newsletters, and websites. This collaboration helps expose the content to a broader audience, as followers from both sides are introduced to new voices and insights. Effective cross-promotion requires clear communication on promotional plans, consistent messaging, and tagging or mentioning each other to ensure visibility. It’s a mutually beneficial strategy that boosts traffic, expands reach, and strengthens partnerships.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What Is Cross-Promotion + 8 Cross-Promotion Strategies",
+ "url": "https://optinmonster.com/cross-promotion-ideas-triple-customers/",
+ "type": "article"
+ },
+ {
+ "title": "How to cross promote on Social Media",
+ "url": "https://www.sprinklr.com/blog/cross-promote-social-media/",
+ "type": "article"
+ }
+ ]
},
"w1ZooDCDOkbL1EAa5Hx3d": {
"title": "Collaborations",
- "description": "",
- "links": []
+ "description": "Blogging collaborations involve partnering with other experts, influencers, or organizations to create content that combines diverse perspectives, expertise, and audiences. Collaborations can take the form of co-authored posts, guest blogging, or interview-style pieces, and they are an effective way to reach a broader audience while enriching the content with insights that a single author might not cover. Successful collaborations require clear communication, aligning on goals, topics, and responsibilities, and ensuring that the content feels cohesive and valuable to all parties involved. They also provide an opportunity to build relationships, share knowledge, and cross-promote, driving increased visibility and engagement for everyone participating.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to collaborate with bloggers",
+ "url": "https://www.create.net/blog/how-to-collaborate-with-bloggers",
+ "type": "article"
+ },
+ {
+ "title": "The Power of Collaborative Blogging: Building Relationships and Connecting with Others",
+ "url": "https://aicontentfy.com/en/blog/power-of-collaborative-blogging-building-relationships-and-connecting-with-others",
+ "type": "article"
+ }
+ ]
},
"bRzzc137OlmivEGdhv5Ew": {
"title": "Video Production",
@@ -653,8 +861,24 @@
},
"6zK9EJDKBC89UArY7sfgs": {
"title": "Editing",
- "description": "",
- "links": []
+ "description": "Editing is the process of refining raw footage to create a cohesive, engaging final product. It involves cutting, arranging, and enhancing clips to ensure a smooth flow, clear storytelling, and visual appeal. Effective editing also includes adding elements like transitions, graphics, sound effects, and music to emphasize key points and maintain viewer interest. In developer-focused content, such as tutorials or product demos, editing helps simplify complex information, highlighting important details and ensuring clarity. Good editing not only enhances the viewing experience but also helps convey the intended message more effectively and professionally.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Introduction to video editing",
+ "url": "https://www.adobe.com/uk/creativecloud/video/discover/edit-a-video.html",
+ "type": "article"
+ },
+ {
+ "title": "A complete guide on how to edit videos fast even if you are a beginner",
+ "url": "https://www.laura-moore.co.uk/a-complete-guide-on-how-to-edit-videos-fast-even-if-you-are-a-beginner/",
+ "type": "article"
+ },
+ {
+ "title": "How to edit videos",
+ "url": "https://www.youtube.com/watch?app=desktop&v=sTqEmGNtNqk",
+ "type": "video"
+ }
+ ]
},
"_QHUpFW4kZ5SBaP7stXY2": {
"title": "Recording",
@@ -668,8 +892,19 @@
},
"OUWVqJImrmsZpAtRrUYNH": {
"title": "Animations & Graphics",
- "description": "",
- "links": []
+ "description": "Adding animations and graphics to your videos are a great way to retain users. Large creators in the developer community such as [Fireship.io](http://Fireship.io) do this to great effect by having a mix of informative and humourous animations and graphics.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "3 AMAZING Graphic Animations For Level UP Your Videos",
+ "url": "https://www.youtube.com/watch?v=cnyQkr21oM8",
+ "type": "video"
+ },
+ {
+ "title": "Essential Motion Graphics for Youtube",
+ "url": "https://www.youtube.com/watch?v=HH0VBOyht0E",
+ "type": "video"
+ }
+ ]
},
"pEMNcm_wJNmOkWm57L1pA": {
"title": "Video Production",
@@ -703,8 +938,19 @@
},
"D7_iNPEKxFv0gw-fsNNrZ": {
"title": "Animations & Graphics",
- "description": "",
- "links": []
+ "description": "Animations and graphics can be a great addition to your live streaming setup, especially if they're related to the brand that you're representing. Be aware though that excessive animations and graphics can take its toll on your machine and potentially result in a bad experience for viewers, so it's important to find the right balance.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to Create Animated Overlays For Your Live Streams",
+ "url": "https://www.youtube.com/watch?v=y6BykyZGUlE",
+ "type": "video"
+ },
+ {
+ "title": "How to Install & Use Overlays in OBS",
+ "url": "https://www.youtube.com/watch?v=pxB9ET8gZH0",
+ "type": "video"
+ }
+ ]
},
"8aiLVG4clveX1Luiehvxr": {
"title": "Technical Setup",
@@ -718,8 +964,19 @@
},
"7y4vHk_jgNTW6Q1WoqYDc": {
"title": "Audio",
- "description": "",
- "links": []
+ "description": "Having good quality audio when live streaming or creating video content is a must, it is often said that viewers will accept lower quality video but poor audio is a deal breaker. Unfortunetly this often includes purchasing a good quality microphone although theres many improvements you can make to an existing setup such as streaming from a quiet location with good accoustics and applying filters in your software of choice.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to Improve Your Live Stream Audio",
+ "url": "https://www.soundproofcow.com/improve-live-stream-audio/",
+ "type": "article"
+ },
+ {
+ "title": "How to improve your live stream audio quality!",
+ "url": "https://www.youtube.com/watch?app=desktop&v=_bTb0YqJX9w",
+ "type": "video"
+ }
+ ]
},
"71BBFjaON1NJi4rOHKW6K": {
"title": "Social Media",
@@ -738,8 +995,14 @@
},
"ZMManUnO-9EQqi-xmLt5r": {
"title": "Facebook",
- "description": "",
- "links": []
+ "description": "Facebook serves as a powerful platform for building and engaging developer communities. It allows developer advocates to share resources, host live events, and create dedicated groups or pages for discussions, fostering collaboration and knowledge sharing. Facebook’s global reach and engagement tools help advocates amplify content, provide support, and maintain an active presence in the developer ecosystem, creating a space for feedback, networking, and promoting tools or products to a broad audience.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Facebook marketing: The complete guide for your brand’s strategy",
+ "url": "https://sproutsocial.com/insights/facebook-marketing-strategy/",
+ "type": "article"
+ }
+ ]
},
"UAkGV9_I6qiKZMr1aqQCm": {
"title": "Instagram",
@@ -748,28 +1011,88 @@
},
"TGXPxTFv9EhsfS5uWR5gS": {
"title": "Content Strategy",
- "description": "",
- "links": []
+ "description": "A social media content strategy is a plan to use platforms effectively to engage and grow an audience. It starts with setting clear goals, like increasing brand awareness or driving traffic, and understanding the target audience. The strategy focuses on creating diverse, valuable content—tutorials, updates, tips, and interactive posts—that resonates with followers and encourages engagement. Choosing the right platforms, posting consistently, and actively responding to comments are key to building relationships.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "The Ultimate Guide to Creating a Content Marketing Strategy",
+ "url": "https://www.semrush.com/blog/content-marketing-strategy-guide/",
+ "type": "article"
+ },
+ {
+ "title": "How to craft an effective social media content strategy",
+ "url": "https://sproutsocial.com/insights/social-media-content-strategy/",
+ "type": "article"
+ }
+ ]
},
"lG1FH7Q-YX5pG-7mMtbSR": {
"title": "Analytics and Optimization",
- "description": "",
- "links": []
+ "description": "When engaging with developer communities on social media, it's important to monitor your analytics in order to maximise the potential of your content. Platforms like X provide great analytics that help you keep an eye on which posts perform well with data such as impressions, likes, and shares.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is social media analytics?",
+ "url": "https://blog.hootsuite.com/social-media-analytics/",
+ "type": "article"
+ },
+ {
+ "title": "2024 Guide to X (Twitter) Analytics",
+ "url": "https://blog.hootsuite.com/twitter-analytics-guide/",
+ "type": "article"
+ }
+ ]
},
"l2P44pL9eF8xarBwC_CVO": {
"title": "Consistent Posting",
- "description": "",
- "links": []
+ "description": "Consistent posting on social media platforms is a cornerstone of effective Developer Relations strategy, serving as a powerful tool to maintain engagement, build credibility, and foster a sense of community among developers. By regularly sharing valuable content, insights, and updates, DevRel professionals can establish a reliable presence that audiences come to expect and appreciate. This steady stream of information not only keeps the community informed about the latest developments, but also demonstrates an ongoing commitment to the field, enhancing trust and authority. Moreover, consistent posting helps to navigate the algorithms of various social media platforms, increasing visibility and reach, while providing ample opportunities for interaction and feedback from the developer community.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Why Posting Consistently is Key to Social Media",
+ "url": "https://forty8creates.com/why-posting-consistently-is-key-to-social-media/",
+ "type": "article"
+ },
+ {
+ "title": "How to Create Consistent Content on Social Media",
+ "url": "https://www.youtube.com/watch?v=-bQpsCMgCkA",
+ "type": "video"
+ }
+ ]
},
"WIH216mHg2OiSebzQYI-f": {
"title": "Engaging Content",
- "description": "",
- "links": []
+ "description": "Content on social media is designed to capture attention, spark interaction, and encourage sharing. It’s often visually appealing, informative, or entertaining, offering value that resonates with the audience’s interests. This can include eye-catching images, short videos, infographics, or quick tips that are easy to digest and act on. To boost engagement, content should invite responses, like asking questions, running polls, or encouraging users to share their thoughts.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Create engaging and effective social media content",
+ "url": "https://help.hootsuite.com/hc/en-us/articles/4403597090459-Create-engaging-and-effective-social-media-content",
+ "type": "article"
+ },
+ {
+ "title": "How To Create Engaging Videos As A Solo Creator",
+ "url": "https://www.youtube.com/watch?v=yxXOjyvIkik",
+ "type": "video"
+ }
+ ]
},
"ZWkpgvXIzjN3_fOyhVEv0": {
"title": "Creating Brand Voice",
- "description": "",
- "links": []
+ "description": "Creating a brand voice involves defining a consistent tone and style that reflects the brand’s personality and values across all communication. It’s about deciding how the brand should sound—whether friendly, professional, witty, or authoritative—and using that voice to connect authentically with the audience. To build a strong brand voice, it’s important to understand the target audience, outline key characteristics (like being approachable or technical), and ensure all content, from social media posts to documentation, follows these guidelines. Consistency in voice helps build trust, makes the brand more recognizable, and strengthens its identity in the market.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Creating Your Brand Voice: A Complete Guide",
+ "url": "https://blog.hubspot.com/marketing/brand-voice",
+ "type": "article"
+ },
+ {
+ "title": "How to Define Your Brand’s Tone of Voice",
+ "url": "https://www.semrush.com/blog/how-to-define-your-tone-of-voice",
+ "type": "article"
+ },
+ {
+ "title": "Branding 101: How To Build Customer Loyalty With Brand Voice",
+ "url": "https://www.youtube.com/watch?v=et-a39drCsU",
+ "type": "video"
+ }
+ ]
},
"NWxAxiDgvlGpvqdkzqnOH": {
"title": "Tracking Engagement",
@@ -778,12 +1101,23 @@
},
"46iMfYgC7fCZLCy-qzl1B": {
"title": "Data-Driven Strategy Shift",
- "description": "",
- "links": []
+ "description": "A data-driven strategy for social media involves using analytics and insights to guide content creation, posting schedules, and engagement tactics. By regularly reviewing metrics like engagement rates, click-throughs, and follower growth, brands can identify what resonates with their audience and refine their approach accordingly. This strategy helps in making informed decisions, such as which types of content to prioritize, the best times to post, and how to optimize ads. It also enables brands to track the success of campaigns, experiment with new ideas, and adjust quickly to shifting trends, ensuring that social media efforts are effective and aligned with overall goals.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What Does a Data-Driven Digital Marketing Strategy Look Like",
+ "url": "https://medium.com/@vantageplusmarketing/implementing-data-driven-strategies-in-seo-and-content-marketing-6104a91afba0",
+ "type": "article"
+ },
+ {
+ "title": "Analytics vs Reporting: How to make Data-driven Business Decisions",
+ "url": "https://www.youtube.com/watch?v=kyNa3SdKU84",
+ "type": "video"
+ }
+ ]
},
"g3M6nfLr0DMcn-NCFF7nZ": {
"title": "Documentation",
- "description": "",
+ "description": "Documentation is a key resource that guides new users through understanding and using a platform or tool. It should provide clear, step-by-step instructions, code examples, and explanations that help developers get started quickly, addressing common questions and challenges. Effective onboarding documentation is well-organized, easy to navigate, and written in a straightforward, approachable tone. It often includes tutorials, guides, and FAQs that cover everything from initial setup to more advanced features, ensuring that developers can smoothly integrate the platform into their projects. Good documentation reduces friction, boosts developer confidence, and accelerates adoption.",
"links": []
},
"RLf08xKMjlt6S9-MFiTo-": {
@@ -793,8 +1127,19 @@
},
"7IJO_jDpZUdlr_n5rBJ6O": {
"title": "API References",
- "description": "",
- "links": []
+ "description": "Adding API References to your products documentation is a key component and the most common reason for developers using documentation. When creating API documentation, ensure you add examples for the most common languages as well as any details around authorization and common issues faced.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What Is API Documentation?",
+ "url": "https://blog.hubspot.com/website/api-documentation",
+ "type": "article"
+ },
+ {
+ "title": "API Documentation and Why it Matters",
+ "url": "https://www.youtube.com/watch?v=39Tt1IkLiQQ",
+ "type": "video"
+ }
+ ]
},
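
To make the point about runnable examples concrete, here is a minimal sketch of the kind of code sample an API reference page might embed. The base URL, endpoint, and token below are hypothetical placeholders rather than a real API; only the `requests` calls themselves are standard library usage.

```python
# Minimal sketch of a code sample that might accompany an API reference.
# The base URL, endpoint, and bearer token are hypothetical placeholders.
import requests

API_BASE = "https://api.example.com/v1"   # hypothetical base URL
TOKEN = "YOUR_API_TOKEN"                  # document where readers obtain this

def list_projects() -> list[dict]:
    """Fetch all projects for the authenticated account."""
    response = requests.get(
        f"{API_BASE}/projects",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface the 401/403 cases the docs should explain
    return response.json()

if __name__ == "__main__":
    for project in list_projects():
        print(project.get("id"), project.get("name"))
```

Note how the sample covers the two things the description calls out: how authorization is passed and how a common failure (a non-2xx response) surfaces.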
"6ubk20TBIL3_VrrRMe8tO": {
"title": "Tutorials",
@@ -808,12 +1153,18 @@
},
"pGJrCyYhLLGUnv6LxpYUe": {
"title": "Code Samples",
- "description": "",
- "links": []
+ "description": "Code samples are essential in sample projects as they provide concrete, practical examples that help developers understand how to use a platform, library, or tool. Well-crafted code samples should be clear, concise, and focused on demonstrating specific functionality, avoiding unnecessary complexity that might distract from the core concept. They should be easy to read, following consistent naming conventions, proper formatting, and best practices for the relevant programming language. Including inline comments and explanations can help clarify key steps, while additional context in the accompanying documentation or blog post can guide developers through the logic and potential use cases.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Code Documentation Best Practices and Standards: A Complete Guide",
+ "url": "https://blog.codacy.com/code-documentation",
+ "type": "article"
+ }
+ ]
},
"mWcMSKnUQamUykBxND-Ju": {
"title": "Example Apps",
- "description": "",
+ "description": "Example apps are pre-built applications that demonstrate how to use a platform, framework, or set of tools in a practical, real-world scenario. These apps provide developers with hands-on examples of best practices, showing how different components work together and offering a solid starting point for building their projects. Effective example apps are well-documented, easy to set up, and cover common use cases, helping developers quickly understand core concepts and features. By providing clear, functional code, they reduce the learning curve, making it easier for developers to explore, experiment, and adopt new technologies.",
"links": []
},
"omnUSgUHZg2DmnOUJ0Xo1": {
@@ -828,7 +1179,7 @@
},
"oGTIvAY3zYgoiC63FQRSd": {
"title": "Forums",
- "description": "",
+ "description": "Support forums are online platforms where developers can ask questions, share solutions, and collaborate on challenges related to specific products, technologies, or tools. They serve as a valuable resource for peer-to-peer support, allowing the community to contribute their expertise, often reducing the workload of official support teams. Active participation by developer advocates in these forums can foster stronger relationships, provide real-time feedback, and build trust within the community by addressing issues, clarifying doubts, and offering guidance in a more interactive and collaborative environment.",
"links": []
},
"j6tr3mAaKqTuEFTRSCsrK": {
@@ -838,8 +1189,19 @@
},
"4GCQ3stXxW1HrlAVC0qDl": {
"title": "FAQs",
- "description": "",
- "links": []
+ "description": "FAQs (Frequently Asked Questions) serve as a self-service resource that addresses common queries or issues users may encounter. They help reduce the volume of support tickets by providing quick, accessible answers to recurring problems, ranging from technical troubleshooting to product usage. Well-structured FAQs not only improve customer satisfaction by offering immediate solutions but also free up support teams to focus on more complex cases, ultimately enhancing overall efficiency and user experience. For developer relations, FAQs can include coding examples, integration tips, and clarifications about APIs or tools.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "12 Crystal-Clear FAQ Page Examples & How to Make Your Own",
+ "url": "https://blog.hubspot.com/service/faq-page",
+ "type": "article"
+ },
+ {
+ "title": "How to write an FAQ page",
+ "url": "https://uk.indeed.com/career-advice/career-development/how-to-write-faq-page",
+ "type": "article"
+ }
+ ]
},
"weyCcboaekqf5NuVAOxfU": {
"title": "Office Hours",
@@ -858,23 +1220,62 @@
},
"afR1VviBs2w0k8UmP38vn": {
"title": "Community Growth",
- "description": "",
- "links": []
+ "description": "Growing a community is a multifaceted process that requires strategic planning, consistent engagement, and a focus on providing value to members. It involves creating a welcoming environment, fostering meaningful interactions, and continuously adapting to the needs and interests of the community. Key aspects include developing clear goals, establishing communication channels, organizing events, encouraging user-generated content, and leveraging data-driven insights to refine strategies. Successful community growth not only increases numbers but also enhances the quality of interactions, builds loyalty, and creates a self-sustaining ecosystem where members actively contribute to and benefit from the community's collective knowledge and resources.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "ecrets to Building the Most Engaging Community Ever",
+ "url": "https://www.youtube.com/watch?v=6ZVpufakwfk",
+ "type": "video"
+ }
+ ]
},
"RXj0yB7KsIOM5whwtyBBU": {
"title": "Engagement Rates",
- "description": "",
- "links": []
+ "description": "Engagement rates are key metrics that measure how actively an audience interacts with content on platforms like social media, blogs, or community forums. They reflect actions such as likes, comments, shares, and clicks, indicating how well the content resonates with the audience. High engagement rates suggest that the content is relevant, valuable, and appealing, while low rates can signal a need for adjustment in the messaging or approach. Tracking engagement rates helps in understanding audience preferences, refining content strategies, and assessing the effectiveness of campaigns, making them essential for improving overall communication and outreach efforts.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Engagement Rate",
+ "url": "https://sproutsocial.com/glossary/engagement-rate/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to Engagement Rate",
+ "url": "https://www.youtube.com/watch?v=SCTbIwADCo4",
+ "type": "video"
+ }
+ ]
},
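
As a rough illustration of the metric itself, below is a tiny sketch of one common way engagement rate is computed (interactions divided by followers). Platforms differ on whether they divide by followers, reach, or impressions, so treat this exact formula as an assumption rather than a standard.

```python
# Engagement rate by followers: one common definition, assuming
# "interactions" = likes + comments + shares. Platforms differ on whether
# to divide by followers, reach, or impressions.
def engagement_rate(likes: int, comments: int, shares: int, followers: int) -> float:
    """Return engagement as a percentage of followers."""
    if followers == 0:
        return 0.0
    interactions = likes + comments + shares
    return interactions / followers * 100

# Example: a post with 120 likes, 15 comments and 9 shares
# on an account with 4,000 followers.
print(f"{engagement_rate(120, 15, 9, 4000):.2f}%")  # -> 3.60%
```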
"yhDBZfUAjumFHpUZtmLg3": {
"title": "Content Performance",
- "description": "",
- "links": []
+ "description": "Content performance involves measuring, analyzing, and optimizing the impact of created content on the target audience. This multifaceted process encompasses tracking various metrics such as engagement rates, conversion rates, time spent on page, and social shares to gauge the effectiveness of content in achieving its intended goals. By leveraging sophisticated analytics tools and techniques, DevRel professionals can gain valuable insights into audience behavior, preferences, and pain points, enabling them to refine their content strategy, improve user experience, and ultimately drive better outcomes for both developers and the organization.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Content Performance: 19 Metrics to Track Your Results",
+ "url": "https://www.semrush.com/blog/content-performance/",
+ "type": "article"
+ },
+ {
+ "title": "How to measure the ROI of your content efforts",
+ "url": "https://www.youtube.com/watch?v=j1CNmi302Oc",
+ "type": "video"
+ }
+ ]
},
"AwMwMU9hg_gCKPP4tykHb": {
"title": "Developer Satisfaction",
- "description": "",
- "links": []
+ "description": "Developer satisfaction refers to how content and engaged developers feel when using a platform, tool, or service. It encompasses aspects like ease of use, quality of documentation, helpfulness of support, and overall developer experience. High developer satisfaction is crucial because it leads to greater adoption, advocacy, and retention. To achieve it, platforms need to listen actively to developer feedback, provide intuitive and well-documented tools, and ensure quick, effective support. Regularly measuring satisfaction through surveys, feedback loops, and usage analytics helps identify areas for improvement, ensuring that the platform continues to meet developer needs effectively.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What are developer experience metrics?",
+ "url": "https://www.cortex.io/post/developer-experience-metrics-for-software-development-success",
+ "type": "article"
+ },
+ {
+ "title": "How to measure developer experience with metrics",
+ "url": "https://www.opslevel.com/resources/measuring-developer-experience-with-metrics",
+ "type": "article"
+ }
+ ]
},
"psk3bo-nSskboAoVTjlpz": {
"title": "Tools",
@@ -883,8 +1284,19 @@
},
"8xrhjG9qmbsoBC3F8zS-b": {
"title": "Google Analytics",
- "description": "",
- "links": []
+ "description": "Google Analytics is a tool used to track and analyze website traffic and user behavior. It helps advocates understand how developers interact with content such as documentation, tutorials, blogs, or event pages. By analyzing metrics like page views, bounce rates, and user demographics, developer advocates can gauge the effectiveness of their outreach efforts, identify popular resources, and optimize content strategies. This data-driven approach allows for better engagement, personalization, and improved targeting, ultimately helping advocates cater to the specific needs of the developer community.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Google Analytics Academy",
+ "url": "https://developers.google.com/analytics",
+ "type": "article"
+ },
+ {
+ "title": "Get started with Google Analytics",
+ "url": "https://www.youtube.com/watch?v=UuE37-MM1ws",
+ "type": "video"
+ }
+ ]
},
"x8RIrK2VB-LBFbt6hAcQb": {
"title": "Social Media Analytics",
@@ -908,8 +1320,19 @@
},
"0dRnUlgze87eq2FVU_mWp": {
"title": "Data Visualization",
- "description": "",
- "links": []
+ "description": "Data visualization involves using charts, graphs, and other visual tools to present data clearly and effectively. It transforms raw numbers into visual formats that make patterns, trends, and insights easier to understand at a glance. Good visualizations simplify complex data, highlighting key findings and supporting informed decision-making. When creating these visuals, it’s important to choose the right type—like bar charts for comparisons, line graphs for trends, or pie charts for proportions—and ensure they are clean, accurate, and easy to read.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "What is Data Visualization?",
+ "url": "https://www.ibm.com/topics/data-visualization",
+ "type": "article"
+ },
+ {
+ "title": "Data Visualization in 2024",
+ "url": "https://www.youtube.com/watch?v=loYuxWSsLNc",
+ "type": "video"
+ }
+ ]
},
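
For illustration, here is a minimal sketch of the kind of comparison chart the description mentions, using matplotlib with made-up engagement numbers; the channel names and figures are purely hypothetical.

```python
# A minimal data-visualization sketch using matplotlib.
# The engagement numbers below are made up purely for illustration.
import matplotlib.pyplot as plt

channels = ["Blog", "YouTube", "X", "Newsletter"]
monthly_engagements = [1200, 3400, 2100, 800]

fig, ax = plt.subplots()
ax.bar(channels, monthly_engagements)          # bar chart: suited to comparisons
ax.set_title("Engagements by channel (sample data)")
ax.set_ylabel("Interactions per month")
fig.tight_layout()
plt.show()
```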
"mh1BZDVkc-VwA8aQAmDhO": {
"title": "Insights & Recommendations",
@@ -933,13 +1356,35 @@
},
"ue0NaNnNpF7UhvJ8j0Yuo": {
"title": "Conference Speaking",
- "description": "",
- "links": []
+ "description": "Conference speaking is a pivotal aspect of Developer Relations, offering a powerful platform for sharing knowledge, showcasing expertise, and fostering connections within the tech community. As a DevRel professional, mastering the art of public speaking allows you to effectively communicate complex technical concepts, inspire fellow developers, and represent your organization at industry events. This skill encompasses not only delivering engaging presentations but also crafting compelling narratives, tailoring content to diverse audiences, and navigating the intricacies of conference logistics. By honing your conference speaking abilities, you can significantly amplify your impact, establish thought leadership, and contribute to the growth and evolution of the developer ecosystem.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How to speak at a conference",
+ "url": "https://www.eventible.com/learning/how-to-speak-at-a-conference/",
+ "type": "article"
+ },
+ {
+ "title": "Secrets of Great Conference Talks",
+ "url": "https://www.youtube.com/watch?v=rOf5sPSBLjg",
+ "type": "video"
+ }
+ ]
},
"HN2gNsYYRLVOOdy_r8FKJ": {
"title": "Building a Personal Brand",
- "description": "",
- "links": []
+ "description": "Building a personal brand in developer relations is about establishing a unique and authentic presence that showcases your expertise, values, and contributions to the developer community. It involves consistently sharing knowledge, insights, and experiences through various channels such as blogs, social media, podcasts, or public speaking, while engaging in meaningful conversations and collaborations. A strong personal brand helps build credibility and trust, making it easier to connect with other developers, influencers, and potential partners.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "A New Approach to Building Your Personal Brand",
+ "url": "https://hbr.org/2023/05/a-new-approach-to-building-your-personal-brand",
+ "type": "article"
+ },
+ {
+ "title": "5 Steps to Building a Personal Brand You Feel Good About",
+ "url": "https://www.youtube.com/watch?v=ozMCb0wOnMU",
+ "type": "video"
+ }
+ ]
},
"4ygpqUK70hI5r1AmmfMZq": {
"title": "Networking Strategies",
@@ -963,7 +1408,7 @@
},
"bwwk6ESNyEJa3fCAIKPwh": {
"title": "Continuous Learning",
- "description": "",
+ "description": "The developer landscape is continuously evolving and while it's not required to stay on top of the entire industry, it is crucial that you keep up to date with the trends in your area.",
"links": []
}
}
\ No newline at end of file
diff --git a/public/roadmap-content/frontend.json b/public/roadmap-content/frontend.json
index 6b250c7de..11fa5ae0b 100644
--- a/public/roadmap-content/frontend.json
+++ b/public/roadmap-content/frontend.json
@@ -2587,6 +2587,11 @@
"url": "https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API",
"type": "article"
},
+ {
+ "title": "Web Storage API – How to Store Data on the Browser",
+ "url": "https://www.freecodecamp.org/news/web-storage-api-how-to-store-data-on-the-browser/",
+ "type": "article"
+ },
{
"title": "Explore top posts about Storage",
"url": "https://app.daily.dev/tags/storage?ref=roadmapsh",
diff --git a/public/roadmap-content/game-developer.json b/public/roadmap-content/game-developer.json
index 9ac90222f..fbe450471 100644
--- a/public/roadmap-content/game-developer.json
+++ b/public/roadmap-content/game-developer.json
@@ -1,8 +1,14 @@
{
"rQArtuVKGVgLn_fw9yO3b": {
"title": "Client Side Development",
- "description": "In game development, the term \"Client Side\" refers to all the operations and activities that occur on the player's machine, which could be a console, computer, or even a phone. The client side is responsible for rendering graphics, handling input from the user and sometimes processing game logic. This is in contrast to the server-side operations, which involve handling multiplayer connections and synchronizing game states among multiple clients. On the client side, developers need to ensure performance optimization, smooth UI/UX, quick load times, and security to provide an engaging, lag-free gaming experience. Security is also crucial to prevent cheating in multiplayer games, which can be tackled through measures like Data obfuscation and encryption.",
- "links": []
+ "description": "In game development, the term \"Client Side\" refers to all the operations and activities that occur on the player's machine, which could be a console, computer, or even a phone. The client side is responsible for rendering graphics, handling input from the user and sometimes processing game logic. This is in contrast to the server-side operations, which involve handling multiplayer connections and synchronizing game states among multiple clients. On the client side, developers need to ensure performance optimization, smooth UI/UX, quick load times, and security to provide an engaging, lag-free gaming experience. Security is also crucial to prevent cheating in multiplayer games, which can be tackled through measures like Data obfuscation and encryption.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Client Side Architecture",
+ "url": "https://gabrielgambetta.com/client-server-game-architecture.html",
+ "type": "article"
+ }
+ ]
},
"m1wX27XBWKXZcTMH2U1xp": {
"title": "Game Mathematics",
@@ -17,15 +23,20 @@
},
"grRf-MmaXimDB4iODOV47": {
"title": "Linear Algebra",
- "description": "Linear Algebra is a vital field in Mathematics that is extensively used in game development. It revolves around vector spaces and the mathematical structures used therein, including matrices, determinants, vectors, eigenvalues, and eigenvectors, among others. In the context of game development, linear algebra is used mainly for computer graphics, physics, AI, and many more. It allows developers to work with spatial transformations, helping them manipulate and critically interact with the 3D space of the game. On a broader context, it is important in computer programming for algorithms, parallax shifting, polygonal modeling, collision detection, etc. From object movements, positional calculations, game physics, to creating dynamism in games, linear algebra is key.",
+ "description": "Linear Algebra is a vital field in Mathematics that is extensively used in game development. It revolves around vector spaces and the mathematical structures used therein, including matrices, determinants, vectors, eigenvalues, and eigenvectors, among others. In the context of game development, linear algebra is used mainly for computer graphics, physics, AI, and many more. It allows developers to work with spatial transformations, helping them manipulate and critically interact with the 3D space of the game. On a broader context, it is important in computer programming for algorithms, parallax shifting, polygonal modeling, collision detection, etc. From object movements, positional calculations, game physics, to creating dynamism in games, linear algebra is key.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Linear Algebra",
+ "url": "https://en.wikipedia.org/wiki/Linear_algebra",
+ "type": "article"
+ },
{
"title": "Explore top posts about Math",
"url": "https://app.daily.dev/tags/math?ref=roadmapsh",
"type": "article"
},
{
- "title": "Linear Algebra full course by Kimberly Brehm",
+ "title": "Linear Algebra Full Course by Kimberly Brehm",
"url": "https://youtube.com/playlist?list=PLl-gb0E4MII03hiCrZa7YqxUMEeEPmZqK&si=_r0WDwh94NKJbs_R",
"type": "video"
}
@@ -33,8 +44,13 @@
},
"yLEyh5XJ3sl8eHD-PoSvJ": {
"title": "Vector",
- "description": "`Vector` in game development is a mathematical concept and an integral part of game physics. It represents a quantity that has both magnitude and direction. A vector can be used to represent different elements in a game like positions, velocities, accelerations, or directions. In 3D games, it's commonly used to define 3D coordinates (x, y, z). For example, if you have a character in a game and you want to move it up, you'd apply a vector that points upward. Hence, understanding how to manipulate vectors is a fundamental skill in game development.",
+ "description": "`Vector` in game development is a mathematical concept and an integral part of game physics. It represents a quantity that has both magnitude and direction. A vector can be used to represent different elements in a game like positions, velocities, accelerations, or directions. In 3D games, it's commonly used to define 3D coordinates (x, y, z). For example, if you have a character in a game and you want to move it up, you'd apply a vector that points upward. Hence, understanding how to manipulate vectors is a fundamental skill in game development.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Practical Introduction to Vectors for Game Development",
+ "url": "https://dev.to/marcbeaujean/practical-introduction-to-vectors-for-game-development-532f",
+ "type": "article"
+ },
{
"title": "Introduction to Vectors (By Sebastian Lague)",
"url": "https://youtu.be/m7VY1T6f8Ak?feature=shared",
@@ -44,37 +60,103 @@
},
"Kg6Mg9ieUUGXWX9Lai7B0": {
"title": "Matrix",
- "description": "In game development, a **matrix** is a fundamental part of game mathematics. It's a grid of numbers arranged into rows and columns that's particularly important in 3D game development. These matrices are typically 4x4, meaning they contain 16 floating point numbers, and they're used extensively for transformations. They allow for the scaling, rotation, and translation (moving) of 3D vertices in space. With matrices, these transformations can be combined, and transformed vertices can be used to draw the replicas of 3D models into 2D screen space for rendering.",
- "links": []
+ "description": "In game development, a **matrix** is a fundamental part of game mathematics. It's a grid of numbers arranged into rows and columns that's particularly important in 3D game development. These matrices are typically 4x4, meaning they contain 16 floating point numbers, and they're used extensively for transformations. They allow for the scaling, rotation, and translation (moving) of 3D vertices in space. With matrices, these transformations can be combined, and transformed vertices can be used to draw the replicas of 3D models into 2D screen space for rendering.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Matrix Algebra and Game Programming",
+ "url": "https://www.gameludere.com/2019/12/21/matrix-algebra-and-game-programming/",
+ "type": "article"
+ },
+ {
+ "title": "Matrices in Game Development",
+ "url": "https://dev.to/fkkarakurt/matrices-and-vectors-in-game-development-67h",
+ "type": "article"
+ }
+ ]
},
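
To ground the 4x4 transformation idea, here is a short sketch using NumPy (assumed available) that builds translation and scaling matrices and applies their product to a vertex in homogeneous coordinates.

```python
# Sketch: 4x4 transformation matrices applied to a vertex in homogeneous
# coordinates, using NumPy.
import numpy as np

def translation(tx, ty, tz):
    m = np.identity(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

vertex = np.array([1.0, 2.0, 3.0, 1.0])                 # w = 1 marks a point
transform = translation(10, 0, -5) @ scaling(2, 2, 2)   # scale first, then move
print(transform @ vertex)                               # -> the point (12, 4, 1)
```

Combining the two matrices into one, as done here, is exactly the property the description highlights: transformations can be composed and applied in a single multiply per vertex.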
"XWxW2ZBw3LcQ4DRk4tgAG": {
"title": "Geometry",
- "description": "Geometry in game development refers to the mathematical study used to define the spatial elements within a game. This is vital in determining how objects interact within a game's environment. Particularly, geometry is employed in various aspects like object rendering, collision detection, character movement, and the calculation of angles and distance. It allows developers to create the spatial parameters for a game, including object dimensions and orientations. Understanding the basics such as 2D vs 3D, polygons, vertices, meshes and more advanced topics such as vectors, matrices, quaternions etc. is crucial to this field.",
- "links": []
+ "description": "Geometry in game development refers to the mathematical study used to define the spatial elements within a game. This is vital in determining how objects interact within a game's environment. Particularly, geometry is employed in various aspects like object rendering, collision detection, character movement, and the calculation of angles and distance. It allows developers to create the spatial parameters for a game, including object dimensions and orientations. Understanding the basics such as 2D vs 3D, polygons, vertices, meshes and more advanced topics such as vectors, matrices, quaternions etc. is crucial to this field.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game Geometry - Math is Fun",
+ "url": "https://www.mathsisfun.com/geometry/index.html",
+ "type": "article"
+ },
+ {
+ "title": "Geometry and Primitives for Games",
+ "url": "https://dev.to/fkkarakurt/geometry-and-primitives-in-game-development-1og",
+ "type": "article"
+ }
+ ]
},
"XABzEU9owCx9-zw1id9xU": {
"title": "Linear Transformation",
- "description": "`Linear transformations` or `linear maps` are an important concept in mathematics, particularly in the fields of linear algebra and functional analysis. A linear transformation can be thought of as a transformation that preserves the operations of addition and scalar multiplication. In other words, a transformation T is linear if for every pair of vectors `x` and `y`, the equation T(x + y) = T(x) + T(y) holds true. Similarly, for any scalar `c` and any vector `x`, the equation T(cx)=cT(x) should also hold true. This property makes them very useful when dealing with systems of linear equations, matrices, and in many areas of computer graphics, including game development.",
- "links": []
+ "description": "`Linear transformations` or `linear maps` are an important concept in mathematics, particularly in the fields of linear algebra and functional analysis. A linear transformation can be thought of as a transformation that preserves the operations of addition and scalar multiplication. In other words, a transformation T is linear if for every pair of vectors `x` and `y`, the equation `T(x + y) = T(x) + T(y)` holds true. Similarly, for any scalar `c` and any vector `x`, the equation `T(cx)=cT(x)` should also hold true. This property makes them very useful when dealing with systems of linear equations, matrices, and in many areas of computer graphics, including game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Linear Transformation",
+ "url": "https://en.wikipedia.org/wiki/Linear_map",
+ "type": "article"
+ },
+ {
+ "title": "Explore top posts about Math",
+ "url": "https://app.daily.dev/tags/math?ref=roadmapsh",
+ "type": "article"
+ }
+ ]
},
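
A tiny sketch that checks the two linearity properties numerically for a matrix map, assuming NumPy; the matrix and vectors are arbitrary.

```python
# Sketch: verifying T(x + y) = T(x) + T(y) and T(c*x) = c*T(x)
# for a matrix transformation, using NumPy.
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])          # T(v) = A @ v is a linear map

def T(v):
    return A @ v

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
c = 4.0

print(np.allclose(T(x + y), T(x) + T(y)))   # True: additivity
print(np.allclose(T(c * x), c * T(x)))      # True: homogeneity
```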
"r5TcXQsU9s4NlAQIPvZ3U": {
"title": "Affine Space",
- "description": "In the context of game mathematics, an **Affine Space** is a fundamental concept you should understand. It is a geometric structure with properties related to both geometry and algebra. The significant aspect of an affine space is that it allows you to work more comfortably with points and vectors. While a vector space on its own focuses on vectors which have both magnitude and direction, it does not involve points. An affine space makes it easy to add vectors to points or subtract points from each other to get vectors. This concept proves extremely useful in the field of game development, particularly when dealing with graphical models, animations, and motion control.",
- "links": []
+ "description": "In the context of game mathematics, an **Affine Space** is a fundamental concept you should understand. It is a geometric structure with properties related to both geometry and algebra. The significant aspect of an affine space is that it allows you to work more comfortably with points and vectors. While a vector space on its own focuses on vectors which have both magnitude and direction, it does not involve points. An affine space makes it easy to add vectors to points or subtract points from each other to get vectors. This concept proves extremely useful in the field of game development, particularly when dealing with graphical models, animations, and motion control.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Affine Space",
+ "url": "https://en.wikipedia.org/wiki/Affine_space",
+ "type": "article"
+ },
+ {
+ "title": "Understanding Affine Space",
+ "url": "https://brilliant.org/wiki/affine-spaces/",
+ "type": "article"
+ }
+ ]
},
"SkCreb6g4i-OFtJWhRYqO": {
"title": "Affine Transformation",
- "description": "An **affine transformation**, in the context of game mathematics, is a function between affine spaces which preserves points, straight lines and planes. Also, sets of parallel lines remain parallel after an affine transformation. In video games, it's typically used for manipulating an object's position in 3D space. This operation allows game developers to perform multiple transformations such as translation (moving an object from one place to another), scaling (changing the size of an object), and rotation (spinning the object around a point). An important feature of affine transformation is that it preserves points uniqueness; if two points are distinct to start with, they remain distinct after transformation. It's important to note that these transformations are applied relative to an object's own coordinate system, not the world coordinate system.",
- "links": []
+ "description": "An **affine transformation**, in the context of game mathematics, is a function between affine spaces which preserves points, straight lines and planes. Also, sets of parallel lines remain parallel after an affine transformation. In video games, it's typically used for manipulating an object's position in 3D space. This operation allows game developers to perform multiple transformations such as translation (moving an object from one place to another), scaling (changing the size of an object), and rotation (spinning the object around a point). An important feature of affine transformation is that it preserves points uniqueness; if two points are distinct to start with, they remain distinct after transformation. It's important to note that these transformations are applied relative to an object's own coordinate system, not the world coordinate system.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Affine Transformation",
+ "url": "https://www.gamedevs.org/uploads/affine-transformations.pdf",
+ "type": "article"
+ },
+ {
+ "title": "Understanding Affine Transformations",
+ "url": "https://code.tutsplus.com/understanding-affine-transformations-with-matrix-mathematics--active-10884t",
+ "type": "article"
+ }
+ ]
},
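
The homogeneous-coordinate trick behind affine transforms can be shown in a few lines. The sketch below (NumPy assumed) combines a rotation and a translation into one matrix and shows that points (w = 1) are translated while directions (w = 0) are not.

```python
# Sketch: an affine transformation (rotation about Z, then a translation)
# expressed as a single 4x4 matrix in homogeneous coordinates.
import numpy as np

def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def translation(tx, ty, tz):
    m = np.identity(4)
    m[:3, 3] = [tx, ty, tz]
    return m

affine = translation(5, 0, 0) @ rotation_z(np.pi / 2)

point     = np.array([1.0, 0.0, 0.0, 1.0])   # w = 1: affected by translation
direction = np.array([1.0, 0.0, 0.0, 0.0])   # w = 0: rotated but never moved

print(affine @ point)       # roughly (5, 1, 0): rotated, then translated
print(affine @ direction)   # roughly (0, 1, 0): the direction only rotates
```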
"iIWEjpkNFBj4R5wQ0mcWY": {
"title": "Orientation",
- "description": "In the context of game development, **Orientation** refers to the aspect or direction in which an object is pointed in a 3D space. To determine an object's orientation in 3D space, we typically use three angles namely: pitch, yaw, and roll collectively known as Euler's angles. **Pitch** is the rotation around the X-axis, **Yaw** around the Y-axis and **Roll** around the Z-axis. Alternatively, orientation can also be represented using a Quaternion. Quaternions have the advantage of avoiding a problem known as Gimbal lock (a loss of one degree of freedom in 3D space), present when using Euler's angles.",
- "links": []
+ "description": "In the context of game development, **Orientation** refers to the aspect or direction in which an object is pointed in a 3D space. To determine an object's orientation in 3D space, we typically use three angles namely: pitch, yaw, and roll collectively known as Euler's angles. **Pitch** is the rotation around the X-axis, **Yaw** around the Y-axis and **Roll** around the Z-axis. Alternatively, orientation can also be represented using a Quaternion. Quaternions have the advantage of avoiding a problem known as Gimbal lock (a loss of one degree of freedom in 3D space), present when using Euler's angles.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Orientation of Character",
+ "url": "https://medium.com/@dev.szabo.endre/a-bit-e-of-game-dev-math-01-character-movement-with-vectors-effc862a1e4f",
+ "type": "article"
+ },
+ {
+ "title": "Vector Maths for Game Developers",
+ "url": "https://www.gamedeveloper.com/programming/vector-maths-for-game-dev-beginners",
+ "type": "article"
+ }
+ ]
},
"zPs_LlDvkfxvvCrk5fXB2": {
"title": "Quaternion",
- "description": "The **quaternion** is a complex number system that extends the concept of rotations in three dimensions. It involves four components: one real and three imaginary parts. Quaternions are used in game development for efficient and accurate calculations of rotations and orientation. They are particularly useful over other methods, such as Euler angles, due to their resistance to problems like Gimbal lock. Despite their complex nature, understanding and implementing quaternions can greatly enhance a game's 3D rotational mechanics and accuracy.",
+ "description": "The **quaternion** is a complex number system that extends the concept of rotations in three dimensions. It involves four components: one real and three imaginary parts. Quaternions are used in game development for efficient and accurate calculations of rotations and orientation. They are particularly useful over other methods, such as Euler angles, due to their resistance to problems like Gimbal lock. Despite their complex nature, understanding and implementing quaternions can greatly enhance a game's 3D rotational mechanics and accuracy.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Understanding Quaternions",
@@ -95,18 +177,40 @@
},
"L0J2kvveJNsmN9ueXhqKf": {
"title": "Euler Angle",
- "description": "The **Euler angle** is a concept in mathematics and physics used to describe the orientation of a rigid body or a coordinate system in 3D space. It uses three angles, typically named as alpha (α), beta (β), and gamma (γ), and represents three sequential rotations around the axes of the original coordinate system. Euler angles can represent any rotation as a sequence of three elementary rotations. Keep in mind, however, that Euler angles are not unique, and different sequences of rotations can represent identical total effects. It's also noteworthy that Euler angles are prone to a problem known as gimbal lock, where the first and third axis align, causing a loss of a degree of freedom, and unpredictable behavior in particular orientations.",
- "links": []
+ "description": "The **Euler angle** is a concept in mathematics and physics used to describe the orientation of a rigid body or a coordinate system in 3D space. It uses three angles, typically named as alpha (α), beta (β), and gamma (γ), and represents three sequential rotations around the axes of the original coordinate system. Euler angles can represent any rotation as a sequence of three elementary rotations. Keep in mind, however, that Euler angles are not unique, and different sequences of rotations can represent identical total effects. It's also noteworthy that Euler angles are prone to a problem known as gimbal lock, where the first and third axis align, causing a loss of a degree of freedom, and unpredictable behavior in particular orientations.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Euler Angle in Game Development",
+ "url": "https://www.gameludere.com/2020/03/12/euler-angles-hamilton-quaternions-and-video-games/",
+ "type": "article"
+ }
+ ]
},
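
Here is a small sketch of composing yaw, pitch, and roll into a single rotation matrix, assuming NumPy. The Y-X-Z order used here is just one of several conventions, and a 90° pitch reproduces the gimbal-lock configuration the description mentions.

```python
# Sketch: composing yaw (Y), pitch (X) and roll (Z) rotations into one matrix.
# Rotation-order conventions vary between engines.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_to_matrix(yaw, pitch, roll):
    # One common convention: R = Ry(yaw) @ Rx(pitch) @ Rz(roll)
    return rot_y(yaw) @ rot_x(pitch) @ rot_z(roll)

# With pitch at +/-90 degrees, yaw and roll end up rotating about the same
# effective axis: the classic gimbal-lock configuration.
R = euler_to_matrix(np.radians(30), np.radians(90), np.radians(10))
print(np.round(R, 3))
```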
"YTkOF_33oL1ZkA-loc_DP": {
"title": "Curve",
- "description": "Curve\n-----",
- "links": []
+ "description": "A `curve` in game development is a mathematical tool for creating smooth lines or paths, used in areas like animation, 3D modeling, UI design, and level layouts. Curves simplify complex shapes and movements, enabling more natural results compared to linear approaches. They're key for `lifelike animations`, organic shapes, `realistic physics`, and smooth camera movements, essential for polished, fluid game design.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Curves",
+ "url": "https://en.wikipedia.org/wiki/Curve",
+ "type": "article"
+ },
+ {
+ "title": "Basics of Curves",
+ "url": "https://byjus.com/maths/curve/",
+ "type": "article"
+ }
+ ]
},
"nTiHZXRh2j3_FsBotmlGf": {
"title": "Spline",
- "description": "`Spline` is a mathematical function widely used in computer graphics for generating curves and surfaces. It connects two or more points through a smooth curve, often used in games for defining pathways, movement paths, object shapes, and flow control. Splines are not confined to two dimensions and can be extended to 3D or higher dimensions. Types of splines include `Linear`, `Cubic`, and `Bezier` splines. While linear splines generate straight lines between points, cubic and bezier splines provide more control and complexity with the addition of control points and handles. Developing a good understanding of splines and their usage can vastly improve the fluidity and visual aesthetics of a game.\n\nLearn more from the following resources:",
+ "description": "`Spline` is a mathematical function widely used in computer graphics for generating curves and surfaces. It connects two or more points through a smooth curve, often used in games for defining pathways, movement paths, object shapes, and flow control. Splines are not confined to two dimensions and can be extended to 3D or higher dimensions. Types of splines include `Linear`, `Cubic`, and `Bezier` splines. While linear splines generate straight lines between points, cubic and bezier splines provide more control and complexity with the addition of control points and handles. Developing a good understanding of splines and their usage can vastly improve the fluidity and visual aesthetics of a game.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Spline in Mathematics",
+ "url": "https://en.wikipedia.org/wiki/Spline_(mathematics)",
+ "type": "article"
+ },
{
"title": "@Video@In-depth video about Splines by Freya Holmér",
"url": "https://youtu.be/jvPPXbo87ds?si=JX_G-gS81tOwQwjf",
@@ -116,93 +220,230 @@
},
"m4AuHjEBnHS0wyATG-I1Z": {
"title": "Hermite",
- "description": "Hermite refers to Hermite interpolation, a fundamental technique in game development for executing smooth transitions. Essentially, Hermite interpolation is an application of polynomial mathematics, with two points applied as start/end (they're usually 3D positional vectors), and the tangents at these points controlling the curve's shape. The technique's name is derived from its inventor, Charles Hermite, a French mathematician. Hermite interpolation can be useful in different aspects of game development, such as creating smooth animations, camera paths, or motion patterns. Note, however, that while Hermite interpolation offers control over the start and end points of a sequence, it might not precisely predict the curve's full behavior.",
- "links": []
+ "description": "Hermite refers to Hermite interpolation, a fundamental technique in game development for executing smooth transitions. Essentially, Hermite interpolation is an application of polynomial mathematics, with two points applied as start/end (they're usually 3D positional vectors), and the tangents at these points controlling the curve's shape. The technique's name is derived from its inventor, Charles Hermite, a French mathematician. Hermite interpolation can be useful in different aspects of game development, such as creating smooth animations, camera paths, or motion patterns. Note, however, that while Hermite interpolation offers control over the start and end points of a sequence, it might not precisely predict the curve's full behavior.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Hermite Interpolation",
+ "url": "https://en.wikipedia.org/wiki/Hermite_interpolation",
+ "type": "article"
+ }
+ ]
},
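
The standard cubic Hermite basis can be written in a few lines; here is a 1D sketch with arbitrary endpoint and tangent values.

```python
# Sketch: cubic Hermite interpolation between two values p0, p1 with
# tangents m0, m1, evaluated at t in [0, 1].
def hermite(p0, m0, p1, m1, t):
    t2, t3 = t * t, t * t * t
    h00 = 2 * t3 - 3 * t2 + 1     # weight for p0
    h10 = t3 - 2 * t2 + t         # weight for m0
    h01 = -2 * t3 + 3 * t2        # weight for p1
    h11 = t3 - t2                 # weight for m1
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# A 1D ease between y=0 and y=10 with zero tangents at both ends.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, hermite(0.0, 0.0, 10.0, 0.0, t))
```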
"DUEEm9sAaZqSI-W-PFZ8f": {
"title": "Bezier",
- "description": "`Bezier curves` are named after Pierre Bezier, a French engineer working at Renault, who used them in the 1960s for designing car bodies. A Bezier curve is defined by a set of control points with a minimum of two but no upper limit. The curve is calculated between the first and the last control point and does not pass through the controlling points, which only influence the direction of the curve. There are linear, quadratic, and cubic Bezier curves, but curves with more control points are also possible. They are widely used in computer graphics, animations, and are extensively used in vector images and tools to create shapes, texts, and objects.",
- "links": []
+ "description": "`Bezier curves` are named after Pierre Bezier, a French engineer working at Renault, who used them in the 1960s for designing car bodies. A Bezier curve is defined by a set of control points with a minimum of two but no upper limit. The curve is calculated between the first and the last control point and does not pass through the controlling points, which only influence the direction of the curve. There are linear, quadratic, and cubic Bezier curves, but curves with more control points are also possible. They are widely used in computer graphics, animations, and are extensively used in vector images and tools to create shapes, texts, and objects.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Bezier Curves for Your Games",
+ "url": "http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/",
+ "type": "article"
+ },
+ {
+ "title": "Bezier Curves Explained",
+ "url": "https://www.youtube.com/watch?v=pnYccz1Ha34",
+ "type": "video"
+ }
+ ]
},
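
For illustration, here is a small sketch evaluating a cubic Bezier curve in its Bernstein form; the four control points are arbitrary.

```python
# Sketch: evaluating a cubic Bezier curve from four 2D control points.
def cubic_bezier(p0, p1, p2, p3, t):
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# The curve starts at p0 and ends at p3; p1 and p2 only pull it towards them.
p0, p1, p2, p3 = (0, 0), (0, 1), (1, 1), (1, 0)
for i in range(5):
    t = i / 4
    print(t, cubic_bezier(p0, p1, p2, p3, t))
```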
"N9GoA3YvOaKwYjljj6NZv": {
"title": "Catmull-Rom",
- "description": "The **Catmull-Rom** spline is a form of interpolation used in 2D and 3D graphics. Named after Edwin Catmull and Raphael Rom, it offers a simple way to smoothly move objects along a set of points or, in terms of graphics, to smoothly draw a curve connecting several points. It's a cubic interpolating spline, meaning it uses the cubic polynomial to compute coordinates. This makes Catmull-Rom ideal for creating smooth and natural curves in graphics and animation. It also has a feature called C1 continuity, ensuring the curve doesn't have any abrupt changes in direction. However, if not managed properly, it can create loops between points.",
- "links": []
+ "description": "The **Catmull-Rom** spline is a form of interpolation used in 2D and 3D graphics. Named after Edwin Catmull and Raphael Rom, it offers a simple way to smoothly move objects along a set of points or, in terms of graphics, to smoothly draw a curve connecting several points. It's a cubic interpolating spline, meaning it uses the cubic polynomial to compute coordinates. This makes Catmull-Rom ideal for creating smooth and natural curves in graphics and animation. It also has a feature called C1 continuity, ensuring the curve doesn't have any abrupt changes in direction. However, if not managed properly, it can create loops between points.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Catmull Rom",
+ "url": "https://gamedev.net/forums/topic/535895-catmull-rom-in-shmups/",
+ "type": "article"
+ },
+ {
+ "title": "Catmull Rom Spline - Game Development",
+ "url": "https://gamedev.stackexchange.com/questions/47354/catmull-rom-spline-constant-speed",
+ "type": "article"
+ }
+ ]
},
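
Below is a minimal sketch of a uniform Catmull-Rom segment in its standard 0.5-scaled polynomial form; the sample values are arbitrary 1D numbers chosen so the pass-through property is easy to see.

```python
# Sketch: a uniform Catmull-Rom segment interpolating between p1 and p2,
# with p0 and p3 shaping the tangents at each end.
def catmull_rom(p0, p1, p2, p3, t):
    t2, t3 = t * t, t * t * t
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t3
    )

# 1D example: the segment passes exactly through p1 at t=0 and p2 at t=1.
print(catmull_rom(0.0, 1.0, 3.0, 4.0, 0.0))  # -> 1.0
print(catmull_rom(0.0, 1.0, 3.0, 4.0, 1.0))  # -> 3.0
print(catmull_rom(0.0, 1.0, 3.0, 4.0, 0.5))  # a smooth point in between
```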
"5qfoD77wU4ETI7rUSy4Nc": {
"title": "Projection",
- "description": "`Projection` in game mathematics often refers to the method by which three-dimensional images are transferred to a two-dimensional plane, typically a computer screen. There are two main types of projection in game development; `Orthographic Projection` and `Perspective Projection`. In the Orthographic Projection, objects maintain their size regardless of their distance from the camera. This is often used in 2D games or 3D games where perspective is not important. On the other hand, Perspective Projection mimics human eye perspective, where distant objects appear smaller. This method provides a more realistic rendering for 3D games. It's crucial to understand projection in game development because it governs how virtual 3D spaces and objects are displayed on 2D viewing platforms.",
- "links": []
+ "description": "`Projection` in game mathematics often refers to the method by which three-dimensional images are transferred to a two-dimensional plane, typically a computer screen. There are two main types of projection in game development; `Orthographic Projection` and `Perspective Projection`. In the Orthographic Projection, objects maintain their size regardless of their distance from the camera. This is often used in 2D games or 3D games where perspective is not important. On the other hand, Perspective Projection mimics human eye perspective, where distant objects appear smaller. This method provides a more realistic rendering for 3D games. It's crucial to understand projection in game development because it governs how virtual 3D spaces and objects are displayed on 2D viewing platforms.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Mesh Triangulation and Polyhedron",
+ "url": "https://mathworld.wolfram.com/Projection.html",
+ "type": "article"
+ }
+ ]
},
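
To make the orthographic-vs-perspective contrast concrete, here is a tiny sketch projecting two points at different depths. The pinhole model with the image plane at z = d is a simplification of what a full projection matrix does, used here only for illustration.

```python
# Sketch: orthographic vs. simple pinhole perspective projection of 3D points
# onto a 2D plane at z = d (camera looking down +Z).
def orthographic(point):
    x, y, _z = point
    return (x, y)                  # depth is simply discarded

def perspective(point, d=1.0):
    x, y, z = point
    return (d * x / z, d * y / z)  # farther points shrink toward the centre

near = (2.0, 1.0, 2.0)
far  = (2.0, 1.0, 8.0)

print(orthographic(near), orthographic(far))  # same size regardless of depth
print(perspective(near), perspective(far))    # the far point projects smaller
```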
"LEJymJ2EaAW5FM5LgKW38": {
"title": "Perspective",
- "description": "In game development, **Perspective** plays a significant role in creating a three-dimensional world on a two-dimensional space. It mimics the way our eyes perceive distance and depth, with objects appearing smaller as they go farther away. Essentially, this is achieved by projecting 3D co-ordinates on a virtual screen. Perspective projection is done in two types - one-point where only one axis displays a change in size with depth and two-point where both axis display a change. It creates more realistic views, enhancing game visualization and immersion. An important aspect is the player's viewpoint, which is the vanishing point where parallel lines converge in the distance.",
- "links": []
+ "description": "In game development, **Perspective** plays a significant role in creating a three-dimensional world on a two-dimensional space. It mimics the way our eyes perceive distance and depth, with objects appearing smaller as they go farther away. Essentially, this is achieved by projecting 3D co-ordinates on a virtual screen. Perspective projection is done in two types - one-point where only one axis displays a change in size with depth and two-point where both axis display a change. It creates more realistic views, enhancing game visualization and immersion. An important aspect is the player's viewpoint, which is the vanishing point where parallel lines converge in the distance.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Perspective in Games",
+ "url": "https://www.gamedeveloper.com/design/making-the-most-out-of-the-first-person-perspectives",
+ "type": "article"
+ }
+ ]
},
"d6C1qFv-Tad3AtMBDLI6r": {
"title": "Orthogonal",
- "description": "Orthogonal projection, or orthographic projection, is a type of parallelogram projection in game development where the lines of projection are perpendicular to the projection plane. This creates a view that is straight-on, essentially removing any form of perspective. Unlike perspective projection where objects further from the viewer appear smaller, objects in orthogonal projection remain the same size regardless of distance. The lack of perspective in orthogonal projection can be useful for specific types of games like platformers or strategy games. It is commonly used in CAD (Computer-Aided Design) and technical drawings as well.",
- "links": []
+ "description": "Orthogonal projection, or orthographic projection, is a type of parallelogram projection in game development where the lines of projection are perpendicular to the projection plane. This creates a view that is straight-on, essentially removing any form of perspective. Unlike perspective projection where objects further from the viewer appear smaller, objects in orthogonal projection remain the same size regardless of distance. The lack of perspective in orthogonal projection can be useful for specific types of games like platformers or strategy games. It is commonly used in CAD (Computer-Aided Design) and technical drawings as well.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Orthogonal Projection",
+ "url": "https://medium.com/retronator-magazine/game-developers-guide-to-graphical-projections-with-video-game-examples-part-1-introduction-aa3d051c137d",
+ "type": "article"
+ }
+ ]
},
"UTBnrQiZ6Bf96yJYIUf3b": {
"title": "Game Physics",
- "description": "_Game physics_ is an integral part of game development that simulates the laws of physics in a virtual environment. This simulation brings realism into the game by defining how objects move, interact, and react to collisions and forces. Game physics ranges from how a character jumps or moves in a 2D or 3D space, to more complex mechanics such as fluid dynamics or ragdoll physics. Two main types of game physics are 'arcade physics', which are simpler and more abstract; and 'realistic physics', attempting to fully recreate real-life physics interactions. Implementing game physics requires a combination of mathematical knowledge and programming skills to integrate physics engines like Unity's PhysX and Unreal Engine's built-in physics tool.",
- "links": []
+ "description": "Game Physics is an integral part of game development that simulates the laws of physics in a virtual environment. This simulation brings realism into the game by defining how objects move, interact, and react to collisions and forces. Game physics ranges from how a character jumps or moves in a 2D or 3D space, to more complex mechanics such as fluid dynamics or ragdoll physics. Two main types of game physics are 'arcade physics', which are simpler and more abstract; and 'realistic physics', attempting to fully recreate real-life physics interactions. Implementing game physics requires a combination of mathematical knowledge and programming skills to integrate physics engines like Unity's PhysX and Unreal Engine's built-in physics tool.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game Physics",
+ "url": "https://en.wikipedia.org/wiki/Game_physics",
+ "type": "article"
+ },
+ {
+ "title": "Master Game Physics Today!",
+ "url": "https://www.udemy.com/course/gamephysics/",
+ "type": "course"
+ }
+ ]
},
"0D7KQlF-9ylmILTBBVxot": {
"title": "Dynamics",
- "description": "**Dynamics** in game physics refers to the calculation and simulation of the movement and interaction of objects over time, taking into account properties such as mass, force, and velocity. Its purpose is to ensure the motion of game elements matches expectations in the real-world, or the specific conditions defined by the game designers. This typically includes topics like kinematics (velocity and acceleration), Newton's laws of motion, forces (like gravity or friction), and conservation laws (such as momentum or energy). This also involves solving equations of motions for the game objects, detecting collisions and resolving them. Dynamics, together with Statics (dealing with how forces balance on rigid bodies at rest), makes up the core of game physics simulation.",
- "links": []
+ "description": "**Dynamics** in game physics refers to the calculation and simulation of the movement and interaction of objects over time, taking into account properties such as mass, force, and velocity. Its purpose is to ensure the motion of game elements matches real-world expectations, or the specific conditions defined by the game designers. This typically includes topics like kinematics (velocity and acceleration), Newton's laws of motion, forces (like gravity or friction), and conservation laws (such as momentum or energy). It also involves solving the equations of motion for game objects, detecting collisions, and resolving them. Dynamics, together with Statics (dealing with how forces balance on rigid bodies at rest), makes up the core of game physics simulation.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game Physics: Motion Dynamics Fundamentals",
+ "url": "https://allenchou.net/2013/12/game-physics-motion-dynamics-fundamentals/",
+ "type": "article"
+ }
+ ]
},
"HWtU4q-YPXxSi64t43VNF": {
"title": "Center of Mass",
- "description": "The **center of mass** is a position defined relative to an object or system of objects. Typically denoted by the symbol (COM), it refers to the average position of all the parts of the system, weighted according to their masses. For instance, if you have a uniformly dense object, the center of mass would be in the geometric center of that object. In gaming, the center of mass of an object can have a significant impact on how the object behaves when forces are applied to it. This includes how the object moves in response to these forces, and can affect the realism of the physics simulations in a game.",
- "links": []
+ "description": "The **center of mass** is a position defined relative to an object or system of objects, often abbreviated as COM. It refers to the average position of all the parts of the system, weighted according to their masses. For instance, if you have a uniformly dense object, the center of mass would be in the geometric center of that object. In gaming, the center of mass of an object can have a significant impact on how the object behaves when forces are applied to it. This includes how the object moves in response to these forces, and can affect the realism of the physics simulations in a game.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Center of Mass",
+ "url": "https://medium.com/@brazmogu/physics-for-game-dev-a-platformer-physics-cheatsheet-f34b09064558",
+ "type": "article"
+ }
+ ]
},
"6E2mkXuAzoYnrT1SEIA16": {
"title": "Moment of Inertia",
- "description": "The **moment of inertia**, also known as rotational inertia, is a measure of an object's resistance to changes to its rotation. In simpler terms, it's essentially how difficult it is to start or stop an object from spinning. It is determined by both the mass of an object and its distribution of mass around the axis of rotation. In the context of game development, the moment of inertia is crucial for creating realistic movements of characters, objects or vehicles within the game. This is particularly relevant in scenarios where the motion involves spinning or revolving entities. Calculating and applying these physics ensures a more immersive and believable gaming experience.",
- "links": []
+ "description": "The **moment of inertia**, also known as rotational inertia, is a measure of an object's resistance to changes to its rotation. In simpler terms, it's essentially how difficult it is to start or stop an object from spinning. It is determined by both the mass of an object and its distribution of mass around the axis of rotation. In the context of game development, the moment of inertia is crucial for creating realistic movements of characters, objects or vehicles within the game. This is particularly relevant in scenarios where the motion involves spinning or revolving entities. Calculating and applying these physics ensures a more immersive and believable gaming experience.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Moment of Inertia",
+ "url": "https://en.wikipedia.org/wiki/Moment_of_inertia",
+ "type": "article"
+ }
+ ]
},
"ejZMnxZ0QrN-jBqo9Vrj8": {
"title": "Acceleration",
- "description": "**Acceleration** refers to the rate of change in velocity per unit time. This physical concept is translated into game dynamics where it impacts the movement and speed of game characters or objects. For example, when a character starts moving, there is usually a slight delay before they reach their top speed, which then continues as long as the move button is held down. This is caused by acceleration. Conversely, when the button is released, the character doesn't stop instantly but slows down gradually - this is due to deceleration, which is negative acceleration. By mastering acceleration and deceleration, game developers can create more realistic and interesting movements for their characters.",
- "links": []
+ "description": "**Acceleration** refers to the rate of change in velocity per unit time. This physical concept is translated into game dynamics where it impacts the movement and speed of game characters or objects. For example, when a character starts moving, there is usually a slight delay before they reach their top speed, which then continues as long as the move button is held down. This is caused by acceleration. Conversely, when the button is released, the character doesn't stop instantly but slows down gradually - this is due to deceleration, which is negative acceleration. By mastering acceleration and deceleration, game developers can create more realistic and interesting movements for their characters.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Simple Acceleration in Games",
+ "url": "http://earok.net/sections/articles/game-dev/theory/simplified-acceleration-games",
+ "type": "article"
+ }
+ ]
},
"m2_wUW2VHMCXHnn5B91qr": {
"title": "Joints",
- "description": "Joints in game development primarily refer to the connections between two objects, often used in the context of physics simulations and character animations. These might simulate the physics of real-world joints like hinges or springs. Developers can control various characteristics of joints such as their constraints, forces, and reactions. The different types come with various properties suitable for specific needs. For example, Fixed joints keep objects together, Hinge joints allow rotation around an axis, and Spring joints apply a force to keep objects apart.",
- "links": []
+ "description": "Joints in game development primarily refer to the connections between two objects, often used in the context of physics simulations and character animations. These might simulate the physics of real-world joints like hinges or springs. Developers can control various characteristics of joints such as their constraints, forces, and reactions. The different types come with various properties suitable for specific needs. For example, Fixed joints keep objects together, Hinge joints allow rotation around an axis, and Spring joints apply a force to keep objects apart.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game Character Rigging Fundamentals",
+ "url": "https://learn.unity.com/project/game-character-rigging-fundamentals",
+ "type": "article"
+ },
+ {
+ "title": "Joints in Unity",
+ "url": "https://simonpham.medium.com/joints-in-unity-f9b602212524",
+ "type": "article"
+ }
+ ]
},
"qduFRhmrzJ2sn0g7L-tza": {
"title": "Force",
- "description": "**Force** is a vital concept in game development, especially when crafting physics in games. In the context of game physics, 'Force' is an influence that causes an object to undergo a certain change, either concerning its movement, direction, or geometrical construction. It's typically implemented in game engines, with part of the physics simulation that computes forces like gravity, friction, or custom forces defined by the developer. Incorporating forces gives a realistic feel to the game, allowing objects to interact naturally following the laws of physics. This is central in genres like racing games, sports games, and any game featuring physical interactions between objects. Remember that `F = ma`, the acceleration of an object is directly proportional to the force applied and inversely proportional to its mass. The balance and manipulation of these forces are integral to dynamic, immersive gameplay.",
- "links": []
+ "description": "**Force** is a vital concept in game development, especially when crafting physics in games. In the context of game physics, 'Force' is an influence that causes an object to undergo a certain change, whether in its movement, its direction, or its shape. It's typically implemented in game engines as part of the physics simulation, which computes forces like gravity, friction, or custom forces defined by the developer. Incorporating forces gives a realistic feel to the game, allowing objects to interact naturally following the laws of physics. This is central in genres like racing games, sports games, and any game featuring physical interactions between objects. Remember that `F = ma`: the acceleration of an object is directly proportional to the applied force and inversely proportional to its mass. The balance and manipulation of these forces are integral to dynamic, immersive gameplay.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Physics for Game Dev",
+ "url": "https://medium.com/@brazmogu/physics-for-game-dev-a-platformer-physics-cheatsheet-f34b09064558",
+ "type": "article"
+ }
+ ]
},
"egOcxFTQP7vPIGrxcieuk": {
"title": "Restitution",
- "description": "In game development, **Restitution** is a property closely related to the physics of objects. Essentially, restitution represents the \"bounciness\" of an object or, in more scientific terms, the ratio of the final relative velocity to the initial relative velocity of two objects after a collision. In the context of game physics, when objects collide, restitution is used to calculate how much each object should bounce back or recoil. Restitution values typically fall between 0 and 1 where a value of 0 means an object will not bounce at all and a value of 1 refers to a perfectly elastic collision with no energy lost. Therefore, the higher the restitution value, the higher the bounce back of the object after a collision.",
- "links": []
+ "description": "In game development, **Restitution** is a property closely related to the physics of objects. Essentially, restitution represents the \"bounciness\" of an object or, in more scientific terms, the ratio of the final relative velocity to the initial relative velocity of two objects after a collision. In the context of game physics, when objects collide, restitution is used to calculate how much each object should bounce back or recoil. Restitution values typically fall between 0 and 1 where a value of 0 means an object will not bounce at all and a value of 1 refers to a perfectly elastic collision with no energy lost. Therefore, the higher the restitution value, the higher the bounce back of the object after a collision.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Restitution Property",
+ "url": "https://gamedev.stackexchange.com/questions/49616/why-do-restitution-values-less-than-one-still-cause-infinite-bouncing-in-box2d",
+ "type": "article"
+ }
+ ]
},
"Y7HYY5eq7OG42V9yQz0Q1": {
"title": "Angular Velocity",
- "description": "Angular velocity, denoted by the symbol 'ω', is a measure of the rate of change of an angle per unit of time. In simpler terms, it corresponds to how quickly an object moves around a circle or rotates around a central point. Angular velocity is typically measured in radians per second (rad/s). If you think of an object moving in a circular path, the angular velocity would be the speed at which the angle changes as the object travels along the circumference of the object. Angular velocity is a vector quantity, implying it has both magnitude and direction. The direction of the angular velocity vector is perpendicular to the plane of rotation, following the right-hand rule. It plays a crucial role in game development, especially in physics simulation and character control.",
- "links": []
+ "description": "Angular velocity, denoted by the symbol 'ω', is a measure of the rate of change of an angle per unit of time. In simpler terms, it corresponds to how quickly an object moves around a circle or rotates around a central point. Angular velocity is typically measured in radians per second (rad/s). If you think of an object moving in a circular path, the angular velocity would be the speed at which the angle changes as the object travels along the circumference of the circle. Angular velocity is a vector quantity, implying it has both magnitude and direction. The direction of the angular velocity vector is perpendicular to the plane of rotation, following the right-hand rule. It plays a crucial role in game development, especially in physics simulation and character control.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Angular Velocity",
+ "url": "https://allenchou.net/2013/12/game-physics-motion-dynamics-fundamentals/",
+ "type": "article"
+ },
+ {
+ "title": "Understanding Angular Velocity",
+ "url": "https://math.libretexts.org/Bookshelves/Precalculus/Book%3A_Trigonometry_(Sundstrom_and_Schlicker)/01%3A_The_Trigonometric_Functions/1.04%3A_Velocity_and_Angular_Velocity",
+ "type": "article"
+ }
+ ]
},
"WzcmdW_fKHv3gwdBnvI0_": {
"title": "Buoyancy",
- "description": "**Buoyancy** refers to a specific interaction in physics where an object submerged in fluid (such as a game character in water) experiences an upward force that counteracts the force of gravity. This makes the object either float or appear lighter. In game development, implementing buoyancy can enhance realism particularly in games that involve water-based activities or environments. Buoyancy can be manipulated through adjustments in density and volume to create various effects - from making heavy objects float to sinking light ones. Calculating it typically requires approximating the object to a sphere or another simple geometric shape, and using this in Archimedes' Principle. This principle states that buoyant force equals the weight of the fluid that the object displaces. In the realm of video games, programming buoyancy can involve complex physics equations and careful testing to achieve a balance between realism and playability.",
- "links": []
+ "description": "**Buoyancy** refers to a specific interaction in physics where an object submerged in fluid (such as a game character in water) experiences an upward force that counteracts the force of gravity. This makes the object either float or appear lighter. In game development, implementing buoyancy can enhance realism particularly in games that involve water-based activities or environments. Buoyancy can be manipulated through adjustments in density and volume to create various effects - from making heavy objects float to sinking light ones. Calculating it typically requires approximating the object as a sphere or another simple geometric shape and applying Archimedes' Principle, which states that the buoyant force equals the weight of the fluid that the object displaces. In the realm of video games, programming buoyancy can involve complex physics equations and careful testing to achieve a balance between realism and playability.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Buoyancy in Game Development",
+ "url": "https://www.gamedeveloper.com/programming/water-interaction-model-for-boats-in-video-games-part-2",
+ "type": "article"
+ }
+ ]
},
"Z_U6abGV_wVkTGZ2LVkFK": {
"title": "Linear Velocity",
- "description": "**Linear Velocity** is a fundamental concept in physics that is extensively used in game development. It refers to the rate of change of an object's position with respect to a frame of reference. It's calculated by dividing the change in position by the change in time, often represented with the vector 'v'. In game development, an object's linear velocity can be manipulated to control its speed and direction. This is especially important in the development of physics simulations or movement-dependent gameplay elements. For instance, it can be used to make a character run or drive, or to throw an object at different speeds and directions.",
- "links": []
+ "description": "**Linear Velocity** is a fundamental concept in physics that is extensively used in game development. It refers to the rate of change of an object's position with respect to a frame of reference. It's calculated by dividing the change in position by the change in time, often represented with the vector 'v'. In game development, an object's linear velocity can be manipulated to control its speed and direction. This is especially important in the development of physics simulations or movement-dependent gameplay elements. For instance, it can be used to make a character run or drive, or to throw an object at different speeds and directions.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Linear Velocity",
+ "url": "https://byjus.com/physics/linear-velocity/",
+ "type": "article"
+ },
+ {
+ "title": "Explore top posts about Math",
+ "url": "https://app.daily.dev/tags/math?ref=roadmapsh",
+ "type": "article"
+ }
+ ]
},
"Hz9R4YGYtD0jAur8rYist": {
"title": "Friction",
- "description": "`Friction` is a crucial concept in game dynamics. In the context of games, it's typically used to slow down or impede movement, providing a realistic feel to characters or objects movement. For example, when a player's character runs on a smooth surface as compared to a rough one, friction influences the speed and control of that character. It can be seen in how cars skid on icy surfaces, how walking speed changes depending on the terrain, or how a ball rolls and eventually slows. The equation to compute friction is usually `f = μN`, where `f` is the force of friction, `μ` is the coefficient of friction (which depends on the two surfaces interacting), and `N` is the normal force (which is generally the weight of the object). You can adjust the coefficient of friction in a game to have different effects depending upon the desired outcome.",
+ "description": "`Friction` is a crucial concept in game dynamics. In the context of games, it's typically used to slow down or impede movement, providing a realistic feel to character and object movement. For example, when a player's character runs on a smooth surface as compared to a rough one, friction influences the speed and control of that character. It can be seen in how cars skid on icy surfaces, how walking speed changes depending on the terrain, or how a ball rolls and eventually slows. The equation to compute friction is usually `f = μN`, where `f` is the force of friction, `μ` is the coefficient of friction (which depends on the two surfaces interacting), and `N` is the normal force (which is generally the weight of the object). You can adjust the coefficient of friction in a game to have different effects depending upon the desired outcome.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Friction in Game Dev",
+ "url": "https://medium.com/@originallearguy/rub-the-right-way-applying-friction-in-game-design-122bd98de69d",
+ "type": "article"
+ },
{
"title": "Friction",
"url": "https://youtu.be/t1HWIoDUWXg?si=FmFsIGTSHpLS72vp",
@@ -212,113 +453,309 @@
},
"AdOfOJtLtNgDwuABb6orE": {
"title": "Collision Detection",
- "description": "**Collision Detection** is a critical aspect in game physics that handles the computer’s ability to calculate and respond when two or more objects come into contact in a game environment. This is vital to ensure objects interact realistically, don't pass through each other, and impact the game world in intended ways. Techniques for collision detection can vary based on the complexity required by the game. Simple methods may involve bounding boxes or spheres that encapsulate objects. When these spheres or boxes overlap, a collision is assumed. More complex methods consider the object's shape and volume for precise detection. Several libraries and game engines offer built-in support for collision detection, making it easier for developers to implement in their games.",
- "links": []
+ "description": "**Collision Detection** is a critical aspect in game physics that handles the computer’s ability to calculate and respond when two or more objects come into contact in a game environment. This is vital to ensure objects interact realistically, don't pass through each other, and impact the game world in intended ways. Techniques for collision detection can vary based on the complexity required by the game. Simple methods may involve bounding boxes or spheres that encapsulate objects. When these spheres or boxes overlap, a collision is assumed. More complex methods consider the object's shape and volume for precise detection. Several libraries and game engines offer built-in support for collision detection, making it easier for developers to implement in their games.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Collision Detection in Games",
+ "url": "https://developer.mozilla.org/en-US/docs/Games/Tutorials/2D_Breakout_game_pure_JavaScript/Collision_detection",
+ "type": "article"
+ }
+ ]
},
"SuemqQuiePab0Qpm2EGy9": {
"title": "Narrow Phase",
- "description": "The **Narrow Phase** of collision detection is a process that dives deeply into detailed collision checks for pairs of objects that are already found to be potentially colliding during the broad phase. Narrow phase is essentially a fine-tuning process. Upon positive detection from the broad phase, it identifies the precise points of collision between the two objects, and it may involve more detailed shape representations and more expensive algorithms. It might also calculate additional information necessary for the physics simulation (like the exact time of impact and contact normals). The usual methods used for this phase involve bounding box, bounding sphere or separating axis theorem. However, the method can vary depending on the complexity of shapes of objects and the specific needs of the game.",
- "links": []
+ "description": "The **Narrow Phase** of collision detection is a process that dives deeply into detailed collision checks for pairs of objects that are already found to be potentially colliding during the broad phase. Narrow phase is essentially a fine-tuning process. Upon positive detection from the broad phase, it identifies the precise points of collision between the two objects, and it may involve more detailed shape representations and more expensive algorithms. It might also calculate additional information necessary for the physics simulation (like the exact time of impact and contact normals). The usual methods used for this phase involve bounding box, bounding sphere or separating axis theorem. However, the method can vary depending on the complexity of shapes of objects and the specific needs of the game.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Collision Detection",
+ "url": "https://en.wikipedia.org/wiki/Collision_detection",
+ "type": "article"
+ }
+ ]
},
"AKd2UpITqBZV7cZszSRps": {
"title": "Broad Phase",
- "description": "**Broad Phase Collision Detection** is the first step in the collision detection process. Its primary function is to identify which pairs of objects might potentially collide. Rather than examining the entire body of every object for possible collision, it wraps up each one in a simpler shape like a bounding box or sphere, aiming to reduce the number of calculations. The output of this phase is a list of 'candidate pairs' which are passed onto the next phase, often referred to as the narrow phase, for in-depth overlap checks.",
- "links": []
+ "description": "**Broad Phase Collision Detection** is the first step in the collision detection process. Its primary function is to identify which pairs of objects might potentially collide. Rather than examining the entire body of every object for possible collision, it wraps up each one in a simpler shape like a bounding box or sphere, aiming to reduce the number of calculations. The output of this phase is a list of 'candidate pairs' which are passed onto the next phase, often referred to as the narrow phase, for in-depth overlap checks.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Broad Phase Collision Detection",
+ "url": "http://buildnewgames.com/broad-phase-collision-detection/",
+ "type": "article"
+ }
+ ]
},
"YLusnwCba7BIdKOYUoY6F": {
"title": "Convexity",
- "description": "Convexity is a significant concept used in game development, particularly in the narrow phase of collision detection. A shape is considered convex if, for every pair of points inside the shape, the complete line segment between them is also inside the shape. Essentially, a convex shape has no angles pointing inwards. Convex shapes can be of great benefit in game development because they're simpler to handle computationally. For instance, in collision detection algorithms such as separating axis theorem (SAT) and Gilbert–Johnson–Keerthi (GJK), the input shapes are often convex. Non-convex shapes or concave shapes usually require more complex methods for collision detection, often involving partitioning the shape into smaller convex parts.",
- "links": []
+ "description": "Convexity is a significant concept used in game development, particularly in the narrow phase of collision detection. A shape is considered convex if, for every pair of points inside the shape, the complete line segment between them is also inside the shape. Essentially, a convex shape has no angles pointing inwards. Convex shapes can be of great benefit in game development because they're simpler to handle computationally. For instance, in collision detection algorithms such as separating axis theorem (SAT) and Gilbert–Johnson–Keerthi (GJK), the input shapes are often convex. Non-convex shapes or concave shapes usually require more complex methods for collision detection, often involving partitioning the shape into smaller convex parts.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Geometry and Primitives in Game Development",
+ "url": "https://dev.to/fkkarakurt/geometry-and-primitives-in-game-development-1og",
+ "type": "article"
+ }
+ ]
},
"pG_V12qhS4HevoP_KHTvh": {
"title": "Convex",
- "description": "The term \"convex\" in game development relates primarily to shapes and collision detection within the gaming environment. A shape is convex if all line segments between any two points in the shape lie entirely within the shape. This is an essential concept when programming collision detection and physics engines in games since the mathematical calculations can be more straightforward and efficient when the objects are convex. In addition to this, many rendering algorithms also operate optimally on convex objects, thereby helping improve the game’s graphical performance.",
- "links": []
+ "description": "The term \"convex\" in game development relates primarily to shapes and collision detection within the gaming environment. A shape is convex if all line segments between any two points in the shape lie entirely within the shape. This is an essential concept when programming collision detection and physics engines in games since the mathematical calculations can be more straightforward and efficient when the objects are convex. In addition to this, many rendering algorithms also operate optimally on convex objects, thereby helping improve the game’s graphical performance.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Convex in Game Development",
+ "url": "https://dev.to/fkkarakurt/geometry-and-primitives-in-game-development-1og",
+ "type": "article"
+ }
+ ]
},
"jslk7Gy58VspO1uXGDgBp": {
"title": "Concave",
- "description": "In game development, a shape is said to be \"concave\" if it has an interior angle greater than 180 degrees. In simpler terms, if the shape has a portion \"inwards curved\" or a \"cave-like\" indentation, it's concave. Unlike convex shapes, a straight line drawn within a concave shape may not entirely lie within the boundaries of the shape. Concave shapes add complexity in game physics, especially in collision detection, as there are more points and angles to consider compared to convex shapes. These shapes are commonly seen in game elements like terrains, mazes, game level boundaries and gaming characters. Let's remember that the practical application of concave shapes largely depends on the gameplay requirements and the level of realism needed in the game.",
- "links": []
+ "description": "In game development, a shape is said to be \"concave\" if it has an interior angle greater than 180 degrees. In simpler terms, if the shape has a portion \"inwards curved\" or a \"cave-like\" indentation, it's concave. Unlike convex shapes, a straight line drawn within a concave shape may not entirely lie within the boundaries of the shape. Concave shapes add complexity in game physics, especially in collision detection, as there are more points and angles to consider compared to convex shapes. These shapes are commonly seen in game elements like terrains, mazes, game level boundaries and gaming characters. Let's remember that the practical application of concave shapes largely depends on the gameplay requirements and the level of realism needed in the game.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What is Concave Shape?",
+ "url": "https://dev.to/fkkarakurt/geometry-and-primitives-in-game-development-1og",
+ "type": "article"
+ }
+ ]
},
"jixffcPBELkhoG0e7Te8g": {
"title": "Convex Hull",
- "description": "The **Convex Hull** is a foundational concept used in various areas of game development, particularly in the creation of physics engines and collision detection. Essentially, it is the smallest convex polygon that can enclose a set of points in a two-dimensional space, or the smallest convex polyhedron for a set of points in a three-dimensional space. It can be thought of as the shape that a rubber band would take if it was stretched around the points and then released. In computational geometry, various algorithms like Graham's Scan and QuickHull have been developed to compute Convex Hulls rapidly. Using Convex Hulls in game engines can drastically improve the performance of collision detection routines as fewer points need to be checked for overlap, which in turn helps in creating smoother gameplay.",
- "links": []
+ "description": "The **Convex Hull** is a foundational concept used in various areas of game development, particularly in the creation of physics engines and collision detection. Essentially, it is the smallest convex polygon that can enclose a set of points in a two-dimensional space, or the smallest convex polyhedron for a set of points in a three-dimensional space. It can be thought of as the shape that a rubber band would take if it was stretched around the points and then released. In computational geometry, various algorithms like Graham's Scan and QuickHull have been developed to compute Convex Hulls rapidly. Using Convex Hulls in game engines can drastically improve the performance of collision detection routines as fewer points need to be checked for overlap, which in turn helps in creating smoother gameplay.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Convex Hull",
+ "url": "https://en.wikipedia.org/wiki/Convex_hull",
+ "type": "article"
+ }
+ ]
},
"bgP9NpD0DJGqN4VCt65xP": {
"title": "Convex Decomposition",
- "description": "`Convex Decomposition` represents a process within game development that involves breaking down complex, concave shapes into simpler, convex shapes. This technique considerably simplifies the computation involved in collision detection, a critical aspect of any game development project that involves physical simulations. In concrete terms, a concave shape has one or more parts that 'cave in' or have recesses, while a convex shape has no such depressions - in simplistic terms, it 'bulges out' with no interior angles exceeding 180 degrees. So, Convex decomposition is essentially a process of breaking down a shape with 'caves' or 'recesses' into simpler shapes that only 'bulge out'.",
- "links": []
- },
- "vmRYaXNVCe0N73xG8bsEK": {
+ "description": "`Convex Decomposition` represents a process within game development that involves breaking down complex, concave shapes into simpler, convex shapes. This technique considerably simplifies the computation involved in collision detection, a critical aspect of any game development project that involves physical simulations. In concrete terms, a concave shape has one or more parts that 'cave in' or have recesses, while a convex shape has no such depressions - in simplistic terms, it 'bulges out' with no interior angles exceeding 180 degrees. So, Convex decomposition is essentially a process of breaking down a shape with 'caves' or 'recesses' into simpler shapes that only 'bulge out'.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Convex Decomposition for 3D",
+ "url": "https://colin97.github.io/CoACD/",
+ "type": "article"
+ }
+ ]
+ },
+ "vmRYaXNVCe0N73xG8bsEK": {
"title": "Intersection",
- "description": "`Intersection` is a concept in the narrow phase of game development where the exact point or points of collision are determined between two potentially colliding objects. This process takes place once a potential collision is determined in the broad phase. Algorithms such as Axis-Aligned Bounding Boxes (AABB), Separating Axis Theorem (SAT), Spherical or Capsule bounding, and many others are used for different intersection tests based on the shape of the objects. The intersection provides valuable data such as the point of contact, direction and depth of penetration, which are used to calculate the accurate physical response in the collision.",
- "links": []
+ "description": "`Intersection` is a concept in the narrow phase of game development where the exact point or points of collision are determined between two potentially colliding objects. This process takes place once a potential collision is determined in the broad phase. Algorithms such as Axis-Aligned Bounding Boxes (AABB), Separating Axis Theorem (SAT), Spherical or Capsule bounding, and many others are used for different intersection tests based on the shape of the objects. The intersection provides valuable data such as the point of contact, direction and depth of penetration, which are used to calculate the accurate physical response in the collision.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Intersection Tests for Games",
+ "url": "https://www.gamedeveloper.com/game-platforms/simple-intersection-tests-for-games",
+ "type": "article"
+ },
+ {
+ "title": "Intersection Geometry",
+ "url": "https://www.petercollingridge.co.uk/tutorials/computational-geometry/line-line-intersections/",
+ "type": "article"
+ }
+ ]
},
"kSMz7mZ243qMKtT_YD3AD": {
"title": "SAT",
- "description": "`Sat`, or separating axis theorem, is frequently used in collision detection in game development. Its primary benefit is for simple and fast detection of whether two convex polygons intersect. The theorem is somewhat complex—it works by projecting all points of both polygons onto numerous axes around the shapes, then checking for overlaps. However, it can be relatively time-consuming when dealing with more complex models or numerous objects as it has to calculate the projections, so often it is used in a broad-phase detection system. A deep explanation of how `sat` works might involve some mathematical concepts or visual aids, but this is the foundation of its use in game development.",
- "links": []
+ "description": "`SAT`, or the separating axis theorem, is frequently used for collision detection in game development. Its primary benefit is simple and fast detection of whether two convex shapes intersect. The theorem works by projecting all points of both shapes onto a set of candidate separating axes (typically the face or edge normals of the shapes) and checking for overlap; if a gap exists on any axis, the shapes do not intersect. Because these projections can become relatively expensive with complex models or many objects, SAT is usually paired with a broad-phase detection system that first culls pairs of objects that cannot possibly be colliding. A deeper explanation of how SAT works involves some vector mathematics and is best supported by visual aids, but this is the foundation of its use in game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Separating Axis Theorem",
+ "url": "https://dyn4j.org/2010/01/sat/",
+ "type": "article"
+ },
+ {
+ "title": "Collision Detection Using the Separating Axis Theorem",
+ "url": "https://code.tutsplus.com/collision-detection-using-the-separating-axis-theorem--gamedev-169t",
+ "type": "article"
+ }
+ ]
},
"lwd3Gz9bJEKCIwhXD6m-v": {
"title": "GJK",
- "description": "The **GJK algorithm** (Gilbert–Johnson–Keerthi) is a computational geometry algorithm that is widely used to detect collisions between convex objects in video games and simulations. The primary role of this algorithm is to assess the intersection between two convex shapes. What makes it unique and widely used is its efficiency and accuracy even when dealing with complex three-dimensional shapes. It uses the concept of \"Minkowski Difference\" to simplify its calculations and determine if two shapes are intersecting.\n\nThe algorithm works iteratively, beginning with a single point (the origin) and progressing by adding vertices from the Minkowski Difference, each time refining a simple 'guess' about the direction of the nearest point to the origin until it either concludes that the shapes intersect (the origin is inside the Minkowski difference), or until it can't progress further, in which case the shapes are confirmed not to intersect. This makes it an incredibly powerful and useful tool for game developers.",
- "links": []
+ "description": "The **GJK algorithm** (Gilbert–Johnson–Keerthi) is a computational geometry algorithm that is widely used to detect collisions between convex objects in video games and simulations. The primary role of this algorithm is to assess the intersection between two convex shapes. What makes it unique and widely used is its efficiency and accuracy even when dealing with complex three-dimensional shapes. It uses the concept of \"Minkowski Difference\" to simplify its calculations and determine if two shapes are intersecting.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Gilbert-Johnson-Keerthi Distance Algorithm",
+ "url": "https://en.wikipedia.org/wiki/Gilbert-Johnson-Keerthi_distance_algorithm",
+ "type": "article"
+ },
+ {
+ "title": "The GJK Algorithm",
+ "url": "https://medium.com/@mbayburt/walkthrough-of-the-gjk-collision-detection-algorithm-80823ef5c774",
+ "type": "article"
+ },
+ {
+ "title": "GJK Algorithm Explanation & Implementation",
+ "url": "https://www.youtube.com/watch?v=MDusDn8oTSE",
+ "type": "video"
+ }
+ ]
},
"vWLKYK2KUzV1fO-vQunzW": {
"title": "EPA",
- "description": "The **EPA**, also known as the _Environmental Protection Agency_, is not typically related to game development or the concept of intersection within this context. However, in game development, EPA might refer to an 'Event-driven Process chain Architecture' or some other game-specific acronym. In this domain, different terminologies and acronyms are often used to express complex architectures, designs, or functionalities. If you have encountered EPA in a game development context, it might be best to refer to the specific documentation or guide where it was described for a better understanding. Understanding the context is key to untangle the meaning of such abbreviations.",
- "links": []
+ "description": "The **EPA** (Expanding Polytope Algorithm) is commonly used together with the GJK algorithm in the narrow phase of collision detection. While GJK can determine that two convex shapes intersect, EPA determines by how much: it takes the terminating simplex produced by GJK and iteratively expands it inside the Minkowski difference of the two shapes, searching for the face of the polytope that is closest to the origin. The distance and direction to that face give the penetration depth and contact normal, which the physics engine then uses to resolve the collision by separating the objects.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "EPA (Expanding Polytope Algorithm)",
+ "url": "https://dyn4j.org/2010/05/epa-expanding-polytope-algorithm/",
+ "type": "article"
+ }
+ ]
},
"PLR_4yoRifoTzkOR4c7ym": {
"title": "Bounding Volume",
- "description": "`Bounding Volume` is a simple shape that fully encompasses a more complex game model. It is less expensive to check for the intersection of bounding volumes when compared to checking for intersection of the actual models. Some commonly used types of bounding volume in game development include Axis-Aligned Bounding Boxes (AABBs), Bounding Spheres, and Oriented Bounding Boxes (OBBs). AABBs and Bounding Spheres are simple to implement and work well with static objects, while OBBs are slightly more complex and are often used with dynamic objects that need to rotate.",
- "links": []
+ "description": "`Bounding Volume` is a simple shape that fully encompasses a more complex game model. It is less expensive to check for the intersection of bounding volumes when compared to checking for intersection of the actual models. Some commonly used types of bounding volume in game development include Axis-Aligned Bounding Boxes (AABBs), Bounding Spheres, and Oriented Bounding Boxes (OBBs). AABBs and Bounding Spheres are simple to implement and work well with static objects, while OBBs are slightly more complex and are often used with dynamic objects that need to rotate.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Collision Detection in 3D Games",
+ "url": "https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_collision_detection",
+ "type": "article"
+ },
+ {
+ "title": "Visualizing Bounding Volume",
+ "url": "https://www.haroldserrano.com/blog/visualizing-the-boundary-volume-hierarchy-collision-algorithm",
+ "type": "article"
+ }
+ ]
},
"aTeYGd4JlPr5txNPyBezn": {
"title": "AABB",
- "description": "`AABB`, short for Axis-Aligned Bounding Box, is a commonly used form of bounding volume in game development. It is a box that directly aligns with the axes of the coordinate system and encapsulates a game object. The sides of an AABB are aligned with the axes, which is helpful when carrying out certain calculations, as non-axis-aligned boxes would require more complex math. AABBs are primarily used for broad-phase collision detection, which means checking whether two objects might be in the process of colliding. Although AABBs are relatively conservative and can have more bounding volume than oriented bounding boxes (OBBs), they are simpler and faster to use in collision detection.",
- "links": []
+ "description": "`AABB`, short for Axis-Aligned Bounding Box, is a commonly used form of bounding volume in game development. It is a box that directly aligns with the axes of the coordinate system and encapsulates a game object. The sides of an AABB are aligned with the axes, which is helpful when carrying out certain calculations, as non-axis-aligned boxes would require more complex math. AABBs are primarily used for broad-phase collision detection, which means checking whether two objects might be in the process of colliding. Although AABBs are relatively conservative and can have more bounding volume than oriented bounding boxes (OBBs), they are simpler and faster to use in collision detection.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Axis-Aligned Bounding Box",
+ "url": "https://gdbooks.gitbooks.io/3dcollisions/content/Chapter1/aabb.html",
+ "type": "article"
+ }
+ ]
},
"7nGtvbxoEAheiF4IPMfPf": {
"title": "OBB",
- "description": "`Oriented Bounding Box (OBB)` is a type of bounding volume used in computer graphics and computational geometry. It is often used to simplify complex geometric objects by correlating them as a box much closer in size and orientation to the actual object. Unlike the `Axis-Aligned Bounding Box (AABB)`, the `OBB` is not constrained to align with the axis, so the box can be rotated. This orientation is usually chosen based on the object's local coordinate system, so the `OBB` maintains its rotation. Properties of an `OBB` include its center, dimensions, and orientation. However, it is worth noting that `OBBs` can be more computationally intensive than `AABBs` due to mathematical complexity.",
- "links": []
+ "description": "`Oriented Bounding Box (OBB)` is a type of bounding volume used in computer graphics and computational geometry. It is often used to simplify complex geometric objects by approximating them with a box that is much closer in size and orientation to the actual object. Unlike the `Axis-Aligned Bounding Box (AABB)`, the `OBB` is not constrained to align with the axes, so the box can be rotated. This orientation is usually chosen based on the object's local coordinate system, so the `OBB` maintains its rotation. Properties of an `OBB` include its center, dimensions, and orientation. However, it is worth noting that `OBBs` can be more computationally intensive than `AABBs` due to mathematical complexity.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "OBB vs OBB Collision Detection",
+ "url": "https://gamedev.stackexchange.com/questions/25397/obb-vs-obb-collision-detection",
+ "type": "article"
+ },
+ {
+ "title": "Oriented Bounding Box",
+ "url": "https://gamedev.stackexchange.com/questions/49041/oriented-bounding-box-how-to",
+ "type": "article"
+ }
+ ]
},
"9Fk3XSINBr2NNdbMtwsIK": {
"title": "Spatial Partitioning",
- "description": "\"Spatial partitioning\" is a technique used in computational geometry, intended to make calculations involving objects in space more efficient. It involves dividing a large virtual space into a series of smaller spaces, or \"partitions\". These partitions can be used to quickly eliminate areas that are irrelevant to a particular calculation or query, thus lowering the overall computational cost. This technique is widely used in game development in contexts such as collision detection, rendering, pathfinding, and more. Various methods exist for spatial partitioning, including grid-based, tree-based (like Quadtree and Octree), and space-filling curve (like Z-order or Hilbert curve) approaches.",
- "links": []
+ "description": "\"Spatial partitioning\" is a technique used in computational geometry, intended to make calculations involving objects in space more efficient. It involves dividing a large virtual space into a series of smaller spaces, or \"partitions\". These partitions can be used to quickly eliminate areas that are irrelevant to a particular calculation or query, thus lowering the overall computational cost. This technique is widely used in game development in contexts such as collision detection, rendering, pathfinding, and more. Various methods exist for spatial partitioning, including grid-based, tree-based (like Quadtree and Octree), and space-filling curve (like Z-order or Hilbert curve) approaches.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Spatial Partitioning",
+ "url": "https://en.wikipedia.org/wiki/Space_partitioning",
+ "type": "article"
+ },
+ {
+ "title": "Spatial Partitioning in Game Programming",
+ "url": "https://gameprogrammingpatterns.com/spatial-partition.html",
+ "type": "article"
+ }
+ ]
},
"STdvFYM9V0a36IkPXjvrB": {
"title": "Sort & Sweep",
- "description": "**Sort and Sweep** is an algorithm used in collision detection in game development which optimizes the process of identifying potential intersecting objects. Here's how it works: first, all objects in the game are sorted along a specific axis (typically the 'x' axis). Then a line (known as the 'sweep line') is moved along this axis. As the line sweeps over the scene, any objects that cross this line are added to an 'active' list. When an object no longer intersects with the sweep line, it's removed from this list. The only objects checked for intersection are those within this 'active' list reducing the number of checks required. This makes sort and sweep an efficient spatial partitioning strategy.",
- "links": []
+ "description": "**Sort and Sweep** is an algorithm used in collision detection in game development which optimizes the process of identifying potential intersecting objects. Here's how it works: first, all objects in the game are sorted along a specific axis (typically the 'x' axis). Then a line (known as the 'sweep line') is moved along this axis. As the line sweeps over the scene, any objects that cross this line are added to an 'active' list. When an object no longer intersects with the sweep line, it's removed from this list. The only objects checked for intersection are those within this 'active' list reducing the number of checks required. This makes sort and sweep an efficient spatial partitioning strategy.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Sort, Sweep and Prune",
+ "url": "https://leanrada.com/notes/sweep-and-prune/",
+ "type": "article"
+ },
+ {
+ "title": "Collision Detection Algorithm",
+ "url": "https://notes.billmill.org/visualization/interactive_explainers/sort__sweep_and_prune_-_collision_detection_algorithms.html",
+ "type": "article"
+ }
+ ]
},
"FCc5xwb_G3VsDRXOcg3hV": {
"title": "BVH",
- "description": "BVH, or Bounding Volume Hierarchy, is an algorithm used in 3D computer graphics to speed up the rendering process. It organizes the geometry in a hierarchical structure where each node in the tree represents a bounding volume (a volume enclosing or containing one or more geometric objects). The root node of the BVH contains all other nodes or geometric objects, its child nodes represent a partition of the space, and the leaf nodes are often individual geometric objects. The main objective of using BVH is to quickly exclude large portions of the scene from the rendering process, to reduce the computational load of evaluating every single object in the scene individually.",
- "links": []
+ "description": "BVH, or Bounding Volume Hierarchy, is an algorithm used in 3D computer graphics to speed up the rendering process. It organizes the geometry in a hierarchical structure where each node in the tree represents a bounding volume (a volume enclosing or containing one or more geometric objects). The root node of the BVH contains all other nodes or geometric objects, its child nodes represent a partition of the space, and the leaf nodes are often individual geometric objects. The main objective of using BVH is to quickly exclude large portions of the scene from the rendering process, to reduce the computational load of evaluating every single object in the scene individually.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "UnityBoundingVolumeHeirachy",
+ "url": "https://github.com/rossborchers/UnityBoundingVolumeHeirachy",
+ "type": "opensource"
+ }
+ ]
},
"XHFV4d6Ab4kWQ3-XcZTyT": {
"title": "DBVT",
- "description": "`DBVT` or `Dynamic Bounding Volume Tree` is an acceleration data structure that's primarily used in physics simulations like collision detection. It's a type of BVH (`Bounding Volume Hierarchy`), but the unique aspect of a DBVT is its handling of dynamic objects. As the name suggests, it's specifically designed to efficiently handle changing scenarios, such as objects moving or environments evolving, better than a typical BVH. Unlike a static BVH, a DBVT dynamically updates the tree as objects move, maintaining efficiency of collision queries. It primarily does this through tree rotations and refitting bounding volumes rather than fully rebuilding the tree. This makes DBVT a highly appealing option for scenarios with considerable dynamics.",
- "links": []
+ "description": "`DBVT` or `Dynamic Bounding Volume Tree` is an acceleration data structure that's primarily used in physics simulations like collision detection. It's a type of BVH (`Bounding Volume Hierarchy`), but the unique aspect of a DBVT is its handling of dynamic objects. As the name suggests, it's specifically designed to efficiently handle changing scenarios, such as objects moving or environments evolving, better than a typical BVH. Unlike a static BVH, a DBVT dynamically updates the tree as objects move, maintaining efficiency of collision queries. It primarily does this through tree rotations and refitting bounding volumes rather than fully rebuilding the tree. This makes DBVT a highly appealing option for scenarios with considerable dynamics.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "DBVT",
+ "url": "https://sopiro.github.io/DynamicBVH/",
+ "type": "article"
+ },
+ {
+ "title": "Dynamic Bounding Volume Hierarchies",
+ "url": "https://box2d.org/files/ErinCatto_DynamicBVH_Full.pdf",
+ "type": "article"
+ }
+ ]
},
"1yK8TH4Pn7Ag8VQoug54i": {
"title": "CCD",
- "description": "**CCD (Continuous Collision Detection)** is a sophisticated technique used in detecting collisions within games, more advanced than the traditional discrete collision. Rather than checking for collisions at designated time frames, CCD checks for any possible collisions that may happen during the entire time period or motion path of the moving object. This can prevent instances of \"tunneling\", where an object moves so fast that it passes through walls or obstacles undetected by discrete collision detection due to being at different points in one frame to another. Although more computationally heavy than discrete detection, CCD offers an increased accuracy in collision detection, making it vital in games where precise movements are needed.",
- "links": []
+ "description": "**CCD (Continuous Collision Detection)** is a sophisticated technique used in detecting collisions within games, more advanced than traditional discrete collision detection. Rather than checking for collisions only at designated time steps, CCD checks for any possible collisions that may happen along the entire motion path of the moving object during that time period. This can prevent instances of \"tunneling\", where an object moves so fast that it passes through walls or obstacles undetected by discrete collision detection because it occupies different positions from one frame to the next. Although more computationally heavy than discrete detection, CCD offers increased accuracy in collision detection, making it vital in games where precise movements are needed.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Continuous Collision Detection",
+ "url": "https://docs.unity3d.com/Manual/ContinuousCollisionDetection.html",
+ "type": "article"
+ }
+ ]
},
"fv5tivGad2P9GRZOodfn2": {
"title": "Game Engine",
- "description": "A _Game Engine_ is a software framework designed to facilitate the creation and development of video games. Developers use them to create games for consoles, mobile devices, and personal computers. The core functionality typically provided by a game engine includes a rendering engine (\"renderer\") for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation, artificial intelligence, networking, streaming, memory management, and a scene graph. Game Engines can save a significant amount of development time by providing these reusable components. However, they aren't one-size-fits-all solutions, as developers must still customize much of the code to fit their games' unique needs. Some popular game engines are Unity, Unreal Engine, and Godot.",
- "links": []
+ "description": "A _Game Engine_ is a software framework designed to facilitate the creation and development of video games. Developers use them to create games for consoles, mobile devices, and personal computers. The core functionality typically provided by a game engine includes a rendering engine (\"renderer\") for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation, artificial intelligence, networking, streaming, memory management, and a scene graph. Game Engines can save a significant amount of development time by providing these reusable components. However, they aren't one-size-fits-all solutions, as developers must still customize much of the code to fit their games' unique needs. Some popular game engines are Unity, Unreal Engine, and Godot.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game Engine",
+ "url": "https://en.wikipedia.org/wiki/Game_engine",
+ "type": "article"
+ },
+ {
+ "title": "Choosing a Game Engine is Easy!",
+ "url": "https://www.youtube.com/watch?v=aMgB018o71U",
+ "type": "video"
+ }
+ ]
},
"7OffO2mBmfBKqPBTZ9ngI": {
"title": "Godot",
- "description": "Godot is an open-source, multi-platform game engine that is known for being feature-rich and user-friendly. It is developed by hundreds of contributors from around the world and supports the creation of both 2D and 3D games. Godot uses its own scripting language, GDScript, which is similar to Python, but it also supports C# and visual scripting. It is equipped with a unique scene system and comes with a multitude of tools that can expedite the development process. Godot's design philosophy centers around flexibility, extensibility, and ease of use, providing a handy tool for both beginners and pros in game development.",
+ "description": "Godot is an open-source, multi-platform game engine that is known for being feature-rich and user-friendly. It is developed by hundreds of contributors from around the world and supports the creation of both 2D and 3D games. Godot uses its own scripting language, GDScript, which is similar to Python, but it also supports C# and visual scripting. It is equipped with a unique scene system and comes with a multitude of tools that can expedite the development process. Godot's design philosophy centers around flexibility, extensibility, and ease of use, providing a handy tool for both beginners and pros in game development.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "godotengine/godot",
+ "url": "https://github.com/godotengine/godot",
+ "type": "opensource"
+ },
+ {
+ "title": "Godot",
+ "url": "https://godotengine.org/",
+ "type": "article"
+ },
+ {
+ "title": "Godot Documentation",
+ "url": "https://docs.godotengine.org/en/stable/",
+ "type": "article"
+ },
{
"title": "Godot in 100 Seconds",
"url": "https://m.youtube.com/watch?v=QKgTZWbwD1U",
@@ -328,7 +765,7 @@
},
"a6H-cZtp3A_fB8jnfMxBR": {
"title": "Unreal Engine",
- "description": "The **Unreal Engine** is a powerful game development engine created by Epic Games. Used by game developers worldwide, it supports the creation of high-quality games across multiple platforms such as iOS, Android, Windows, Mac, Xbox, and PlayStation. Unreal Engine is renowned for its photo-realistic rendering, dynamic physics and effects, robust multiplayer framework, and its flexible scripting system called Blueprint. The engine is also fully equipped with dedicated tools and functionalities for animation, AI, lighting, cinematography, and post-processing effects. The most recent version, Unreal Engine 5, introduces real-time Global Illumination and makes film-quality real-time graphics achievable.",
+ "description": "The **Unreal Engine** is a powerful game development engine created by Epic Games. Used by game developers worldwide, it supports the creation of high-quality games across multiple platforms such as iOS, Android, Windows, Mac, Xbox, and PlayStation. Unreal Engine is renowned for its photo-realistic rendering, dynamic physics and effects, robust multiplayer framework, and its flexible scripting system called Blueprint. The engine is also fully equipped with dedicated tools and functionalities for animation, AI, lighting, cinematography, and post-processing effects. The most recent version, Unreal Engine 5, introduces real-time Global Illumination and makes film-quality real-time graphics achievable.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Unreal Engine Documentation",
@@ -354,13 +791,49 @@
},
"CeAUEN233L4IoFSZtIvvl": {
"title": "Native",
- "description": "You don't necessarily have to use tools like Unreal, Unity3d, or Godot to make games. You can also use native languages like C++ or Rust to make games. However, you will have to do a lot of work yourself, and you will have to learn a lot of things that are already done for you in game engines.",
- "links": []
+ "description": "You don't necessarily have to use tools like Unreal, Unity3d, or Godot to make games. You can also use native languages like C++ or Rust to make games. However, you will have to do a lot of work yourself, and you will have to learn a lot of things that are already done for you in game engines.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Visit Dedicated C++ Roadmap",
+ "url": "https://roadmap.sh/cpp",
+ "type": "article"
+ },
+ {
+ "title": "Visit Dedicated Rust Roadmap",
+ "url": "https://roadmap.sh/rust",
+ "type": "article"
+ },
+ {
+ "title": "Learn Game Development with C++",
+ "url": "https://learn.microsoft.com/en-us/cpp/overview/game-development-cpp?view=msvc-170",
+ "type": "article"
+ },
+ {
+ "title": "Building Games with Rust",
+ "url": "https://rustmeup.com/building-games-with-rust",
+ "type": "article"
+ }
+ ]
},
"rNeOti8DDyWTMP9FB9kJ_": {
"title": "Unity 3D",
- "description": "**Unity 3D** is a versatile, cross-platform game engine that supports the development of both 2D and 3D games. This game engine allows users to create a wide variety of games including AR, VR, Mobile, Consoles, and Computers in C#. It provides a host of powerful features and tools, such as scripting, asset bundling, scene building, and simulation, to assist developers in creating interactive content. Unity 3D also boasts a large, active community that regularly contributes tutorials, scripts, assets, and more, making it a robust platform for all levels of game developers.",
+ "description": "**Unity 3D** is a versatile, cross-platform game engine that supports the development of both 2D and 3D games. This game engine allows users to create a wide variety of games including AR, VR, Mobile, Consoles, and Computers in C#. It provides a host of powerful features and tools, such as scripting, asset bundling, scene building, and simulation, to assist developers in creating interactive content. Unity 3D also boasts a large, active community that regularly contributes tutorials, scripts, assets, and more, making it a robust platform for all levels of game developers.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Unity",
+ "url": "https://unity.com/",
+ "type": "article"
+ },
+ {
+ "title": "Unity Engine for Video Games",
+ "url": "https://unity.com/products/unity-engine",
+ "type": "article"
+ },
+ {
+ "title": "Creating a 3D Game",
+ "url": "https://docs.unity3d.com/2021.2/Documentation/Manual/Quickstart3DCreate.html",
+ "type": "article"
+ },
{
"title": "Unity in 100 Seconds",
"url": "https://www.youtube.com/watch?v=iqlH4okiQqg",
@@ -370,18 +843,66 @@
},
"4YgbrXLXf5mfaL2tlYkzk": {
"title": "Programming Languages",
- "description": "Programming languages are very crucial to game development as they are the backbone of game design and functionality. A variety of languages can be used, but some are more commonly preferred in the industry due to their robustness and efficiency. The most popular ones include C++, C#, and Java. **C++**, a high-level language primarily used for developing video games, is known for its speed and efficiency. **C#**, which was developed by Microsoft, is extensively used with the Unity game engine to develop multi-platform games. **Java** is well-established in the sector as well, and it often utilized in the development of Android games. It's pivotal for a game developer to select a language that aligns with the project's requirements and nature. Despite the programming language you choose, a deep understanding of its constructs, logic, and capabilities is required for successful game development.",
- "links": []
+ "description": "Programming languages are very crucial to game development as they are the backbone of game design and functionality. A variety of languages can be used, but some are more commonly preferred in the industry due to their robustness and efficiency. The most popular ones include C++, C#, and Java. **C++**, a high-level language primarily used for developing video games, is known for its speed and efficiency. **C#**, which was developed by Microsoft, is extensively used with the Unity game engine to develop multi-platform games. **Java** is well-established in the sector as well, and it often utilized in the development of Android games. It's pivotal for a game developer to select a language that aligns with the project's requirements and nature. Despite the programming language you choose, a deep understanding of its constructs, logic, and capabilities is required for successful game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Visit Dedicated C++ Roadmap",
+ "url": "https://roadmap.sh/cpp",
+ "type": "article"
+ },
+ {
+ "title": "Visit Dedicated Rust Roadmap",
+ "url": "https://roadmap.sh/rust",
+ "type": "article"
+ },
+ {
+ "title": "Visit Dedicated Java Roadmap",
+ "url": "https://roadmap.sh/java",
+ "type": "article"
+ },
+ {
+ "title": "Learn Game Development with C++",
+ "url": "https://learn.microsoft.com/en-us/cpp/overview/game-development-cpp?view=msvc-170",
+ "type": "article"
+ },
+ {
+ "title": "Building Games with Rust",
+ "url": "https://rustmeup.com/building-games-with-rust",
+ "type": "article"
+ }
+ ]
},
"jsq0UXnIIC0Z_nbK2w48f": {
"title": "C/C++",
- "description": "**C** and **C++ (commonly known as CPP)** are two of the most foundational high-level programming languages in computer science. **C** was developed in the 1970s and it is a procedural language, meaning it follows a step-by-step approach. Its fundamental principles include structured programming and lexical variable scope.\n\nOn the other hand, **C++** follows the paradigm of both procedural and object-oriented programming. It was developed as an extension to C to add the concept of \"classes\" - a core feature of object-oriented programming. C++ enhances C by introducing new features like function overloading, exception handling, and templates.\n\nBoth of these languages heavily influence modern game development, where they often serve as the backend for major game engines like Unreal. Game developers use these languages for tasks related to rendering graphics, compiling game logic, and optimizing performance.",
- "links": []
+ "description": "**C** and **C++ (commonly known as CPP)** are two of the most foundational high-level programming languages in computer science. **C** was developed in the 1970s and it is a procedural language, meaning it follows a step-by-step approach. Its fundamental principles include structured programming and lexical variable scope.\n\nOn the other hand, **C++** follows the paradigm of both procedural and object-oriented programming. It was developed as an extension to C to add the concept of \"classes\" - a core feature of object-oriented programming. C++ enhances C by introducing new features like function overloading, exception handling, and templates.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "The C Programming Language",
+ "url": "https://www.iso.org/standard/74528.html",
+ "type": "article"
+ },
+ {
+ "title": "C++ Programming Language",
+ "url": "https://en.wikipedia.org/wiki/C%2B%2B",
+ "type": "article"
+ }
+ ]
},
"Ph3ZqmSnwwzUBUC-6dgf-": {
"title": "C#",
- "description": "**CSharp (C#)** is a modern, object-oriented programming language developed and maintained by Microsoft. It's primarily used for developing desktop applications and, more prominently, for Windows applications within the [Microsoft.Net](http://Microsoft.Net) framework. However, the language is versatile and has a wide range of uses in web services, websites, enterprise software, and even mobile app development. C# is known for its simplicity, type-safety, and support for component-oriented software development. It's also been adopted by Unity, a widely used game engine, thus making it one of the preferred languages for game development.",
- "links": []
+ "description": "**CSharp (C#)** is a modern, object-oriented programming language developed and maintained by Microsoft. It's primarily used for developing desktop applications and, more prominently, for Windows applications within the [Microsoft.Net](http://Microsoft.Net) framework. However, the language is versatile and has a wide range of uses in web services, websites, enterprise software, and even mobile app development. C# is known for its simplicity, type-safety, and support for component-oriented software development. It's also been adopted by Unity, a widely used game engine, thus making it one of the preferred languages for game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Learn C#",
+ "url": "https://learn.microsoft.com/en-us/dotnet/csharp/",
+ "type": "article"
+ },
+ {
+ "title": "C Sharp Programming Language",
+ "url": "https://en.wikipedia.org/wiki/C_Sharp_(programming_language)",
+ "type": "article"
+ }
+ ]
},
"AaRZiItRcn8fYb5R62vfT": {
"title": "GDScript",
@@ -401,18 +922,60 @@
},
"ts9pWxUimvFqfNJYCmNNw": {
"title": "Rust",
- "description": "**Rust** is a modern, open-source, multi-paradigm programming language designed for performance and safety, especially safe concurrency. It was initially designed by Mozilla Research as a language that can provide memory safety without garbage collection. Since then, it has gained popularity due to its features and performance that often compare favorably to languages like C++. Its rich type system and ownership model guarantee memory-safety and thread-safety while maintaining a high level of abstraction. Rust supports a mixture of imperative procedural, concurrent actor, object-oriented and pure functional styles.\n\n[Learn Rust full tutorial](https://youtu.be/BpPEoZW5IiY?si=lyBbBPLXQ0HWdJNr)",
- "links": []
+ "description": "**Rust** is a modern, open-source, multi-paradigm programming language designed for performance and safety, especially safe concurrency. It was initially designed by Mozilla Research as a language that can provide memory safety without garbage collection. Since then, it has gained popularity due to its features and performance that often compare favorably to languages like C++. Its rich type system and ownership model guarantee memory-safety and thread-safety while maintaining a high level of abstraction. Rust supports a mixture of imperative procedural, concurrent actor, object-oriented and pure functional styles.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Visit Dedicated Rust Roadmap",
+ "url": "https://roadmap.sh/rust",
+ "type": "article"
+ },
+ {
+ "title": "Building Games with Rust",
+ "url": "https://rustmeup.com/building-games-with-rust",
+ "type": "article"
+ },
+ {
+ "title": "Learn Rust",
+ "url": "https://youtu.be/BpPEoZW5IiY?si=lyBbBPLXQ0HWdJNr",
+ "type": "video"
+ }
+ ]
},
"AJp_QRLgSG5ETXDIjUjmm": {
"title": "Python",
- "description": "Python is a popular high-level programming language that was designed by Guido van Rossum and published in 1991. It is preferred for its simplicity in learning and usage, making it a great choice for beginners. Python's design philosophy emphasizes code readability with its use of significant indentation. Its language constructs and object-oriented approach aim to help developers write clear, logical code for small and large-scale projects. Python is dynamically-typed and garbage-collected. Moreover, it supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often used for web development, software development, database operations, and machine learning. Although not typically used for game development, some game developers utilize Python for scripting and automating tasks.",
- "links": []
+ "description": "Python is a popular high-level programming language that was designed by Guido van Rossum and published in 1991. It is preferred for its simplicity in learning and usage, making it a great choice for beginners. Python's design philosophy emphasizes code readability with its use of significant indentation. Its language constructs and object-oriented approach aim to help developers write clear, logical code for small and large-scale projects. Python is dynamically-typed and garbage-collected. Moreover, it supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often used for web development, software development, database operations, and machine learning. Although not typically used for game development, some game developers utilize Python for scripting and automating tasks.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Visit Dedicated Python Roadmap",
+ "url": "https://roadmap.sh/python",
+ "type": "article"
+ },
+ {
+ "title": "Python",
+ "url": "https://www.python.org/",
+ "type": "article"
+ },
+ {
+ "title": "Python Documentation",
+ "url": "https://www.python.org/doc/",
+ "type": "article"
+ }
+ ]
},
"lIb5MeDoqVj6HycveOgTS": {
"title": "Computer Graphics",
"description": "Computer Graphics is a subfield of computer science that studies methods for digitally synthesizing and manipulating visual content. It involves creating and manipulating visual content using specialized computer software and hardware. This field is primarily used in the creation of digital and video games, CGI in films, and also in visual effects for commercials. The field is divided into two major categories: **Raster graphics** and **Vector graphics**. Raster graphics, also known as bitmap, involve the representation of images through a dot matrix data structure, while Vector graphics involve the use of polygons to represent images in computer graphics. Both of these methods have their unique usage scenarios. Other concepts integral to the study of computer graphics include rendering (including both real-time rendering and offline rendering), animation, and 3D modeling. Generally, computer graphics skills are essential for game developers and animation experts.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "What is Computer Graphics?",
+ "url": "https://www.geeksforgeeks.org/introduction-to-computer-graphics/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to Computer Graphics",
+ "url": "https://open.umn.edu/opentextbooks/textbooks/420",
+ "type": "article"
+ },
{
"title": "How do Video Game Graphics Work?",
"url": "https://www.youtube.com/watch?v=C8YtdC8mxTU",
@@ -422,71 +985,175 @@
},
"JW5c_0JEtO-OiBoXUia6A": {
"title": "Ray Tracing",
- "description": "Ray tracing is a rendering technique in computer graphics that simulates the physical behavior of light. It generates images with a high degree of visual realism, as it captures shadows, reflections, and refracts light. Ray tracing follows the path of light backwards from the camera (eye) to the source (light object), calculating the color of each pixel in the image on the way. The color value calculation considers the object from which the ray has reflected or refracted, and the nature of the light source i.e. whether it's ambient, point or spot. Ray tracing algorithm handles effects that rasterization algorithms like scanline rendering and 'Z-buffer' find complex to handle.",
- "links": []
+ "description": "Ray tracing is a rendering technique in computer graphics that simulates the physical behavior of light. It generates images with a high degree of visual realism, as it captures shadows, reflections, and refracts light. Ray tracing follows the path of light backwards from the camera (eye) to the source (light object), calculating the color of each pixel in the image on the way. The color value calculation considers the object from which the ray has reflected or refracted, and the nature of the light source i.e. whether it's ambient, point or spot. Ray tracing algorithm handles effects that rasterization algorithms like scanline rendering and 'Z-buffer' find complex to handle.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What is Ray Tracing?",
+ "url": "https://www.pcmag.com/how-to/what-is-ray-tracing-and-what-it-means-for-pc-gaming",
+ "type": "article"
+ },
+ {
+ "title": "Nvidia GeForce RTX",
+ "url": "https://www.nvidia.com/en-us/geforce/rtx/",
+ "type": "article"
+ }
+ ]
},
"vYNj9nzu90e9xlrzHULnP": {
"title": "Rasterization",
- "description": "In the realm of computer graphics, **Rasterization** refers to the process of converting the image data into a bitmap form, i.e., pixels or dots. It is predominantly used in 3D rendering where three-dimensional polygonal shapes are transformed into a two-dimensional image, possessing height, width, and color data. It is a scan-conversion process where vertices and primitives, upon being processed through the graphics pipeline, are mathematically converted into fragments. Every fragment finds its position in a raster grid. The process culminates in fragments becoming pixels in the frame buffer, the final rendered image you see on the screen. However, it's essential to note that rasterization does limit the image's resolution to the resolution of the device on which it is displayed.",
- "links": []
+ "description": "In the realm of computer graphics, **Rasterization** refers to the process of converting the image data into a bitmap form, i.e., pixels or dots. It is predominantly used in 3D rendering where three-dimensional polygonal shapes are transformed into a two-dimensional image, possessing height, width, and color data. It is a scan-conversion process where vertices and primitives, upon being processed through the graphics pipeline, are mathematically converted into fragments. Every fragment finds its position in a raster grid. The process culminates in fragments becoming pixels in the frame buffer, the final rendered image you see on the screen. However, it's essential to note that rasterization does limit the image's resolution to the resolution of the device on which it is displayed.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "3D Rendering Rasterization",
+ "url": "https://www.techspot.com/article/1888-how-to-3d-rendering-rasterization-ray-tracing/",
+ "type": "article"
+ }
+ ]
},
"shSRnMf4NONuZ3TGPAoQc": {
"title": "Graphics Pipeline",
- "description": "The **Graphics Pipeline**, also often referred to as the rendering pipeline, is a sequence of steps that a graphics system follows to convert a 3D model into a 2D image or view that can be displayed onto a screen. These steps typically include transformation, clipping, lighting, rasterization, shading, and other processes. Each step in the pipeline represents an operation that prepares or manipulates data to be used in downstream stages. The pipeline begins with a high-level description of a scene and ends with the final image rendered onto the screen. It is a primary concept in computer graphics that developers should learn as it can help in efficient rendering and high-quality visualization.",
- "links": []
+ "description": "The **Graphics Pipeline**, also often referred to as the rendering pipeline, is a sequence of steps that a graphics system follows to convert a 3D model into a 2D image or view that can be displayed onto a screen. These steps typically include transformation, clipping, lighting, rasterization, shading, and other processes. Each step in the pipeline represents an operation that prepares or manipulates data to be used in downstream stages. The pipeline begins with a high-level description of a scene and ends with the final image rendered onto the screen. It is a primary concept in computer graphics that developers should learn as it can help in efficient rendering and high-quality visualization.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Graphics Pipelines",
+ "url": "https://www.cs.cornell.edu/courses/cs4620/2020fa/slides/11pipeline.pdf",
+ "type": "article"
+ },
+ {
+ "title": "Definition of Graphics Pipeline",
+ "url": "https://www.pcmag.com/encyclopedia/term/graphics-pipeline",
+ "type": "article"
+ }
+ ]
},
"rmtxybcavWV6A53R4ZWgc": {
"title": "Sampling",
- "description": "**Sampling** in computer graphics is a method used to convert a continuous mathematical function (image, signal, light and sound), into a discrete digital representation. The process is done by taking snapshots at regular intervals which are also known as samples, and it's this that gives us the concept of 'sampling'. Some common types of sampling techniques include: uniform sampling (evenly spaced samples), random sampling (samples taken at random intervals), and jittered sampling (a compromise between uniform and random sampling). The higher the sampling rate, the more accurately the original function can be reconstructed from the discrete samples. Effective sampling is a significant aspect of achieving realistic computer graphics.",
- "links": []
+ "description": "**Sampling** in computer graphics is a method used to convert a continuous mathematical function (image, signal, light and sound), into a discrete digital representation. The process is done by taking snapshots at regular intervals which are also known as samples, and it's this that gives us the concept of 'sampling'. Some common types of sampling techniques include: uniform sampling (evenly spaced samples), random sampling (samples taken at random intervals), and jittered sampling (a compromise between uniform and random sampling). The higher the sampling rate, the more accurately the original function can be reconstructed from the discrete samples. Effective sampling is a significant aspect of achieving realistic computer graphics.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Textures and Sampling",
+ "url": "https://cglearn.eu/pub/computer-graphics/textures-and-sampling",
+ "type": "article"
+ }
+ ]
},
"qIrePusMuvcUva9LMDmDx": {
"title": "Shader",
- "description": "Shaders are a type of software used in 3D computer graphics. They are utilized to render quality visual effects by making calculations and transformations on image data. Also, a shader is responsible for determining the final color of an object. There are several types of shaders: vertex shaders, geometry shaders, pixel shaders, and compute shaders. Each of these is programmed to manipulate specific attributes of an image, such as its vertices, pixels, and overall geometry. They are essential tools for game developers aiming to produce realistic and engaging visual experiences.",
- "links": []
+ "description": "Shaders are a type of software used in 3D computer graphics. They are utilized to render quality visual effects by making calculations and transformations on image data. Also, a shader is responsible for determining the final color of an object. There are several types of shaders: vertex shaders, geometry shaders, pixel shaders, and compute shaders. Each of these is programmed to manipulate specific attributes of an image, such as its vertices, pixels, and overall geometry. They are essential tools for game developers aiming to produce realistic and engaging visual experiences.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What Are Shaders in Video Games?",
+ "url": "https://www.gamedesigning.org/learn/shaders/",
+ "type": "article"
+ },
+ {
+ "title": "A Beginner's Guide to Coding Graphics Shaders",
+ "url": "https://gamedevelopment.tutsplus.com/a-beginners-guide-to-coding-graphics-shaders--cms-23313t",
+ "type": "article"
+ }
+ ]
},
"WVgozaQPFbYthZLWMbNUg": {
"title": "Rendering Equation",
"description": "The **Render Equation**, also known as the **Rendering Equation**, is a fundamental principle in computer graphics that serves as the basis for most advanced lighting algorithms today. First introduced by James Kajiya in 1986, it defines how light interacts with physical objects in a given environment. The equation tries to simulate light's behavior, taking into account aspects such as transmission, absorption, scattering, and emission. The equation can be computationally intensive to solve accurately. It's worth mentioning, however, that many methods have been developed to approximate and solve it, allowing the production of highly realistic images in computer graphics.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Interactive Graphics 12 - The Rendering Equation",
- "url": "https://www.youtube.com/watch?v=wawf7Am6xy0",
+ "title": "Rendering Equation",
+ "url": "https://en.wikipedia.org/wiki/Rendering_equation",
+ "type": "article"
+ },
+ {
+ "title": "Interactive Graphics 12 - The Rendering Equation",
+ "url": "https://www.youtube.com/watch?v=wawf7Am6xy0",
"type": "video"
}
]
},
"eI2jym4AAz3ani-lreSKE": {
"title": "Reflection",
- "description": "Reflection in game development, specifically in shaders, is a phenomena that simulates the bouncing off of light from objects similar to the way it happens in the real world. Shaders replicate this effect by emitting rays from the lighting source against the object's surface. When the ray strikes the surface, it will calculate the light’s color and angle to define how light should reflect off that surface. Reflection in shaders can further be classified into two types: Specular Reflection and Diffuse Reflection. Specular Reflection is the mirror-like reflection of light from a surface, where each incident ray is reflected with the light ray reflected at an equal but opposite angle. Diffuse Reflection, on the other hand, is the reflection of light into many directions, giving a softer effect. These reflections are quantified in computer graphics often using a reflection model such as the Phong reflection model or the Lambertian reflectance model.",
- "links": []
+ "description": "Reflection in game development, specifically in shaders, is a phenomena that simulates the bouncing off of light from objects similar to the way it happens in the real world. Shaders replicate this effect by emitting rays from the lighting source against the object's surface. When the ray strikes the surface, it will calculate the light’s color and angle to define how light should reflect off that surface. Reflection in shaders can further be classified into two types: Specular Reflection and Diffuse Reflection. Specular Reflection is the mirror-like reflection of light from a surface, where each incident ray is reflected with the light ray reflected at an equal but opposite angle. Diffuse Reflection, on the other hand, is the reflection of light into many directions, giving a softer effect. These reflections are quantified in computer graphics often using a reflection model such as the Phong reflection model or the Lambertian reflectance model.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Rendering Perfect Reflections",
+ "url": "https://developer.nvidia.com/blog/rendering-perfect-reflections-and-refractions-in-path-traced-games/",
+ "type": "article"
+ },
+ {
+ "title": "How Can We Use Reflection in Game Development?",
+ "url": "https://www.youtube.com/watch?v=R1A86lZ8myQ",
+ "type": "video"
+ }
+ ]
},
"0g1z5G2dsF4PTIfFAG984": {
"title": "Diffuse",
- "description": "",
- "links": []
+ "description": "In the world of 3D rendering and game development, \"diffuse\" refers to diffuse lighting or diffuse reflection. It's a key concept in making objects look three-dimensional and realistically lit.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Difference Between Albedo and Diffuse Map",
+ "url": "https://www.a23d.co/blog/difference-between-albedo-and-diffuse-map/",
+ "type": "article"
+ },
+ {
+ "title": "Complete Guide to Learn Texture Styles",
+ "url": "https://cgobsession.com/complete-guide-to-texture-map-types/",
+ "type": "article"
+ }
+ ]
},
"odfZWKtPbb-lC35oeTCNV": {
"title": "Specular",
- "description": "`Specular` reflections are mirror-like reflections. In these cases, the rays of light are reflected, more than they are absorbed. The angle of incidence is equal to the angle of reflection, that is to say that the angle at which the light enters the medium and then bounces off, the angle of the beam that bounced off would be the same.\n\nLearn more from the following resources:\n\n\\-[@video@Specular reflection](https://www.youtube.com/watch?v=2cFvJkc4pQk)",
- "links": []
+ "description": "`Specular` reflections are mirror-like reflections. In these cases, the rays of light are reflected, more than they are absorbed. The angle of incidence is equal to the angle of reflection, that is to say that the angle at which the light enters the medium and then bounces off, the angle of the beam that bounced off would be the same.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Specular Reflections - Wiki",
+ "url": "https://en.wikipedia.org/wiki/Specular_reflection",
+ "type": "article"
+ },
+ {
+ "title": "Specular Reflection - Regular Reflection, Mirrors, Diffuse Reflection",
+ "url": "https://www.rp-photonics.com/specular_reflection.html",
+ "type": "article"
+ },
+ {
+ "title": "Specular Reflection",
+ "url": "https://www.youtube.com/watch?v=2cFvJkc4pQk",
+ "type": "video"
+ }
+ ]
},
"THMmnx8p_P0X-dSPoHvst": {
"title": "Mapping",
- "description": "\"Mapping\" in game development, especially in the context of shaders, predominantly refers to Texture Mapping and Normal Mapping.\n\n* **Texture Mapping**: This is the application of a texture (an image or colour data) onto a 3D model's surface. It's a process of defining how a 2D surface wraps around a 3D model or the way that a flat image is stretched across a model's surface to paint its appearance. This could be anything from the colour of objects to their roughness or reflectivity.\n \n* **Normal Mapping**: This is a technique used to create the illusion of complexity in the surface of a 3D model without adding any additional geometry. A Normal Map is a special kind of texture that allows the addition of surface details, such as bumps, grooves, and scratches which catch the light as if they are represented by real geometry, making a low-polygon model appear as a much more complex shape.",
- "links": []
+ "description": "\"Mapping\" in game development, especially in the context of shaders, predominantly refers to Texture Mapping and Normal Mapping. **Texture Mapping** is the application of a texture (an image or colour data) onto a 3D model's surface. It's a process of defining how a 2D surface wraps around a 3D model or the way that a flat image is stretched across a model's surface to paint its appearance. This could be anything from the colour of objects to their roughness or reflectivity. Whereas, **Normal Mapping** is a technique used to create the illusion of complexity in the surface of a 3D model without adding any additional geometry. A Normal Map is a special kind of texture that allows the addition of surface details, such as bumps, grooves, and scratches which catch the light as if they are represented by real geometry, making a low-polygon model appear as a much more complex shape.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Designing Maps",
+ "url": "https://www.gamedeveloper.com/design/designing-maps-that-complement-game-mechanics",
+ "type": "article"
+ },
+ {
+ "title": "Mapping and Tiles in Game Development",
+ "url": "https://code.tutsplus.com/an-introduction-to-creating-a-tile-map-engine--gamedev-10900t",
+ "type": "article"
+ }
+ ]
},
"iBZ1JsEWI0xuLgUvfWfl-": {
"title": "Texture",
- "description": "`Texture` is the visual quality of an object. Where the `mesh` determines the shape or `topology` of an object, the texture describes the quality of said object. For instance, if there is a spherical mesh, is it supposed to be shiny? is it supposed to be rough? is it supposed to be of rock or of wood? questions of this ilk are often resolved using textures. Textures are often just 2D images that are wrapped onto 3D meshes. The 3D mesh is first divided into segments and unfurled; the 3D meshes are converted into 2D chunks, this process is known as `UV Unwrapping`. Once a mesh has been unwrapped, the textures in the form of an image are applied to the 2D chunks of the 3D mesh, this way the texture knows how to properly wrap around the mesh and avoid any conflicts. Textures determine the visual feel and aesthetics of the game.\n\nLearn more from the following resources:",
+ "description": "`Texture` is the visual quality of an object. Where the `mesh` determines the shape or `topology` of an object, the texture describes the quality of said object. For instance, if there is a spherical mesh, is it supposed to be shiny? is it supposed to be rough? is it supposed to be of rock or of wood? questions of this ilk are often resolved using textures. Textures are often just 2D images that are wrapped onto 3D meshes. The 3D mesh is first divided into segments and unfurled; the 3D meshes are converted into 2D chunks, this process is known as `UV Unwrapping`. Once a mesh has been unwrapped, the textures in the form of an image are applied to the 2D chunks of the 3D mesh, this way the texture knows how to properly wrap around the mesh and avoid any conflicts. Textures determine the visual feel and aesthetics of the game.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "How Nintendo textures work",
+ "title": "Textures and Materials",
+ "url": "https://gamedevinsider.com/making-games/game-artist/texturing-and-materials/",
+ "type": "article"
+ },
+ {
+ "title": "How Nintendo Textures Work",
"url": "https://www.youtube.com/watch?v=WrCMzHngLxI",
"type": "video"
},
{
- "title": "How Pixar textures work",
+ "title": "How Pixar Textures work",
"url": "https://www.youtube.com/watch?v=o_I6jxlN-Ck",
"type": "video"
}
@@ -494,8 +1161,13 @@
},
"r4UkMd5QURbvJ3Jlr_H9H": {
"title": "Bump",
- "description": "`Bump` is very similar to texture. It is, as a matter of fact, a type of texture itself. If you take the texture of a bricked wall, it will becoming increasingly obvious that the amount of detail present inside the wall, if geometrically processed would be incredibly demanding and wasteful. In order to combat this ineffeciency, the `bump` maps were created. Traditionally, a flat texture would just be an image of something called a `color map`, that is to say, where does each individual color of the pixel should be to represent a texture. When you take the picture of your floor, wall, or any object, that image in essence is the color map. The bump map is different as it informs the texture about it's `normal` values. So, if you take a flat 2D mesh and apply a bump map on it, it will render the same 2D mesh with all the normal values baked into the flat 2D mesh, creating a graphically effect mimicking 3-dimensionality.\n\nThere is also something known as a normal map, and displacement maps.\n\nLearn more from the following resources:",
+ "description": "`Bump` is very similar to texture. It is, as a matter of fact, a type of texture itself. If you take the texture of a bricked wall, it will becoming increasingly obvious that the amount of detail present inside the wall, if geometrically processed would be incredibly demanding and wasteful. In order to combat this ineffeciency, the `bump` maps were created. Traditionally, a flat texture would just be an image of something called a `color map`, that is to say, where does each individual color of the pixel should be to represent a texture. When you take the picture of your floor, wall, or any object, that image in essence is the color map. The bump map is different as it informs the texture about it's `normal` values. So, if you take a flat 2D mesh and apply a bump map on it, it will render the same 2D mesh with all the normal values baked into the flat 2D mesh, creating a graphically effect mimicking 3-dimensionality.\n\nLearn more from the following resources:",
"links": [
+ {
+ "title": "Bump Maps",
+ "url": "https://developer.valvesoftware.com/wiki/Bump_map",
+ "type": "article"
+ },
{
"title": "Normals, Normal maps and Bump maps",
"url": "https://www.youtube.com/watch?v=l5PYyzsZED8",
@@ -510,18 +1182,41 @@
},
"YGeGleEN203nokiZIYJN8": {
"title": "Parallax",
- "description": "",
- "links": []
+ "description": "In game development and graphics, parallax refers to the apparent displacement or difference in the apparent position of an object viewed along two different lines of sight, and is measured by the angle or semi-angle of inclination between those two lines. In simpler terms, parallax is a technique used to create the illusion of depth in 2D environments by moving background layers at different speeds relative to the foreground.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Parallax Effect",
+ "url": "https://www.encora.com/insights/how-to-take-advantage-of-parallax-in-programming-and-video-games",
+ "type": "article"
+ }
+ ]
},
"9cBOfj58I4hBlxlQIyV9g": {
"title": "Horizon",
- "description": "",
- "links": []
+ "description": "In graphics, the horizon refers to a horizontal line that represents the visual boundary where the sky meets the earth or a flat surface, such as the ocean. It's a fundamental concept in perspective and art, serving as a reference point for creating a sense of depth and distance in an image. The placement of the horizon line can significantly impact the composition and mood of an image, with high or low horizon lines creating different effects on the viewer's perception of the scene.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Vanishing Point",
+ "url": "https://en.wikipedia.org/wiki/Vanishing_point",
+ "type": "article"
+ }
+ ]
},
"1RdyzTI_TXqmct2bIbNh9": {
"title": "Computer Animation",
- "description": "Computer Animation\n------------------\n\nComputer animation refers to the art of creating moving images via the use of computers. Increasingly, it's becoming a critical component in the game development industry. Essentially, it's divided into two categories, 2D animation and 3D animation. 2D animation, also referred to as vector animation, involves creation of images in a two-dimensional environment, including morphing, twining, and onion skinning. On the other hand, 3D animation, also known as CGI, involves moving objects and characters in a three-dimensional space. The animation process typically involves the creation of a mathematical representation of a three-dimensional object. This object is then manipulated within a virtual space by an animator to create the final animation. Software like Unity, Maya, and Blender are commonly used for computer animation in game development.",
- "links": []
+ "description": "Computer animation refers to the art of creating moving images via the use of computers. Increasingly, it's becoming a critical component in the game development industry. Essentially, it's divided into two categories, 2D animation and 3D animation. 2D animation, also referred to as vector animation, involves creation of images in a two-dimensional environment, including morphing, twining, and onion skinning. On the other hand, 3D animation, also known as CGI, involves moving objects and characters in a three-dimensional space. The animation process typically involves the creation of a mathematical representation of a three-dimensional object. This object is then manipulated within a virtual space by an animator to create the final animation. Software like Unity, Maya, and Blender are commonly used for computer animation in game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Computer Animation",
+ "url": "https://www.adobe.com/uk/creativecloud/animation/discover/computer-animation.html",
+ "type": "article"
+ },
+ {
+ "title": "What is Computer Animation?",
+ "url": "https://unity.com/topics/what-is-computer-animation",
+ "type": "article"
+ }
+ ]
},
"WK6fLWJq9Vh2ySVrSqd-U": {
"title": "Color",
@@ -536,58 +1231,164 @@
},
"1S1qPogijW2SQCiF7KLZe": {
"title": "Visual Perception",
- "description": "Visual Perception is a fundamental aspect of game development, widely explored within the field of computer graphics. It involves the ability to interpret and understand the visual information that our eyes receive, essential to create immersive and dynamic visual experiences in games. The study involves the understanding of light, color, shape, form, depth, and motion, among others, which are key elements to create aesthetically pleasing and engaging graphics. Making full use of visual perception allows the game developers to control and manipulate how the gamers interact with and experience the game world, significantly enhancing not only the visual appeal but also the overall gameplay.",
- "links": []
+ "description": "Visual Perception is a fundamental aspect of game development, widely explored within the field of computer graphics. It involves the ability to interpret and understand the visual information that our eyes receive, essential to create immersive and dynamic visual experiences in games. The study involves the understanding of light, color, shape, form, depth, and motion, among others, which are key elements to create aesthetically pleasing and engaging graphics. Making full use of visual perception allows the game developers to control and manipulate how the gamers interact with and experience the game world, significantly enhancing not only the visual appeal but also the overall gameplay.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Visual Psychology and Perception",
+ "url": "https://www.gamedeveloper.com/design/it-s-all-in-your-mind-visual-psychology-and-perception-in-game-design",
+ "type": "article"
+ },
+ {
+ "title": "Expanding the Video Game Concept",
+ "url": "https://link.springer.com/chapter/10.1007/978-3-030-45545-3_7",
+ "type": "article"
+ }
+ ]
},
"RgC9TOc0wbr2QSuvrpIDV": {
"title": "Tone Reproduction",
- "description": "`Tone Reproduction` or `Tone Mapping` is the technique used in computer graphics to simulate the appearance of high-dynamic-range images in media with a more limited dynamic range. Print-outs, CRT, LCD monitors, and other displays can only reproduce a reduced dynamic range. This technique is widely used in gaming development, where developers employ it to improve the visual experience. The process involves taking light from a scene and mapping it to a smaller range of tones while preserving the visual appearance—i.e., regarding brightness, saturation, and hue. There are various tone mapping algorithms available, each with unique attributes suitable for different imaging tasks.",
- "links": []
+ "description": "`Tone Reproduction` or `Tone Mapping` is the technique used in computer graphics to simulate the appearance of high-dynamic-range images in media with a more limited dynamic range. Print-outs, CRT, LCD monitors, and other displays can only reproduce a reduced dynamic range. This technique is widely used in gaming development, where developers employ it to improve the visual experience. The process involves taking light from a scene and mapping it to a smaller range of tones while preserving the visual appearance—i.e., regarding brightness, saturation, and hue. There are various tone mapping algorithms available, each with unique attributes suitable for different imaging tasks.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Tone Mapping",
+ "url": "https://graphics-programming.org/resources/tonemapping/index.html",
+ "type": "article"
+ },
+ {
+ "title": "Sound Design for Video Games",
+ "url": "https://www.gamedeveloper.com/audio/sound-design-for-video-games-a-primer",
+ "type": "article"
+ }
+ ]
},
"DDN3mn0LTueBhjRzXFcbU": {
"title": "Lighting and Shadow",
- "description": "**Lighting and Shadows** are paramount elements in computer graphics, significantly contributing to the visual realism of a game. They create depth and a sense of a three-dimensional space in a two-dimensional display. **Lighting** in game development mimics real-world light properties. It involves calculating how light interacts with different objects and surfaces based on their material characteristics and the light's intensity, direction, and color. Various algorithms, like Ray Tracing or Rasterization, are used to simulate these interactions. On the other hand, **shadows** are the areas unlit due to the blockage of light by an object. Producing realistic shadows involves complex computations, factoring in the light's position, the blocking object's shape and size, and the affected area's distance. Shadow Mapping and Shadow Volume are common techniques for creating shadows in game development. Special attention to these aspects can dramatically increase the perceived realism and immersion in the game environment.",
- "links": []
+ "description": "**Lighting and Shadows** are paramount elements in computer graphics, significantly contributing to the visual realism of a game. They create depth and a sense of a three-dimensional space in a two-dimensional display. **Lighting** in game development mimics real-world light properties. It involves calculating how light interacts with different objects and surfaces based on their material characteristics and the light's intensity, direction, and color. Various algorithms, like Ray Tracing or Rasterization, are used to simulate these interactions. On the other hand, **shadows** are the areas unlit due to the blockage of light by an object. Producing realistic shadows involves complex computations, factoring in the light's position, the blocking object's shape and size, and the affected area's distance. Shadow Mapping and Shadow Volume are common techniques for creating shadows in game development. Special attention to these aspects can dramatically increase the perceived realism and immersion in the game environment.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Lightning and Shadows",
+ "url": "https://www.techspot.com/article/1998-how-to-3d-rendering-lighting-shadows/",
+ "type": "article"
+ },
+ {
+ "title": "The Art of Game Lighting",
+ "url": "https://3dskillup.com/effective-lighting-for-games/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to Lighting in 3D Games",
+ "url": "https://www.gridpaperstudio.com/post/introduction-to-lighting-in-3d-game-engines-a-beginner-s-guide",
+ "type": "article"
+ }
+ ]
},
"ygtru6fqQ3gpFZRN_I8rP": {
"title": "Shadow Map",
- "description": "Shadow mapping is a technique used in computer graphics to add shadows to a scene. This process involves two steps - generating the shadow map and then rendering the scene.\n\nIn the shadow map generating step, the scene is rendered from the perspective of the light source capturing depth information. This results in a texture that stores the distance from the light to the nearest surface along each light direction, a “shadow map”.\n\nIn the scene rendering step, the scene is rendered from the camera’s perspective. For each visible surface point, its distance from the light is calculated and compared to the corresponding stored distance in the shadow map. If the point's distance is greater than the stored distance, the point is in shadow; otherwise, it's lit. This information is used to adjust the color of the point, producing the shadow effect.",
- "links": []
+ "description": "Shadow mapping is a technique used in computer graphics to add shadows to a scene. This process involves two steps - generating the shadow map and then rendering the scene. In the shadow map generating step, the scene is rendered from the perspective of the light source capturing depth information. This results in a texture that stores the distance from the light to the nearest surface along each light direction, a “shadow map”. In the scene rendering step, the scene is rendered from the camera’s perspective. For each visible surface point, its distance from the light is calculated and compared to the corresponding stored distance in the shadow map. If the point's distance is greater than the stored distance, the point is in shadow; otherwise, it's lit. This information is used to adjust the color of the point, producing the shadow effect.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Shadow Mapping Techniques",
+ "url": "https://dev.to/hayyanstudio/shadow-mapping-techniques-implementing-shadows-in-3d-scenes-using-shadow-mapping-46hl/",
+ "type": "article"
+ },
+ {
+ "title": "A Beginner's Guide to Shadow Mapping",
+ "url": "https://gamedev.net/blog/2080/entry-2261232-shadow-mapping-part-1-pcf-and-vsms/",
+ "type": "article"
+ }
+ ]
},
"Wq8siopWTD7sylNi0575X": {
"title": "2D",
- "description": "",
- "links": []
+ "description": "2D Game Development involves creating games in a two-dimensional plane, utilizing flat graphics and typically making use of x and y coordinates. From classic arcade games of the ’80s and ’90s to the rich array of indie games today, 2D game development is a vibrant and diverse sector of the gaming industry. Not only are 2D games visually appealing and nostalgic, but they’re also often more accessible for developers to create due to the simpler mechanics compared to 3D game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "2D and 3D Game Development",
+ "url": "https://hireindiandevelopers.medium.com/2d-and-3d-game-development-a-comprehensive-guide-7d22c4fdd706",
+ "type": "article"
+ },
+ {
+ "title": "How to Make a 2D Game",
+ "url": "https://gamemaker.io/en/blog/how-to-make-a-2d-game",
+ "type": "article"
+ }
+ ]
},
"cv1-AwewuqJsZDBI3h84G": {
"title": "Cube",
- "description": "",
- "links": []
+ "description": "In computer graphics, \"cube\" in the context of shadows often refers to using a cube-shaped object to visualize the concept of a \"shadow volume.\" Imagine a light source shining on a cube. The silhouette of the cube from the light's perspective, extended infinitely outwards, forms a volume. Any object inside this \"shadow volume\" is considered to be in shadow. While helpful for understanding, shadow volumes themselves are not always shaped like cubes - their complexity depends on the object casting the shadow.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Draw a Cube",
+ "url": "https://dev-tut.com/2022/unity-draw-a-debug-cube/",
+ "type": "article"
+ }
+ ]
},
"Lu38SfZ38y89BffLRMmGk": {
"title": "Cascaded",
- "description": "",
- "links": []
+ "description": "Cascaded usually refers to cascaded shadow maps, a technique for rendering realistic shadows over a large area.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Cascading Shadows",
+ "url": "https://www.gamedev.net/forums/topic/632574-cascading-shadow-maps-best-approach-to-learn/4988720/",
+ "type": "article"
+ }
+ ]
},
"VLrcBE1vb6N5fw5YESCge": {
"title": "Light Source",
- "description": "In game development, a **light source** is a critical component that impacts the visual appeal and realism of the scene. It represents any object in the game scene that emits light, such as the sun, a lamp, or a torch. Light sources can be categorized as static or dynamic. Static light sources do not move or change throughout the game, while dynamic light sources can move and their properties can change in real-time. The properties of light sources that can be manipulated include intensity (how bright the light is), color, range (how far the light extends), direction, and type (point, directional, or spot). The lighting and shading effects are then computed based on these light source properties and how they interact with various objects in the game scene.",
- "links": []
+ "description": "In game development, a **light source** is a critical component that impacts the visual appeal and realism of the scene. It represents any object in the game scene that emits light, such as the sun, a lamp, or a torch. Light sources can be categorized as static or dynamic. Static light sources do not move or change throughout the game, while dynamic light sources can move and their properties can change in real-time. The properties of light sources that can be manipulated include intensity (how bright the light is), color, range (how far the light extends), direction, and type (point, directional, or spot). The lighting and shading effects are then computed based on these light source properties and how they interact with various objects in the game scene.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "The Art of Game Lighting",
+ "url": "https://3dskillup.com/effective-lighting-for-games/",
+ "type": "article"
+ },
+ {
+ "title": "Lightning Game Environments",
+ "url": "https://cgcookie.com/posts/art-of-lighting-game-environments",
+ "type": "article"
+ }
+ ]
},
"foD8K7V0yIxgeXwl687Bv": {
"title": "Directional",
- "description": "",
- "links": []
+ "description": "Directional light simulates a distant light source like the sun. It has only a direction, not a specific position, meaning its light rays are parallel and cast consistent shadows regardless of object location within the scene. This makes it ideal for simulating sunlight or moonlight, providing realistic outdoor lighting while being relatively performant for rendering.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Directional Light",
+ "url": "https://www.a23d.co/blog/difference-between-albedo-and-diffuse-map/",
+ "type": "article"
+ }
+ ]
},
"aNhyXWW2b7yKTv8y14zk9": {
"title": "Point",
- "description": "Point lights are one of the most common types of lights used in computer graphics and games. They resemble real-world light bulbs, emitting light uniformly in all directions.\n\nThese lights are available out of the box in most game engines and offer a range of customizable parameters, such as intensity, falloff, color, and more.\n\nPoint lights are the most straightforward type of light, making them ideal for quickly and intuitively lighting up your scenes.",
- "links": []
+ "description": "Point lights are one of the most common types of lights used in computer graphics and games. They resemble real-world light bulbs, emitting light uniformly in all directions. These lights are available out of the box in most game engines and offer a range of customizable parameters, such as intensity, falloff, color, and more. Point lights are the most straightforward type of light, making them ideal for quickly and intuitively lighting up your scenes.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "The Basics of Lighting",
+ "url": "https://blog.logrocket.com/lighting-basics-unity/",
+ "type": "article"
+ },
+ {
+ "title": "Types of Lighting Component",
+ "url": "https://docs.unity3d.com/6000.0/Documentation/Manual/Lighting.html",
+ "type": "article"
+ }
+ ]
},
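Falloff for a point light is usually modeled with a distance-based attenuation term. Below is a rough Python sketch of the classic constant/linear/quadratic attenuation model; the coefficient values are arbitrary examples, not engine defaults.

```python
import math

def point_light_attenuation(light_pos, surface_pos,
                            constant=1.0, linear=0.09, quadratic=0.032):
    """Classic attenuation term: 1 / (Kc + Kl*d + Kq*d^2).

    The coefficients are illustrative; engines expose them as the light's
    tunable falloff parameters.
    """
    d = math.dist(light_pos, surface_pos)      # Euclidean distance to the light
    return 1.0 / (constant + linear * d + quadratic * d * d)

# Example: the light's contribution fades as the surface moves away.
for d in (1.0, 5.0, 20.0):
    print(d, round(point_light_attenuation((0, 0, 0), (d, 0, 0)), 3))
```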
"FetbhcK1RDt4izZ6NEUEP": {
"title": "Spot",
- "description": "Spotlights are a common type of light in computer graphics and games that mimic the behavior of real-world spotlights. They offer a range of parameters to adjust their behavior, such as radius, cone angle, falloff, and intensity.\n\nSpotlights are readily available out of the box in both Unreal and Unity game engines, making them an accessible and powerful tool for adding realistic and dynamic lighting to your scenes.",
- "links": []
+ "description": "Spotlights are a common type of light in computer graphics and games that mimic the behavior of real-world spotlights. They offer a range of parameters to adjust their behavior, such as radius, cone angle, falloff, and intensity Spotlights are readily available out of the box in both Unreal and Unity game engines, making them an accessible and powerful tool for adding realistic and dynamic lighting to your scenes.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Computer Graphics Lighting",
+ "url": "https://en.wikipedia.org/wiki/Computer_graphics_lighting",
+ "type": "article"
+ }
+ ]
},
"sC3omOmL2DOyTSvET5cDa": {
"title": "Infinite",
@@ -596,33 +1397,83 @@
},
"OcxesFnB5wO6VXrHYnhz-": {
"title": "Visibility and Occlusion",
- "description": "\"Visibility and occlusion\" in computer graphics refers to the process of determining which parts of a particular object are visible from a certain viewpoint and which are hidden. \"Occlusion\" describes the phenomenon where an object is blocked from view by another object. Understanding these concepts is important for creating realistic renderings in game design. Real-time engines typically use data structures like BSP-trees, Quad-trees or Octrees to quickly identify occlusion. Advanced techniques such as Occlusion culling and Z-buffering are used to further optimize the representation of visible and hidden parts of 3D objects. Understanding the depths and dimensions related to visibility and occlusion empowers the game developer to enhance presentation and performance.",
- "links": []
+ "description": "\"Visibility and occlusion\" in computer graphics refers to the process of determining which parts of a particular object are visible from a certain viewpoint and which are hidden. \"Occlusion\" describes the phenomenon where an object is blocked from view by another object. Understanding these concepts is important for creating realistic renderings in game design. Real-time engines typically use data structures like BSP-trees, Quad-trees or Octrees to quickly identify occlusion. Advanced techniques such as Occlusion culling and Z-buffering are used to further optimize the representation of visible and hidden parts of 3D objects. Understanding the depths and dimensions related to visibility and occlusion empowers the game developer to enhance presentation and performance.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Introduction to Occlusion Culling",
+ "url": "https://medium.com/@Umbra3D/introduction-to-occlusion-culling-3d6cfb195c79",
+ "type": "article"
+ },
+ {
+ "title": "Visibility in Computer Graphics",
+ "url": "https://ima.udg.edu/~sellares/ComGeo/VisComGra.pdf",
+ "type": "article"
+ }
+ ]
},
"MlLYqO_8JDNOwKRvaM-bf": {
"title": "Occluder",
- "description": "An **Occluder** in game development is basically a tool or method used to hide other objects in the game environment. When a certain object, which is known as the occluder, blocks the line of sight to another object from the camera's perspective, the hidden or blocked object does not need to be rendered. This object could be anything from a building to a terrain feature. The process of managing these occluders is known as occlusion culling. The purpose of using occluders is to optimize the game and improve its performance by reducing unnecessary rendering workload. However, it's important to note that setting up occluders requires careful planning to ensure that it does not impact the gameplay or visual quality.",
- "links": []
+ "description": "An **Occluder** in game development is basically a tool or method used to hide other objects in the game environment. When a certain object, which is known as the occluder, blocks the line of sight to another object from the camera's perspective, the hidden or blocked object does not need to be rendered. This object could be anything from a building to a terrain feature. The process of managing these occluders is known as occlusion culling. The purpose of using occluders is to optimize the game and improve its performance by reducing unnecessary rendering workload. However, it's important to note that setting up occluders requires careful planning to ensure that it does not impact the gameplay or visual quality.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Occluder and How to Use Them",
+ "url": "https://80.lv/articles/occluders-and-how-to-use-them-for-level-design/",
+ "type": "article"
+ },
+ {
+ "title": "Occlusion Culling Tutorial",
+ "url": "https://thegamedev.guru/unity-performance/occlusion-culling-tutorial/",
+ "type": "article"
+ }
+ ]
},
"1gdDeUPBRco10LpOxug4k": {
"title": "Culling",
- "description": "**Culling** is a performance optimization strategy employed in game development to improve efficiency and speed. _Culling_ helps in reducing the rendering workload by eliminating the elements that are not visible to the player or are outside the viewport of the game. There are several types of culling, two main being; **frustum culling** and **occlusion culling**. Frustum culling involves eliminating objects that are outside of the camera's field of view. On the other hand, Occlusion culling discards objects that are hidden or blocked by other objects. Culling ensures that only the elements that are necessary or add value to the player's experience are processed.",
- "links": []
+ "description": "**Culling** is a performance optimization strategy employed in game development to improve efficiency and speed. Culling helps in reducing the rendering workload by eliminating the elements that are not visible to the player or are outside the viewport of the game. There are several types of culling, two main being; **frustum culling** and **occlusion culling**. Frustum culling involves eliminating objects that are outside of the camera's field of view. On the other hand, Occlusion culling discards objects that are hidden or blocked by other objects. Culling ensures that only the elements that are necessary or add value to the player's experience are processed.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Culling in Game Development",
+ "url": "https://medium.com/@niitwork0921/what-is-culling-in-game-design-a97c0b6344dd",
+ "type": "article"
+ },
+ {
+ "title": "Object Culling in Unreal Engine",
+ "url": "https://gamedevinsider.com/object-culling-in-unreal-engine/",
+ "type": "article"
+ }
+ ]
},
"xP_VDMu1z9jiVnZaBFKJQ": {
"title": "Clipping",
- "description": "`Clipping` is a fundamental technique in computer graphics primarily used for efficiently rendering a three-dimensional scene. This process involves eliminating certain parts of objects in the scene that are out-of-view or obstructed by other objects. Clipping can occur in various ways, one of the most common methods being `View-frustum culling` where objects completely outside of the camera view are discarded. The aim of clipping is to optimize the graphic rendering pipeline by reducing the number of polygons that the graphic hardware needs to process. Consequently, this helps in improving the speed and overall performance of the rendering process.",
- "links": []
+ "description": "`Clipping` is a fundamental technique in computer graphics primarily used for efficiently rendering a three-dimensional scene. This process involves eliminating certain parts of objects in the scene that are out-of-view or obstructed by other objects. Clipping can occur in various ways, one of the most common methods being `View-frustum culling` where objects completely outside of the camera view are discarded. The aim of clipping is to optimize the graphic rendering pipeline by reducing the number of polygons that the graphic hardware needs to process. Consequently, this helps in improving the speed and overall performance of the rendering process.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Clipping in Games",
+ "url": "https://www.haroldserrano.com/blog/what-is-clipping-in-opengl",
+ "type": "article"
+ }
+ ]
},
"2ocwC0P1-ZFmjA9EmA1lV": {
"title": "Fog",
- "description": "",
- "links": []
+ "description": "Fog in game development refers to a visual effect used to simulate atmospheric conditions and enhance the depth perception in a game environment. It creates a gradient of visibility, often fading objects into the background, which can improve performance by reducing the number of objects rendered at a distance. Fog can be implemented in various ways, such as linear fog, which gradually obscures objects based on their distance from the camera, or exponential fog, which creates a more dramatic effect by rapidly increasing the density of fog with distance.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Fog - Graphics and GPU Programming",
+ "url": "https://gamedev.net/reference/articles/article677.asp",
+ "type": "article"
+ }
+ ]
},
"UcLGWYu41Ok2NYdLNIY5C": {
"title": "Frustum",
- "description": "Frustum culling is a standard practice in computer graphics, used in virtually all games to optimize performance by not rendering objects outside of your field of view. Think of your field of view as a frustum, a truncated pyramid shape. The farthest side is called the far clip plane, and the closest side is the near clip plane. Any object in the game that doesn't fall within this frustum is culled, meaning it’s not rendered, to improve performance. This feature comes built-in with Unreal Engine.\n\nYou can also adjust the near and far clip planes to fine-tune culling. For example, if an object is too close to the camera, it may disappear because it crosses the near clip plane threshold. Similarly, objects that are too far away might be culled by the far clip plane. In some cases, distant objects are LOD-ed (Level of Detail), an optimization technique that reduces the detail of the mesh the farther you are from it, and increases detail as you get closer.\n\nFrustum culling is a fundamental technique that is implemented in virtually all modern games to ensure efficient rendering and smooth gameplay.",
+ "description": "Frustum culling is a standard practice in computer graphics, used in virtually all games to optimize performance by not rendering objects outside of your field of view. Think of your field of view as a frustum, a truncated pyramid shape. The farthest side is called the far clip plane, and the closest side is the near clip plane. Any object in the game that doesn't fall within this frustum is culled, meaning it’s not rendered, to improve performance. This feature comes built-in with Unreal Engine.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Frustum Culling",
+ "url": "https://gamedev.net/tutorials/programming/general-and-gameplay-programming/frustum-culling-r4613/",
+ "type": "article"
+ },
{
"title": "Frustum Culling - Game Optimization 101 - Unreal Engine",
"url": "https://www.youtube.com/watch?v=Ql56s1erTMI",
@@ -632,43 +1483,120 @@
},
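As a rough illustration of the culling test itself: a frustum can be represented as six planes and each object approximated by a bounding sphere, and an object is culled when its sphere lies entirely on the outside of any plane. The plane representation and data layout below are assumptions for the sketch, not an engine API.

```python
# Minimal frustum-culling sketch: cull a bounding sphere against six planes.
# Each plane is (nx, ny, nz, d) with a unit normal pointing into the frustum,
# so points inside satisfy dot(n, p) + d >= 0.

def sphere_outside_frustum(center, radius, planes):
    cx, cy, cz = center
    for nx, ny, nz, d in planes:
        signed_distance = nx * cx + ny * cy + nz * cz + d
        if signed_distance < -radius:      # wholly behind this plane
            return True                    # safe to skip rendering
    return False                           # intersects or inside: render it

# Usage: render only objects whose bounding sphere survives the test.
# visible = [o for o in scene
#            if not sphere_outside_frustum(o.center, o.radius, frustum_planes)]
```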
"_1LkU258hzizSIgXipE0b": {
"title": "Light",
- "description": "",
- "links": []
+ "description": "Light refers to a visual representation of illumination in a 3D environment. It is used to simulate the way light behaves in the real world, adding depth, volume, and realism to objects and scenes. Lighting can be categorized into different types, such as ambient, diffuse, specular, and emissive, each serving a specific purpose in creating a believable and immersive visual experience. Proper lighting can greatly enhance the mood, atmosphere, and overall aesthetic of a scene, making it an essential aspect of graphics and game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Lightning in Game Environments",
+ "url": "https://medium.com/my-games-company/using-light-and-color-in-game-development-a-beginners-guide-400edf4a7ae0",
+ "type": "article"
+ },
+ {
+ "title": "The Art of Game Lighting",
+ "url": "https://3dskillup.com/effective-lighting-for-games/",
+ "type": "article"
+ }
+ ]
},
"lqfW8hkuN3vWtacrqBBtI": {
"title": "Shadow",
- "description": "",
- "links": []
+ "description": "Shadows play a crucial role in enhancing the realism and depth of scenes in video games. They help to create a more immersive experience by simulating how light interacts with objects in a 3D environment.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Shadow Mapping Techniques",
+ "url": "https://dev.to/hayyanstudio/shadow-mapping-techniques-implementing-shadows-in-3d-scenes-using-shadow-mapping-46hl",
+ "type": "article"
+ },
+ {
+ "title": "Real Time Shadow Casting using Shadow Volumes",
+ "url": "https://www.gamedeveloper.com/business/real-time-shadow-casting-using-shadow-volumes",
+ "type": "article"
+ },
+ {
+ "title": "Programming and Explaining Shadows",
+ "url": "https://www.youtube.com/watch?v=RJr14qUt624L",
+ "type": "video"
+ }
+ ]
},
"-r15srXTBLnUGokpXKclH": {
"title": "Polygon",
- "description": "",
- "links": []
+ "description": "In computer graphics and game development, a polygon is a 2D or 3D shape composed of a set of vertices connected by edges. Polygons are used to represent objects, characters, and environments in games and simulations.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Polygons and Shading",
+ "url": "https://electronics.howstuffworks.com/3do5.htm",
+ "type": "article"
+ }
+ ]
},
"GHfLMtgmc36OCvjQvW_Su": {
"title": "Polyhedron",
- "description": "",
- "links": []
+ "description": "In computer graphics and game development, a polyhedron is a 3D shape composed of a set of polygons that enclose a volume. Polyhedra are used to represent objects, characters, and environments in games and simulations.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Mesh Triangulation and Polyhedron",
+ "url": "https://gamedev.stackexchange.com/questions/140978/need-some-insight-on-mesh-triangulation-and-geodesic-spheres",
+ "type": "article"
+ },
+ {
+ "title": "Polyhedron Game Physics",
+ "url": "https://gamedev.net/forums/topic/653589-misc-polyhedon-related-posts-for-game-physics/",
+ "type": "article"
+ }
+ ]
},
"AEAVc8Ih4fctSGGVJG0Np": {
"title": "Stencil Shadow",
- "description": "`Stencil shadows` are a technique used in 3D computer graphics for creating shadows. The stencil shadow algorithm operates by treating a shadow as a 3D volume of space, known as a shadow volume. Any part of the scene that lies inside this shadow volume will be in shadow. If it lies outside the shadow volume, it will be in light. The shadow volume is created by extruding the polygonal silhouette of a 3D object into space along the lines of sight from the light source. For equivalent complex objects, the number of edges or vertices to fill the stencil buffer will generally be less than the number of pixels needed to compute shadow maps, making stencil shadows more efficient in that regard. However, the shadows produced by this technique can look blocky or unrealistic if not further refined.",
- "links": []
+ "description": "`Stencil shadows` are a technique used in 3D computer graphics for creating shadows. The stencil shadow algorithm operates by treating a shadow as a 3D volume of space, known as a shadow volume. Any part of the scene that lies inside this shadow volume will be in shadow. If it lies outside the shadow volume, it will be in light. The shadow volume is created by extruding the polygonal silhouette of a 3D object into space along the lines of sight from the light source. For equivalent complex objects, the number of edges or vertices to fill the stencil buffer will generally be less than the number of pixels needed to compute shadow maps, making stencil shadows more efficient in that regard. However, the shadows produced by this technique can look blocky or unrealistic if not further refined.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Stencil Shadows Implementation",
+ "url": "https://devforum.roblox.com/t/stencil-shadows-implementation/2079287",
+ "type": "article"
+ }
+ ]
},
"Kx7O7RLp7aPGtOvK8e314": {
"title": "Graphics API",
- "description": "A Graphics API (Application Programming Interface) is a collection of commands, functions, protocols, and tools that game developers use to build games. It forms an interface between the game and the hardware of the device, usually a computer or console, and assists in rendering 2D and 3D graphics performance. Complex tasks such as drawing polygons, texturing, or lighting are encapsulated in a more manageable, higher-level process by the API. Common examples are Vulkan, DirectX, OpenGL, and Metal. Each one varies in availability and performance across different platforms and devices and has unique features that can be utilized for game development.",
- "links": []
+ "description": "A Graphics API (Application Programming Interface) is a collection of commands, functions, protocols, and tools that game developers use to build games. It forms an interface between the game and the hardware of the device, usually a computer or console, and assists in rendering 2D and 3D graphics performance. Complex tasks such as drawing polygons, texturing, or lighting are encapsulated in a more manageable, higher-level process by the API. Common examples are Vulkan, DirectX, OpenGL, and Metal. Each one varies in availability and performance across different platforms and devices and has unique features that can be utilized for game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Graphics API in Unity",
+ "url": "https://docs.unity3d.com/2022.3/Documentation/ScriptReference/PlayerSettings.SetGraphicsAPIs.html",
+ "type": "article"
+ }
+ ]
},
"bgWFV09AtDv1yJS5t0EaB": {
"title": "DirectX",
- "description": "**DirectX** is a collection of Application Programming Interfaces (APIs) developed by Microsoft to handle tasks related to multimedia, especially game programming and video, on Microsoft platforms. It was first introduced in 1995 and has become a crucial component for PC gaming. DirectX serves as an intermediary between a hardware and a software, managing the state of the hardware and giving commands to it. Some technologies under DirectX includes Direct3D for 3D graphics, DirectDraw for 2D graphics, DirectSound for sound, and DirectInput for interfacing with input devices such as keyboard and mouse.",
- "links": []
+ "description": "**DirectX** is a collection of Application Programming Interfaces (APIs) developed by Microsoft to handle tasks related to multimedia, especially game programming and video, on Microsoft platforms. It was first introduced in 1995 and has become a crucial component for PC gaming. DirectX serves as an intermediary between a hardware and a software, managing the state of the hardware and giving commands to it. Some technologies under DirectX includes Direct3D for 3D graphics, DirectDraw for 2D graphics, DirectSound for sound, and DirectInput for interfacing with input devices such as keyboard and mouse.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Microsoft's DirectX",
+ "url": "https://visualstudio.microsoft.com/vs/features/directx-game-dev/",
+ "type": "article"
+ },
+ {
+ "title": "Learn DirectX",
+ "url": "https://learn.microsoft.com/en-us/shows/introduction-to-c-and-directx-game-development/",
+ "type": "article"
+ }
+ ]
},
"ffa5-YxRhE3zhWg7KXQ4r": {
"title": "OpenGL",
- "description": "Open GL, also known as Open Graphics Library, is a cross-language, cross-platform API designed to render 2D and 3D vector graphics. As a software interface for graphics hardware, Open GL provides programmers the ability to create complex graphics visuals in detail. It was first developed by Silicon Graphics Inc. in 1992 and quickly became a highly popular tool in the graphics rendering industry. Open GL is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. It is also used in video games production where real-time rendering is a requirement. The API is designed to work with a broad range of hardware from different manufacturers. Being open-source, Open GL's code capabilities can be extended by anyone in the software community.",
+ "description": "Open GL, also known as Open Graphics Library, is a cross-language, cross-platform API designed to render 2D and 3D vector graphics. As a software interface for graphics hardware, Open GL provides programmers the ability to create complex graphics visuals in detail. It was first developed by Silicon Graphics Inc. in 1992 and quickly became a highly popular tool in the graphics rendering industry. Open GL is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. It is also used in video games production where real-time rendering is a requirement. The API is designed to work with a broad range of hardware from different manufacturers. Being open-source, Open GL's code capabilities can be extended by anyone in the software community.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Open Graphics Library",
+ "url": "https://www.opengl.org/",
+ "type": "article"
+ },
+ {
+ "title": "OpenGL Libraries",
+ "url": "https://www.opengl.org/sdk/libs/",
+ "type": "article"
+ },
{
"title": "OpenGL Tutorials",
"url": "https://youtube.com/playlist?list=PLPaoO-vpZnumdcb4tZc4x5Q-v7CkrQ6M-&si=Mr71bYJMgoDhN9h-",
@@ -678,23 +1606,62 @@
},
"CeydBMwckqKll-2AgOlyd": {
"title": "WebGL",
- "description": "`WebGL` (Web Graphics Library) is a JavaScript API that is used to render interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins. It leverages the power of the Graphics Processing Unit (GPU), which provides high-efficiency rendering. WebGL programs consist of control code written in JavaScript and shader code that's written in OpenGL Shading Language (GLSL), allowing developers to control the fine details of graphics rendering. Besides its compatibility with HTML5 and its ability to render on any platform that supports the web, WebGL is entirely integrated into all web standards, facilitating GPU-accelerated image processing and effects.",
- "links": []
+ "description": "`WebGL` (Web Graphics Library) is a JavaScript API that is used to render interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins. It leverages the power of the Graphics Processing Unit (GPU), which provides high-efficiency rendering. WebGL programs consist of control code written in JavaScript and shader code that's written in OpenGL Shading Language (GLSL), allowing developers to control the fine details of graphics rendering. Besides its compatibility with HTML5 and its ability to render on any platform that supports the web, WebGL is entirely integrated into all web standards, facilitating GPU-accelerated image processing and effects.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "WebGL",
+ "url": "https://en.wikipedia.org/wiki/WebGL",
+ "type": "article"
+ },
+ {
+ "title": "Getting Started with WebGL",
+ "url": "https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Getting_started_with_WebGL",
+ "type": "article"
+ }
+ ]
},
"wYUDJb-q1rtM4w2QV3Wr1": {
"title": "HLSL",
- "description": "**HLSL** stands for High-Level Shader Language, and it is the proprietary shading language developed by Microsoft for use with the Microsoft Direct3D API. Just like its counterpart from OpenGL - GLSL, it opens up the power of programmable GPUs for developers by providing capability for creating customized rendering effects or performing operations that are computationally expensive in CPU. HLSL resembles the C programming language, thereby making it easier for developers coming from traditional programming backgrounds. It is often considered an integral part of the Direct X ecosystem and is used for developing complex and visually impressive graphics for games.",
- "links": []
+ "description": "**HLSL** stands for High-Level Shader Language, and it is the proprietary shading language developed by Microsoft for use with the Microsoft Direct3D API. Just like its counterpart from OpenGL - GLSL, it opens up the power of programmable GPUs for developers by providing capability for creating customized rendering effects or performing operations that are computationally expensive in CPU. HLSL resembles the C programming language, thereby making it easier for developers coming from traditional programming backgrounds. It is often considered an integral part of the Direct X ecosystem and is used for developing complex and visually impressive graphics for games.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "HLSL",
+ "url": "https://en.wikipedia.org/wiki/High-Level_Shader_Language",
+ "type": "article"
+ },
+ {
+ "title": "Comparison of HLSLs",
+ "url": "https://docs.vulkan.org/guide/latest/high_level_shader_language_comparison.html",
+ "type": "article"
+ }
+ ]
},
"j8mWMFMQCEIPUzegDDsm1": {
"title": "GLSL",
- "description": "GLSL (Graphics Library Shader Language) is a high-level shading language inspired by C, based on the syntax of the OpenGL Shading Language. It is used in graphics programming for defining how the graphical content should look. GLSL allows developers to harness the power of modern GPUs (Graphics Processing Units), enabling direct, unconstrained control over graphics rendering. A key aspect of the language is its ability to create shaders, which are small programs that run on the GPU. Shaders are used for various graphical effects like vertex manipulation, pixel color calculations, or post-processing effects.",
- "links": []
+ "description": "**GLSL** (Graphics Library Shader Language) is a high-level shading language inspired by C, based on the syntax of the OpenGL Shading Language. It is used in graphics programming for defining how the graphical content should look. GLSL allows developers to harness the power of modern GPUs (Graphics Processing Units), enabling direct, unconstrained control over graphics rendering. A key aspect of the language is its ability to create shaders, which are small programs that run on the GPU. Shaders are used for various graphical effects like vertex manipulation, pixel color calculations, or post-processing effects.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "GLSL Shaders",
+ "url": "https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/GLSL_Shaders",
+ "type": "article"
+ }
+ ]
},
"EVOiWAeZsIvjLTt3EYu-6": {
"title": "OpenGL ES",
- "description": "OpenGL ES (Open Graphics Library for Embedded Systems) is a simplified version of OpenGL, designed for use on systems with lower computational power, such as mobile devices and embedded systems. Despite its semantic simplifications, OpenGL ES still retains high versatility and capability, allowing for high-performance 2D and 3D graphics on these smaller, less powerful systems. OpenGL ES has become particularly popular in mobile game development, with major platforms like Android and iOS providing full support for it. The API is divided into several versions, the latest of which, OpenGL ES 3.2, was released in 2016.",
- "links": []
+ "description": "OpenGL ES (Open Graphics Library for Embedded Systems) is a simplified version of OpenGL, designed for use on systems with lower computational power, such as mobile devices and embedded systems. Despite its semantic simplifications, OpenGL ES still retains high versatility and capability, allowing for high-performance 2D and 3D graphics on these smaller, less powerful systems. OpenGL ES has become particularly popular in mobile game development, with major platforms like Android and iOS providing full support for it. The API is divided into several versions, the latest of which, OpenGL ES 3.2, was released in 2016.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "LVGL — Light and Versatile Embedded Graphics Library",
+ "url": "https://lvgl.io/",
+ "type": "article"
+ },
+ {
+ "title": "Embedded Lighting",
+ "url": "http://embeddedlightning.com/ugui/",
+ "type": "article"
+ }
+ ]
},
"oEznLciLxZJaulMlBGgg4": {
"title": "Metal",
@@ -709,38 +1676,104 @@
},
"yPfhJSTFS7a72UcqF1ROK": {
"title": "Vulkan",
- "description": "Vulkan is a high-performance, cross-platform API for graphics and computation tasks published by the Khronos Group. Unlike other graphics APIs, Vulkan provides developers with direct control over the GPU and aims to take full advantage of multicore processors, enabling significant performance gains in 3D applications. It supports Windows, Linux, Android, iOS, and MacOS platforms. It's built from ground-up to ensure minimal overhead on the CPU side, providing a more balanced CPU/GPU usage, hence not limiting the game to a single core. Vulkan can be seen as the successor to OpenGL, as it offers lower-level functionality and more efficient multi-threading capabilities.",
- "links": []
+ "description": "Vulkan is a high-performance, cross-platform API for graphics and computation tasks published by the Khronos Group. Unlike other graphics APIs, Vulkan provides developers with direct control over the GPU and aims to take full advantage of multicore processors, enabling significant performance gains in 3D applications. It supports Windows, Linux, Android, iOS, and MacOS platforms. It's built from ground-up to ensure minimal overhead on the CPU side, providing a more balanced CPU/GPU usage, hence not limiting the game to a single core. Vulkan can be seen as the successor to OpenGL, as it offers lower-level functionality and more efficient multi-threading capabilities.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Vulkan",
+ "url": "https://www.vulkan.org/",
+ "type": "article"
+ },
+ {
+ "title": "What is Vulkan?",
+ "url": "https://docs.vulkan.org/guide/latest/what_is_vulkan.html",
+ "type": "article"
+ },
+ {
+ "title": "Vulkan Driver Support",
+ "url": "https://developer.nvidia.com/vulkan-driver",
+ "type": "article"
+ }
+ ]
},
"DvV32n3NoXNEej8Fsqqs2": {
"title": "SPIR-V",
- "description": "`SPIR-V` is a binary intermediate language for graphics and computation kernels, which is defined by the Khronos Group. This programming language has been largely adopted and used by Vulkan, a low-overhead, cross-platform 3D graphics and computing API. Vulkan consumes `SPIR-V` directly, serving as the final shader stage before the GPU. The `SPIR-V` binary format is designed for portability and flexibility, allowing it to be a powerful tool for developers because of its extensibility through the addition of new instructions, without the need to rebuild toolchains or shaders. This makes `SPIR-V` an essential part of Vulkan, especially for game developers creating large, diverse worldscapes and intricate graphics.",
- "links": []
+ "description": "`SPIR-V` is a binary intermediate language for graphics and computation kernels, which is defined by the Khronos Group. This programming language has been largely adopted and used by Vulkan, a low-overhead, cross-platform 3D graphics and computing API. Vulkan consumes `SPIR-V` directly, serving as the final shader stage before the GPU. The `SPIR-V` binary format is designed for portability and flexibility, allowing it to be a powerful tool for developers because of its extensibility through the addition of new instructions, without the need to rebuild toolchains or shaders. This makes `SPIR-V` an essential part of Vulkan, especially for game developers creating large, diverse worldscapes and intricate graphics.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "SPIR Overview - The Khronos Group Inc",
+ "url": "https://www.khronos.org/spir/",
+ "type": "article"
+ },
+ {
+ "title": "SPIR-V - OpenGL Wiki - The Khronos Group",
+ "url": "https://www.khronos.org/opengl/wiki/SPIR-V",
+ "type": "article"
+ }
+ ]
},
"Hpf_CPmLpCSP8Qo07Kq1X": {
"title": "Game AI",
- "description": "Game AI is a subfield of artificial intelligence (AI) that is used to create video game characters that act and react like real human players. Game AI is used in a variety of video games, from simple puzzle games to complex strategy games. Game AI can be used to create non-player characters (NPCs) that interact with the player, as well as to create intelligent opponents that challenge the player.",
- "links": []
+ "description": "Game AI is a subfield of artificial intelligence (AI) that is used to create video game characters that act and react like real human players. Game AI is used in a variety of video games, from simple puzzle games to complex strategy games. Game AI can be used to create non-player characters (NPCs) that interact with the player, as well as to create intelligent opponents that challenge the player.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game AI",
+ "url": "https://medium.com/@alidrsn/game-development-with-ai-strategy-tools-and-examples-7ae77257c062",
+ "type": "article"
+ },
+ {
+ "title": "AI Game Development",
+ "url": "https://modl.ai/ai-game-development/",
+ "type": "article"
+ }
+ ]
},
"Ky-95ipdgyPZGAIdqwMCk": {
"title": "Decision Making",
- "description": "In game development, decision making often refers to the logic or processes that determine the behavior of non-playable characters or game environments. Three main types of decision making are used: deterministic, stochastic, and strategic. Deterministic decision making is based on predefined rules. With stochastic decision making, outcomes are probability-based, providing an element of randomness. Strategic decision making involves planning a sequence of actions to achieve a specified goal. Decisions can also be guided using various API tools such as **pathfinding algorithms** (which determine the shortest path between two points) or **decision trees** (which facilitate the selection of an action based on certain conditions). The choice of decision-making method depends largely on the desired complexity and behavior of your game elements.",
- "links": []
+ "description": "In game development, decision making often refers to the logic or processes that determine the behavior of non-playable characters or game environments. Three main types of decision making are used: deterministic, stochastic, and strategic. Deterministic decision making is based on predefined rules. With stochastic decision making, outcomes are probability-based, providing an element of randomness. Strategic decision making involves planning a sequence of actions to achieve a specified goal. Decisions can also be guided using various API tools such as **pathfinding algorithms** (which determine the shortest path between two points) or **decision trees** (which facilitate the selection of an action based on certain conditions). The choice of decision-making method depends largely on the desired complexity and behavior of your game elements.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Decision Making in Game Development",
+ "url": "https://www.cgspectrum.com/blog/game-design-decision-making-troy-dunniway",
+ "type": "article"
+ }
+ ]
},
"rwGivalwv2ozdSlVMSc4U": {
"title": "Decision Tree",
- "description": "A **Decision Tree** is a graphical representation used in game development which helps to visualize the possible outcomes or paths a game could take depending on certain choices made by a gamer. Each branch of the tree represents a possible decision, outcome, or reaction and each node on the tree represents a game situation or event. Decision Trees help game developers in making strategic designs, create complex enemy AI, and overall assists in predicting the interaction or course of the game. It allows game developers to layout the decision points, possible choices and their outcomes, thus making it easier to trace the direction in which the game progresses.",
- "links": []
+ "description": "A **Decision Tree** is a graphical representation used in game development which helps to visualize the possible outcomes or paths a game could take depending on certain choices made by a gamer. Each branch of the tree represents a possible decision, outcome, or reaction and each node on the tree represents a game situation or event. Decision Trees help game developers in making strategic designs, create complex enemy AI, and overall assists in predicting the interaction or course of the game. It allows game developers to layout the decision points, possible choices and their outcomes, thus making it easier to trace the direction in which the game progresses.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "@articleDecision Tree",
+ "url": "https://www.gamedeveloper.com/programming/behavior-trees-for-ai-how-they-work",
+ "type": "article"
+ },
+ {
+ "title": "What is a Decision Tree?",
+ "url": "https://www.youtube.com/watch?v=bmP4ppe_-cw",
+ "type": "video"
+ }
+ ]
},
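A decision tree for NPC behaviour can be as small as a few nested condition checks. The sketch below is illustrative only: the `health` and `enemy_distance` conditions, thresholds, and actions are made-up examples chosen to show the branching structure.

```python
# Tiny decision-tree sketch for an NPC: internal nodes test a condition,
# leaves return an action.

class Node:
    def __init__(self, test=None, yes=None, no=None, action=None):
        self.test, self.yes, self.no, self.action = test, yes, no, action

    def decide(self, state):
        if self.action is not None:                 # leaf: chosen action
            return self.action
        branch = self.yes if self.test(state) else self.no
        return branch.decide(state)

npc_tree = Node(
    test=lambda s: s["health"] < 30,
    yes=Node(action="retreat"),
    no=Node(
        test=lambda s: s["enemy_distance"] < 10,
        yes=Node(action="attack"),
        no=Node(action="patrol"),
    ),
)

print(npc_tree.decide({"health": 80, "enemy_distance": 5}))   # attack
print(npc_tree.decide({"health": 20, "enemy_distance": 5}))   # retreat
```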
"aJa_2xkZuSjQ5bt6Kj5oe": {
"title": "State Machine",
- "description": "A **State Machine** is a conceptual model that is frequently used in game development to manage game states, or conditions. It consists of a number of different 'states', or modes, and the transitions between them. For instance, a mobile game could have states such as 'Start Screen', 'Playing', 'Paused' and 'Game Over'. Each one of these states will have specific commands associated and rules for transitioning to other states. This will govern the flow and behavior of the game. It can be used in AI character behaviors, UI systems, or game-level states. State Machines keep the code organised and manageable, making it easier for developers to implement complex game logic.",
- "links": []
+ "description": "A **State Machine** is a conceptual model that is frequently used in game development to manage game states, or conditions. It consists of a number of different 'states', or modes, and the transitions between them. For instance, a mobile game could have states such as 'Start Screen', 'Playing', 'Paused' and 'Game Over'. Each one of these states will have specific commands associated and rules for transitioning to other states. This will govern the flow and behavior of the game. It can be used in AI character behaviors, UI systems, or game-level states. State Machines keep the code organized and manageable, making it easier for developers to implement complex game logic.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "State Machines in Games",
+ "url": "https://gamedev.net/tutorials/_/technical/game-programming/state-machines-in-games-r2982/",
+ "type": "article"
+ }
+ ]
},
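The 'Start Screen / Playing / Paused / Game Over' example boils down to a transition table plus a current state. A minimal Python sketch, with event names invented for illustration:

```python
# Minimal finite-state-machine sketch for game flow.

TRANSITIONS = {
    ("start_screen", "start_pressed"): "playing",
    ("playing", "pause_pressed"): "paused",
    ("paused", "pause_pressed"): "playing",
    ("playing", "player_died"): "game_over",
    ("game_over", "restart_pressed"): "start_screen",
}

class GameStateMachine:
    def __init__(self, initial="start_screen"):
        self.state = initial

    def handle(self, event):
        # Events with no transition from the current state are ignored.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = GameStateMachine()
print(fsm.handle("start_pressed"))   # playing
print(fsm.handle("pause_pressed"))   # paused
```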
"ztoW8fBY73Es624A_tjd7": {
"title": "Behavior Tree",
"description": "The **Behavior Tree** is a decision-making system used in game development, primarily for AI character behavior. These trees help define the actions an AI character will take, based on predefined tasks and conditions. The tree structure starts from a single root, branching out to nodes that represent these decisions or tasks. The tasks can be simple, such as moving from one point to another, or can be complex decisions like whether to attack or retreat. This kind of structure is advantageous because it is easy to add, remove, or modify tasks without breaking the tree or affecting other tasks. This makes it highly flexible and easy to manage, irrespective of the complexity of the tasks.\n\nVisit the following resources to learn more:",
"links": [
+ {
+ "title": "Open Behavior Tree",
+ "url": "https://sterberino.github.io/open-behavior-trees-documentation/index.html",
+ "type": "article"
+ },
{
"title": "Unreal Engine 5 Tutorial - AI Part 2: Behavior Tree",
"url": "https://www.youtube.com/watch?v=hbHqv9ov8IM&list=PL4G2bSPE_8uklDwraUCMKHRk2ZiW29R6e&index=3&t=16s",
@@ -750,65 +1783,166 @@
},
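The two most common composite nodes, a selector (first child that succeeds wins) and a sequence (all children must succeed in order), can be sketched in a few lines. The leaf tasks below are placeholders, and the sketch omits the "running" status that real behavior-tree implementations also return.

```python
# Behaviour-tree sketch: composite nodes return "success" or "failure".

def selector(*children):
    def tick(state):
        for child in children:              # try children until one succeeds
            if child(state) == "success":
                return "success"
        return "failure"
    return tick

def sequence(*children):
    def tick(state):
        for child in children:              # every child must succeed
            if child(state) != "success":
                return "failure"
        return "success"
    return tick

def can_see_enemy(state):
    return "success" if state["enemy_visible"] else "failure"

def attack(state):
    print("attack")
    return "success"

def patrol(state):
    print("patrol")
    return "success"

enemy_ai = selector(sequence(can_see_enemy, attack), patrol)
enemy_ai({"enemy_visible": True})    # prints: attack
enemy_ai({"enemy_visible": False})   # prints: patrol
```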
"4ZCVUpYrCT14d_JULulLe": {
"title": "Fuzzy Logic",
- "description": "Fuzzy Logic is a mathematical logic method that resolves problem-solving and system control. Unlike traditional binary sets (true or false), fuzzy logic variables have a truth value that ranges in degree between 0 and 1. This allows them to handle the concept of partial truth, where the truth value may range between completely true and completely false. In game development, fuzzy logic is often used in artificial intelligence to make the game more realistic. For instance, it can be used to program non-player characters (NPCs) who respond to situational changes dynamically, making the gameplay more engaging and interactive.",
- "links": []
+ "description": "Fuzzy Logic is a mathematical logic method that resolves problem-solving and system control. Unlike traditional binary sets (true or false), fuzzy logic variables have a truth value that ranges in degree between 0 and 1. This allows them to handle the concept of partial truth, where the truth value may range between completely true and completely false. In game development, fuzzy logic is often used in artificial intelligence to make the game more realistic. For instance, it can be used to program non-player characters (NPCs) who respond to situational changes dynamically, making the gameplay more engaging and interactive.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Fuzzy Login in Games",
+ "url": "https://www.michelepirovano.com/pdf/fuzzy_ai_in_games.pdf",
+ "type": "article"
+ }
+ ]
},
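The "degree of truth" idea can be shown with membership functions and a crude rule blend. Everything in the sketch below (the health and distance curves, the rules, the thresholds) is an invented example, not a standard formulation.

```python
# Fuzzy-logic sketch: membership degrees in [0, 1] instead of hard booleans.

def low_health(health):              # fully true at 0 HP, fully false at 50+
    return max(0.0, min(1.0, (50 - health) / 50))

def enemy_close(distance):           # fully true at 0 m, fully false at 20 m+
    return max(0.0, min(1.0, (20 - distance) / 20))

def npc_behavior(health, distance):
    flee = low_health(health)                      # "IF health is low THEN flee"
    attack = min(1.0 - low_health(health),         # "IF healthy AND enemy close
                 enemy_close(distance))            #  THEN attack"
    # Crude defuzzification: pick the rule with the strongest activation.
    return "flee" if flee > attack else "attack"

print(npc_behavior(health=10, distance=5))   # flee
print(npc_behavior(health=90, distance=5))   # attack
```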
"c6j-30p84vk3MZEF1R2hN": {
"title": "Markov System",
- "description": "A **Markov System** or **Markov Chain** represents a statistical model that is used in decision-making scenarios within game development. This model is based on the notion of \"memorylessness\" where a certain event's probability depends solely on the state attained in the previous event. It employs a sequence of possible events where the probability of each event hinges on the state achieved in the previous event. A common usage of a Markov System is in designing AI behavior within games, where each state symbolizes a different behavior, and transitions between them are governed by the Markov chain probabilities.",
- "links": []
+ "description": "A **Markov System** or **Markov Chain** represents a statistical model that is used in decision-making scenarios within game development. This model is based on the notion of \"memorylessness\" where a certain event's probability depends solely on the state attained in the previous event. It employs a sequence of possible events where the probability of each event hinges on the state achieved in the previous event. A common usage of a Markov System is in designing AI behavior within games, where each state symbolizes a different behavior, and transitions between them are governed by the Markov chain probabilities.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Markov System",
+ "url": "https://towardsdatascience.com/modeling-games-with-markov-chains-c7b614731a7f",
+ "type": "article"
+ },
+ {
+ "title": "Using Markov Chain in Game Development",
+ "url": "https://www.gamedeveloper.com/design/advanced-math-in-game-design-random-walks-and-markov-chains-in-action",
+ "type": "article"
+ }
+ ]
},
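A Markov chain for NPC behaviour reduces to a table of transition probabilities and a sampling step: the next behaviour depends only on the current one. The states and probabilities below are illustrative, not tuned values.

```python
import random

TRANSITIONS = {
    "patrol": [("patrol", 0.7), ("chase", 0.2), ("idle", 0.1)],
    "chase":  [("chase", 0.6), ("attack", 0.3), ("patrol", 0.1)],
    "attack": [("attack", 0.5), ("chase", 0.3), ("patrol", 0.2)],
    "idle":   [("idle", 0.5), ("patrol", 0.5)],
}

def next_state(current):
    states, weights = zip(*TRANSITIONS[current])
    return random.choices(states, weights=weights, k=1)[0]

# Simulate a short behaviour sequence.
state = "patrol"
for _ in range(5):
    state = next_state(state)
    print(state)
```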
"Cuc0xvCAkVyUtwOxO_Uua": {
"title": "Goal Oriented Behavior",
- "description": "Goal oriented behavior in game development refers to the artificial intelligence algorithms employed that give non-player characters (NPCs) the ability to make decisions based on certain objectives or tasks. These NPCs analyze the circumstances in the game environment, formulate a plan to achieve specific goals, and then execute it. The degree of sophistication in these algorithms can range from simple pathways to complex problem-solving solutions. As the behavior models are not hard-coded, it provides NPCs with greater adaptability and autonomy.",
- "links": []
+ "description": "Goal oriented behavior in game development refers to the artificial intelligence algorithms employed that give non-player characters (NPCs) the ability to make decisions based on certain objectives or tasks. These NPCs analyze the circumstances in the game environment, formulate a plan to achieve specific goals, and then execute it. The degree of sophistication in these algorithms can range from simple pathways to complex problem-solving solutions. As the behavior models are not hard-coded, it provides NPCs with greater adaptability and autonomy.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Goal Oriented Action Planning",
+ "url": "https://duckduckgo.com/?q=Goal%20Oriented%20Behaviorin%20Game%20Dev+site:www.gamedeveloper.com",
+ "type": "article"
+ }
+ ]
},
"mUyzX-DXnIKDl-r9o8d38": {
"title": "Movement",
- "description": "In the context of game development and game API (Application Programming Interface), movement refers to the process of changing the position or orientation of game objects. This involves using programming functions to control objects' movement like walk, run, jump, fly, or any such physical action in the game world. Movement is at the core to creating the dynamics of a game and is critical to both game physics and game logic. Different game engines offer different ways for handling movement. In some APIs, this process could be as simple as setting a new position directly, such as `object.position = new Vector3(5, 10, 0)`. Meanwhile, in others, more complex methods involving real-world physics are required, such as applying forces or altering velocity.",
- "links": []
+ "description": "In the context of game development and game API (Application Programming Interface), movement refers to the process of changing the position or orientation of game objects. This involves using programming functions to control objects' movement like walk, run, jump, fly, or any such physical action in the game world. Movement is at the core to creating the dynamics of a game and is critical to both game physics and game logic. Different game engines offer different ways for handling movement. In some APIs, this process could be as simple as setting a new position directly, such as `object.position = new Vector3(5, 10, 0)`. Meanwhile, in others, more complex methods involving real-world physics are required, such as applying forces or altering velocity.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Movement in Games",
+ "url": "https://www.gamedeveloper.com/design/analyzing-core-character-movement-in-3d",
+ "type": "article"
+ }
+ ]
},
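The direct-placement style mentioned in the description can be contrasted with per-frame velocity integration, which is the more common engine-agnostic approach. A minimal sketch (the gravity value and frame time are just example numbers):

```python
# Movement sketch: per-frame position update from velocity and acceleration.

def integrate(position, velocity, acceleration, dt):
    # Semi-implicit Euler: update velocity first, then position.
    velocity = [v + a * dt for v, a in zip(velocity, acceleration)]
    position = [p + v * dt for p, v in zip(position, velocity)]
    return position, velocity

pos, vel = [0.0, 0.0, 0.0], [5.0, 0.0, 0.0]
gravity = [0.0, -9.81, 0.0]
for frame in range(3):                      # three simulated frames at 60 FPS
    pos, vel = integrate(pos, vel, gravity, dt=1 / 60)
    print(frame, [round(c, 3) for c in pos])
```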
"eoK70YRCz73GmzbNhh5kg": {
"title": "Board Game",
- "description": "**Board Games** represent a type of tabletop game that involves counters or pieces moved or placed on a pre-marked surface or \"board\", according to a set of rules. Some games are based on pure strategy, but many contain an element of chance, and others are purely chance, with no element of skill. Games usually have a goal that a player aims to achieve. Early board games represented a battle between two armies, and most modern board games are still based on defeating opponents in terms of counters, winning position, or accruement of points. With the digitalization of board games, developers use various **Game APIs** to create engaging and interactive board game experiences. An API defines a set of rules and protocols for building and interacting with different software applications. Game APIs allow developers to integrate with game-specific features like game mechanics, player statistics, achievements, and more.",
- "links": []
+ "description": "**Board Games** represent a type of tabletop game that involves counters or pieces moved or placed on a pre-marked surface or \"board\", according to a set of rules. Some games are based on pure strategy, but many contain an element of chance, and others are purely chance, with no element of skill. Games usually have a goal that a player aims to achieve. Early board games represented a battle between two armies, and most modern board games are still based on defeating opponents in terms of counters, winning position, or accruement of points. With the digitalization of board games, developers use various **Game APIs** to create engaging and interactive board game experiences. An API defines a set of rules and protocols for building and interacting with different software applications. Game APIs allow developers to integrate with game-specific features like game mechanics, player statistics, achievements, and more.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What is a Board Game?",
+ "url": "https://code.tutsplus.com/how-to-learn-board-game-design-and-development--gamedev-11607a",
+ "type": "article"
+ }
+ ]
},
"oOjGqicW3eqRwWyIwJdBA": {
"title": "Minimax",
- "description": "`Minimax` is an artificial intelligence (AI) decision-making algorithm mainly used in decision making and game theory, particularly for two player zero-sum games. It formulates a strategy by simulating all possible game scenarios and assuming that the opponent is playing an optimal game. Minimax operates by the player minimizing the possible loss for a worst case scenario and thus making the 'maximum of the minimum' possible scenarios. This algorithm is often combined with `alpha-beta pruning` technique to increase its efficiency.",
- "links": []
+ "description": "`Minimax` is an artificial intelligence (AI) decision-making algorithm mainly used in decision making and game theory, particularly for two player zero-sum games. It formulates a strategy by simulating all possible game scenarios and assuming that the opponent is playing an optimal game. Minimax operates by the player minimizing the possible loss for a worst case scenario and thus making the 'maximum of the minimum' possible scenarios. This algorithm is often combined with `alpha-beta pruning` technique to increase its efficiency.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Minimax",
+ "url": "https://en.wikipedia.org/wiki/Minimax",
+ "type": "article"
+ }
+ ]
},
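A bare-bones minimax over a generic game interface fits in a few lines. The `is_terminal`, `score`, `legal_moves`, and `apply` hooks are hypothetical methods a game object would provide; `score` is assumed to be from the maximizing player's point of view.

```python
# Minimax sketch for a two-player zero-sum game.

def minimax(game, state, depth, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    values = (
        minimax(game, game.apply(state, m), depth - 1, not maximizing)
        for m in game.legal_moves(state)
    )
    return max(values) if maximizing else min(values)

def best_move(game, state, depth=4):
    # Pick the move whose resulting position is best for the maximizer,
    # assuming the opponent replies optimally.
    return max(game.legal_moves(state),
               key=lambda m: minimax(game, game.apply(state, m), depth - 1, False))
```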
"KYCi4d475zZfNwlj6HZVD": {
"title": "AB Pruning",
- "description": "`Alpha-Beta pruning` is an optimization technique for the minimax algorithm used in artificial intelligence (AI) programming, such as game development. It cuts off branches in the game tree that don't need to be searched because there's already a better move available. It uses two parameters, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. During the traversal of the game tree, branches of the tree that cannot possibly influence the final decision are not explored. This process 'prunes' the minimax tree, saving computational time and resources.",
- "links": []
+ "description": "`Alpha-Beta pruning` is an optimization technique for the minimax algorithm used in artificial intelligence (AI) programming, such as game development. It cuts off branches in the game tree that don't need to be searched because there's already a better move available. It uses two parameters, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. During the traversal of the game tree, branches of the tree that cannot possibly influence the final decision are not explored. This process 'prunes' the minimax tree, saving computational time and resources.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "AB Pruning",
+ "url": "https://en.wikipedia.org/wiki/Alpha-beta_pruning",
+ "type": "article"
+ },
+ {
+ "title": "Alpha-Beta Pruning: A Deep Dive into its History",
+ "url": "https://dev.to/vedantasati03/alpha-beta-pruning-a-deep-dive-into-its-history-implementation-and-functionality-4ojf",
+ "type": "article"
+ }
+ ]
},
"QD9TfZn3yhGPVwiyJ6d0V": {
"title": "MCTS",
- "description": "\"MCTS\", or Monte Carlo Tree Search, is a search algorithm that utilizes methods of decision-making to solve complex problems, commonly implemented in a range of applications, including board games. It essentially operates through building a search tree, node by node, for probable states of a game and then using Monte Carlo simulations to provide a statistical analysis of potential outcomes. It randomly generates moves using the game's determined rules, then makes decisions based on the results of these simulations. In board games, it's often used to determine AI decisions by simulating possible game scenarios, hence contributing to making the AI system more robust and challenging.",
- "links": []
+      "description": "\"MCTS\", or Monte Carlo Tree Search, is a heuristic search algorithm for decision-making problems, commonly implemented in a range of applications, including board games. It operates by building a search tree, node by node, for probable states of a game and then using Monte Carlo simulations to provide a statistical analysis of potential outcomes. It randomly generates moves according to the game's rules, then makes decisions based on the results of these simulations. In board games, it is often used to drive AI decisions by simulating possible game scenarios, making the AI more robust and challenging.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "MCTS Algorithm",
+          "url": "https://en.wikipedia.org/wiki/Monte_Carlo_tree_search",
+ "type": "article"
+ },
+ {
+ "title": "Monte Carlo Tree Search",
+ "url": "https://www.geeksforgeeks.org/ml-monte-carlo-tree-search-mcts/",
+ "type": "article"
+ }
+ ]
},
"Hpk8eOaOepERMmOvUgkxa": {
"title": "Game AI",
- "description": "Game AI is a subfield of artificial intelligence (AI) that is used to create video game characters that act and react like real human players. Game AI is used in a variety of video games, from simple puzzle games to complex strategy games. Game AI can be used to create non-player characters (NPCs) that interact with the player, as well as to create intelligent opponents that challenge the player.",
- "links": []
+      "description": "Game AI is a subfield of artificial intelligence (AI) that is used to create video game characters that act and react like real human players. It appears in a wide variety of video games, from simple puzzle games to complex strategy games, and is used to create non-player characters (NPCs) that interact with the player as well as intelligent opponents that challenge the player.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Game AI",
+ "url": "https://medium.com/@alidrsn/game-development-with-ai-strategy-tools-and-examples-7ae77257c062",
+ "type": "article"
+ },
+ {
+ "title": "AI Game Development",
+ "url": "https://modl.ai/ai-game-development/",
+ "type": "article"
+ }
+ ]
},
"ul5XnVwQCwr4ZaL4kBNpd": {
"title": "Decision Learning",
- "description": "In the realm of game development, **Decision Learning** refers to information systems that recognize and analyze patterns to help in making decisions. It’s particularly used in AI game development where decision-making algorithms or artificial intelligence are programmed to learn from and make decisions based on past experiences or an established decision tree. These decisions can be about game behaviors, player interactions, environment changes and so on. Various methods such as reinforcement learning, Bayesian methods, Decision trees, Neural networks are used to facilitate decision learning in game development.",
- "links": []
+      "description": "In the realm of game development, **Decision Learning** refers to information systems that recognize and analyze patterns to help in making decisions. It’s particularly used in AI game development, where decision-making algorithms are programmed to learn from and make decisions based on past experiences or an established decision tree. These decisions can be about game behaviors, player interactions, environment changes, and so on. Various methods such as reinforcement learning, Bayesian methods, decision trees, and neural networks are used to facilitate decision learning in game development.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Decision Learning in Game Development",
+ "url": "https://medium.com/@alidrsn/game-development-with-ai-strategy-tools-and-examples-7ae77257c062",
+ "type": "article"
+ }
+ ]
},
"aw1BAGqrdBBmUwB6vMF_A": {
"title": "Naive Bayes Classifier",
- "description": "The Naive Bayes Classifier is a type of probabalistic machine learning model that is utilized for classification tasks. These tasks can range from email filtering to sentiment analysis or even document categorization. This model is termed 'naive' because it operates under the assumption that each input feature is independent from one another. This simplifying assumption allows for the computation of the probabilities involved to be severely less complicated. It follows the Bayes' Theorem equation to predict the class of the given data point. While this classifier might seem simplistic, it holds its own quite well in complex real-world situations. Due to its simplicity and high efficiency, the Naive Bayes Classifier is one of the most reliable and practical methods in machine learning applications.",
- "links": []
+      "description": "The Naive Bayes Classifier is a type of probabilistic machine learning model that is utilized for classification tasks. These tasks can range from email filtering to sentiment analysis or even document categorization. This model is termed 'naive' because it operates under the assumption that each input feature is independent of the others. This simplifying assumption greatly reduces the complexity of computing the probabilities involved. It follows Bayes' Theorem to predict the class of a given data point. While this classifier might seem simplistic, it holds its own quite well in complex real-world situations. Due to its simplicity and high efficiency, the Naive Bayes Classifier is one of the most reliable and practical methods in machine learning applications.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "How Naive Bayes Classifier Works?",
+ "url": "https://www.machinelearningplus.com/predictive-modeling/how-naive-bayes-algorithm-works-with-example-and-full-code/",
+ "type": "article"
+ },
+ {
+ "title": "Text Classification With Naive Bayes Classifier",
+ "url": "https://gamedevacademy.org/text-classification-tutorial-with-naive-bayes/",
+ "type": "article"
+ }
+ ]
},
"sz1047M8_kScjth84yPwU": {
"title": "Decision Tree Learning",
"description": "`Decision Tree Learning` is an important concept in game development, particularly in the development of artificial intelligence for game characters. It is a kind of machine learning method that is based on using decision tree models to predict or classify information. A decision tree is a flowchart-like model, where each internal node denotes a test on an attribute, each branch represents an outcome of that test, and each leaf node holds a class label (decision made after testing all attributes). By applying decision tree learning models, computer-controlled characters can make decisions based on different conditions or states. They play a key role in creating complex and interactive gameplay experiences, by enabling game characters to adapt to the player's actions and the ever-changing game environment.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Decision trees - A friendly introduction",
+ "title": "Game Strategy - Real Time Decision Tree",
+ "url": "https://medium.com/@aleena.sebastian/game-strategy-optimization-using-decision-trees-d4067008eed1",
+ "type": "article"
+ },
+ {
+ "title": "Real Time Decision Tree",
+ "url": "https://www.codewithc.com/real-time-decision-trees-in-pygame-ai/",
+ "type": "article"
+ },
+ {
+ "title": "Decision Trees - A Friendly Introduction",
"url": "https://www.youtube.com/watch?v=HkyWAhr9v8g",
"type": "video"
}
@@ -819,7 +1953,22 @@
"description": "Deep Learning is a sub-field of machine learning, inspired by the structure and function of the human brain, specifically designed to process complex input/output transformations. It uses artificial neural networks with many layers (hence the term 'deep' learning) to model complex, non-linear hypotheses and discover hidden patterns within large datasets. Deep learning techniques are crucial in game development, primarily in creating intelligent behaviors and features in gaming agents, procedural content generation, and player profiling. You might have heard about the uses of deep learning technologies in popular, cutting-edge games like Google DeepMind's AlphaGo. Coding languages like Python, R, and frameworks like TensorFlow, Keras, and PyTorch are commonly used for deep learning tasks. Learning Deep Learning can be a prominent game-changer in your game development journey.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "But what is a neural network? | Chapter 1, Deep learning",
+ "title": "Deep Learning",
+ "url": "https://en.wikipedia.org/wiki/Deep_learning",
+ "type": "article"
+ },
+ {
+ "title": "Deep Learning Book",
+ "url": "https://www.deeplearningbook.org/",
+ "type": "article"
+ },
+ {
+ "title": "Introduction to Deep Learning",
+ "url": "https://www.ibm.com/topics/deep-learning",
+ "type": "article"
+ },
+ {
+ "title": "What is a Neural Network?",
"url": "https://www.youtube.com/watch?v=aircAruvnKk",
"type": "video"
}
@@ -827,15 +1976,36 @@
},
"AoH2r4EOHyZd8YaV24rBk": {
"title": "Artificial Neural Network",
- "description": "Artificial Neural Networks (ANN) are a branch of machine learning that draw inspiration from biological neural networks. ANNs are capable of 'learning' from observational data, thereby enhancing game development in numerous ways. They consist of interconnected layers of nodes, or artificial neurons, that process information through their interconnected network. Each node's connection has numerical weight that gets adjusted during learning, which helps in optimizing problem solving. ANNs are utilized in various aspects of game development, such as improving AI behavior, procedural content generation, and game testing. They can also be used for image recognition tasks, such as identifying objects or actions in a game environment.",
- "links": []
+      "description": "Artificial Neural Networks (ANN) are a branch of machine learning that draws inspiration from biological neural networks. ANNs are capable of 'learning' from observational data, thereby enhancing game development in numerous ways. They consist of interconnected layers of nodes, or artificial neurons, that process information as it flows through the network. Each connection between nodes has a numerical weight that gets adjusted during learning, which helps in optimizing problem solving. ANNs are utilized in various aspects of game development, such as improving AI behavior, procedural content generation, and game testing. They can also be used for image recognition tasks, such as identifying objects or actions in a game environment.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Artificial Neural Networks (ANN)",
+ "url": "https://www.geeksforgeeks.org/artificial-neural-networks-and-its-applications/",
+ "type": "article"
+ },
+ {
+ "title": "What is ANN?",
+ "url": "https://www.coursera.org/articles/artificial-neural-network",
+ "type": "article"
+ },
+ {
+ "title": "What is Neural Network?",
+ "url": "https://www.ibm.com/topics/neural-networks",
+ "type": "article"
+ }
+ ]
},
"rGEHTfdNeBAX3_XqC-vvI": {
"title": "Reinforcements Learning",
- "description": "`Reinforcement Learning` is a type of Machine Learning which is geared towards making decisions. It involves an agent that learns to behave in an environment, by performing certain actions and observing the results or rewards/results it gets. The main principle of reinforcement learning is to reward good behavior and penalize bad behavior. The agent learns from the consequences of its actions, rather than from being taught explicitly. In the context of game development, reinforcement learning could be used to develop an AI (Artificial Intelligence) which can improve its performance in a game based on reward-driven behavior. The AI gradually learns the optimal strategy, known as policy, to achieve the best result.",
+      "description": "`Reinforcement Learning` is a type of Machine Learning that is geared towards making decisions. It involves an agent that learns to behave in an environment by performing certain actions and observing the rewards or results it gets. The main principle of reinforcement learning is to reward good behavior and penalize bad behavior. The agent learns from the consequences of its actions, rather than from being taught explicitly. In the context of game development, reinforcement learning can be used to develop an AI (Artificial Intelligence) which improves its performance in a game based on reward-driven behavior. The AI gradually learns the optimal strategy, known as a policy, to achieve the best result.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "AI Learns to Walk (deep reinforcement learning)",
+        "title": "AI Learns to Walk (Deep Reinforcement Learning)",
"url": "https://m.youtube.com/watch?v=L_4BPjLBF4E",
"type": "video"
}
@@ -843,38 +2013,115 @@
},
"9_OcZ9rzedDFfwEYxAghh": {
"title": "Learning",
- "description": "Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. In terms of game development, machine learning can be used to create NPCs that can learn from the player's actions and adapt to them.",
- "links": []
+ "description": "Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. In terms of game development, machine learning can be used to create NPCs that can learn from the player's actions and adapt to them.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Machine Learning - Wiki",
+ "url": "https://en.wikipedia.org/wiki/Machine_learning",
+ "type": "article"
+ },
+ {
+ "title": "Machine Learning Explained",
+ "url": "https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained",
+ "type": "article"
+ }
+ ]
},
"CDYszS1U4v95GozB_drbt": {
"title": "Advanced Rendering",
- "description": "**Advanced rendering** is a sophisticated technique used in game development that involves translating a 3D model or scene into a 2D image or animation. Advanced rendering techniques can involve various complex methods such as physically-based rendering, ray tracing, global illumination, subsurface scattering, caustics, and volumetric rendering. The use of advanced rendering can result in highly realistic graphics, as it uses complex calculations to depict how light behaves in the real world. Advanced rendering often requires powerful hardware resources and specialized software tools in order to achieve the desired images and animations.",
- "links": []
+ "description": "**Advanced rendering** is a sophisticated technique used in game development that involves translating a 3D model or scene into a 2D image or animation. Advanced rendering techniques can involve various complex methods such as physically-based rendering, ray tracing, global illumination, subsurface scattering, caustics, and volumetric rendering. The use of advanced rendering can result in highly realistic graphics, as it uses complex calculations to depict how light behaves in the real world. Advanced rendering often requires powerful hardware resources and specialized software tools in order to achieve the desired images and animations.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Advanced Rendering",
+ "url": "https://www.advances.realtimerendering.com/",
+ "type": "article"
+ },
+ {
+ "title": "Advances in Real Time Rendering",
+ "url": "https://www.advances.realtimerendering.com/s2024/index.html",
+ "type": "article"
+ }
+ ]
},
"_i7BXZq-iLxQc3QZRMees": {
"title": "Real-time Ray Tracing",
- "description": "**Real-time Ray Tracing** is a notable advancement in rendering technology. It aims to mimic the way light works in the real world by simulating how each ray of light interacts with different surfaces. In real-time ray tracing, rays of light are generated from the viewer's perspective and sent into the scene. They can reflect off surfaces, refract through materials, or scatter in different directions. These rays can also be absorbed, producing shadows and shaping the visibility of objects. What makes real-time ray tracing special is its ability to calculate all these interactions in real-time, which allows graphics to be much more dynamic and interactive. The complexity of real-time ray tracing involves extensive computational power and it has been a groundbreaking feature in newer hardware and software releases.",
- "links": []
+ "description": "**Real-time Ray Tracing** is a notable advancement in rendering technology. It aims to mimic the way light works in the real world by simulating how each ray of light interacts with different surfaces. In real-time ray tracing, rays of light are generated from the viewer's perspective and sent into the scene. They can reflect off surfaces, refract through materials, or scatter in different directions. These rays can also be absorbed, producing shadows and shaping the visibility of objects. What makes real-time ray tracing special is its ability to calculate all these interactions in real-time, which allows graphics to be much more dynamic and interactive. The complexity of real-time ray tracing involves extensive computational power and it has been a groundbreaking feature in newer hardware and software releases.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What is Real-Time Ray Tracing?",
+ "url": "https://www.unrealengine.com/en-US/explainers/ray-tracing/what-is-real-time-ray-tracing",
+ "type": "article"
+ },
+ {
+ "title": "Nvidia RTX Real-Time Ray Tracing",
+ "url": "https://blogs.nvidia.com/blog/rtx-real-time-ray-tracing/",
+ "type": "article"
+ }
+ ]
},
"qoIkw9o8iMx7MzUyVYoR2": {
"title": "DirectX Ray Tracing",
- "description": "DirectX Ray Tracing (DXR) is an advanced Windows API introduced with DirectX 12. It delivers real-time, cinema-quality rendering to contend development in gaming and professional visualization. It provides highly efficient and straightforward access to RT Core hardware. DXR adds four new concepts to DirectX 12: The acceleration structure, The Raytracing pipeline state object, Shader tables, and the Command list method (DispatchRays). It represents a significant step forward by Microsoft in embracing Ray Tracing as a new standard in real-time rendering pipelines. For developers, DirectX Ray tracing is straightforward to integrate into existing engines given its easy compatibility with existing DirectX 12 programming models. However, to truly maximize DXR's potential, a deep understanding of both graphics workloads and tracing algorithms is necessary.",
- "links": []
+      "description": "DirectX Ray Tracing (DXR) is an advanced Windows API introduced with DirectX 12. It delivers real-time, cinema-quality rendering for content development in gaming and professional visualization, and provides highly efficient and straightforward access to RT Core hardware. DXR adds four new concepts to DirectX 12: the acceleration structure, the ray tracing pipeline state object, shader tables, and the DispatchRays command list method. It represents a significant step forward by Microsoft in embracing ray tracing as a new standard in real-time rendering pipelines. For developers, DXR is straightforward to integrate into existing engines given its compatibility with existing DirectX 12 programming models. However, to truly maximize DXR's potential, a deep understanding of both graphics workloads and tracing algorithms is necessary.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Announcing Microsoft DirectX Ray Tracing",
+ "url": "https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/",
+ "type": "article"
+ },
+ {
+ "title": "DirectX Ray Tracing",
+ "url": "https://developer.nvidia.com/blog/introduction-nvidia-rtx-directx-ray-tracing/",
+ "type": "article"
+ },
+ {
+ "title": "DX12 Ray Tracing",
+ "url": "https://developer.nvidia.com/blog/dx12-raytracing-tutorials/",
+ "type": "article"
+ }
+ ]
},
"tDGnV8dGIFr_diz4HcEjr": {
"title": "Vulkan Ray Tracing",
- "description": "`Vulkan Ray Tracing` is an extension of the Vulkan API (Application Programming Interface), which is an open-source, cross-platform API developed by the Khronos Group. Its main goal is to provide developers with greater control over the GPU, enabling better performance and more efficient multisystem and multicore use. The Vulkan Ray Tracing extension provides a standardized ray tracing interface similar to DirectX Raytracing, enabling real-time ray tracing applications to be built on Vulkan. This extension includes a number of functionalities such as acceleration structure building and management, ray tracing shader stages and pipelines, and indirect ray tracing dispatch.",
- "links": []
+ "description": "`Vulkan Ray Tracing` is an extension of the Vulkan API (Application Programming Interface), which is an open-source, cross-platform API developed by the Khronos Group. Its main goal is to provide developers with greater control over the GPU, enabling better performance and more efficient multisystem and multicore use. The Vulkan Ray Tracing extension provides a standardized ray tracing interface similar to DirectX Raytracing, enabling real-time ray tracing applications to be built on Vulkan. This extension includes a number of functionalities such as acceleration structure building and management, ray tracing shader stages and pipelines, and indirect ray tracing dispatch.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "NVIDIA Vulkan Ray Tracing Tutorial",
+ "url": "https://developer.nvidia.com/rtx/raytracing/vkray",
+ "type": "article"
+ },
+ {
+ "title": "Ray Tracing with Vulkan",
+ "url": "https://docs.vulkan.org/guide/latest/extensions/ray_tracing.html",
+ "type": "article"
+ }
+ ]
},
"GDLysy3__cbYidEaOmFze": {
"title": "OptiX",
- "description": "`OptiX` is an application framework developed by NVIDIA for achieving high performance ray tracing in graphics processing unit (GPU) programming. It's mainly intended for use in real-time graphics application, scientific simulations, and other visual computing applications. `OptiX` provides key functionalities such as hierarchical object acceleration, programmable ray generation, material shading, and dynamic scene management to achieve fast and state-of-the-art rendering. This highly efficient, scalable and flexible API supports the coding of applications, not just for graphic rendering but also for collision detection and physics simulation. Please note that access to `OptiX` currently requires NVIDIA GeForce, Quadro and Tesla products with Kepler, Maxwell, Pascal, Volta, Turing and later generation GPUs.",
- "links": []
+      "description": "`OptiX` is an application framework developed by NVIDIA for achieving high-performance ray tracing in graphics processing unit (GPU) programming. It's mainly intended for use in real-time graphics applications, scientific simulations, and other visual computing applications. `OptiX` provides key functionalities such as hierarchical object acceleration, programmable ray generation, material shading, and dynamic scene management to achieve fast and state-of-the-art rendering. This highly efficient, scalable and flexible API supports the coding of applications not just for graphics rendering but also for collision detection and physics simulation. Please note that access to `OptiX` currently requires NVIDIA GeForce, Quadro and Tesla products with Kepler, Maxwell, Pascal, Volta, Turing and later generation GPUs.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "OptiX by Nvidia",
+ "url": "https://developer.nvidia.com/optix/",
+ "type": "article"
+ },
+ {
+ "title": "OptiX API Documentation",
+ "url": "https://developer.nvidia.com/rtx/ray-tracing/optix",
+ "type": "article"
+ }
+ ]
},
"XvFtMHrYsBREmuerE7CGc": {
"title": "Physically-Based Rendering",
- "description": "Physically Based Rendering (PBR) is a technique in computer graphics that aims to mimic the interaction of light with surfaces in the real world. It models how light behaves, from reflection to refraction, in a way that accurately represents reality. The PBR model factors in physical properties of materials, such as roughness or metallicity, making the rendering output more consistent and predictable under different lighting conditions. It uses complex shading algorithms and light calculations to generate a high level of realism. In order to achieve this, many PBR systems use a combination of two important components: the Bidirectional Reflectance Distribution Function (BRDF), which defines how light is reflected off an object, and the Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF), which handles how light scatters under the surface of an object.",
- "links": []
+ "description": "Physically Based Rendering (PBR) is a technique in computer graphics that aims to mimic the interaction of light with surfaces in the real world. It models how light behaves, from reflection to refraction, in a way that accurately represents reality. The PBR model factors in physical properties of materials, such as roughness or metallicity, making the rendering output more consistent and predictable under different lighting conditions. It uses complex shading algorithms and light calculations to generate a high level of realism. In order to achieve this, many PBR systems use a combination of two important components: the Bidirectional Reflectance Distribution Function (BRDF), which defines how light is reflected off an object, and the Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF), which handles how light scatters under the surface of an object.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Physically-Based Rendering",
+ "url": "https://dev.epicgames.com/community/learning/tutorials/Yx3q/unreal-engine-physically-based-rendering-pbr-explained-in-depth",
+ "type": "article"
+ }
+ ]
},
"PuhXaRZ-Ql5PCqzMyz3en": {
"title": "Translucency & Transparency",
@@ -889,20 +2136,47 @@
},
"H3hkafXO9zqEnWuwHa38P": {
"title": "Conservation of Energy",
- "description": "In the realm of physically-based rendering, **translucency** and **transparency** act as key aspects in creating visually authentic and compelling images. Transparency refers to the property of an object that allows light to pass through it unhindered, hence making the object clear or invisible. This is commonly seen in materials such as glass, clear plastic, and water. On the other hand, translucency describes how light interacts with a semi-transparent object. Instead of passing directly through, light enters the object, travels within for some distance and then exits at a different location. Common examples of such surfaces include human skin, marble, milk, or wax, which exhibit a soft, diffused lighting effect when light rays pass through them. The technique to achieve this effect in graphics involves subsurface scattering, where incoming light is scattered beneath the object's surface, illuminated it in a way that showcases the material's internal structure.",
- "links": []
+      "description": "In the realm of physically-based rendering, **translucency** and **transparency** act as key aspects in creating visually authentic and compelling images. Transparency refers to the property of an object that allows light to pass through it unhindered, hence making the object clear or invisible. This is commonly seen in materials such as glass, clear plastic, and water. On the other hand, translucency describes how light interacts with a semi-transparent object. Instead of passing directly through, light enters the object, travels within for some distance and then exits at a different location. Common examples of such surfaces include human skin, marble, milk, or wax, which exhibit a soft, diffused lighting effect when light rays pass through them. The technique to achieve this effect in graphics involves subsurface scattering, where incoming light is scattered beneath the object's surface, illuminating it in a way that showcases the material's internal structure.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "What is Concave Shape?",
+ "url": "https://dev.to/fkkarakurt/geometry-and-primitives-in-game-development-1og",
+ "type": "article"
+ }
+ ]
},
"olY1ibR7kw1yJ58TfU-37": {
"title": "Metallicity",
- "description": "In Physically Based Rendering (PBR), **Metallicity** is a critical property of a material, which influences how it interacts with light. It's a binary property, indicating whether the material is a 'metal' or 'non-metal'. Metals have a high metallicity value (often 1), non-metals (such as wood, plastic, etc.) have a low metallicity value (often 0). Interestingly, with PBR, there exists no 'partially metal' materials ― it's an all or nothing characteristic. This property significantly impacts color handling, too, as metals derive their color from specular reflection while non-metals derive from subsurface scattering (diffuse).",
- "links": []
+      "description": "In Physically Based Rendering (PBR), **Metallicity** is a critical property of a material, which influences how it interacts with light. It's a binary property, indicating whether the material is a 'metal' or 'non-metal'. Metals have a high metallicity value (often 1), while non-metals (such as wood, plastic, etc.) have a low metallicity value (often 0). Interestingly, with PBR there are no 'partially metal' materials; it's an all-or-nothing characteristic. This property significantly impacts color handling, too, as metals derive their color from specular reflection while non-metals derive theirs from subsurface scattering (diffuse).\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+          "title": "Metallicity in PBR",
+ "url": "https://en.wikipedia.org/wiki/Physically_based_rendering",
+ "type": "article"
+ },
+ {
+ "title": "What is PBR in 3D Games",
+ "url": "https://www.adobe.com/products/substance3d/discover/pbr.html",
+ "type": "article"
+ }
+ ]
},
"YrQgfjsdLCIUxrwflpEHO": {
"title": "Microsurface Scattering",
"description": "Microsurface scattering, also known as sub-surface scattering, is an important phenomenon in Physically Based Rendering (PBR). This process involves the penetration of light into the surface of a material, where it is scattered by interacting with the material. In other words, when light strikes an object, rather than simply bouncing off the surface, some of it goes into the object and gets scattered around inside before getting re-emitted. It is key to achieving more realistic rendering of translucent materials like skin, marble, milk, and more. Consider it essential for replicating how light interacts with real-world materials in a convincing manner in your game.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "The 4 main types of subsurface scattering",
+ "title": "Subsurface Rendering",
+ "url": "https://gamedev.net/forums/topic/600708-subsurface-scattering/",
+ "type": "article"
+ },
+ {
+ "title": "Real Time Sub Surface Rendering",
+ "url": "https://therealmjp.github.io/posts/sss-intro/",
+ "type": "article"
+ },
+ {
+ "title": "Types of Subsurface Scattering",
"url": "https://www.youtube.com/watch?v=GkjvYSbGHg4",
"type": "video"
}
diff --git a/public/roadmap-content/git-github.json b/public/roadmap-content/git-github.json
index fc0042fd7..eed5fb996 100644
--- a/public/roadmap-content/git-github.json
+++ b/public/roadmap-content/git-github.json
@@ -227,7 +227,7 @@
},
{
"title": "gitignore Documentation",
- "url": "https://git-scm.com/docs/gitignore/en",
+ "url": "https://git-scm.com/docs/gitignore",
"type": "article"
},
{
@@ -599,7 +599,7 @@
"links": [
{
"title": "git clone",
- "url": "https://git-scm.com/docs/git-clone/en",
+ "url": "https://git-scm.com/docs/git-clone",
"type": "article"
},
{
@@ -810,7 +810,7 @@
"links": [
{
"title": "Rebasing",
- "url": "https://git-scm.com/book/en/Git-Branching-Rebasing",
+ "url": "https://git-scm.com/book/en/v2/Git-Branching-Rebasing",
"type": "article"
}
]
diff --git a/public/roadmap-content/javascript.json b/public/roadmap-content/javascript.json
index 62a2e2b3c..0a0ae537b 100644
--- a/public/roadmap-content/javascript.json
+++ b/public/roadmap-content/javascript.json
@@ -1475,17 +1475,6 @@
}
]
},
- "BbrrliATuH9beTypRaFey": {
- "title": "Relational Operators",
- "description": "Relational operators are also known as comparison operators. They are used to find the relationship between two values or compare the relationship between them; on the comparison, they yield the result true or false.\n\nVisit the following resources to learn more:",
- "links": [
- {
- "title": "Relational Operators - MDN",
- "url": "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators#relational_operators",
- "type": "article"
- }
- ]
- },
"k8bJH9qydZm8I9rhH7rXw": {
"title": "Functions",
"description": "Functions exist so we can reuse code. They are blocks of code that execute whenever they are invoked. Each function is typically written to perform a particular task, like an addition function used to find the sum of two or more numbers. When numbers need to be added anywhere within your code, the addition function can be invoked as many times as necessary.\n\nVisit the following resources to learn more:",
diff --git a/public/roadmap-content/nodejs.json b/public/roadmap-content/nodejs.json
index 0c590de47..3fc73fbe9 100644
--- a/public/roadmap-content/nodejs.json
+++ b/public/roadmap-content/nodejs.json
@@ -448,6 +448,11 @@
"url": "https://blog.bitsrc.io/types-of-native-errors-in-javascript-you-must-know-b8238d40e492",
"type": "article"
},
+ {
+ "title": "JavaScript error reference - MDN",
+ "url": "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors",
+ "type": "article"
+ },
{
"title": "Explore top posts about JavaScript",
"url": "https://app.daily.dev/tags/javascript?ref=roadmapsh",
@@ -463,6 +468,11 @@
"title": "Node.js Errors - Official Docs",
"url": "https://nodejs.org/api/errors.html#errors_class_systemerror",
"type": "article"
+ },
+ {
+        "title": "16 Common Errors in Node.js and How to Fix Them",
+ "url": "https://betterstack.com/community/guides/scaling-nodejs/nodejs-errors/",
+ "type": "article"
}
]
},
@@ -522,6 +532,11 @@
"title": "Async Errors",
"url": "https://www.mariokandut.com/handling-errors-in-asynchronous-functions-node-js/",
"type": "article"
+ },
+ {
+        "title": "The best way to handle errors in asynchronous JavaScript",
+ "url": "https://dev.to/m__mdy__m/the-best-way-to-handle-errors-in-asynchronous-javascript-16bb",
+ "type": "article"
}
]
},
@@ -996,7 +1011,7 @@
"links": [
{
"title": "Node.js Learn environment variables",
- "url": "https://www.digitalocean.com/community/tutorials/nodejs-command-line-arguments-node-scripts",
+ "url": "https://nodejs.org/en/learn/command-line/how-to-read-environment-variables-from-nodejs",
"type": "article"
},
{
@@ -1030,6 +1045,11 @@
"title": "Official Documentation",
"url": "https://nodejs.org/api/process.html#processstdin",
"type": "article"
+ },
+ {
+ "title": "Node.js Process stdin & stdout",
+ "url": "https://nodecli.com/node-stdin-stdout",
+ "type": "article"
}
]
},
@@ -1169,6 +1189,11 @@
"title": "Explore top posts about Node.js",
"url": "https://app.daily.dev/tags/nodejs?ref=roadmapsh",
"type": "article"
+ },
+ {
+ "title": "What is an API (in 5 minutes)",
+ "url": "https://youtu.be/ByGJQzlzxQg?si=9EB9lgRvEOgt3xPJ",
+ "type": "video"
}
]
},
@@ -1867,6 +1892,11 @@
"title": "winston Website",
"url": "https://github.com/winstonjs/winston",
"type": "opensource"
+ },
+ {
+ "title": "A Complete Guide to Winston Logging in Node.js",
+ "url": "https://betterstack.com/community/guides/logging/how-to-install-setup-and-use-winston-and-morgan-to-log-node-js-applications/",
+ "type": "article"
}
]
},
diff --git a/public/roadmap-content/postgresql-dba.json b/public/roadmap-content/postgresql-dba.json
index 306ef9382..19135f54f 100644
--- a/public/roadmap-content/postgresql-dba.json
+++ b/public/roadmap-content/postgresql-dba.json
@@ -1514,12 +1514,12 @@
"description": "PgBouncer is a lightweight connection pooler for PostgreSQL, designed to reduce the overhead associated with establishing new database connections. It sits between the client and the PostgreSQL server, maintaining a pool of active connections that clients can reuse, thus improving performance and resource utilization. PgBouncer supports multiple pooling modes, including session pooling, transaction pooling, and statement pooling, catering to different use cases and workloads. It is highly configurable, allowing for fine-tuning of connection limits, authentication methods, and other parameters to optimize database access and performance.",
"links": [
{
- "title": "pgbounder/pgbouncer",
+ "title": "pgbouncer/pgbouncer",
"url": "https://github.com/pgbouncer/pgbouncer",
"type": "opensource"
},
{
- "title": "PgBounder Website",
+ "title": "PgBouncer Website",
"url": "https://www.pgbouncer.org/",
"type": "article"
}
diff --git a/public/roadmap-content/python.json b/public/roadmap-content/python.json
index 969c1e9ee..0f3937e19 100644
--- a/public/roadmap-content/python.json
+++ b/public/roadmap-content/python.json
@@ -137,6 +137,11 @@
"title": "Python Functions - W3Schools",
"url": "https://www.w3schools.com/python/python_functions.asp",
"type": "article"
+ },
+ {
+ "title": "Defining Python Functions",
+ "url": "https://realpython.com/defining-your-own-python-function/",
+ "type": "article"
}
]
},
@@ -319,6 +324,11 @@
"title": "Explore top posts about Python",
"url": "https://app.daily.dev/tags/python?ref=roadmapsh",
"type": "article"
+ },
+ {
+ "title": "Learn Python - Full Course",
+ "url": "https://www.youtube.com/watch?v=4M87qBgpafk",
+ "type": "video"
}
]
},
@@ -328,7 +338,7 @@
"links": [
{
"title": "Visit Dedicated DSA Roadmap",
- "url": "https://roadmap.sh/data-structures-and-algorithms",
+ "url": "https://roadmap.sh/datastructures-and-algorithms",
"type": "article"
},
{
@@ -1166,10 +1176,16 @@
}
]
},
- "virtualenv@_IXXTSwQOgYzYIUuKVWNE.md": {
+ "_IXXTSwQOgYzYIUuKVWNE": {
"title": "virtualenv",
- "description": "",
- "links": []
+ "description": "`virtualenv` is a tool to create isolated Python environments. It creates a folder which contains all the necessary executables to use the packages that a Python project would need.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "Virtual Environments",
+ "url": "https://virtualenv.pypa.io/en/latest/",
+ "type": "article"
+ }
+ ]
},
"N5VaKMbgQ0V_BC5tadV65": {
"title": "pyenv",
@@ -1382,7 +1398,7 @@
},
"SSnzpijHLO5_l7DNEoMfx": {
"title": "nose",
- "description": "Nose is another opensource testing framework that extends `unittest` to provide a more flexible testing framework.\n\nVisit the following resources to learn more:",
+ "description": "Nose is another opensource testing framework that extends `unittest` to provide a more flexible testing framework. Note that Nose is no longer maintained and `pytest` is considered the replacement.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Introduction to Nose",
diff --git a/public/roadmap-content/react.json b/public/roadmap-content/react.json
index 8d9048735..6e7fb25d8 100644
--- a/public/roadmap-content/react.json
+++ b/public/roadmap-content/react.json
@@ -1265,6 +1265,11 @@
"title": "Explore top posts about Jest",
"url": "https://app.daily.dev/tags/jest?ref=roadmapsh",
"type": "article"
+ },
+ {
+ "title": "Testing JavaScript with Jest on Vultr",
+ "url": "https://developer.mozilla.org/en-US/blog/test-javascript-with-jest-on-vultr/",
+ "type": "article"
}
]
},
@@ -1670,6 +1675,11 @@
"title": "React Suspense",
"url": "https://react.dev/reference/react/Suspense",
"type": "article"
+ },
+ {
+ "title": "React Suspense - A complete guide",
+ "url": "https://hygraph.com/blog/react-suspense",
+ "type": "article"
}
]
},
diff --git a/public/roadmap-content/technical-writer.json b/public/roadmap-content/technical-writer.json
index 82911fad3..fce6d8c0b 100644
--- a/public/roadmap-content/technical-writer.json
+++ b/public/roadmap-content/technical-writer.json
@@ -27,8 +27,29 @@
},
"jl1FsQ5-WGKeFyaILNt_p": {
"title": "What is Technical Writing?",
- "description": "Technical writing involves explaining complex concepts in a simple, easy-to-understand language to a specific audience. This form of writing is commonly utilized in fields such as engineering, computer hardware and software, finance, consumer electronics, and biotechnology.\n\nThe primary objective of a technical writer is to simplify complicated information and present it in a clear and concise manner. The duties of a technical writer may include creating how-to guides, instruction manuals, FAQ pages, journal articles, and other technical content that can aid the user's understanding.\n\nTechnical writing prioritizes clear and consistent communication, using straightforward language and maintaining a uniform writing style to prevent confusion. Technical writers often integrate visual aids and leverage documentation tools to achieve these objectives.\n\nThe ultimate goal is to enable the user to understand and navigate a new product or concept without difficulty.",
- "links": []
+ "description": "Technical writing involves explaining complex concepts in a simple, easy-to-understand language to a specific audience. This form of writing is commonly utilized in fields such as engineering, computer hardware and software, finance, consumer electronics, and biotechnology.\n\nThe primary objective of a technical writer is to simplify complicated information and present it in a clear and concise manner. The duties of a technical writer may include creating how-to guides, instruction manuals, FAQ pages, journal articles, and other technical content that can aid the user's understanding.\n\nTechnical writing prioritizes clear and consistent communication, using straightforward language and maintaining a uniform writing style to prevent confusion. Technical writers often integrate visual aids and leverage documentation tools to achieve these objectives.\n\nThe ultimate goal is to enable the user to understand and navigate a new product or concept without difficulty.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Indeed: What Is Technical Writing?",
+ "url": "https://www.indeed.com/career-advice/career-development/technical-writing",
+ "type": "article"
+ },
+ {
+ "title": "TechWhirl: What Is Technical Writing?",
+ "url": "https://techwhirl.com/what-is-technical-writing/",
+ "type": "article"
+ },
+ {
+ "title": "Amruta Ranade: What do Technical Writers do?",
+ "url": "https://www.youtube.com/watch?v=biocrCx5T_k",
+ "type": "video"
+ },
+ {
+ "title": "Technical Writer HQ: What is Technical Writing?",
+ "url": "https://www.youtube.com/watch?v=KEI5JzBp2Io",
+ "type": "video"
+ }
+ ]
},
"j69erqfosSZMDlmKcnnn0": {
"title": "Role of Technical Writers inOrganizations",
diff --git a/public/roadmap-content/terraform.json b/public/roadmap-content/terraform.json
index 7fa268a74..03055cd17 100644
--- a/public/roadmap-content/terraform.json
+++ b/public/roadmap-content/terraform.json
@@ -265,8 +265,19 @@
},
"76kp98rvph_8UOXZR-PBC": {
"title": "Resource Lifecycle",
- "description": "",
- "links": []
+      "description": "Each Terraform resource is subject to the lifecycle: Create, Update or Recreate, Destroy. When executing `terraform apply`, each resource:\n\n* which exists in configuration but not in state is created\n* which exists in configuration and state and has changed is updated\n* which exists in configuration and state and has changed, but cannot be updated in place due to API limitations, is destroyed and recreated\n* which exists in state, but not (anymore) in configuration is destroyed\n\nThe lifecycle behavior can be modified to some extent using the `lifecycle` meta-argument.\n\nLearn more from the following resources:",
+ "links": [
+ {
+ "title": "How Terraform Applies a Configuration",
+ "url": "https://developer.hashicorp.com/terraform/language/resources/behavior#how-terraform-applies-a-configuration",
+ "type": "article"
+ },
+ {
+ "title": "The lifecycle Meta-Argument",
+ "url": "https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle",
+ "type": "article"
+ }
+ ]
},
"EIsex6gNHDRYHn0o2spzi": {
"title": "depends_on",
@@ -495,7 +506,7 @@
"type": "article"
},
{
- "title": "@Article@Terraform Locals",
+ "title": "Terraform Locals",
"url": "https://spacelift.io/blog/terraform-locals",
"type": "article"
}
@@ -550,7 +561,7 @@
},
"8giL6H5944M2L0rwxjPso": {
"title": "Sensitive Outputs",
- "description": "Terraform sensitive outputs are a feature used to protect sensitive information in Terraform configurations. When an output is marked as sensitive, Terraform obscures its value in the console output and state files, displaying it as \"\" instead of the actual value. This is crucial for protecting sensitive data like passwords or API keys.\n\nTo mark an output as sensitive, use the sensitive argument in the output block:\n\n output \"database_password\" {\n value = aws_db_instance.example.password\n sensitive = true\n }\n \n\nSensitive outputs are still accessible programmatically, but their values are hidden in logs and the console to prevent accidental exposure. This feature helps maintain security when sharing Terraform configurations or outputs with team members or in CI/CD pipelines.\n\nLearn more from the following resources:",
+      "description": "Terraform sensitive outputs are a feature used to protect sensitive information in Terraform configurations. When an output is marked as sensitive, Terraform obscures its value in the console output, displaying it as `<sensitive>` instead of the actual value. This is crucial for protecting sensitive data like passwords or API keys.\n\nTo mark an output as sensitive, use the sensitive argument in the output block:\n\n    output \"database_password\" {\n      value     = aws_db_instance.example.password\n      sensitive = true\n    }\n    \n\nSensitive outputs are still accessible programmatically and are written to the state in clear text, but their values are hidden in logs and the console to prevent accidental exposure. This feature helps maintain security when sharing Terraform configurations or outputs with team members or in CI/CD pipelines.\n\nLearn more from the following resources:",
"links": [
{
"title": "How to output sensitive data in Terraform",
@@ -679,6 +690,11 @@
"url": "https://developer.hashicorp.com/terraform/tutorials/cli/plan",
"type": "course"
},
+ {
+ "title": "Terraform Plan Documentation",
+ "url": "https://developer.hashicorp.com/terraform/cli/commands/plan",
+ "type": "article"
+ },
{
"title": "Terraform plan command and how it works",
"url": "https://spacelift.io/blog/terraform-plan",
@@ -702,7 +718,7 @@
},
{
"title": "Terraform Apply Documentation",
- "url": "https://developer.hashicorp.com/terraform/cli/commands/plan",
+ "url": "https://developer.hashicorp.com/terraform/cli/commands/apply",
"type": "article"
},
{
@@ -1091,7 +1107,7 @@
"description": "Creating local modules in Terraform involves organizing a set of related resources into a reusable package within your project. To create a local module, you typically create a new directory within your project structure and place Terraform configuration files (`.tf`) inside it. These files define the resources, variables, and outputs for the module. The module can then be called from your root configuration using a module block, specifying the local path to the module directory. Local modules are useful for encapsulating and reusing common infrastructure patterns within a project, improving code organization and maintainability. They can accept input variables for customization and provide outputs for use in the calling configuration. Local modules are particularly beneficial for breaking down complex infrastructures into manageable, logical components and for standardizing resource configurations across a project.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Build and use a local moduke",
+ "title": "Build and use a local module",
"url": "https://developer.hashicorp.com/terraform/tutorials/modules/module-create",
"type": "article"
},
@@ -1299,7 +1315,7 @@
},
"wSh7bbPswcFAzOicX8VPx": {
"title": "state pull / push",
- "description": "The `terraform state pull` and `terraform state push` commands are used for managing Terraform state in remote backends. The `pull` command retrieves the current state from the configured backend and outputs it to stdout, allowing for inspection or backup of the remote state. It's useful for debugging or for performing manual state manipulations.\n\nThe`push` command does the opposite, uploading a local state file to the configured backend, overwriting the existing remote state. This is typically used to restore a backup or to manually reconcile state discrepancies. Both commands should be used with caution, especially push, as they can potentially overwrite important state information.\n\nLearn more from the following resources:",
+ "description": "The `terraform state pull` and `terraform state push` commands are used for managing Terraform state in remote backends. The `pull` command retrieves the current state from the configured backend and outputs it to stdout, allowing for inspection or backup of the remote state. It's useful for debugging or for performing manual state manipulations.\n\nThe `push` command does the opposite, uploading a local state file to the configured backend, overwriting the existing remote state. This is typically used to restore a backup or to manually reconcile state discrepancies. Both commands should be used with caution, especially push, as they can potentially overwrite important state information.\n\nLearn more from the following resources:",
"links": [
{
"title": "Command - State pull",
diff --git a/public/roadmap-content/typescript.json b/public/roadmap-content/typescript.json
index 20d78a321..c0c7f4531 100644
--- a/public/roadmap-content/typescript.json
+++ b/public/roadmap-content/typescript.json
@@ -685,7 +685,7 @@
},
"oxzcYXxy2I7GI7nbvFYVa": {
"title": "Constructor Overloading",
- "description": "In TypeScript, you can achieve constructor overloading by using multiple constructor definitions with different parameter lists in a single class. Given below is the example where we have multiple definitions for the constructor:\n\n class Point {\n // Overloads\n constructor(x: number, y: string);\n constructor(s: string);\n constructor(xs: any, y?: any) {\n // TBD\n }\n }\n \n\nNote that, similar to function overloading, we only have one implementation of the consructor and it's the only the signature that is overloaded.\n\nLearn more from the following resources:",
+      "description": "In TypeScript, you can achieve constructor overloading by using multiple constructor definitions with different parameter lists in a single class. Given below is an example where we have multiple definitions for the constructor:\n\n    class Point {\n      // Overloads\n      constructor(x: number, y: string);\n      constructor(s: string);\n      constructor(xs: any, y?: any) {\n        // TBD\n      }\n    }\n    \n\nNote that, similar to function overloading, we only have one implementation of the constructor and it's only the signature that is overloaded.\n\nLearn more from the following resources:",
"links": [
{
"title": "Constructors - TypeScript",
@@ -1029,7 +1029,7 @@
"links": [
{
"title": "Ambient Modules",
- "url": "https://www.typescriptlang.org/docs/handbook/modules.html#ambient-modules",
+ "url": "https://www.typescriptlang.org/docs/handbook/modules/reference.html#ambient-modules",
"type": "article"
}
]
@@ -1079,7 +1079,7 @@
},
"fU8Vnw1DobM4iXl1Tq6EK": {
"title": "Formatting",
- "description": "Prettier is an opinionated code formatter with support for JavaScript, HTML, CSS, YAML, Markdown, GraphQL Schemas. By far the biggest reason for adopting Prettier is to stop all the on-going debates over styles.\n\nVisit the following resources to learn more:",
+      "description": "Prettier is an opinionated code formatter with support for JavaScript, HTML, CSS, YAML, Markdown, GraphQL Schemas. By far the biggest reason for adopting Prettier is to stop all the ongoing debates over styles. Biome is a faster alternative to Prettier that also performs linting.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Prettier Website",
@@ -1090,6 +1090,11 @@
"title": "Why Prettier",
"url": "https://prettier.io/docs/en/why-prettier.html",
"type": "article"
+ },
+ {
+ "title": "BiomeJS Website",
+ "url": "https://biomejs.dev",
+ "type": "article"
}
]
},
diff --git a/public/roadmap-content/ux-design.json b/public/roadmap-content/ux-design.json
index 30824b4c4..f57a81c64 100644
--- a/public/roadmap-content/ux-design.json
+++ b/public/roadmap-content/ux-design.json
@@ -38,27 +38,55 @@
},
"zYCBEUqZVlvjlAKnh5cPQ": {
"title": "Behavior Design",
- "description": "Behavior Design is an approach that combines elements of psychology, neuroscience, and design principles to understand and influence human behaviors. The goal behind behavior design is to make it easier for users to accomplish their goals or desired actions within a product, service, or system.\n\nIn the context of UX Design, behavior design focuses on:\n\n* **Motivation**: Understanding what motivates users to take action, such as personal interests, external rewards, or social influence.\n \n* **Ability**: Ensuring that users have the necessary skills, time, and resources to complete a desired action.\n \n* **Triggers**: Implementing well-timed prompts that encourage users to take a specific action within the interface.\n \n\nTo create effective behavior designs, UX designers should:\n\n* Identify user goals and desired outcomes.\n* Analyze the user's environment and potential barriers that may affect their ability to complete the desired action.\n* Design solutions that address both the motivation and ability aspects of behavior change, as well as the appropriate triggers to prompt action.\n* Continuously test and iterate on the design to better understand user behavior and optimize engagement.\n\nBy focusing on behavior design, UX designers can create more engaging and user-friendly experiences that ultimately drive user satisfaction and increase the chances of achieving their desired goals.",
- "links": []
+ "description": "Behavior Design is an approach that combines elements of psychology, neuroscience, and design principles to understand and influence human behaviors. The goal behind behavior design is to make it easier for users to accomplish their goals or desired actions within a product, service, or system.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Behavior Design",
+ "url": "https://www.interaction-design.org/literature/topics/behavioral-design",
+ "type": "article"
+ }
+ ]
},
"D553-nVELaB5gdxtoKSVc": {
"title": "Behavioral Science",
- "description": "Behavioral science is the interdisciplinary study of human behavior, which encompasses disciplines like psychology, sociology, and anthropology. This field- primarily focuses on understanding what impacts our decisions, actions, and emotions. In the context of UX design, applying behavioral science concepts and principles can enhance user experience by improving user engagement, usability, and overall satisfaction.\n\nSome key principles of behavioral science that UX designers should consider include:\n\n* **Cognitive biases:** These are mental shortcuts our brains take when processing information which can lead to irrational decisions or judgments. Designers can use these biases to guide user behavior, as seen in the 'anchoring effect,' where users rely on the first piece of information provided on a page.\n \n* **Loss aversion:** People tend to prioritize avoiding losses over acquiring gains. Designers can use this to their advantage by highlighting potential losses that could occur without using a specific feature or product, increasing user motivation.\n \n* **Social proof:** People look to others for cues about how to behave in uncertain situations. To leverage this effect, designers can include testimonials, ratings, and user-generated content to demonstrate that others have found value in their product or service.\n \n* **Incentivization:** Users may be more likely to engage with a product if there are rewards or incentives for completing certain tasks. Gamifying an experience or offering exclusive benefits can encourage users to engage more deeply with the product.\n \n* **Choice architecture:** The way choices are presented influences users' decisions. Designers can use this to guide users to desired outcomes or simplify decision-making by reducing the number of options presented.\n \n* **Habit formation:** Creating a habit-forming experience can lead to increased user retention and engagement. Designers should consider features and elements that reinforce routine usage or solve recurring pain-points.\n \n\nBy integrating behavioral science principles into their design process, UX designers can better understand and anticipate users' needs, ultimately creating more enjoyable, effective, and engaging experiences.",
- "links": []
+ "description": "Behavioral science is the interdisciplinary study of human behavior, which encompasses disciplines like psychology, sociology, and anthropology. This field- primarily focuses on understanding what impacts our decisions, actions, and emotions. In the context of UX design, applying behavioral science concepts and principles can enhance user experience by improving user engagement, usability, and overall satisfaction.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Behavioral Science",
+ "url": "https://uxplanet.org/how-to-use-behavioral-science-to-influence-user-behavior-in-design-581dc0805f7c",
+ "type": "article"
+ },
+ {
+ "title": "Future of Behavioral Science",
+ "url": "https://behavioralscientist.org/what-is-the-future-of-design-and-behavioral-science-a-conversation-with-cliff-kuang/",
+ "type": "article"
+ }
+ ]
},
"_lv6GJ0wlMfhJ7PHRGQ_V": {
"title": "Behavioral Economics",
- "description": "Behavioral Economics is a subfield of economics that studies the psychological, social, and emotional factors that influence decision-making and economic behavior. It seeks to understand why people make choices that deviate from the traditional economic model, which assumes that individuals behave rationally and seek to maximize their utility.\n\nThe key concepts of Behavioral Economics include:\n\n* Bounded Rationality: People make decisions based on limited information, cognitive constraints, and personal biases.\n \n* Prospect Theory: Individuals perceive losses and gains asymmetrically, feeling greater pain from a loss than satisfaction from an equivalent gain.\n \n* Anchoring: People tend to rely on a reference point (the anchor) when assessing the value of an unknown option, which can lead to arbitrary or irrational decisions.\n \n* Mental Accounting: Individuals mentally categorize and allocate expenses differently, which can lead to biases like the sunk cost fallacy or the endowment effect.\n \n* Nudging: Subtle changes to choice architecture can influence people's decisions without restricting their freedom of choice, through methods like default options, framing, or social proof.\n \n\nUnderstanding and applying behavioral economic principles can help UX designers create interfaces and experiences that account for these biases and help users make better choices. By designing to minimize cognitive load, supporting decision-making, and presenting options effectively, UX designers can enhance user satisfaction and encourage desired actions.",
- "links": []
+ "description": "Behavioral Economics is a subfield of economics that studies the psychological, social, and emotional factors that influence decision-making and economic behavior. It seeks to understand why people make choices that deviate from the traditional economic model, which assumes that individuals behave rationally and seek to maximize their utility.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Behavioral Economics",
+ "url": "https://www.interaction-design.org/literature/article/behavioural-economics-ideas-that-you-can-use-in-ux-design",
+ "type": "article"
+ }
+ ]
},
"2NlgbLeLBYwZX2u2rKkIO": {
"title": "BJ Fogg's Behavior Model",
- "description": "B.J. Fogg, a renowned psychologist, and researcher at Stanford University, proposed the [Fogg Behavior Model (FBM)](https://www.behaviormodel.org/). This insightful model helps UX designers understand and influence user behavior by focusing on three core elements. These key factors are motivation, ability, and prompts.\n\n* **Motivation**: This element emphasizes the user's desire to perform a certain action or attain specific outcomes. Motivation can be linked to three core elements specified as sensation (pleasure/pain), anticipation (hope/fear), and social cohesion (belonging/rejection).\n \n* **Ability**: Ability refers to the user's capacity, both physical and mental, to perform desired actions. To enhance the ability of users, UX designers should follow the principle of simplicity. The easier it is to perform an action, the more likely users will engage with the product. Some factors to consider are time, financial resources, physical efforts, and cognitive load.\n \n* **Prompts**: Prompts are the cues, notifications, or triggers that signal users to take an action. For an action to occur, prompts should be presented at the right time when the user has adequate motivation and ability.\n \n\nUX designers should strive to find the balance between these three factors to facilitate the desired user behavior. By understanding your audience and their needs, implementing clear and concise prompts, and minimizing the effort required for action, the FBM can be an effective tool for designing user-centered products.",
+ "description": "B.J. Fogg, a renowned psychologist, and researcher at Stanford University, proposed the Fogg Behavior Model (FBM). This insightful model helps UX designers understand and influence user behavior by focusing on three core elements. These key factors are motivation, ability, and prompts.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "meaning of BJ fogg's behavior model",
+ "title": "Meaning of BJ Fogg's Behavior Model",
"url": "https://behaviormodel.org/",
"type": "article"
+ },
+ {
+ "title": "The Fogg Behavior Model",
+ "url": "https://blog.logrocket.com/ux-design/fogg-behavior-model/",
+ "type": "article"
}
]
},
@@ -85,8 +113,19 @@
},
"lRBC8VYJPsR65LHDuuIsL": {
"title": "BJ Fogg's Behavior Grid",
- "description": "The BJ Fogg Behavior Grid is a framework that helps UX designers, product managers, and marketers understand and identify different types of behavior change. Created by Stanford University professor B.J. Fogg, the grid consists of 15 behavior types based on the combination of three dimensions: Duration, Frequency, and Intensity.\n\nDuration\n--------\n\n* **One-time behaviors**: These are behaviors that happen only once (e.g., signing up for an account).\n* **Short-term behaviors**: Behaviors that take place for a limited period of time (e.g., using a trial version of a product).\n* **Long-term behaviors**: Behaviors that are ongoing or happen repeatedly over a considerable time (e.g., continued use of a product).\n\nFrequency\n---------\n\n* **Single-instance behaviors**: Behaviors that occur only one time per occasion (e.g., entering a password once to log in)\n* **Infrequent behaviors**: Behaviors that do not happen regularly or happen sporadically (e.g., posting on social media once a week)\n* **Frequent behaviors**: Behaviors that happen on a consistent and regular basis (e.g., checking email multiple times a day)\n\nIntensity\n---------\n\n* **Low-stakes behaviors**: Behaviors that have little impact or are considered less important (e.g., choosing a profile picture)\n* **Medium-stakes behaviors**: Behaviors that have moderate importance or impact (e.g., deciding how much personal information to share)\n* **High-stakes behaviors**: Behaviors that have significant impact on the user's experience or perception of the product (e.g., making a purchase or canceling a subscription)\n\nUsing this grid, designers can classify user behaviors into different types and tailor their UX design strategies to target the specific behavior they want to encourage, change, or eliminate. Additionally, the Behavior Grid can be used to analyze and understand user motivations, triggers, and barriers, enabling designers to create more effective behavior change interventions.",
- "links": []
+ "description": "The BJ Fogg Behavior Grid is a framework that helps UX designers, product managers, and marketers understand and identify different types of behavior change. Created by Stanford University professor B.J. Fogg, the grid consists of 15 behavior types based on the combination of three dimensions: Duration, Frequency, and Intensity.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "BJ Fogg’s Behavior Grid",
+ "url": "https://behaviordesign.stanford.edu/resources/fogg-behavior-grid",
+ "type": "article"
+ },
+ {
+ "title": "The Fogg Behavior Model",
+ "url": "https://blog.logrocket.com/ux-design/fogg-behavior-model/",
+ "type": "article"
+ }
+ ]
},
"PLLTcrHkhd1KYaMSRKALp": {
"title": "Nir Eyal's Hook Model",
@@ -136,8 +175,14 @@
},
"ZufrLRNkMoJ4e2T-vWxCR": {
"title": "Automate the Act of Repetition",
- "description": "As a UX designer, one of your main goals is to simplify and streamline user interactions. Often, users are required to perform repetitive tasks, which can lead to frustration and decrease efficiency. To enhance the user experience and ensure smooth sailing, it's essential to reduce or eliminate the need for repeated actions by automating repetitive tasks wherever possible.\n\nBenefits of Automation\n----------------------\n\nWhen properly implemented, automation can:\n\n* Save time: By cutting down on repeated actions, users can complete tasks more quickly, increasing productivity.\n* Reduce errors: Automating tasks can minimize human error and ensure that actions are completed correctly every time.\n* Improve user satisfaction: Reducing tedious tasks can lead to a more positive user experience and increase user retention.\n\nStrategies for Automation\n-------------------------\n\nAs a UX designer, consider the following strategies to automate repetitive tasks:\n\n* **Pre-fill forms**: Auto-fill form fields with the information that the user has entered previously or is likely to enter, such as their name, email address, or phone number. This can save users time and effort in filling out forms.\n* **Remember user preferences**: Store user settings and preferences, such as preferred language, currency, or theme, so that users don't have to set them again every time they visit your site or app.\n* **Smart suggestions**: Implement predictive text or auto-suggestions based on user input or past behavior. For example, when typing search queries or filling out forms, users may appreciate suggestions to help them complete their task quickly.\n* **Batch actions**: Allow users to perform actions, like selecting or deleting items, in groups rather than individually. This can significantly reduce the number of clicks and time required to complete the task.\n* **Keyboard shortcuts**: Provide keyboard shortcuts for common actions, enabling users to perform tasks without using a mouse or touch interactions. This can be particularly helpful for power users or users with accessibility needs.\n\nBy automating acts of repetition in your design, you can enhance the user experience, reduce frustration and improve overall satisfaction. Be mindful of your users' needs, analyze the repetitive tasks they may encounter, and implement effective automation techniques to create a seamless, efficient, and enjoyable experience.",
- "links": []
+ "description": "To enhance user experience and streamline interactions, it's crucial to automate repetitive tasks that often lead to frustration and decreased efficiency. Properly implemented automation can save time, reduce errors, and improve user satisfaction by minimizing tedious actions. As a UX designer, consider strategies such as pre-filling forms with previously entered information, remembering user preferences, providing smart suggestions based on past behavior, enabling batch actions for group tasks, and offering keyboard shortcuts for common actions. By focusing on these automation techniques, you can create a seamless and enjoyable experience that meets users' needs and increases retention.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Repeating Elements",
+ "url": "https://helpx.adobe.com/au/xd/help/create-repeating-elements.html",
+ "type": "article"
+ }
+ ]
},
"y6CqgqTvOt-LrvTnPJkQQ": {
"title": "Make or Change Habits",
@@ -171,8 +216,14 @@
},
"w_QWN80zCf1tsVROeyuvo": {
"title": "Behavior Change Strategies",
- "description": "Behavior change strategies are techniques that aim to help users adopt new behaviors or break existing ones to achieve specific goals, such as healthier lifestyles or improved productivity. In UX design, these strategies are applied to design elements and features within digital products or services to motivate and support users in making lasting changes in their actions.\n\nHere are some key behavior change strategies often employed in UX design:\n\n* **Goal Setting:** Asking users to set specific, measurable, achievable, relevant, and time-bound (SMART) goals can help them focus their efforts and track their progress.\n \n* **Feedback and Rewards:** Providing users with real-time feedback on their progress and rewarding them with positive reinforcement (e.g., badges or points) can increase motivation and engagement.\n \n* **Social Comparisons:** Facilitating comparisons between users or groups can tap into social influence and normative pressure, encouraging behavior change through competition or collaboration.\n \n* **Reminders and Prompts:** Sending timely reminders or prompts can help reinforce desired behaviors by making them more salient and top of mind.\n \n* **Choice Architecture:** Structuring the presentation of options, defaults, and information can nudge users towards better decisions without restricting their freedom of choice.\n \n* **Modeling and Stories:** Demonstrating desired behaviors through role models, cases, testimonials or stories can provide inspiration and social proof that change is possible and desirable.\n \n* **Progressive Disclosure:** Gradually introducing advanced features, content or challenges can help users build their skills and confidence, preventing them from feeling overwhelmed or disengaged.\n \n* **Personalization and Tailoring:** Customizing content or recommendations based on a user's preferences, history or characteristics can make interventions more relevant and effective.\n \n\nBy incorporating these behavior change strategies in your UX design, you improve the chances of users successfully adopting the desired behaviors, which can ultimately lead to a more positive and effective user experience.",
- "links": []
+ "description": "Behavior change strategies are techniques that aim to help users adopt new behaviors or break existing ones to achieve specific goals, such as healthier lifestyles or improved productivity. In UX design, these strategies are applied to design elements and features within digital products or services to motivate and support users in making lasting changes in their actions.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Behavioral Change Strategies",
+ "url": "https://blog.logrocket.com/ux-design/starter-guide-behavioral-design/",
+ "type": "article"
+ }
+ ]
},
"q1WX2Cp4k4-o1T1vgL8FH": {
"title": "Understanding the Product",
@@ -201,8 +252,14 @@
},
"SGO9hHju49_py0n0ASGBe": {
"title": "Business Model Canvas",
- "description": "The **Business Model Canvas** is a strategic management and visual representation tool that allows you to describe, design, challenge, and pivot your existing business model. Developed by Alexander Osterwalder and Yves Pigneur, it helps organizations to understand how they create, deliver, and capture value. The canvas is divided into nine building blocks, which represent the essential elements of a business model:\n\n* **Customer Segments (CS):** These are the target groups your business aims to serve, such as specific users, organizations, or market segments.\n* **Value Propositions (VP):** The unique combinations of products and services that create value for your customer segments. It describes the reasons why customers choose your product or service over your competitors'.\n* **Channels (CH):** The means by which your company communicates, delivers, and distributes its value propositions to the customers. This block includes both physical (e.g., stores) and virtual (e.g., online) channels.\n* **Customer Relationships (CR):** The type of relationships your business establishes and maintains with its customer segments, such as personal assistance, self-service, or automated services.\n* **Revenue Streams (RS):** The ways in which your company generates revenue from each customer segment, such as through sales, subscriptions, or advertising fees.\n* **Key Resources (KR):** The most important assets needed to make your business model work, including physical, financial, intellectual, and human resources.\n* **Key Activities (KA):** The primary actions your company must perform to deliver its value propositions, reach its customer segments, and maintain customer relationships. These can involve production, problem-solving, or service provision.\n* **Key Partnerships (KP):** The network of suppliers, partners, and allies that help your business execute its key activities, optimize resources, and reduce risks.\n* **Cost Structure (CS):** The major expenses associated with operating your business model, such as fixed and variable costs, economies of scale, and cost advantages.\n\nWhen designing or analyzing an existing business model, the Business Model Canvas enables you to visually map out all these critical components and understand how they are interconnected. By understanding your current business model, you can identify weaknesses, opportunities for improvement, and potential pivots to enhance the overall user experience and the success of the business.",
- "links": []
+ "description": "The **Business Model Canvas** is a strategic management and visual representation tool that allows you to describe, design, challenge, and pivot your existing business model. Developed by **Alexander Osterwalder** and **Yves Pigneur**, it helps organizations to understand how they create, deliver, and capture value. The canvas is divided into nine building blocks, which represent the essential elements of a business model:\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Business Model Canvas",
+ "url": "https://www.interaction-design.org/literature/topics/business-model-canvas",
+ "type": "article"
+ }
+ ]
},
"sc8jJ_77CrkQuxIJYk28Q": {
"title": "Lean Canvas",
@@ -211,8 +268,14 @@
},
"GI06-DbGyJlQXq5Tyi-aH": {
"title": "Business Model Inspirator",
- "description": "A Business Model Inspirator is a tool or method that helps you to generate new or creative ideas for the strategic, operational, and financial aspects of a business. It helps entrepreneurs, startups, and established companies to explore different ways of designing or improving their business models by drawing inspiration from various sources.\n\nSome key aspects of Business Model Inspirators include:\n\n* **Analyze Successful Models**: Look at successful companies from diverse industries to identify the core elements that made their business models successful. Understanding these elements can spark ideas for your own business model.\n \n* **Cross-Pollination**: Combine elements from various industries and business models to create an innovative approach that suits your specific domain. This process can lead to the development of a unique value proposition and competitive advantage.\n \n* **Experimentation**: Test different ideas to find the most feasible and scalable business model by iteratively prototyping, validating, and refining the model based on user/client feedback.\n \n* **Futuristic Thinking**: Stay aware of emerging trends, technologies, and structural changes in society that might affect your industry or target market. Use foresight to adapt your business model to future opportunities and challenges.\n \n* **Adaptability**: Be ready to pivot or evolve your business model based on changing market dynamics, user preferences, competitive forces, and other external factors. Developing a flexible business model is crucial to ensure long-term success and sustainability.\n \n\nImplementing a Business Model Inspirator can contribute to the creation of a more innovative and robust UX design, ultimately leading to enhanced customer experiences, increased revenue, and long-term success for your brand.",
- "links": []
+ "description": "A Business Model Inspirator is a tool or method that helps you to generate new or creative ideas for the strategic, operational, and financial aspects of a business. It helps entrepreneurs, startups, and established companies to explore different ways of designing or improving their business models by drawing inspiration from various sources.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Business Model Inspirator",
+ "url": "https://businessdesign.org/knowledge-base/business-model-inspirator",
+ "type": "article"
+ }
+ ]
},
"HUZ5n2MRHzQPyjwX2h6Q4": {
"title": "Competitor Analysis",
@@ -281,8 +344,19 @@
},
"jy5jtSEyNE8iJpad27rPX": {
"title": "Business Process Model & Notation (BPMN)",
- "description": "Business Process Model and Notation (BPMN) is a graphical representation of business processes, providing a standardized and easy-to-understand method for visualizing different aspects of a business. By using BPMN, UX designers can analyze and optimize business processes and workflows, which ultimately improves the overall user experience.\n\nKey Components of BPMN\n----------------------\n\n* Flow Objects: Main building blocks of a BPMN diagram, which include events, activities, and gateways.\n* Connecting Objects: Linking elements between flow objects, such as sequence flows, message flows, and associations.\n* Swimlanes: Visual elements that help organize activities based on roles or responsibilities.\n* Artifacts: Supplementary elements providing additional information, such as data objects, groupings, and annotations.\n\nBenefits of BPMN for UX Design\n------------------------------\n\n* **Visualization**: BPMN offers a clear visual layout of business processes, allowing UX designers to understand the overall structure easily.\n* **Standardization**: As an internationally recognized standard, BPMN ensures consistent interpretation and communication among team members.\n* **Flexibility**: BPMN can accommodate various levels of complexity, enabling designers to model simple or complex processes as needed.\n* **Collaboration**: By bridging the gap between technical and non-technical stakeholders, BPMN empowers cross-functional collaboration throughout the design process.\n\nTo incorporate BPMN in your UX design process, you'll need to familiarize yourself with its various elements and syntax. Consider leveraging BPMN tools and resources to create diagrams that accurately represent your target user's needs and the corresponding business processes. By doing so, you'll be able to craft a more precise and effective user experience.",
- "links": []
+ "description": "Business Process Model and Notation (BPMN) is a graphical representation of business processes, providing a standardized and easy-to-understand method for visualizing different aspects of a business. By using BPMN, UX designers can analyze and optimize business processes and workflows, which ultimately improves the overall user experience.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Business Process Model and Notation (BPMN)",
+ "url": "https://aguayo.co/en/blog-aguayp-user-experience/business-process-model-notation-for-ux/",
+ "type": "article"
+ },
+ {
+ "title": "How to Design BPNM",
+ "url": "https://devlight.io/blog/how-to-design-business-process-model-and-notation-for-a-mobile-app/",
+ "type": "article"
+ }
+ ]
},
"6yCBFwntQ_KxFmmGTJ8iR": {
"title": "Prototyping",
@@ -301,8 +375,24 @@
},
"HI_urBhPqT0m3AeBQJIej": {
"title": "Adobe XD",
- "description": "Adobe XD (Experience Design) is a powerful design and prototyping tool that allows UX designers to create wireframes, mockups, and interactive prototypes for various digital projects. It is available for both Mac and Windows, and it focuses on providing an easy-to-use, intuitive interface for designing responsive websites, mobile apps, and more.\n\nKey Features of Adobe XD\n------------------------\n\n* **Design tools**: Adobe XD offers a set of powerful design tools, such as vector drawing, the ability to import images, and a range of pre-defined UI components to help you create aesthetically pleasing designs. The built-in grid system allows for precise alignment and consistency across your designs.\n \n* **Responsive artboards**: XD allows you to create multiple artboards for different devices and screen sizes. This enables you to visualize and design in one go, for multiple device types.\n \n* **Prototype and Interactions**: With Adobe XD, you can easily add interactions to your designs. This helps in better communication of your ideas and makes it easier for clients and developers to understand your vision. The preview mode enables you to test your prototype and see the interactions in real-time.\n \n* **Collaboration and Sharing**: Adobe XD simplifies collaboration between team members, stakeholders, and developers. You can create shared design specs and live URLs for your prototypes, gather feedback, and even co-edit documents with other designers in real-time.\n \n* **Integrations**: XD seamlessly integrates with other Adobe Creative Cloud applications, such as Photoshop, Illustrator, and After Effects, enabling smoother workflows and consistency across your designs. It also supports third-party plugins to expand its capabilities.\n \n\nTo get started with Adobe XD, you'll need to download and install the application from the [Adobe Creative Cloud website](https://www.adobe.com/products/xd.html). Adobe offers a free basic plan for XD, which allows you to work on one shared document at a time and a limited number of shared prototypes and design specs.\n\nAs a designer, familiarizing yourself with Adobe XD's features and learning how to effectively use it can significantly improve your design process, making your wireframing and prototyping tasks quicker and more efficient.",
- "links": []
+ "description": "Adobe XD (Experience Design) is a powerful design and prototyping tool that allows UX designers to create wireframes, mockups, and interactive prototypes for various digital projects. It is available for both Mac and Windows, and it focuses on providing an easy-to-use, intuitive interface for designing responsive websites, mobile apps, and more.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Adobe XD Platform",
+ "url": "https://adobexdplatform.com/",
+ "type": "article"
+ },
+ {
+ "title": "Getting Started with Adobe XD",
+ "url": "https://helpx.adobe.com/xd/get-started.html",
+ "type": "article"
+ },
+ {
+ "title": "Learn Adobe XD",
+ "url": "https://www.adobe.com/ph_en/products/xd/learn/get-started-xd-design.html",
+ "type": "article"
+ }
+ ]
},
"nb7Ql1gvxqEucsGnIWTyY": {
"title": "Sketch",
@@ -311,8 +401,14 @@
},
"fZkARg6kPXPemYW1vDMTe": {
"title": "Balsamiq",
- "description": "Balsamiq is a popular wireframing tool that helps designers, developers, and product managers to quickly create and visualize user interfaces, web pages, or app screens. It's an easy-to-use software that allows you to focus on ideas and concepts rather than getting caught up in pixel-perfect designs.\n\n**Key Features of Balsamiq**",
- "links": []
+ "description": "Balsamiq is a popular wireframing tool that helps designers, developers, and product managers to quickly create and visualize user interfaces, web pages, or app screens. It's an easy-to-use software that allows you to focus on ideas and concepts rather than getting caught up in pixel-perfect designs.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Balsamiq Website",
+ "url": "https://balsamiq.com/",
+ "type": "article"
+ }
+ ]
},
"U4ZEFUcghr9XjSyf-0Np7": {
"title": "Call to Action",
@@ -341,8 +437,14 @@
},
"JSBiw0C6aq1LhA33y79PM": {
"title": "Behavior Change Games",
- "description": "Behavior change games are a powerful UX design pattern that help users adopt new habits or make positive lifestyle changes. These games are typically designed to be engaging, enjoyable, and motivating, utilizing various game elements and mechanics to encourage users to take desired actions.\n\nKey elements of behavior change games\n-------------------------------------\n\n* **Set clear objectives**: Define specific goals users should achieve, such as losing weight or learning a new skill. Well-defined objectives provide a strong focus for the game and encourage user engagement.\n \n* **Feedback and progress**: Provide real-time feedback and track user progress to create a sense of accomplishment. This can include visual cues, points, badges, or leveling up systems.\n \n* **Social interaction**: Utilize social features, such as sharing achievements, comparing results with friends, or team challenges. This enables users to work together, fosters a sense of community, and enhances motivation through friendly competition.\n \n* **Reward system**: Implement a reward system that grants virtual or real rewards for completing tasks or reaching milestones. These rewards can be intrinsic (e.g., personal satisfaction) or extrinsic (e.g., discounts or prizes).\n \n* **Gamification**: Incorporate game-like elements, such as storytelling, quests, or time-limited challenges. These elements add an entertaining aspect, improve user experience, and make the behavior change process more enjoyable.\n \n\nBenefits of behavior change games\n---------------------------------\n\n* **Increased motivation**: By turning the behavior change process into a game, users are often more motivated to participate and stay engaged.\n \n* **Higher user retention**: Engaging games can increase user retention, resulting in higher long-term success rates for behavior change.\n \n* **Measurable results**: These games allow users to easily track progress and outcomes, helping them understand the impact of their actions and reinforcing positive behavior.\n \n* **Personalization**: Games can be tailored to individual users' preferences and play styles, making the experience more enjoyable and relevant.\n \n* **Support network**: The inclusion of social features creates a community of support, forging connections between individuals with similar goals and fostering accountability.\n \n\nWhen designing behavior change games, it's essential to keep user experience in mind, and create an enjoyable and motivating experience. Balancing fun and educational elements can result in a powerful tool for guiding users towards positive change in their lives.",
- "links": []
+ "description": "Behavior change games are a powerful UX design pattern that help users adopt new habits or make positive lifestyle changes. These games are typically designed to be engaging, enjoyable, and motivating, utilizing various game elements and mechanics to encourage users to take desired actions. When designing behavior change games, it's essential to keep user experience in mind, and create an enjoyable and motivating experience. Balancing fun and educational elements can result in a powerful tool for guiding users towards positive change in their lives.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Behavioral Change Games",
+ "url": "https://medium.com/@jgruver/designing-for-behavioral-change-a-new-approach-in-ux-ui-design-59f9fb0086d1",
+ "type": "article"
+ }
+ ]
},
"fbIur1tEIdNDE6gls4Bru": {
"title": "Gamification",
@@ -411,8 +513,14 @@
},
"m30ePaw_qa36m9Rv9NSFf": {
"title": "Be Authentic and Personal",
- "description": "When creating a user experience (UX) design, it's essential to be authentic and personal. This means that your design should be genuine, truthful, and relatable to your users. By being authentic and personal, you can create a positive intuitive reaction in your users, as they feel connected and engaged with your website or application. Here are some tips to make your UX design authentic and personal:\n\n#### 1\\. Understand your user persona(s)\n\nBefore you start designing, define your target audience and create user personas that represent them. This may include their age, gender, occupation, interests, and pain points. By understanding the different personas, you can create a design that resonates with each of them, meeting their needs and expectations.\n\n#### 2\\. Use natural and conversational language\n\nTo make your design personal, use natural and conversational language that speaks directly to your users. Avoid jargons, buzzwords, or overly formal language that can create a barrier between you and your users. Your users should be able to understand the content and interact with it smoothly.\n\n#### 3\\. Employ appropriate imagery and visuals\n\nTo enhance authenticity, incorporate images and graphics that are relevant and relatable to your target audience. This means using high-quality, real-life pictures of people or objects that genuinely represent your brand or product. Avoid overused stock images, as they can significantly decrease the perceived authenticity of your design.\n\n#### 4\\. Make emotional connections\n\nEmotions play a vital role in creating personal connections with users. In your design, use color schemes, fonts, and visual elements that evoke emotions and encourage users to form an emotional attachment to your product or brand. The more emotionally invested users are, the more positive their intuitive reactions will be.\n\n#### 5\\. Consistency in design elements\n\nAn authentic user experience is characterized by consistency in design elements, including typography, colors, and visual hierarchy. This consistency helps users feel reassured and comfortable, as they can easily understand and navigate through the design.\n\n#### 6\\. Provide personalized experiences\n\nTo create an authentic UX design, offer personalized experiences to your users based on their preferences, browsing history, or other data. This might include recommending content they may be interested in or tailoring the website layout to meet their specific needs.\n\nBy being authentic and personal in your UX design, you can create a positive and memorable experience for your users. By understanding your target audience, using natural language, incorporating engaging visuals, and providing personalized experiences, you can foster user engagement, trust, and loyalty towards your product or brand.",
- "links": []
+ "description": "When creating a user experience (UX) design, it's essential to be authentic and personal. This means that your design should be genuine, truthful, and relatable to your users. By being authentic and personal, you can create a positive intuitive reaction in your users, as they feel connected and engaged with your website or application.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Rethinking Personas",
+ "url": "https://uxdesign.cc/rethinking-personas-empathy-and-inclusion-in-ux-design-37145d2ee807",
+ "type": "article"
+ }
+ ]
},
"jBQtuiHGl3eyCTZG85Vz5": {
"title": "Prime User-Relevant Associations",
@@ -436,18 +544,41 @@
},
"4AzPOKXUN32CkgchRMrRY": {
"title": "Avoid Cognitive Overhead",
- "description": "Cognitive overhead refers to the mental effort needed to understand or operate a given system, tool, or interface. In UX design, it is crucial to minimize cognitive overhead to create user-friendly and efficient experiences. The less mental effort a user needs to invest, the more likely they will have a positive conscious evaluation of your design. Here are three key strategies to help you avoid cognitive overhead in your designs:\n\n#### 1\\. Keep it simple\n\nA clutter-free, clean, and easy-to-navigate design is always a good starting point. In order to keep cognitive overhead to a minimum, focus on simplifying both the interface and the content:\n\n* Utilize white space: By providing ample space between functional elements, you make it easier for users to scan and process the interface.\n* Reduce the number of options: Offering too many choices can overwhelm users or cause them to second-guess their decisions. Aim for a balance of ease and functionality.\n\n#### 2\\. Establish a clear hierarchy\n\nA well-structured hierarchy helps users navigate your design and understand the relationship between elements. This reduces cognitive overhead as users don't have to work hard to make sense of the interface:\n\n* Organize content logically: Group related items together and place them in a consistent order.\n* Use size, color, and typography effectively: Make important information stand out and use visual cues to indicate less important elements.\n\n#### 3\\. Provide clear & concise instructions\n\nYour design should guide users effortlessly, which can be achieved by providing clear directions or prompts:\n\n* Use actionable language: Be precise and direct with your wording, and avoid using jargon.\n* Offer visual cues & feedback: Include well-placed icons, highlighted sections, or animation to support the user's actions and indicate the outcome of those actions.\n\nIn summary, reducing cognitive overhead in your UX design is essential to create an efficient and user-friendly experience. Adopt a simple and clean design, establish a clear hierarchy, and provide helpful instructions to ensure more favorable conscious evaluations from your users.",
- "links": []
+ "description": "Cognitive overhead refers to the mental effort needed to understand or operate a given system, tool, or interface. In UX design, it is crucial to minimize cognitive overhead to create user-friendly and efficient experiences. The less mental effort a user needs to invest, the more likely they will have a positive conscious evaluation of your design.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Cognitive Overload",
+ "url": "https://blog.logrocket.com/ux-design/cognitive-overload/",
+ "type": "article"
+ },
+ {
+ "title": "Reducing Cognitive Overload",
+ "url": "https://uxdesign.cc/reducing-cognitive-overload-designing-for-human-cognition-350f07cff9c4",
+ "type": "article"
+ }
+ ]
},
"8wxlu4KA2iu9CJa1UAUll": {
"title": "Avoid Choice Overload",
- "description": "Choice overload is a phenomenon that occurs when users are presented with too many options, causing decision paralysis, anxiety, and ultimately, dissatisfaction with their final choice. As a UX designer, it's essential to ensure that users can easily make decisions within your designs, so it's important to avoid choice overload. In this section, we'll discuss some strategies for managing the number of options and streamlining decision-making processes for users.\n\nLimit the Number of Options\n---------------------------\n\nResearch has shown that a user's ability to make decisions decreases as the number of options increases. To avoid overwhelming users, aim to present no more than 5-7 options at a time. This can be applied to menus, product listings, or any other area where users are asked to make a selection. Remember to prioritize the most important or commonly used options and make them more prominent within the design.\n\nCategorize and Organize Options\n-------------------------------\n\nWhen users are presented with multiple choices, it's crucial to make it easy for them to understand and differentiate between the available options. By categorizing and organizing options into logical groups, users can more quickly find the information or functionality they need. Consider using headings, icons, or other visual cues to assist in organizing content effectively.\n\nImplement Smart Defaults\n------------------------\n\nTo help users make decisions quicker, consider setting default selections for certain choices. By pre-selecting the most commonly used or recommended option, users can easily accept the default if it aligns with their needs, or quickly change it if necessary. This not only saves time and effort for the user, but it can also guide them towards an optimal outcome based on their needs.\n\nAdvanced Filtering and Sorting Options\n--------------------------------------\n\nIf your design requires users to make complex decisions, such as choosing a product from an extensive catalog, consider implementing advanced filtering and sorting options. By giving users the ability to refine their options based on specific attributes, they can more easily identify the best option for their needs. Make sure these filtering options are easy to understand and use, and provide clear feedback on the number of results remaining as users adjust their filters.\n\nBy being mindful of choice overload and implementing these strategies, you can create a more enjoyable and user-friendly experience for your users. Remember, the goal is to make their decision-making process as seamless and stress-free as possible.",
- "links": []
+ "description": "Choice overload occurs when users face too many options, leading to decision paralysis, anxiety, and dissatisfaction. As a UX designer, it's important to simplify decision-making by limiting the number of options to 5-7 at a time, prioritizing the most relevant choices. Organizing options into logical categories with visual cues can help users navigate their selections more easily. Implementing smart defaults can streamline decisions by pre-selecting commonly used options, while advanced filtering and sorting features allow users to refine their choices in complex scenarios. By addressing choice overload with these strategies, you can enhance user experience and facilitate a more seamless decision-making process.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Choice of Overload",
+ "url": "https://medium.com/@evamiller091/the-impact-of-choice-overload-in-ux-f5defb6cee5d",
+ "type": "article"
+ }
+ ]
},
"iQNvKhwhvbis4Yn1ZxQua": {
"title": "Avoid Direct Payments",
- "description": "Avoiding direct payments is a crucial aspect of UX design that can lead to favorable conscious evaluations from users. Direct payments refer to instances where users are required to pay for your product or service upfront, which can create a negative perception and less willingness to engage. By finding alternative ways to monetize or offer premium features, you can create an enjoyable experience and encourage users to appreciate and invest in your offerings without feeling forced.\n\nWhy it Matters?\n---------------\n\n* **Trust-Building**: When users are not asked to pay upfront or have no hidden costs, they are more likely to trust your product or service, increasing the likelihood of loyal customers.\n* **Accessibility**: Making your offerings available without direct payments ensures a larger and more diverse audience can experience the value your product provides, which can lead to increased traffic and eventual conversions.\n* **Reduced churn**: Users who do not feel \"locked-in\" by having to pay upfront are less likely to abandon your product or service in search of alternative solutions.\n\nStrategies to Avoid Direct Payments\n-----------------------------------\n\n* **Offer a free trial**: Provide users with a limited-time free trial of your product or service to showcase its value and encourage them to invest once the trial is over.\n* **Freemium model**: Allow users to access basic features of your product for free, while offering premium features at a cost. This model lets users experience your offerings without having to pay upfront and gives them the option to upgrade if they find value in it.\n* **In-app purchases**: Incorporate in-app purchases within your product, which enables users to access premium features and benefits without being forced to pay upfront.\n* **Subscriptions**: Offer subscriptions as an alternative payment method that allows users to access premium features and receive updates frequently, creating a sense of loyalty and commitment.\n* **Pay-as-you-go or usage-based pricing**: Implement a flexible pricing model where users only pay for what they use or when they use a specific feature, removing the barrier of direct payments and increasing user satisfaction.\n\nBy avoiding direct payments and implementing these strategies, a UX designer can create a user experience that fosters trust, accessibility, and user engagement. By doing so, you increase the likelihood of gaining favorable conscious evaluations of your product, ultimately leading to long-term success.",
- "links": []
+ "description": "Avoiding direct payments is a crucial aspect of UX design that can lead to favorable conscious evaluations from users. Direct payments refer to instances where users are required to pay for your product or service upfront, which can create a negative perception and less willingness to engage. By finding alternative ways to monetize or offer premium features, you can create an enjoyable experience and encourage users to appreciate and invest in your offerings without feeling forced.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Payment UX Best Practices",
+ "url": "https://gocardless.com/guides/posts/payment-ux-best-practices/",
+ "type": "article"
+ }
+ ]
},
"S9rJr8pc-Ln8BxG0suBWa": {
"title": "Frame Text to Avoid Temporal Myopia",
diff --git a/public/roadmap-content/vue.json b/public/roadmap-content/vue.json
index a81fd38a8..80d390ebc 100644
--- a/public/roadmap-content/vue.json
+++ b/public/roadmap-content/vue.json
@@ -17,8 +17,13 @@
},
"y9ToYDix-koRbR6FLydFw": {
"title": "create-vue",
- "description": "[create-vue](https://github.com/vuejs/create-vue) is a CLI tool that helps you create a new Vue project with a single command. It is a simple and easy-to-use tool that saves you time and effort when setting up a new Vue project.\n\nLearn more using the following resources:",
+ "description": "create-vue is a CLI tool that helps you create a new Vue project with a single command. It is a simple and easy-to-use tool that saves you time and effort when setting up a new Vue project.\n\nLearn more using the following resources:",
"links": [
+ {
+ "title": "vuejs/create-vue",
+ "url": "https://github.com/vuejs/create-vue",
+ "type": "opensource"
+ },
{
"title": "Creating a Vue Project",
"url": "https://cli.vuejs.org/guide/creating-a-project.html",
@@ -120,12 +125,17 @@
},
"CGdw3PqLRb9OqFU5SqmE1": {
"title": "Directives",
- "description": "Directives are special attributes with the `v-` prefix. Vue provides a number of [built-in directives](https://vuejs.org/api/built-in-directives.html).\n\nVisit the following resources to learn more:",
+ "description": "Directives are special attributes with the `v-` prefix. Vue provides a number of built-in directives.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Directives Documentation",
"url": "https://vuejs.org/guide/essentials/template-syntax.html#directives",
"type": "article"
+ },
+ {
+ "title": "Built-in Directives",
+ "url": "https://vuejs.org/api/built-in-directives.html",
+ "type": "article"
}
]
},
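To make the `v-` prefix concrete, here is a minimal single-file component sketch showing three of the built-in directives; the `seen` and `url` values are invented for illustration:

```vue
<script setup>
import { ref } from 'vue'

const seen = ref(true)
const url = 'https://vuejs.org'
</script>

<template>
  <!-- v-if conditionally renders the element based on the truthiness of `seen` -->
  <p v-if="seen">Now you see me</p>
  <!-- v-bind (shorthand ":") binds the href attribute to the `url` constant -->
  <a v-bind:href="url">Vue documentation</a>
  <!-- v-on (shorthand "@") attaches a click listener that toggles `seen` -->
  <button v-on:click="seen = !seen">Toggle</button>
</template>
```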
@@ -142,7 +152,7 @@
},
"PPUU3Rb73aCpT4zcyvlJE": {
"title": "Options API",
- "description": "We use Options API in a Vue application to write and define different components. With this API, we can use options such as data, methods, and mounted.\n\nTo state it simply, Options API is an old way to structure a Vue.JS application. Due to some limitations in this API, Composition API was introduced in Vue 3.\n\nVisit the following resources to learn more:",
+ "description": "We use Options API in a Vue application to write and define different components. With this API, we can use options such as data, methods, and mounted. To state it simply, Options API is an old way to structure a Vue.JS application. Due to some limitations in this API, Composition API was introduced in Vue 3.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "TypeScript with Options API",
@@ -192,14 +202,9 @@
"description": "Every application instance exposes a `config` object that contains the configuration settings for that application. You can modify its properties before mounting your application.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Official Documentation",
+ "title": "Vue.js Documentation",
"url": "https://vuejs.org/api/application.html#app-config",
"type": "article"
- },
- {
- "title": "official API Documentation",
- "url": "https://vuejs.org/api/application.html",
- "type": "article"
}
]
},
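As a small sketch of the pattern the description refers to (modifying `app.config` before mounting), assuming a standard Vue 3 entry file:

```js
// main.js, a typical Vue 3 entry file (file name assumed for illustration)
import { createApp } from 'vue'
import App from './App.vue'

const app = createApp(App)

// configuration must be changed before mount() is called
// enables component performance tracing in browser devtools (development builds only)
app.config.performance = true

app.mount('#app')
```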
@@ -216,13 +221,25 @@
},
"1oIt_5OK-t2WaCgaYt9A8": {
"title": "Error / Warn Handler",
- "description": "",
- "links": []
+ "description": "Debugging in Vue.js involves identifying and fixing issues in your Vue applications. It’s an essential part of the development process, and there are several tools and techniques you can use to effectively debug your Vue code.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Debugging Documentation",
+ "url": "https://vuejs.org/v2/cookbook/debugging-in-vscode.html",
+ "type": "article"
+ }
+ ]
},
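A minimal sketch of registering both handlers in a Vue 3 entry file; the console calls stand in for whatever reporting mechanism you actually use:

```js
import { createApp } from 'vue'
import App from './App.vue'

const app = createApp(App)

// called for uncaught errors propagating from any component
// (render functions, event handlers, lifecycle hooks, watchers, ...)
app.config.errorHandler = (err, instance, info) => {
  // `info` is a Vue-specific string describing the source of the error
  console.error('Captured error:', err, info)
  // a real app might forward this to an error-tracking service instead
}

// called for runtime warnings; only invoked in development builds
app.config.warnHandler = (msg, instance, trace) => {
  console.warn('Vue warning:', msg, trace)
}

app.mount('#app')
```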
"gihxGgt177BK_EYsAfpx9": {
"title": "Global Properties",
- "description": "",
- "links": []
+ "description": "Global properties allows you to add properties or methods that can be accessed throughout your application. This is particularly useful for sharing functionality or data across components without the need to pass props explicitly.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Vue.js Global Properties",
+ "url": "https://blog.logrocket.com/vue-js-globalproperties/",
+ "type": "article"
+ }
+ ]
},
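A short sketch of `app.config.globalProperties`; the `$formatDate` helper name is made up for this example:

```js
import { createApp } from 'vue'
import App from './App.vue'

const app = createApp(App)

// $formatDate becomes available on every component instance in the app
app.config.globalProperties.$formatDate = (date) =>
  new Intl.DateTimeFormat('en-US').format(date)

app.mount('#app')
```

Inside any component it can then be used as `{{ $formatDate(createdAt) }}` in templates, or as `this.$formatDate(...)` in Options API code.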
"f7N4pAp_jBlT8_8owAcbG": {
"title": "Performance",
@@ -242,10 +259,10 @@
},
"NCIzs3jbQTv1xXhAaGfZN": {
"title": "v-text",
- "description": "The `v-text` directive is used to set the textContent property of an element. It's important to note that when using this directive it will overwrite the HTML content inside the element. The expected input is a string, so it's important to wrap any text in single quotes.\n\nExample:\n\n \n
\n \n \n\nVisit the following resources to learn more:",
+ "description": "The `v-text` directive is used to set the textContent property of an element. It's important to note that when using this directive it will overwrite the HTML content inside the element. The expected input is a string, so it's important to wrap any text in single quotes.\n\nExample\n-------\n\n \n
\n \n \n\nVisit the following resources to learn more:",
"links": [
{
- "title": "v-text documentation",
+ "title": "v-text Documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-text",
"type": "article"
}
@@ -253,10 +270,10 @@
},
"bZxtIBeIfeUcR32LZWrPW": {
"title": "v-html",
- "description": "The `v-html` directive is similar to the `v-text` directive, but the difference is that `v-html` renders its content as HTML. This means that if you pass an HTML element it will be rendered as an element and not plain text. Since the content is render as HTML, it can pose a security risk if the content contains malicius JavaScript code. For this reason you should never use this directive in combination with user input, unless the input is first properly sanitized.\n\nExample:\n\n \n Text'\">
\n \n \n\nVisit the following resources to learn more:",
+ "description": "The `v-html` directive is similar to the `v-text` directive, but the difference is that `v-html` renders its content as HTML. This means that if you pass an HTML element it will be rendered as an element and not plain text. Since the content is render as HTML, it can pose a security risk if the content contains malicious JavaScript code. For this reason you should never use this directive in combination with user input, unless the input is first properly sanitized.\n\nExample\n-------\n\n \n Text'\">
\n \n \n\nVisit the following resources to learn more:",
"links": [
{
- "title": "v-html documentation",
+ "title": "v-html Documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-html",
"type": "article"
}
@@ -264,10 +281,10 @@
},
"_TlbGTKFCMO0wdLbC6xHX": {
"title": "v-show",
- "description": "`v-show` is similar to `v-if` in that it allows you to conditionally render components. However, it does not remove the component from the DOM and merely toggles its `display` CSS property to be `hidden`. It also does not work with `v-else-if` oe `v-else`.\n\nPrefer `v-show` over `v-if` if the component's visibility needs to change often, and `v-if` if not.\n\nVisit the following resources to learn more:",
+ "description": "`v-show` is similar to `v-if` in that it allows you to conditionally render components. However, it does not remove the component from the DOM and merely toggles its `display` CSS property to be `hidden`. It also does not work with `v-else-if` oe `v-else`. Prefer `v-show` over `v-if` if the component's visibility needs to change often, and `v-if` if not.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "Vue Conditional Rendering Docs",
+ "title": "Vue Conditional Rendering",
"url": "https://vuejs.org/guide/essentials/conditional.html#v-show",
"type": "article"
}
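To illustrate the trade-off described above, a small sketch contrasting the two directives; `isVisible` is assumed component state:

```vue
<script setup>
import { ref } from 'vue'

const isVisible = ref(true)
</script>

<template>
  <button @click="isVisible = !isVisible">Toggle</button>

  <!-- stays in the DOM; only its inline `display` style is toggled -->
  <p v-show="isVisible">Toggled often? v-show avoids re-creating the element.</p>

  <!-- removed from and re-added to the DOM whenever the condition changes -->
  <p v-if="isVisible">Toggled rarely? v-if avoids any rendering cost up front.</p>
</template>
```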
@@ -275,7 +292,7 @@
},
"xHj3W9Ig3MVuVlGyXchaP": {
"title": "v-if",
- "description": "Conditionally render an element or a template fragment based on the truthy-ness of the expression value.\n\nWhen a `v-if` element is toggled, the element and its contained directives / components are destroyed and re-constructed. If the initial condition is falsy, then the inner content won't be rendered at all.\n\nVisit the following resources to learn more:",
+ "description": "Conditionally render an element or a template fragment based on the truthy-ness of the expression value. When a `v-if` element is toggled, the element and its contained directives / components are destroyed and re-constructed. If the initial condition is falsy, then the inner content won't be rendered at all.\n\nExample\n-------\n\n Vue is awesome! \n \n\nVisit the following resources to learn more:",
"links": [
{
"title": "v-if Documentation",
@@ -289,7 +306,7 @@
"description": "The `v-else` conditionally renders an element or a template fragment as a function in case the `v-if` does not fulfil the condition.\n\nVisit the following resources for more information:",
"links": [
{
- "title": "v-else documentation",
+ "title": "v-else Documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-else",
"type": "article"
}
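A minimal sketch of the `v-if`/`v-else` pairing described above; `isLoggedIn` is an assumed boolean ref:

```vue
<script setup>
import { ref } from 'vue'

const isLoggedIn = ref(false)
</script>

<template>
  <!-- v-else must directly follow a v-if (or v-else-if) element -->
  <p v-if="isLoggedIn">Welcome back!</p>
  <p v-else>Please log in.</p>
</template>
```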
@@ -308,10 +325,10 @@
},
"3ftwRjQ9e1-qDT9BV53zr": {
"title": "v-for",
- "description": "The `v-for` directive is used to render an HTML element, a block of elements, or even a component based on an array, an object, or a set number of times. When using this directive it is important to assign a unique key to each item to avoid issues and improve perfomance. This directive follows the `item in items` syntax.\n\nExample:\n\n \n \n \n {{ food.name }}
\n \n \n\nVisit the following resources to learn more:",
+ "description": "The `v-for` directive is used to render an HTML element, a block of elements, or even a component based on an array, an object, or a set number of times. When using this directive it is important to assign a unique key to each item to avoid issues and improve performance. This directive follows the `item in items` syntax.\n\nExample\n-------\n\n \n \n \n {{ food.name }}
\n \n \n\nVisit the following resources to learn more:",
"links": [
{
- "title": "v-for documentation",
+ "title": "v-for Documentation",
"url": "https://vuejs.org/guide/essentials/list#v-for",
"type": "article"
}
@@ -319,15 +336,21 @@
},
"hVuRmhXVP65IPtuHTORjJ": {
"title": "v-on",
- "description": "",
- "links": []
+ "description": "The v-on directive is placed on an element to attach an event listener. To attach an event listener with v-on we need to provide the event type, and any modifier, and a method or expression that should run when that event occurs.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "v-on Directive",
+ "url": "https://www.w3schools.com/vue/ref_v-on.php",
+ "type": "article"
+ }
+ ]
},
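A short sketch of the `v-on` syntax the description walks through: an event type, an optional modifier, and a handler. The `submitForm` method and the click counter are invented for illustration:

```vue
<script setup>
import { ref } from 'vue'

const clicks = ref(0)

function submitForm() {
  // placeholder handler; a real app would validate and send the form data here
  console.log('submitted after', clicks.value, 'clicks')
}
</script>

<template>
  <!-- full syntax: v-on:<event>.<modifier>="handler"; .prevent calls event.preventDefault() -->
  <form v-on:submit.prevent="submitForm">
    <!-- "@" is the shorthand for v-on; handlers can also be inline expressions -->
    <button type="submit" @click="clicks++">Submit ({{ clicks }})</button>
  </form>
</template>
```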
"cuM9q9vYy8JpZPGeBffd1": {
"title": "v-bind",
- "description": "The `v-bind` directive dynamically binds an HTML attribute to data.\n\nThe shorthand for this directive is `:`\n\nExample:\n\n \n \n \n \n \n \n\nVisit the following resources for more information:",
+ "description": "The `v-bind` directive dynamically binds an HTML attribute to data. The shorthand for this directive is `:`\n\nExample\n-------\n\n \n \n \n \n \n \n\nVisit the following resources for more information:",
"links": [
{
- "title": "v-bind documentation",
+ "title": "v-bind Documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-bind",
"type": "article"
}
@@ -357,10 +380,10 @@
},
"5k9CrbzhNy9iiS6ez2UE6": {
"title": "v-once",
- "description": "The `v-once` directive makes an HTML element render only once, skipping every future update.\n\nExample:\n\n \n \n \n \n {{ input }}
\n \n \n\nIn this example the **p** element will not change its text even if the input variable is changed through the **input** element.\n\nVisit the following resources to learn more:",
+ "description": "The `v-once` directive makes an HTML element render only once, skipping every future update.\n\nExample\n-------\n\n \n \n \n \n {{ input }}
\n \n \n\nIn this example the **p** element will not change its text even if the input variable is changed through the **input** element.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "v-once documentation",
+ "title": "v-once Documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-once",
"type": "article"
}
@@ -368,7 +391,7 @@
},
"mlsrhioiEkqnRIL6O3hNa": {
"title": "v-pre",
- "description": "The `v-pre` directive makes an element render its content as-is, skipping its compilation. The most common use case is when displaying raw mustache syntax.\n\nExample:\n\n \n \n \n {{ text }}
\n \n \n\nThe **p** element will display: `{{ text }}` and not `Some Text` because the compilation is skipped.\n\nVisit the following resources to learn more:",
+ "description": "The `v-pre` directive makes an element render its content as-is, skipping its compilation. The most common use case is when displaying raw mustache syntax.\n\nExample\n-------\n\n \n \n \n {{ text }}
\n \n \n\nThe **p** element will display: `{{ text }}` and not `Some Text` because the compilation is skipped.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "v-pre Documentation",
@@ -379,10 +402,10 @@
},
"RrSekP8Ub01coegMwLP6a": {
"title": "v-cloak",
- "description": "The v-cloak directive is used to prevent the uncompiled Vue template from being visible while the Vue instance is still loading. It temporarily hides the content until Vue has finished compiling the template\n\nThe v-cloak directive remains until the component instance is mounted.\n\n \n {{ message }}\n
\n \n\nCombined with CSS, you can hide elements with v-cloak until they are ready.\n\n [v-cloak] {\n display: none;\n }\n \n\nThe `` will not be visible until the compilation is done.\n\nVisit the following resources to learn more:",
+ "description": "The v-cloak directive is used to prevent the uncompiled Vue template from being visible while the Vue instance is still loading. It temporarily hides the content until Vue has finished compiling the template. The v-cloak directive remains until the component instance is mounted.\n\n
\n {{ message }}\n
\n \n\nCombined with CSS, you can hide elements with v-cloak until they are ready.\n\n [v-cloak] {\n display: none;\n }\n \n\nThe `
` will not be visible until the compilation is done.\n\nVisit the following resources to learn more:",
"links": [
{
- "title": "v-cloak documentation",
+ "title": "v-cloak Documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-cloak",
"type": "article"
}
@@ -393,7 +416,7 @@
"description": "Optimizing rendering is crucial for ensuring a smooth and efficient user experience across all your frontend projects. Sluggish webpages can lead to frustration for users, and potentially cause them to entirely abandon your web application. This issue comes up most often in single-page applications (SPAs), where the entirety of your application is loaded within a single webpage, and updates to it are handled dynamically without needing a full reload of the webpage.\n\nLearn more from the following resources:",
"links": [
{
- "title": "Optimizing rendering in Vue",
+ "title": "Optimizing Rendering in Vue",
"url": "https://blog.logrocket.com/optimizing-rendering-vue/",
"type": "article"
}
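As one concrete illustration, `v-memo` (available since Vue 3.2) skips re-rendering of a subtree while its memoized dependency array stays unchanged; the `list` and `selected` refs below are illustrative:

    <template>
      <div v-for="item in list" :key="item.id" v-memo="[item.id === selected]">
        <!-- this subtree re-renders only when the item's selection state flips -->
        <p>ID: {{ item.id }} (selected: {{ item.id === selected }})</p>
      </div>
    </template>

    <script setup>
    import { ref } from 'vue'
    const selected = ref(1)
    const list = ref([{ id: 1 }, { id: 2 }, { id: 3 }])
    </script>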
@@ -401,8 +424,14 @@
},
"dxwKfBxd5KYVkfEPMdHp-": {
"title": "Debugging",
- "description": "",
- "links": []
+ "description": "Debugging in Vue.js involves identifying and fixing issues in your Vue applications. It’s an essential part of the development process, and there are several tools and techniques you can use to effectively debug your Vue code.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Debugging Documentation",
+ "url": "https://vuejs.org/v2/cookbook/debugging-in-vscode.html",
+ "type": "article"
+ }
+ ]
},
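One common technique is pausing execution with a `debugger` statement inside a method or computed property, so that browser devtools (or an attached VS Code debugger) stop right where the state is produced; the component below is an illustrative sketch:

    <script setup>
    import { ref, computed } from 'vue'

    const count = ref(0)

    const doubled = computed(() => {
      debugger // with devtools open, execution pauses here on every re-computation
      return count.value * 2
    })
    </script>

    <template>
      <button @click="count++">{{ doubled }}</button>
    </template>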
"WiGG9_4G5y-AVA9byw6_g": {
"title": "Lifecycle Hooks",
@@ -448,12 +477,18 @@
},
"NfB3HlZ3uwYK5xszvV50b": {
"title": "Input Bindings",
- "description": "",
- "links": []
+ "description": "Input bindings are a way to bind user input to a component's data. This allows the component to react to user input and update its state accordingly. Input bindings are typically used with form elements such as text inputs, checkboxes, and select dropdowns.\n\nVisit the following resources to learn more:",
+ "links": [
+ {
+ "title": "Input Bindings",
+ "url": "https://vuejs.org/guide/essentials/forms",
+ "type": "article"
+ }
+ ]
},
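A minimal sketch of binding form inputs with `v-model` (the `message` and `checked` refs are illustrative):

    <template>
      <input v-model="message" placeholder="Type something" />
      <p>{{ message }}</p>
      <label><input type="checkbox" v-model="checked" /> Subscribed: {{ checked }}</label>
    </template>

    <script setup>
    import { ref } from 'vue'
    const message = ref('')
    const checked = ref(false)
    </script>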
"gMFndBcrTC6FtGryqN6dX": {
"title": "v-model",
- "description": "The v-model directive in Vue.js is used for creating two-way data bindings on form input elements, such as , , and . This means that the data can be updated in the component when the user inputs something, and the UI will update if the data in the component changes.",
+ "description": "The v-model directive in Vue.js is used for creating two-way data bindings on form input elements, such as `
`, `