From 65f51d9243691e50595defcecf43ff8ca0dabdd2 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Fri, 13 Sep 2024 16:37:44 +0600 Subject: [PATCH] chore: update roadmap content json (#7102) Co-authored-by: kamranahmedse <4921183+kamranahmedse@users.noreply.github.com> --- public/roadmap-content/android.json | 10 +- public/roadmap-content/backend.json | 1240 ++++++++++++++----------- public/roadmap-content/terraform.json | 20 +- public/roadmap-content/ux-design.json | 2 +- 4 files changed, 725 insertions(+), 547 deletions(-) diff --git a/public/roadmap-content/android.json b/public/roadmap-content/android.json index 2317745fc..4bcc79bde 100644 --- a/public/roadmap-content/android.json +++ b/public/roadmap-content/android.json @@ -412,8 +412,14 @@ }, "Bz-BkfzsDHAbAw3HD7WCd": { "title": "MVI", - "description": "", - "links": [] + "description": "The **MVI** `Model-View-Intent` pattern is a reactive architectural pattern, similar to **MVVM** and **MVP**, focusing on immutability and handling states in unidirectional cycles. The data flow is unidirectional: Intents update the Model's state through the `ViewModel`, and then the View reacts to the new state. This ensures a clear and predictable cycle between logic and the interface.\n\n* Model: Represents the UI state. 
It is immutable and contains all the necessary information to represent a screen.\n* View: Displays the UI state and receives the user's intentions.\n* Intent: The user's intentions trigger state updates, managed by the `ViewModel`.\n\nVisit the following resources to learn more:", + "links": [ + { + "title": "MVI with Kotlin", + "url": "https://proandroiddev.com/mvi-architecture-with-kotlin-flows-and-channels-d36820b2028d", + "type": "article" + } + ] }, "pSU-NZtjBh-u0WKTYfjk_": { "title": "MVVM", diff --git a/public/roadmap-content/backend.json b/public/roadmap-content/backend.json index cd432beb7..3a624aeb9 100644 --- a/public/roadmap-content/backend.json +++ b/public/roadmap-content/backend.json @@ -1,17 +1,17 @@ { "gKTSe9yQFVbPVlLzWB0hC": { "title": "Search Engines", - "description": "", + "description": "Search engines like Elasticsearch are specialized tools designed for fast, scalable, and flexible searching and analyzing of large volumes of data. Elasticsearch is an open-source, distributed search and analytics engine built on Apache Lucene, offering full-text search capabilities, real-time indexing, and advanced querying features. Key characteristics of search engines like Elasticsearch include:\n\n1. **Full-Text Search**: Support for complex search queries, including relevance scoring and text analysis.\n2. **Distributed Architecture**: Scalability through horizontal distribution across multiple nodes or servers.\n3. **Real-Time Indexing**: Ability to index and search data almost instantaneously.\n4. **Powerful Query DSL**: A domain-specific language for constructing and executing sophisticated queries.\n5. 
**Analytics**: Capabilities for aggregating and analyzing data, often used for log and event data analysis.\n\nElasticsearch is commonly used in applications requiring advanced search functionality, such as search engines, data analytics platforms, and real-time monitoring systems.", "links": [] }, "9Fpoor-Os_9lvrwu5Zjh-": { "title": "Design and Development Principles", - "description": "In this section, we'll discuss some essential design and development principles to follow while building the backend of any application. These principles will ensure that the backend is efficient, scalable, and maintainable.\n\n1\\. Separation of Concerns (SoC)\n--------------------------------\n\nSeparation of Concerns is a fundamental principle that states that different functionalities of a system should be as independent as possible. This approach improves maintainability and scalability by allowing developers to work on separate components without affecting each other. Divide your backend into clear modules and layers, such as data storage, business logic, and network communication.\n\n2\\. Reusability\n---------------\n\nReusability is the ability to use components, functions, or modules in multiple places without duplicating code. While designing the backend, look for opportunities where you can reuse existing code. Use techniques like creating utility functions, abstract classes, and interfaces to promote reusability and reduce redundancy.\n\n3\\. Keep It Simple and Stupid (KISS)\n------------------------------------\n\nKISS principle states that the simpler the system, the easier it is to understand, maintain, and extend. When designing the backend, try to keep the architecture and code as simple as possible. Use clear naming conventions and modular structures, and avoid over-engineering and unnecessary complexity.\n\n4\\. Don't Repeat Yourself (DRY)\n-------------------------------\n\nDo not duplicate code or functionality across your backend. 
Duplication can lead to inconsistency and maintainability issues. Instead, focus on creating reusable components, functions or modules, which can be shared across different parts of the backend.\n\n5\\. Scalability\n---------------\n\nA scalable system is one that can efficiently handle an increasing number of users, requests, or data. Design the backend with scalability in mind, considering factors such as data storage, caching, load balancing, and horizontal scaling (adding more instances of the backend server).\n\n6\\. Security\n------------\n\nSecurity is a major concern when developing any application. Always follow best practices to prevent security flaws, such as protecting sensitive data, using secure communication protocols (e.g., HTTPS), implementing authentication and authorization mechanisms, and sanitizing user inputs.\n\n7\\. Testing\n-----------\n\nTesting is crucial for ensuring the reliability and stability of the backend. Implement a comprehensive testing strategy, including unit, integration, and performance tests. Use automated testing tools and set up continuous integration (CI) and continuous deployment (CD) pipelines to streamline the testing and deployment process.\n\n8\\. Documentation\n-----------------\n\nProper documentation helps developers understand and maintain the backend codebase. Write clear and concise documentation for your code, explaining the purpose, functionality, and how to use it. Additionally, use comments and appropriate naming conventions to make the code itself more readable and self-explanatory.\n\nBy following these design and development principles, you'll be well on your way to creating an efficient, secure, and maintainable backend for your applications.", + "description": "Design and Development Principles are fundamental guidelines that inform the creation of software systems. Key principles include:\n\n1. 
SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion)\n2. DRY (Don't Repeat Yourself)\n3. KISS (Keep It Simple, Stupid)\n4. YAGNI (You Aren't Gonna Need It)\n5. Separation of Concerns\n6. Modularity\n7. Encapsulation\n8. Composition over Inheritance\n9. Loose Coupling and High Cohesion\n10. Principle of Least Astonishment\n\nThese principles aim to create more maintainable, scalable, and robust software. They encourage clean code, promote reusability, reduce complexity, and enhance flexibility. While not rigid rules, these principles guide developers in making design decisions that lead to better software architecture and easier long-term maintenance. Applying these principles helps in creating systems that are easier to understand, modify, and extend over time.", "links": [] }, "EwvLPSI6AlZ4TnNIJTZA4": { "title": "Learn about APIs", - "description": "API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other.\n\nVisit the following resources to learn more:", + "description": "An API (Application Programming Interface) is a set of defined rules and protocols that allow different software applications to communicate and interact with each other. It provides a standardized way for developers to access and manipulate the functionalities or data of a service, application, or platform without needing to understand its internal workings. APIs can be public or private and are commonly used to integrate disparate systems, facilitate third-party development, and enable interoperability between applications. 
They typically include endpoints, request methods (like GET, POST, PUT), and data formats (like JSON or XML) used to interact with a service.\n\nVisit the following resources to learn more:",
    "links": [
      {
        "title": "What is an API?",
@@ -19,15 +19,20 @@
        "type": "article"
      },
      {
-        "title": "What is an API?",
-        "url": "https://www.youtube.com/watch?v=s7wmiS2mSXY",
+        "title": "daily.dev API Feed",
+        "url": "https://app.daily.dev/tags/rest-api",
+        "type": "article"
+      },
+      {
+        "title": "What is an API (in 5 minutes)",
+        "url": "https://www.youtube.com/watch?v=ByGJQzlzxQg",
        "type": "video"
      }
    ]
  },
  "SiYUdtYMDImRPmV2_XPkH": {
    "title": "Internet",
-    "description": "The Internet is a global network of computers connected to each other which communicate through a standardized set of protocols.\n\nVisit the following resources to learn more:",
+    "description": "The internet is a global network of interconnected computers that communicate using standardized protocols, primarily TCP/IP. When you request a webpage, your device sends a data packet through your internet service provider (ISP) to a DNS server, which translates the website's domain name into an IP address. The packet is then routed across various networks (using routers and switches) to the destination server, which processes the request and sends back the response. This back-and-forth exchange enables the transfer of data like web pages, emails, and files, making the internet a dynamic, decentralized system for global communication.\n\nVisit the following resources to learn more:",
    "links": [
      {
        "title": "How does the Internet Work?",
@@ -55,31 +60,21 @@
        "type": "video"
      },
      {
-        "title": "How the Internet Works in 5 Minutes",
-        "url": "https://www.youtube.com/watch?v=7_LPdttKXPc",
-        "type": "video"
-      },
-      {
-        "title": "Computer Network | Google IT Support Certificate",
-        "url": "https://www.youtube.com/watch?v=Z_hU2zm4_S8",
+        "title": "How does the internet work? 
(Full Course)", + "url": "https://www.youtube.com/watch?v=zN8YNNHcaZc", "type": "video" } ] }, "CWwh2abwqx4hAxpAGvhIx": { "title": "Rust", - "description": "Rust is a modern systems programming language focusing on safety, speed, and concurrency. It accomplishes these goals by being memory safe without using garbage collection.\n\nVisit the following resources to learn more:", + "description": "Rust is a systems programming language known for its focus on safety, performance, and concurrency. It provides fine-grained control over system resources while ensuring memory safety without needing a garbage collector. Rust's ownership model enforces strict rules on how data is accessed and managed, preventing common issues like null pointer dereferences and data races. Its strong type system and modern features, such as pattern matching and concurrency support, make it suitable for a wide range of applications, from low-level systems programming to high-performance web servers and tools. Rust is gaining traction in both industry and open source for its reliability and efficiency.\n\nVisit the following resources to learn more:", "links": [ { "title": "The Rust Programming Language - online book", "url": "https://doc.rust-lang.org/book/", "type": "article" }, - { - "title": "Rust by Example - collection of runnable examples", - "url": "https://doc.rust-lang.org/stable/rust-by-example/index.html", - "type": "article" - }, { "title": "Rust vs. Go: Why They’re Better Together", "url": "https://thenewstack.io/rust-vs-go-why-theyre-better-together/", @@ -94,12 +89,17 @@ "title": "Explore top posts about Rust", "url": "https://app.daily.dev/tags/rust?ref=roadmapsh", "type": "article" + }, + { + "title": "Learn Rust Programming", + "url": "https://www.youtube.com/watch?v=BpPEoZW5IiY", + "type": "video" } ] }, "l9Wrq_Ad9-Ju4NIB0m5Ha": { "title": "PHP", - "description": "PHP is a general purpose scripting language often used for making dynamic and interactive Web pages. 
It was originally created by Danish-Canadian programmer Rasmus Lerdorf in 1994. The PHP reference implementation is now produced by The PHP Group and supported by PHP Foundation. PHP supports procedural and object-oriented styles of programming with some elements of functional programming as well.\n\nVisit the following resources to learn more:", + "description": "PHP (Hypertext Preprocessor) is a widely-used, open-source scripting language designed primarily for web development but also applicable for general-purpose programming. It is embedded within HTML to create dynamic web pages and interact with databases, often working with MySQL or other database systems. PHP is known for its simplicity, ease of integration with various web servers, and extensive support for web-related functionalities. Its wide adoption is driven by its role in powering major platforms and content management systems like WordPress, Joomla, and Drupal. PHP's features include server-side scripting, session management, and support for various web protocols and formats.\n\nVisit the following resources to learn more:", "links": [ { "title": "PHP Website", @@ -123,35 +123,20 @@ }, { "title": "PHP for Beginners", - "url": "https://www.youtube.com/watch?v=U2lQWR6uIuo&list=PL3VM-unCzF8ipG50KDjnzhugceoSG3RTC", - "type": "video" - }, - { - "title": "PHP For Absolute Beginners", - "url": "https://www.youtube.com/watch?v=2eebptXfEvw", - "type": "video" - }, - { - "title": "Full PHP 8 Tutorial - Learn PHP The Right Way In 2022", - "url": "https://www.youtube.com/watch?v=sVbEyFZKgqk&list=PLr3d3QYzkw2xabQRUpcZ_IBk9W50M9pe-", + "url": "https://www.youtube.com/watch?v=zZ6vybT1HQs", "type": "video" } ] }, "BdXbcz4-ar3XOX0wIKzBp": { "title": "Go", - "description": "Go is an open source programming language supported by Google. 
Go can be used to write cloud services, CLI tools, used for API development, and much more.\n\nVisit the following resources to learn more:", + "description": "Go, also known as Golang, is a statically typed, compiled programming language designed by Google. It combines the efficiency of compiled languages with the ease of use of dynamically typed interpreted languages. Go features built-in concurrency support through goroutines and channels, making it well-suited for networked and multicore systems. It has a simple and clean syntax, fast compilation times, and efficient garbage collection. Go's standard library is comprehensive, reducing the need for external dependencies. The language emphasizes simplicity and readability, with features like implicit interfaces and a lack of inheritance. Go is particularly popular for building microservices, web servers, and distributed systems. Its performance, simplicity, and robust tooling make it a favored choice for cloud-native development, DevOps tools, and large-scale backend systems.\n\nVisit the following resources to learn more:", "links": [ { "title": "Visit Dedicated Go Roadmap", "url": "/golang", "type": "article" }, - { - "title": "A Tour of Go – Go Basics", - "url": "https://go.dev/tour/welcome/1", - "type": "article" - }, { "title": "Go Reference Documentation", "url": "https://go.dev/doc/", @@ -162,16 +147,6 @@ "url": "https://gobyexample.com/", "type": "article" }, - { - "title": "W3Schools Go Tutorial ", - "url": "https://www.w3schools.com/go/", - "type": "article" - }, - { - "title": "Making a RESTful JSON API in Go", - "url": "https://thenewstack.io/make-a-restful-json-api-go/", - "type": "article" - }, { "title": "Go, the Programming Language of the Cloud", "url": "https://thenewstack.io/go-the-programming-language-of-the-cloud/", @@ -183,15 +158,15 @@ "type": "article" }, { - "title": "Go Class by Matt", - "url": "https://www.youtube.com/playlist?list=PLoILbKo9rG3skRCj37Kn5Zj803hhiuRK6", + "title": "Go 
Programming – Golang Course with Bonus Projects", + "url": "https://www.youtube.com/watch?v=un6ZyFkqFKo", "type": "video" } ] }, "8-lO-v6jCYYoklEJXULxN": { "title": "JavaScript", - "description": "JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on the websites are sliders, click interactions, popups and so on.\n\nVisit the following resources to learn more:", + "description": "JavaScript is a versatile, high-level programming language primarily used for adding interactivity and dynamic features to websites. It runs in the browser, allowing for client-side scripting that can manipulate HTML and CSS, respond to user events, and interact with web APIs. JavaScript is also used on the server side with environments like Node.js, enabling full-stack development. It supports event-driven, functional, and imperative programming styles, and has a rich ecosystem of libraries and frameworks (like React, Angular, and Vue) that enhance its capabilities and streamline development.\n\nVisit the following resources to learn more:", "links": [ { "title": "Visit Dedicated JavaScript Roadmap", @@ -222,7 +197,7 @@ }, "ANeSwxJDJyQ-49pO2-CCI": { "title": "Java", - "description": "Java is general-purpose language, primarily used for Internet-based applications. It was created in 1995 by James Gosling at Sun Microsystems and is one of the most popular options for backend developers.\n\nVisit the following resources to learn more:", + "description": "Java is a high-level, object-oriented programming language known for its portability, robustness, and scalability. Developed by Sun Microsystems (now Oracle), Java follows the \"write once, run anywhere\" principle, allowing code to run on any device with a Java Virtual Machine (JVM). It's widely used for building large-scale enterprise applications, Android mobile apps, and web services. 
Java features automatic memory management (garbage collection), a vast standard library, and strong security features, making it a popular choice for backend systems, distributed applications, and cloud-based solutions.\n\nVisit the following resources to learn more:", "links": [ { "title": "Visit Dedicated Java Roadmap", @@ -244,11 +219,6 @@ "url": "https://app.daily.dev/tags/java?ref=roadmapsh", "type": "article" }, - { - "title": "Java Crash Course", - "url": "https://www.youtube.com/watch?v=eIrMbAQSU34", - "type": "video" - }, { "title": "Complete Java course", "url": "https://www.youtube.com/watch?v=xk4_1vDrzzo", @@ -258,8 +228,13 @@ }, "J_sVHsD72Yzyqb9KCIvAY": { "title": "Python", - "description": "Python is a well known programming language which is both a strongly typed and a dynamically typed language. Being an interpreted language, code is executed as soon as it is written and the Python syntax allows for writing code in functional, procedural or object-oriented programmatic ways.\n\nVisit the following resources to learn more:", + "description": "Python is a high-level, interpreted programming language known for its readability, simplicity, and versatility. Its design emphasizes code readability and a clear, straightforward syntax, making it accessible for both beginners and experienced developers. Python supports multiple programming paradigms, including procedural, object-oriented, and functional programming. It has a rich ecosystem of libraries and frameworks, such as Django and Flask for web development, Pandas and NumPy for data analysis, and TensorFlow and PyTorch for machine learning. 
Python is widely used in web development, data science, automation, and scripting, and it benefits from a strong community and extensive documentation.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "Python Full Course for free", + "url": "https://www.youtube.com/watch?v=ix9cRaBkVe0", + "type": "course" + }, { "title": "Visit Dedicated Python Roadmap", "url": "/python", @@ -270,41 +245,16 @@ "url": "https://www.python.org/", "type": "article" }, - { - "title": "Python Getting Started", - "url": "https://www.python.org/about/gettingstarted/", - "type": "article" - }, { "title": "Automate the Boring Stuff", "url": "https://automatetheboringstuff.com/", "type": "article" }, - { - "title": "Python principles - Python basics", - "url": "https://pythonprinciples.com/", - "type": "article" - }, - { - "title": "W3Schools - Python Tutorial ", - "url": "https://www.w3schools.com/python/", - "type": "article" - }, - { - "title": "Python Crash Course", - "url": "https://ehmatthes.github.io/pcc/", - "type": "article" - }, { "title": "An Introduction to Python for Non-Programmers", "url": "https://thenewstack.io/an-introduction-to-python-for-non-programmers/", "type": "article" }, - { - "title": "Getting Started with Python and InfluxDB", - "url": "https://thenewstack.io/getting-started-with-python-and-influxdb/", - "type": "article" - }, { "title": "Explore top posts about Python", "url": "https://app.daily.dev/tags/python?ref=roadmapsh", @@ -314,38 +264,33 @@ }, "rImbMHLLfJwjf3l25vBkc": { "title": "C#", - "description": "C# (pronounced \"C sharp\") is a general purpose programming language made by Microsoft. It is used to perform different tasks and can be used to create web apps, games, mobile apps, etc.\n\nVisit the following resources to learn more:", + "description": "C# (pronounced C-sharp) is a modern, object-oriented programming language developed by Microsoft as part of its .NET framework. 
It combines the power and efficiency of C++ with the simplicity of Visual Basic, featuring strong typing, lexical scoping, and support for functional, generic, and component-oriented programming paradigms. C# is widely used for developing Windows desktop applications, web applications with ASP.NET, games with Unity, and cross-platform mobile apps using Xamarin. It offers features like garbage collection, type safety, and extensive library support. C# continues to evolve, with regular updates introducing new capabilities such as asynchronous programming, nullable reference types, and pattern matching. Its integration with the .NET ecosystem and Microsoft's development tools makes it a popular choice for enterprise software development and large-scale applications.\n\nVisit the following resources to learn more:",
    "links": [
      {
        "title": "C# Learning Path",
        "url": "https://docs.microsoft.com/en-us/learn/paths/csharp-first-steps/?WT.mc_id=dotnet-35129-website",
-        "type": "article"
+        "type": "course"
      },
      {
        "title": "C# on W3 schools",
        "url": "https://www.w3schools.com/cs/index.php",
        "type": "article"
      },
-      {
-        "title": "Introduction to C#",
-        "url": "https://docs.microsoft.com/en-us/shows/CSharp-101/?WT.mc_id=Educationalcsharp-c9-scottha",
-        "type": "article"
-      },
      {
        "title": "Explore top posts about C#",
        "url": "https://app.daily.dev/tags/c#?ref=roadmapsh",
        "type": "article"
      },
      {
-        "title": "C# tutorials",
-        "url": "https://www.youtube.com/watch?v=gfkTfcpWqAY&list=PLTjRvDozrdlz3_FPXwb6lX_HoGXa09Yef",
+        "title": "Learn C# Programming – Full Course with Mini-Projects",
+        "url": "https://www.youtube.com/watch?v=YrtFtdTTfv0",
        "type": "video"
      }
    ]
  },
  "SlH0Rl07yURDko2nDPfFy": {
    "title": "Ruby",
-    "description": "Ruby is a high-level, interpreted programming language that blends Perl, Smalltalk, Eiffel, Ada, and Lisp. Ruby focuses on simplicity and productivity along with a syntax that reads and writes naturally. 
Ruby supports procedural, object-oriented and functional programming and is dynamically typed.\n\nVisit the following resources to learn more:", + "description": "Ruby is a high-level, object-oriented programming language known for its simplicity, productivity, and elegant syntax. Designed to be intuitive and easy to read, Ruby emphasizes developer happiness and quick development cycles. It supports multiple programming paradigms, including procedural, functional, and object-oriented programming. Ruby is particularly famous for its web framework, Ruby on Rails, which facilitates rapid application development by providing conventions and tools for building web applications efficiently. The language's flexibility, combined with its rich ecosystem of libraries and a strong community, makes it popular for web development, scripting, and prototyping.\n\nVisit the following resources to learn more:", "links": [ { "title": "Ruby Website", @@ -376,16 +321,16 @@ }, "2f0ZO6GJElfZ2Eis28Hzg": { "title": "Pick a Language", - "description": "Even if you’re a beginner the least you would have known is that Web Development is majorly classified into two facets: Frontend Development and Backend Development. And obviously, they both have their respective set of tools and technologies. For instance, when we talk about Frontend Development, there always comes 3 names first and foremost – HTML, CSS, and JavaScript.\n\nIn the same way, when it comes to Backend Web Development – we primarily require a backend (or you can say server-side) programming language to make the website function along with various other tools & technologies such as databases, frameworks, web servers, etc.\n\nPick a language from the given list and make sure to learn its quirks, core details about its runtime e.g. 
concurrency, memory model etc.\n\n[@article@ Top Languages for job ads](https://www.tiobe.com/tiobe-index/)",
+    "description": "Even as a beginner, you have likely learned that Web Development is broadly classified into two facets: Frontend Development and Backend Development, and that each has its own set of tools and technologies. Frontend Development is led first and foremost by three names – HTML, CSS, and JavaScript. In the same way, Backend Web Development primarily requires a backend (that is, server-side) programming language to make the website function, along with various other tools & technologies such as databases, frameworks, and web servers.",
    "links": []
  },
  "_I1E__wCIVrhjMk6IMieE": {
    "title": "Git",
-    "description": "[Git](https://git-scm.com/) is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.\n\nVisit the following resources to learn more:",
+    "description": "Git is a distributed version control system designed to handle projects of any size with speed and efficiency. Created by Linus Torvalds in 2005, it tracks changes in source code during software development, allowing multiple developers to work together on non-linear development. Git maintains a complete history of all changes, enabling easy rollbacks and comparisons between versions. Its distributed nature means each developer has a full copy of the repository, allowing for offline work and backup. Git's key features include branching and merging capabilities, staging area for commits, and support for collaborative workflows like pull requests. 
Its speed, flexibility, and robust branching and merging capabilities have made it the most widely used version control system in software development, particularly for open-source projects and team collaborations.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Introduction to Git", - "url": "https://learn.microsoft.com/en-us/training/modules/intro-to-git/", + "title": "Learn Git & GitHub", + "url": "/git-github", "type": "article" }, { @@ -403,11 +348,6 @@ "url": "https://learngitbranching.js.org/", "type": "article" }, - { - "title": "Git Tutorial", - "url": "https://www.w3schools.com/git/", - "type": "article" - }, { "title": "Explore top posts about Git", "url": "https://app.daily.dev/tags/git?ref=roadmapsh", @@ -422,8 +362,13 @@ }, "ezdqQW9wTUw93F6kjOzku": { "title": "Version Control Systems", - "description": "Version control/source control systems allow developers to track and control changes to code over time. These services often include the ability to make atomic revisions to code, branch/fork off of specific points, and to compare versions of code. They are useful in determining the who, what, when, and why code changes were made.\n\nVisit the following resources to learn more:", + "description": "Version Control Systems (VCS) are tools that manage and track changes to code or documents over time, allowing multiple users to collaborate on a project efficiently. They record every change made to files, enabling developers to revert to previous versions, compare changes, and maintain a history of modifications. VCS can be centralized, where the repository is hosted on a central server (e.g., Subversion), or distributed, where each user has a complete copy of the repository (e.g., Git, Mercurial). 
Version control facilitates collaboration, enhances code integrity, and supports continuous integration by enabling smooth management of concurrent changes and resolving conflicts.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "Learn Git & GitHub", + "url": "/git-github", + "type": "article" + }, { "title": "Git", "url": "https://git-scm.com/", @@ -433,31 +378,26 @@ "title": "What is Version Control?", "url": "https://www.atlassian.com/git/tutorials/what-is-version-control", "type": "article" + }, + { + "title": "Version Control System (VCS) - Everything you need to know", + "url": "https://www.youtube.com/watch?v=SVkuliabq4g", + "type": "video" } ] }, "ptD8EVqwFUYr4W5A_tABY": { "title": "GitHub", - "description": "GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.\n\nVisit the following resources to learn more:", + "description": "GitHub is a web-based platform for version control and collaboration using Git. Owned by Microsoft, it provides hosting for software development and offers features beyond basic Git functionality. GitHub includes tools for project management, code review, and social coding. Key features include repositories for storing code, pull requests for proposing and reviewing changes, issues for tracking bugs and tasks, and actions for automating workflows. It supports both public and private repositories, making it popular for open-source projects and private development. GitHub's collaborative features, like forking repositories and inline code comments, facilitate team development and community contributions. 
With its extensive integrations and large user base, GitHub has become a central hub for developers, serving as a portfolio, collaboration platform, and deployment tool for software projects of all sizes.\n\nVisit the following resources to learn more:", "links": [ { - "title": "GitHub Website", - "url": "https://github.com", - "type": "opensource" - }, - { - "title": "GitHub Documentation", - "url": "https://docs.github.com/en/get-started/quickstart", - "type": "article" - }, - { - "title": "How to Use Git in a Professional Dev Team", - "url": "https://ooloo.io/project/github-flow", + "title": "Learn Git & GitHub", + "url": "/git-github", "type": "article" }, { - "title": "Learn Git Branching", - "url": "https://learngitbranching.js.org/?locale=en_us", + "title": "GitHub Website", + "url": "https://github.com", "type": "article" }, { @@ -479,38 +419,23 @@ "title": "Git and GitHub for Beginners", "url": "https://www.youtube.com/watch?v=RGOj5yH7evk", "type": "video" - }, - { - "title": "Git and GitHub - CS50 Beyond 2019", - "url": "https://www.youtube.com/watch?v=eulnSXkhE7I", - "type": "video" } ] }, "Ry_5Y-BK7HrkIc6X0JG1m": { "title": "Bitbucket", - "description": "Bitbucket is a Git based hosting and source code repository service that is Atlassian's alternative to other products like GitHub, GitLab etc\n\nBitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (customer's on-premise) or Bitbucket Data Centre (number of servers in customers on-premise or cloud environment)\n\nVisit the following resources to learn more:", + "description": "Bitbucket is a web-based version control repository hosting service owned by Atlassian. It primarily uses Git version control systems, offering both cloud-hosted and self-hosted options. Bitbucket provides features such as pull requests for code review, branch permissions, and inline commenting on code. 
It integrates seamlessly with other Atlassian products like Jira and Trello, making it popular among teams already using Atlassian tools. Bitbucket supports continuous integration and deployment through Bitbucket Pipelines. It offers unlimited private repositories for small teams, making it cost-effective for smaller organizations. While similar to GitHub in many aspects, Bitbucket's integration with Atlassian's ecosystem and its pricing model for private repositories are key differentiators. It's widely used for collaborative software development, particularly in enterprise environments already invested in Atlassian's suite of products.\n\nVisit the following resources to learn more:", "links": [ { "title": "Bitbucket Website", "url": "https://bitbucket.org/product", "type": "article" }, - { - "title": "Getting started with Bitbucket", - "url": "https://bitbucket.org/product/guides/basics/bitbucket-interface", - "type": "article" - }, { "title": "Using Git with Bitbucket Cloud", "url": "https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud", "type": "article" }, - { - "title": "A brief overview of Bitbucket", - "url": "https://bitbucket.org/product/guides/getting-started/overview#a-brief-overview-of-bitbucket", - "type": "article" - }, { "title": "Explore top posts about Bitbucket", "url": "https://app.daily.dev/tags/bitbucket?ref=roadmapsh", @@ -520,17 +445,12 @@ "title": "Bitbucket tutorial | How to use Bitbucket Cloud", "url": "https://www.youtube.com/watch?v=M44nEyd_5To", "type": "video" - }, - { - "title": "Bitbucket Tutorial | Bitbucket for Beginners", - "url": "https://www.youtube.com/watch?v=i5T-DB8tb4A", - "type": "video" } ] }, "Wcp-VDdFHipwa7hNAp1z_": { "title": "GitLab", - "description": "GitLab is a provider of internet hosting for software development and version control using Git. 
It offers the distributed version control and source code management functionality of Git, plus its own features.\n\nVisit the following resources to learn more:", + "description": "GitLab is a web-based DevOps platform that provides a complete solution for the software development lifecycle. It offers source code management, continuous integration/continuous deployment (CI/CD), issue tracking, and more, all integrated into a single application. GitLab supports Git repositories and includes features like merge requests (similar to GitHub's pull requests), wiki pages, and issue boards. It emphasizes DevOps practices, providing built-in CI/CD pipelines, container registry, and Kubernetes integration. GitLab offers both cloud-hosted and self-hosted options, giving organizations flexibility in deployment. Its all-in-one approach differentiates it from competitors, as it includes features that might require multiple tools in other ecosystems. GitLab's focus on the entire DevOps lifecycle, from planning to monitoring, makes it popular among enterprises and teams seeking a unified platform for their development workflows.\n\nVisit the following resources to learn more:", "links": [ { "title": "GitLab Website", @@ -546,17 +466,22 @@ "title": "Explore top posts about GitLab", "url": "https://app.daily.dev/tags/gitlab?ref=roadmapsh", "type": "article" + }, + { + "title": "What is Gitlab and Why Use It?", + "url": "https://www.youtube.com/watch?v=bnF7f1zGpo4", + "type": "video" } ] }, "NvUcSDWBhzJZ31nzT4UlE": { "title": "Repo Hosting Services", - "description": "When working on a team, you often need a remote place to put your code so others can access it, create their own branches, and create or review pull requests. These services often include issue tracking, code review, and continuous integration features. 
A few popular choices are GitHub, GitLab, BitBucket, and AWS CodeCommit.\n\nVisit the following resources to learn more:", + "description": "Repo hosting services are platforms that provide storage, management, and collaboration tools for version-controlled code repositories. These services support version control systems like Git, Mercurial, or Subversion, allowing developers to manage and track changes to their codebases, collaborate with others, and automate workflows. Key features often include branching and merging, pull requests, issue tracking, code review, and integration with continuous integration/continuous deployment (CI/CD) pipelines. Popular repo hosting services include GitHub, GitLab, and Bitbucket, each offering various levels of free and paid features tailored to different team sizes and project requirements.\n\nVisit the following resources to learn more:", "links": [ { "title": "GitHub", - "url": "https://github.com/features/", - "type": "opensource" + "url": "https://github.com", + "type": "article" }, { "title": "GitLab", @@ -577,7 +502,7 @@ }, "FihTrMO56kj9jT8O_pO2T": { "title": "PostgreSQL", - "description": "PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and SQL compliance.\n\nVisit the following resources to learn more:", + "description": "PostgreSQL is an advanced, open-source relational database management system (RDBMS) known for its robustness, extensibility, and standards compliance. It supports a wide range of data types and advanced features, including complex queries, foreign keys, and full-text search. PostgreSQL is highly extensible, allowing users to define custom data types, operators, and functions. It supports ACID (Atomicity, Consistency, Isolation, Durability) properties for reliable transaction processing and offers strong support for concurrency and data integrity. 
Its capabilities make it suitable for various applications, from simple web apps to large-scale data warehousing and analytics solutions.\n\nVisit the following resources to learn more:", "links": [ { "title": "Visit Dedicated PostgreSQL DBA Roadmap", @@ -600,21 +525,26 @@ "type": "article" }, { - "title": "Learn PostgreSQL Tutorial - Full Course for Beginners", - "url": "https://www.youtube.com/watch?v=qw--VYLpxG4", + "title": "PostgreSQL in 100 Seconds", + "url": "https://www.youtube.com/watch?v=n2Fluyr3lbc", "type": "video" }, { "title": "Postgres tutorial for Beginners", - "url": "https://www.youtube.com/watch?v=eMIxuk0nOkU", + "url": "https://www.youtube.com/watch?v=SpfIwlAYaKk", "type": "video" } ] }, "dEsTje8kfHwWjCI3zcgLC": { "title": "MS SQL", - "description": "MS SQL (or Microsoft SQL Server) is the Microsoft developed relational database management system (RDBMS). MS SQL uses the T-SQL (Transact-SQL) query language to interact with the relational databases. There are many different versions and editions available of MS SQL\n\nVisit the following resources to learn more:", + "description": "Microsoft SQL Server (MS SQL) is a relational database management system developed by Microsoft for managing and storing structured data. It supports a wide range of data operations, including querying, transaction management, and data warehousing. SQL Server provides tools and features for database design, performance optimization, and security, including support for complex queries through T-SQL (Transact-SQL), data integration with SQL Server Integration Services (SSIS), and business intelligence with SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS). 
It is commonly used in enterprise environments for applications requiring reliable data storage, transaction processing, and reporting.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "SQL Roadmap", + "url": "/sql", + "type": "article" + }, { "title": "MS SQL website", "url": "https://www.microsoft.com/en-ca/sql-server/", @@ -634,7 +564,7 @@ }, "VPxOdjJtKAqmM5V0LR5OC": { "title": "MySQL", - "description": "MySQL is an incredibly popular open source relational database management system (RDBMS). MySQL can be used as a stand-alone client or in conjunction with other services to provide database connectivity. The **M** in LAMP stack stands for MySQL; that alone should provide an idea of its prevalence.\n\nVisit the following resources to learn more:", + "description": "MySQL is an open-source relational database management system (RDBMS) known for its speed, reliability, and ease of use. It uses SQL (Structured Query Language) for database interactions and supports a range of features for data management, including transactions, indexing, and stored procedures. MySQL is widely used for web applications, data warehousing, and various other applications due to its scalability and flexibility. It integrates well with many programming languages and platforms, and is often employed in conjunction with web servers and frameworks in popular software stacks like LAMP (Linux, Apache, MySQL, PHP/Python/Perl). 
MySQL is maintained by Oracle Corporation and has a large community and ecosystem supporting its development and use.\n\nVisit the following resources to learn more:", "links": [ { "title": "MySQL website", @@ -662,15 +592,15 @@ "type": "article" }, { - "title": "MySQL tutorial for beginners", - "url": "https://www.youtube.com/watch?v=7S_tz1z_5bA", + "title": "MySQL Full Course for free", + "url": "https://www.youtube.com/watch?v=5OdVJbNCSso", "type": "video" } ] }, "h1SAjQltHtztSt8QmRgab": { "title": "Oracle", - "description": "Oracle Database Server or sometimes called Oracle RDBMS or even simply Oracle is a world leading relational database management system produced by Oracle Corporation.\n\nVisit the following resources to learn more:", + "description": "Oracle Database is a highly robust, enterprise-grade relational database management system (RDBMS) developed by Oracle Corporation. Known for its scalability, reliability, and comprehensive features, Oracle Database supports complex data management tasks and mission-critical applications. It provides advanced functionalities like SQL querying, transaction management, high availability through clustering, and data warehousing. Oracle's database solutions include support for various data models, such as relational, spatial, and graph, and offer tools for security, performance optimization, and data integration. 
It is widely used in industries requiring large-scale, secure, and high-performance data processing.\n\nVisit the following resources to learn more:", "links": [ { "title": "Official Website", @@ -708,11 +638,6 @@ "url": "https://www.guru99.com/mariadb-vs-mysql.html", "type": "article" }, - { - "title": "W3Schools - MariaDB tutorial ", - "url": "https://www.w3schools.blog/mariadb-tutorial", - "type": "article" - }, { "title": "Explore top posts about Infrastructure", "url": "https://app.daily.dev/tags/infrastructure?ref=roadmapsh", @@ -727,7 +652,7 @@ }, "r45b461NxLN6wBODJ5CNP": { "title": "Relational Databases", - "description": "A relational database is **a type of database that stores and provides access to data points that are related to one another**. Relational databases store data in a series of tables. Interconnections between the tables are specified as foreign keys. A foreign key is a unique reference from one row in a relational table to another row in a table, which can be the same table but is most commonly a different table.\n\nVisit the following resources to learn more:", + "description": "Relational databases are a type of database management system (DBMS) that organizes data into structured tables with rows and columns, using a schema to define data relationships and constraints. They employ Structured Query Language (SQL) for querying and managing data, supporting operations such as data retrieval, insertion, updating, and deletion. Relational databases enforce data integrity through keys (primary and foreign) and constraints (such as unique and not-null), and they are designed to handle complex queries, transactions, and data relationships efficiently. Examples of relational databases include MySQL, PostgreSQL, and Oracle Database. 
They are commonly used for applications requiring structured data storage, strong consistency, and complex querying capabilities.\n\nVisit the following resources to learn more:", "links": [ { "title": "Databases and SQL", @@ -763,7 +688,7 @@ }, "F8frGuv1dunOdcVJ_IiGs": { "title": "NoSQL Databases", - "description": "NoSQL databases offer data storage and retrieval that is modelled differently to \"traditional\" relational databases. NoSQL databases typically focus more on horizontal scaling, eventual consistency, speed and flexibility and is used commonly for big data and real-time streaming applications. NoSQL is often described as a BASE system (**B**asically **A**vailable, **S**oft state, **E**ventual consistency) as opposed to SQL/relational which typically focus on ACID (Atomicity, Consistency, Isolation, Durability). Common NoSQL data structures include key-value pair, wide column, graph and document.\n\nVisit the following resources to learn more:", + "description": "NoSQL databases are a category of database management systems designed for handling unstructured, semi-structured, or rapidly changing data. Unlike traditional relational databases, which use fixed schemas and SQL for querying, NoSQL databases offer flexible data models and can be classified into several types:\n\n1. **Document Stores**: Store data in JSON, BSON, or XML formats, allowing for flexible and hierarchical data structures (e.g., MongoDB, CouchDB).\n2. **Key-Value Stores**: Store data as key-value pairs, suitable for high-speed read and write operations (e.g., Redis, Riak).\n3. **Column-Family Stores**: Store data in columns rather than rows, which is useful for handling large volumes of data and wide columnar tables (e.g., Apache Cassandra, HBase).\n4. 
**Graph Databases**: Optimize the storage and querying of data with complex relationships using graph structures (e.g., Neo4j, Amazon Neptune).\n\nNoSQL databases are often used for applications requiring high scalability, flexibility, and performance, such as real-time analytics, content management systems, and distributed data storage.\n\nVisit the following resources to learn more:", "links": [ { "title": "NoSQL Explained", @@ -789,54 +714,54 @@ }, "Z7jp_Juj5PffSxV7UZcBb": { "title": "ORMs", - "description": "Object-Relational Mapping (ORM) is a technique that lets you query and manipulate data from a database using an object-oriented paradigm. When talking about ORM, most people are referring to a library that implements the Object-Relational Mapping technique, hence the phrase \"an ORM\".\n\nVisit the following resources to learn more:", + "description": "Object-Relational Mapping (ORM) is a programming technique that allows developers to interact with a relational database using object-oriented programming concepts. ORM frameworks map database tables to classes and rows to objects, enabling developers to perform database operations through objects rather than writing raw SQL queries. This abstraction simplifies data manipulation and improves code maintainability by aligning database interactions with the application's object model. ORM tools handle the translation between objects and database schemas, manage relationships, and often provide features like lazy loading and caching. 
Popular ORM frameworks include Hibernate for Java, Entity Framework for .NET, and SQLAlchemy for Python.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Object Relational Mapping - Wikipedia", - "url": "https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping", + "title": "What is an ORM, how does it work, and how should I use one?", + "url": "https://stackoverflow.com/a/1279678", "type": "article" }, { - "title": "What is an ORM, how does it work, and how should I use one?", - "url": "https://stackoverflow.com/a/1279678", + "title": "What is an ORM", + "url": "https://www.freecodecamp.org/news/what-is-an-orm-the-meaning-of-object-relational-mapping-database-tools/", "type": "article" }, { "title": "Explore top posts about Backend Development", "url": "https://app.daily.dev/tags/backend?ref=roadmapsh", "type": "article" + }, + { + "title": "Why Use an ORM?", + "url": "https://www.youtube.com/watch?v=vHt2LC1EM3Q", + "type": "video" } ] }, "Ge2SnKBrQQrU-oGLz6TmT": { "title": "Normalization", - "description": "Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by Edgar F. Codd as part of his relational model.\n\nNormalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).\n\nVisit the following resources to learn more:", + "description": "Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by Edgar F. 
Codd as part of his relational model. Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).\n\nVisit the following resources to learn more:", "links": [ { "title": "What is Normalization in DBMS (SQL)? 1NF, 2NF, 3NF, BCNF Database with Example", "url": "https://www.guru99.com/database-normalization.html", "type": "article" }, - { - "title": "Database normalization", - "url": "https://en.wikipedia.org/wiki/Database_normalization", - "type": "article" - }, { "title": "Explore top posts about Database", "url": "https://app.daily.dev/tags/database?ref=roadmapsh", "type": "article" }, { - "title": "Basic Concept of Database Normalization", - "url": "https://www.youtube.com/watch?v=xoTyrdT9SZI", + "title": "Complete guide to Database Normalization in SQL", + "url": "https://www.youtube.com/watch?v=rBPQ5fg_kiY", "type": "video" } ] }, "qSAdfaGUfn8mtmDjHJi3z": { "title": "ACID", - "description": "ACID are the four properties of relational database systems that help in making sure that we are able to perform the transactions in a reliable manner. It's an acronym which refers to the presence of four properties: atomicity, consistency, isolation and durability\n\nVisit the following resources to learn more:", + "description": "ACID is an acronym representing four key properties that guarantee reliable processing of database transactions. It stands for Atomicity, Consistency, Isolation, and Durability. Atomicity ensures that a transaction is treated as a single, indivisible unit that either completes entirely or fails completely. Consistency maintains the database in a valid state before and after the transaction. 
Isolation ensures that concurrent transactions do not interfere with each other, appearing to execute sequentially. Durability guarantees that once a transaction is committed, it remains so, even in the event of system failures. These properties are crucial in maintaining data integrity and reliability in database systems, particularly in scenarios involving multiple, simultaneous transactions or where data accuracy is critical, such as in financial systems or e-commerce platforms.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is ACID Compliant Database?", @@ -857,28 +782,49 @@ }, "GwApfL4Yx-b5Y8dB9Vy__": { "title": "Failure Modes", - "description": "There are several different failure modes that can occur in a database, including:\n\n* Read contention: This occurs when multiple clients or processes are trying to read data from the same location in the database at the same time, which can lead to delays or errors.\n* Write contention: This occurs when multiple clients or processes are trying to write data to the same location in the database at the same time, which can lead to delays or errors.\n* Thundering herd: This occurs when a large number of clients or processes try to access the same resource simultaneously, which can lead to resource exhaustion and reduced performance.\n* Cascade: This occurs when a failure in one part of the database system causes a chain reaction that leads to failures in other parts of the system.\n* Deadlock: This occurs when two or more transactions are waiting for each other to release a lock on a resource, leading to a standstill.\n* Corruption: This occurs when data in the database becomes corrupted, which can lead to errors or unexpected results when reading or writing to the database.\n* Hardware failure: This occurs when hardware components, such as disk drives or memory, fail, which can lead to data loss or corruption.\n* Software failure: This occurs when software components, such as the 
database management system or application, fail, which can lead to errors or unexpected results.\n* Network failure: This occurs when the network connection between the database and the client is lost, which can lead to errors or timeouts when trying to access the database.\n* Denial of service (DoS) attack: This occurs when a malicious actor attempts to overwhelm the database with requests, leading to resource exhaustion and reduced performance.", + "description": "Database failure modes refer to the various ways in which a database system can malfunction or cease to operate correctly. These include hardware failures (like disk crashes or network outages), software bugs, data corruption, performance degradation due to overload, and inconsistencies in distributed systems. Common failure modes involve data loss, system unavailability, replication lag in distributed databases, and deadlocks. To mitigate these, databases employ strategies such as redundancy, regular backups, transaction logging, and failover mechanisms. Understanding potential failure modes is crucial for designing robust database systems with high availability and data integrity. It informs the implementation of fault tolerance measures, recovery procedures, and monitoring systems to ensure database reliability and minimize downtime in critical applications.", "links": [] }, "rq_y_OBMD9AH_4aoecvAi": { "title": "Transactions", - "description": "In short, a database transaction is a sequence of multiple operations performed on a database, and all served as a single logical unit of work — taking place wholly or not at all. In other words, there's never a case where only half of the operations are performed and the results saved.\n\nVisit the following resources to learn more:", + "description": "In database systems, a transaction is a series of operations that are executed as a single, atomic unit to ensure data integrity and consistency. 
Transactions adhere to the ACID properties: Atomicity ensures all operations complete successfully or none are applied; Consistency maintains the database's valid state; Isolation prevents transactions from interfering with each other; and Durability guarantees that once a transaction is committed, its changes are permanent. These properties collectively ensure that databases handle concurrent operations reliably and maintain accurate and consistent data even in the face of failures.\n\nVisit the following resources to learn more:", "links": [ { "title": "What are Transactions?", "url": "https://fauna.com/blog/database-transaction", "type": "article" + }, + { + "title": "What is a Database transaction?", + "url": "https://www.youtube.com/watch?v=wHUOeXbZCYA", + "type": "video" + }, + { + "title": "ACID Properties in Databases With Examples", + "url": "https://www.youtube.com/watch?v=GAe5oB742dw", + "type": "video" } ] }, "SYXJhanu0lFmGj2m2XXhS": { "title": "Profiling Perfor.", - "description": "There are several ways to profile the performance of a database:\n\n* Monitor system performance: You can use tools like the Windows Task Manager or the Unix/Linux top command to monitor the performance of your database server. These tools allow you to see the overall CPU, memory, and disk usage of the system, which can help identify any resource bottlenecks.\n* Use database-specific tools: Most database management systems (DBMSs) have their own tools for monitoring performance. For example, Microsoft SQL Server has the SQL Server Management Studio (SSMS) and the sys.dm\\_os\\_wait\\_stats dynamic management view, while Oracle has the Oracle Enterprise Manager and the v$waitstat view. These tools allow you to see specific performance metrics, such as the amount of time spent waiting on locks or the number of physical reads and writes.\n* Use third-party tools: There are also several third-party tools that can help you profile the performance of a database. 
Some examples include SolarWinds Database Performance Analyzer, Quest Software Foglight, and Redgate SQL Monitor. These tools often provide more in-depth performance analysis and can help you identify specific issues or bottlenecks.\n* Analyze slow queries: If you have specific queries that are running slowly, you can use tools like EXPLAIN PLAN or SHOW PLAN in MySQL or SQL Server to see the execution plan for the query and identify any potential issues. You can also use tools like the MySQL slow query log or the SQL Server Profiler to capture slow queries and analyze them further.\n* Monitor application performance: If you are experiencing performance issues with a specific application that is using the database, you can use tools like Application Insights or New Relic to monitor the performance of the application and identify any issues that may be related to the database.\n\nHave a look at the documentation for the database that you are using.", - "links": [] + "description": "Profiling performance involves analyzing a system or application's behavior to identify bottlenecks, inefficiencies, and areas for optimization. This process typically involves collecting detailed information about resource usage, such as CPU and memory consumption, I/O operations, and execution time of functions or methods. Profiling tools can provide insights into how different parts of the code contribute to overall performance, highlighting slow or resource-intensive operations. By understanding these performance characteristics, developers can make targeted improvements, optimize code paths, and enhance system responsiveness and scalability. 
Profiling is essential for diagnosing performance issues and ensuring that applications meet desired performance standards.\n\nLearn more from the following resources:",
+    "links": [
+      {
+        "title": "How to Profile SQL Queries for Better Performance",
+        "url": "https://servebolt.com/articles/profiling-sql-queries/",
+        "type": "article"
+      },
+      {
+        "title": "Performance Profiling",
+        "url": "https://www.youtube.com/watch?v=MaauQTeGg2k",
+        "type": "video"
+      }
+    ]
   },
   "bQnOAu863hsHdyNMNyJop": {
     "title": "N+1 Problem",
-    "description": "The N+1 query problem happens when your code executes N additional query statements to fetch the same data that could have been retrieved when executing the primary query.\n\nVisit the following resources to learn more:",
+    "description": "The N+1 problem occurs in database querying when an application performs a query to retrieve a list of items and then issues additional queries to fetch related data for each item individually. This often results in inefficiencies and performance issues because the number of queries issued grows proportionally with the number of items retrieved. For example, if an application retrieves 10 items and then performs an additional query for each item to fetch related details, it ends up executing 11 queries (1 for the list and 10 for the details) instead of just 2. This can severely impact performance, especially with larger datasets. 
Solutions to the N+1 problem typically involve optimizing queries to use joins or batching techniques to retrieve related data in fewer, more efficient queries.\n\nVisit the following resources to learn more:", "links": [ { "title": "In Detail Explanation of N+1 Problem", @@ -894,6 +840,11 @@ "title": "Solving N+1 Problem: For Java Backend Developers", "url": "https://dev.to/jackynote/solving-the-notorious-n1-problem-optimizing-database-queries-for-java-backend-developers-2o0p", "type": "article" + }, + { + "title": "SQLite and the N+1 (no) problem", + "url": "https://www.youtube.com/watch?v=qPfAQY_RahA", + "type": "video" } ] }, @@ -920,11 +871,11 @@ }, "y-xkHFE9YzhNIX3EiWspL": { "title": "Database Indexes", - "description": "An index is a data structure that you build and assign on top of an existing table that basically looks through your table and tries to analyze and summarize so that it can create shortcuts.\n\nVisit the following resources to learn more:", + "description": "Database indexes are data structures that improve the speed of data retrieval operations in a database management system. They work similarly to book indexes, providing a quick way to look up information based on specific columns or sets of columns. Indexes create a separate structure that holds a reference to the actual data, allowing the database engine to find information without scanning the entire table. While indexes significantly enhance query performance, especially for large datasets, they come with trade-offs. They increase storage space requirements and can slow down write operations as the index must be updated with each data modification. Common types include B-tree indexes for general purpose use, bitmap indexes for low-cardinality data, and hash indexes for equality comparisons. 
Proper index design is crucial for optimizing database performance, balancing faster reads against slower writes and increased storage needs.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Database index - Wikipedia", - "url": "https://en.wikipedia.org/wiki/Database_index", + "title": "What is a Database Index?", + "url": "https://www.codecademy.com/article/sql-indexes", "type": "article" }, { @@ -957,12 +908,17 @@ "title": "Explore top posts about Backend Development", "url": "https://app.daily.dev/tags/backend?ref=roadmapsh", "type": "article" + }, + { + "title": "What is Database Sharding?", + "url": "https://www.youtube.com/watch?v=XP98YCr-iXQ", + "type": "video" } ] }, "wrl7HHWXOaxoKVlNZxZ6d": { "title": "Data Replication", - "description": "Data replication is the process by which data residing on a physical/virtual server(s) or cloud instance (primary instance) is continuously replicated or copied to a secondary server(s) or cloud instance (standby instance). Organizations replicate data to support high availability, backup, and/or disaster recovery.\n\nVisit the following resources to learn more:", + "description": "Data replication is the process of creating and maintaining multiple copies of the same data across different locations or nodes in a distributed system. It enhances data availability, reliability, and performance by ensuring that data remains accessible even if one or more nodes fail. Replication can be synchronous (changes are applied to all copies simultaneously) or asynchronous (changes are propagated after being applied to the primary copy). It's widely used in database systems, content delivery networks, and distributed file systems. Replication strategies include master-slave, multi-master, and peer-to-peer models. While improving fault tolerance and read performance, replication introduces challenges in maintaining data consistency across copies and managing potential conflicts. 
Effective replication strategies must balance consistency, availability, and partition tolerance, often in line with the principles of the CAP theorem.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is data replication?", @@ -971,25 +927,20 @@ }, { "title": "What is Data Replication?", - "url": "https://youtu.be/fUrKt-AQYtE", + "url": "https://www.youtube.com/watch?v=iO8a1nMbL1o", "type": "video" } ] }, "LAdKDJ4LcMaDWqslMvE8X": { "title": "CAP Theorem", - "description": "CAP is an acronym that stands for Consistency, Availability and Partition Tolerance. According to CAP theorem, any distributed system can only guarantee two of the three properties at any point of time. You can't guarantee all three properties at once.\n\nVisit the following resources to learn more:", + "description": "The CAP Theorem, also known as Brewer's Theorem, is a fundamental principle in distributed database systems. It states that in a distributed system, it's impossible to simultaneously guarantee all three of the following properties: Consistency (all nodes see the same data at the same time), Availability (every request receives a response, without guarantee that it contains the most recent version of the data), and Partition tolerance (the system continues to operate despite network failures between nodes). According to the theorem, a distributed system can only strongly provide two of these three guarantees at any given time. This principle guides the design and architecture of distributed systems, influencing decisions on data consistency models, replication strategies, and failure handling. 
Understanding the CAP Theorem is crucial for designing robust, scalable distributed systems and for choosing appropriate database solutions for specific use cases in distributed computing environments.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is CAP Theorem?", "url": "https://www.bmc.com/blogs/cap-theorem/", "type": "article" }, - { - "title": "CAP Theorem - Wikipedia", - "url": "https://en.wikipedia.org/wiki/CAP_theorem", - "type": "article" - }, { "title": "An Illustrated Proof of the CAP Theorem", "url": "https://mwhittaker.github.io/blog/an_illustrated_proof_of_the_cap_theorem/", @@ -1009,28 +960,22 @@ }, "95d9itpUZ4s9roZN8kG9x": { "title": "Scaling Databases", - "description": "Scaling databases is the process of adapting them to handle more data and users efficiently. It's achieved by either upgrading existing hardware (vertical scaling) or adding more servers (horizontal scaling). Techniques like sharding and replication are key. This ensures databases continue to be a robust asset as they grow.\n\nVisit the following resources to learn more:", - "links": [ - { - "title": "MongoDB: Database Scaling Basics", - "url": "https://www.mongodb.com/basics/scaling", - "type": "article" - }, - { - "title": "Explore top posts about Backend Development", - "url": "https://app.daily.dev/tags/backend?ref=roadmapsh", - "type": "article" - } - ] + "description": "Scaling databases is the process of adapting them to handle more data and users efficiently. It's achieved by either upgrading existing hardware (vertical scaling) or adding more servers (horizontal scaling). Techniques like sharding and replication are key. 
This ensures databases continue to be a robust asset as they grow.", + "links": [] }, "dLY0KafPstajCcSbslC4M": { "title": "HATEOAS", - "description": "HATEOAS is an acronym for **H**ypermedia **A**s **T**he **E**ngine **O**f **A**pplication **S**tate, it's the concept that when sending information over a RESTful API the document received should contain everything the client needs in order to parse and use the data i.e they don't have to contact any other endpoint not explicitly mentioned within the Document.", + "description": "HATEOAS (Hypermedia As The Engine Of Application State) is a constraint of RESTful architecture that allows clients to navigate an API dynamically through hypermedia links provided in responses. Instead of hard-coding URLs or endpoints, the client discovers available actions through these links, much like a web browser following links on a webpage. This enables greater flexibility and decouples clients from server-side changes, making the system more adaptable and scalable without breaking existing clients. 
It's a key element of REST's principle of statelessness and self-descriptive messages.\n\nLearn more from the following resources:", "links": [ { "title": "What is HATEOAS and why is it important for my REST API?", "url": "https://restcookbook.com/Basics/hateoas/", "type": "article" + }, + { + "title": "What happened to HATEOAS", + "url": "https://www.youtube.com/watch?v=HNTSrytKCoQ", + "type": "video" } ] }, @@ -1044,12 +989,12 @@ "type": "article" }, { - "title": "Official Docs", - "url": "https://jsonapi.org/implementations/", + "title": "What is JSON API?", + "url": "https://medium.com/@niranjan.cs/what-is-json-api-3b824fba2788", "type": "article" }, { - "title": "JSON API: Explained in 4 minutes ", + "title": "JSON API: Explained in 4 minutes", "url": "https://www.youtube.com/watch?v=N-4prIh7t38", "type": "video" } @@ -1057,7 +1002,7 @@ }, "9cD5ag1L0GqHx4_zxc5JX": { "title": "Open API Specs", - "description": "The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.\n\nAn OpenAPI definition can then be used by documentation generation tools to display the API, code generation tools to generate servers and clients in various programming languages, testing tools, and many other use cases.\n\nVisit the following resources to learn more:", + "description": "The OpenAPI Specification (OAS), formerly known as Swagger, is a standard for defining and documenting RESTful APIs. It provides a structured format in YAML or JSON to describe API endpoints, request and response formats, authentication methods, and other metadata. 
By using OAS, developers can create a comprehensive and machine-readable API description that facilitates client generation, automated documentation, and testing. This specification promotes consistency and clarity in API design, enhances interoperability between different systems, and enables tools to generate client libraries, server stubs, and interactive API documentation.\n\nVisit the following resources to learn more:",
     "links": [
       {
         "title": "OpenAPI Specification Website",
@@ -1070,8 +1015,8 @@
         "type": "article"
       },
       {
-        "title": "Official training guide",
-        "url": "https://swagger.io/docs/specification/about/",
-        "type": "article"
+        "title": "REST API and OpenAPI: It’s Not an Either/Or Question",
+        "url": "https://www.youtube.com/watch?v=pRS9LRBgjYg",
+        "type": "video"
       },
       {
@@ -1089,23 +1034,23 @@
         "title": "w3school SOAP explanation",
         "url": "https://www.w3schools.com/xml/xml_soap.asp",
         "type": "article"
+      },
+      {
+        "title": "REST vs SOAP",
+        "url": "https://www.youtube.com/watch?v=_fq8Ye8kodA",
+        "type": "video"
       }
     ]
   },
   "J-TOE2lT4At1mSdNoxPS1": {
     "title": "gRPC",
-    "description": "gRPC is a high-performance, open source universal RPC framework\n\nRPC stands for Remote Procedure Call, there's an ongoing debate on what the g stands for. RPC is a protocol that allows a program to execute a procedure of another program located on another computer. The great advantage is that the developer doesn’t need to code the details of the remote interaction. The remote procedure is called like any other function. But the client and the server can be coded in different languages.\n\nVisit the following resources to learn more:",
+    "description": "gRPC is a high-performance, open source universal RPC framework. RPC stands for Remote Procedure Call; there's an ongoing debate on what the g stands for. RPC is a protocol that allows a program to execute a procedure of another program located on another computer. 
The great advantage is that the developer doesn’t need to code the details of the remote interaction. The remote procedure is called like any other function. But the client and the server can be coded in different languages.\n\nVisit the following resources to learn more:", "links": [ { "title": "gRPC Website", "url": "https://grpc.io/", "type": "article" }, - { - "title": "gRPC Docs", - "url": "https://grpc.io/docs/", - "type": "article" - }, { "title": "What Is GRPC?", "url": "https://www.wallarm.com/what/the-concept-of-grpc", @@ -1125,13 +1070,8 @@ }, "lfNFDZZNdrB0lbEaMtU71": { "title": "REST", - "description": "REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other.\n\nVisit the following resources to learn more:", + "description": "A REST API (Representational State Transfer Application Programming Interface) is an architectural style for designing networked applications. It relies on standard HTTP methods (GET, POST, PUT, DELETE) to interact with resources, which are represented as URIs (Uniform Resource Identifiers). REST APIs are stateless, meaning each request from a client to a server must contain all the information needed to understand and process the request. They use standard HTTP status codes to indicate the outcome of requests and often communicate in formats like JSON or XML. 
REST APIs are widely used due to their simplicity, scalability, and ease of integration with web services and applications.\n\nVisit the following resources to learn more:", "links": [ - { - "title": "REST Fundamental", - "url": "https://dev.to/cassiocappellari/fundamentals-of-rest-api-2nag", - "type": "article" - }, { "title": "What is a REST API?", "url": "https://www.redhat.com/en/topics/api/what-is-a-rest-api", @@ -1151,13 +1091,23 @@ "title": "Explore top posts about REST API", "url": "https://app.daily.dev/tags/rest-api?ref=roadmapsh", "type": "article" + }, + { + "title": "What is a REST API?", + "url": "https://www.youtube.com/watch?v=-mN3VyJuCjM", + "type": "video" } ] }, "zp3bq38tMnutT2N0tktOW": { "title": "GraphQL", - "description": "GraphQL is a query language and runtime system for APIs (application programming interfaces). It is designed to provide a flexible and efficient way for clients to request data from servers, and it is often used as an alternative to REST (representational state transfer) APIs.\n\nOne of the main features of GraphQL is its ability to specify exactly the data that is needed, rather than receiving a fixed set of data from an endpoint. This allows clients to request only the data that they need, and it reduces the amount of data that needs to be transferred over the network.\n\nGraphQL also provides a way to define the structure of the data that is returned from the server, allowing clients to request data in a predictable and flexible way. This makes it easier to build and maintain client applications that depend on data from the server.\n\nGraphQL is widely used in modern web and mobile applications, and it is supported by a large and active developer community.\n\nVisit the following resources to learn more:", + "description": "GraphQL is a query language for APIs and a runtime for executing those queries, developed by Facebook. 
Unlike REST, where fixed endpoints return predefined data, GraphQL allows clients to request exactly the data they need, making API interactions more flexible and efficient. It uses a single endpoint and relies on a schema that defines the types and structure of the available data. This approach reduces over-fetching and under-fetching of data, making it ideal for complex applications with diverse data needs across multiple platforms (e.g., web, mobile).\n\nVisit the following resources to learn more:", "links": [ + { + "title": "GraphQL Roadmap", + "url": "/graphql", + "type": "article" + }, { "title": "GraphQL Official Website", "url": "https://graphql.org/", @@ -1177,8 +1127,13 @@ }, "KWTbEVX_WxS8jmSaAX3Fe": { "title": "Client Side", - "description": "Client-side caching is the storage of network data to a local cache for future re-use. After an application fetches network data, it stores that resource in a local cache. Once a resource has been cached, the browser uses the cache on future requests for that resource to boost performance.\n\nVisit the following resources to learn more:", + "description": "Client-side caching is a technique where web browsers or applications store data locally on the user's device to improve performance and reduce server load. It involves saving copies of web pages, images, scripts, and other resources on the client's system for faster access on subsequent visits. Modern browsers implement various caching mechanisms, including HTTP caching (using headers like Cache-Control and ETag), service workers for offline functionality, and local storage APIs. Client-side caching significantly reduces network traffic and load times, enhancing user experience, especially on slower connections. However, it requires careful management to balance improved performance with the need for up-to-date content. Developers must implement appropriate cache invalidation strategies and consider cache-busting techniques for critical updates. 
Effective client-side caching is crucial for creating responsive, efficient web applications while minimizing server resource usage.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "Client-side Caching", + "url": "https://redis.io/docs/latest/develop/use/client-side-caching/", + "type": "article" + }, { "title": "Everything you need to know about HTTP Caching", "url": "https://www.youtube.com/watch?v=HiBDZgTNpXY", @@ -1195,26 +1150,21 @@ "url": "https://www.cloudflare.com/en-ca/learning/cdn/what-is-a-cdn/", "type": "article" }, - { - "title": "Wikipedia - Content Delivery Network", - "url": "https://en.wikipedia.org/wiki/Content_delivery_network", - "type": "article" - }, { "title": "What is Cloud CDN?", "url": "https://www.youtube.com/watch?v=841kyd_mfH0", "type": "video" }, { - "title": "What is a Content Delivery Network (CDN)?", - "url": "https://www.youtube.com/watch?v=Bsq5cKkS33I", + "title": "What is a CDN and how does it work?", + "url": "https://www.youtube.com/watch?v=RI9np1LWzqw", "type": "video" } ] }, "z1-eP4sV75GBEIdM4NvL9": { "title": "Server Side", - "description": "Server-side caching temporarily stores web files and data on the origin server to reuse later.\n\nWhen the user first requests for the webpage, the website goes under the normal process of retrieving data from the server and generates or constructs the webpage of the website. After the request has happened and the response has been sent back, the server copies the webpage and stores it as a cache.\n\nNext time the user revisits the website, it loads the already saved or cached copy of the webpage, thus making it faster.\n\nVisit the following resources to learn more:", + "description": "Server-side caching is a technique used to improve application performance by storing frequently accessed data in memory on the server, reducing the need for repeated data retrieval or computation. 
This approach helps to speed up response times and reduce the load on databases and other backend services. Common methods include caching database query results, HTML fragments, and API responses. Popular server-side caching tools and technologies include Redis, Memcached, and built-in caching mechanisms in web frameworks. By efficiently managing and serving cached content, server-side caching enhances scalability and responsiveness of applications.\n\nVisit the following resources to learn more:", "links": [ { "title": "Server-side caching and Client-side caching", @@ -1231,11 +1181,6 @@ "url": "https://redis.io/glossary/distributed-caching/", "type": "article" }, - { - "title": "Example - Hibernate caching", - "url": "https://medium.com/@himani.prasad016/caching-in-hibernate-3ad4f479fcc0", - "type": "article" - }, { "title": "Explore top posts about Web Development", "url": "https://app.daily.dev/tags/webdev?ref=roadmapsh", @@ -1245,12 +1190,12 @@ }, "ELj8af7Mi38kUbaPJfCUR": { "title": "Caching", - "description": "Caching is a technique of storing frequently used data or results of complex computations in a local memory, for a certain period. So, next time, when the client requests the same information, instead of retrieving the information from the database, it will give the information from the local memory. The main advantage of caching is that it improves performance by reducing the processing burden.\n\nNB! Caching is a complicated topic that has obvious benefits but can lead to pitfalls like stale data, cache invalidation, distributed caching etc", + "description": "Caching is a technique used in computing to store and retrieve frequently accessed data quickly, reducing the need to fetch it from the original, slower source repeatedly. It involves keeping a copy of data in a location that's faster to access than its primary storage. Caching can occur at various levels, including browser caching, application-level caching, and database caching. 
It significantly improves performance by reducing latency, decreasing network traffic, and lowering the load on servers or databases. Common caching strategies include time-based expiration, least recently used (LRU) algorithms, and write-through or write-back policies. While caching enhances speed and efficiency, it also introduces challenges in maintaining data consistency and freshness. Effective cache management is crucial in balancing performance gains with the need for up-to-date information in dynamic systems.", "links": [] }, "RBrIP5KbVQ2F0ly7kMfTo": { "title": "Web Security", - "description": "Web security refers to the protective measures taken by the developers to protect the web applications from threats that could affect the business.\n\nVisit the following resources to learn more:", + "description": "Web security involves protecting web applications from threats and vulnerabilities to ensure data confidentiality, integrity, and availability. Key practices include strong authentication and authorization mechanisms, using encryption (e.g., SSL/TLS) for secure data transmission, and validating user inputs to prevent attacks like SQL injection and cross-site scripting (XSS). Secure coding practices, effective session management, and regular updates and patching are crucial for maintaining security. 
Additionally, ongoing security testing, including penetration testing and vulnerability assessments, helps identify and address potential weaknesses, safeguarding applications and maintaining user trust.\n\nVisit the following resources to learn more:", "links": [ { "title": "OWASP Web Application Security Testing Checklist", @@ -1262,36 +1207,21 @@ "url": "https://developers.google.com/web/fundamentals/security/encrypt-in-transit/why-https", "type": "article" }, - { - "title": "Wikipedia - OWASP", - "url": "https://en.wikipedia.org/wiki/OWASP", - "type": "article" - }, - { - "title": "OWASP Top 10 Security Risks", - "url": "https://sucuri.net/guides/owasp-top-10-security-vulnerabilities-2021/", - "type": "article" - }, - { - "title": "OWASP Cheatsheets", - "url": "https://cheatsheetseries.owasp.org/cheatsheets/AJAX_Security_Cheat_Sheet.html", - "type": "article" - }, - { - "title": "Content Security Policy (CSP)", - "url": "https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP", - "type": "article" - }, { "title": "Explore top posts about Security", "url": "https://app.daily.dev/tags/security?ref=roadmapsh", "type": "article" + }, + { + "title": "7 Security Risks and Hacking Stories for Web Developers", + "url": "https://www.youtube.com/watch?v=4YOpILi9Oxs", + "type": "video" } ] }, "381Kw1IMRv7CJp-Uf--qd": { "title": "Integration Testing", - "description": "Integration testing is a broad category of tests where multiple software modules are **integrated** and tested as a group. It is meant to test the **interaction** between multiple services, resources, or modules. For example, an API's interaction with a backend service, or a service with a database.\n\nVisit the following resources to learn more:", + "description": "Integration testing focuses on verifying the interactions between different components or modules of a software system to ensure they work together as expected. 
It comes after unit testing and tests how modules communicate with each other, often using APIs, databases, or third-party services. The goal is to catch issues related to the integration points, such as data mismatches, protocol errors, or misconfigurations. Integration tests help ensure that independently developed components can function seamlessly as part of a larger system, making them crucial for identifying bugs that wouldn't surface in isolated unit tests.\n\nVisit the following resources to learn more:", "links": [ { "title": "Integration Testing", @@ -1310,7 +1240,7 @@ }, { "title": "What is Integration Testing?", - "url": "https://youtu.be/QYCaaNz8emY", + "url": "https://www.youtube.com/watch?v=kRD6PA6uxiY", "type": "video" } ] @@ -1331,14 +1261,14 @@ }, { "title": "Functional Testing vs Non-Functional Testing", - "url": "https://youtu.be/j_79AXkG4PY", + "url": "https://www.youtube.com/watch?v=NgQT7miTP9M", "type": "video" } ] }, "3OYm6b9f6WOrKi4KTOZYK": { "title": "Unit Testing", - "description": "Unit testing is where individual **units** (modules, functions/methods, routines, etc.) of software are tested to ensure their correctness. This low-level testing ensures smaller components are functionally sound while taking the burden off of higher-level tests. Generally, a developer writes these tests during the development process and they are run as automated tests.\n\nVisit the following resources to learn more:", + "description": "Unit testing is a software testing method where individual components or units of a program are tested in isolation to ensure they function correctly. This approach focuses on verifying the smallest testable parts of an application, such as functions or methods, by executing them with predefined inputs and comparing the results to expected outcomes. 
Unit tests are typically automated and written by developers during the coding phase to catch bugs early, facilitate code refactoring, and ensure that each unit of code performs as intended. By isolating and testing each component, unit testing helps improve code reliability and maintainability.\n\nVisit the following resources to learn more:", "links": [ { "title": "Unit Testing Tutorial", @@ -1352,14 +1282,14 @@ }, { "title": "What is Unit Testing?", - "url": "https://youtu.be/3kzHmaeozDI", + "url": "https://www.youtube.com/watch?v=W2KOSaetWBk", "type": "video" } ] }, "STQQbPa7PE3gbjMdL6P-t": { "title": "Testing", - "description": "A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.\n\nVisit the following resources to learn more:", + "description": "Testing is a systematic process used to evaluate the functionality, performance, and quality of software or systems to ensure they meet specified requirements and standards. It involves various methodologies and levels, including unit testing (testing individual components), integration testing (verifying interactions between components), system testing (assessing the entire system's behavior), and acceptance testing (confirming it meets user needs). Testing can be manual or automated and aims to identify defects, validate that features work as intended, and ensure the system performs reliably under different conditions. 
Effective testing is critical for delivering high-quality software and mitigating risks before deployment.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is Software Testing?", @@ -1380,7 +1310,7 @@ }, "mGfD7HfuP184lFkXZzGjG": { "title": "CI / CD", - "description": "CI/CD (Continuous Integration/Continuous Deployment) is the practice of automating building, testing, and deployment of applications with the main goal of detecting issues early, and provide quicker releases to the production environment.\n\nVisit the following resources to learn more:", + "description": "CI/CD (Continuous Integration/Continuous Delivery) is a set of practices and tools in software development that automate the process of building, testing, and deploying code changes. Continuous Integration involves frequently merging code changes into a central repository, where automated builds and tests are run. Continuous Delivery extends this by automatically deploying all code changes to a testing or staging environment after the build stage. Some implementations include Continuous Deployment, where changes are automatically released to production. CI/CD pipelines typically involve stages like code compilation, unit testing, integration testing, security scans, and deployment. 
This approach aims to improve software quality, reduce time to market, and increase development efficiency by catching and addressing issues early in the development cycle.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is CI/CD?", @@ -1392,11 +1322,6 @@ "url": "https://thenewstack.io/a-primer-continuous-integration-and-continuous-delivery-ci-cd/", "type": "article" }, - { - "title": "3 Ways to Use Automation in CI/CD Pipelines", - "url": "https://thenewstack.io/3-ways-to-use-automation-in-ci-cd-pipelines/", - "type": "article" - }, { "title": "Articles about CI/CD", "url": "https://thenewstack.io/category/ci-cd/", @@ -1421,7 +1346,7 @@ }, "6XIWO0MoE-ySl4qh_ihXa": { "title": "GOF Design Patterns", - "description": "The Gang of Four (GoF) design patterns are a set of design patterns for object-oriented software development that were first described in the book \"Design Patterns: Elements of Reusable Object-Oriented Software\" by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (also known as the Gang of Four).\n\nThe GoF design patterns are divided into three categories: Creational, Structural and Behavioral.\n\n* Creational Patterns\n* Structural Patterns\n* Behavioral Patterns\n\nLearn more from the following links:", + "description": "The Gang of Four (GoF) Design Patterns are a collection of 23 foundational software design patterns that provide solutions to common object-oriented design problems. These patterns are grouped into three categories: _Creational_ (focused on object creation like Singleton and Factory), _Structural_ (focused on class and object composition like Adapter and Composite), and _Behavioral_ (focused on communication between objects like Observer and Strategy). 
Each pattern offers a proven template for addressing specific design challenges, promoting code reusability, flexibility, and maintainability across software systems.\n\nLearn more from the following links:", "links": [ { "title": "Gangs of Four (GoF) Design Patterns", @@ -1437,39 +1362,44 @@ }, "u8IRw5PuXGUcmxA0YYXgx": { "title": "CQRS", - "description": "CQRS, or command query responsibility segregation, defines an architectural pattern where the main focus is to separate the approach of reading and writing operations for a data store. CQRS can also be used along with Event Sourcing pattern in order to persist application state as an ordered of sequence events, making it possible to restore data to any point in time.\n\nVisit the following resources to learn more:", + "description": "CQRS (Command Query Responsibility Segregation) is an architectural pattern that separates read and write operations for a data store. In this pattern, \"commands\" handle data modification (create, update, delete), while \"queries\" handle data retrieval. The principle behind CQRS is that for many systems, especially complex ones, the requirements for reading data differ significantly from those for writing data. By separating these concerns, CQRS allows for independent scaling, optimization, and evolution of the read and write sides. This can lead to improved performance, scalability, and security. CQRS is often used in event-sourced systems and can be particularly beneficial in high-performance, complex domain applications. 
However, it also introduces additional complexity and should be applied judiciously based on the specific needs and constraints of the system.\n\nVisit the following resources to learn more:", "links": [ { "title": "CQRS Pattern", "url": "https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs", "type": "article" + }, + { + "title": "Learn CQRS Pattern in 5 minutes!", + "url": "https://www.youtube.com/watch?v=eiut3FIY1Cg", + "type": "video" } ] }, "BvHi5obg0L1JDZFKBzx9t": { "title": "Domain Driven Design", - "description": "Domain-driven design (DDD) is a software design approach focusing on modeling software to match a domain according to input from that domain's experts.\n\nIn terms of object-oriented programming, it means that the structure and language of software code (class names, class methods, class variables) should match the business domain. For example, if a software processes loan applications, it might have classes like LoanApplication and Customer, and methods such as AcceptOffer and Withdraw.\n\nDDD connects the implementation to an evolving model and it is predicated on the following goals:\n\n* Placing the project's primary focus on the core domain and domain logic;\n* Basing complex designs on a model of the domain;\n* Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.\n\nVisit the following resources to learn more:", + "description": "Domain-Driven Design (DDD) is a software development approach that focuses on creating a deep understanding of the business domain and using this knowledge to inform the design of software systems. It emphasizes close collaboration between technical and domain experts to develop a shared language (ubiquitous language) and model that accurately represents the core concepts and processes of the business. 
DDD promotes organizing code around business concepts (bounded contexts), using rich domain models to encapsulate business logic, and separating the domain logic from infrastructure concerns. Key patterns in DDD include entities, value objects, aggregates, repositories, and domain services. This approach aims to create more maintainable and flexible software systems that closely align with business needs and can evolve with changing requirements. DDD is particularly valuable for complex domains where traditional CRUD-based architectures may fall short in capturing the nuances and rules of the business.\n\nVisit the following resources to learn more:", "links": [ { "title": "Domain-Driven Design", "url": "https://redis.com/glossary/domain-driven-design-ddd/", "type": "article" }, - { - "title": "Domain-Driven Design: Tackling Complexity in the Heart of Software", - "url": "https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215", - "type": "article" - }, { "title": "Explore top posts about Domain-Driven Design", "url": "https://app.daily.dev/tags/domain-driven-design?ref=roadmapsh", "type": "article" + }, + { + "title": "Domain Driven Design: What You Need To Know", + "url": "https://www.youtube.com/watch?v=4rhzdZIDX_k", + "type": "video" } ] }, "wqE-mkxvehOzOv8UyE39p": { "title": "Event Sourcing", - "description": "Event sourcing is a design pattern in which the state of a system is represented as a sequence of events that have occurred over time. In an event-sourced system, changes to the state of the system are recorded as events and stored in an event store. The current state of the system is derived by replaying the events from the event store.\n\nOne of the main benefits of event sourcing is that it provides a clear and auditable history of all the changes that have occurred in the system. 
This can be useful for debugging and for tracking the evolution of the system over time.\n\nEvent sourcing is often used in conjunction with other patterns, such as Command Query Responsibility Segregation (CQRS) and domain-driven design, to build scalable and responsive systems with complex business logic. It is also useful for building systems that need to support undo/redo functionality or that need to integrate with external systems.\n\nVisit the following resources to learn more:",
+    "description": "Event sourcing is a design pattern in which the state of a system is represented as a sequence of events that have occurred over time. In an event-sourced system, changes to the state of the system are recorded as events and stored in an event store. The current state of the system is derived by replaying the events from the event store. One of the main benefits of event sourcing is that it provides a clear and auditable history of all the changes that have occurred in the system. This can be useful for debugging and for tracking the evolution of the system over time. Event sourcing is often used in conjunction with other patterns, such as Command Query Responsibility Segregation (CQRS) and domain-driven design, to build scalable and responsive systems with complex business logic. 
It is also useful for building systems that need to support undo/redo functionality or that need to integrate with external systems.\n\nVisit the following resources to learn more:", "links": [ { "title": "Event Sourcing - Martin Fowler", @@ -1480,6 +1410,11 @@ "title": "Explore top posts about Architecture", "url": "https://app.daily.dev/tags/architecture?ref=roadmapsh", "type": "article" + }, + { + "title": "Event Sourcing 101", + "url": "https://www.youtube.com/watch?v=lg6aF5PP4Tc", + "type": "video" } ] }, @@ -1503,15 +1438,15 @@ "type": "article" }, { - "title": "Agile in Practice: Test Driven Development", - "url": "https://youtu.be/uGaNkTahrIw", + "title": "Test-Driven Development", + "url": "https://www.youtube.com/watch?v=Jv2uxzhPFl4", "type": "video" } ] }, "Ke522R-4k6TDeiDRyZbbU": { "title": "Monolithic Apps", - "description": "Monolithic architecture is a pattern in which an application handles requests, executes business logic, interacts with the database, and creates the HTML for the front end. In simpler terms, this one application does many things. It's inner components are highly coupled and deployed as one unit.\n\nIt is recommended to build simple applications as a monolith for faster development cycle. Also suitable for Proof-of-Concept(PoC) projects.\n\nVisit the following resources to learn more:", + "description": "Monolithic applications are designed as a single, cohesive unit where all components—such as user interface, business logic, and data access—are tightly integrated and run as a single service. This architecture simplifies development and deployment since the entire application is managed and deployed together. However, it can lead to challenges with scalability, maintainability, and agility as the application grows. Changes to one part of the application may require redeploying the entire system, and scaling might necessitate duplicating the entire application rather than scaling individual components. 
Monolithic architectures can be suitable for smaller applications or projects with less complex requirements, but many organizations transition to microservices or modular architectures to address these limitations as they scale.\n\nVisit the following resources to learn more:", "links": [ { "title": "Pattern: Monolithic Architecture", @@ -1522,12 +1457,17 @@ "title": "Monolithic Architecture - Advantages & Disadvantages", "url": "https://datamify.medium.com/monolithic-architecture-advantages-and-disadvantages-e71a603eec89", "type": "article" + }, + { + "title": "Monolithic vs Microservice Architecture", + "url": "https://www.youtube.com/watch?v=NdeTGlZ__Do", + "type": "video" } ] }, "nkmIv3dNwre4yrULMgTh3": { "title": "Serverless", - "description": "Serverless is an architecture in which a developer builds and runs applications without provisioning or managing servers. With cloud computing/serverless, servers exist but are managed by the cloud provider. Resources are used as they are needed, on demand and often using auto scaling.\n\nVisit the following resources to learn more:", + "description": "Serverless computing is a cloud computing model where developers build and run applications without managing server infrastructure. In this model, cloud providers handle the server management, scaling, and maintenance tasks. Developers deploy code in the form of functions, which are executed in response to events or triggers, and are billed based on the actual usage rather than reserved capacity. This approach simplifies development by abstracting infrastructure concerns, enabling automatic scaling, and reducing operational overhead. 
Common serverless platforms include AWS Lambda, Google Cloud Functions, and Azure Functions, which support a range of event-driven applications and microservices.\n\nVisit the following resources to learn more:", "links": [ { "title": "Serverless", @@ -1553,7 +1493,7 @@ }, "K55h3aqOGe6-hgVhiFisT": { "title": "Microservices", - "description": "Microservice architecture is a pattern in which highly cohesive, loosely coupled services are separately developed, maintained, and deployed. Each component handles an individual function, and when combined, the application handles an overall business function.\n\nVisit the following resources to learn more:", + "description": "Microservices is an architectural style that structures an application as a collection of loosely coupled, independently deployable services. Each microservice focuses on a specific business capability and communicates with others via lightweight protocols, typically HTTP or messaging queues. This approach allows for greater scalability, flexibility, and resilience, as services can be developed, deployed, and scaled independently. Microservices also facilitate the use of diverse technologies and languages for different components, and they support continuous delivery and deployment. 
However, managing microservices involves complexity in terms of inter-service communication, data consistency, and deployment orchestration.\n\nVisit the following resources to learn more:", "links": [ { "title": "Pattern: Microservice Architecture", @@ -1570,11 +1510,6 @@ "url": "https://thenewstack.io/microservices-101/", "type": "article" }, - { - "title": "Primer: Microservices Explained", - "url": "https://thenewstack.io/primer-microservices-explained/", - "type": "article" - }, { "title": "Articles about Microservices", "url": "https://thenewstack.io/category/microservices/", @@ -1584,12 +1519,17 @@ "title": "Explore top posts about Microservices", "url": "https://app.daily.dev/tags/microservices?ref=roadmapsh", "type": "article" + }, + { + "title": "Microservices explained in 5 minutes", + "url": "https://www.youtube.com/watch?v=lL_j7ilk7rc", + "type": "video" } ] }, "n14b7sfTOwsjKTpFC9EZ2": { "title": "Service Mesh", - "description": "A service mesh is an architectural pattern for enhancing communication, security, and management between microservices in a distributed network. It employs a collection of intelligent proxies to manage service-to-service communication, ensuring high availability, efficient load balancing, and robust service discovery. Additionally, a service mesh offers advanced features like observability for monitoring network behavior, and various traffic management capabilities.\n\nIn a typical service mesh setup, each microservice is paired with a proxy. This proxy, often deployed using a sidecar pattern, is responsible not only for handling communication to and from its associated microservice but also for implementing various network functionalities. These functionalities include load balancing, intelligent routing, and ensuring secure data transfer.\n\nThe sidecar pattern, integral to service meshes, involves deploying the proxy as a sidecar container alongside the main microservice container, especially in Kubernetes environments. 
This design allows the service mesh to function independently from the microservices themselves, simplifying management and updates.\n\nPopular service mesh implementations include Istio and Linkerd, which offer robust solutions tailored to modern, cloud-based application architectures.\n\nVisit the following resources to learn more:", + "description": "A service mesh is an architectural pattern for enhancing communication, security, and management between microservices in a distributed network. It employs a collection of intelligent proxies to manage service-to-service communication, ensuring high availability, efficient load balancing, and robust service discovery. Additionally, a service mesh offers advanced features like observability for monitoring network behavior, and various traffic management capabilities. In a typical service mesh setup, each microservice is paired with a proxy. This proxy, often deployed using a sidecar pattern, is responsible not only for handling communication to and from its associated microservice but also for implementing various network functionalities. These functionalities include load balancing, intelligent routing, and ensuring secure data transfer. The sidecar pattern, integral to service meshes, involves deploying the proxy as a sidecar container alongside the main microservice container, especially in Kubernetes environments. 
This design allows the service mesh to function independently from the microservices themselves, simplifying management and updates.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is a Service Mesh (AWS blog)?", @@ -1607,15 +1547,15 @@ "type": "article" }, { - "title": "Microservices pain points and how service mesh can help solve those issues", - "url": "https://www.youtube.com/watch?v=QiXK0B9FhO0", + "title": "What is a Service Mesh?", + "url": "https://www.youtube.com/watch?v=vh1YtWjfcyk", "type": "video" } ] }, "tObmzWpjsJtK4GWhx6pwB": { "title": "SOA", - "description": "SOA, or service-oriented architecture, defines a way to make software components reusable via service interfaces. These interfaces utilize common communication standards in such a way that they can be rapidly incorporated into new applications without having to perform deep integration each time.\n\nVisit the following resources to learn more:", + "description": "Service-Oriented Architecture (SOA) is an architectural pattern where software components, known as services, are designed to be reusable, loosely coupled, and interact over a network. Each service is a self-contained unit that performs a specific business function and communicates with other services through standardized protocols and data formats, such as HTTP and XML. SOA enables organizations to build scalable, flexible, and interoperable systems by allowing services to be developed, deployed, and maintained independently. 
This approach promotes modularity, easier integration of disparate systems, and agility in adapting to changing business requirements.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is SOA?", @@ -1631,17 +1571,32 @@ "title": "Explore top posts about Architecture", "url": "https://app.daily.dev/tags/architecture?ref=roadmapsh", "type": "article" + }, + { + "title": "Service Oriented Architecture (SOA) Simplified", + "url": "https://www.youtube.com/watch?v=PA9RjHI463g", + "type": "video" } ] }, "8DmabQJXlrT__COZrDVTV": { "title": "Twelve Factor Apps", - "description": "The Twelve-Factor App is a methodology for building scalable and maintainable software-as-a-service (SaaS) applications. It is based on a set of best practices that were identified by the authors of the methodology as being essential for building modern, cloud-native applications.\n\nThe Twelve-Factor App methodology consists of the following principles:\n\n* Codebase: There should be a single codebase for the application, with multiple deployments.\n* Dependencies: The application should explicitly declare and isolate its dependencies.\n* Config: The application should store configuration in the environment.\n* Backing services: The application should treat backing services as attached resources.\n* Build, release, run: The application should be built, released, and run as an isolated unit.\n* Processes: The application should be executed as one or more stateless processes.\n* Port binding: The application should expose its services through port binding.\n* Concurrency: The application should scale out by adding more processes, not by adding threads.\n* Disposability: The application should be designed to start and stop quickly.\n* Dev/prod parity: The development, staging, and production environments should be as similar as possible.\n* Logs: The application should treat logs as event streams.\n* Admin processes: The application should run admin/maintenance tasks as 
one-off processes.\n\nThe Twelve-Factor App methodology is widely adopted by developers of SaaS applications, and it is seen as a best practice for building cloud-native applications that are scalable, maintainable, and easy to deploy.\n\nVisit the following resources to learn more:", + "description": "The Twelve-Factor App methodology is a set of principles for building modern, scalable, and maintainable web applications, particularly suited for cloud environments. It emphasizes best practices for developing applications in a way that facilitates portability, scalability, and ease of deployment. Key principles include:\n\n1. **Codebase**: One codebase tracked in version control, with many deploys.\n2. **Dependencies**: Explicitly declare and isolate dependencies.\n3. **Config**: Store configuration in the environment.\n4. **Backing Services**: Treat backing services as attached resources.\n5. **Build, Release, Run**: Separate build and run stages.\n6. **Processes**: Execute the app as one or more stateless processes.\n7. **Port Binding**: Export services via port binding.\n8. **Concurrency**: Scale out via the process model.\n9. **Disposability**: Maximize robustness with fast startup and graceful shutdown.\n10. **Dev/Prod Parity**: Keep development, staging, and production environments as similar as possible.\n11. **Logs**: Treat logs as streams of events.\n12. 
**Admin Processes**: Run administrative or management tasks as one-off processes.\n\nThese principles help create applications that are easy to deploy, manage, and scale in cloud environments, promoting operational simplicity and consistency.\n\nVisit the following resources to learn more:", "links": [ { "title": "The Twelve-Factor App", "url": "https://12factor.net/", "type": "article" + }, + { + "title": "An illustrated guide to 12 Factor Apps", + "url": "https://www.redhat.com/architect/12-factor-app", + "type": "article" + }, + { + "title": "Every Developer NEEDS To Know 12-Factor App Principles", + "url": "https://www.youtube.com/watch?v=FryJt0Tbt9Q", + "type": "video" } ] }, @@ -1663,7 +1618,7 @@ }, "GPFRMcY1DEtRgnaZwJ3vW": { "title": "RabbitMQ", - "description": "With tens of thousands of users, RabbitMQ is one of the most popular open-source message brokers. RabbitMQ is lightweight and easy to deploy on-premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.\n\nVisit the following resources to learn more:", + "description": "RabbitMQ is an open-source message broker that facilitates the exchange of messages between distributed systems using the Advanced Message Queuing Protocol (AMQP). It enables asynchronous communication by queuing and routing messages between producers and consumers, which helps decouple application components and improve scalability and reliability. RabbitMQ supports features such as message durability, acknowledgments, and flexible routing through exchanges and queues. It is highly configurable, allowing for various messaging patterns, including publish/subscribe, request/reply, and point-to-point communication. 
RabbitMQ is widely used in enterprise environments for handling high-throughput messaging and integrating heterogeneous systems.\n\nVisit the following resources to learn more:", "links": [ { "title": "RabbitMQ Tutorials", @@ -1689,7 +1644,7 @@ }, "VoYSis1F1ZfTxMlQlXQKB": { "title": "Kafka", - "description": "Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.\n\nVisit the following resources to learn more:", + "description": "Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant data processing. It acts as a message broker, allowing systems to publish and subscribe to streams of records, similar to a distributed commit log. Kafka is highly scalable and can handle large volumes of data with low latency, making it ideal for real-time analytics, log aggregation, and data integration. It features topics for organizing data streams, partitions for parallel processing, and replication for fault tolerance, enabling reliable and efficient handling of large-scale data flows across distributed systems.\n\nVisit the following resources to learn more:", "links": [ { "title": "Apache Kafka quickstart", @@ -1705,17 +1660,32 @@ "title": "Apache Kafka Fundamentals", "url": "https://www.youtube.com/watch?v=B5j3uNBH8X4", "type": "video" + }, + { + "title": "Kafka in 100 Seconds", + "url": "https://www.youtube.com/watch?v=uvb00oaa3k8", + "type": "video" } ] }, "nJ5FpFgGCRaALcWmAKBKT": { "title": "Message Brokers", - "description": "Message brokers are an inter-application communication technology to help build a common integration mechanism to support cloud-native, microservices-based, serverless, and hybrid cloud architectures. 
Two of the most famous message brokers are `RabbitMQ` and `Apache Kafka`\n\nVisit the following resources to learn more:", + "description": "Message brokers are intermediaries that facilitate communication between distributed systems or components by receiving, routing, and delivering messages. They enable asynchronous message passing, decoupling producers (senders) from consumers (receivers), which improves scalability and flexibility. Common functions of message brokers include message queuing, load balancing, and ensuring reliable message delivery through features like persistence and acknowledgment. Popular message brokers include Apache Kafka, RabbitMQ, and ActiveMQ, each offering different features and capabilities suited to various use cases like real-time data processing, event streaming, or task management.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "What are message brokers?", + "url": "https://www.ibm.com/topics/message-brokers", + "type": "article" + }, { "title": "Introduction to Message Brokers", "url": "https://www.youtube.com/watch?v=57Qr9tk6Uxc", "type": "video" + }, + { + "title": "Kafka vs RabbitMQ", + "url": "https://www.youtube.com/watch?v=_5mu7lZz5X4", + "type": "video" } ] }, @@ -1734,8 +1704,8 @@ "type": "article" }, { - "title": "Linux Container (LXC) Introduction", - "url": "https://youtu.be/_KnmRdK69qM", + "title": "Getting started with LXD Containerization", + "url": "https://www.youtube.com/watch?v=aIwgPKkVj8s", "type": "video" }, { @@ -1747,7 +1717,7 @@ }, "SGVwJme-jT_pbOTvems0v": { "title": "Containerization vs Virtualization", - "description": "Containers and virtual machines are the two most popular approaches to setting up a software infrastructure for your organization.\n\nVisit the following resources to learn more:", + "description": "Containerization and virtualization are both technologies for isolating and running multiple applications on shared hardware, but they differ significantly in 
approach and resource usage. Virtualization creates separate virtual machines (VMs), each with its own operating system, running on a hypervisor. This provides strong isolation but consumes more resources. Containerization, exemplified by Docker, uses a shared operating system kernel to create isolated environments (containers) for applications. Containers are lighter, start faster, and use fewer resources than VMs. They're ideal for microservices architectures and rapid deployment. Virtualization offers better security isolation and is suitable for running different operating systems on the same hardware. Containerization provides greater efficiency and scalability, especially for cloud-native applications. The choice between them depends on specific use cases, security requirements, and infrastructure needs.\n\nVisit the following resources to learn more:", "links": [ { "title": "Containerization vs. Virtualization: Everything you need to know", @@ -1760,15 +1730,15 @@ "type": "article" }, { - "title": "Containerization or Virtualization - The Differences ", - "url": "https://www.youtube.com/watch?v=1WnDHitznGY", + "title": "Virtual Machine (VM) vs Docker", + "url": "https://www.youtube.com/watch?v=a1M_thDTqmU", "type": "video" } ] }, "sVuIdAe08IWJVqAt4z-ag": { "title": "WebSockets", - "description": "Web sockets are defined as a two-way communication between the servers and the clients, which mean both the parties, communicate and exchange data at the same time. This protocol defines a full duplex communication from the ground up. Web sockets take a step forward in bringing desktop rich functionalities to the web browsers.\n\nVisit the following resources to learn more:", + "description": "WebSockets provide a protocol for full-duplex, real-time communication between a client (usually a web browser) and a server over a single, long-lived connection. 
Unlike traditional HTTP, which requires multiple request-response cycles to exchange data, WebSockets establish a persistent connection that allows for continuous data exchange in both directions. This enables efficient real-time interactions, such as live chat, online gaming, and real-time updates on web pages. WebSocket connections start with an HTTP handshake, then upgrade to a WebSocket protocol, facilitating low-latency communication and reducing overhead compared to HTTP polling or long polling.\n\nVisit the following resources to learn more:", "links": [ { "title": "Introduction to WebSockets", @@ -1784,23 +1754,33 @@ "title": "A Beginners Guide to WebSockets", "url": "https://www.youtube.com/watch?v=8ARodQ4Wlf4", "type": "video" + }, + { + "title": "How Web Sockets Work", + "url": "https://www.youtube.com/watch?v=G0_e02DdH7I", + "type": "video" } ] }, "RUSdlokJUcEYbCvq5FJBJ": { "title": "Server Sent Events", - "description": "Server-Sent Events (SSE) is a technology that allows a web server to push data to a client in real-time. It uses an HTTP connection to send a stream of data from the server to the client, and the client can listen for these events and take action when they are received.\n\nSSE is useful for applications that require real-time updates, such as chat systems, stock tickers, and social media feeds. It is a simple and efficient way to establish a long-lived connection between a client and a server, and it is supported by most modern web browsers.\n\nTo use SSE, the client must create an EventSource object and specify the URL of the server-side script that will send the events. The server can then send events by writing them to the response stream with the proper formatting.\n\nVisit the following resources to learn more:", + "description": "Server-Sent Events (SSE) is a technology for sending real-time updates from a server to a web client over a single, persistent HTTP connection. 
It enables servers to push updates to clients efficiently and automatically reconnects if the connection is lost. SSE is ideal for applications needing one-way communication, such as live notifications or real-time data feeds, and uses a simple text-based format for transmitting event data, which can be easily handled by clients using the `EventSource` API in JavaScript.\n\nVisit the following resources to learn more:", "links": [ { "title": "Server-Sent Events - MDN", "url": "https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events", "type": "article" + }, + { + "title": "Server-Sent Events | Postman Level Up", + "url": "https://www.youtube.com/watch?v=KrE044J8jEQ&t=1s", + "type": "video" } ] }, "z5AdThp9ByulmM9uekgm-": { "title": "Nginx", - "description": "NGINX is a powerful web server and uses a non-threaded, event-driven architecture that enables it to outperform Apache if configured correctly. It can also do other important things, such as load balancing, HTTP caching, or be used as a reverse proxy.\n\nVisit the following resources to learn more:", + "description": "Nginx is a high-performance, open-source web server and reverse proxy server known for its efficiency, scalability, and low resource consumption. Originally developed as a web server, Nginx is also commonly used as a load balancer, HTTP cache, and mail proxy. It excels at handling a large number of concurrent connections due to its asynchronous, event-driven architecture. Nginx's features include support for serving static content, handling dynamic content through proxying to application servers, and providing SSL/TLS termination. 
Its modular design allows for extensive customization and integration with various applications and services, making it a popular choice for modern web infrastructures.\n\nVisit the following resources to learn more:", "links": [ { "title": "Official Website", @@ -1816,28 +1796,38 @@ "title": "NGINX Explained in 100 Seconds", "url": "https://www.youtube.com/watch?v=JKxlsvZXG7c", "type": "video" + }, + { + "title": "NGINX Tutorial for Beginners", + "url": "https://www.youtube.com/watch?v=9t9Mp0BGnyI", + "type": "video" } ] }, "Op-PSPNoyj6Ss9CS09AXh": { "title": "Caddy", - "description": "The Caddy web server is an extensible, cross-platform, open-source web server written in Go. It has some really nice features like automatic SSL/HTTPs and a really easy configuration file.\n\nVisit the following resources to learn more:", + "description": "Caddy is a modern, open-source web server written in Go. It's known for its simplicity, automatic HTTPS encryption, and HTTP/2 support out of the box. Caddy stands out for its ease of use, with a simple configuration syntax and the ability to serve static files with zero configuration. It automatically obtains and renews SSL/TLS certificates from Let's Encrypt, making secure deployments straightforward. Caddy supports various plugins and modules for extended functionality, including reverse proxying, load balancing, and dynamic virtual hosting. It's designed with security in mind, implementing modern web standards by default. 
While it may not match the raw performance of servers like Nginx in extremely high-load scenarios, Caddy's simplicity, built-in security features, and low resource usage make it an attractive choice for many web hosting needs, particularly for smaller to medium-sized projects or developers seeking a hassle-free server setup.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "caddyserver/caddy", + "url": "https://github.com/caddyserver/caddy", + "type": "opensource" + }, { "title": "Official Website", "url": "https://caddyserver.com/", "type": "article" }, { - "title": "Getting started with Caddy the HTTPS Web Server from scratch", - "url": "https://www.youtube.com/watch?v=t4naLFSlBpQ", + "title": "How to Make a Simple Caddy 2 Website", + "url": "https://www.youtube.com/watch?v=WgUV_BlHvj0", "type": "video" } ] }, "jjjonHTHHo-NiAf6p9xPv": { "title": "Apache", - "description": "Apache is a free, open-source HTTP server, available on many operating systems, but mainly used on Linux distributions. It is one of the most popular options for web developers, as it accounts for over 30% of all the websites, as estimated by W3Techs.\n\nVisit the following resources to learn more:", + "description": "Apache, officially known as the Apache HTTP Server, is a free, open-source web server software developed and maintained by the Apache Software Foundation. It's one of the most popular web servers worldwide, known for its robustness, flexibility, and extensive feature set. Apache supports a wide range of operating systems and can handle various content types and programming languages through its modular architecture. It offers features like virtual hosting, SSL/TLS support, and URL rewriting. Apache's configuration files allow for detailed customization of server behavior. 
While it has faced competition from newer alternatives like Nginx, especially in high-concurrency scenarios, Apache remains widely used due to its stability, comprehensive documentation, and large community support. It's particularly favored for its ability to integrate with other open-source technologies in the LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack.\n\nVisit the following resources to learn more:", "links": [ { "title": "Apache Server Website", @@ -1853,12 +1843,17 @@ "title": "What is Apache Web Server?", "url": "https://www.youtube.com/watch?v=kaaenHXO4t4", "type": "video" +, { + "title": "Apache vs NGINX", + "url": "https://www.youtube.com/watch?v=9nyiY-psbMs", + "type": "video" } ] }, "0NJDgfe6eMa7qPUOI6Eya": { "title": "MS IIS", - "description": "Internet Information Services (IIS) for Windows® Server is a flexible, secure and manageable Web server for hosting anything on the Web.\n\nVisit the following resources to learn more:", + "description": "Microsoft Internet Information Services (IIS) is a flexible, secure, and high-performance web server developed by Microsoft for hosting and managing web applications and services on Windows Server. IIS supports a variety of web technologies, including ASP.NET, PHP, and static content. It provides features such as request handling, authentication, SSL/TLS encryption, and URL rewriting. IIS also offers robust management tools, including a graphical user interface and command-line options, for configuring and monitoring web sites and applications. 
It is commonly used for deploying enterprise web applications and services in a Windows-based environment, offering integration with other Microsoft technologies and services.\n\nVisit the following resources to learn more:", "links": [ { "title": "Official Website", @@ -1874,18 +1869,28 @@ "title": "Learn Windows Web Server IIS", "url": "https://www.youtube.com/watch?v=1VdxPWwtISA", "type": "video" + }, + { + "title": "What is IIS?", + "url": "https://www.youtube.com/watch?v=hPWSqEXOjQY", + "type": "video" } ] }, "fekyMpEnaGqjh1Cu4Nyc4": { "title": "Web Servers", - "description": "Web servers can be either hardware or software, or perhaps a combination of the two.\n\n### Hardware Side:\n\nA hardware web server is a computer that houses web server software and the files that make up a website (for example, HTML documents, images, CSS stylesheets, and JavaScript files). A web server establishes a connection to the Internet and facilitates the physical data exchange with other web-connected devices.\n\n### Software side:\n\nA software web server has a number of software components that regulate how hosted files are accessed by online users. This is at the very least an HTTP server. Software that knows and understands HTTP and URLs (web addresses) is known as an HTTP server (the protocol your browser uses to view webpages). The content of these hosted websites is sent to the end user's device through an HTTP server, which may be accessed via the domain names of the websites it holds.\n\nBasically, an HTTP request is made by a browser anytime it wants a file that is stored on a web server. The relevant (hardware) web server receives the request, which is then accepted by the appropriate (software) HTTP server, which then locates the requested content and returns it to the browser over HTTP. 
(If the server cannot locate the requested page, it responds with a 404 error.)\n\nVisit the following resources to learn more:", + "description": "Web servers are software or hardware systems that handle requests from clients (usually web browsers) and serve web content, such as HTML pages, images, and other resources. They process incoming HTTP or HTTPS requests, interact with application servers or databases if needed, and send the appropriate response back to the client. Popular web servers include Apache HTTP Server, Nginx, and Microsoft Internet Information Services (IIS). Web servers are essential for hosting websites and web applications, managing traffic, and ensuring reliable access to online resources by handling concurrent connections, serving static and dynamic content, and providing security features like SSL/TLS encryption.\n\nVisit the following resources to learn more:", "links": [ { - "title": "What is a Web Server ", + "title": "What is a Web Server? - Mozilla", "url": "https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_web_server", "type": "article" }, + { + "title": "What is a Web Server?", + "url": "https://www.hostinger.co.uk/tutorials/what-is-a-web-server", + "type": "article" + }, { "title": "Web Server Concepts and Examples", "url": "https://youtu.be/9J1nJOivdyw", @@ -1895,7 +1900,7 @@ }, "SHmbcMRsc3SygEDksJQBD": { "title": "Building For Scale", - "description": "Speaking in general terms, scalability is the ability of a system to handle a growing amount of work by adding resources to it.\n\nA software that was conceived with a scalable architecture in mind, is a system that will support higher workloads without any fundamental changes to it, but don't be fooled, this isn't magic. 
You'll only get so far with smart thinking without adding more sources to it.\n\nFor a system to be scalable, there are certain things you must pay attention to, like:\n\n* Coupling\n* Observability\n* Evolvability\n* Infrastructure\n\nWhen you think about the infrastructure of a scalable system, you have two main ways of building it: using on-premises resources or leveraging all the tools a cloud provider can give you.\n\nThe main difference between on-premises and cloud resources will be FLEXIBILITY, on cloud providers you don't really need to plan ahead, you can upgrade your infrastructure with a couple of clicks, while with on-premises resources you will need a certain level of planning.\n\nVisit the following resources to learn more:", + "description": "Speaking in general terms, scalability is the ability of a system to handle a growing amount of work by adding resources to it. Software conceived with a scalable architecture in mind will support higher workloads without any fundamental changes, but don't be fooled: this isn't magic. You'll only get so far with smart thinking without adding more resources to it. When you think about the infrastructure of a scalable system, you have two main ways of building it: using on-premises resources or leveraging all the tools a cloud provider can give you.\n\nThe main difference between on-premises and cloud resources will be **flexibility**: on cloud providers you don't really need to plan ahead and can upgrade your infrastructure with a couple of clicks, while with on-premises resources you will need a certain level of planning.\n\nVisit the following resources to learn more:", "links": [ { "title": "Scalable Architecture: A Definition and How-To Guide", @@ -1911,8 +1916,13 @@ }, "g8GjkJAhvnSxXTZks0V1g": { "title": "Redis", - "description": "Redis is an open source (BSD licensed), in-memory **data structure store** used as a database, cache, message broker, and streaming engine. 
Redis provides data structures such as [strings](https://redis.io/topics/data-types-intro#strings), [hashes](https://redis.io/topics/data-types-intro#hashes), [lists](https://redis.io/topics/data-types-intro#lists), [sets](https://redis.io/topics/data-types-intro#sets), [sorted sets](https://redis.io/topics/data-types-intro#sorted-sets) with range queries, [bitmaps](https://redis.io/topics/data-types-intro#bitmaps), [hyperloglogs](https://redis.io/topics/data-types-intro#hyperloglogs), [geospatial indexes](https://redis.io/commands/geoadd), and [streams](https://redis.io/topics/streams-intro). Redis has built-in [replication](https://redis.io/topics/replication), [Lua scripting](https://redis.io/commands/eval), [LRU eviction](https://redis.io/topics/lru-cache), [transactions](https://redis.io/topics/transactions), and different levels of [on-disk persistence](https://redis.io/topics/persistence), and provides high availability via [Redis Sentinel](https://redis.io/topics/sentinel) and automatic partitioning with [Redis Cluster](https://redis.io/topics/cluster-tutorial).\n\nVisit the following resources to learn more:", + "description": "Redis is an open-source, in-memory data structure store known for its speed and versatility. It supports various data types, including strings, lists, sets, hashes, and sorted sets, and provides functionalities such as caching, session management, real-time analytics, and message brokering. Redis operates as a key-value store, allowing for rapid read and write operations, and is often used to enhance performance and scalability in applications. It supports persistence options to save data to disk, replication for high availability, and clustering for horizontal scaling. 
Redis is widely used for scenarios requiring low-latency access to data and high-throughput performance.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "Redis Crash Course", + "url": "https://www.youtube.com/watch?v=XCsS_NVAa1g", + "type": "course" + }, { "title": "Redis Website", "url": "https://redis.io/", @@ -1932,28 +1942,28 @@ }, "xPvVwGQw28uMeLYIWn8yn": { "title": "Memcached", - "description": "Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the `libevent` library.\n\nMemcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.\n\nMemcached has no internal mechanism to track misses which may happen. However, some third-party utilities provide this functionality.\n\nVisit the following resources to learn more:", + "description": "Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. 
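The low-latency key-value caching described in the Redis entry above can be sketched without a server. The class below is an illustrative in-memory cache with per-key expiry (the cache-aside/TTL idea, not Redis itself or the `redis-py` client):

```python
import time

class TTLCache:
    """Minimal in-memory key-value store with per-key expiry,
    illustrating the TTL-based caching idea described above."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expires_at)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value

cache = TTLCache()
cache.set("session:42", {"user": "ada"}, ttl=30)
print(cache.get("session:42"))  # {'user': 'ada'}
```

A real Redis deployment adds persistence, replication, and clustering on top of this basic get/set-with-expiry contract.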
It depends on the `libevent` library. Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.\n\nMemcached has no internal mechanism to track misses which may happen. However, some third-party utilities provide this functionality.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Memcached, From Official Github", + "title": "memcached/memcached", "url": "https://github.com/memcached/memcached#readme", "type": "opensource" }, - { - "title": "Memcached, From Wikipedia", - "url": "https://en.wikipedia.org/wiki/Memcached", - "type": "article" - }, { "title": "Memcached Tutorial", "url": "https://www.tutorialspoint.com/memcached/index.htm", "type": "article" + }, + { + "title": "Redis vs Memcached", + "url": "https://www.youtube.com/watch?v=Gyy1SiE8avE", + "type": "video" } ] }, "28U6q_X-NTYf7OSKHjoWH": { "title": "MongoDB", - "description": "MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL).\n\nVisit the following resources to learn more:", + "description": "MongoDB is a NoSQL, open-source database designed for storing and managing large volumes of unstructured or semi-structured data. It uses a document-oriented data model where data is stored in BSON (Binary JSON) format, which allows for flexible and hierarchical data representation. Unlike traditional relational databases, MongoDB doesn't require a fixed schema, making it suitable for applications with evolving data requirements or varying data structures. 
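The least-recently-used purge mentioned above (older data evicted when the table is full) can be sketched in a few lines. This is an illustrative LRU cache, not Memcached's actual slab-based implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used entry once capacity is exceeded,
    the same policy Memcached applies when its memory fills up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key, default=None):
        if key not in self._items:
            return default
        self._items.move_to_end(key)  # mark as recently used
        return self._items[key]

    def set(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a", so "b" becomes least recently used
cache.set("c", 3)  # capacity exceeded: "b" is purged
print(list(cache._items))  # ['a', 'c']
```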
It supports horizontal scaling through sharding and offers high availability with replica sets. MongoDB is commonly used for applications requiring rapid development, real-time analytics, and large-scale data handling, such as content management systems, IoT applications, and big data platforms.\n\nVisit the following resources to learn more:", "links": [ { "title": "Visit Dedicated MongoDB Roadmap", @@ -1965,29 +1975,19 @@ "url": "https://www.mongodb.com/", "type": "article" }, - { - "title": "MongoDB Documentation", - "url": "https://docs.mongodb.com/", - "type": "article" - }, - { - "title": "MongoDB Online Sandbox", - "url": "https://mongoplayground.net/", - "type": "article" - }, { "title": "Learning Path for MongoDB Developers", "url": "https://learn.mongodb.com/catalog", "type": "article" }, { - "title": "Dynamo DB Docs", - "url": "https://docs.aws.amazon.com/dynamodb/index.html", + "title": "MongoDB Online Sandbox", + "url": "https://mongoplayground.net/", "type": "article" }, { - "title": "Official Developers Guide", - "url": "https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html", + "title": "daily.dev MongoDB Feed", + "url": "https://app.daily.dev/tags/mongodb", "type": "article" } ] @@ -2001,47 +2001,47 @@ "url": "https://couchdb.apache.org/", "type": "article" }, - { - "title": "CouchDB Documentation", - "url": "https://docs.couchdb.org/", - "type": "article" - }, - { - "title": "The big NoSQL databases comparison", - "url": "https://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis/", - "type": "article" - }, - { - "title": "pouchdb - a JavaScript database inspired by CouchDB", - "url": "https://pouchdb.com/", - "type": "article" - }, { "title": "Explore top posts about CouchDB", "url": "https://app.daily.dev/tags/couchdb?ref=roadmapsh", "type": "article" + }, + { + "title": "What is CouchDB?", + "url": "https://www.youtube.com/watch?v=Mru4sHzIfSA", + "type": "video" } ] }, "BTNJfWemFKEeNeTyENXui": { "title": "Neo4j", - 
"description": "A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it.\n\nVisit the following resources to learn more:", + "description": "Neo4j is a highly popular open-source graph database designed to store, manage, and query data as interconnected nodes and relationships. Unlike traditional relational databases that use tables and rows, Neo4j uses a graph model where data is represented as nodes (entities) and edges (relationships), allowing for highly efficient querying of complex, interconnected data. It supports Cypher, a declarative query language specifically designed for graph querying, which simplifies operations like traversing relationships and pattern matching. Neo4j is well-suited for applications involving complex relationships, such as social networks, recommendation engines, and fraud detection, where understanding and leveraging connections between data points is crucial.\n\nVisit the following resources to learn more:", "links": [ { - "title": "What is a Graph Database?", - "url": "https://neo4j.com/developer/graph-database/", + "title": "Neo4j Website", + "url": "https://neo4j.com", "type": "article" }, { "title": "Explore top posts about Backend Development", "url": "https://app.daily.dev/tags/backend?ref=roadmapsh", "type": "article" + }, + { + "title": "Neo4j in 100 Seconds", + "url": "https://www.youtube.com/watch?v=T6L9EoBy8Zk", + "type": "video" + }, + { + "title": "Neo4j Course for Beginners", + "url": "https://www.youtube.com/watch?v=_IgbB24scLI", + "type": "video" } ] }, "G9AI_i3MkUE1BsO3_-PH7": { "title": "Graceful Degradation", - "description": "Graceful degradation is a design principle that states that a system should be designed to continue functioning, even if some of its components or features are not available. 
In the context of web development, graceful degradation refers to the ability of a web page or application to continue functioning, even if the user's browser or device does not support certain features or technologies.\n\nGraceful degradation is often used as an alternative to progressive enhancement, a design principle that states that a system should be designed to take advantage of advanced features and technologies if they are available.\n\nVisit the following resources to learn more:", + "description": "Graceful degradation is a design principle that states that a system should be designed to continue functioning, even if some of its components or features are not available. In the context of web development, graceful degradation refers to the ability of a web page or application to continue functioning, even if the user's browser or device does not support certain features or technologies. Graceful degradation is often used as an alternative to progressive enhancement, a design principle that states that a system should be designed to take advantage of advanced features and technologies if they are available.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is Graceful Degradation & Why Does it Matter?", @@ -2054,42 +2054,68 @@ "type": "article" }, { - "title": "The Art of Graceful Degradation", - "url": "https://farfetchtechblog.com/en/blog/post/the-art-of-failure-ii-graceful-degradation/", - "type": "article" + "title": "Graceful Degradation - Georgia Tech", + "url": "https://www.youtube.com/watch?v=Tk7e0LMsAlI", + "type": "video" } ] }, "qAu-Y4KI2Z_y-EqiG86cR": { "title": "Throttling", - "description": "Throttling is a design pattern that is used to limit the rate at which a system or component can be used. 
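The principle above boils down to "fall back instead of failing." A minimal sketch, with a hypothetical recommendation feature that degrades to a static list when its backing service is unavailable:

```python
def fetch_recommendations(user_id, recommender=None):
    """Return personalized recommendations when the (hypothetical)
    recommender service works; otherwise degrade gracefully to a
    static fallback so the page still renders."""
    fallback = ["bestseller-1", "bestseller-2"]
    if recommender is None:
        return fallback  # feature not available at all
    try:
        return recommender(user_id)
    except Exception:
        return fallback  # service failed: degrade, don't break

# Degraded mode: no recommender wired up, the caller still gets content.
print(fetch_recommendations(42))  # ['bestseller-1', 'bestseller-2']
```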
It is commonly used in cloud computing environments to prevent overuse of resources, such as compute power, network bandwidth, or storage capacity.\n\nThere are several ways to implement throttling in a cloud environment:\n\n* Rate limiting: This involves setting a maximum number of requests that can be made to a system or component within a specified time period.\n* Resource allocation: This involves allocating a fixed amount of resources to a system or component, and then limiting the use of those resources if they are exceeded.\n* Token bucket: This involves using a \"bucket\" of tokens to represent the available resources, and then allowing a certain number of tokens to be \"consumed\" by each request. When the bucket is empty, additional requests are denied until more tokens become available.\n\nThrottling is an important aspect of cloud design, as it helps to ensure that resources are used efficiently and that the system remains stable and available. It is often used in conjunction with other design patterns, such as auto-scaling and load balancing, to provide a scalable and resilient cloud environment.\n\nVisit the following resources to learn more:", + "description": "Throttling is a technique used to control the rate at which requests or operations are processed, typically to prevent overloading a system or service. It involves setting limits on the number of requests a user or application can make within a specific time period. Throttling helps manage resource consumption, ensure fair usage, and maintain system stability by avoiding excessive load that could degrade performance or cause outages. 
It is commonly implemented in APIs, network services, and databases to balance demand, protect against abuse, and ensure consistent performance across users and services.\n\nVisit the following resources to learn more:", "links": [ { "title": "Throttling - AWS Well-Architected Framework", "url": "https://docs.aws.amazon.com/wellarchitected/2022-03-31/framework/rel_mitigate_interaction_failure_throttle_requests.html", "type": "article" + }, + { + "title": "Throttling and Debouncing", + "url": "https://dev.to/aneeqakhan/throttling-and-debouncing-explained-1ocb", + "type": "article" + }, + { + "title": "Throttling vs Debouncing", + "url": "https://www.youtube.com/watch?v=tJhA0DrH5co", + "type": "video" } ] }, "JansCqGDyXecQkD1K7E7e": { "title": "Backpressure", - "description": "Backpressure is a design pattern that is used to manage the flow of data through a system, particularly in situations where the rate of data production exceeds the rate of data consumption. It is commonly used in cloud computing environments to prevent overloading of resources and to ensure that data is processed in a timely and efficient manner.\n\nThere are several ways to implement backpressure in a cloud environment:\n\n* Buffering: This involves storing incoming data in a buffer until it can be processed, allowing the system to continue receiving data even if it is temporarily unable to process it.\n* Batching: This involves grouping incoming data into batches and processing the batches in sequence, rather than processing each piece of data individually.\n* Flow control: This involves using mechanisms such as flow control signals or windowing to regulate the rate at which data is transmitted between systems.\n\nBackpressure is an important aspect of cloud design, as it helps to ensure that data is processed efficiently and that the system remains stable and available. 
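One common way to implement the per-time-period request limits described above is a token bucket. A minimal sketch (illustrative, not tied to any particular framework):

```python
import time

class TokenBucket:
    """Each request consumes a token; tokens refill at a fixed rate,
    capping the sustained request rate while allowing short bursts."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity        # maximum burst size
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # throttled: e.g. respond with HTTP 429

bucket = TokenBucket(capacity=2, refill_rate=1)  # 2-request burst, 1 req/s sustained
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```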
It is often used in conjunction with other design patterns, such as auto-scaling and load balancing, to provide a scalable and resilient cloud environment.\n\nVisit the following resources to learn more:", + "description": "Back pressure is a flow control mechanism in systems processing asynchronous data streams, where the receiving component signals its capacity to handle incoming data to the sending component. This feedback loop prevents overwhelming the receiver with more data than it can process, ensuring system stability and optimal performance. In software systems, particularly those dealing with high-volume data or event-driven architectures, back pressure helps manage resource allocation, prevent memory overflows, and maintain responsiveness. It's commonly implemented in reactive programming, message queues, and streaming data processing systems. By allowing the receiver to control the flow of data, back pressure helps create more resilient, efficient systems that can gracefully handle varying loads and prevent cascading failures in distributed systems.\n\nVisit the following resources to learn more:", "links": [ { "title": "Awesome Architecture: Backpressure", "url": "https://awesome-architecture.com/back-pressure/", "type": "article" + }, + { + "title": "Backpressure explained — the resisted flow of data through software", + "url": "https://medium.com/@jayphelps/backpressure-explained-the-flow-of-data-through-software-2350b3e77ce7", + "type": "article" + }, + { + "title": "What is Back Pressure", + "url": "https://www.youtube.com/watch?v=viTGm_cV7lE", + "type": "video" } ] }, "HoQdX7a4SnkFRU4RPQ-D5": { "title": "Loadshifting", - "description": "Load shifting is a design pattern that is used to manage the workload of a system by shifting the load to different components or resources at different times. 
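The capacity signal described above can be demonstrated with a bounded buffer: when the consumer falls behind, the full buffer pushes back on the producer instead of letting memory grow without bound. An illustrative sketch using the standard library:

```python
import queue

# A bounded queue is the simplest backpressure mechanism: once it is
# full, the producer is refused (or blocked) rather than accepted.
buf = queue.Queue(maxsize=3)

accepted, rejected = 0, 0
for event in range(10):      # fast producer, no consumer draining yet
    try:
        buf.put_nowait(event)
        accepted += 1
    except queue.Full:       # the receiver's capacity signal
        rejected += 1        # sender can drop, buffer upstream, or slow down

print(accepted, rejected)  # 3 7
```

In a real pipeline the producer would typically block, retry, or shed load on this signal rather than just counting rejections.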
It is commonly used in cloud computing environments to balance the workload of a system and to optimize the use of resources.\n\nThere are several ways to implement load shifting in a cloud environment:\n\n* Scheduling: This involves scheduling the execution of tasks or workloads to occur at specific times or intervals.\n* Load balancing: This involves distributing the workload of a system across multiple resources, such as servers or containers, to ensure that the workload is balanced and that resources are used efficiently.\n* Auto-scaling: This involves automatically adjusting the number of resources that are available to a system based on the workload, allowing the system to scale up or down as needed.\n\nLoad shifting is an important aspect of cloud design, as it helps to ensure that resources are used efficiently and that the system remains stable and available. It is often used in conjunction with other design patterns, such as throttling and backpressure, to provide a scalable and resilient cloud environment.", - "links": [] + "description": "Load shifting is a strategy used to manage and distribute computing or system workloads more efficiently by moving or redistributing the load from peak times to off-peak periods. This approach helps in balancing the demand on resources, optimizing performance, and reducing costs. In cloud computing and data centers, load shifting can involve rescheduling jobs, leveraging different regions or availability zones, or adjusting resource allocation based on real-time demand. 
By smoothing out peak loads, organizations can enhance system reliability, minimize latency, and better utilize their infrastructure.\n\nLearn more from the following resources:", + "links": [ + { + "title": "Load Shifting 101", + "url": "https://www.youtube.com/watch?v=DOyMJEdk5aE", + "type": "video" + } + ] }, "spkiQTPvXY4qrhhVUkoPV": { "title": "Circuit Breaker", - "description": "The circuit breaker design pattern is a way to protect a system from failures or excessive load by temporarily stopping certain operations if the system is deemed to be in a failed or overloaded state. It is commonly used in cloud computing environments to prevent cascading failures and to improve the resilience and availability of a system.\n\nA circuit breaker consists of three states: closed, open, and half-open. In the closed state, the circuit breaker allows operations to proceed as normal. If the system encounters a failure or becomes overloaded, the circuit breaker moves to the open state, and all subsequent operations are immediately stopped. After a specified period of time, the circuit breaker moves to the half-open state, and a small number of operations are allowed to proceed. If these operations are successful, the circuit breaker moves back to the closed state; if they fail, the circuit breaker moves back to the open state.\n\nThe circuit breaker design pattern is useful for protecting a system from failures or excessive load by providing a way to temporarily stop certain operations and allow the system to recover. It is often used in conjunction with other design patterns, such as retries and fallbacks, to provide a more robust and resilient cloud environment.\n\nVisit the following resources to learn more:", + "description": "The circuit breaker design pattern is a way to protect a system from failures or excessive load by temporarily stopping certain operations if the system is deemed to be in a failed or overloaded state. 
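The peak-to-off-peak rescheduling described above can be reduced to a tiny scheduling rule. The peak window and hour values below are illustrative assumptions:

```python
PEAK_HOURS = set(range(9, 18))  # assumed peak window: 09:00-17:59

def shift_to_off_peak(requested_hour):
    """Defer deferrable work requested during peak hours to the first
    off-peak hour; off-peak requests run as requested. (Sketch only:
    real schedulers weigh priorities, deadlines, and capacity.)"""
    if requested_hour not in PEAK_HOURS:
        return requested_hour
    return 18  # first hour after the peak window

schedule = {h: shift_to_off_peak(h) for h in (3, 10, 16, 22)}
print(schedule)  # {3: 3, 10: 18, 16: 18, 22: 22}
```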
It is commonly used in cloud computing environments to prevent cascading failures and to improve the resilience and availability of a system. A circuit breaker consists of three states: closed, open, and half-open. In the closed state, the circuit breaker allows operations to proceed as normal. If the system encounters a failure or becomes overloaded, the circuit breaker moves to the open state, and all subsequent operations are immediately stopped. After a specified period of time, the circuit breaker moves to the half-open state, and a small number of operations are allowed to proceed. If these operations are successful, the circuit breaker moves back to the closed state; if they fail, the circuit breaker moves back to the open state.\n\nVisit the following resources to learn more:", "links": [ { "title": "Circuit Breaker - AWS Well-Architected Framework", @@ -2097,26 +2123,36 @@ "type": "article" }, { - "title": "Circuit Breaker - Complete Guide", - "url": "https://mateus4k.github.io/posts/circuit-breakers/", + "title": "The Circuit Breaker Pattern", + "url": "https://aerospike.com/blog/circuit-breaker-pattern/", "type": "article" + }, + { + "title": "Back to Basics: Static Stability Using a Circuit Breaker Pattern", + "url": "https://www.youtube.com/watch?v=gy1RITZ7N7s", + "type": "video" } ] }, "f7iWBkC0X7yyCoP_YubVd": { "title": "Migration Strategies", - "description": "Learn how to run database migrations effectively. Especially zero downtime multi-phase schema migrations. 
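The closed/open/half-open state machine described above maps directly to code. A minimal sketch of the pattern (thresholds and timeout values are illustrative):

```python
import time

class CircuitBreaker:
    """Closed -> open after max_failures consecutive failures; open ->
    half-open once reset_timeout elapses; a success in half-open closes
    the circuit again, a failure re-opens it."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn, *args):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # allow a probe request through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.max_failures:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"
        return result

def unreliable():
    raise ConnectionError("backend down")

breaker = CircuitBreaker(max_failures=2, reset_timeout=30.0)
for _ in range(2):
    try:
        breaker.call(unreliable)
    except ConnectionError:
        pass
print(breaker.state)  # open
```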
Rather than make all changes at once, do smaller incremental changes to allow old code, and new code to work with the database at the same time, before removing old code, and finally removing the parts of the database schema which is no longer used.\n\nVisit the following resources to learn more:", + "description": "Migration strategies involve planning and executing the transition of applications, data, or infrastructure from one environment to another, such as from on-premises systems to the cloud or between different cloud providers. Key strategies include:\n\n1. **Rehost (Lift and Shift)**: Moving applications as-is to the new environment with minimal changes, which is often the quickest but may not fully leverage new platform benefits.\n2. **Replatform**: Making some optimizations or changes to adapt applications for the new environment, enhancing performance or scalability while retaining most of the existing architecture.\n3. **Refactor**: Redesigning and modifying applications to optimize for the new environment, often taking advantage of new features and improving functionality or performance.\n4. **Repurchase**: Replacing existing applications with new, often cloud-based, solutions that better meet current needs.\n5. **Retain**: Keeping certain applications or systems in their current environment due to specific constraints or requirements.\n6. 
**Retire**: Decommissioning applications that are no longer needed or are redundant.\n\nEach strategy has its own trade-offs in terms of cost, complexity, and benefits, and the choice depends on factors like the application’s architecture, business needs, and resource availability.\n\nVisit the following resources to learn more:", "links": [ { "title": "Databases as a Challenge for Continuous Delivery", "url": "https://phauer.com/2015/databases-challenge-continuous-delivery/", "type": "article" + }, + { + "title": "AWS Cloud Migration Strategies", + "url": "https://www.youtube.com/watch?v=9ziB82V7qVM", + "type": "video" } ] }, "osQlGGy38xMcKLtgZtWaZ": { "title": "Types of Scaling", - "description": "Horizontal scaling is a change in the **number** of a resource. For example, increasing the number of virtual machines processing messages in a queue. Vertical scaling is a change in the **size/power** of a resource. For example, increasing the memory or disk space available to a machine. Scaling can be applied to databases, cloud resources, and other areas of computing.\n\nVisit the following resources to learn more:", + "description": "Horizontal scaling (scaling out/in) involves adding or removing instances of resources, such as servers or containers, to handle increased or decreased loads. It distributes the workload across multiple instances to improve performance and redundancy. This method enhances the system's capacity by expanding the number of nodes in a distributed system.\n\nVertical scaling (scaling up/down) involves increasing or decreasing the resources (CPU, memory, storage) of a single instance or server to handle more load or reduce capacity. 
This method improves performance by upgrading the existing hardware or virtual machine but has limits based on the maximum capacity of the individual resource.\n\nBoth approaches have their advantages: horizontal scaling offers better fault tolerance and flexibility, while vertical scaling is often simpler to implement but can be limited by the hardware constraints of a single machine.\n\nVisit the following resources to learn more:", "links": [ { "title": "Horizontal vs Vertical Scaling", @@ -2124,8 +2160,8 @@ "type": "article" }, { - "title": "System Design Basics: Horizontal vs. Vertical Scaling", - "url": "https://youtu.be/xpDnVSmNFX0", + "title": "Vertical Vs Horizontal Scaling: Key Differences You Should Know", + "url": "https://www.youtube.com/watch?v=dvRFHG2-uYs", "type": "video" }, { @@ -2137,7 +2173,7 @@ }, "4X-sbqpP0NDhM99bKdqIa": { "title": "Instrumentation", - "description": "Instrumentation refers to the measure of a product's performance, in order to diagnose errors and to write trace information. Instrumentation can be of two types: source instrumentation and binary instrumentation.\n\nBackend monitoring allows the user to view the performance of infrastructure i.e. the components that run a web application. These include the HTTP server, middleware, database, third-party API services, and more.\n\nTelemetry is the process of continuously collecting data from different components of the application. This data helps engineering teams to troubleshoot issues across services and identify the root causes. In other words, telemetry data powers observability for your distributed applications.\n\nVisit the following resources to learn more:", + "description": "Instrumentation, monitoring, and telemetry are critical components for ensuring system reliability and performance. _Instrumentation_ refers to embedding code or tools within applications to capture key metrics, logs, and traces. 
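Horizontal scaling, as described above, pairs extra replicas with something that spreads requests across them. A toy round-robin balancer (illustrative; real load balancers also do health checks and weighting):

```python
import itertools

class RoundRobinBalancer:
    """Horizontal scaling in miniature: capacity grows by adding
    replicas to the pool, and requests are spread across them."""

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def route(self, request):
        return next(self._cycle), request

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])  # scaling out = a longer list
routed = [lb.route(f"req-{i}")[0] for i in range(6)]
print(routed)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

Vertical scaling, by contrast, would keep one `app-1` and give it more CPU or memory, with no change to the routing layer.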
_Monitoring_ involves observing these metrics in real time to detect anomalies, failures, or performance issues, often using dashboards and alerting systems. _Telemetry_ is the automated collection and transmission of this data from distributed systems, enabling visibility into system behavior. Together, these practices provide insights into the health, usage, and performance of systems, aiding in proactive issue resolution and optimizing overall system efficiency.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is Instrumentation?", @@ -2158,43 +2194,59 @@ "title": "Explore top posts about Monitoring", "url": "https://app.daily.dev/tags/monitoring?ref=roadmapsh", "type": "article" + }, + { + "title": "Observability vs. APM vs. Monitoring", + "url": "https://www.youtube.com/watch?v=CAQ_a2-9UOI", + "type": "video" } ] }, "QvMEEsXh0-rzn5hDGcmEv": { "title": "Monitoring", - "description": "Distributed systems are hard to build, deploy and maintain. They consist of multiple components which communicate with each other. In parallel to that, users use the system, resulting in multiple requests. Making sense of this noise is important to understand:\n\n* how the system behaves\n* is it broken\n* is it fast enough\n* what can be improved\n\nA product can integrate with existing monitoring products (APM - application performance management). They can show a detailed view of each request - its user, time, components involved, state(error or OK) etc.\n\nWe can build dashboards with custom events or metrics according to our needs. Automatic alert rules can be configured on top of these events/metrics.\n\nA few popular tools are Grafana, Sentry, Mixpanel, NewRelic etc", + "description": "Monitoring involves continuously observing and tracking the performance, availability, and health of systems, applications, and infrastructure. 
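Instrumentation in its simplest form is a wrapper that records call counts, errors, and latency, the kind of data a monitoring backend would then scrape or receive as telemetry. An illustrative sketch (names are made up for the example):

```python
import functools
import time
from collections import defaultdict

METRICS = defaultdict(lambda: {"calls": 0, "errors": 0, "total_secs": 0.0})

def instrumented(fn):
    """Wrap fn to capture call count, error count, and cumulative
    latency into the METRICS registry."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        m = METRICS[fn.__name__]
        m["calls"] += 1
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            m["errors"] += 1
            raise
        finally:
            m["total_secs"] += time.perf_counter() - start
    return wrapper

@instrumented
def handle_request(n):
    return n * 2

handle_request(21)
print(METRICS["handle_request"]["calls"])  # 1
```

Production systems delegate this to libraries and agents (e.g. OpenTelemetry SDKs) rather than hand-rolled registries, but the captured signals are the same.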
It typically includes collecting and analyzing metrics, logs, and events to ensure systems are operating within desired parameters. Monitoring helps detect anomalies, identify potential issues before they escalate, and provides insights into system behavior. It often involves tools and platforms that offer dashboards, alerts, and reporting features to facilitate real-time visibility and proactive management. Effective monitoring is crucial for maintaining system reliability, performance, and for supporting incident response and troubleshooting.\n\nA few popular tools are Grafana, Sentry, Mixpanel, NewRelic.", "links": [ { - "title": "Observability vs Monitoring?", - "url": "https://www.dynatrace.com/news/blog/observability-vs-monitoring/", + "title": "Top monitoring tools 2024", + "url": "https://thectoclub.com/tools/best-application-monitoring-software/", "type": "article" }, { - "title": "What is APM?", - "url": "https://www.sumologic.com/blog/the-role-of-apm-and-distributed-tracing-in-observability/", + "title": "daily.dev Monitoring Feed", + "url": "https://app.daily.dev/tags/monitoring", "type": "article" }, { - "title": "Top monitoring tools 2024", - "url": "https://thectoclub.com/tools/best-application-monitoring-software/", + "title": "Grafana Explained in 5 Minutes", + "url": "https://www.youtube.com/watch?v=lILY8eSspEo", + "type": "video" + } + ] + }, + "neVRtPjIHP_VG7lHwfah0": { + "title": "Telemetry", + "description": "Telemetry involves the automated collection, transmission, and analysis of data from remote or distributed systems to monitor their performance and health. It provides real-time insights into system operations, helping to identify and diagnose issues, optimize performance, and ensure reliability. Telemetry systems collect metrics such as resource usage, error rates, and system events, which are then analyzed to detect anomalies, track trends, and inform decision-making. 
This data-driven approach is crucial for maintaining and improving the performance and stability of software applications, networks, and hardware systems.\n\nLearn more from the following resources:", + "links": [ + { + "title": "OpenTelemetry Course - Understand Software Performance", + "url": "https://www.youtube.com/watch?v=r8UvWSX3KA8", + "type": "course" + }, + { + "title": "What is telemetry and how does it work?", + "url": "https://www.techtarget.com/whatis/definition/telemetry", "type": "article" }, { - "title": "Caching strategies", - "url": "https://medium.com/@genchilu/cache-strategy-in-backend-d0baaacd2d79", + "title": "daily.dev OpenTelemetry feed", + "url": "https://app.daily.dev/tags/opentelemetry", "type": "article" } ] }, - "neVRtPjIHP_VG7lHwfah0": { - "title": "Telemetry", - "description": "", - "links": [] - }, "jWwA6yX4Zjx-r_KpDaD3c": { "title": "MD5", - "description": "MD5 (Message-Digest Algorithm 5) is a hash function that is currently advised not to be used due to its extensive vulnerabilities. It is still used as a checksum to verify data integrity.\n\nVisit the following resources to learn more:", + "description": "MD5 (Message-Digest Algorithm 5) is a widely used cryptographic hash function that produces a 128-bit hash value, typically represented as a 32-character hexadecimal number. It was designed to provide a unique identifier for data by generating a fixed-size output (the hash) for any input. While MD5 was once popular for verifying data integrity and storing passwords, it is now considered cryptographically broken and unsuitable for security-sensitive applications due to vulnerabilities that allow for collision attacks (where two different inputs produce the same hash). 
As a result, MD5 has largely been replaced by more secure hash functions like SHA-256.\n\nVisit the following resources to learn more:", "links": [ { "title": "Wikipedia - MD5", @@ -2210,44 +2262,49 @@ "title": "Why is MD5 not safe?", "url": "https://infosecscout.com/why-md5-is-not-safe/", "type": "article" + }, + { + "title": "How the MD5 hash function works", + "url": "https://www.youtube.com/watch?v=5MiMK45gkTY", + "type": "video" } ] }, "JVN38r5jENoteia3YeIQ3": { "title": "SHA", - "description": "SHA (Secure Hash Algorithms) is a family of cryptographic hash functions created by the NIST (National Institute of Standards and Technology). The family includes:\n\n* SHA-0: Published in 1993, this is the first algorithm in the family. Shortly after its release, it was discontinued for an undisclosed significant flaw.\n* SHA-1: Created to replace SHA-0 and which resembles MD5, this algorithm has been considered insecure since 2010.\n* SHA-2: This isn't an algorithm, but a set of them, with SHA-256 and SHA-512 being the most popular. SHA-2 is still secure and widely used.\n* SHA-3: Born in a competition, this is the newest member of the family. SHA-3 is very secure and doesn't carry the same design flaws as its brethren.\n\nVisit the following resources to learn more:", + "description": "SHA (Secure Hash Algorithm) is a family of cryptographic hash functions designed to generate a fixed-size hash value from variable-sized input data, ensuring data integrity and security. SHA functions are used for tasks such as verifying data integrity, storing passwords securely, and creating digital signatures. The SHA family includes several versions, such as SHA-1, SHA-2, and SHA-3. SHA-1 produces a 160-bit hash value but is now considered weak due to vulnerabilities, while SHA-2, with hash sizes of 224, 256, 384, and 512 bits, offers stronger security. 
SHA-3 is the latest member, providing additional security features and flexibility.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Wikipedia - SHA-1", - "url": "https://en.wikipedia.org/wiki/SHA-1", + "title": "What is SHA?", + "url": "https://www.encryptionconsulting.com/education-center/what-is-sha/", "type": "article" }, { - "title": "Wikipedia - SHA-2", - "url": "https://en.wikipedia.org/wiki/SHA-2", - "type": "article" - }, - { - "title": "Wikipedia - SHA-3", - "url": "https://en.wikipedia.org/wiki/SHA-3", - "type": "article" + "title": "SHA: Secure Hashing Algorithm", + "url": "https://www.youtube.com/watch?v=DMtFhACPnTY", + "type": "video" } ] }, "kGTALrvCpxyVCXHRmkI7s": { "title": "scrypt", - "description": "Scrypt (pronounced \"ess crypt\") is a password hashing function (like bcrypt). It is designed to use a lot of hardware, which makes brute-force attacks more difficult. Scrypt is mainly used as a proof-of-work algorithm for cryptocurrencies.\n\nVisit the following resources to learn more:", + "description": "scrypt is a key derivation function designed to be computationally intensive and memory-hard to resist brute-force attacks and hardware-based attacks, such as those using GPUs or ASICs. It was developed to provide secure password hashing by making it difficult and costly for attackers to perform large-scale attacks. scrypt combines a hash function with a large amount of memory usage and a CPU-intensive computation process, which ensures that even if an attacker can perform many computations in parallel, the memory requirements make such attacks impractical. 
It is commonly used in cryptographic applications, including secure password storage and cryptocurrency mining.\n\nVisit the following resources to learn more:",
     "links": [
       {
-        "title": "Wikipedia - Scrypt",
-        "url": "https://en.wikipedia.org/wiki/Scrypt",
+        "title": "Wikipedia - Scrypt",
+        "url": "https://en.wikipedia.org/wiki/Scrypt",
+        "type": "article"
+      },
+      {
+        "title": "RFC 7914: The scrypt Password-Based Key Derivation Function",
+        "url": "https://datatracker.ietf.org/doc/html/rfc7914",
         "type": "article"
       }
     ]
   },
   "dlG1bVkDmjI3PEGpkm1xH": {
     "title": "bcrypt",
-    "description": "bcrypt is a password hashing function, that has been proven reliable and secure since it's release in 1999. It has been implemented into most commonly-used programming languages.\n\nVisit the following resources to learn more:",
+    "description": "Bcrypt is a password-hashing function designed to securely hash passwords for storage in databases. Created by Niels Provos and David Mazières, it's based on the Blowfish cipher and incorporates a salt to protect against rainbow table attacks. Bcrypt's key feature is its adaptive nature, allowing for the adjustment of its cost factor to make it slower as computational power increases, thus maintaining resistance against brute-force attacks over time. It produces a fixed-size hash output, typically 60 characters long, which includes the salt and cost factor. Bcrypt is widely used in many programming languages and frameworks due to its security strength and relative ease of implementation. 
Its deliberate slowness in processing makes it particularly effective for password storage, where speed is not a priority but security is paramount.\n\nVisit the following resources to learn more:", "links": [ { "title": "bcrypts npm package", @@ -2261,14 +2318,14 @@ }, { "title": "bcrypt explained", - "url": "https://www.youtube.com/watch?v=O6cmuiTBZVs", + "url": "https://www.youtube.com/watch?v=AzA_LTDoFqY", "type": "video" } ] }, "x-WBJjBd8u93ym5gtxGsR": { "title": "HTTPS", - "description": "HTTPS is a secure way to send data between a web server and a browser.\n\nA communication through HTTPS starts with the handshake phase during which the server and the client agree on how to encrypt the communication, in particular they choose an encryption algorithm and a secret key. After the handshake all the communication between the server and the client will be encrypted using the agreed upon algorithm and key.\n\nThe handshake phase uses a particular kind of cryptography, called asymmetric cryptography, to communicate securely even though client and server have not yet agreed on a secret key. After the handshake phase the HTTPS communication is encrypted with symmetric cryptography, which is much more efficient but requires client and server to both have knowledge of the secret key.\n\nVisit the following resources to learn more:", + "description": "HTTPS (Hypertext Transfer Protocol Secure) is an extension of HTTP designed to secure data transmission between a client (e.g., browser) and a server. It uses encryption through SSL/TLS protocols to ensure data confidentiality, integrity, and authenticity. This prevents sensitive information, like login credentials or payment details, from being intercepted or tampered with by attackers. 
HTTPS is essential for securing web applications and has become a standard for most websites, especially those handling user data, as it helps protect against man-in-the-middle attacks and eavesdropping.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is HTTPS?", @@ -2280,11 +2337,6 @@ "url": "https://developers.google.com/web/fundamentals/security/encrypt-in-transit/why-https", "type": "article" }, - { - "title": "Enabling HTTPS on Your Servers", - "url": "https://web.dev/articles/enable-https", - "type": "article" - }, { "title": "How HTTPS works (comic)", "url": "https://howhttps.works/", @@ -2296,13 +2348,8 @@ "type": "article" }, { - "title": "SSL, TLS, HTTP, HTTPS Explained", - "url": "https://www.youtube.com/watch?v=hExRDVZHhig", - "type": "video" - }, - { - "title": "HTTPS — Stories from the field", - "url": "https://www.youtube.com/watch?v=GoXgl9r0Kjk", + "title": "HTTP vs HTTPS", + "url": "https://www.youtube.com/watch?v=nOmT_5hqgPk", "type": "video" } ] @@ -2317,8 +2364,8 @@ "type": "opensource" }, { - "title": "Wikipedia - OWASP", - "url": "https://en.wikipedia.org/wiki/OWASP", + "title": "OWASP Website", + "url": "https://owasp.org/", "type": "article" }, { @@ -2356,7 +2403,7 @@ }, "LU6WUbkWKbPM1rb2_gEqa": { "title": "CORS", - "description": "Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources.\n\nVisit the following resources to learn more:", + "description": "Cross-Origin Resource Sharing (CORS) is a security mechanism implemented by web browsers to control access to resources (like APIs or fonts) on a web page from a different domain than the one serving the web page. It extends and adds flexibility to the Same-Origin Policy, allowing servers to specify who can access their resources. 
CORS works through a system of HTTP headers, where browsers send a preflight request to the server hosting the cross-origin resource, and the server responds with headers indicating whether the actual request is allowed. This mechanism helps prevent unauthorized access to sensitive data while enabling legitimate cross-origin requests. CORS is crucial for modern web applications that often integrate services and resources from multiple domains, balancing security needs with the functionality requirements of complex, distributed web systems.\n\nVisit the following resources to learn more:", "links": [ { "title": "Cross-Origin Resource Sharing (CORS)", @@ -2382,12 +2429,23 @@ }, "TZ0BWOENPv6pQm8qYB8Ow": { "title": "Server Security", - "description": "Learn about the security of your server and how to secure it. Here are some of the topics off the top of my head:\n\n* Use a firewall: One of the most effective ways to secure a server is to use a firewall to block all unnecessary incoming traffic. You can use iptables on Linux systems or a hardware firewall to do this.\n* Close unnecessary ports: Make sure to close any ports that are not needed for your server to function properly. This will reduce the attack surface of your server and make it more difficult for attackers to gain access.\n* Use strong passwords: Use long, complex passwords for all of your accounts, and consider using a password manager to store them securely.\n* Keep your system up to date: Make sure to keep your operating system and software up to date with the latest security patches. This will help to prevent vulnerabilities from being exploited by attackers.\n* Use SSL/TLS for communication: Use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to encrypt communication between your server and client devices. 
This will help to protect against man-in-the-middle attacks and other types of cyber threats.\n* Use a intrusion detection system (IDS): An IDS monitors network traffic and alerts you to any suspicious activity, which can help you to identify and respond to potential threats in a timely manner.\n* Enable two-factor authentication: Two-factor authentication adds an extra layer of security to your accounts by requiring a second form of authentication, such as a code sent to your phone, in addition to your password.\n\nAlso learn about OpenSSL and creating your own PKI as well as managing certs, renewals, and mutual client auth with x509 certs", - "links": [] + "description": "Server security involves protecting servers from threats and vulnerabilities to ensure the confidentiality, integrity, and availability of the data and services they manage. Key practices include:\n\n1. **Patch Management**: Regularly updating software and operating systems to fix vulnerabilities.\n2. **Access Control**: Implementing strong authentication mechanisms and restricting access to authorized users only.\n3. **Firewalls and Intrusion Detection**: Using firewalls to block unauthorized access and intrusion detection systems to monitor and respond to suspicious activities.\n4. **Encryption**: Encrypting data both in transit and at rest to protect sensitive information from unauthorized access.\n5. **Security Hardening**: Configuring servers with minimal services and features, applying security best practices to reduce the attack surface.\n6. **Regular Backups**: Performing regular backups to ensure data can be restored in case of loss or corruption.\n7. 
**Monitoring and Logging**: Continuously monitoring server activity and maintaining logs for auditing and detecting potential security incidents.\n\nEffective server security is crucial for safeguarding against attacks, maintaining system stability, and protecting sensitive data.\n\nLearn more from the following resources:", + "links": [ + { + "title": "What is a hardened server?", + "url": "https://www.sophos.com/en-us/cybersecurity-explained/what-is-server-hardening", + "type": "article" + }, + { + "title": "10 Tips for Hardening your Linux Servers", + "url": "https://www.youtube.com/watch?v=Jnxx_IAC0G4", + "type": "video" + } + ] }, "HgQBde1zLUFtlwB66PR6_": { "title": "CSP", - "description": "Content Security Policy is a computer security standard introduced to prevent cross-site scripting, clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context.\n\nVisit the following resources to learn more:", + "description": "Content Security Policy (CSP) is a security standard implemented by web browsers to prevent cross-site scripting (XSS), clickjacking, and other code injection attacks. It works by allowing web developers to specify which sources of content are trusted and can be loaded on a web page. CSP is typically implemented through HTTP headers or meta tags, defining rules for various types of resources like scripts, stylesheets, images, and fonts. By restricting the origins from which content can be loaded, CSP significantly reduces the risk of malicious code execution. It also provides features like reporting violations to help developers identify and fix potential security issues. 
While powerful, implementing CSP requires careful configuration to balance security with functionality, especially for sites using third-party resources or inline scripts.\n\nVisit the following resources to learn more:", "links": [ { "title": "MDN — Content Security Policy (CSP)", @@ -2403,12 +2461,17 @@ "title": "Explore top posts about Security", "url": "https://app.daily.dev/tags/security?ref=roadmapsh", "type": "article" + }, + { + "title": "Content Security Policy Explained", + "url": "https://www.youtube.com/watch?v=-LjPRzFR5f0", + "type": "video" } ] }, "yCnn-NfSxIybUQ2iTuUGq": { "title": "How does the internet work?", - "description": "The Internet is a global network of computers connected to each other which communicate through a standardized set of protocols.\n\nVisit the following resources to learn more:", + "description": "The internet is a global network of interconnected computers that communicate using standardized protocols, primarily TCP/IP. When you request a webpage, your device sends a data packet through your internet service provider (ISP) to a DNS server, which translates the website's domain name into an IP address. The packet is then routed across various networks (using routers and switches) to the destination server, which processes the request and sends back the response. This back-and-forth exchange enables the transfer of data like web pages, emails, and files, making the internet a dynamic, decentralized system for global communication.\n\nVisit the following resources to learn more:", "links": [ { "title": "How does the Internet Work?", @@ -2435,11 +2498,6 @@ "url": "https://www.youtube.com/watch?v=x3c1ih2NJEg", "type": "video" }, - { - "title": "How the Internet Works in 5 Minutes", - "url": "https://www.youtube.com/watch?v=7_LPdttKXPc", - "type": "video" - }, { "title": "How does the internet work? 
(Full Course)", "url": "https://www.youtube.com/watch?v=zN8YNNHcaZc", @@ -2449,12 +2507,12 @@ }, "R12sArWVpbIs_PHxBqVaR": { "title": "What is HTTP?", - "description": "HTTP is the `TCP/IP` based application layer communication protocol which standardizes how the client and server communicate with each other. It defines how the content is requested and transmitted across the internet.\n\nVisit the following resources to learn more:", + "description": "HTTP (Hypertext Transfer Protocol) is a protocol used for transmitting hypertext via the World Wide Web. It defines how messages are formatted and transmitted, and how web servers and browsers should respond to various commands. HTTP operates on a request-response model: a client (usually a web browser) sends an HTTP request to a server for resources, such as web pages or files, and the server responds with the requested content and an HTTP status code indicating the result of the request. HTTP is stateless, meaning each request from a client to a server is independent and does not retain information about previous interactions. 
It forms the foundation of data communication on the web and is typically used with secure HTTP (HTTPS) for encrypted communication.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Everything you need to know about HTTP", - "url": "https://cs.fyi/guide/http-in-depth", - "type": "article" + "title": "Full HTTP Networking Course", + "url": "https://www.youtube.com/watch?v=2JYT5f2isg4", + "type": "course" }, { "title": "What is HTTP?", @@ -2471,21 +2529,11 @@ "url": "https://www.smashingmagazine.com/2021/08/http3-core-concepts-part1/", "type": "article" }, - { - "title": "Full HTTP Networking Course", - "url": "https://www.youtube.com/watch?v=2JYT5f2isg4", - "type": "video" - }, { "title": "HTTP/1 to HTTP/2 to HTTP/3", "url": "https://www.youtube.com/watch?v=a-sBfyiXysI", "type": "video" }, - { - "title": "HTTP Crash Course & Exploration", - "url": "https://www.youtube.com/watch?v=iYM2zFP3Zn0", - "type": "video" - }, { "title": "SSL, TLS, HTTPS Explained", "url": "https://www.youtube.com/watch?v=j9QmMEWmcfo", @@ -2495,7 +2543,7 @@ }, "ZhSuu2VArnzPDp6dPQQSC": { "title": "What is Domain Name?", - "description": "A domain name is a unique, easy-to-remember address used to access websites, such as ‘[google.com](http://google.com)’, and ‘[facebook.com](http://facebook.com)’. Users can connect to websites using domain names thanks to the DNS system.\n\nVisit the following resources to learn more:", + "description": "A domain name is a human-readable address used to identify a specific location on the internet, making it easier to access websites and online services. It translates to an IP address, which is a numerical identifier used by computers to locate and connect to servers. A domain name consists of two main parts: the **second-level domain** (e.g., \"example\" in \"[example.com](http://example.com)\") and the **top-level domain** (e.g., \".com\"). 
Domain names are managed by domain name registrars and are essential for establishing a web presence, providing a user-friendly way to navigate to websites instead of using numeric IP addresses.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is a Domain Name?", @@ -2511,12 +2559,17 @@ "title": "A Beginners Guide to How Domain Names Work", "url": "https://www.youtube.com/watch?v=Y4cRx19nhJk", "type": "video" + }, + { + "title": "Everything You Need to Know About Domain Names", + "url": "https://www.youtube.com/watch?v=qO5qcQgiNX4", + "type": "video" } ] }, "aqMaEY8gkKMikiqleV5EP": { "title": "What is hosting?", - "description": "Web hosting is an online service that allows you to publish your website files onto the internet. So, anyone who has access to the internet has access to your website.\n\nVisit the following resources to learn more:", + "description": "Hosting refers to the service of providing server space and resources for storing and delivering website files and applications to users over the internet. Hosting providers offer the infrastructure, such as servers, storage, and network connectivity, required to make websites and applications accessible online. There are various types of hosting, including shared hosting (where multiple websites share a single server), virtual private servers (VPS), dedicated hosting (where a single server is dedicated to one user), and cloud hosting (which uses a network of servers to provide scalable resources). Hosting services often include domain registration, security features, and technical support to ensure websites are reliably available and perform well.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is the difference between webpage, website, web server, and search engine?", @@ -2529,8 +2582,8 @@ "type": "article" }, { - "title": "What Is Web Hosting? 
Explained", - "url": "https://www.youtube.com/watch?v=htbY9-yggB0", + "title": "What is Web Hosting and How Does It Work?", + "url": "https://www.youtube.com/watch?v=H8oAvyqQwew", "type": "video" }, { @@ -2547,7 +2600,7 @@ }, "hkxw9jPGYphmjhTjw8766": { "title": "DNS and how it works?", - "description": "The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain names, like [nytimes.com](http://nytimes.com) or [espn.com](http://espn.com). Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources.\n\nVisit the following resources to learn more:", + "description": "DNS (Domain Name System) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It translates human-readable domain names (like [www.example.com](http://www.example.com)) into IP addresses (like 192.0.2.1) that computers use to identify each other. DNS servers distributed worldwide work together to resolve these queries, forming a global directory service. The system uses a tree-like structure with root servers at the top, followed by top-level domain servers (.com, .org, etc.), authoritative name servers for specific domains, and local DNS servers. DNS is crucial for the functioning of the Internet, enabling users to access websites and services using memorable names instead of numerical IP addresses. 
It also supports email routing, service discovery, and other network protocols.\n\nVisit the following resources to learn more:", "links": [ { "title": "What is DNS?", @@ -2559,11 +2612,6 @@ "url": "https://howdns.works/", "type": "article" }, - { - "title": "Understanding Domain names", - "url": "https://developer.mozilla.org/en-US/docs/Glossary/DNS/", - "type": "article" - }, { "title": "Explore top posts about DNS", "url": "https://app.daily.dev/tags/dns?ref=roadmapsh", @@ -2573,31 +2621,16 @@ "title": "DNS and How does it Work?", "url": "https://www.youtube.com/watch?v=Wj0od2ag5sk", "type": "video" - }, - { - "title": "DNS Records", - "url": "https://www.youtube.com/watch?v=7lxgpKh_fRY", - "type": "video" - }, - { - "title": "Complete DNS mini-series", - "url": "https://www.youtube.com/watch?v=zEmUuNFBgN8&list=PLTk5ZYSbd9MhMmOiPhfRJNW7bhxHo4q-K", - "type": "video" } ] }, "P82WFaTPgQEPNp5IIuZ1Y": { "title": "Browsers and how they work?", - "description": "A web browser is a software application that enables a user to access and display web pages or other online content through its graphical user interface.\n\nVisit the following resources to learn more:", + "description": "Web browsers are software applications that enable users to access, retrieve, and navigate information on the World Wide Web. They interpret and display HTML, CSS, and JavaScript to render web pages. Modern browsers like Google Chrome, Mozilla Firefox, Apple Safari, and Microsoft Edge offer features such as tabbed browsing, bookmarks, extensions, and synchronization across devices. They incorporate rendering engines (e.g., Blink, Gecko, WebKit) to process web content, and JavaScript engines for executing code. Browsers also manage security through features like sandboxing, HTTPS enforcement, and pop-up blocking. They support various web standards and technologies, including HTML5, CSS3, and Web APIs, enabling rich, interactive web experiences. 
With the increasing complexity of web applications, browsers have evolved to become powerful platforms, balancing performance, security, and user experience in the ever-changing landscape of the internet.\n\nVisit the following resources to learn more:", "links": [ { "title": "How Browsers Work", - "url": "https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/", - "type": "article" - }, - { - "title": "Role of Rendering Engine in Browsers", - "url": "https://www.browserstack.com/guide/browser-rendering-engine", + "url": "https://www.ramotion.com/blog/what-is-web-browser/", "type": "article" }, { @@ -2609,23 +2642,18 @@ "title": "Explore top posts about Browsers", "url": "https://app.daily.dev/tags/browsers?ref=roadmapsh", "type": "article" + }, + { + "title": "How Do Web Browsers Work?", + "url": "https://www.youtube.com/watch?v=5rLFYtXHo9s", + "type": "video" } ] }, "PY9G7KQy8bF6eIdr1ydHf": { "title": "Authentication", - "description": "The API authentication process validates the identity of the client attempting to make a connection by using an authentication protocol. The protocol sends the credentials from the remote client requesting the connection to the remote access server in either plain text or encrypted form. The server then knows whether it can grant access to that remote client or not.\n\nHere is the list of common ways of authentication:\n\n* JWT Authentication\n* Token based Authentication\n* Session based Authentication\n* Basic Authentication\n* OAuth - Open Authorization\n* SSO - Single Sign On\n\nVisit the following resources to learn more:", + "description": "API authentication is the process of verifying the identity of clients attempting to access an API, ensuring that only authorized users or applications can interact with the API's resources. Common methods include API keys, OAuth 2.0, JSON Web Tokens (JWT), basic authentication, and OpenID Connect. 
These techniques vary in complexity and security level, from simple token-based approaches to more sophisticated protocols that handle both authentication and authorization. API authentication protects sensitive data, prevents unauthorized access, enables usage tracking, and can provide granular control over resource access. The choice of authentication method depends on factors such as security requirements, types of clients, ease of implementation, and scalability needs. Implementing robust API authentication is crucial for maintaining the integrity, security, and controlled usage of web services and applications in modern, interconnected software ecosystems.\n\nVisit the following resources to learn more:", "links": [ - { - "title": "User Authentication: Understanding the Basics & Top Tips", - "url": "https://swoopnow.com/user-authentication/", - "type": "article" - }, - { - "title": "An overview about authentication methods", - "url": "https://betterprogramming.pub/how-do-you-authenticate-mate-f2b70904cc3a", - "type": "article" - }, { "title": "SSO - Single Sign On", "url": "https://roadmap.sh/guides/sso", @@ -2665,18 +2693,13 @@ }, "UxS_mzVUjLigEwKrXnEeB": { "title": "JWT", - "description": "JWT stands for JSON Web Token is a token-based encryption open standard/methodology that is used to transfer information securely as a JSON object. Clients and Servers use JWT to securely share information, with the JWT containing encoded JSON objects and claims. JWT tokens are designed to be compact, safe to use within URLs, and ideal for SSO contexts.\n\nVisit the following resources to learn more:", + "description": "JWT (JSON Web Token) is an open standard for securely transmitting information between parties as a JSON object. 
It consists of three parts: a header (which specifies the token type and algorithm used for signing), a payload (which contains the claims or the data being transmitted), and a signature (which is used to verify the token’s integrity and authenticity). JWTs are commonly used for authentication and authorization purposes, allowing users to securely transmit and validate their identity and permissions across web applications and APIs. They are compact, self-contained, and can be easily transmitted in HTTP headers, making them popular for modern web and mobile applications.\n\nVisit the following resources to learn more:", "links": [ { "title": "jwt.io Website", "url": "https://jwt.io/", "type": "article" }, - { - "title": "Introduction to JSON Web Tokens", - "url": "https://jwt.io/introduction", - "type": "article" - }, { "title": "What is JWT?", "url": "https://www.akana.com/blog/what-is-jwt", @@ -2691,17 +2714,12 @@ "title": "What Is JWT and Why Should You Use JWT", "url": "https://www.youtube.com/watch?v=7Q17ubqLfaM", "type": "video" - }, - { - "title": "What is JWT? JSON Web Token Explained", - "url": "https://www.youtube.com/watch?v=926mknSW9Lo", - "type": "video" } ] }, "yRiJgjjv2s1uV9vgo3n8m": { "title": "Basic Authentication", - "description": "Given the name \"Basic Authentication\", you should not confuse Basic Authentication with the standard username and password authentication. Basic authentication is a part of the HTTP specification, and the details can be [found in the RFC7617](https://www.rfc-editor.org/rfc/rfc7617.html).\n\nBecause it is a part of the HTTP specifications, all the browsers have native support for \"HTTP Basic Authentication\".\n\nVisit the following resources to learn more:", + "description": "Basic Authentication is a simple HTTP authentication scheme built into the HTTP protocol. It works by sending a user's credentials (username and password) encoded in base64 format within the HTTP header. 
When a client makes a request to a server requiring authentication, the server responds with a 401 status code and a \"WWW-Authenticate\" header. The client then resends the request with the Authorization header containing the word \"Basic\" followed by the base64-encoded string of \"username:password\". While easy to implement, Basic Authentication has significant security limitations: credentials are essentially sent in plain text (base64 is easily decoded), and it doesn't provide any encryption. Therefore, it should only be used over HTTPS connections to ensure the credentials are protected during transmission. Due to its simplicity and lack of advanced security features, Basic Authentication is generally recommended only for simple, low-risk scenarios or as a fallback mechanism.\n\nVisit the following resources to learn more:", "links": [ { "title": "HTTP Basic Authentication", @@ -2713,6 +2731,11 @@ "url": "https://app.daily.dev/tags/authentication?ref=roadmapsh", "type": "article" }, + { + "title": "Basic Authentication in 5 minutes", + "url": "https://www.youtube.com/watch?v=rhi1eIjSbvk", + "type": "video" + }, { "title": "Illustrated HTTP Basic Authentication", "url": "https://www.youtube.com/watch?v=mwccHwUn7Gc", @@ -2722,7 +2745,7 @@ }, "0rGj7FThLJZouSQUhnqGW": { "title": "Token Authentication", - "description": "Token-based authentication is a protocol which allows users to verify their identity, and in return receive a unique access token. During the life of the token, users then access the website or app that the token has been issued for, rather than having to re-enter credentials each time they go back to the same webpage, app, or any resource protected with that same token.\n\nAuth tokens work like a stamped ticket. The user retains access as long as the token remains valid. 
Once the user logs out or quits an app, the token is invalidated.\n\nToken-based authentication is different from traditional password-based or server-based authentication techniques. Tokens offer a second layer of security, and administrators have detailed control over each action and transaction.\n\nBut using tokens requires a bit of coding know-how. Most developers pick up the techniques quickly, but there is a learning curve.\n\nVisit the following resources to learn more:", + "description": "Token-based authentication is a protocol which allows users to verify their identity, and in return receive a unique access token. During the life of the token, users then access the website or app that the token has been issued for, rather than having to re-enter credentials each time they go back to the same webpage, app, or any resource protected with that same token. Auth tokens work like a stamped ticket. The user retains access as long as the token remains valid. Once the user logs out or quits an app, the token is invalidated. Token-based authentication is different from traditional password-based or server-based authentication techniques. Tokens offer a second layer of security, and administrators have detailed control over each action and transaction.\n\nVisit the following resources to learn more:", "links": [ { "title": "What Is Token-Based Authentication?", @@ -2733,12 +2756,17 @@ "title": "Explore top posts about Authentication", "url": "https://app.daily.dev/tags/authentication?ref=roadmapsh", "type": "article" + }, + { + "title": "Why is JWT popular?", + "url": "https://www.youtube.com/watch?v=P2CPd9ynFLg", + "type": "video" } ] }, "vp-muizdICcmU0gN8zmkS": { "title": "OAuth", - "description": "OAuth stands for **O**pen **Auth**orization and is an open standard for authorization. 
It works to authorize devices, APIs, servers and applications using access tokens rather than user credentials, known as \"secure delegated access\".\n\nIn its most simplest form, OAuth delegates authentication to services like Facebook, Amazon, Twitter and authorizes third-party applications to access the user account **without** having to enter their login and password.\n\nIt is mostly utilized for REST/APIs and only provides a limited scope of a user's data.\n\nVisit the following resources to learn more:", + "description": "OAuth is an open standard for authorization that allows third-party applications to access a user's resources without exposing their credentials. It works by issuing access tokens after users grant permission, which applications then use to interact with resource servers on behalf of the user. This process involves a resource owner (the user), a resource server (which holds the data), and an authorization server (which issues tokens). OAuth enables secure, token-based access management, commonly used for granting applications permissions to interact with services like social media accounts or cloud storage.\n\nVisit the following resources to learn more:", "links": [ { "title": "Okta - What the Heck is OAuth", @@ -2756,31 +2784,31 @@ "type": "article" }, { - "title": "What is OAuth really all about", - "url": "https://www.youtube.com/watch?v=t4-416mg6iU", - "type": "video" - }, - { - "title": "OAuth 2.0: An Overview", - "url": "https://www.youtube.com/watch?v=CPbvxxslDTU", + "title": "OAuth 2 Explained In Simple Terms", + "url": "https://www.youtube.com/watch?v=ZV5yTm4pT8g", "type": "video" } ] }, "ffzsh8_5yRq85trFt9Xhk": { "title": "Cookie Based Auth", - "description": "Cookies are pieces of data used to identify the user and their preferences. The browser returns the cookie to the server every time the page is requested. 
Specific cookies like HTTP cookies are used to perform cookie-based authentication to maintain the session for each user.\n\nVisit the following resources to learn more:", + "description": "Cookie-based authentication is a method of maintaining user sessions in web applications. When a user logs in, the server creates a session and sends a unique identifier (session ID) to the client as a cookie. This cookie is then sent with every subsequent request, allowing the server to identify and authenticate the user. The actual session data is typically stored on the server, with the cookie merely serving as a key to access this data. This approach is stateful on the server side and works well for traditional web applications. It's relatively simple to implement and is natively supported by browsers. However, cookie-based authentication faces challenges with cross-origin requests, can be vulnerable to CSRF attacks if not properly secured, and may not be ideal for modern single-page applications or mobile apps. Despite these limitations, it remains a common authentication method, especially for server-rendered web applications.\n\nVisit the following resources to learn more:", "links": [ { "title": "How does cookie based authentication work?", "url": "https://stackoverflow.com/questions/17769011/how-does-cookie-based-authentication-work", "type": "article" + }, + { + "title": "Session vs Token Authentication in 100 Seconds", + "url": "https://www.youtube.com/watch?v=UBUNrFtufWo", + "type": "video" } ] }, "z3EJBpgGm0_Uj3ymhypbX": { "title": "OpenID", - "description": "OpenID is a protocol that utilizes the authorization and authentication mechanisms of OAuth 2.0 and is now widely adopted by many identity providers on the Internet. It solves the problem of needing to share user's personal info between many different web services(e.g. 
online shops, discussion forums etc.)\n\nVisit the following resources to learn more:", + "description": "OpenID is an open standard for decentralized authentication that allows users to log in to multiple websites and applications using a single set of credentials, managed by an identity provider (IdP). It enables users to authenticate their identity through an external service, simplifying the login process and reducing the need for multiple usernames and passwords. OpenID typically works in conjunction with OAuth 2.0 for authorization, allowing users to grant access to their data while maintaining security. This approach enhances user convenience and streamlines identity management across various platforms.\n\nVisit the following resources to learn more:", "links": [ { "title": "Official Website", @@ -2788,13 +2816,8 @@ "type": "article" }, { - "title": "What is OpenID", - "url": "https://openid.net/connect/", - "type": "article" - }, - { - "title": "OAuth vs OpenID", - "url": "https://securew2.com/blog/oauth-vs-openid-which-is-better", + "title": "OpenID Connect Protocol", + "url": "https://auth0.com/docs/authenticate/protocols/openid-connect-protocol", "type": "article" }, { @@ -2816,8 +2839,19 @@ }, "UCHtaePVxS-0kpqlYxbfC": { "title": "SAML", - "description": "Security Assertion Markup Language (SAML)\n-----------------------------------------\n\n**SAML** stands for Security Assertion Markup Language. It is an XML-based standard for exchanging authentication and authorization data between parties, particularly between an identity provider (IdP) and a service provider (SP). In a SAML-based system, a user requests access to a protected resource. 
The service provider asks the identity provider to authenticate the user and assert whether they are granted access to the resource.\n\n### Benefits of SAML\n\nSome advantages of using SAML include:\n\n* Single Sign-On (SSO): Users can log in once at the IdP and access multiple service providers without needing to authenticate again.\n* Improved security: Passwords and user credentials are not required to be stored and managed by the service provider, reducing the potential vectors for attack.\n* Increased efficiency: As users no longer need to maintain multiple sets of credentials, managing access becomes easier for both the user and the system administrators.\n* Interoperability: SAML enables a wide range of applications to work together, regardless of the underlying technology or platform.\n\n### SAML Components\n\nThree main components are involved in the SAML architecture:\n\n1. **Identity Provider (IdP)**: The entity that manages users' identities and authenticates them by providing security tokens, also called assertions.\n2. **Service Provider (SP)**: The entity that provides a service (such as a web application or API) and relies on the identity provider to authenticate users and grant/deny access to the resources.\n3. **User/Principal**: The end user seeking access to the service provided by the service provider.\n\n### SAML Workflow\n\nThe SAML authentication process consists of the following steps:\n\n1. The user requests access to a protected resource from the service provider.\n2. If the user is not already authenticated, the service provider generates and sends a SAML authentication request to the identity provider.\n3. The identity provider authenticates the user (using, e.g., a username and password, multi-factor authentication, or another method).\n4. The identity provider constructs a SAML response, which includes details about the user and asserts whether the user is authorized to access the requested resource.\n5. 
The SAML response is sent back to the service provider, typically via the user's web browser or API client.\n6. The service provider processes the SAML response, extracts the necessary information, and grants or denies access to the user based on the identity provider's assertion.\n\nWith SAML, you can streamline user authentication and authorization across various applications and systems, providing a better user experience and improving your overall backend security.", - "links": [] + "description": "Security Assertion Markup Language (SAML)\n-----------------------------------------\n\nSecurity Assertion Markup Language (SAML) is an XML-based framework used for single sign-on (SSO) and identity federation, enabling users to authenticate once and gain access to multiple applications or services. It allows for the exchange of authentication and authorization data between an identity provider (IdP) and a service provider (SP). SAML assertions are XML documents that contain user identity information and attributes, and are used to convey authentication credentials and permissions. 
By implementing SAML, organizations can streamline user management, enhance security through centralized authentication, and simplify the user experience by reducing the need for multiple logins across different systems.\n\nLearn more from the following resources:", + "links": [ + { + "title": "SAML Explained in Plain English", + "url": "https://www.onelogin.com/learn/saml", + "type": "article" + }, + { + "title": "How SAML Authentication Works", + "url": "https://www.youtube.com/watch?v=VzRnb9u8T1A", + "type": "video" + } + ] }, "NulaE1isWqn-feYHg4YQT": { "title": "Elasticsearch", @@ -2837,13 +2871,23 @@ "title": "Explore top posts about ELK", "url": "https://app.daily.dev/tags/elk?ref=roadmapsh", "type": "article" + }, + { + "title": "What is Elasticsearch", + "url": "https://www.youtube.com/watch?v=ZP0NmfyfsoM", + "type": "video" } ] }, "iN_1EuIwCx_7lRBw1Io4U": { "title": "Solr", - "description": "Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites.\n\nVisit the following resources to learn more:", + "description": "Solr is an open-source, highly scalable search platform built on Apache Lucene, designed for full-text search, faceted search, and real-time indexing. It provides powerful features for indexing and querying large volumes of data with high performance and relevance. Solr supports complex queries, distributed searching, and advanced text analysis, including tokenization and stemming. 
It offers features such as faceted search, highlighting, and geographic search, and is commonly used for building search engines and data retrieval systems in various applications, from e-commerce to content management.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "apache/solr", + "url": "https://github.com/apache/solr", + "type": "opensource" + }, { "title": "Official Website", "url": "https://solr.apache.org/", @@ -2853,69 +2897,138 @@ "title": "Official Documentation", "url": "https://solr.apache.org/resources.html#documentation", "type": "article" + }, + { + "title": "Apache Solr vs Elasticsearch Differences", + "url": "https://www.youtube.com/watch?v=MMWBdSdbu5k", + "type": "video" } ] }, "5XGvep2qoti31bsyqNzrU": { "title": "Real-Time Data", - "description": "There are many ways to get real time data from the backend. Some of them are:", + "description": "Real-time data refers to information that is processed and made available immediately or with minimal delay, allowing users or systems to react promptly to current conditions. This type of data is essential in applications requiring immediate updates and responses, such as financial trading platforms, online gaming, real-time analytics, and monitoring systems. Real-time data processing involves capturing, analyzing, and delivering information as it is generated, often using technologies like stream processing frameworks (e.g., Apache Kafka, Apache Flink) and low-latency databases. Effective real-time data systems can handle high-speed data flows, ensuring timely and accurate decision-making.", "links": [] }, "osvajAJlwGI3XnX0fE-kA": { "title": "Long Polling", - "description": "Long polling is a technique where the client polls the server for new data. However, if the server does not have any data available for the client, instead of sending an empty response, the server holds the request and waits for some specified period of time for new data to be available. 
If new data becomes available during that time, the server immediately sends a response to the client, completing the open request. If no new data becomes available and the timeout period specified by the client expires, the server sends a response indicating that fact. The client will then immediately re-request data from the server, creating a new request-response cycle.\n\nVisit the following resources to learn more:", + "description": "Long polling is a technique where the client polls the server for new data. However, if the server does not have any data available for the client, instead of sending an empty response, the server holds the request and waits for some specified period of time for new data to be available. If new data becomes available during that time, the server immediately sends a response to the client, completing the open request. If no new data becomes available and the timeout period specified by the client expires, the server sends a response indicating that fact. The client will then immediately re-request data from the server, creating a new request-response cycle.\n\nLearn more from the following resources:", "links": [ { - "title": "Long polling", + "title": "Long Polling", "url": "https://javascript.info/long-polling", "type": "article" }, { - "title": "What are Long-Polling, Websockets, Server-Sent Events (SSE) and Comet?", - "url": "https://stackoverflow.com/questions/11077857/what-are-long-polling-websockets-server-sent-events-sse-and-comet", - "type": "article" + "title": "What is Long Polling?", + "url": "https://www.youtube.com/watch?v=LD0_-uIsnOE", + "type": "video" } ] }, "Tt7yr-ChHncJG0Ge1f0Xk": { "title": "Short Polling", - "description": "Short polling is a technique where the client repeatedly polls the server for new data. This is the most common approach to polling. 
It's simple to implement and understand, but it's not the most efficient way of doing things.", - "links": [] + "description": "Short polling is a technique where a client periodically sends requests to a server at regular intervals to check for updates or new data. The server responds with the current state or any changes since the last request. While simple to implement and compatible with most HTTP infrastructures, short polling can be inefficient due to the frequent network requests and potential for increased latency in delivering updates. It contrasts with long polling and WebSockets, which offer more efficient mechanisms for real-time communication. Short polling is often used when real-time requirements are less stringent and ease of implementation is a priority.\n\nLearn more from the following resources:", + "links": [ + { + "title": "Amazon SQS short and long polling", + "url": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html", + "type": "article" + }, + { + "title": "Short Polling vs Long Polling vs WebSockets", + "url": "https://www.youtube.com/watch?v=ZBM28ZPlin8", + "type": "video" + } + ] }, "M0iaSSdVPWaCUpyTG50Vf": { "title": "Redis", - "description": "A key-value database (KV database) is a type of database that stores data as a collection of key-value pairs. In a KV database, each piece of data is identified by a unique key, and the value is the data associated with that key.\n\nKV databases are designed for fast and efficient storage and retrieval of data, and they are often used in applications that require high performance and low latency. They are particularly well-suited for storing large amounts of unstructured data, such as log data and user profiles.\n\nSome popular KV databases include Redis, Memcached, and LevelDB. 
These databases are often used in combination with other types of databases, such as relational databases or document databases, to provide a complete and scalable data storage solution.\n\nVisit the following resources to learn more:", + "description": "Redis is an open-source, in-memory data structure store known for its speed and versatility. It supports various data types, including strings, lists, sets, hashes, and sorted sets, and provides functionalities such as caching, session management, real-time analytics, and message brokering. Redis operates as a key-value store, allowing for rapid read and write operations, and is often used to enhance performance and scalability in applications. It supports persistence options to save data to disk, replication for high availability, and clustering for horizontal scaling. Redis is widely used for scenarios requiring low-latency access to data and high-throughput performance.\n\nVisit the following resources to learn more:", "links": [ { - "title": "Key-Value Databases - Wikipedia", - "url": "https://en.wikipedia.org/wiki/Key-value_database", + "title": "Redis Crash Course", + "url": "https://www.youtube.com/watch?v=XCsS_NVAa1g", + "type": "course" + }, + { + "title": "Redis Website", + "url": "https://redis.io/", "type": "article" }, { - "title": "Explore top posts about Backend Development", - "url": "https://app.daily.dev/tags/backend?ref=roadmapsh", + "title": "Explore top posts about Redis", + "url": "https://app.daily.dev/tags/redis?ref=roadmapsh", "type": "article" + }, + { + "title": "Redis in 100 Seconds", + "url": "https://www.youtube.com/watch?v=G1rOthIU-uo", + "type": "video" } ] }, "dwfEHInbX2eFiafM-nRMX": { "title": "DynamoDB", - "description": "DynamoDB is a fully managed NoSQL database service provided by AWS, designed for high-performance applications that require low-latency data access at any scale.\n\nIt supports key-value and document data models, allowing developers to store and retrieve any 
amount of data with predictable performance.\n\nDynamoDB is known for its seamless scalability, automatic data replication across multiple AWS regions, and built-in security features, making it ideal for use cases like real-time analytics, mobile apps, gaming, IoT, and more.\n\nKey features include flexible schema design, powerful query capabilities, and integration with other AWS services.", - "links": [] + "description": "Amazon DynamoDB is a fully managed, serverless NoSQL database service provided by Amazon Web Services (AWS). It offers high-performance, scalable, and flexible data storage for applications of any scale. DynamoDB supports both key-value and document data models, providing fast and predictable performance with seamless scalability. It features automatic scaling, built-in security, backup and restore options, and global tables for multi-region deployment. DynamoDB excels in handling high-traffic web applications, gaming backends, mobile apps, and IoT solutions. It offers consistent single-digit millisecond latency at any scale and supports both strongly consistent and eventually consistent read models. 
With its integration into the AWS ecosystem, on-demand capacity mode, and support for transactions, DynamoDB is widely used for building highly responsive and scalable applications, particularly those with unpredictable workloads or requiring low-latency data access.\n\nLearn more from the following resources:", + "links": [ + { + "title": "AWS DynamoDB Website", + "url": "https://aws.amazon.com/dynamodb/", + "type": "article" + }, + { + "title": "daily.dev AWS DynamoDB Feed", + "url": "https://app.daily.dev/tags/aws-dynamodb", + "type": "article" + }, + { + "title": "AWS DynamoDB Tutorial For Beginners", + "url": "https://www.youtube.com/watch?v=2k2GINpO308", + "type": "video" + } + ] }, "RyJFLLGieJ8Xjt-DlIayM": { "title": "Firebase", - "description": "A real-time database is broadly defined as a data store designed to collect, process, and/or enrich an incoming series of data points (i.e., a data stream) in real time, typically immediately after the data is created.\n\n[Firebase](https://firebase.google.com/) [RethinkDB](https://rethinkdb.com/)", - "links": [] + "description": "Firebase is a comprehensive mobile and web application development platform owned by Google. It provides a suite of cloud-based services that simplify app development, hosting, and scaling. Key features include real-time database, cloud storage, authentication, hosting, cloud functions, and analytics. Firebase offers real-time synchronization, allowing data to be updated across clients instantly. Its authentication service supports multiple providers, including email/password, social media logins, and phone authentication. The platform's serverless architecture enables developers to focus on front-end development without managing backend infrastructure. Firebase also provides tools for app testing, crash reporting, and performance monitoring. 
While it excels in rapid prototyping and building real-time applications, its proprietary nature and potential for vendor lock-in are considerations for large-scale or complex applications. Firebase's ease of use and integration with Google Cloud Platform make it popular for startups and projects requiring quick deployment.\n\nLearn more from the following resources:", + "links": [ + { + "title": "The ultimate guide to Firebase", + "url": "https://fireship.io/lessons/the-ultimate-beginners-guide-to-firebase/", + "type": "course" + }, + { + "title": "Firebase Website", + "url": "https://firebase.google.com/", + "type": "article" + }, + { + "title": "Firebase in 100 seconds", + "url": "https://www.youtube.com/watch?v=vAoB4VbhRzM", + "type": "video" + } + ] }, "5T0ljwlHL0545ICCeehcQ": { "title": "RethinkDB", - "description": "", - "links": [] + "description": "RethinkDB is an open-source, distributed NoSQL database designed for real-time applications. It focuses on providing real-time capabilities by allowing applications to automatically receive updates when data changes, using its changefeed feature. RethinkDB's data model is based on JSON documents, and it supports rich queries, including joins, aggregations, and filtering. It offers a flexible schema and supports horizontal scaling through sharding and replication for high availability. Although development on RethinkDB ceased in 2016, its approach to real-time data and powerful querying capabilities make it notable for applications needing immediate data updates and responsiveness.\n\nLearn more from the following resources:", + "links": [ + { + "title": "RethinkDB Crash Course", + "url": "https://www.youtube.com/watch?v=pW3PFtchHDc", + "type": "course" + }, + { + "title": "RethinkDB Website", + "url": "https://rethinkdb.com/", + "type": "article" + } + ] }, "kdulE3Z_BdbtRmq6T2KmR": { "title": "SQLite", - "description": "SQLite is a relational database management system that is embedded into the end program. 
It is self-contained, serverless, zero-configuration, transactional SQL database engine.\n\nVisit the following resources to learn more:", + "description": "SQLite is a lightweight, serverless, self-contained SQL database engine that is designed for simplicity and efficiency. It is widely used in embedded systems and applications where a full-featured database server is not required, such as mobile apps, desktop applications, and small to medium-sized websites. SQLite stores data in a single file, which makes it easy to deploy and manage. It supports standard SQL queries and provides ACID (Atomicity, Consistency, Isolation, Durability) compliance to ensure data integrity. SQLite’s small footprint, minimal configuration, and ease of use make it a popular choice for applications needing a compact, high-performance database solution.\n\nVisit the following resources to learn more:", "links": [ { "title": "SQLite website", @@ -2931,12 +3044,17 @@ "title": "Explore top posts about SQLite", "url": "https://app.daily.dev/tags/sqlite?ref=roadmapsh", "type": "article" + }, + { + "title": "SQLite Introduction", + "url": "https://www.youtube.com/watch?v=8Xyn8R9eKB8", + "type": "video" } ] }, "XbM4TDImSH-56NsITjyHK": { "title": "Influx DB", - "description": "InfluxDB\n--------\n\nInfluxDB was built from the ground up to be a purpose-built time series database; i.e., it was not repurposed to be time series. Time was built-in from the beginning. InfluxDB is part of a comprehensive platform that supports the collection, storage, monitoring, visualization and alerting of time series data. It’s much more than just a time series database.\n\nVisit the following resources to learn more:", + "description": "InfluxDB is a high-performance, open-source time-series database designed for handling large volumes of timestamped data, such as metrics, events, and real-time analytics. 
It is optimized for use cases like monitoring, IoT, and application performance management, where data arrives in continuous streams. InfluxDB supports SQL-like queries through InfluxQL, along with its functional query language Flux, and it can handle high write and query loads efficiently. Key features include support for retention policies, downsampling, and automatic data compaction, making it ideal for environments that require fast and scalable time-series data storage and retrieval.\n\nVisit the following resources to learn more:",
     "links": [
       {
         "title": "InfluxDB Website",
@@ -2952,29 +3070,49 @@
         "title": "Explore top posts about Backend Development",
         "url": "https://app.daily.dev/tags/backend?ref=roadmapsh",
         "type": "article"
+      },
+      {
+        "title": "The Basics of Time Series Data",
+        "url": "https://www.youtube.com/watch?v=wBWTj-1XiRU",
+        "type": "video"
       }
     ]
   },
   "WiAK70I0z-_bzbWNwiHUd": {
     "title": "TimeScale",
-    "description": "TimescaleDB is an open-source time-series database built on top of PostgreSQL, designed for efficiently storing and querying time-series data.\n\nIt introduces the concept of hypertables, which automatically partition data by time and space, making it ideal for high-volume data scenarios like monitoring, IoT, and financial analytics.\n\nTimescaleDB combines the power of relational databases with the performance of a specialized time-series solution, offering advanced features like continuous aggregates, real-time analytics, and seamless integration with PostgreSQL's ecosystem.\n\nIt's a robust choice for developers looking to manage time-series data in scalable and efficient ways.\n\nVisit the following resources to learn more:",
+    "description": "TimescaleDB is an open-source, time-series database built as an extension to PostgreSQL. 
It is designed to handle large volumes of time-stamped data efficiently, making it suitable for applications that require high-performance analytics on time-series data, such as monitoring systems, IoT applications, and financial services. TimescaleDB leverages PostgreSQL’s features while providing additional capabilities for time-series data, including efficient data ingestion, advanced time-based queries, and automatic data partitioning (hypertables). It supports complex queries and aggregations, making it a powerful tool for analyzing trends and patterns in time-series data.\n\nVisit the following resources to learn more:", "links": [ + { + "title": "Timescale Website", + "url": "https://www.timescale.com/", + "type": "article" + }, { "title": "Tutorial - TimeScaleDB Explained in 100 Seconds", "url": "https://www.youtube.com/watch?v=69Tzh_0lHJ8", "type": "video" + }, + { + "title": "What is time series data?", + "url": "https://www.youtube.com/watch?v=Se5ipte9DMY", + "type": "video" } ] }, "gT6-z2vhdIQDzmR2K1g1U": { "title": "Cassandra", - "description": "A **wide-column database** (sometimes referred to as a column database) is similar to a relational database. It store data in tables, rows and columns. However in opposite to relational databases here each row can have its own format of the columns. Column databases can be seen as a two-dimensional key-value database. One of such database system is **Apache Cassandra**.\n\n**Warning:** [note that a \"columnar database\" and a \"column database\" are two different terms!](https://en.wikipedia.org/wiki/Wide-column_store#Wide-column_stores_versus_columnar_databases)\n\nVisit the following resources to learn more:", + "description": "Apache Cassandra is a highly scalable, distributed NoSQL database designed to handle large amounts of structured data across multiple commodity servers. 
It provides high availability with no single point of failure, offering linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure. Cassandra uses a masterless ring architecture, where all nodes are equal, allowing for easy data distribution and replication. It supports flexible data models and can handle both unstructured and structured data. Cassandra excels in write-heavy environments and is particularly suitable for applications requiring high throughput and low latency. Its data model is based on wide column stores, offering a more complex structure than key-value stores. Widely used in big data applications, Cassandra is known for its ability to handle massive datasets while maintaining performance and reliability.\n\nVisit the following resources to learn more:",
     "links": [
       {
         "title": "Apache Cassandra",
         "url": "https://cassandra.apache.org/_/index.html",
         "type": "article"
       },
+      {
+        "title": "Cassandra - Quick Guide",
+        "url": "https://www.tutorialspoint.com/cassandra/cassandra_quick_guide.htm",
+        "type": "article"
+      },
       {
         "title": "Explore top posts about Backend Development",
         "url": "https://app.daily.dev/tags/backend?ref=roadmapsh",
@@ -2989,17 +3127,39 @@
   },
   "QZwTLOvjUTaSb_9deuxsR": {
     "title": "Base",
-    "description": "",
-    "links": []
+    "description": "Oracle Base Database Service enables you to maintain absolute control over your data while using the combined capabilities of Oracle Database and Oracle Cloud Infrastructure. Oracle Base Database Service offers database systems (DB systems) on virtual machines. They are available as single-node DB systems and multi-node RAC DB systems on Oracle Cloud Infrastructure (OCI). 
You can manage these DB systems by using the OCI Console, the OCI API, the OCI CLI, the Database CLI (DBCLI), Enterprise Manager, or SQL Developer.\n\nLearn more from the following resources:", + "links": [ + { + "title": "Base Database Website", + "url": "https://docs.oracle.com/en-us/iaas/base-database/index.html", + "type": "article" + } + ] }, "5xy66yQrz1P1w7n6PcAFq": { "title": "AWS Neptune", - "description": "AWS Neptune is a fully managed graph database service designed for applications that require highly connected data.\n\nIt supports two popular graph models: Property Graph and RDF (Resource Description Framework), allowing you to build applications that traverse billions of relationships with millisecond latency.\n\nNeptune is optimized for storing and querying graph data, making it ideal for use cases like social networks, recommendation engines, fraud detection, and knowledge graphs.\n\nIt offers high availability, automatic backups, and multi-AZ (Availability Zone) replication, ensuring data durability and fault tolerance.\n\nAdditionally, Neptune integrates seamlessly with other AWS services and supports open standards like Gremlin, SPARQL, and Apache TinkerPop, making it flexible and easy to integrate into existing applications.", - "links": [] + "description": "Amazon Neptune is a fully managed graph database service provided by Amazon Web Services (AWS). It's designed to store and navigate highly connected data, supporting both property graph and RDF (Resource Description Framework) models. Neptune uses graph query languages like Gremlin and SPARQL, making it suitable for applications involving complex relationships, such as social networks, recommendation engines, fraud detection systems, and knowledge graphs. It offers high availability, with replication across multiple Availability Zones, and supports up to 15 read replicas for improved performance. 
Neptune integrates with other AWS services, provides encryption at rest and in transit, and offers fast recovery from failures. Its scalability and performance make it valuable for handling large-scale, complex data relationships in enterprise-level applications.\n\nLearn more from the following resources:",
+      "links": [
+        {
+          "title": "AWS Neptune Website",
+          "url": "https://aws.amazon.com/neptune/",
+          "type": "article"
+        },
+        {
+          "title": "Setting Up Amazon Neptune Graph Database",
+          "url": "https://cliffordedsouza.medium.com/setting-up-amazon-neptune-graph-database-2b73512a7388",
+          "type": "article"
+        },
+        {
+          "title": "Getting Started with Neptune Serverless",
+          "url": "https://www.youtube.com/watch?v=b04-jjM9t4g",
+          "type": "video"
+        }
+      ]
    },
    "Z01E67D6KjrShvQCHjGR7": {
      "title": "Observability",
-      "description": "In software development, observability is the measure of how well we can understand a system from the work it does, and how to make it better.\n\nSo what makes a system to be \"observable\"? It is its ability of producing and collecting metrics, logs and traces in order for us to understand what happens under the hood and identify issues and bottlenecks faster.\n\nYou can of course implement all those features by yourself, but there are a lot of softwares out there that can help you with it like Datadog, Sentry and CloudWatch.\n\nVisit the following resources to learn more:",
+      "description": "Observability refers to the ability to understand and monitor the internal state of a system based on its external outputs, such as metrics, logs, and traces. It encompasses collecting, analyzing, and visualizing data to gain insights into system performance, detect anomalies, and troubleshoot issues. Effective observability involves integrating these data sources to provide a comprehensive view of system behavior, enabling proactive management and rapid response to problems.
It helps in understanding complex systems, improving reliability, and optimizing performance by making it easier to identify and address issues before they impact users.\n\nVisit the following resources to learn more:",
    "links": [
      {
        "title": "DataDog Docs",
@@ -3027,8 +3187,8 @@
        "type": "article"
      },
      {
-        "title": "AWS re:Invent 2017: Improving Microservice and Serverless Observability with Monitor",
-        "url": "https://www.youtube.com/watch?v=Wx0SHRb2xcI",
+        "title": "What is observability?",
+        "url": "https://www.youtube.com/watch?v=--17See0KHs",
        "type": "video"
      }
    ]
diff --git a/public/roadmap-content/terraform.json b/public/roadmap-content/terraform.json
index d947ddf5a..7fa268a74 100644
--- a/public/roadmap-content/terraform.json
+++ b/public/roadmap-content/terraform.json
@@ -449,8 +449,14 @@
    },
    "fm8oUyNvfdGWTgLsYANUr": {
      "title": "Environment Variables",
-      "description": "",
-      "links": []
+      "description": "Environment variables can be used to customize various aspects of Terraform's behaviour, such as increasing log verbosity, changing the log file path, or selecting a workspace. Environment variables are optional; Terraform does not require any of them by default.\n\nLearn more from the following resources:",
+      "links": [
+        {
+          "title": "Environment Variables",
+          "url": "https://developer.hashicorp.com/terraform/cli/config/environment-variables",
+          "type": "article"
+        }
+      ]
    },
    "rdphcVd-Vq972y4H8CxIj": {
      "title": "Variable Definition File",
@@ -470,8 +476,14 @@
    },
    "U2n2BtyUrOFLnw9SZYV_w": {
      "title": "Validation Rules",
-      "description": "",
-      "links": []
+      "description": "Validation rules let you attach custom checks to an input variable so that only values satisfying your conditions are accepted.
They are declared inside a variable using a `validation` block.\n\nLearn more from the following resources:",
+      "links": [
+        {
+          "title": "Custom Validation Rules",
+          "url": "https://developer.hashicorp.com/terraform/language/values/variables#custom-validation-rules",
+          "type": "article"
+        }
+      ]
    },
    "1mFih8uFs3Lc-1PLgwiAU": {
      "title": "Local Values",
diff --git a/public/roadmap-content/ux-design.json b/public/roadmap-content/ux-design.json
index 792b189c3..aa025ac2d 100644
--- a/public/roadmap-content/ux-design.json
+++ b/public/roadmap-content/ux-design.json
@@ -53,7 +53,7 @@
    },
    "2NlgbLeLBYwZX2u2rKkIO": {
      "title": "BJ Fogg's Behavior Model",
-      "description": "B.J. Fogg, a renowned psychologist, and researcher at Stanford University, proposed the [Fogg Behavior Model (FBM)](https://www.behaviormodel.org/). This insightful model helps UX designers understand and influence user behavior by focusing on three core elements. These key factors are motivation, ability, and triggers.\n\n* **Motivation**: This element emphasizes the user's desire to perform a certain action or attain specific outcomes. Motivation can be linked to three core elements specified as sensation (pleasure/pain), anticipation (hope/fear), and social cohesion (belonging/rejection).\n \n* **Ability**: Ability refers to the user's capacity, both physical and mental, to perform desired actions. To enhance the ability of users, UX designers should follow the principle of simplicity. The easier it is to perform an action, the more likely users will engage with the product. Some factors to consider are time, financial resources, physical efforts, and cognitive load.\n \n* **Triggers**: Triggers are the cues, notifications, or prompts that signal users to take an action.
For an action to occur, triggers should be presented at the right time when the user has adequate motivation and ability.\n \n\nUX designers should strive to find the balance between these three factors to facilitate the desired user behavior. By understanding your audience and their needs, implementing clear and concise triggers, and minimizing the effort required for action, the FBM can be an effective tool for designing user-centered products.",
+      "description": "B.J. Fogg, a renowned psychologist and researcher at Stanford University, proposed the [Fogg Behavior Model (FBM)](https://www.behaviormodel.org/). This insightful model helps UX designers understand and influence user behavior by focusing on three core elements. These key factors are motivation, ability, and prompts.\n\n* **Motivation**: This element emphasizes the user's desire to perform a certain action or attain specific outcomes. Motivation can be linked to three core elements specified as sensation (pleasure/pain), anticipation (hope/fear), and social cohesion (belonging/rejection).\n \n* **Ability**: Ability refers to the user's capacity, both physical and mental, to perform desired actions. To enhance the ability of users, UX designers should follow the principle of simplicity. The easier it is to perform an action, the more likely users will engage with the product. Some factors to consider are time, financial resources, physical efforts, and cognitive load.\n \n* **Prompts**: Prompts are the cues, notifications, or triggers that signal users to take an action. For an action to occur, prompts should be presented at the right time when the user has adequate motivation and ability.\n \n\nUX designers should strive to find the balance between these three factors to facilitate the desired user behavior.
By understanding your audience and their needs, implementing clear and concise prompts, and minimizing the effort required for action, the FBM can be an effective tool for designing user-centered products.", "links": [ { "title": "meaning of BJ fogg's behavior model",