Resolve merge conflicts

feat/referral
Kamran Ahmed 2 months ago
commit e190f5b30c
  1. 55
      contributing.md
  2. BIN
      public/images/gifs/party-popper.gif
  3. 0
      public/images/gifs/rocket.gif
  4. BIN
      public/images/gifs/star.gif
  5. BIN
      public/images/gifs/starstruck.gif
  6. BIN
      public/images/gifs/sunglasses.gif
  7. BIN
      public/pdfs/roadmaps/ai-engineer.pdf
  8. 1104
      public/roadmap-content/android.json
  9. 183
      public/roadmap-content/backend.json
  10. 30
      public/roadmap-content/computer-science.json
  11. 4
      public/roadmap-content/cyber-security.json
  12. 14
      public/roadmap-content/devops.json
  13. 1237
      public/roadmap-content/frontend.json
  14. 5
      public/roadmap-content/git-github.json
  15. 25
      public/roadmap-content/ios.json
  16. 306
      public/roadmap-content/mlops.json
  17. 4
      public/roadmap-content/nodejs.json
  18. 20
      public/roadmap-content/redis.json
  19. 2
      public/roadmap-content/software-architect.json
  20. 2
      public/roadmap-content/typescript.json
  21. BIN
      public/roadmaps/ai-engineer.png
  22. 13
      src/api/leaderboard.ts
  23. 2
      src/components/Changelog/ChangelogLaunch.astro
  24. 2
      src/components/ChangelogBanner.astro
  25. 110
      src/components/Leaderboard/LeaderboardPage.tsx
  26. 48
      src/components/Projects/ProjectCard.tsx
  27. 51
      src/components/Projects/ProjectsList.tsx
  28. 1
      src/components/Projects/ProjectsPage.tsx
  29. 5
      src/components/Roadmaps/RoadmapsPage.tsx
  30. 18
      src/components/TopicDetail/TopicDetail.tsx
  31. 441
      src/data/guides/backend-frameworks.md
  32. 186
      src/data/guides/devops-principles.md
  33. 376
      src/data/guides/frontend-frameworks.md
  34. 2
      src/data/roadmaps/ai-data-scientist/ai-data-scientist.md
  35. 1
      src/data/roadmaps/ai-engineer/ai-engineer.json
  36. 50
      src/data/roadmaps/ai-engineer/ai-engineer.md
  37. 7
      src/data/roadmaps/ai-engineer/content/adding-end-user-ids-in-prompts@4Q5x2VCXedAWISBXUIyin.md
  38. 8
      src/data/roadmaps/ai-engineer/content/agents-usecases@778HsQzTuJ_3c9OSn5DmH.md
  39. 8
      src/data/roadmaps/ai-engineer/content/ai-agents@9XCxilAQ7FRet7lHQr1gE.md
  40. 8
      src/data/roadmaps/ai-engineer/content/ai-agents@AeHkNU-uJ_gBdo5-xdpEu.md
  41. 8
      src/data/roadmaps/ai-engineer/content/ai-code-editors@XcKeQfpTA5ITgdX51I4y-.md
  42. 3
      src/data/roadmaps/ai-engineer/content/ai-engineer-vs-ml-engineer@jSZ1LhPdhlkW-9QJhIvFs.md
  43. 3
      src/data/roadmaps/ai-engineer/content/ai-safety-and-ethics@8ndKHDJgL_gYwaXC7XMer.md
  44. 3
      src/data/roadmaps/ai-engineer/content/ai-vs-agi@5QdihE1lLpMc3DFrGy46M.md
  45. 3
      src/data/roadmaps/ai-engineer/content/anomaly-detection@AglWJ7gb9rTT2rMkstxtk.md
  46. 7
      src/data/roadmaps/ai-engineer/content/anthropics-claude@hy6EyKiNxk1x84J63dhez.md
  47. 3
      src/data/roadmaps/ai-engineer/content/audio-processing@mxQYB820447DC6kogyZIL.md
  48. 7
      src/data/roadmaps/ai-engineer/content/aws-sagemaker@OkYO-aSPiuVYuLXHswBCn.md
  49. 7
      src/data/roadmaps/ai-engineer/content/azure-ai@3PQVZbcr4neNMRr6CuNzS.md
  50. 7
      src/data/roadmaps/ai-engineer/content/benefits-of-pre-trained-models@1Ga6DbOPc6Crz7ilsZMYy.md
  51. 3
      src/data/roadmaps/ai-engineer/content/bias-and-fareness@lhIU0ulpvDAn1Xc3ooYz_.md
  52. 7
      src/data/roadmaps/ai-engineer/content/capabilities--context-length@vvpYkmycH0_W030E-L12f.md
  53. 7
      src/data/roadmaps/ai-engineer/content/chat-completions-api@_bPTciEA1GT1JwfXim19z.md
  54. 7
      src/data/roadmaps/ai-engineer/content/chroma@dSd2C9lNl-ymmCRT9_ZC3.md
  55. 3
      src/data/roadmaps/ai-engineer/content/chunking@mX987wiZF7p3V_gExrPeX.md
  56. 10
      src/data/roadmaps/ai-engineer/content/code-completion-tools@TifVhqFm1zXNssA8QR3SM.md
  57. 7
      src/data/roadmaps/ai-engineer/content/cohere@a7qsvoauFe5u953I699ps.md
  58. 3
      src/data/roadmaps/ai-engineer/content/conducting-adversarial-testing@Pt-AJmSJrOxKvolb5_HEv.md
  59. 3
      src/data/roadmaps/ai-engineer/content/constraining-outputs-and-inputs@ONLDyczNacGVZGojYyJrU.md
  60. 3
      src/data/roadmaps/ai-engineer/content/cut-off-dates--knowledge@LbB2PeytxRSuU07Bk0KlJ.md
  61. 3
      src/data/roadmaps/ai-engineer/content/dall-e-api@LKFwwjtcawJ4Z12X102Cb.md
  62. 3
      src/data/roadmaps/ai-engineer/content/data-classification@06Xta-OqSci05nV2QMFdF.md
  63. 3
      src/data/roadmaps/ai-engineer/content/development-tools@NYge7PNtfI-y6QWefXJ4d.md
  64. 3
      src/data/roadmaps/ai-engineer/content/embedding@grTcbzT7jKk_sIUwOTZTD.md
  65. 3
      src/data/roadmaps/ai-engineer/content/embeddings@XyEp6jnBSpCxMGwALnYfT.md
  66. 3
      src/data/roadmaps/ai-engineer/content/faiss@JurLbOO1Z8r6C3yUqRNwf.md
  67. 7
      src/data/roadmaps/ai-engineer/content/fine-tuning@15XOFdVp0IC-kLYPXUJWh.md
  68. 3
      src/data/roadmaps/ai-engineer/content/generation@2jJnS9vRYhaS69d6OxrMh.md
  69. 3
      src/data/roadmaps/ai-engineer/content/googles-gemini@oe8E6ZIQWuYvHVbYJHUc1.md
  70. 7
      src/data/roadmaps/ai-engineer/content/hugging-face-hub@YLOdOvLXa5Fa7_mmuvKEi.md
  71. 7
      src/data/roadmaps/ai-engineer/content/hugging-face-models@8XjkRqHOdyH-DbXHYiBEt.md
  72. 7
      src/data/roadmaps/ai-engineer/content/hugging-face-models@EIDbwbdolR_qsNKVDla6V.md
  73. 7
      src/data/roadmaps/ai-engineer/content/hugging-face-tasks@YKIPOiSj_FNtg0h8uaSMq.md
  74. 7
      src/data/roadmaps/ai-engineer/content/hugging-face@v99C5Bml2a6148LCJ9gy9.md
  75. 3
      src/data/roadmaps/ai-engineer/content/image-generation@49BWxYVFpIgZCCqsikH7l.md
  76. 3
      src/data/roadmaps/ai-engineer/content/image-understanding@fzVq4hGoa2gdbIzoyY1Zp.md
  77. 3
      src/data/roadmaps/ai-engineer/content/impact-on-product-development@qJVgKe9uBvXc-YPfvX_Y7.md
  78. 3
      src/data/roadmaps/ai-engineer/content/indexing-embeddings@5TQnO9B4_LTHwqjI7iHB1.md
  79. 8
      src/data/roadmaps/ai-engineer/content/inference-sdk@3kRTzlLNBnXdTsAEXVu_M.md
  80. 3
      src/data/roadmaps/ai-engineer/content/inference@KWjD4xEPhOOYS51dvRLd2.md
  81. 3
      src/data/roadmaps/ai-engineer/content/introduction@_hYN0gEi9BL24nptEtXWU.md
  82. 3
      src/data/roadmaps/ai-engineer/content/know-your-customers--usecases@t1SObMWkDZ1cKqNNlcd9L.md
  83. 3
      src/data/roadmaps/ai-engineer/content/lancedb@rjaCNT3Li45kwu2gXckke.md
  84. 3
      src/data/roadmaps/ai-engineer/content/langchain-for-multimodal-apps@j9zD3pHysB1CBhLfLjhpD.md
  85. 3
      src/data/roadmaps/ai-engineer/content/langchain@ebXXEhNRROjbbof-Gym4p.md
  86. 3
      src/data/roadmaps/ai-engineer/content/limitations-and-considerations@MXqbQGhNM3xpXlMC2ib_6.md
  87. 7
      src/data/roadmaps/ai-engineer/content/llama-index@d0ontCII8KI8wfP-8Y45R.md
  88. 7
      src/data/roadmaps/ai-engineer/content/llamaindex-for-multimodal-apps@akQTCKuPRRelj2GORqvsh.md
  89. 3
      src/data/roadmaps/ai-engineer/content/llms@wf2BSyUekr1S1q6l8kyq6.md
  90. 3
      src/data/roadmaps/ai-engineer/content/manual-implementation@6xaRB34_g0HGt-y1dGYXR.md
  91. 9
      src/data/roadmaps/ai-engineer/content/maximum-tokens@qzvp6YxWDiGakA2mtspfh.md
  92. 3
      src/data/roadmaps/ai-engineer/content/mistral-ai@n-Ud2dXkqIzK37jlKItN4.md
  93. 7
      src/data/roadmaps/ai-engineer/content/models-on-hugging-face@dLEg4IA3F5jgc44Bst9if.md
  94. 7
      src/data/roadmaps/ai-engineer/content/mongodb-atlas@j6bkm0VUgLkHdMDDJFiMC.md
  95. 3
      src/data/roadmaps/ai-engineer/content/multimodal-ai-usecases@sGR9qcro68KrzM8qWxcH8.md
  96. 3
      src/data/roadmaps/ai-engineer/content/multimodal-ai@W7cKPt_UxcUgwp8J6hS4p.md
  97. 7
      src/data/roadmaps/ai-engineer/content/ollama-models@ro3vY_sp6xMQ-hfzO-rc1.md
  98. 7
      src/data/roadmaps/ai-engineer/content/ollama-sdk@TsG_I7FL-cOCSw8gvZH3r.md
  99. 7
      src/data/roadmaps/ai-engineer/content/ollama@rTT2UnvqFO3GH6ThPLEjO.md
  100. 3
      src/data/roadmaps/ai-engineer/content/open-ai-assistant-api@eOqCBgBTKM8CmY3nsWjre.md
  Some files were not shown because too many files have changed in this diff.

@@ -2,12 +2,67 @@
First of all, thank you for considering contributing. Please look at the details below:
- [Hacktoberfest Contributions](#hacktoberfest-contributions)
- [New Roadmaps](#new-roadmaps)
- [Existing Roadmaps](#existing-roadmaps)
- [Adding Projects](#adding-projects)
- [Adding Content](#adding-content)
- [Guidelines](#guidelines)
## Hacktoberfest Contributions
We are taking part in [Hacktoberfest 11](https://hacktoberfest.com/)!
Before you start contributing to our project in order to satisfy [Hacktoberfest requirements](https://hacktoberfest.com/participation/#contributors), please bear in mind the following:
* There is not a Hacktoberfest t-shirt this year [(see their FAQ)](https://hacktoberfest.com/participation/#faq).
* Opportunities to contribute to the roadmap.sh project are not unlimited.
### Hacktoberfest Specific Contribution rules
As Hacktoberfest attracts a lot of contributors (which is awesome), it does require a more rigid and strictly enforced set of guidelines than we apply to the average contribution.
These are as follows:
1. No single-file contributions; please contribute to a minimum of two files.
Whilst single-file contributions, such as adding one link to a single topic, are perfectly fine outside of Hacktoberfest, they can (and probably will) result in an easy 4 pull requests for everyone, and we would just become a Hacktoberfest farming project.
***Note: If you contribute the entire contents of a topic (i.e. the topic has 0 copy and 0 links), this will count.***
2. Typo fixes will not count (by themselves).
Whilst fixing typos is a great thing to do, let's bundle them in with actual contributions if we see them!
3. The same basic rules apply.
- Content must be in English.
- Maximum of 8 links per topic.
- Follow the below style guide for content.
Here is an example of a **fully complete** topic:
```markdown
# Redis
Redis is an open-source, in-memory data structure store known for its speed and versatility. It supports various data types, including strings, lists, sets, hashes, and sorted sets, and provides functionalities such as caching, session management, real-time analytics, and message brokering. Redis operates as a key-value store, allowing for rapid read and write operations, and is often used to enhance performance and scalability in applications. It supports persistence options to save data to disk, replication for high availability, and clustering for horizontal scaling. Redis is widely used for scenarios requiring low-latency access to data and high-throughput performance.
Learn more from the following resources:
[@official@Link 1](https://google.com)
[@article@Link 2](https://google.com)
[@article@Link 3](https://google.com)
[@course@Link 4](https://google.com)
[@course@Link 5](https://google.com)
[@video@Link 6](https://google.com)
[@video@Link 7](https://google.com)
[@video@Link 8](https://google.com)
```
Contributions to the project that meet these requirements will be given the label `hacktoberfest-accepted` and merged; contributions that do not meet the requirements will simply be closed.
Any attempts at spam PRs will be given the `spam` tag. If you receive 2 `spam` tags against you, you will be [disqualified from Hacktoberfest](https://hacktoberfest.com/participation/#spam).
## New Roadmaps
For new roadmaps, you can either:

Binary file not shown. (After: 386 KiB)

Binary image diff. (Before: 256 KiB, After: 256 KiB)

Binary file not shown. (After: 145 KiB)

Binary file not shown. (After: 1.0 MiB)

Binary file not shown. (After: 1013 KiB)

File diff suppressed because it is too large.

@@ -1,13 +1,30 @@
{
"gKTSe9yQFVbPVlLzWB0hC": {
"title": "Search Engines",
"description": "Search engines like Elasticsearch are specialized tools designed for fast, scalable, and flexible searching and analyzing of large volumes of data. Elasticsearch is an open-source, distributed search and analytics engine built on Apache Lucene, offering full-text search capabilities, real-time indexing, and advanced querying features. Key characteristics of search engines like Elasticsearch include:\n\n1. **Full-Text Search**: Support for complex search queries, including relevance scoring and text analysis.\n2. **Distributed Architecture**: Scalability through horizontal distribution across multiple nodes or servers.\n3. **Real-Time Indexing**: Ability to index and search data almost instantaneously.\n4. **Powerful Query DSL**: A domain-specific language for constructing and executing sophisticated queries.\n5. **Analytics**: Capabilities for aggregating and analyzing data, often used for log and event data analysis.\n\nElasticsearch is commonly used in applications requiring advanced search functionality, such as search engines, data analytics platforms, and real-time monitoring systems.",
"links": []
"description": "Search engines like Elasticsearch are specialized tools designed for fast, scalable, and flexible searching and analyzing of large volumes of data. Elasticsearch is an open-source, distributed search and analytics engine built on Apache Lucene, offering full-text search capabilities, real-time indexing, and advanced querying features. Key characteristics of search engines like Elasticsearch include:\n\n1. **Full-Text Search**: Support for complex search queries, including relevance scoring and text analysis.\n2. **Distributed Architecture**: Scalability through horizontal distribution across multiple nodes or servers.\n3. **Real-Time Indexing**: Ability to index and search data almost instantaneously.\n4. **Powerful Query DSL**: A domain-specific language for constructing and executing sophisticated queries.\n5. **Analytics**: Capabilities for aggregating and analyzing data, often used for log and event data analysis.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Elasticsearch",
"url": "https://www.elastic.co/elasticsearch/",
"type": "article"
}
]
},
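To make the query capabilities described in the entry above concrete, here is a minimal sketch of a full-text `match` query against Elasticsearch's REST search API. The local node address and the `articles` index are assumptions for the example, not part of this diff.

```typescript
// A minimal full-text "match" query against Elasticsearch's REST search API.
// Assumes a local node at http://localhost:9200 and a hypothetical "articles" index.
async function searchArticles(text: string): Promise<void> {
  const response = await fetch('http://localhost:9200/articles/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: { match: { body: text } }, // analyzed full-text match with relevance scoring
      size: 5,
    }),
  });
  const result = await response.json();
  for (const hit of result.hits.hits) {
    console.log(hit._score, hit._source.title); // hits come back sorted by relevance
  }
}

searchArticles('distributed search').catch(console.error);
```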
"9Fpoor-Os_9lvrwu5Zjh-": {
"title": "Design and Development Principles",
"description": "Design and Development Principles are fundamental guidelines that inform the creation of software systems. Key principles include:\n\n1. SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion)\n2. DRY (Don't Repeat Yourself)\n3. KISS (Keep It Simple, Stupid)\n4. YAGNI (You Aren't Gonna Need It)\n5. Separation of Concerns\n6. Modularity\n7. Encapsulation\n8. Composition over Inheritance\n9. Loose Coupling and High Cohesion\n10. Principle of Least Astonishment\n\nThese principles aim to create more maintainable, scalable, and robust software. They encourage clean code, promote reusability, reduce complexity, and enhance flexibility. While not rigid rules, these principles guide developers in making design decisions that lead to better software architecture and easier long-term maintenance. Applying these principles helps in creating systems that are easier to understand, modify, and extend over time.",
"links": []
"description": "Design and Development Principles are fundamental guidelines that inform the creation of software systems. Key principles include:\n\n* SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion)\n* DRY (Don't Repeat Yourself)\n* KISS (Keep It Simple, Stupid)\n* YAGNI (You Aren't Gonna Need It)\n* Separation of Concerns\n* Modularity\n* Encapsulation\n* Composition over Inheritance\n* Loose Coupling and High Cohesion\n* Principle of Least Astonishment\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Design Principles - Wikipedia",
"url": "https://en.wikipedia.org/wiki/Design_principles",
"type": "article"
},
{
"title": "Design Principles - Microsoft",
"url": "https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/index",
"type": "article"
}
]
},
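Two of the principles listed in the entry above, Single Responsibility and Dependency Inversion, are small enough to sketch directly; the classes below are invented purely for illustration.

```typescript
// Single Responsibility: each class below has one reason to change.
// Dependency Inversion: OrderService depends on the Notifier abstraction,
// not on any concrete implementation.
interface Notifier {
  send(message: string): void;
}

class EmailNotifier implements Notifier {
  send(message: string): void {
    console.log(`email: ${message}`);
  }
}

class OrderService {
  constructor(private readonly notifier: Notifier) {}

  placeOrder(orderId: string): void {
    // ...persist the order, then notify...
    this.notifier.send(`Order ${orderId} placed`);
  }
}

// Swapping in an SMS or push notifier requires no change to OrderService.
new OrderService(new EmailNotifier()).placeOrder('42');
```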
"EwvLPSI6AlZ4TnNIJTZA4": {
"title": "Learn about APIs",
@@ -71,7 +88,7 @@
"description": "Rust is a systems programming language known for its focus on safety, performance, and concurrency. It provides fine-grained control over system resources while ensuring memory safety without needing a garbage collector. Rust's ownership model enforces strict rules on how data is accessed and managed, preventing common issues like null pointer dereferences and data races. Its strong type system and modern features, such as pattern matching and concurrency support, make it suitable for a wide range of applications, from low-level systems programming to high-performance web servers and tools. Rust is gaining traction in both industry and open source for its reliability and efficiency.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "The Rust Programming Language - online book",
"title": "The Rust Programming Language - Book",
"url": "https://doc.rust-lang.org/book/",
"type": "article"
},
@@ -334,8 +351,8 @@
"type": "article"
},
{
"title": "Learn Git with Tutorials, News and Tips - Atlassian",
"url": "https://www.atlassian.com/git",
"title": "Git Documentation",
"url": "https://git-scm.com/doc",
"type": "article"
},
{
@@ -370,8 +387,8 @@
"type": "article"
},
{
"title": "Git",
"url": "https://git-scm.com/",
"title": "Git Documentation",
"url": "https://git-scm.com/doc",
"type": "article"
},
{
@@ -396,7 +413,7 @@
"type": "article"
},
{
"title": "GitHub Website",
"title": "GitHub",
"url": "https://github.com",
"type": "article"
},
@@ -424,7 +441,7 @@
},
"Ry_5Y-BK7HrkIc6X0JG1m": {
"title": "Bitbucket",
"description": "Bitbucket is a web-based version control repository hosting service owned by Atlassian. It primarily uses Git version control systems, offering both cloud-hosted and self-hosted options. Bitbucket provides features such as pull requests for code review, branch permissions, and inline commenting on code. It integrates seamlessly with other Atlassian products like Jira and Trello, making it popular among teams already using Atlassian tools. Bitbucket supports continuous integration and deployment through Bitbucket Pipelines. It offers unlimited private repositories for small teams, making it cost-effective for smaller organizations. While similar to GitHub in many aspects, Bitbucket's integration with Atlassian's ecosystem and its pricing model for private repositories are key differentiators. It's widely used for collaborative software development, particularly in enterprise environments already invested in Atlassian's suite of products.\n\nVisit the following resources to learn more:",
"description": "Bitbucket is a web-based version control repository hosting service owned by Atlassian. It primarily uses Git version control systems, offering both cloud-hosted and self-hosted options. Bitbucket provides features such as pull requests for code review, branch permissions, and inline commenting on code. It integrates seamlessly with other Atlassian products like Jira and Trello, making it popular among teams already using Atlassian tools. Bitbucket supports continuous integration and deployment through Bitbucket Pipelines. It offers unlimited private repositories for small teams, making it cost-effective for smaller organizations.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Bitbucket Website",
@@ -453,9 +470,9 @@
"description": "GitLab is a web-based DevOps platform that provides a complete solution for the software development lifecycle. It offers source code management, continuous integration/continuous deployment (CI/CD), issue tracking, and more, all integrated into a single application. GitLab supports Git repositories and includes features like merge requests (similar to GitHub's pull requests), wiki pages, and issue boards. It emphasizes DevOps practices, providing built-in CI/CD pipelines, container registry, and Kubernetes integration. GitLab offers both cloud-hosted and self-hosted options, giving organizations flexibility in deployment. Its all-in-one approach differentiates it from competitors, as it includes features that might require multiple tools in other ecosystems. GitLab's focus on the entire DevOps lifecycle, from planning to monitoring, makes it popular among enterprises and teams seeking a unified platform for their development workflows.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "GitLab Website",
"title": "GitLab",
"url": "https://gitlab.com/",
"type": "opensource"
"type": "article"
},
{
"title": "GitLab Documentation",
@@ -546,7 +563,7 @@
"type": "article"
},
{
"title": "MS SQL website",
"title": "MS SQL",
"url": "https://www.microsoft.com/en-ca/sql-server/",
"type": "article"
},
@@ -567,12 +584,12 @@
"description": "MySQL is an open-source relational database management system (RDBMS) known for its speed, reliability, and ease of use. It uses SQL (Structured Query Language) for database interactions and supports a range of features for data management, including transactions, indexing, and stored procedures. MySQL is widely used for web applications, data warehousing, and various other applications due to its scalability and flexibility. It integrates well with many programming languages and platforms, and is often employed in conjunction with web servers and frameworks in popular software stacks like LAMP (Linux, Apache, MySQL, PHP/Python/Perl). MySQL is maintained by Oracle Corporation and has a large community and ecosystem supporting its development and use.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "MySQL website",
"title": "MySQL",
"url": "https://www.mysql.com/",
"type": "article"
},
{
"title": "W3Schools - MySQL tutorial ",
"title": "W3Schools - MySQL Tutorial",
"url": "https://www.w3schools.com/mySQl/default.asp",
"type": "article"
},
@@ -603,12 +620,12 @@
"description": "Oracle Database is a highly robust, enterprise-grade relational database management system (RDBMS) developed by Oracle Corporation. Known for its scalability, reliability, and comprehensive features, Oracle Database supports complex data management tasks and mission-critical applications. It provides advanced functionalities like SQL querying, transaction management, high availability through clustering, and data warehousing. Oracle's database solutions include support for various data models, such as relational, spatial, and graph, and offer tools for security, performance optimization, and data integration. It is widely used in industries requiring large-scale, secure, and high-performance data processing.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"title": "Oracle Website",
"url": "https://www.oracle.com/database/",
"type": "article"
},
{
"title": "Official Docs",
"title": "Oracle Docs",
"url": "https://docs.oracle.com/en/database/index.html",
"type": "article"
},
@@ -626,10 +643,10 @@
},
"tD3i-8gBpMKCHB-ITyDiU": {
"title": "MariaDB",
"description": "MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry. MariaDB was created with the intention of being a more versatile, drop-in replacement version of MySQL\n\nVisit the following resources to learn more:",
"description": "MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most feature rich, stable, and sanely licensed open SQL server in the industry. MariaDB was created with the intention of being a more versatile, drop-in replacement version of MySQL\n\nVisit the following resources to learn more:",
"links": [
{
"title": "MariaDB website",
"title": "MariaDB",
"url": "https://mariadb.org/",
"type": "article"
},
@@ -782,8 +799,14 @@
},
"GwApfL4Yx-b5Y8dB9Vy__": {
"title": "Failure Modes",
"description": "Database failure modes refer to the various ways in which a database system can malfunction or cease to operate correctly. These include hardware failures (like disk crashes or network outages), software bugs, data corruption, performance degradation due to overload, and inconsistencies in distributed systems. Common failure modes involve data loss, system unavailability, replication lag in distributed databases, and deadlocks. To mitigate these, databases employ strategies such as redundancy, regular backups, transaction logging, and failover mechanisms. Understanding potential failure modes is crucial for designing robust database systems with high availability and data integrity. It informs the implementation of fault tolerance measures, recovery procedures, and monitoring systems to ensure database reliability and minimize downtime in critical applications.",
"links": []
"description": "Database failure modes refer to the various ways in which a database system can malfunction or cease to operate correctly. These include hardware failures (like disk crashes or network outages), software bugs, data corruption, performance degradation due to overload, and inconsistencies in distributed systems. Common failure modes involve data loss, system unavailability, replication lag in distributed databases, and deadlocks. To mitigate these, databases employ strategies such as redundancy, regular backups, transaction logging, and failover mechanisms. Understanding potential failure modes is crucial for designing robust database systems with high availability and data integrity. It informs the implementation of fault tolerance measures, recovery procedures, and monitoring systems to ensure database reliability and minimize downtime in critical applications.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Database Failure Modes",
"url": "https://ieeexplore.ieee.org/document/7107294/",
"type": "article"
}
]
},
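A hedged sketch of two of the mitigations named in the entry above, retries and failover, assuming hypothetical primary and replica endpoints that are not part of this diff:

```typescript
// Retry with exponential backoff, then fail over to a replica endpoint.
// Both URLs are hypothetical placeholders.
const ENDPOINTS = ['https://db-primary.example.com', 'https://db-replica.example.com'];

async function queryWithFailover(path: string, retries = 3): Promise<unknown> {
  for (const base of ENDPOINTS) {
    for (let attempt = 0; attempt < retries; attempt++) {
      try {
        const res = await fetch(`${base}${path}`);
        if (res.ok) return await res.json();
      } catch {
        // network-level failure: fall through to the backoff below and retry
      }
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100)); // 100, 200, 400 ms
    }
    // this endpoint is exhausted: fail over to the next one
  }
  throw new Error('all endpoints failed');
}
```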
"rq_y_OBMD9AH_4aoecvAi": {
"title": "Transactions",
@@ -921,7 +944,7 @@
"description": "Data replication is the process of creating and maintaining multiple copies of the same data across different locations or nodes in a distributed system. It enhances data availability, reliability, and performance by ensuring that data remains accessible even if one or more nodes fail. Replication can be synchronous (changes are applied to all copies simultaneously) or asynchronous (changes are propagated after being applied to the primary copy). It's widely used in database systems, content delivery networks, and distributed file systems. Replication strategies include master-slave, multi-master, and peer-to-peer models. While improving fault tolerance and read performance, replication introduces challenges in maintaining data consistency across copies and managing potential conflicts. Effective replication strategies must balance consistency, availability, and partition tolerance, often in line with the principles of the CAP theorem.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What is data replication?",
"title": "Data Replication? - IBM",
"url": "https://www.ibm.com/topics/data-replication",
"type": "article"
},
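The synchronous/asynchronous distinction in the description above can be sketched with in-memory stand-ins for the nodes; all names here are illustrative.

```typescript
// Synchronous vs. asynchronous replication, with in-memory maps standing in
// for a primary node and a replica.
const primary = new Map<string, string>();
const replica = new Map<string, string>();

// Synchronous: the write is acknowledged only after the replica has applied it.
function writeSync(key: string, value: string): void {
  primary.set(key, value);
  replica.set(key, value); // in a real system this is a network round trip
}

// Asynchronous: acknowledge immediately and propagate in the background,
// which is faster but opens a replication-lag window where reads can be stale.
function writeAsync(key: string, value: string): void {
  primary.set(key, value);
  setTimeout(() => replica.set(key, value), 0);
}
```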
@@ -984,7 +1007,7 @@
"description": "JSON or JavaScript Object Notation is an encoding scheme that is designed to eliminate the need for an ad-hoc code for each application to communicate with servers that communicate in a defined way. JSON API module exposes an implementation for data stores and data structures, such as entity types, bundles, and fields.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"title": "JSON API",
"url": "https://jsonapi.org/",
"type": "article"
},
@@ -1014,15 +1037,15 @@
"url": "https://swagger.io/tools/swagger-editor/",
"type": "article"
},
{
"title": " REST API and OpenAPI: It’s Not an Either/Or Question ",
"url": "https://www.youtube.com/watch?v=pRS9LRBgjYg",
"type": "article"
},
{
"title": "OpenAPI 3.0: How to Design and Document APIs with the Latest OpenAPI Specification 3.0",
"url": "https://www.youtube.com/watch?v=6kwmW_p_Tig",
"type": "video"
},
{
"title": " REST API and OpenAPI: It’s Not an Either/Or Question",
"url": "https://www.youtube.com/watch?v=pRS9LRBgjYg",
"type": "video"
}
]
},
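For orientation, a minimal OpenAPI 3.0 document looks roughly like the following, written here as a TypeScript object; the `/users` endpoint it describes is an invented example.

```typescript
// A minimal but complete OpenAPI 3.0 document, expressed as a TypeScript object.
// The "/users" endpoint is an invented example.
const openApiSpec = {
  openapi: '3.0.0',
  info: { title: 'Example API', version: '1.0.0' },
  paths: {
    '/users': {
      get: {
        summary: 'List users',
        responses: {
          '200': { description: 'A JSON array of users' },
        },
      },
    },
  },
};

console.log(JSON.stringify(openApiSpec, null, 2)); // printable as spec JSON
```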
@@ -1109,7 +1132,7 @@
"type": "article"
},
{
"title": "GraphQL Official Website",
"title": "GraphQL",
"url": "https://graphql.org/",
"type": "article"
},
@@ -1130,7 +1153,7 @@
"description": "Client-side caching is a technique where web browsers or applications store data locally on the user's device to improve performance and reduce server load. It involves saving copies of web pages, images, scripts, and other resources on the client's system for faster access on subsequent visits. Modern browsers implement various caching mechanisms, including HTTP caching (using headers like Cache-Control and ETag), service workers for offline functionality, and local storage APIs. Client-side caching significantly reduces network traffic and load times, enhancing user experience, especially on slower connections. However, it requires careful management to balance improved performance with the need for up-to-date content. Developers must implement appropriate cache invalidation strategies and consider cache-busting techniques for critical updates. Effective client-side caching is crucial for creating responsive, efficient web applications while minimizing server resource usage.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Client-side Caching",
"title": "Client Side Caching",
"url": "https://redis.io/docs/latest/develop/use/client-side-caching/",
"type": "article"
},
@@ -1143,13 +1166,18 @@
},
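A minimal sketch of the conditional-request mechanism (ETag plus `If-None-Match`) that the client-side caching entry above refers to; the in-memory cache and URL handling are simplified assumptions.

```typescript
// Conditional requests with ETag: revalidate a cached copy instead of
// re-downloading it. The cache here is a simple in-memory map.
const etagCache = new Map<string, { etag: string; body: string }>();

async function cachedGet(url: string): Promise<string> {
  const cached = etagCache.get(url);
  const headers: Record<string, string> = {};
  if (cached) headers['If-None-Match'] = cached.etag; // ask the server to revalidate

  const res = await fetch(url, { headers });
  if (res.status === 304 && cached) return cached.body; // Not Modified: reuse the copy

  const body = await res.text();
  const etag = res.headers.get('ETag');
  if (etag) etagCache.set(url, { etag, body });
  return body;
}
```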
"Nq2BO53bHJdFT1rGZPjYx": {
"title": "CDN",
"description": "A Content Delivery Network (CDN) service aims to provide high availability and performance improvements of websites. This is achieved with fast delivery of website assets and content typically via geographically closer endpoints to the client requests. Traditional commercial CDNs (Amazon CloudFront, Akamai, CloudFlare and Fastly) provide servers across the globe which can be used for this purpose. Serving assets and contents via a CDN reduces bandwidth on website hosting, provides an extra layer of caching to reduce potential outages and can improve website security as well\n\nVisit the following resources to learn more:",
"description": "A Content Delivery Network (CDN) service aims to provide high availability and performance improvements of websites. This is achieved with fast delivery of website assets and content typically via geographically closer endpoints to the client requests.\n\nTraditional commercial CDNs (Amazon CloudFront, Akamai, CloudFlare and Fastly) provide servers across the globe which can be used for this purpose. Serving assets and contents via a CDN reduces bandwidth on website hosting, provides an extra layer of caching to reduce potential outages and can improve website security as well\n\nVisit the following resources to learn more:",
"links": [
{
"title": "CloudFlare - What is a CDN? | How do CDNs work?",
"url": "https://www.cloudflare.com/en-ca/learning/cdn/what-is-a-cdn/",
"type": "article"
},
{
"title": "AWS - CDN",
"url": "https://aws.amazon.com/what-is/cdn/",
"type": "article"
},
{
"title": "What is Cloud CDN?",
"url": "https://www.youtube.com/watch?v=841kyd_mfH0",
@@ -1190,8 +1218,19 @@
},
"ELj8af7Mi38kUbaPJfCUR": {
"title": "Caching",
"description": "Caching is a technique used in computing to store and retrieve frequently accessed data quickly, reducing the need to fetch it from the original, slower source repeatedly. It involves keeping a copy of data in a location that's faster to access than its primary storage. Caching can occur at various levels, including browser caching, application-level caching, and database caching. It significantly improves performance by reducing latency, decreasing network traffic, and lowering the load on servers or databases. Common caching strategies include time-based expiration, least recently used (LRU) algorithms, and write-through or write-back policies. While caching enhances speed and efficiency, it also introduces challenges in maintaining data consistency and freshness. Effective cache management is crucial in balancing performance gains with the need for up-to-date information in dynamic systems.",
"links": []
"description": "Caching is a technique used in computing to store and retrieve frequently accessed data quickly, reducing the need to fetch it from the original, slower source repeatedly. It involves keeping a copy of data in a location that's faster to access than its primary storage. Caching can occur at various levels, including browser caching, application-level caching, and database caching. It significantly improves performance by reducing latency, decreasing network traffic, and lowering the load on servers or databases. Common caching strategies include time-based expiration, least recently used (LRU) algorithms, and write-through or write-back policies. While caching enhances speed and efficiency, it also introduces challenges in maintaining data consistency and freshness. Effective cache management is crucial in balancing performance gains with the need for up-to-date information in dynamic systems.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What is Caching - AWS",
"url": "https://aws.amazon.com/caching/",
"type": "article"
},
{
"title": "Caching - Cloudflare",
"url": "https://www.cloudflare.com/learning/cdn/what-is-caching/",
"type": "article"
}
]
},
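The least recently used (LRU) strategy mentioned in the caching entry above is compact enough to sketch; this relies on the fact that JavaScript's `Map` iterates keys in insertion order.

```typescript
// A tiny least-recently-used (LRU) cache. Map preserves insertion order,
// so the first key is always the least recently used one.
class LruCache<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.map.delete(key);
      this.map.set(key, value); // re-insert to mark as most recently used
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key); // no-op if absent; refreshes position if present
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      const oldest = this.map.keys().next().value; // least recently used key
      if (oldest !== undefined) this.map.delete(oldest);
    }
  }
}

const cache = new LruCache<string, string>(2);
cache.set('a', '1');
cache.set('b', '2');
cache.set('c', '3'); // evicts "a"
```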
"RBrIP5KbVQ2F0ly7kMfTo": {
"title": "Web Security",
@@ -1333,7 +1372,7 @@
"type": "article"
},
{
"title": "DevOps CI/CD Explained in 100 Seconds by Fireship",
"title": "DevOps CI/CD Explained in 100 Seconds",
"url": "https://www.youtube.com/watch?v=scEDHsr3APg",
"type": "video"
},
@@ -1581,7 +1620,7 @@
},
"8DmabQJXlrT__COZrDVTV": {
"title": "Twelve Factor Apps",
"description": "The Twelve-Factor App methodology is a set of principles for building modern, scalable, and maintainable web applications, particularly suited for cloud environments. It emphasizes best practices for developing applications in a way that facilitates portability, scalability, and ease of deployment. Key principles include:\n\n1. **Codebase**: One codebase tracked in version control, with many deploys.\n2. **Dependencies**: Explicitly declare and isolate dependencies.\n3. **Config**: Store configuration in the environment.\n4. **Backing Services**: Treat backing services as attached resources.\n5. **Build, Release, Run**: Separate build and run stages.\n6. **Processes**: Execute the app as one or more stateless processes.\n7. **Port Binding**: Export services via port binding.\n8. **Concurrency**: Scale out via the process model.\n9. **Disposability**: Maximize robustness with fast startup and graceful shutdown.\n10. **Dev/Prod Parity**: Keep development, staging, and production environments as similar as possible.\n11. **Logs**: Treat logs as streams of events.\n12. **Admin Processes**: Run administrative or management tasks as one-off processes.\n\nThese principles help create applications that are easy to deploy, manage, and scale in cloud environments, promoting operational simplicity and consistency.\n\nVisit the following resources to learn more:",
"description": "The Twelve-Factor App methodology is a set of principles for building modern, scalable, and maintainable web applications, particularly suited for cloud environments. It emphasizes best practices for developing applications in a way that facilitates portability, scalability, and ease of deployment. Key principles include:\n\n1. **Codebase**: One codebase tracked in version control, with many deploys.\n2. **Dependencies**: Explicitly declare and isolate dependencies.\n3. **Config**: Store configuration in the environment.\n4. **Backing Services**: Treat backing services as attached resources.\n5. **Build, Release, Run**: Separate build and run stages.\n6. **Processes**: Execute the app as one or more stateless processes.\n7. **Port Binding**: Export services via port binding.\n8. **Concurrency**: Scale out via the process model.\n9. **Disposability**: Maximize robustness with fast startup and graceful shutdown.\n10. **Dev/Prod Parity**: Keep development, staging, and production environments as similar as possible.\n11. **Logs**: Treat logs as streams of events.\n12. **Admin Processes**: Run administrative or management tasks as one-off processes.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "The Twelve-Factor App",
@@ -1647,7 +1686,7 @@
"description": "Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant data processing. It acts as a message broker, allowing systems to publish and subscribe to streams of records, similar to a distributed commit log. Kafka is highly scalable and can handle large volumes of data with low latency, making it ideal for real-time analytics, log aggregation, and data integration. It features topics for organizing data streams, partitions for parallel processing, and replication for fault tolerance, enabling reliable and efficient handling of large-scale data flows across distributed systems.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Apache Kafka quickstart",
"title": "Apache Kafka",
"url": "https://kafka.apache.org/quickstart",
"type": "article"
},
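A minimal publish sketch for the Kafka entry above, using the community `kafkajs` client; the broker address and the `events` topic are assumptions for the example, not part of this diff.

```typescript
import { Kafka } from 'kafkajs';

// Assumes a broker at localhost:9092 and a hypothetical "events" topic.
const kafka = new Kafka({ clientId: 'example-app', brokers: ['localhost:9092'] });

async function publishSignup(userId: string): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'events',
    // the key controls partition assignment, so events for one user stay ordered
    messages: [{ key: userId, value: JSON.stringify({ action: 'signup', userId }) }],
  });
  await producer.disconnect();
}

publishSignup('user-42').catch(console.error);
```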
@@ -1704,12 +1743,12 @@
"type": "article"
},
{
"title": "Getting started with LXD Containerization",
"title": "Getting Started with LXD Containerization",
"url": "https://www.youtube.com/watch?v=aIwgPKkVj8s",
"type": "video"
},
{
"title": "Getting started with LXC containers",
"title": "Getting Started with LXC containers",
"url": "https://youtu.be/CWmkSj_B-wo",
"type": "video"
}
@@ -1767,7 +1806,7 @@
"description": "Server-Sent Events (SSE) is a technology for sending real-time updates from a server to a web client over a single, persistent HTTP connection. It enables servers to push updates to clients efficiently and automatically reconnects if the connection is lost. SSE is ideal for applications needing one-way communication, such as live notifications or real-time data feeds, and uses a simple text-based format for transmitting event data, which can be easily handled by clients using the `EventSource` API in JavaScript.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Server-Sent Events - MDN",
"title": "Server Sent Events - MDN",
"url": "https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events",
"type": "article"
},
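Since the entry above names the `EventSource` API, here is a minimal client sketch; the endpoint is a placeholder.

```typescript
// A minimal Server-Sent Events client using the browser's EventSource API.
// "/api/updates" is a placeholder endpoint.
const source = new EventSource('/api/updates');

source.onmessage = (event: MessageEvent) => {
  console.log('update:', event.data); // each server push arrives as an event
};

source.onerror = () => {
  // EventSource reconnects automatically after a dropped connection
  console.warn('connection lost, retrying...');
};
```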
@@ -1783,7 +1822,7 @@
"description": "Nginx is a high-performance, open-source web server and reverse proxy server known for its efficiency, scalability, and low resource consumption. Originally developed as a web server, Nginx is also commonly used as a load balancer, HTTP cache, and mail proxy. It excels at handling a large number of concurrent connections due to its asynchronous, event-driven architecture. Nginx's features include support for serving static content, handling dynamic content through proxying to application servers, and providing SSL/TLS termination. Its modular design allows for extensive customization and integration with various applications and services, making it a popular choice for modern web infrastructures.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"title": "Nginx Website",
"url": "https://nginx.org/",
"type": "article"
},
@@ -1809,7 +1848,7 @@
"description": "Caddy is a modern, open-source web server written in Go. It's known for its simplicity, automatic HTTPS encryption, and HTTP/2 support out of the box. Caddy stands out for its ease of use, with a simple configuration syntax and the ability to serve static files with zero configuration. It automatically obtains and renews SSL/TLS certificates from Let's Encrypt, making secure deployments straightforward. Caddy supports various plugins and modules for extended functionality, including reverse proxying, load balancing, and dynamic virtual hosting. It's designed with security in mind, implementing modern web standards by default. While it may not match the raw performance of servers like Nginx in extremely high-load scenarios, Caddy's simplicity, built-in security features, and low resource usage make it an attractive choice for many web hosting needs, particularly for smaller to medium-sized projects or developers seeking a hassle-free server setup.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "caddyserver/caddy",
"title": "caddyserver/caddy - Caddy on GitHub",
"url": "https://github.com/caddyserver/caddy",
"type": "opensource"
},
@@ -1856,7 +1895,7 @@
"description": "Microsoft Internet Information Services (IIS) is a flexible, secure, and high-performance web server developed by Microsoft for hosting and managing web applications and services on Windows Server. IIS supports a variety of web technologies, including [ASP.NET](http://ASP.NET), PHP, and static content. It provides features such as request handling, authentication, SSL/TLS encryption, and URL rewriting. IIS also offers robust management tools, including a graphical user interface and command-line options, for configuring and monitoring web sites and applications. It is commonly used for deploying enterprise web applications and services in a Windows-based environment, offering integration with other Microsoft technologies and services.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"title": "Microsoft -IIS",
"url": "https://www.iis.net/",
"type": "article"
},
@@ -1942,7 +1981,7 @@
},
"xPvVwGQw28uMeLYIWn8yn": {
"title": "Memcached",
"description": "Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the `libevent` library. Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.\n\nMemcached has no internal mechanism to track misses which may happen. However, some third-party utilities provide this functionality.\n\nVisit the following resources to learn more:",
"description": "Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the `libevent` library. Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "memcached/memcached",
@@ -2091,7 +2130,7 @@
"type": "article"
},
{
"title": "Backpressure explained — the resisted flow of data through software",
"title": "Backpressure explained — The Resisted Flow of Data through Software",
"url": "https://medium.com/@jayphelps/backpressure-explained-the-flow-of-data-through-software-2350b3e77ce7",
"type": "article"
},
@@ -2136,7 +2175,7 @@
},
"f7iWBkC0X7yyCoP_YubVd": {
"title": "Migration Strategies",
"description": "Migration strategies involve planning and executing the transition of applications, data, or infrastructure from one environment to another, such as from on-premises systems to the cloud or between different cloud providers. Key strategies include:\n\n1. **Rehost (Lift and Shift)**: Moving applications as-is to the new environment with minimal changes, which is often the quickest but may not fully leverage new platform benefits.\n2. **Replatform**: Making some optimizations or changes to adapt applications for the new environment, enhancing performance or scalability while retaining most of the existing architecture.\n3. **Refactor**: Redesigning and modifying applications to optimize for the new environment, often taking advantage of new features and improving functionality or performance.\n4. **Repurchase**: Replacing existing applications with new, often cloud-based, solutions that better meet current needs.\n5. **Retain**: Keeping certain applications or systems in their current environment due to specific constraints or requirements.\n6. **Retire**: Decommissioning applications that are no longer needed or are redundant.\n\nEach strategy has its own trade-offs in terms of cost, complexity, and benefits, and the choice depends on factors like the application’s architecture, business needs, and resource availability.\n\nVisit the following resources to learn more:",
"description": "Migration strategies involve planning and executing the transition of applications, data, or infrastructure from one environment to another, such as from on-premises systems to the cloud or between different cloud providers. Key strategies include:\n\n1. **Rehost (Lift and Shift)**: Moving applications as-is to the new environment with minimal changes, which is often the quickest but may not fully leverage new platform benefits.\n2. **Replatform**: Making some optimizations or changes to adapt applications for the new environment, enhancing performance or scalability while retaining most of the existing architecture.\n3. **Refactor**: Redesigning and modifying applications to optimize for the new environment, often taking advantage of new features and improving functionality or performance.\n4. **Repurchase**: Replacing existing applications with new, often cloud-based, solutions that better meet current needs.\n5. **Retain**: Keeping certain applications or systems in their current environment due to specific constraints or requirements.\n6. **Retire**: Decommissioning applications that are no longer needed or are redundant.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Databases as a Challenge for Continuous Delivery",
@@ -2152,7 +2191,7 @@
},
"osQlGGy38xMcKLtgZtWaZ": {
"title": "Types of Scaling",
"description": "Horizontal scaling (scaling out/in) involves adding or removing instances of resources, such as servers or containers, to handle increased or decreased loads. It distributes the workload across multiple instances to improve performance and redundancy. This method enhances the system's capacity by expanding the number of nodes in a distributed system.\n\nVertical scaling (scaling up/down) involves increasing or decreasing the resources (CPU, memory, storage) of a single instance or server to handle more load or reduce capacity. This method improves performance by upgrading the existing hardware or virtual machine but has limits based on the maximum capacity of the individual resource.\n\nBoth approaches have their advantages: horizontal scaling offers better fault tolerance and flexibility, while vertical scaling is often simpler to implement but can be limited by the hardware constraints of a single machine.\n\nVisit the following resources to learn more:",
"description": "Horizontal scaling (scaling out/in) involves adding or removing instances of resources, such as servers or containers, to handle increased or decreased loads. It distributes the workload across multiple instances to improve performance and redundancy. This method enhances the system's capacity by expanding the number of nodes in a distributed system.\n\nVertical scaling (scaling up/down) involves increasing or decreasing the resources (CPU, memory, storage) of a single instance or server to handle more load or reduce capacity. This method improves performance by upgrading the existing hardware or virtual machine but has limits based on the maximum capacity of the individual resource.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Horizontal vs Vertical Scaling",
@@ -2207,7 +2246,7 @@
"description": "Monitoring involves continuously observing and tracking the performance, availability, and health of systems, applications, and infrastructure. It typically includes collecting and analyzing metrics, logs, and events to ensure systems are operating within desired parameters. Monitoring helps detect anomalies, identify potential issues before they escalate, and provides insights into system behavior. It often involves tools and platforms that offer dashboards, alerts, and reporting features to facilitate real-time visibility and proactive management. Effective monitoring is crucial for maintaining system reliability, performance, and for supporting incident response and troubleshooting.\n\nA few popular tools are Grafana, Sentry, Mixpanel, NewRelic.",
"links": [
{
"title": "Top monitoring tools 2024",
"title": "Top Monitoring Tools",
"url": "https://thectoclub.com/tools/best-application-monitoring-software/",
"type": "article"
},
@@ -2307,9 +2346,9 @@
"description": "Bcrypt is a password-hashing function designed to securely hash passwords for storage in databases. Created by Niels Provos and David Mazières, it's based on the Blowfish cipher and incorporates a salt to protect against rainbow table attacks. Bcrypt's key feature is its adaptive nature, allowing for the adjustment of its cost factor to make it slower as computational power increases, thus maintaining resistance against brute-force attacks over time. It produces a fixed-size hash output, typically 60 characters long, which includes the salt and cost factor. Bcrypt is widely used in many programming languages and frameworks due to its security strength and relative ease of implementation. Its deliberate slowness in processing makes it particularly effective for password storage, where speed is not a priority but security is paramount.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "bcrypts npm package",
"title": "bcrypt",
"url": "https://www.npmjs.com/package/bcrypt",
"type": "article"
"type": "opensource"
},
{
"title": "Understanding bcrypt",
@@ -2429,7 +2468,7 @@
},
"TZ0BWOENPv6pQm8qYB8Ow": {
"title": "Server Security",
"description": "Server security involves protecting servers from threats and vulnerabilities to ensure the confidentiality, integrity, and availability of the data and services they manage. Key practices include:\n\n1. **Patch Management**: Regularly updating software and operating systems to fix vulnerabilities.\n2. **Access Control**: Implementing strong authentication mechanisms and restricting access to authorized users only.\n3. **Firewalls and Intrusion Detection**: Using firewalls to block unauthorized access and intrusion detection systems to monitor and respond to suspicious activities.\n4. **Encryption**: Encrypting data both in transit and at rest to protect sensitive information from unauthorized access.\n5. **Security Hardening**: Configuring servers with minimal services and features, applying security best practices to reduce the attack surface.\n6. **Regular Backups**: Performing regular backups to ensure data can be restored in case of loss or corruption.\n7. **Monitoring and Logging**: Continuously monitoring server activity and maintaining logs for auditing and detecting potential security incidents.\n\nEffective server security is crucial for safeguarding against attacks, maintaining system stability, and protecting sensitive data.\n\nLearn more from the following resources:",
"description": "Server security involves protecting servers from threats and vulnerabilities to ensure the confidentiality, integrity, and availability of the data and services they manage. Key practices include:\n\n1. **Patch Management**: Regularly updating software and operating systems to fix vulnerabilities.\n2. **Access Control**: Implementing strong authentication mechanisms and restricting access to authorized users only.\n3. **Firewalls and Intrusion Detection**: Using firewalls to block unauthorized access and intrusion detection systems to monitor and respond to suspicious activities.\n4. **Encryption**: Encrypting data both in transit and at rest to protect sensitive information from unauthorized access.\n5. **Security Hardening**: Configuring servers with minimal services and features, applying security best practices to reduce the attack surface.\n6. **Regular Backups**: Performing regular backups to ensure data can be restored in case of loss or corruption.\n7. **Monitoring and Logging**: Continuously monitoring server activity and maintaining logs for auditing and detecting potential security incidents.\n\nLearn more from the following resources:",
"links": [
{
"title": "What is a hardened server?",
@@ -2600,7 +2639,7 @@
},
"hkxw9jPGYphmjhTjw8766": {
"title": "DNS and how it works?",
"description": "DNS (Domain Name System) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It translates human-readable domain names (like [www.example.com](http://www.example.com)) into IP addresses (like 192.0.2.1) that computers use to identify each other. DNS servers distributed worldwide work together to resolve these queries, forming a global directory service. The system uses a tree-like structure with root servers at the top, followed by top-level domain servers (.com, .org, etc.), authoritative name servers for specific domains, and local DNS servers. DNS is crucial for the functioning of the Internet, enabling users to access websites and services using memorable names instead of numerical IP addresses. It also supports email routing, service discovery, and other network protocols.\n\nVisit the following resources to learn more:",
"description": "DNS (Domain Name System) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It translates human-readable domain names (like `www.example.com`) into IP addresses (like 192.0.2.1) that computers use to identify each other. DNS servers distributed worldwide work together to resolve these queries, forming a global directory service. The system uses a tree-like structure with root servers at the top, followed by top-level domain servers (.com, .org, etc.), authoritative name servers for specific domains, and local DNS servers. DNS is crucial for the functioning of the Internet, enabling users to access websites and services using memorable names instead of numerical IP addresses. It also supports email routing, service discovery, and other network protocols.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What is DNS?",
@@ -2811,7 +2850,7 @@
"description": "OpenID is an open standard for decentralized authentication that allows users to log in to multiple websites and applications using a single set of credentials, managed by an identity provider (IdP). It enables users to authenticate their identity through an external service, simplifying the login process and reducing the need for multiple usernames and passwords. OpenID typically works in conjunction with OAuth 2.0 for authorization, allowing users to grant access to their data while maintaining security. This approach enhances user convenience and streamlines identity management across various platforms.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"title": "OpenID Website",
"url": "https://openid.net/",
"type": "article"
},
@@ -2839,7 +2878,7 @@
},
"UCHtaePVxS-0kpqlYxbfC": {
"title": "SAML",
"description": "Security Assertion Markup Language (SAML)\n-----------------------------------------\n\nSecurity Assertion Markup Language (SAML) is an XML-based framework used for single sign-on (SSO) and identity federation, enabling users to authenticate once and gain access to multiple applications or services. It allows for the exchange of authentication and authorization data between an identity provider (IdP) and a service provider (SP). SAML assertions are XML documents that contain user identity information and attributes, and are used to convey authentication credentials and permissions. By implementing SAML, organizations can streamline user management, enhance security through centralized authentication, and simplify the user experience by reducing the need for multiple logins across different systems.\n\nLearn more from the following resources:",
"description": "Security Assertion Markup Language (SAML) is an XML-based framework used for single sign-on (SSO) and identity federation, enabling users to authenticate once and gain access to multiple applications or services. It allows for the exchange of authentication and authorization data between an identity provider (IdP) and a service provider (SP). SAML assertions are XML documents that contain user identity information and attributes, and are used to convey authentication credentials and permissions. By implementing SAML, organizations can streamline user management, enhance security through centralized authentication, and simplify the user experience by reducing the need for multiple logins across different systems.\n\nLearn more from the following resources:",
"links": [
{
"title": "SAML Explained in Plain English",
@ -2884,17 +2923,17 @@
"description": "Solr is an open-source, highly scalable search platform built on Apache Lucene, designed for full-text search, faceted search, and real-time indexing. It provides powerful features for indexing and querying large volumes of data with high performance and relevance. Solr supports complex queries, distributed searching, and advanced text analysis, including tokenization and stemming. It offers features such as faceted search, highlighting, and geographic search, and is commonly used for building search engines and data retrieval systems in various applications, from e-commerce to content management.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "apache/solr",
"title": "Solr on Github",
"url": "https://github.com/apache/solr",
"type": "opensource"
},
{
"title": "Official Website",
"title": "Solr Website",
"url": "https://solr.apache.org/",
"type": "article"
},
{
"title": "Official Documentation",
"title": "Solr Documentation",
"url": "https://solr.apache.org/resources.html#documentation",
"type": "article"
},
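To make the querying side of the Solr entry concrete, here is a hedged sketch that calls the HTTP `select` endpoint with Python's `requests`; the core name `mycore`, the field names, and the local URL are assumptions about your setup, not part of the entry above:

```python
import requests

# Query a locally running Solr core over its HTTP search API.
SOLR_URL = "http://localhost:8983/solr/mycore/select"  # core name is illustrative

resp = requests.get(SOLR_URL, params={"q": "title:search", "rows": 5, "wt": "json"})
resp.raise_for_status()

# Solr wraps matching documents in response.docs.
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```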
@ -2910,7 +2949,7 @@
"description": "Real-time data refers to information that is processed and made available immediately or with minimal delay, allowing users or systems to react promptly to current conditions. This type of data is essential in applications requiring immediate updates and responses, such as financial trading platforms, online gaming, real-time analytics, and monitoring systems. Real-time data processing involves capturing, analyzing, and delivering information as it is generated, often using technologies like stream processing frameworks (e.g., Apache Kafka, Apache Flink) and low-latency databases. Effective real-time data systems can handle high-speed data flows, ensuring timely and accurate decision-making.\n\nLearn more from the following resources:",
"links": [
{
"title": "Real-time data - Wiki",
"title": "Real-time Data - Wiki",
"url": "https://en.wikipedia.org/wiki/Real-time_data",
"type": "article"
},
@ -2942,7 +2981,7 @@
"description": "Short polling is a technique where a client periodically sends requests to a server at regular intervals to check for updates or new data. The server responds with the current state or any changes since the last request. While simple to implement and compatible with most HTTP infrastructures, short polling can be inefficient due to the frequent network requests and potential for increased latency in delivering updates. It contrasts with long polling and WebSockets, which offer more efficient mechanisms for real-time communication. Short polling is often used when real-time requirements are less stringent and ease of implementation is a priority.\n\nLearn more from the following resources:",
"links": [
{
"title": "Amazon SQS short and long polling",
"title": "Amazon SQS Short and Long Polling",
"url": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html",
"type": "article"
},
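The polling loop described in the short polling entry above fits in a few lines of Python; the endpoint URL, the `since` parameter, and the JSON response shape are all placeholders for whatever API you are polling:

```python
import time
import requests

POLL_URL = "https://api.example.com/updates"  # placeholder endpoint
POLL_INTERVAL_SECONDS = 5

last_seen_id = None
while True:
    # Ask for anything newer than what we have already seen.
    resp = requests.get(POLL_URL, params={"since": last_seen_id})
    for update in resp.json():  # assumes the API returns a JSON list
        print("new update:", update)
        last_seen_id = update.get("id")
    # Sleeping between requests is exactly where short polling pays its
    # cost: an update arriving now waits up to a full interval to be seen.
    time.sleep(POLL_INTERVAL_SECONDS)
```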
@ -2984,7 +3023,7 @@
"description": "Amazon DynamoDB is a fully managed, serverless NoSQL database service provided by Amazon Web Services (AWS). It offers high-performance, scalable, and flexible data storage for applications of any scale. DynamoDB supports both key-value and document data models, providing fast and predictable performance with seamless scalability. It features automatic scaling, built-in security, backup and restore options, and global tables for multi-region deployment. DynamoDB excels in handling high-traffic web applications, gaming backends, mobile apps, and IoT solutions. It offers consistent single-digit millisecond latency at any scale and supports both strongly consistent and eventually consistent read models. With its integration into the AWS ecosystem, on-demand capacity mode, and support for transactions, DynamoDB is widely used for building highly responsive and scalable applications, particularly those with unpredictable workloads or requiring low-latency data access.\n\nLearn more from the following resources:",
"links": [
{
"title": "AWS DynamoDB Website",
"title": "AWS DynamoDB",
"url": "https://aws.amazon.com/dynamodb/",
"type": "article"
},
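A minimal sketch of the key-value access pattern the DynamoDB entry describes, using the official `boto3` SDK; the `Users` table, its `user_id` partition key, and configured AWS credentials are assumptions:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # assumes this table already exists

# Single-item writes and reads are the bread and butter of DynamoDB.
table.put_item(Item={"user_id": "42", "name": "Ada", "plan": "pro"})

resp = table.get_item(Key={"user_id": "42"})
print(resp.get("Item"))
```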
@ -3002,10 +3041,10 @@
},
"RyJFLLGieJ8Xjt-DlIayM": {
"title": "Firebase",
"description": "Firebase is a comprehensive mobile and web application development platform owned by Google. It provides a suite of cloud-based services that simplify app development, hosting, and scaling. Key features include real-time database, cloud storage, authentication, hosting, cloud functions, and analytics. Firebase offers real-time synchronization, allowing data to be updated across clients instantly. Its authentication service supports multiple providers, including email/password, social media logins, and phone authentication. The platform's serverless architecture enables developers to focus on front-end development without managing backend infrastructure. Firebase also provides tools for app testing, crash reporting, and performance monitoring. While it excels in rapid prototyping and building real-time applications, its proprietary nature and potential for vendor lock-in are considerations for large-scale or complex applications. Firebase's ease of use and integration with Google Cloud Platform make it popular for startups and projects requiring quick deployment.\n\nLearn more from the following resources:",
"description": "Firebase is a comprehensive mobile and web application development platform owned by Google. It provides a suite of cloud-based services that simplify app development, hosting, and scaling. Key features include real-time database, cloud storage, authentication, hosting, cloud functions, and analytics. Firebase offers real-time synchronization, allowing data to be updated across clients instantly. Its authentication service supports multiple providers, including email/password, social media logins, and phone authentication. The platform's serverless architecture enables developers to focus on front-end development without managing backend infrastructure. Firebase also provides tools for app testing, crash reporting, and performance monitoring.\n\nLearn more from the following resources:",
"links": [
{
"title": "The ultimate guide to Firebase",
"title": "The Ultimate Guide to Firebase",
"url": "https://fireship.io/lessons/the-ultimate-beginners-guide-to-firebase/",
"type": "course"
},
@ -3042,7 +3081,7 @@
"description": "SQLite is a lightweight, serverless, self-contained SQL database engine that is designed for simplicity and efficiency. It is widely used in embedded systems and applications where a full-featured database server is not required, such as mobile apps, desktop applications, and small to medium-sized websites. SQLite stores data in a single file, which makes it easy to deploy and manage. It supports standard SQL queries and provides ACID (Atomicity, Consistency, Isolation, Durability) compliance to ensure data integrity. SQLite’s small footprint, minimal configuration, and ease of use make it a popular choice for applications needing a compact, high-performance database solution.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "SQLite website",
"title": "SQLite",
"url": "https://www.sqlite.org/index.html",
"type": "article"
},
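The serverless, single-file model the SQLite entry describes is easy to see with Python's built-in `sqlite3` module; a minimal sketch using an in-memory database:

```python
import sqlite3

# No server process: the whole database lives in one file or in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello, sqlite",))
conn.commit()

for row in conn.execute("SELECT id, body FROM notes"):
    print(row)  # (1, 'hello, sqlite')

conn.close()
```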
@ -3104,7 +3143,7 @@
"type": "video"
},
{
"title": "What is time series data?",
"title": "What is Time Series Data?",
"url": "https://www.youtube.com/watch?v=Se5ipte9DMY",
"type": "video"
}
@ -3203,5 +3242,21 @@
"type": "video"
}
]
},
"ZsZvStCvKwFhlBYe9HGhl": {
"title": "Migrations",
"description": "Database migrations are a version-controlled way to manage and apply incremental changes to a database schema over time, allowing developers to modify the database structure (e.g., adding tables, altering columns) without affecting existing data. They ensure that the database evolves alongside application code in a consistent, repeatable manner across environments (e.g., development, testing, production), while maintaining compatibility with older versions of the schema. Migrations are typically written in SQL or a database-agnostic language, and are executed using migration tools like Liquibase, Flyway, or built-in ORM features such as Django or Rails migrations.\n\nLearn more from the following resources:",
"links": [
{
"title": "What are Database Migrations?",
"url": "https://www.prisma.io/dataguide/types/relational/what-are-database-migrations",
"type": "article"
},
{
"title": "Database Migrations for Beginners",
"url": "https://www.youtube.com/watch?v=dJDBP7pPA-o",
"type": "video"
}
]
}
}
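To make the Migrations entry above concrete, here is a deliberately minimal migration runner in Python over SQLite; real projects would reach for Flyway, Liquibase, or their ORM's built-in migrations, and the table names here are illustrative:

```python
import sqlite3

# Ordered, version-controlled schema changes. Each migration runs at
# most once; applied versions are recorded in a bookkeeping table.
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"),
    ("002_add_users_name", "ALTER TABLE users ADD COLUMN name TEXT"),
]

def migrate(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
            conn.commit()

migrate(sqlite3.connect("app.db"))
```

Recording each applied version is what lets the same script run safely in every environment: a fresh database applies everything, an up-to-date one applies nothing.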

@ -372,16 +372,6 @@
"url": "https://www.coursera.org/lecture/data-structures/doubly-linked-lists-jpGKD",
"type": "course"
},
{
"title": "CS 61B Lecture 7: Linked Lists I",
"url": "https://archive.org/details/ucberkeley_webcast_htzJdKoEmO0",
"type": "article"
},
{
"title": "CS 61B Lecture 7: Linked Lists II",
"url": "https://archive.org/details/ucberkeley_webcast_-c4I3gFYe3w",
"type": "article"
},
{
"title": "Linked List Data Structure | Illustrated Data Structures",
"url": "https://www.youtube.com/watch?v=odW9FU8jPRQ",
@ -392,6 +382,16 @@
"url": "https://www.youtube.com/watch?v=F8AbOfQwl1c",
"type": "video"
},
{
"title": "CS 61B Lecture 7: Linked Lists I",
"url": "https://archive.org/details/ucberkeley_webcast_htzJdKoEmO0",
"type": "video"
},
{
"title": "CS 61B Lecture 7: Linked Lists II",
"url": "https://archive.org/details/ucberkeley_webcast_-c4I3gFYe3w",
"type": "video"
},
{
"title": "Why you should avoid Linked Lists?",
"url": "https://www.youtube.com/watch?v=YQs6IC-vgmo",
@ -511,16 +511,16 @@
"url": "https://www.coursera.org/lecture/data-structures/dynamic-arrays-EwbnV",
"type": "course"
},
{
"title": "UC Berkeley CS61B - Linear and Multi-Dim Arrays (Start watching from 15m 32s)",
"url": "https://archive.org/details/ucberkeley_webcast_Wp8oiO_CZZE",
"type": "article"
},
{
"title": "Array Data Structure | Illustrated Data Structures",
"url": "https://www.youtube.com/watch?v=QJNwK2uJyGs",
"type": "video"
},
{
"title": "UC Berkeley CS61B - Linear and Multi-Dim Arrays (Start watching from 15m 32s)",
"url": "https://archive.org/details/ucberkeley_webcast_Wp8oiO_CZZE",
"type": "video"
},
{
"title": "Dynamic and Static Arrays",
"url": "https://www.youtube.com/watch?v=PEnFFiQe1pM&list=PLDV1Zeh2NRsB6SWUrDFW2RmDotAfPbeHu&index=6",

@ -766,7 +766,7 @@
},
"dJ0NUsODFhk52W2zZxoPh": {
"title": "SSL and TLS Basics",
"description": "Single Sign-On (SSO) is an authentication method that allows users to access multiple applications or systems with one set of login credentials. It enables users to log in once and gain access to various connected systems without re-entering credentials. SSO enhances user experience by reducing password fatigue, streamlines access management for IT departments, and can improve security by centralizing authentication controls. It typically uses protocols like SAML, OAuth, or OpenID Connect to securely share authentication information across different domains. While SSO offers convenience and can strengthen security when implemented correctly, it also presents a single point of failure if compromised, making robust security measures for the SSO system critical.\n\nLearn more from the following resources:",
"description": "Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols used to provide security in internet communications. These protocols encrypt the data that is transmitted over the web, so anyone who tries to intercept packets will not be able to interpret the data. One difference that is important to know is that SSL is now deprecated due to security flaws, and most modern web browsers no longer support it. But TLS is still secure and widely supported, so preferably use TLS.\n\nLearn more from the following resources:",
"links": [
{
"title": "What’s the Difference Between SSL and TLS?",
@ -3223,7 +3223,7 @@
},
"6ILPXeUDDmmYRiA_gNTSr": {
"title": "SSL vs TLS",
"description": "Single Sign-On (SSO) is an authentication method that allows users to access multiple applications or systems with one set of login credentials. It enables users to log in once and gain access to various connected systems without re-entering credentials. SSO enhances user experience by reducing password fatigue, streamlines access management for IT departments, and can improve security by centralizing authentication controls. It typically uses protocols like SAML, OAuth, or OpenID Connect to securely share authentication information across different domains. While SSO offers convenience and can strengthen security when implemented correctly, it also presents a single point of failure if compromised, making robust security measures for the SSO system critical.\n\nLearn more from the following resources:",
"description": "**SSL (Secure Sockets Layer)** is a cryptographic protocol used to secure communications by encrypting data transmitted between clients and servers. SSL establishes a secure connection through a process known as the handshake, during which the client and server agree on cryptographic algorithms, exchange keys, and authenticate the server with a digital certificate. SSL’s security is considered weaker compared to its successor, TLS, due to vulnerabilities in its older encryption methods and lack of modern cryptographic techniques.\n\n**TLS (Transport Layer Security)** improves upon SSL by using stronger encryption algorithms, more secure key exchange mechanisms, and enhanced certificate validation. Like SSL, TLS begins with a handshake where the client and server agree on a protocol version and cipher suite, exchange keys, and verify certificates. However, TLS incorporates additional features like Perfect Forward Secrecy (PFS) and more secure hashing algorithms, making it significantly more secure than SSL for modern communications.\n\nLearn more from the following resources:",
"links": [
{
"title": "What’s the Difference Between SSL and TLS?",

@ -1827,21 +1827,21 @@
},
"1oYvpFG8LKT1JD6a_9J0m": {
"title": "Provisioning",
"description": "Prometheus is an open-source systems monitoring and alerting toolkit designed for reliability and scalability. It features a multi-dimensional data model, a flexible query language (PromQL), and an efficient time series database. Prometheus collects metrics from configured targets at given intervals, evaluates rule expressions, displays results, and can trigger alerts when specified conditions are observed. It operates on a pull model, scraping metrics from HTTP endpoints, and supports service discovery for dynamic environments. Prometheus is particularly well-suited for monitoring microservices and containerized environments, integrating seamlessly with systems like Kubernetes. Its ecosystem includes various exporters for third-party systems and a built-in alert manager. Widely adopted in cloud-native architectures, Prometheus is a core component of modern observability stacks, often used alongside tools like Grafana for visualization.\n\nVisit the following resources to learn more:",
"description": "Provisioning refers to the process of setting up and configuring the necessary IT infrastructure to support an application or service. This includes allocating and preparing resources such as servers, storage, networking, and software environments. Provisioning can be done manually, but in modern DevOps practices, it's typically automated using tools like Terraform, Pulumi, or CloudFormation. These tools allow for infrastructure-as-code, where the entire provisioning process is defined in version-controlled scripts or templates. This approach enables consistent, repeatable deployments across different environments, reduces human error, and facilitates rapid scaling and disaster recovery.\n\nLearn more from the following resources:",
"links": [
{
"title": "Prometheus Website",
"url": "https://prometheus.io/",
"title": "What is provisioning? - RedHat",
"url": "https://www.redhat.com/en/topics/automation/what-is-provisioning",
"type": "article"
},
{
"title": "Explore top posts about Prometheus",
"url": "https://app.daily.dev/tags/prometheus?ref=roadmapsh",
"title": "What is provisioning? - IBM",
"url": "https://www.ibm.com/topics/provisioning",
"type": "article"
},
{
"title": "Introduction to the Prometheus Monitoring System | Key Concepts and Features",
"url": "https://www.youtube.com/watch?v=STVMGrYIlfg",
"title": "Open Answers: What is provisioning?",
"url": "https://www.youtube.com/watch?v=hWvDlmhASpk",
"type": "video"
}
]

File diff suppressed because it is too large

@ -267,6 +267,11 @@
"url": "https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging",
"type": "article"
},
{
"title": "Learn Git Branching",
"url": "https://learngitbranching.js.org/",
"type": "article"
},
{
"title": "Git Branches Tutorial",
"url": "https://www.youtube.com/watch?v=e2IbNHi4uCI",

@ -112,8 +112,29 @@
},
"IduGSdUa2Fi7VFMLKgmsS": {
"title": "iOS Architecture",
"description": "",
"links": []
"description": "iOS architecture refers to the design principles and patterns used to build iOS applications. It focuses on how to structure code, manage data, and ensure a smooth user experience. These architectural patterns help developers create maintainable, scalable, and testable applications while following best practices specific to iOS development. Use cases of these architectures may vary according to the requirements of the application. For example, MVC is used for simple apps, while MVVM is considered when the app is large and complex.\n\nLearn more from the following resources:",
"links": [
{
"title": "Model-View-Controller Pattern in swift (MVC) for Beginners",
"url": "https://ahmedaminhassanismail.medium.com/model-view-controller-pattern-in-swift-mvc-for-beginners-35db8d479832",
"type": "article"
},
{
"title": "MVVM in iOS Swift",
"url": "https://medium.com/@zebayasmeen76/mvvm-in-ios-swift-6afb150458fd",
"type": "article"
},
{
"title": "MVC Design Pattern Explained with Example",
"url": "https://youtu.be/sbYaWJEAYIY?t=2",
"type": "video"
},
{
"title": "MVVM Design Pattern Explained with Example",
"url": "https://www.youtube.com/watch?v=sLHVxnRS75w",
"type": "video"
}
]
},
"IdGdLNgJI3WmONEFsMq-d": {
"title": "Core OS",

@ -1,13 +1,18 @@
{
"_7uvOebQUI4xaSwtMjpEd": {
"title": "Programming Fundamentals",
"description": "Programming is the key requirement for MLOps. You need to be proficient in atleast one programming language. Python is the most popular language for MLOps.",
"description": "ML programming fundamentals encompass the essential skills and concepts needed to develop machine learning models effectively. Key aspects include understanding data structures and algorithms, as well as proficiency in programming languages commonly used in ML, such as Python and R. Familiarity with libraries and frameworks like TensorFlow, PyTorch, and scikit-learn is crucial for implementing machine learning algorithms and building models. Additionally, concepts such as data preprocessing, feature engineering, model evaluation, and hyperparameter tuning are vital for optimizing performance. A solid grasp of statistics and linear algebra is also important, as these mathematical foundations underpin many ML techniques, enabling practitioners to analyze data and interpret model results accurately.",
"links": []
},
"Vh81GnOUOZvDOlOyI5PwT": {
"title": "Python",
"description": "Python is an interpreted high-level general-purpose programming language. Its design philosophy emphasizes code readability with its significant use of indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically-typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly, procedural), object-oriented and functional programming. Python is often described as a \"batteries included\" language due to its comprehensive standard library.\n\nTo start learning Python, here are some useful resources:\n\nRemember, practice is key, and the more you work with Python, the more you'll appreciate its utility in the world of cyber security.",
"description": "Python is an interpreted high-level general-purpose programming language. Its design philosophy emphasizes code readability with its significant use of indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically-typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly, procedural), object-oriented and functional programming. Python is often described as a \"batteries included\" language due to its comprehensive standard library.\n\nLearn more from the following resources:",
"links": [
{
"title": "Python Roadmap",
"url": "https://roadmap.sh/python",
"type": "article"
},
{
"title": "Python.org",
"url": "https://www.python.org/",
@ -32,7 +37,7 @@
},
"vdVq3RQvQF3mF8PQc6DMg": {
"title": "Go",
"description": "Go is an open source programming language supported by Google. Go can be used to write cloud services, CLI tools, used for API development, and much more.\n\nVisit the following resources to learn more:",
"description": "Go, also known as Golang, is an open-source programming language developed by Google that emphasizes simplicity, efficiency, and strong concurrency support. Designed for modern software development, Go features a clean syntax, garbage collection, and built-in support for concurrent programming through goroutines and channels, making it well-suited for building scalable, high-performance applications, especially in cloud computing and microservices architectures. Go's robust standard library and tooling ecosystem, including a powerful package manager and testing framework, further streamline development processes, promoting rapid application development and deployment. Visit the following resources to learn more:",
"links": [
{
"title": "Visit Dedicated Go Roadmap",
@ -49,16 +54,6 @@
"url": "https://go.dev/doc/",
"type": "article"
},
{
"title": "Go by Example - annotated example programs",
"url": "https://gobyexample.com/",
"type": "article"
},
{
"title": "W3Schools Go Tutorial ",
"url": "https://www.w3schools.com/go/",
"type": "article"
},
{
"title": "Making a RESTful JSON API in Go",
"url": "https://thenewstack.io/make-a-restful-json-api-go/",
@ -75,16 +70,27 @@
"type": "article"
},
{
"title": "Go Class by Matt",
"url": "https://www.youtube.com/playlist?list=PLoILbKo9rG3skRCj37Kn5Zj803hhiuRK6",
"title": "Go Programming Course",
"url": "https://www.youtube.com/watch?v=un6ZyFkqFKo",
"type": "video"
}
]
},
"mMzqJF2KQ49TDEk5F3VAI": {
"title": "Bash",
"description": "Understanding bash is essential for MLOps tasks.\n\n* **Book Suggestion:** _The Linux Command Line, 2nd Edition_ by William E. Shotts",
"links": []
"description": "Bash (Bourne Again Shell) is a Unix shell and command language used for interacting with the operating system through a terminal. It allows users to execute commands, automate tasks via scripting, and manage system operations. As the default shell for many Linux distributions, it supports command-line utilities, file manipulation, process control, and text processing. Bash scripts can include loops, conditionals, and functions, making it a powerful tool for system administration, automation, and task scheduling.\n\nLearn more from the following resources:",
"links": [
{
"title": "bash-guide",
"url": "https://github.com/Idnan/bash-guide",
"type": "opensource"
},
{
"title": "Bash Scripting Course",
"url": "https://www.youtube.com/watch?v=tK9Oc6AEnR4",
"type": "video"
}
]
},
"oUhlUoWQQ1txx_sepD5ev": {
"title": "Version Control Systems",
@ -104,8 +110,13 @@
},
"06T5CbZAGJU6fJhCmqCC8": {
"title": "Git",
"description": "[Git](https://git-scm.com/) is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.\n\nVisit the following resources to learn more:",
"description": "Git is a distributed version control system used to track changes in source code during software development. It enables multiple developers to collaborate on a project by managing versions of code, allowing for branching, merging, and tracking of revisions. Git ensures that changes are recorded with a complete history, enabling rollback to previous versions if necessary. It supports distributed workflows, meaning each developer has a complete local copy of the project’s history, facilitating seamless collaboration, conflict resolution, and efficient management of code across different teams or environments.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Learn Git & GitHub",
"url": "https://roadmap.sh/git-github",
"type": "article"
},
{
"title": "Learn Git with Tutorials, News and Tips - Atlassian",
"url": "https://www.atlassian.com/git",
@ -130,26 +141,21 @@
},
"7t7jSb3YgyWlhgCe8Se1I": {
"title": "GitHub",
"description": "GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.\n\nVisit the following resources to learn more:",
"description": "GitHub is a web-based platform built on top of Git that provides version control, collaboration tools, and project management features for software development. It enables developers to host Git repositories, collaborate on code through pull requests, and review and track changes. GitHub also offers additional features like issue tracking, continuous integration, automated workflows, and documentation hosting. With its social coding environment, GitHub fosters open-source contributions and team collaboration, making it a central hub for many software development projects.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "GitHub Website",
"url": "https://github.com",
"type": "opensource"
},
{
"title": "GitHub Documentation",
"url": "https://docs.github.com/en/get-started/quickstart",
"title": "Learn Git & GitHub",
"url": "https://roadmap.sh/git-github",
"type": "article"
},
{
"title": "How to Use Git in a Professional Dev Team",
"url": "https://ooloo.io/project/github-flow",
"title": "GitHub Website",
"url": "https://github.com",
"type": "article"
},
{
"title": "Learn Git Branching",
"url": "https://learngitbranching.js.org/?locale=en_us",
"title": "GitHub Documentation",
"url": "https://docs.github.com/en/get-started/quickstart",
"type": "article"
},
{
@ -161,29 +167,25 @@
"title": "What is GitHub?",
"url": "https://www.youtube.com/watch?v=w3jLJU7DT5E",
"type": "video"
}
]
},
"00GZcwe25QYi7rDzaOoMt": {
"title": "Cloud Computing",
"description": "**Cloud Computing** refers to the delivery of computing services over the internet rather than using local servers or personal devices. These services include servers, storage, databases, networking, software, analytics, and intelligence. Cloud Computing enables faster innovation, flexible resources, and economies of scale. There are various types of cloud computing such as public clouds, private clouds, and hybrids clouds. Furthermore, it's divided into different services like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These services differ mainly in the level of control an organization has over their data and infrastructures.\n\nLearn more from the following resources:",
"links": [
{
"title": "Git vs. GitHub: Whats the difference?",
"url": "https://www.youtube.com/watch?v=wpISo9TNjfU",
"type": "video"
},
{
"title": "Git and GitHub for Beginners",
"url": "https://www.youtube.com/watch?v=RGOj5yH7evk",
"type": "video"
"title": "What is cloud computing?",
"url": "https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-cloud-computing",
"type": "article"
},
{
"title": "Git and GitHub - CS50 Beyond 2019",
"url": "https://www.youtube.com/watch?v=eulnSXkhE7I",
"title": "What is Cloud Computing? | Amazon Web Services",
"url": "https://www.youtube.com/watch?v=mxT233EdY5c",
"type": "video"
}
]
},
"00GZcwe25QYi7rDzaOoMt": {
"title": "Cloud Computing",
"description": "**Cloud Computing** refers to the delivery of computing services over the internet rather than using local servers or personal devices. These services include servers, storage, databases, networking, software, analytics, and intelligence. Cloud Computing enables faster innovation, flexible resources, and economies of scale. There are various types of cloud computing such as public clouds, private clouds, and hybrids clouds. Furthermore, it's divided into different services like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These services differ mainly in the level of control an organization has over their data and infrastructures.",
"links": []
},
"u3E7FGW4Iwdsu61KYFxCX": {
"title": "AWS / Azure / GCP",
"description": "AWS (Amazon Web Services) Azure and GCP (Google Cloud Platform) are three leading providers of cloud computing services. AWS by Amazon is the oldest and the most established among the three, providing a breadth and depth of solutions ranging from infrastructure services like compute, storage, and databases to the machine and deep learning. Azure, by Microsoft, has integrated tools for DevOps, supports a large number of programming languages, and offers seamless integration with on-prem servers and Microsoft’s software. Google's GCP has strength in cost-effectiveness, live migration of virtual machines, and flexible computing options. All three have introduced various MLOps tools and services to boost capabilities for machine learning development and operations.\n\nVisit the following resources to learn more about AWS, Azure, and GCP:",
@ -212,12 +214,28 @@
},
"kbfucfIO5KCsuv3jKbHTa": {
"title": "Cloud-native ML Services",
"description": "Most of the cloud providers offer managed services for machine learning. These services are designed to help data scientists and machine learning engineers to build, train, and deploy machine learning models at scale. These services are designed to be cloud-native, meaning they are designed to work with other cloud services and are optimized for the cloud environment.\n\nHere are the services offered by the major cloud providers:\n\n* **Amazon Web Services (AWS)**: SageMaker\n* **Google Cloud Platform (GCP)**: AI Platform\n* **Microsoft Azure**: Azure Machine Learning",
"links": []
"description": "Most of the cloud providers offer managed services for machine learning. These services are designed to help data scientists and machine learning engineers to build, train, and deploy machine learning models at scale. These services are designed to be cloud-native, meaning they are designed to work with other cloud services and are optimized for the cloud environment.\n\nLearn more from the following resources:",
"links": [
{
"title": "AWS Sage Maker",
"url": "https://aws.amazon.com/sagemaker/",
"type": "article"
},
{
"title": "Azure ML",
"url": "https://azure.microsoft.com/en-gb/products/machine-learning",
"type": "article"
},
{
"title": "What is Cloud Native?",
"url": "https://www.youtube.com/watch?v=fp9_ubiKqFU",
"type": "video"
}
]
},
"tKeejLv8Q7QX40UtOjpav": {
"title": "Containerization",
"description": "Containers are a construct in which [cgroups](https://en.wikipedia.org/wiki/Cgroups), [namespaces](https://en.wikipedia.org/wiki/Linux_namespaces), and [chroot](https://en.wikipedia.org/wiki/Chroot) are used to fully encapsulate and isolate a process. This encapsulated process, called a container image, shares the kernel of the host with other containers, allowing containers to be significantly smaller and faster than virtual machines.\n\nThese images are designed for portability, allowing for full local testing of a static image, and easy deployment to a container management platform.\n\nVisit the following resources to learn more:",
"description": "Containers are a construct in which cgroups, namespaces, and chroot are used to fully encapsulate and isolate a process. This encapsulated process, called a container image, shares the kernel of the host with other containers, allowing containers to be significantly smaller and faster than virtual machines.\n\nThese images are designed for portability, allowing for full local testing of a static image, and easy deployment to a container management platform.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What are Containers?",
@ -274,8 +292,13 @@
},
"XQoK9l-xtN2J8ZV8dw53X": {
"title": "Kubernetes",
"description": "Kubernetes is an [open source](https://github.com/kubernetes/kubernetes) container management platform, and the dominant product in this space. Using Kubernetes, teams can deploy images across multiple underlying hosts, defining their desired availability, deployment logic, and scaling logic in YAML. Kubernetes evolved from Borg, an internal Google platform used to provision and allocate compute resources (similar to the Autopilot and Aquaman systems of Microsoft Azure).\n\nThe popularity of Kubernetes has made it an increasingly important skill for the DevOps Engineer and has triggered the creation of Platform teams across the industry. These Platform engineering teams often exist with the sole purpose of making Kubernetes approachable and usable for their product development colleagues.\n\nVisit the following resources to learn more:",
"description": "Kubernetes is an open source container management platform, and the dominant product in this space. Using Kubernetes, teams can deploy images across multiple underlying hosts, defining their desired availability, deployment logic, and scaling logic in YAML. Kubernetes evolved from Borg, an internal Google platform used to provision and allocate compute resources (similar to the Autopilot and Aquaman systems of Microsoft Azure). The popularity of Kubernetes has made it an increasingly important skill for the DevOps Engineer and has triggered the creation of Platform teams across the industry. These Platform engineering teams often exist with the sole purpose of making Kubernetes approachable and usable for their product development colleagues.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Kubernetes Roadmap",
"url": "https://roadmap.sh/kubernetes",
"type": "article"
},
{
"title": "Kubernetes Website",
"url": "https://kubernetes.io/",
@ -286,11 +309,6 @@
"url": "https://kubernetes.io/docs/home/",
"type": "article"
},
{
"title": "Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care",
"url": "https://thenewstack.io/primer-how-kubernetes-came-to-be-what-it-is-and-why-you-should-care/",
"type": "article"
},
{
"title": "Kubernetes: An Overview",
"url": "https://thenewstack.io/kubernetes-an-overview/",
@ -310,28 +328,93 @@
},
"ulka7VEVjz6ls5SnI6a6z": {
"title": "Machine Learning Fundamentals",
"description": "An MLOps engineer should have a basic understanding of machine learning models.\n\n* **Courses:** [MLCourse.ai](https://mlcourse.ai/), [Fast.ai](https://course.fast.ai)\n* **Book Suggestion:** _Applied Machine Learning and AI for Engineers_ by Jeff Prosise",
"links": []
"description": "Machine learning fundamentals encompass the key concepts and techniques that enable systems to learn from data and make predictions or decisions without being explicitly programmed. At its core, machine learning involves algorithms that can identify patterns in data and improve over time with experience. Key areas include supervised learning (where models are trained on labeled data), unsupervised learning (where models identify patterns in unlabeled data), and reinforcement learning (where agents learn to make decisions based on feedback from their actions). Essential components also include data preprocessing, feature selection, model training, evaluation metrics, and the importance of avoiding overfitting. Understanding these fundamentals is crucial for developing effective machine learning applications across various domains. Learn more from the following resources:",
"links": [
{
"title": "Fundamentals of Machine Learning - Microsoft",
"url": "https://learn.microsoft.com/en-us/training/modules/fundamentals-machine-learning/",
"type": "course"
},
{
"title": "MLCourse.ai",
"url": "https://mlcourse.ai/",
"type": "course"
},
{
"title": "Fast.ai",
"url": "https://course.fast.ai",
"type": "course"
}
]
},
"VykbCu7LWIx8fQpqKzoA7": {
"title": "Data Engineering Fundamentals",
"description": "Data Engineering is essentially dealing with the collection, validation, storage, transformation, and processing of data. The objective is to provide reliable, efficient, and scalable data pipelines and infrastructure that allow data scientists to convert data into actionable insights. It involves steps like data ingestion, data storage, data processing, and data provisioning. Important concepts include designing, building, and maintaining data architecture, databases, processing systems, and large-scale processing systems. It is crucial to have extensive technical knowledge in various tools and programming languages like SQL, Python, Hadoop, and more.",
"links": []
"description": "Data Engineering is essentially dealing with the collection, validation, storage, transformation, and processing of data. The objective is to provide reliable, efficient, and scalable data pipelines and infrastructure that allow data scientists to convert data into actionable insights. It involves steps like data ingestion, data storage, data processing, and data provisioning. Important concepts include designing, building, and maintaining data architecture, databases, processing systems, and large-scale processing systems. It is crucial to have extensive technical knowledge in various tools and programming languages like SQL, Python, Hadoop, and more.\n\nLearn more from the following resources:",
"links": [
{
"title": "Data Engineering 101",
"url": "https://www.redpanda.com/guides/fundamentals-of-data-engineering",
"type": "article"
},
{
"title": "Fundamentals of Data Engineering",
"url": "https://www.youtube.com/watch?v=mPSzL8Lurs0",
"type": "video"
}
]
},
"cOg3ejZRYE-u-M0c89IjM": {
"title": "Data Pipelines",
"description": "Data pipelines refer to a set of processes that involve moving data from one system to another, for purposes such as data integration, data migration, data transformation, or data synchronization. These processes can involve a variety of data sources and destinations, and may often require data to be cleaned, enriched, or otherwise transformed along the way. It's a key concept in data engineering to ensure that data is appropriately processed from its source to the location where it will be used, typically a data warehouse, data mart, or a data lake. As such, data pipelines play a crucial part in building an effective and efficient data analytics setup, enabling the flow of data to be processed for insights.\n\nIt is important to understand the difference between ELT and ETL pipelines. ELT stands for Extract, Load, Transform, and refers to a process where data is first extracted from source systems, then loaded into a target system, and finally transformed within the target system. ETL, on the other hand, stands for Extract, Transform, Load, and refers to a process where data is first extracted from source systems, then transformed, and finally loaded into a target system. The choice between ELT and ETL pipelines depends on the specific requirements of the data processing tasks at hand, and the capabilities of the systems involved.",
"links": []
"description": "Data pipelines are a series of automated processes that transport and transform data from various sources to a destination for analysis or storage. They typically involve steps like data extraction, cleaning, transformation, and loading (ETL) into databases, data lakes, or warehouses. Pipelines can handle batch or real-time data, ensuring that large-scale datasets are processed efficiently and consistently. They play a crucial role in ensuring data integrity and enabling businesses to derive insights from raw data for reporting, analytics, or machine learning.\n\nLearn more from the following resources:",
"links": [
{
"title": "What is a data pipeline?",
"url": "https://www.ibm.com/topics/data-pipeline",
"type": "article"
},
{
"title": "What are data pipelines?",
"url": "https://www.youtube.com/watch?v=oKixNpz6jNo",
"type": "video"
}
]
},
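As a toy illustration of the extract-transform-load flow the Data Pipelines entry describes, here is a batch-style Python sketch; the file names and the `signup_date` field are assumptions:

```python
import csv
import json

def extract(path):
    # Extract: read raw rows from the source system (a CSV here).
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: clean fields and derive new ones.
    for row in rows:
        row["email"] = row["email"].strip().lower()
        row["signup_year"] = row["signup_date"][:4]
        yield row

def load(rows, path):
    # Load: write the cleaned records to the destination (JSON lines).
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

load(transform(extract("users.csv")), "users.jsonl")
```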
"wOogVDV4FIDLXVPwFqJ8C": {
"title": "Data Lakes & Warehouses",
"description": "\"**Data Lakes** are large-scale data repository systems that store raw, untransformed data, in various formats, from multiple sources. They're often used for big data and real-time analytics requirements. Data lakes preserve the original data format and schema which can be modified as necessary. On the other hand, **Data Warehouses** are data storage systems which are designed for analyzing, reporting and integrating with transactional systems. The data in a warehouse is clean, consistent, and often transformed to meet wide-range of business requirements. Hence, data warehouses provide structured data but require more processing and management compared to data lakes.\"",
"links": []
"description": "\"**Data Lakes** are large-scale data repository systems that store raw, untransformed data, in various formats, from multiple sources. They're often used for big data and real-time analytics requirements. Data lakes preserve the original data format and schema which can be modified as necessary. On the other hand, **Data Warehouses** are data storage systems which are designed for analyzing, reporting and integrating with transactional systems. The data in a warehouse is clean, consistent, and often transformed to meet wide-range of business requirements. Hence, data warehouses provide structured data but require more processing and management compared to data lakes.\"\n\nLearn more from the following resources:",
"links": [
{
"title": "Data lake definition",
"url": "https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-a-data-lake",
"type": "article"
},
{
"title": "What is a data lake?",
"url": "https://www.youtube.com/watch?v=LxcH6z8TFpI",
"type": "video"
},
{
"title": "@hat is a data warehouse?",
"url": "https://www.youtube.com/watch?v=k4tK2ttdSDg",
"type": "video"
}
]
},
"Berd78HvnulNEGOsHCf8n": {
"title": "Data Ingestion Architecture",
"description": "Data ingestion is the process of collecting, transferring, and loading data from various sources to a destination where it can be stored and analyzed. There are several data ingestion architectures that can be used to collect data from different sources and load it into a data warehouse, data lake, or other storage systems. These architectures can be broadly classified into two categories: batch processing and real-time processing. How you choose to ingest data will depend on the volume, velocity, and variety of data you are working with, as well as the latency requirements of your use case.\n\nLambda and Kappa architectures are two popular data ingestion architectures that combine batch and real-time processing to handle large volumes of data efficiently.",
"links": []
"description": "Data ingestion is the process of collecting, transferring, and loading data from various sources to a destination where it can be stored and analyzed. There are several data ingestion architectures that can be used to collect data from different sources and load it into a data warehouse, data lake, or other storage systems. These architectures can be broadly classified into two categories: batch processing and real-time processing. How you choose to ingest data will depend on the volume, velocity, and variety of data you are working with, as well as the latency requirements of your use case.\n\nLambda and Kappa architectures are two popular data ingestion architectures that combine batch and real-time processing to handle large volumes of data efficiently.\n\nLearn more from the following resources:",
"links": [
{
"title": "Data Ingestion Patterns",
"url": "https://docs.aws.amazon.com/whitepapers/latest/aws-cloud-data-ingestion-patterns-practices/data-ingestion-patterns.html",
"type": "article"
},
{
"title": "What is a data pipeline?",
"url": "https://www.youtube.com/watch?v=kGT4PcTEPP8",
"type": "video"
}
]
},
"pVSlVHXIap0unFxLGM-lQ": {
"title": "Airflow",
@ -414,7 +497,7 @@
},
"iTsEHVCo6KGq7H2HMgy5S": {
"title": "MLOps Principles",
"description": "Awareness of MLOps principles and maturity factors is required.\n\n* **Books:**\n * _Designing Machine Learning Systems_ by Chip Huyen\n * _Introducing MLOps_ by Mark Treveil and Dataiku\n* **Assessment:** [MLOps maturity assessment](https://marvelousmlops.substack.com/p/mlops-maturity-assessment)\n* **Great resource on MLOps:** [ml-ops.org](https://ml-ops.org)",
"description": "MLOps (Machine Learning Operations) principles focus on streamlining the deployment, monitoring, and management of machine learning models in production environments. Key principles include:\n\n1. **Collaboration**: Foster collaboration between data scientists, developers, and operations teams to ensure alignment on model goals, performance, and lifecycle management.\n \n2. **Automation**: Automate workflows for model training, testing, deployment, and monitoring to enhance efficiency, reduce errors, and speed up the development lifecycle.\n \n3. **Version Control**: Implement version control for both code and data to track changes, reproduce experiments, and maintain model lineage.\n \n4. **Continuous Integration and Deployment (CI/CD)**: Establish CI/CD pipelines tailored for machine learning to facilitate rapid model iteration and deployment.\n \n5. **Monitoring and Governance**: Continuously monitor model performance and data drift in production to ensure models remain effective and compliant with regulatory requirements.\n \n6. **Scalability**: Design systems that can scale to handle varying workloads and accommodate changes in data volume and complexity.\n \n7. **Reproducibility**: Ensure that experiments can be reliably reproduced by standardizing environments and workflows, making it easier to validate and iterate on models.\n \n\nThese principles help organizations efficiently manage the lifecycle of machine learning models, from development to deployment and beyond.",
"links": []
},
"l1xasxQy2vAY34NWaqKEe": {
@ -445,23 +528,62 @@
},
"a6vawajw7BpL6plH_nuAz": {
"title": "CI/CD",
"description": "Critical for traceable and reproducible ML model deployments.\n\n* **Books:**\n * _Learning GitHub Actions_ by Brent Laster\n * _Learning Git_ by Anna Skoulikari\n* **Tutorials & Courses:** [Git & GitHub for beginners](https://www.youtube.com/watch?v=RGOj5yH7evk), [Python to Production guide](https://www.udemy.com/course/setting-up-the-linux-terminal-for-software-development/), [Version Control Missing Semester](https://missing.csail.mit.edu/2020/version-control/), [https://learngitbranching.js.org/](https://learngitbranching.js.org/)\n* **Tool:** [Pre-commit hooks](https://marvelousmlops.substack.com/p/welcome-to-pre-commit-heaven)",
"links": []
"description": "CI/CD (Continuous Integration and Continuous Deployment/Delivery) is a software development practice that automates the process of integrating code changes, running tests, and deploying updates. Continuous Integration focuses on regularly merging code changes into a shared repository, followed by automated testing to ensure code quality. Continuous Deployment extends this by automatically releasing every validated change to production, while Continuous Delivery ensures code is always in a deployable state, but requires manual approval for production releases. CI/CD pipelines improve code reliability, reduce integration risks, and speed up the development lifecycle.\n\nLearn more from the following resources:",
"links": [
{
"title": "What is CI/CD?",
"url": "https://www.redhat.com/en/topics/devops/what-is-ci-cd",
"type": "article"
},
{
"title": "CI/CD In 5 Minutes",
"url": "https://www.youtube.com/watch?v=42UP1fxi2SY",
"type": "video"
}
]
},
"fes7M--Y8i08_zeP98tVV": {
"title": "Orchestration",
"description": "Systems like Airflow and Mage are important in ML engineering.\n\n* **Course:** [Introduction to Airflow in Python](https://app.datacamp.com/learn/courses/introduction-to-airflow-in-python)\n* **Note:** Airflow is also featured in the _ML Engineering with Python_ book and [_The Full Stack 7-Steps MLOps Framework_](https://www.pauliusztin.me/courses/the-full-stack-7-steps-mlops-framework).",
"links": []
"description": "ML orchestration refers to the process of managing and coordinating the various tasks and workflows involved in the machine learning lifecycle, from data preparation and model training to deployment and monitoring. It involves integrating multiple tools and platforms to streamline operations, automate repetitive tasks, and ensure seamless collaboration among data scientists, engineers, and operations teams. By using orchestration frameworks, organizations can enhance reproducibility, scalability, and efficiency, enabling them to manage complex machine learning pipelines and improve the overall quality of models in production. This ensures that models are consistently updated and maintained, facilitating rapid iteration and adaptation to changing data and business needs.\n\nLearn more from the following resources:",
"links": [
{
"title": "ML Observability: what, why, how",
"url": "https://ubuntu.com/blog/ml-observability",
"type": "article"
}
]
},
"fGGWKmAJ50Ke6wWJBEgby": {
"title": "Experiment Tracking & Model Registry",
"description": "**Experiment Tracking** is an essential part of MLOps, providing a system to monitor and record the different experiments conducted during the machine learning model development process. This involves capturing, organizing and visualizing the metadata associated with each experiment, such as hyperparameters used, models produced, metrics like accuracy or loss, and other information about the computational environment. This tracking allows for reproducibility of experiments, comparison across different experiment runs, and helps in identifying the best models.\n\nLogging metadata, parameters, and artifacts of training runs.\n\n* **Tool:** MLflow\n* **Courses:** [MLflow Udemy course](https://www.udemy.com/course/mlflow-course/), [End-to-end machine learning (MLflow piece)](https://www.udemy.com/course/sustainable-and-scalable-machine-learning-project-development/)",
"links": []
"description": "**Experiment Tracking** is an essential part of MLOps, providing a system to monitor and record the different experiments conducted during the machine learning model development process. This involves capturing, organizing and visualizing the metadata associated with each experiment, such as hyperparameters used, models produced, metrics like accuracy or loss, and other information about the computational environment. This tracking allows for reproducibility of experiments, comparison across different experiment runs, and helps in identifying the best models.\n\nLearn more from the following resources:",
"links": [
{
"title": "Experiment Tracking",
"url": "https://madewithml.com/courses/mlops/experiment-tracking/#dashboard",
"type": "article"
},
{
"title": "ML Flow Model Registry",
"url": "https://mlflow.org/docs/latest/model-registry.html",
"type": "article"
}
]
},
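A minimal sketch of experiment tracking with the MLflow Python API referenced above; it assumes the default local tracking store (an `./mlruns` directory), and the parameter and metric names are illustrative:

```python
import mlflow

mlflow.set_experiment("demo-experiment")

# Each run records the hyperparameters used and the metrics produced,
# so runs can be compared and the best model identified later.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)
    # ... train the model here ...
    mlflow.log_metric("val_accuracy", 0.93)
```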
"6XgP_2NLuiw654zvTyueT": {
"title": "Data Lineage & Feature Stores",
"description": "**Data Lineage** refers to the life-cycle of data, including its origins, movements, characteristics and quality. It's a critical component in MLOps for tracking the journey of data through every process in a pipeline, from raw input to model output. Data lineage helps in maintaining transparency, ensuring compliance, and facilitating data debugging or tracing data related bugs. It provides a clear representation of data sources, transformations, and dependencies thereby aiding in audits, governance, or reproduction of machine learning models.\n\nFeature stores are a crucial component of MLOps infrastructure.\n\n* **Tutorial:** Creating a feature store with Feast [Part 1](https://kedion.medium.com/creating-a-feature-store-with-feast-part-1-37c380223e2f) [Part 2](https://kedion.medium.com/feature-storage-for-ml-with-feast-part-2-34df1971a8d3) [Part 3](https://kedion.medium.com/feature-storage-for-ml-with-feast-a061899fc4a2)\n* **Tool:** DVC for data tracking\n* **Course:** [End-to-end machine learning (DVC piece)](https://www.udemy.com/course/sustainable-and-scalable-machine-learning-project-development/)",
"links": []
"description": "**Data Lineage** refers to the life-cycle of data, including its origins, movements, characteristics and quality. It's a critical component in MLOps for tracking the journey of data through every process in a pipeline, from raw input to model output. Data lineage helps in maintaining transparency, ensuring compliance, and facilitating data debugging or tracing data related bugs. It provides a clear representation of data sources, transformations, and dependencies thereby aiding in audits, governance, or reproduction of machine learning models.\n\nLearn more from the following resources:",
"links": [
{
"title": "What is Data Lineage?",
"url": "https://www.ibm.com/topics/data-lineage",
"type": "article"
},
{
"title": "What is a feature store",
"url": "https://www.snowflake.com/guides/what-feature-store-machine-learning/",
"type": "article"
}
]
},
"zsW1NRb0dMgS-KzWsI0QU": {
"title": "Model Training & Serving",
@ -470,33 +592,39 @@
},
"r4fbUwD83uYumEO1X8f09": {
"title": "Monitoring & Observability",
"description": "**Monitoring** in MLOps primarily involves tracking the performance of machine learning (ML) models in production to ensure that they continually deliver accurate and reliable results. Such monitoring is necessary because the real-world data that these models handle may change over time, a scenario known as data drift. These changes can adversely affect model performance. Monitoring helps to detect any anomalies in the model’s behaviour or performance and such alerts can trigger the retraining of models with new data. From a broader perspective, monitoring also involves tracking resources and workflows to detect and rectify any operational issues in the MLOps pipeline.",
"description": "**Monitoring** in MLOps primarily involves tracking the performance of machine learning (ML) models in production to ensure that they continually deliver accurate and reliable results. Such monitoring is necessary because the real-world data that these models handle may change over time, a scenario known as data drift. These changes can adversely affect model performance. Monitoring helps to detect any anomalies in the model’s behaviour or performance and such alerts can trigger the retraining of models with new data. From a broader perspective, monitoring also involves tracking resources and workflows to detect and rectify any operational issues in the MLOps pipeline.\n\nLearn more from the following resources:",
"links": [
{
"title": "ML Monitoring vs Observability article",
"url": "https://marvelousmlops.substack.com/p/ml-monitoring-vs-ml-observability",
"title": "ML Monitoring vs ML Observability",
"url": "https://medium.com/marvelous-mlops/ml-monitoring-vs-ml-observability-understanding-the-differences-fff574a8974f",
"type": "article"
},
{
"title": "Machine learning monitoring concepts",
"url": "https://app.datacamp.com/learn/courses/machine-learning-monitoring-concepts",
"type": "article"
"title": "ML Observability vs ML Monitoring: What's the difference?",
"url": "https://www.youtube.com/watch?v=k1Reed3QIYE",
"type": "video"
}
]
},
"sf67bSL7HAx6iN7S6MYKs": {
"title": "Infrastructure as Code",
"description": "Infrastructure as Code (IaC) is a modern approach to managing and provisioning IT infrastructure through machine-readable configuration files, rather than manual processes. It enables developers and operations teams to define and manage infrastructure resources—such as servers, networks, and databases—using code, which can be versioned, tested, and deployed like application code. IaC tools, like Terraform and AWS CloudFormation, allow for automated, repeatable deployments, reducing human error and increasing consistency across environments. This practice facilitates agile development, enhances collaboration between teams, and supports scalable and efficient infrastructure management.",
"links": [
{
"title": "Monitoring ML in Python",
"url": "https://app.datacamp.com/learn/courses/monitoring-machine-learning-in-python",
"title": "What is Infrastructure as Code?",
"url": "https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac",
"type": "article"
},
{
"title": "Prometheus, Grafana",
"url": "https://www.udemy.com/course/mastering-prometheus-and-grafana/",
"type": "article"
"title": "Terraform course for beginners",
"url": "https://www.youtube.com/watch?v=SLB_c_ayRMo",
"type": "video"
},
{
"title": "8 Terraform best practices",
"url": "https://www.youtube.com/watch?v=gxPykhPxRW0",
"type": "video"
}
]
},
"sf67bSL7HAx6iN7S6MYKs": {
"title": "Infrastructure as Code",
"description": "Essential for a reproducible MLOps framework.\n\n* **Course:** [Terraform course for beginners](https://www.youtube.com/watch?v=SLB_c_ayRMo)\n* **Video:** [8 Terraform best practices by Techworld by Nana](https://www.youtube.com/watch?v=gxPykhPxRW0)\n* **Book Suggestion:** _Terraform: Up and Running, 3rd Edition_ by Yevgeniy Brikman",
"links": []
}
}

@ -1244,8 +1244,8 @@
"type": "article"
},
{
"title": "Explore top posts about JavaScript",
"url": "https://app.daily.dev/tags/javascript?ref=roadmapsh",
"title": "Explore top posts about NestJS",
"url": "https://app.daily.dev/tags/nestjs?ref=roadmapsh",
"type": "article"
},
{

@ -1,13 +1,25 @@
{
"-3pADOHMDQ0H6ZKNjURyn": {
"title": "What is Redis?",
"description": "",
"links": []
"description": "Redis is an open-source, in-memory data structure store, primarily used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets, making it highly versatile. Redis operates with extremely low latency due to its in-memory nature, enabling fast access to data. It is often used in real-time applications such as session management, leaderboards, or caching mechanisms, where quick data retrieval is critical. Additionally, Redis supports data persistence by periodically writing the dataset to disk, balancing memory speed with data reliability.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What is redis?",
"url": "https://redis.io/docs/latest/get-started/",
"type": "article"
}
]
},
"M-EXrTDeAEMz_IkEi-ab4": {
"title": "In-memory Data Structure Store",
"description": "",
"links": []
"description": "An in-memory database is a purpose-built database that relies primarily on internal memory for data storage. It enables minimal response times by eliminating the need to access standard disk drives (SSDs). In-memory databases are ideal for applications that require microsecond response times or have large spikes in traffic, such as gaming leaderboards, session stores, and real-time data analytics. The terms main memory database (MMDB), in-memory database system (IMDS), and real-time database system (RTDB) also refer to in-memory databases.\n\nLearn more from the following resources:",
"links": [
{
"title": "Amazon MemoryDB",
"url": "https://aws.amazon.com/memorydb/",
"type": "article"
}
]
},
"l2aXyO3STnhbFjvUXPpm2": {
"title": "Key-value Database",

@ -10,7 +10,7 @@
"links": [
{
"title": "What is Software Architecture in Software Engineering?",
"url": "https://webcache.googleusercontent.com/search?q=cache:ya4xvYaEckQJ:https://www.future-processing.com/blog/what-is-software-architecture-in-software-engineering/&cd=1&hl=es-419&ct=clnk&gl=ar",
"url": "https://www.future-processing.com/blog/what-is-software-architecture-in-software-engineering/",
"type": "article"
},
{

@ -401,7 +401,7 @@
},
"HD1UGOidp7JGKdW6CEdQ_": {
"title": "satisfies keyword",
"description": "TypeScript developers are often faced with a dilemma: we want to ensure that some expression matches some type, but also want to keep the most specific type of that expression for inference purposes.\n\nFor example:\n\n // Each property can be a string or an RGB tuple.\n const palette = {\n red: [255, 0, 0],\n green: '#00ff00',\n bleu: [0, 0, 255],\n // ^^^^ sacrebleu - we've made a typo!\n };\n \n // We want to be able to use array methods on 'red'...\n const redComponent = palette.red.at(0);\n \n // or string methods on 'green'...\n const greenNormalized = palette.green.toUpperCase();\n \n\nNotice that we’ve written `bleu`, whereas we probably should have written `blue`. We could try to catch that `bleu` typo by using a type annotation on palette, but we’d lose the information about each property.\n\n type Colors = 'red' | 'green' | 'blue';\n type RGB = [red: number, green: number, blue: number];\n \n const palette: Record<Colors, string | RGB> = {\n red: [255, 0, 0],\n green: '#00ff00',\n bleu: [0, 0, 255],\n // ~~~~ The typo is now correctly detected\n };\n // But we now have an undesirable error here - 'palette.red' \"could\" be a string.\n const redComponent = palette.red.at(0);\n \n\nThe `satisfies` operator lets us validate that the type of an expression matches some type, without changing the resulting type of that expression. As an example, we could use `satisfies` to validate that all the properties of palette are compatible with `string | number[]`:\n\n type Colors = 'red' | 'green' | 'blue';\n type RGB = [red: number, green: number, blue: number];\n \n const palette = {\n red: [255, 0, 0],\n green: '#00ff00',\n bleu: [0, 0, 255],\n // ~~~~ The typo is now caught!\n } satisfies Record<Colors, string | RGB>;\n \n // Both of these methods are still accessible!\n const redComponent = palette.red.at(0);\n const greenNormalized = palette.green.toUpperCase();\n \n\nLearn more from the following resources:",
"description": "The `satisfies` operator lets us validate that the type of an expression matches some type, without changing the resulting type of that expression.\n\nLearn more from the following resources:",
"links": [
{
"title": "satisfies Keyword",

Binary file not shown.


@ -1,7 +1,7 @@
import { type APIContext } from 'astro';
import { api } from './api.ts';
export type LeadeboardUserDetails = {
export type LeaderboardUserDetails = {
id: string;
name: string;
avatar?: string;
@ -10,12 +10,15 @@ export type LeadeboardUserDetails = {
export type ListLeaderboardStatsResponse = {
streaks: {
active: LeadeboardUserDetails[];
lifetime: LeadeboardUserDetails[];
active: LeaderboardUserDetails[];
lifetime: LeaderboardUserDetails[];
};
projectSubmissions: {
currentMonth: LeadeboardUserDetails[];
lifetime: LeadeboardUserDetails[];
currentMonth: LeaderboardUserDetails[];
lifetime: LeaderboardUserDetails[];
};
githubContributors: {
currentMonth: LeaderboardUserDetails[];
};
referrals: {
currentMonth: LeaderboardUserDetails[];

@ -23,7 +23,7 @@ const formattedDate = DateTime.fromISO('2024-09-13').toFormat('dd LLL, yyyy');
<div
class='flex flex-col items-center justify-center gap-2 sm:gap-2 rounded-xl border bg-white px-8 py-12 text-center'
>
<img src='/images/rocket.gif' class='w-[70px] mb-4' />
<img src='/images/gifs/rocket.gif' class='w-[70px] mb-4' />
<h2 class='text-balance text-xl font-medium'>Changelog page is launched</h2>
<p class='font-normal text-balance text-gray-400 text-sm sm:text-base'>
We will be sharing a selected list of updates, improvements, and fixes made to

@ -10,7 +10,7 @@ const top10Changelogs = allChangelogs.slice(0, 10);
<div class='container !max-w-[650px]'>
<p class='text-2xl font-bold sm:text-5xl'>
<img
src='/images/rocket.gif'
src='/images/gifs/rocket.gif'
alt='Rocket'
class='mr-2 hidden sm:inline h-12 w-12'
/>

@ -1,11 +1,10 @@
import { useState, type ReactNode } from 'react';
import { type ReactNode, useState } from 'react';
import type {
LeadeboardUserDetails,
LeaderboardUserDetails,
ListLeaderboardStatsResponse,
} from '../../api/leaderboard';
import { cn } from '../../lib/classname';
import { FolderKanban, Zap, Trophy, Users } from 'lucide-react';
import { RankBadgeIcon } from '../ReactIcons/RankBadgeIcon';
import { FolderKanban, GitPullRequest, Users, Users2, Zap } from 'lucide-react';
import { TrophyEmoji } from '../ReactIcons/TrophyEmoji';
import { SecondPlaceMedalEmoji } from '../ReactIcons/SecondPlaceMedalEmoji';
import { ThirdPlaceMedalEmoji } from '../ReactIcons/ThirdPlaceMedalEmoji';
@ -18,16 +17,12 @@ export function LeaderboardPage(props: LeaderboardPageProps) {
const { stats } = props;
return (
<div className="min-h-screen bg-gray-50">
<div className="container py-10">
<div className="mb-8 text-center">
<div className="mb-2 flex items-center justify-center gap-3">
<Trophy className="size-8 text-yellow-500" />
<h2 className="text-2xl font-bold sm:text-3xl">Leaderboard</h2>
</div>
<p className="mx-auto max-w-2xl text-balance text-sm text-gray-500 sm:text-base">
Top users based on their activity on roadmap.sh
</p>
<div className="min-h-screen bg-gray-100">
<div className="container pb-5 sm:pb-8">
<h1 className="my-5 flex items-center text-lg font-medium text-black sm:mb-4 sm:mt-8">
<Users2 className="mr-2 size-5 text-black" />
Leaderboard
</h1>
<div className="mt-8 grid gap-2 md:grid-cols-2">
<LeaderboardLane
@ -82,6 +77,53 @@ export function LeaderboardPage(props: LeaderboardPageProps) {
]}
/>
</div>
<div className="grid gap-2 sm:gap-3 md:grid-cols-2">
<LeaderboardLane
title="Longest Visit Streak"
tabs={[
{
title: 'Active',
users: stats.streaks?.active || [],
emptyIcon: <Zap className="size-16 text-gray-300" />,
emptyText: 'No users with streaks yet',
},
{
title: 'Lifetime',
users: stats.streaks?.lifetime || [],
emptyIcon: <Zap className="size-16 text-gray-300" />,
emptyText: 'No users with streaks yet',
},
]}
/>
<LeaderboardLane
title="Projects Completed"
tabs={[
{
title: 'This Month',
users: stats.projectSubmissions.currentMonth,
emptyIcon: <FolderKanban className="size-16 text-gray-300" />,
emptyText: 'No projects submitted this month',
},
{
title: 'Lifetime',
users: stats.projectSubmissions.lifetime,
emptyIcon: <FolderKanban className="size-16 text-gray-300" />,
emptyText: 'No projects submitted yet',
},
]}
/>
<LeaderboardLane
title="Top Contributors"
subtitle="Past 2 weeks"
tabs={[
{
title: 'This Month',
users: stats.githubContributors.currentMonth,
emptyIcon: <GitPullRequest className="size-16 text-gray-300" />,
emptyText: 'No contributors this month',
},
]}
/>
</div>
</div>
</div>
@ -90,24 +132,32 @@ export function LeaderboardPage(props: LeaderboardPageProps) {
type LeaderboardLaneProps = {
title: string;
subtitle?: string;
tabs: {
title: string;
users: LeadeboardUserDetails[];
users: LeaderboardUserDetails[];
emptyIcon?: ReactNode;
emptyText?: string;
}[];
};
function LeaderboardLane(props: LeaderboardLaneProps) {
const { title, tabs } = props;
const { title, subtitle, tabs } = props;
const [activeTab, setActiveTab] = useState(tabs[0]);
const { users: usersToShow, emptyIcon, emptyText } = activeTab;
return (
<div className="overflow-hidden rounded-md border bg-white shadow-sm">
<div className="mb-3 flex items-center justify-between gap-2 bg-gray-100 px-3 py-3">
<h3 className="text-base font-medium">{title}</h3>
<div className="flex min-h-[450px] flex-col overflow-hidden rounded-xl border bg-white shadow-sm">
<div className="mb-3 flex items-center justify-between gap-2 px-3 py-3">
<h3 className="text-base font-medium">
{title}{' '}
{subtitle && (
<span className="ml-1 text-sm font-normal text-gray-400">
{subtitle}
</span>
)}
</h3>
{tabs.length > 1 && (
<div className="flex items-center gap-2">
@ -135,7 +185,7 @@ function LeaderboardLane(props: LeaderboardLaneProps) {
</div>
{usersToShow.length === 0 && emptyText && (
<div className="flex flex-col items-center justify-center p-8">
<div className="flex flex-grow flex-col items-center justify-center p-8">
{emptyIcon}
<p className="mt-4 text-sm text-gray-500">{emptyText}</p>
</div>
@ -145,14 +195,18 @@ function LeaderboardLane(props: LeaderboardLaneProps) {
<ul className="divide-y divide-gray-100 pb-4">
{usersToShow.map((user, counter) => {
const avatar = user?.avatar
? `${import.meta.env.PUBLIC_AVATAR_BASE_URL}/${user.avatar}`
? user?.avatar?.startsWith('http')
? user?.avatar
: `${import.meta.env.PUBLIC_AVATAR_BASE_URL}/${user.avatar}`
: '/images/default-avatar.png';
const rank = counter + 1;
const isGitHubUser = avatar?.indexOf('github') > -1;
return (
<li
key={user.id}
className="flex items-center justify-between gap-1 py-2.5 pl-2 pr-5 hover:bg-gray-50"
className="flex items-center justify-between gap-1 py-2.5 pl-2 pr-5"
>
<div className="flex min-w-0 items-center gap-2">
<span
@ -170,9 +224,19 @@ function LeaderboardLane(props: LeaderboardLaneProps) {
<img
src={avatar}
alt={user.name}
className="size-7 shrink-0 rounded-full"
className="mr-1 size-7 shrink-0 rounded-full"
/>
{isGitHubUser ? (
<a
href={`https://github.com/kamranahmedse/developer-roadmap/pulls?q=is%3Apr+is%3Aclosed+author%3A${user.name}`}
target="_blank"
className="truncate font-medium underline underline-offset-2"
>
{user.name}
</a>
) : (
<span className="truncate">{user.name}</span>
)}
{rank === 1 ? (
<TrophyEmoji className="size-5" />
) : rank === 2 ? (

@ -5,10 +5,13 @@ import type {
} from '../../lib/project.ts';
import { Users } from 'lucide-react';
import { formatCommaNumber } from '../../lib/number.ts';
import { cn } from '../../lib/classname.ts';
import { isLoggedIn } from '../../lib/jwt.ts';
type ProjectCardProps = {
project: ProjectFileType;
userCount?: number;
status?: 'completed' | 'started' | 'none';
};
const badgeVariants: Record<ProjectDifficultyType, string> = {
@ -18,10 +21,12 @@ const badgeVariants: Record<ProjectDifficultyType, string> = {
};
export function ProjectCard(props: ProjectCardProps) {
const { project, userCount = 0 } = props;
const { project, userCount = 0, status } = props;
const { frontmatter, id } = project;
const isLoadingStatus = status === undefined;
const userStartedCount = status !== 'none' && userCount === 0 ? userCount + 1 : userCount;
return (
<a
href={`/projects/${id}`}
@ -34,18 +39,47 @@ export function ProjectCard(props: ProjectCardProps) {
/>
<Badge variant={'grey'} text={frontmatter.nature} />
</span>
<span className="my-3 flex flex-col">
<span className="my-3 flex min-h-[100px] flex-col">
<span className="mb-1 font-medium">{frontmatter.title}</span>
<span className="text-sm text-gray-500">{frontmatter.description}</span>
</span>
<span className="flex items-center gap-2 text-xs text-gray-400">
<Users className="inline-block size-3.5" />
{userCount > 0 ? (
<>{formatCommaNumber(userCount)} Started</>
<span className="flex min-h-[22px] items-center justify-between gap-2 text-xs text-gray-400">
{isLoadingStatus ? (
<>
<span className="h-5 w-24 animate-pulse rounded bg-gray-200" />{' '}
<span className="h-5 w-20 animate-pulse rounded bg-gray-200" />{' '}
</>
) : (
<>
<span className="flex items-center gap-1.5">
<Users className="size-3.5" />
{userStartedCount > 0 ? (
<>{formatCommaNumber(userStartedCount)} Started</>
) : (
<>Be the first to solve!</>
)}
</span>
{status !== 'none' && (
<span
className={cn(
'flex items-center gap-1.5 rounded-full border border-current px-2 py-0.5 capitalize',
status === 'completed' && 'text-green-500',
status === 'started' && 'text-yellow-500',
)}
>
<span
className={cn('inline-block h-2 w-2 rounded-full', {
'bg-green-500': status === 'completed',
'bg-yellow-500': status === 'started',
})}
/>
{status}
</span>
)}
</>
)}
</span>
</a>
);
}

@ -1,7 +1,7 @@
import { ProjectCard } from './ProjectCard.tsx';
import { HeartHandshake, Trash2 } from 'lucide-react';
import { cn } from '../../lib/classname.ts';
import { useMemo, useState } from 'react';
import { useEffect, useMemo, useState } from 'react';
import {
projectDifficulties,
type ProjectDifficultyType,
@ -12,6 +12,8 @@ import {
getUrlParams,
setUrlParams,
} from '../../lib/browser.ts';
import { httpPost } from '../../lib/http.ts';
import { isLoggedIn } from '../../lib/jwt.ts';
type DifficultyButtonProps = {
difficulty: ProjectDifficultyType;
@ -38,6 +40,11 @@ function DifficultyButton(props: DifficultyButtonProps) {
);
}
export type ListProjectStatusesResponse = Record<
string,
'completed' | 'started'
>;
type ProjectsListProps = {
projects: ProjectFileType[];
userCounts: Record<string, number>;
@ -50,6 +57,30 @@ export function ProjectsList(props: ProjectsListProps) {
const [difficulty, setDifficulty] = useState<
ProjectDifficultyType | undefined
>(urlDifficulty);
const [projectStatuses, setProjectStatuses] =
useState<ListProjectStatusesResponse>();
const loadProjectStatuses = async () => {
if (!isLoggedIn()) {
setProjectStatuses({});
return;
}
const projectIds = projects.map((project) => project.id);
const { response, error } = await httpPost(
`${import.meta.env.PUBLIC_API_URL}/v1-list-project-statuses`,
{
projectIds,
},
);
if (error || !response) {
console.error(error);
return;
}
setProjectStatuses(response);
};
const projectsByDifficulty: Map<ProjectDifficultyType, ProjectFileType[]> =
useMemo(() => {
@ -72,12 +103,17 @@ export function ProjectsList(props: ProjectsListProps) {
? projectsByDifficulty.get(difficulty) || []
: projects;
useEffect(() => {
loadProjectStatuses().finally();
}, []);
return (
<div className="flex flex-col">
<div className="my-2.5 flex items-center justify-between">
<div className="flex flex-wrap gap-1">
{projectDifficulties.map((projectDifficulty) => (
<DifficultyButton
key={projectDifficulty}
onClick={() => {
setDifficulty(projectDifficulty);
setUrlParams({ difficulty: projectDifficulty });
@ -130,7 +166,18 @@ export function ProjectsList(props: ProjectsListProps) {
})
.map((matchingProject) => {
const count = userCounts[matchingProject?.id] || 0;
return <ProjectCard project={matchingProject} userCount={count} />;
return (
<ProjectCard
key={matchingProject.id}
project={matchingProject}
userCount={count}
status={
projectStatuses
? (projectStatuses?.[matchingProject.id] || 'none')
: undefined
}
/>
);
})}
</div>
</div>

@ -190,6 +190,7 @@ export function ProjectsPage(props: ProjectsPageProps) {
key={project.id}
project={project}
userCount={userCounts[project.id] || 0}
status={'none'}
/>
))}
</div>

@ -357,6 +357,11 @@ const groups: GroupType[] = [
link: '/ai-data-scientist',
type: 'role',
},
{
title: 'AI Engineer',
link: '/ai-engineer',
type: 'role',
},
{
title: 'Data Analyst',
link: '/data-analyst',

@ -175,7 +175,6 @@ export function TopicDetail(props: TopicDetailProps) {
setError('');
setIsLoading(true);
setIsActive(true);
sponsorHidden.set(true);
setTopicId(topicId);
setResourceType(resourceType);
@ -336,6 +335,10 @@ export function TopicDetail(props: TopicDetailProps) {
return resource.topicIds.includes(normalizedTopicId);
});
const hasPaidScrimbaLinks = paidResourcesForTopic.some((resource) =>
  resource?.url?.toLowerCase().includes('scrimba'),
);
return (
<div className={'relative z-[90]'}>
<div
@ -486,6 +489,19 @@ export function TopicDetail(props: TopicDetailProps) {
})}
</ul>
{hasPaidScrimbaLinks && (
<div className="relative -mb-1 ml-3 mt-4 rounded-md border border-yellow-300 bg-yellow-100 px-2.5 py-2 text-sm text-yellow-800">
<div className="flex items-center gap-2">
<Coins className="h-4 w-4 text-yellow-700" />
<span>
Scrimba is offering{' '}
<span className={'font-semibold'}>20% off</span> on
all courses for roadmap.sh users.
</span>
</div>
</div>
)}
{showPaidResourceDisclaimer && (
<PaidResourceDisclaimer
onClose={() => {

@ -0,0 +1,441 @@
---
title: 'Top 7 Backend Frameworks to Use in 2024: Pro Advice'
description: 'Get expert advice on backend frameworks for 2024. Learn about the top 7 frameworks that can elevate your development process.'
authorId: fernando
excludedBySlug: '/backend/frameworks'
seo:
title: 'Top 7 Backend Frameworks to Use in 2024: Pro Advice'
description: 'Get expert advice on backend frameworks for 2024. Learn about the top 7 frameworks that can elevate your development process.'
ogImageUrl: 'https://assets.roadmap.sh/guest/top-backend-frameworks-jfpux.jpg'
isNew: true
type: 'textual'
date: 2024-09-27
sitemap:
priority: 0.7
changefreq: 'weekly'
tags:
- 'guide'
- 'textual-guide'
- 'guide-sitemap'
---
![Best backend frameworks](https://assets.roadmap.sh/guest/top-backend-frameworks-jfpux.jpg)
Choosing the right backend framework in 2024 can be crucial when you’re building web applications. While the programming language you pick is important, the backend framework you go with will help define how scalable, secure, and maintainable your application is. It’s the foundation that supports all the features your users interact with on the frontend and keeps everything running smoothly behind the scenes.
So, it’s a decision you want to get right.
In 2024, [backend development](https://roadmap.sh/backend) is more complex and interconnected than ever. Developers are working with APIs, microservices, cloud-native architectures, and ensuring high availability while keeping security at the forefront. It’s an era where the demand for real-time data, seamless integrations, and efficient performance is higher than ever.
Whether you're building an enterprise-level application or a small startup, the right backend framework can save you time and headaches down the road.
Let’s take a look at the backend development frameworks topping the lists for 2024:
* Next.js
* Fastify
* SvelteKit
* Ruby on Rails
* Laravel
* Phoenix
* Actix
## Criteria for Evaluating The Top Backend Frameworks
How can you determine what “best backend framework” means for you? To answer that question, let’s break down the key criteria that will help you make the best choice for your project:
**Performance**:
* A high-performing backend framework processes server-side tasks (e.g., database queries, user sessions, real-time data) quickly and efficiently.
* Faster processing improves user experience, especially in 2024 when speed is critical.
**Scalability**:
* The framework should handle increased traffic, larger datasets, and feature expansion without issues.
* It should smoothly scale for both small and large user bases.
**Flexibility**:
* A flexible framework adapts to new business or technical requirements.
* It should support various project types without locking you into a specific structure.
**Community and Ecosystem**:
* A strong community provides support through tutorials, forums, and third-party tools.
* A good ecosystem includes useful plugins and integrations for popular services or databases.
**Learning Curve**:
* An easy-to-learn framework boosts productivity and helps you get up to speed quickly.
* A framework should balance ease of learning with powerful functionality.
**Security**:
* A reliable framework includes built-in security features to protect user data and prevent vulnerabilities.
* It should help you comply with regulations and address security concerns from the start.
**Future-Proofing**:
* Choose a framework with a history of updates, a clear development roadmap, and a growing community.
* A future-proof framework ensures long-term support and relevance.
### My go-to backend framework
My favorite backend framework is Next.js because it earns the highest scores across the group.
That said, I’ve applied the above criteria to the best backend development frameworks I’m covering below in this guide. This table gives you a snapshot view of how they all compare according to my ratings, and I’ll explain the details further below.
![backend frameworks](https://assets.roadmap.sh/guest/backend-framework-table-nl1iw.png)
Of course, Next.js is the best one for me, in my context; you have to consider your own projects and constraints to determine the best choice for you.
Let’s get into the selection and each framework’s strengths and weaknesses to help you pick the right one.
## Top 7 Backend Frameworks in 2024
### Next.js
![NextJS](https://assets.roadmap.sh/guest/logo-nextjs-mbn1n.png)
Next.js is a full-stack React framework and one of the most popular backend frameworks in the JavaScript community. Over the years, it has evolved into a robust web development solution that supports static site generation (SSG), server-side rendering (SSR), and even edge computing. Backed by Vercel, it’s now one of the go-to frameworks for modern web development.
#### 1\. Performance
Next.js delivers wonderful performance thanks to its ability to optimize for both static and dynamic generation. With server-side rendering and support for edge computing, it's built to handle high-performance requirements. Additionally, automatic code splitting ensures only the necessary parts of the app are loaded, reducing load times.
⭐ **Rating: 5/5**
#### 2\. Scalability
Next.js is designed to scale easily, from small static websites to large-scale dynamic applications. Its ability to turn backend routes into serverless functions gives it an unfair advantage over other frameworks. Paired with Vercel’s deployment platform, it makes scaling almost effortless.
⭐ **Rating: 5/5**
#### 3\. Flexibility
Next.js is one of the most flexible frameworks out there. It supports a wide range of use cases, from simple static websites to highly complex full-stack applications. With its API routes feature, developers can create powerful backends, making Next.js suitable for both frontend and backend development in a single framework.
⭐ **Rating: 5/5**
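To make the API-routes point concrete, here's a minimal sketch of a Next.js API route (pages router); the file path and response payload are illustrative, not taken from any real project:

```typescript
// pages/api/health.ts: Next.js maps this file to the /api/health endpoint
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Runs on the server (or as a serverless function on Vercel), so it can
  // safely query databases or hold secrets alongside your React frontend.
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
}
```

Dropping a file into `pages/api/` is all it takes to add a backend endpoint to the same codebase that serves the frontend.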
#### 4\. Community and Ecosystem
The Next.js community (just like the JavaScript community in general) is large and quite active, with an ever-growing number of plugins, integrations, and third-party tools. The framework has solid documentation and an active ecosystem, thanks to its close ties to both the React community and Vercel’s developer support.
⭐ **Rating: 5/5**
#### 5\. Learning Curve
For developers already familiar with React, Next.js provides a relatively smooth learning curve. However, for those new to SSR, SSG, or even RSC (React Server Components), there’s a ramp-up period as you adapt to these concepts (after all, you’re learning React and backend development at the same time). That said, the framework's excellent documentation and active community make it easier to grasp.
⭐ **Rating: 4/5**
#### 6\. Security
Next.js doesn’t inherently have a wide array of built-in security tools, but it follows secure default practices and can be paired with Vercel’s security optimizations for additional protection. Out of the box, Next.js ensures some level of security against common web threats but will need further configuration depending on the app's complexity.
⭐ **Rating: 3.5/5**
#### 7\. Future-Proofing
Backed by Vercel, Next.js has a bright future. Vercel consistently pushes updates, introduces new features, and improves the overall developer experience. Given its adoption and strong support, Next.js is very future-proof, with no signs of slowing down.
⭐ **Rating: 5/5**
### Fastify
![Fastify](https://assets.roadmap.sh/guest/logo-fastify-3bw4o.png)
Fastify is a lightweight and fast backend framework for Node.js, often seen as a high-performance alternative to Express.js. It was created with a strong focus on speed, low overhead, and developer-friendly tooling, making it a popular choice for developers building APIs and microservices. Fastify offers a flexible plugin architecture and features like schema-based validation and HTTP/2 support, setting it apart in the Node.js ecosystem.
#### 1\. Performance
Fastify shines when it comes to performance. It’s optimized for handling large volumes of requests with low latency, making it one of the fastest Node.js frameworks available.
⭐ **Rating: 5/5**
#### 2\. Scalability
With a strong focus on scalability, Fastify is ideal for handling large applications and high-traffic scenarios. Its lightweight nature ensures that you can build scalable services with minimal resource consumption.
⭐ **Rating: 5/5**
#### 3\. Flexibility
Fastify’s [plugin architecture](https://fastify.dev/docs/latest/Reference/Plugins/) is one of its key strengths. This allows developers to easily extend the framework’s capabilities and tailor it to specific use cases, whether you’re building APIs, microservices, or even server-rendered applications.
⭐ **Rating: 5/5**
#### 4\. Community and Ecosystem
While Fastify’s community is not as large as those of Express.js or Next.js, it is steadily growing. The ecosystem of plugins and tools continues to expand, making it easier to find the right tools and libraries to fit your needs. However, its smaller ecosystem means fewer third-party tools compared to some of the more established frameworks.
⭐ **Rating: 3/5**
#### 5\. Learning Curve
If you’re coming from Express.js or another Node.js framework, Fastify’s learning curve is minimal. Its API is designed to be familiar and easy to adopt for Node.js developers. While there are some differences in how Fastify handles things like schema validation and plugins, it’s a relatively smooth transition for most developers.
⭐ **Rating: 4/5**
#### 6\. Security
Fastify incorporates built-in security features such as schema-based validation, which helps prevent vulnerabilities like injection attacks. The framework also supports HTTP/2 out of the box, which provides enhanced security and performance.
⭐ **Rating: 4/5**
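As a rough illustration of that schema-based validation, here's a minimal Fastify route sketch; the route and fields are made up for the example:

```typescript
import Fastify from 'fastify';

const app = Fastify({ logger: true });

// The JSON Schema below is attached to the route, so Fastify rejects
// malformed bodies before the handler ever runs.
app.post('/users', {
  schema: {
    body: {
      type: 'object',
      required: ['name', 'email'],
      properties: {
        name: { type: 'string', minLength: 1 },
        email: { type: 'string', minLength: 3 },
      },
      additionalProperties: false,
    },
  },
}, async (request) => {
  // By this point the body is guaranteed to match the schema.
  return { created: true, user: request.body };
});

app.listen({ port: 3000 });
```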
#### 7\. Future-Proofing
Fastify has a strong development roadmap and is [consistently updated](https://github.com/fastify/fastify) with performance improvements and new features. The backing from a growing community and its continued adoption by large-scale applications make Fastify a solid bet for long-term use.
⭐ **Rating: 5/5**
### SvelteKit
![SvelteKit](https://assets.roadmap.sh/guest/logo-sveltekit-9ntqz.png)
SvelteKit is a full-stack framework built on top of Svelte, a front-end compiler that moves much of the heavy lifting to compile time rather than runtime. SvelteKit was designed to simplify building modern web applications by providing server-side rendering (SSR), static site generation (SSG), and support for client-side routing—all in a performance-optimized package. In other words, it’s an alternative to Next.js.
#### 1\. Performance
SvelteKit leverages Svelte’s compile-time optimizations, resulting in fast runtime performance. Unlike frameworks that rely heavily on virtual DOM diffing, Svelte compiles components to efficient JavaScript code, which means fewer resources are used during rendering.
⭐ **Rating: 5/5**
#### 2\. Scalability
While SvelteKit is excellent for small to medium-sized applications, its scalability for enterprise-level applications is still being tested by the developer community. It is possible to scale SvelteKit for larger applications, especially with the right infrastructure and server setup, but it may not yet have the same level of proven scalability as more mature frameworks like Next.js or Fastify.
⭐ **Rating: 4/5**
#### 3\. Flexibility
As with most web frameworks, SvelteKit is highly flexible, allowing developers to build everything from static sites to robust full-stack web applications. It provides SSR out of the box, making it easy to handle front-end and back-end logic in a single codebase. Additionally, it supports various deployment environments like serverless functions or traditional servers.
⭐ **Rating: 5/5**
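As a small sketch of that single-codebase model, here's a hypothetical server-side `load` function; the route and API URL are placeholders:

```typescript
// src/routes/posts/+page.server.ts (runs only on the server)
import type { PageServerLoad } from './$types';

export const load: PageServerLoad = async ({ fetch }) => {
  // Whatever is returned here reaches +page.svelte as `data.posts`,
  // server-rendered on first load and reused during client navigation.
  const res = await fetch('https://api.example.com/posts'); // placeholder API
  return { posts: await res.json() };
};
```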
#### 4\. Community and Ecosystem
The SvelteKit community is growing rapidly, and more tools, plugins, and resources are being developed to support it. While the ecosystem isn’t as mature as frameworks like React or Vue, the rate of adoption is promising. The official documentation is well-written, and there’s a growing number of third-party tools, libraries, and guides available for developers to tap into.
⭐ **Rating: 3.5/5**
#### 5\. Learning Curve
For developers familiar with Svelte, the transition to SvelteKit is smooth and intuitive. However, if you're new to Svelte, there is a moderate learning curve, particularly in understanding Svelte’s reactivity model and SvelteKit's routing and SSR features. Still, the simplicity of Svelte as a framework helps ease the learning process compared to more complex frameworks like React or Angular.
⭐ **Rating: 4/5**
#### 6\. Security
SvelteKit’s security features are still evolving, with basic protections in place but requiring developers to apply best practices to build truly secure web applications. There are no significant built-in security tools like in some larger frameworks, so developers need to be cautious and handle aspects like input validation, cross-site scripting (XSS) protection, and CSRF manually.
⭐ **Rating: 3/5**
#### 7\. Future-Proofing
Svelte’s increasing popularity and SvelteKit’s rapid development signal a bright future for the framework. The growing adoption of Svelte, backed by its simplicity and performance, ensures that SvelteKit will continue to be developed and improved in the coming years.
⭐ **Rating: 5/5**
### Ruby on Rails
![Ruby on Rails](https://assets.roadmap.sh/guest/logo-rails-eg1x0.png)
Ruby on Rails (Rails) is a full-stack web development framework written in Ruby, created by David Heinemeier Hansson in 2004. Rails revolutionized web development by promoting "convention over configuration" and allowing developers to rapidly build web applications with fewer lines of code.
#### 1\. Performance
Rails performs exceptionally well for typical CRUD (Create, Read, Update, Delete) applications, where database operations are straightforward and heavily optimized within the framework. However, as applications grow in complexity or require real-time features, Rails’ performance can become a challenge.
⭐ **Rating: 3.5/5**
#### 2\. Scalability
Rails is often critiqued for its scalability limitations, but it can scale when combined with proper architecture and best practices. Techniques like database sharding, horizontal scaling, and using background jobs for heavy-lifting tasks can help. Still, it’s not the first choice for developers who anticipate massive scale, as it requires careful planning and optimization to avoid performance bottlenecks.
⭐ **Rating: 3.5/5**
#### 3\. Flexibility
Rails is a great framework for rapid development, especially for standard web applications, such as e-commerce platforms, blogs, or SaaS products. However, it’s less flexible when it comes to non-standard architectures or unique application needs. It’s designed with conventions in mind, so while those conventions help you move fast, they can become restrictive in more unconventional use cases.
⭐ **Rating: 3.5/5**
#### 4\. Community and Ecosystem
After JavaScript’s NPM, Rails has one of the most mature ecosystems in web development, with a huge repository of gems (libraries) that can help speed up development. From user authentication systems to payment gateways, there’s a gem for almost everything, saving developers from reinventing the wheel. The community is also very active, and there are many resources, tutorials, and tools to support developers at every level.
⭐ **Rating: 5/5**
#### 5\. Learning Curve
Rails is known for its easy learning curve, especially for those new to web development. The framework’s focus on convention over configuration means that beginners don’t need to make many decisions and can get a functional app up and running quickly. On top of that, Ruby’s readable syntax also makes it approachable for new devs.
However, as the application grows, mastering the framework’s more advanced concepts and learning how to break out of those pre-defined conventions can become a challenge.
⭐ **Rating: 4/5**
#### 6\. Security
Rails comes with a solid set of built-in security features, including protections against SQL injection, XSS (cross-site scripting), and CSRF (cross-site request forgery). By following Rails' conventions, developers can implement secure practices without much additional work. However, as with any framework, you still need to stay updated on security vulnerabilities and ensure proper coding practices are followed.
⭐ **Rating: 4.5/5**
#### 7\. Future-Proofing
While Rails is still highly relevant and widely used, its growth has slowed since its initial hype around 2010, and it’s no longer the hot, new framework. That said, it remains a solid choice for many businesses, especially those building content-heavy or e-commerce applications. With an established user base and regular updates, Rails is not going anywhere, but its popularity may continue to wane as newer frameworks gain traction.
⭐ **Rating: 4/5**
### Laravel
![Laravel](https://assets.roadmap.sh/guest/logo-laravel-iteyj.png)
Laravel is a PHP backend framework that was introduced in 2011 by Taylor Otwell. It has since become one of the most popular frameworks in the PHP ecosystem, known for its elegant syntax, ease of use, and focus on developer experience (known to some as the RoR of PHP). Laravel offers a range of built-in tools and features like routing, authentication, and database management, making it ideal for building full-featured web applications quickly.
#### 1\. Performance
Laravel performs well for most typical web applications, especially CRUD operations. However, for larger, more complex applications, performance can be a concern. Using tools like caching, query optimization, and Laravel’s built-in optimization features (such as queue handling and task scheduling) can help boost performance, but some extra work may be required for high-load environments.
⭐ **Rating: 4/5**
#### 2\. Scalability
Laravel can scale, but like Rails, it requires careful attention to architecture and infrastructure. By using horizontal scaling techniques, microservices, and services like AWS or Laravel’s Vapor platform, you can build scalable applications. However, Laravel is often seen as better suited for small to medium applications without heavy scaling needs right out of the box.
⭐ **Rating: 3.5/5**
#### 3\. Flexibility
Laravel is highly flexible, allowing you to build a wide variety of web applications. With built-in features for routing, ORM, middleware, and templating, you can quickly build anything from small websites to enterprise applications.
⭐ **Rating: 5/5**
#### 4\. Community and Ecosystem
Contrary to popular belief (mainly due to a lack of hype around the technology), Laravel has a large, active community and a vast ecosystem of packages and third-party tools. With Laracasts, a popular video tutorial platform, and Laravel.io, the community portal for Laravel developers, there are many ways to reach out and learn from others.
⭐ **Rating: 5/5**
#### 5\. Learning Curve
Laravel has a relatively gentle learning curve, especially for developers familiar with PHP. Its focus on simplicity, readable syntax, and built-in features make it easy to pick up for beginners. However, mastering the full list of Laravel’s capabilities and best practices can take some time for more complex projects.
⭐ **Rating: 4.5/5**
#### 6\. Security
Just like others, Laravel comes with built-in security features, such as protection against common vulnerabilities like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). The framework adheres to best security practices, making it easier for developers to build secure applications without much extra effort.
⭐ **Rating: 4.5/5**
#### 7\. Future-proofing
Laravel is still highly relevant and continues to grow in popularity (having [recently secured a very substantial](https://laravel-news.com/laravel-raises-57-million-series-a) amount of money). It has a regular release schedule and a strong commitment to maintaining backward compatibility. With its consistent updates, active community, and growing ecosystem, Laravel is a solid choice for long-term projects.
⭐ **Rating: 4.5/5**
### Phoenix
![Phoenix](https://assets.roadmap.sh/guest/phoenix-logo-5g60a.png)
Phoenix is a backend framework written in Elixir, designed to create high-performance, scalable web applications. It leverages Elixir's concurrency and fault-tolerant nature (inherited from the Erlang ecosystem) to build real-time, distributed systems.
#### 1\. Performance
Phoenix is known for its outstanding performance, particularly in handling large numbers of simultaneous connections. Thanks to Elixir’s concurrency model and lightweight processes, Phoenix can serve thousands of requests with minimal resource consumption. Real-time applications benefit especially from Phoenix’s built-in WebSockets and its LiveView feature for updating UIs in real-time without the need for JavaScript-heavy frameworks.
⭐ **Rating: 5/5**
#### 2\. Scalability
Scalability is one of Phoenix’s biggest strengths. Because it runs on the Erlang VM, which was designed for distributed, fault-tolerant systems, Phoenix can scale horizontally and vertically with ease.
⭐ **Rating: 5/5**
#### 3\. Flexibility
Phoenix is highly flexible, supporting everything from traditional web applications to real-time applications like chat apps and live updates. Its integration with Elixir’s functional programming paradigm and the BEAM virtual machine allows developers to build fault-tolerant systems. The flexibility extends to how you can structure applications, scale components, and handle real-time events seamlessly.
⭐ **Rating: 5/5**
#### 4\. Community and Ecosystem
Phoenix has a growing and passionate community, but it’s still smaller compared to more established frameworks like Rails or Laravel. However, it benefits from Elixir’s ecosystem, including libraries for testing, real-time applications, and database management. The community is supportive, and the framework’s documentation is detailed and developer-friendly, making it easy to get started.
⭐ **Rating: 2.5/5**
#### 5\. Learning Curve
Phoenix, being built on Elixir, has a steeper learning curve than frameworks based on more common languages like JavaScript or PHP. Elixir’s functional programming model, while powerful, can be challenging for developers unfamiliar with the paradigm.
⭐ **Rating: 3.5/5**
#### 6\. Security
As with most of the popular backend frameworks, Phoenix comes with strong built-in security features, including protections against common vulnerabilities like XSS, SQL injection, and CSRF. Additionally, because Elixir processes are isolated, Phoenix applications are resilient to many types of attacks. While some manual work is still required to ensure security, Phoenix adheres to best practices and provides tools to help developers write secure code.
⭐ **Rating: 4.5/5**
#### 7\. Future-Proofing
Phoenix has a bright future thanks to its solid foundation in the Erlang/Elixir ecosystem, which is known for long-term reliability and support. While the framework itself is technologically sound and future-proof, its longevity will depend on the continued growth of Elixir’s popularity. If Elixir’s community keeps growing, we’ll be able to enjoy the framework for a long time.
⭐ **Rating: 5/5**
### Actix
![Actix](https://assets.roadmap.sh/guest/logo-actix-rust-31pi3.png)
Actix is a powerful, high-performance web framework written in Rust. It’s based on the actor model, which is ideal for building concurrent, distributed systems. Actix is known for its incredible performance and memory safety, thanks to Rust’s strict compile-time guarantees.
#### 1\. Performance
Actix is one of the fastest web frameworks available, thanks to Rust’s system-level performance and Actix’s use of asynchronous programming. It can handle a large number of requests with minimal overhead, making it ideal for high-performance, real-time applications.
⭐ **Rating: 5/5**
#### 2\. Scalability
The actor model makes Actix excellent at handling concurrent tasks and scaling across multiple threads or servers. Rust’s memory safety model and Actix’s architecture make it highly efficient in resource usage, meaning applications can scale well without excessive overhead.
⭐ **Rating: 5/5**
#### 3\. Flexibility
Actix is flexible but requires a deeper understanding of Rust’s ownership and concurrency model to fully take advantage of it. It’s great for building both small, fast APIs and large service-oriented architectures. While Actix is powerful, it’s less forgiving than other popular backend options like Node.js frameworks or Python’s Flask, where rapid prototyping is easier.
⭐ **Rating: 3/5**
#### 4\. Community and Ecosystem
Rust’s ecosystem, while growing, is still smaller compared to more established languages like JavaScript or Python. However, the Rust community is highly engaged, and support is steadily improving.
⭐ **Rating: 3.5/5**
#### 5\. Learning Curve
Actix inherits Rust’s learning curve, which can be steep for developers new to systems programming or Rust’s strict memory management rules. However, for developers already familiar with Rust, Actix can be a great gateway into web development.
⭐ **Rating: 2/5**
#### 6\. Security
Rust is known for its memory safety and security guarantees, and Actix benefits from these inherent strengths. Rust’s compile-time checks prevent common security vulnerabilities like null pointer dereferencing, buffer overflows, and data races. These guarantees cover one side of security, but web-specific vulnerabilities still need to be handled by the developer rather than the framework.
⭐ **Rating: 2.5/5**
#### 7\. Future-Proofing
Rust’s growing popularity and adoption, especially in performance-critical areas, ensure that Actix has a strong future. While Actix’s ecosystem is still developing, the framework is regularly maintained and benefits from Rust’s long-term stability.
⭐ **Rating: 4.5/5**
## Conclusion
Choosing the right backend framework is a critical decision that can shape the future of your project. In 2024, developers have more powerful options than ever, from popular backend frameworks like Ruby on Rails, Laravel, or Next.js to performance-focused ones like Fastify, SvelteKit, Phoenix, and Actix. Each framework has its own strengths, making it essential to consider factors such as performance, scalability, flexibility, and the learning curve to ensure you pick the right tool for the job.
Ultimately, there’s no proverbial silver bullet that solves all your problems. Your choice will depend on your project’s needs, your team's expertise, and the long-term goals of your application.
So take your time, weigh the pros and cons, and pick the framework that aligns best with your vision for the future.

@ -0,0 +1,186 @@
---
title: '11 DevOps Principles and Practices to Master: Pro Advice'
description: 'Elevate your game by understanding this set of key DevOps principles and practices. Gain pro insights for a more efficient, collaborative workflow!'
authorId: fernando
excludedBySlug: '/devops/principles'
seo:
title: '11 DevOps Principles and Practices to Master: Pro Advice'
description: 'Elevate your game by understanding this set of key DevOps principles and practices. Gain pro insights for a more efficient, collaborative workflow!'
ogImageUrl: 'https://assets.roadmap.sh/guest/devops-engineer-skills-tlace.jpg'
isNew: true
type: 'textual'
date: 2024-09-24
sitemap:
priority: 0.7
changefreq: 'weekly'
tags:
- 'guide'
- 'textual-guide'
- 'guide-sitemap'
---
![DevOps Principles and Practices](https://assets.roadmap.sh/guest/devops-principles-pfswx.jpg)
If you truly want to understand what makes DevOps so effective, it’s essential to know and master its core principles.
DevOps is more than just a collaboration between development and operations teams; it's built on fundamental principles that simplify software delivery.
In this guide, I’m going to dive deep into the core principles and practices that make the DevOps practice “tick.” If you’re a DevOps engineer or you want to become one, these are the DevOps principles you should master.
I’ll explain the following principles in detail:
1. Understanding the culture you want to join
2. CI/CD
3. Knowing how to use infrastructure as code tools
4. Understanding containerization
5. Monitoring & observability
6. Security
7. Reducing toil and technical debt
8. Adopting GitOps
9. Understanding that you’ll be learning & improving constantly
10. Understanding basic programming concepts
11. Embracing automation
## 1\. Understanding DevOps Culture
[DevOps](https://roadmap.sh/devops) culture is the foundation for all DevOps principles. At its core, it's about fostering a collaborative environment where development and operations teams work together seamlessly. In traditional software development, developers focus on writing code while the operations team is tasked with deploying and maintaining it. This division often leads to misunderstandings and delays.
Instead of operating in silos, these teams, when they follow the DevOps culture, end up sharing a common goal: delivering high-quality software efficiently. This cultural shift reduces the "us versus them" mentality that many organizations suffer from, fostering cooperation instead of blame.
DevOps culture encourages development and operations to collaborate throughout the software development lifecycle (SDLC). By aligning their goals and encouraging open communication, both teams can work together to improve the process of development, ultimately resulting in faster and more reliable software delivery.
Key components of this culture include shared responsibility, transparency, and a commitment to continuous improvement.
## 2\. Continuous Integration and Continuous Deployment (CI/CD)
![Continuous Integration and Continuous Deployment](https://assets.roadmap.sh/guest/continous-development-vs-continuous-integration-l2fak.png)
Continuous Integration (CI) and Continuous Deployment (CD) are central to DevOps principles. CI is the practice of frequently integrating code changes into a shared repository, ensuring that new code is automatically tested and validated. This practice helps catch bugs early, reducing the risk of introducing issues into the main codebase. CI allows devs and ops teams to work more efficiently, improving the overall quality of the software.
Continuous Deployment, on the other hand, takes things a step further by automatically deploying code changes to production once they pass the CI tests. This ensures that new features and bug fixes are delivered to users as quickly as possible. Together, CI and CD form a pipeline that streamlines the software development lifecycle, from code commit to production deployment in seconds (or in some cases, minutes).
Implementing CI/CD involves using various tools and practices. Jenkins, GitLab CI, CircleCI, and Travis CI are popular options for setting up CI pipelines, while tools like Spinnaker and Argo CD help with CD.
## 3\. Infrastructure as Code (IaC)
![Infrastructure as Code](https://assets.roadmap.sh/guest/infrastructure-as-code-w965a.png)
Infrastructure as Code (IaC) is a game-changer in the DevOps world. Traditionally, provisioning infrastructure involved manual setup and configuration, which was time-consuming and, of course, prone to human error. IaC changes the game by treating infrastructure the same way we treat application code: as a set of scripts or configurations that can be version-controlled, tested, and automated.
Through IaC, DevOps teams can ensure consistency and repeatability across different environments. It eliminates the "works on my machine" problem by providing a standardized environment for software to run on, whether it's on a developer's local machine, in a staging environment, or in production.
Over the years, IaC tools have evolved quite a lot. At the start of it all, players like Chef and Puppet introduced the concept of configuration management, allowing you to define the desired state of your systems. Ansible took that one step further with its agentless architecture, making it easier to manage infrastructure at scale. Terraform took IaC to the next level by providing a tool-agnostic way to provision resources across multiple cloud providers, making it a favorite among DevOps engineers.
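Terraform and most of these tools use their own configuration languages, but to show the idea in actual code, here's a minimal sketch using Pulumi's TypeScript SDK (Pulumi isn't covered above; the bucket name is illustrative):

```typescript
import * as aws from '@pulumi/aws';

// The desired state is declared as code: running `pulumi up` creates or
// updates the real S3 bucket to match, and this file lives in version
// control like any other source file.
const assets = new aws.s3.Bucket('app-assets', {
  versioning: { enabled: true },
});

export const bucketName = assets.id;
```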
## 4\. Containerization
![Containerization](https://assets.roadmap.sh/guest/containers-docker-g491z.png)
Containerization is a core practice and one of the main DevOps principles to apply constantly. Containers provide a lightweight, portable way to package software along with its dependencies, ensuring that it runs consistently across different environments. Containers share the host system's kernel, making them more efficient and faster to start up than virtual machines.
These “containers” have been playing a key role in solving one of the age-old problems in software development: environment inconsistencies. By encapsulating an application and its dependencies into a container, you can ensure that it runs the same way on a developer's laptop as it does in production. This consistency simplifies the development process and reduces the risk of environment-related problems.
In this space, Docker is the most popular tool for creating and managing containers (although not the only one), offering a simple way to build, package, and distribute containerized applications. Kubernetes takes containerization to the next level by providing a platform for orchestrating containers at scale. With Kubernetes, you can automate the deployment, scaling, and management of containerized applications, making it easier to manage complex, multi-container applications.
At this stage, there’s little excuse not to use containers in a software project; some of their biggest benefits in the DevOps lifecycle are consistency, scalability, and portability.
In other words, they make it easier to move applications between different environments. They also enable more efficient use of resources, as multiple containers can run on the same host without the overhead of a full virtual machine.
## 5\. Monitoring and Observability
![Monitoring and Observability](https://assets.roadmap.sh/guest/monitoring-servers-14k80.png)
Monitoring and observability are essential components of the DevOps practice and key principles for any DevOps team. While monitoring focuses on tracking the health and performance of your systems, observability goes a step further by providing insights into the internal state of your applications based on the data they produce. Together, they enable DevOps teams to detect and troubleshoot issues quickly, ensuring that applications run smoothly.
Continuous monitoring involves constantly tracking key metrics such as CPU usage, memory consumption, response times, and error rates. Tools like Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana) are popular choices for collecting and visualizing this data. All public cloud providers also have their own solutions, in some cases based on the open-source options mentioned before.
Whatever tool you choose, they all provide real-time insights into the performance of your applications and infrastructure, helping you identify potential issues before they impact users.
The practice of observability extends beyond monitoring by providing a deeper understanding of how your systems are behaving. It involves collecting and analyzing logs, metrics, and traces to gain insights into the root cause of issues. OpenTelemetry, for instance, is an emerging standard for collecting telemetry data, offering a unified way to instrument, collect, and export data for analysis. This standardization makes it easier to integrate observability into your DevOps practices, regardless of the tools you're using.
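As a tiny sketch of the monitoring side, here's a Node.js service exposing a Prometheus scrape endpoint with the `prom-client` library (the metric name and port are illustrative):

```typescript
import http from 'node:http';
import client from 'prom-client';

// One of the key metrics mentioned above: request counts, labeled by route.
const httpRequests = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests received',
  labelNames: ['route'],
});

http
  .createServer(async (req, res) => {
    if (req.url === '/metrics') {
      // Prometheus scrapes this endpoint on a fixed interval.
      res.setHeader('Content-Type', client.register.contentType);
      res.end(await client.register.metrics());
      return;
    }
    httpRequests.inc({ route: req.url ?? 'unknown' });
    res.end('ok');
  })
  .listen(3000);
```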
## 6\. Security in DevOps
![Security in DevOps](https://assets.roadmap.sh/guest/secured-servers-amsed.png)
Security is a critical aspect of the DevOps lifecycle, and it's something that needs to be integrated from the very beginning of any project expected to see the light of production at one point.
DevSecOps is the practice of embedding security into the DevOps pipeline, ensuring that security measures are applied consistently throughout the software development lifecycle (reviewing code for vulnerabilities, checking IaC scripts, etc.). This practice helps catch vulnerabilities early and greatly reduces the risk of security breaches in production.
Sadly, in many companies and teams that follow more traditional practices, security tends to be an afterthought, gaining importance only after the code is written and deployed. This approach can lead to costly and time-consuming fixes. DevSecOps, on the other hand, integrates security into every stage of the development and operations process, from code development to deployment. In the end, this helps security teams automate security testing, identify vulnerabilities early, and enforce security policies consistently, all without having to read a single line of code themselves.
In this space, tools like Snyk, Aqua Security, and HashiCorp Vault are king, and they can help you integrate security into your DevOps workflows.
## 7\. Reducing Toil and Technical Debt
Toil and technical debt are two of the biggest productivity killers in software development. Toil refers to the repetitive, manual tasks that don't add direct value to the product, while technical debt is the accumulation of shortcuts and workarounds that make the codebase harder to maintain over time. Both can slow down your development workflow and make it more challenging to deliver new features.
Because of that, one of the most important DevOps principles is to aim to reduce both. And yes, DevOps teams can help reduce technical debt too.
Reducing toil involves automating repetitive tasks to free up time for more valuable work. Tools like Ansible, Chef, and Puppet can help automate infrastructure management, while CI/CD pipelines can automate the build, test, and deployment processes. In the end, less manual work translates to reducing the chances of errors and giving team members the chance to focus on more interesting and strategic tasks.
Technical debt, on the other hand, requires a proactive approach to address. It's about finding the right balance between delivering new features and maintaining the quality of the codebase. Regularly refactoring code, improving documentation, and addressing known issues can help keep technical debt in check. Of course, this also needs to be balanced against the team's ability to deliver new features and move the product forward.
## 8\. GitOps: The Future of Deployment
![GitOps](https://assets.roadmap.sh/guest/git-ops-tmggz.png)
GitOps is a new practice that takes the principles of Git and applies them to operations. It's about using Git as the single source of truth for your infrastructure and application configurations. By storing everything in Git, you can use version control to manage changes, track history, and facilitate collaboration among development and operations teams.
You essentially version your entire infrastructure with the same tools you version your code.
In other words, all changes to the infrastructure and applications are made through pull requests to the Git repository. Once a change is merged, an automated process applies the change to the target environment. This approach provides a consistent, auditable, and repeatable way to manage deployments, making it easier to maintain the desired state of your systems.
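To make the reconciliation idea concrete, here is a deliberately simplified TypeScript sketch of the loop a GitOps operator runs; every name and the state shape are invented for illustration, and real tools like Argo CD and Flux operate on Kubernetes manifests rather than a flat map:

```typescript
// Invented, minimal model of GitOps reconciliation: compare the desired
// state declared in Git with the live state and converge the difference.
type State = Record<string, string>; // e.g. { api: 'v1.4.2' }

const desiredStateInGit: State = { api: 'v1.4.2', worker: 'v2.0.0' };
const liveState: State = { api: 'v1.4.1', worker: 'v2.0.0' };

function reconcile(desired: State, live: State): void {
  for (const [component, version] of Object.entries(desired)) {
    if (live[component] !== version) {
      // A real operator would apply a manifest; here we just log and converge.
      console.log(`Updating ${component}: ${live[component]} -> ${version}`);
      live[component] = version;
    }
  }
}

// Operators run this continuously; we run it once for the example.
reconcile(desiredStateInGit, liveState);
```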
Through GitOps, teams can manage deployments and gain the following benefits: improved visibility, version control, and traceability.
This methodology aligns well with the DevOps principles of automation, consistency, and collaboration, making it easier to manage complex deployments at scale.
Key tools for implementing GitOps include Argo CD and Flux. These tools help you automate the deployment process by monitoring the Git repository for changes and applying them to the cluster.
## 9\. Continuous Learning and Improvement
![Continuous Learning and Improvement](https://assets.roadmap.sh/guest/learn-improve-4fzcr.png)
The world of tech is constantly evolving, and continuous learning and improvement are essential practices for staying relevant and ahead of the curve.
That said, in the DevOps landscape change is also a constant, with new tools, practices, and technologies emerging all the time. If you think about it, modern containers as we know them only date back to the mid-2000s, and Docker itself didn't even exist before 2013.
So to keep up, DevOps engineers and teams need to be committed to learning and improving continuously.
Encouraging a culture of continuous learning within your team can help keep everyone up-to-date with the latest DevOps trends and tools. This can include participating in conferences, attending workshops, and enrolling in online courses. Reading books like "The Phoenix Project," "The Unicorn Project," and "The DevOps Handbook" can provide valuable insights and inspiration.
If you’re not into books, then websites like [12factor.net](https://12factor.net), [OpenGitOps.dev](https://opengitops.dev), and [CNCF.io](https://www.cncf.io) are also great resources for staying current with industry best practices.
Continuous improvement goes hand-in-hand with continuous learning. It's about regularly reviewing and refining your processes, identifying areas for improvement after failures, and experimenting with new approaches. This iterative approach helps you optimize the development process, improve collaboration between devs and operations, and deliver better software.
## 10\. Understanding Programming Concepts
![Understanding Programming Concepts](https://assets.roadmap.sh/guest/code-sample-fjblw.png)
While not every DevOps engineer needs to be a full-fledged developer, having a solid understanding of programming concepts is key to success in the professional world.
A good grasp of programming helps bridge the gap between development and operations, making it easier to collaborate and understand each other's needs. That, if you think about it, is literally the core principle of the DevOps practice.
Understanding programming translates to being able to write scripts in languages like Bash, Python, or PowerShell to automate tasks, manage infrastructure, and interact with APIs. This can range from simple tasks like automating server setup to more complex operations like orchestrating CI/CD pipelines.
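For instance, a routine health check is only a few lines of code. Here's a small sketch in TypeScript on Node 18+ (which ships a global fetch); the endpoints are placeholders, and the same idea translates directly to Bash, Python, or PowerShell:

```typescript
// Placeholder endpoints; swap in your own services.
const healthEndpoints = [
  'https://api.example.com/health',
  'https://auth.example.com/health',
];

async function checkHealth(url: string): Promise<void> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
    console.log(`${url} -> ${res.ok ? 'healthy' : `unhealthy (${res.status})`}`);
  } catch (err) {
    console.log(`${url} -> unreachable (${(err as Error).message})`);
  }
}

await Promise.all(healthEndpoints.map(checkHealth));
```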
Understanding programming concepts also enables you to better manage the software development lifecycle. It helps you understand how code changes affect system performance, security, and stability. This insight allows you to make more informed decisions when designing and implementing infrastructure and deployment processes.
## 11\. Automation in DevOps
Automation is at the heart of DevOps principles. It's about automating repetitive and manual tasks to accelerate processes, reduce errors, and free up time for more strategic work. We partially covered this concept before as part of the toil reduction principle.
However, it’s important to note that automation involves not only code builds and tests but also infrastructure provisioning and application deployment. In other words, automation plays a key role in every stage of the DevOps lifecycle.
The whole point of automation is to accelerate processes. It enables faster, more consistent, and more reliable software delivery. By automating tasks like code integration, testing, and deployment, you can reduce the time it takes to get new features into production and minimize the risk of human error.
There are many areas in the DevOps lifecycle where automation can be applied; in fact, the challenge is finding areas where it wouldn't make sense to apply it. These include CI/CD pipelines, infrastructure provisioning, configuration management, monitoring, and security testing. Some of the most popular DevOps tools in this area are Jenkins, Ansible, Terraform, and Selenium. They all provide the building blocks for automating these tasks, allowing you to create a seamless and efficient development workflow that everyone enjoys.
If you’re looking to start implementing automation in your DevOps workflow, consider starting small and gradually expanding automation efforts, using version control for automation scripts (Git is a great option), and continuously monitoring and refining automated processes.
It's important to find a balance between automation and human intervention, ensuring that automation enhances the development workflow without introducing unnecessary complexity.
## Conclusion
And there you have it—the core principles and practices of DevOps in a nutshell. By mastering them, you'll be well on your way to becoming a great [DevOps engineer](https://roadmap.sh/devops/devops-engineer).
Whether you're just starting out or looking to level up your DevOps game, there's always something new to learn and explore. So keep experimenting, keep learning, and most importantly, keep having fun\!
After all, DevOps isn't just about making systems run smoothly—it's about building a culture that encourages innovation, collaboration, and growth. As you dive deeper into the DevOps practice, you'll not only become more skilled but also contribute to creating better software and more agile teams.

@ -0,0 +1,376 @@
---
title: 'Top 7 Frontend Frameworks to Use in 2024: Pro Advice'
description: 'Get expert advice on frontend frameworks for 2024. Elevate your web development process with these top picks.'
authorId: fernando
excludedBySlug: '/frontend/frameworks'
seo:
title: 'Top 7 Frontend Frameworks to Use in 2024: Pro Advice'
description: 'Get expert advice on frontend frameworks for 2024. Elevate your web development process with these top picks.'
ogImageUrl: 'https://assets.roadmap.sh/guest/top-frontend-frameworks-wmqwc.jpg'
isNew: true
type: 'textual'
date: 2024-09-26
sitemap:
priority: 0.7
changefreq: 'weekly'
tags:
- 'guide'
- 'textual-guide'
- 'guide-sitemap'
---
![Best frontend frameworks](https://assets.roadmap.sh/guest/top-frontend-frameworks-wmqwc.jpg)
With the growing complexity of web applications, selecting the right frontend framework is more important than ever. Your choice will impact performance, scalability, and development speed. Not to mention the future-proofing of your application.
In 2024, web development is increasingly about building fast, scalable, and highly interactive user interfaces. Frontend frameworks now need to support real-time interactions, handle large-scale data, and provide excellent developer experiences by simplifying the web development process.
Picking the right frontend framework isn't just about what's popular—it's about finding the tool that fits your project’s needs, whether you’re building a small static site or a large, complex application.
The top frontend frameworks for web development that I’ll cover as part of this article are:
* React
* VueJS
* Angular
* Svelte
* Solid.js
* Qwik
* Astro
## Criteria for Evaluating Frontend Frameworks
Defining what the “best frontend framework” looks like is not easy. In fact, it’s impossible without knowing the particular characteristics of your project, your team, and all the other surrounding details. They will all inform your final decision.
To help in that process, I’ve defined a set of key indicators that will give you an idea of how I’m measuring the value of each of the leading [frontend development](https://roadmap.sh/frontend) frameworks covered in this article.
1. **Performance:** How well does the frontend framework handle real-world scenarios, including page load times, rendering speed, and efficient resource use?
2. **Popularity and Community Support:** Is there a large community around the framework? How easy is it to find tutorials, forums, and third-party tools?
3. **Learning Curve:** Is the framework easy to learn for new developers, or does it require mastering complex patterns and paradigms?
4. **Ecosystem and Extensibility:** Does the framework offer a robust ecosystem of libraries, plugins, and tooling to extend its functionality?
5. **Scalability and Flexibility:** Can the framework handle both small and large projects? Is it flexible enough to support different project types, from single-page applications (SPAs) to complex enterprise solutions?
6. **Future-Proofing:** Is the framework actively maintained and evolving? Will it remain relevant in the next few years, based on trends and support?
### My go-to frontend framework of choice
My go-to framework is React because it has the highest ecosystem score and is one of the most future-proof options.
I’ve applied the above criteria to the best frontend development frameworks I’m covering below in this guide. This table gives you a snapshot view of how they all compare according to my ratings, and I’ll explain the details further below.
![table of frameworks](https://assets.roadmap.sh/guest/table-of-frameworks-yu22p.png)
Of course, the choice of React is mine, and mine alone. You have to consider your own projects and your own context to understand what the best choice for you would be.
Let’s get into the selection and what their strengths and weaknesses are to help you select the right one for you.
## Top 7 Frontend Development Frameworks in 2024
### React
![React](https://assets.roadmap.sh/guest/react-logo-d5ice.png)
React was created by Facebook in 2013 and has since become one of the most popular frontend frameworks (though technically a library). Initially developed to solve the challenges of building dynamic and complex user interfaces for Facebook’s apps, React introduced the revolutionary concept of the virtual DOM (Document Object Model), which allowed developers to efficiently update only the parts of the UI that changed instead of re-rendering the entire page.
#### Performance
React uses a virtual DOM to optimize performance by minimizing the number of direct manipulations to the actual DOM. This allows React to efficiently update only the components that need to change, rather than re-rendering the entire page. While React is fast, performance can suffer in large applications if not managed carefully, especially with unnecessary re-renders or poorly optimized state management (two topics that have generated plenty of literature, and that many developers still get wrong).
**⭐ Rating: 4/5**
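To illustrate the re-render point, here's a small TSX sketch (the component names are made up): React.memo plus a stable useCallback reference keeps the child from re-rendering every time the parent's unrelated state changes:

```tsx
import React, { memo, useCallback, useState } from 'react';

// Without memo, this would re-render on every parent update.
const ExpensiveList = memo(function ExpensiveList({ onPick }: { onPick: (id: number) => void }) {
  console.log('ExpensiveList rendered');
  return (
    <ul>
      {[1, 2, 3].map((id) => (
        <li key={id} onClick={() => onPick(id)}>Item {id}</li>
      ))}
    </ul>
  );
});

export function Dashboard() {
  const [count, setCount] = useState(0);
  // useCallback keeps the prop referentially stable between renders.
  const handlePick = useCallback((id: number) => console.log('picked', id), []);

  return (
    <>
      <button onClick={() => setCount((c) => c + 1)}>Clicked {count} times</button>
      <ExpensiveList onPick={handlePick} />
    </>
  );
}
```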
#### Popularity and Community Support
React is one of the most popular frontend frameworks worldwide, with widespread adoption in both small and large-scale applications. Its massive community means there's a wealth of tutorials, libraries, and third-party tools available. With strong backing from Meta and continuous contributions from developers globally, React has one of the richest ecosystems and the largest support networks.
⭐ **Rating: 5/5**
#### Learning Curve
React has a moderate learning curve. It’s relatively easy to get started with, especially if you’re familiar with JavaScript, but understanding concepts like JSX and hooks can take some time (especially if you throw in the relatively new server components). Once you grasp the basics, React becomes easier to work with, but mastering advanced patterns and state management solutions can add complexity.
⭐ **Rating: 3.5/5**
#### Ecosystem and Extensibility
React has one of the most mature and extensive ecosystems in the frontend space. With a vast selection of libraries, tools, and plugins, React can be extended to meet virtually any development need. Key libraries like React Router (for routing) and Redux (for state management) are widely adopted, and there are countless third-party components available. React's ecosystem is one of its greatest strengths, offering flexibility and extensibility for all kinds of projects.
⭐ **Rating: 5/5**
#### Scalability and Flexibility
React is highly flexible and can scale to meet the needs of both small and large applications. Its component-based architecture allows for modular development, making it easy to manage complex UIs. React is adaptable to various types of projects, from simple SPAs to large, enterprise-level applications. However, managing state in larger applications can become challenging, often requiring the use of external tools like Redux or Context API for better scalability.
**⭐ Rating: 4.5/5**
#### Future-Proofing
React remains one of the most future-proof frameworks, with continuous updates and strong backing from Meta (Facebook). Its widespread adoption ensures that it will be well-supported for years to come. The ecosystem is mature, but React is constantly evolving with features like concurrent rendering and server-side components. The size of the community and corporate support make React a safe bet for long-term projects.
⭐ **Rating: 5/5**
### Vue.js
![vuejs](https://assets.roadmap.sh/guest/vuejs-logo-b8w07.png)
Vue.js was developed in 2014 by Evan You, who had previously worked on AngularJS at Google. His goal was to create a framework that combined the best parts of Angular’s templating system with the simplicity and flexibility of modern JavaScript libraries like React. Vue is known for its progressive nature, which means developers can incrementally adopt its features without having to completely rewrite an existing project.
#### Performance
Vue’s reactivity system provides a highly efficient way to track changes to data and update the DOM only when necessary. Its virtual DOM implementation is lightweight and fast, making Vue a strong performer for both small and large applications. Vue 3’s Composition API has further optimized performance by enabling more granular control over component updates.
⭐ **Rating: 4.5/5**
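For a taste of that granularity, here's a minimal TypeScript sketch using Vue 3's reactivity primitives on their own (the values are illustrative):

```typescript
import { ref, computed, watchEffect } from 'vue';

// A reactive source and a derived value; names are illustrative.
const count = ref(0);
const double = computed(() => count.value * 2);

// Re-runs only when the values it reads actually change.
watchEffect(() => {
  console.log(`count=${count.value}, double=${double.value}`);
});

count.value++; // triggers exactly the updates that depend on it
```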
#### Popularity and Community Support
Vue.js has grown significantly in popularity, especially in regions like China and Europe, and is widely adopted by startups and smaller companies. Although it doesn't have the corporate backing of React or Angular, its community is passionate, and the framework enjoys strong support from individual contributors. Vue’s ecosystem is robust, with many official libraries and third-party plugins, making it a favorite among developers looking for a balance of simplicity and power.
⭐ **Rating: 4.5/5**
#### Learning Curve
Vue’s syntax is clean and straightforward, with a structure that is easy to understand even for those new to frontend frameworks. Features like two-way data binding and directives are intuitive, making Vue much easier to pick up compared to React or Angular.
⭐ **Rating: 4.5/5**
#### Ecosystem and Extensibility
Vue has a rich and growing ecosystem, with many official libraries like Vue Router, Vuex (for state management), and Vue CLI (for project setup). Additionally, its ecosystem includes many high-quality third-party plugins that make it easy to extend Vue applications. While not as large as React’s, Vue’s ecosystem is well-curated and highly effective, making it both powerful and developer-friendly.
⭐ **Rating: 4.5/5**
#### Scalability and Flexibility
Vue is extremely flexible and scalable. It is designed to be incrementally adoptable, which means you can use it in small parts of a project or as the foundation for a large-scale application. Vue’s core libraries, along with tools like Vuex, make it highly scalable.
⭐ **Rating: 4.5/5**
#### Future-Proofing
Vue is actively maintained and supported by a strong open-source community. Its development pace is steady. While it doesn’t have the same level of corporate backing as React or Angular, its growing popularity and enthusiastic community ensure its longevity. Vue is a solid choice for long-lasting projects.
⭐ **Rating: 4.5/5**
### Angular
![Angular](https://assets.roadmap.sh/guest/angular-logo-tr4wg.png)
Angular was first introduced by Google in 2010 as AngularJS, a framework that revolutionized web development by introducing two-way data binding and dependency injection. However, AngularJS eventually became difficult to maintain as applications grew more complex, leading Google to rewrite the framework from the ground up in 2016 with the release of Angular 2 (commonly referred to simply as "Angular").
#### Performance
Angular offers solid performance, especially in large enterprise applications. It uses a change detection mechanism combined with the Ahead-of-Time (AOT) compiler to optimize performance by compiling templates into JavaScript code before the browser runs them. The built-in optimizations are robust, but Angular’s size and complexity can lead to performance overhead if not managed correctly.
⭐ **Rating: 4/5**
#### Popularity and Community Support
Angular is backed by Google and is a popular choice for enterprise-level applications, especially in larger organizations. Its community is active, and Google’s long-term support ensures regular updates and improvements. Angular has a strong presence in corporate environments, and its ecosystem includes official tooling and libraries. However, it is less commonly used by smaller teams and individual developers compared to React and Vue.
⭐ **Rating: 4.5/5**
#### Learning Curve
Angular has a steep learning curve due to its complexity and reliance on TypeScript. New web developers may find it challenging to grasp Angular’s concepts, such as dependency injection, decorators, and modules. The comprehensive nature of Angular also means there’s a lot to learn before you can be fully productive, but for experienced developers working on large-scale applications, the structure and tooling can be highly beneficial.
⭐ **Rating: 3/5**
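To make those concepts a little less abstract, here's a minimal standalone Angular component in TypeScript showing decorators and dependency injection (the names are invented, and a real app still needs bootstrapping):

```typescript
import { Component, Injectable, inject } from '@angular/core';

// A service registered with Angular's injector (illustrative name).
@Injectable({ providedIn: 'root' })
export class GreetingService {
  greet(): string {
    return 'Hello from Angular DI';
  }
}

// A standalone component: no NgModule required in modern Angular.
@Component({
  selector: 'app-hello',
  standalone: true,
  template: `<p>{{ message }}</p>`,
})
export class HelloComponent {
  // inject() is the functional alternative to constructor injection.
  private readonly greeting = inject(GreetingService);
  message = this.greeting.greet();
}
```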
#### Ecosystem and Extensibility
Angular’s ecosystem is comprehensive and fully integrated, offering everything developers need right out of the box. Angular includes official libraries for routing, HTTP client, forms, and more, all provided and maintained by Google. The Angular CLI is a robust tool for managing projects. However, Angular's strict architecture means less flexibility when integrating with external libraries compared to React or Vue, though the ecosystem is extensive.
⭐ **Rating: 4.5/5**
#### Scalability and Flexibility
Angular is built with scalability in mind, making it ideal for large, complex applications. Its strict structure and reliance on TypeScript make it a great fit for projects that require clear architecture and maintainability over time. Angular’s modularity and out-of-the-box features like dependency injection and lazy loading enable it to handle enterprise-level web applications with multiple teams. However, its strictness can reduce flexibility for smaller, less complex projects.
⭐ **Rating: 5/5**
#### Future-Proofing
Angular has a clear roadmap and long-term support, making it one of the most future-proof frameworks, especially for enterprise applications. Google’s regular updates ensure that Angular remains competitive in the evolving frontend ecosystem. Its TypeScript foundation, strong architecture, and large-scale adoption make it a reliable option for projects with long lifecycles.
⭐ **Rating: 5/5**
### Svelte
![Svelte](https://assets.roadmap.sh/guest/svelte-logo-mln7r.png)
Svelte is a relatively new entrant in the frontend landscape, created by Rich Harris in 2016\. Unlike traditional frameworks like React and Vue, which do much of their work in the browser, Svelte takes a different approach. It shifts most of the work to compile time, meaning that the framework compiles the application code into optimized vanilla JavaScript during the build process, resulting in highly efficient and fast-running code.
#### Performance
Svelte takes a unique approach to performance by compiling components into highly optimized vanilla JavaScript at build time, removing the need for a virtual DOM entirely. This leads to very fast runtime performance and smaller bundle sizes, as only the necessary code is shipped to the browser. Svelte excels in small, fast-loading applications, making it one of the fastest frontend frameworks available.
⭐ **Rating: 5/5**
#### Popularity and Community Support
Svelte has seen rapid growth in popularity (partially due to its novel approach). While its community is smaller compared to React, Vue, or Angular, it’s highly engaged and growing steadily. Svelte has fewer third-party libraries and tools, but the community is working hard to expand its ecosystem. It's particularly popular for smaller projects and developers who want a minimalistic framework.
⭐ **Rating: 4/5**
#### Learning Curve
Svelte is relatively easy to learn, especially for web developers familiar with modern JavaScript. Its component-based structure is intuitive, and there’s no need to learn a virtual DOM or complex state management patterns. The absence of a virtual DOM and the simplicity of Svelte’s syntax make it one of the easiest frontend frameworks to pick up.
⭐ **Rating: 4.5/5**
#### Ecosystem and Extensibility
Svelte’s ecosystem is still maturing compared to more established frameworks. While it lacks the extensive third-party library support of React or Vue, Svelte’s core tools like SvelteKit (for building full-stack applications) provide much of what is needed for most use cases. That said, the growing community is actively contributing to expanding the ecosystem and its documentation.
⭐ **Rating: 3.5/5**
#### Scalability and Flexibility
Svelte is highly flexible and performs well in small to medium-sized projects. It’s great at creating fast, lightweight applications with minimal boilerplate. While Svelte’s compile-time approach leads to excellent performance, the truth is that Svelte is still relatively new and less battle-tested, so its scalability for very large projects or teams is still to be determined.
⭐ **Rating: 4/5**
#### Future-Proofing
Svelte is gaining momentum as a modern, high-performance framework, and its unique approach has attracted a lot of attention. While the community is still smaller than that of React or Vue, it is growing rapidly, and the introduction of tools like SvelteKit further enhances its long-term viability. Svelte’s focus on simplicity and performance means it has the potential to become a significant player, but it's still early in terms of large-scale enterprise adoption.
⭐ **Rating: 4/5**
### Solid.js
![Solidjs](https://assets.roadmap.sh/guest/solid-logo-4sh7s.png)
Solid.js is a more recent addition to the frontend ecosystem, developed by Ryan Carniato in 2018\. Inspired by React’s declarative style, Solid.js seeks to offer similar features but with even better performance by using a fine-grained reactivity system. Unlike React, which uses a virtual DOM, Solid compiles its reactive components down to fine-grained, efficient updates, reducing overhead and increasing speed.
#### Performance
Solid.js is designed for performance, using a fine-grained reactivity system to ensure that only the necessary parts of the DOM are updated. This eliminates the need for a virtual DOM, resulting in highly efficient rendering and state updates. Solid’s performance is often considered one of the best in the frontend space, especially for applications with complex state management.
⭐ **Rating: 5/5**
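Here's what that fine-grained reactivity looks like in a short TypeScript sketch using Solid's primitives outside a component (the names are illustrative):

```typescript
import { createSignal, createEffect, createRoot } from 'solid-js';

createRoot(() => {
  const [count, setCount] = createSignal(0);

  // Re-runs only when the exact signals it reads change; no virtual DOM diffing.
  createEffect(() => {
    console.log(`count is ${count()}`);
  });

  setCount(1); // triggers just the effect above
});
```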
#### Popularity and Community Support
Solid.js is still a relatively new player in the frontend space, but it is gaining traction due to its high performance and fine-grained reactivity model. The community is smaller compared to other frameworks but highly enthusiastic, and interest in Solid.js is growing quickly. While it has fewer resources and libraries available compared to larger frameworks, it is gradually building a strong support network.
⭐ **Rating: 3.5/5**
#### Learning Curve
Solid.js has a learning curve similar to React, particularly because of its JSX-like syntax. However, its fine-grained reactivity system introduces new concepts that might take some time to fully understand, especially for those new to reactive programming. While its reactivity model offers powerful performance benefits, frontend developers need to adjust to this different approach, making it slightly more challenging than React for beginners.
⭐ **Rating: 3.5/5**
#### Ecosystem and Extensibility
Solid.js has a smaller but rapidly growing ecosystem. While it supports libraries like Solid Router for routing and integrates well with existing JavaScript tools, the number of available third-party extensions is still limited compared to React or Vue. Solid is seeing increasing contributions from the community, and as it grows in popularity, its ecosystem is expected to expand.
⭐ **Rating: 3.5/5**
#### Scalability and Flexibility
Solid.js, with its fine-grained reactivity, is extremely flexible and scales well for complex applications. Its unique reactivity model enables it to handle large, state-heavy applications with minimal overhead. While Solid is still proving itself in larger, enterprise-level environments, its design offers promising scalability. However, due to its newness, large-scale implementations are less common compared to more established frameworks like React or Angular.
⭐ **Rating: 4/5**
#### Future-Proofing
Solid.js, although newer, is quickly gaining traction due to its performance benefits and innovative reactivity model. Its small but dedicated community is growing, and the framework's architecture is built with modern web needs in mind. While it’s not yet widely adopted in enterprise environments, its potential for long-term use is promising, especially as more developers discover its benefits. However, its ecosystem is still developing.
⭐ **Rating: 4/5**
### Qwik
![Qwik](https://assets.roadmap.sh/guest/qwik-logo-3dfy8.png)
Qwik, created by Misko Hevery (the creator of AngularJS), is an innovative frontend framework that aims to solve the problem of slow page load times by introducing a new architecture called "resumability." Introduced in 2021 (making it the youngest frontend framework on this list), Qwik is designed to instantly load websites by only downloading and executing the minimal amount of code required to render the page, deferring the loading of other parts of the application until they are needed.
#### Performance
Qwik’s performance is built around its innovative resumable architecture, which optimizes for instant loading. It loads only the minimal amount of JavaScript needed to render the page, and additional code is loaded asynchronously as needed. This makes Qwik ideal for performance-critical applications, especially on slower devices and networks.
⭐ **Rating: 5/5**
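For a feel of the model, here's a minimal Qwik counter in TSX (the component is invented for illustration); the `$` suffixes mark the boundaries where Qwik can defer loading code, which is what enables resumability:

```tsx
import { component$, useSignal } from '@builder.io/qwik';

// The $ boundaries let Qwik serialize state on the server and
// download the click handler only when it's actually needed.
export const Counter = component$(() => {
  const count = useSignal(0);

  return (
    <button onClick$={() => count.value++}>
      Clicked {count.value} times
    </button>
  );
});
```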
#### Popularity and Community Support
Qwik is an emerging framework with an innovative approach to performance. Its community is still in its early stages, but there is increasing interest due to its "resumable" architecture. Although the ecosystem is small, the framework’s unique features have caught the attention of developers looking to push the boundaries of frontend performance. As of 2024, Qwik's community is expanding, though still much smaller than React or Vue.
⭐ **Rating: 3.5/5**
#### Learning Curve
Qwik has a moderate learning curve, largely due to its new "resumable" approach to web development. Developers who are used to traditional frontend frameworks may find Qwik’s architecture and its emphasis on lazy loading and instant loading a bit unfamiliar. While the concepts are powerful, it can take time to fully grasp how to take advantage of Qwik’s unique features.
⭐ **Rating: 3.5/5**
#### Ecosystem and Extensibility
Qwik’s ecosystem is still in its early stages, but it is designed to be compatible with existing tools and libraries. The framework’s emphasis on performance over complexity means that while it lacks a large number of third-party plugins, it is designed to work alongside existing technologies.
⭐ **Rating: 3/5**
#### Scalability and Flexibility
Qwik’s architecture is designed to handle scalability from the ground up. Its "resumable" approach allows applications to scale by loading only the necessary parts of the app on demand, making it particularly well-suited for performance-critical, large-scale projects. Although Qwik is still emerging, its emphasis on scalability and performance ensures it can grow with the demands of large, complex applications, at least on paper. Much like with Svelte, Qwik needs a lot more testing before we can draw a final verdict on its scalability.
⭐ **Rating: 4.5/5**
#### Future-Proofing
Qwik is an exciting new frontend framework that introduces a novel approach with its resumable architecture, positioning it well for future needs around performance and scalability. Though still emerging, Qwik’s design aligns with modern web development process demands, particularly for fast-loading, performance-critical applications. If the community and ecosystem continue to grow, Qwik has strong future-proofing potential, especially for performance-sensitive projects.
⭐ **Rating: 4/5**
### Astro
![Astro](https://assets.roadmap.sh/guest/astro-logo-7rzp9.png)
Astro was created in 2021 by the team behind Snowpack and is a frontend framework focused on static site generation with minimal JavaScript. Astro takes a unique approach by allowing developers to build components using popular frameworks like React, Vue, and Svelte, but it only ships the HTML to the browser, greatly reducing the amount of JavaScript that needs to be processed by the client.
#### Performance
Astro is optimized for static site generation, shipping little to no JavaScript to the browser by default. This approach leads to very fast page load times, especially for content-heavy sites. While Astro does allow for interactive components, its performance is generally excellent due to the minimal JavaScript footprint on the client side.
⭐ **Rating: 5/5**
#### Popularity and Community Support
Astro is rapidly gaining popularity, especially in the static site generation space. Its framework-agnostic approach and performance optimizations have led to a growing community. While smaller than React or Vue, Astro’s community is highly active, with increasing adoption for content-heavy websites and static site generation. The ecosystem is expanding quickly with new integrations and plugins.
⭐ **Rating: 4/5**
#### Learning Curve
Astro is known for being easy to pick up, especially for developers already familiar with other frontend frameworks like React, Vue, or Svelte. Its framework-agnostic approach allows developers to use familiar components and libraries while taking advantage of Astro’s static site generation features. Astro’s simplicity makes it an accessible choice for beginners and experienced developers alike.
⭐ **Rating: 5/5**
#### Ecosystem and Extensibility
Astro’s ecosystem is rapidly growing, with support for integrations with popular frameworks like React, Vue, and Svelte. Astro’s framework-agnostic approach allows developers to combine reusable components from different ecosystems in a single project. Its extensibility is also enhanced by its plugin system, which allows web developers to customize their creations even further.
⭐ **Rating: 4/5**
#### Scalability and Flexibility
Astro is highly flexible for static sites and excels in building fast, scalable content-heavy websites. Its architecture allows for scaling static sites with minimal client-side JavaScript, making it an excellent choice for projects like blogs, documentation sites, or e-commerce platforms. However, Astro is not designed for the large-scale, dynamic web applications that frameworks like React or Angular target, which limits its scalability in highly interactive or complex projects.
⭐ **Rating: 3.5/5**
#### Future-Proofing
Astro is rapidly growing in popularity, especially for static site generation, and its framework-agnostic approach ensures that it can work with future tools and technologies. As the need for fast, content-heavy websites continues to grow, Astro is well-positioned to meet that demand. Its unique architecture and growing ecosystem suggest it will remain relevant, especially for static sites, but its future-proofing for dynamic applications is less certain compared to other frontend frameworks.
⭐ **Rating: 4/5**
## Conclusion
The space of frontend development continues to evolve with an impressive list of frameworks to choose from. Whether you're aiming for performance, scalability, ease of use, or future-proofing, each frontend framework brings something unique to the table.
* **React** remains a reliable choice for large-scale applications.
* **Vue.js** offers a perfect balance of simplicity and scalability.
* **Angular** is the go-to for enterprise-level projects.
* **Svelte** and **Solid.js** are great options for developers who prioritize performance and simplicity.
* **Qwik** is an exciting new contender focused on instant loading and performance.
* **Astro** shines in static site generation, combining a modern approach with the ability to integrate multiple frameworks for highly flexible, content-heavy sites.
In the end, the choice comes down to your specific project needs. Whatever you’re building, there's a framework here that can help you succeed.
The future of frontend development is exciting, and these frontend frameworks ensure you're equipped for whatever challenges lie ahead.

@ -1,7 +1,7 @@
---
jsonUrl: '/jsons/roadmaps/ai-data-scientist.json'
pdfUrl: '/pdfs/roadmaps/ai-data-scientist.pdf'
order: 4
order: 5
renderer: 'editor'
briefTitle: 'AI and Data Scientist'
briefDescription: 'Step by step guide to becoming an AI and Data Scientist in 2024'

File diff suppressed because one or more lines are too long

@ -0,0 +1,50 @@
---
jsonUrl: '/jsons/roadmaps/ai-engineer.json'
pdfUrl: '/pdfs/roadmaps/ai-engineer.pdf'
order: 4
renderer: 'editor'
briefTitle: 'AI Engineer'
briefDescription: 'Step by step guide to becoming an AI Engineer in 2024'
title: 'AI Engineer Roadmap'
description: 'Step by step guide to becoming an AI Engineer in 2024'
hasTopics: true
isNew: true
dimensions:
width: 968
height: 3200
question:
title: 'What is an AI Engineer?'
description: |
An AI Engineer uses pre-trained models and existing AI tools to improve user experiences. They focus on applying AI in practical ways, without building models from scratch. This is different from AI Researchers and ML Engineers, who focus more on creating new models or developing AI theory.
schema:
headline: 'AI Engineer Roadmap'
description: 'Learn how to become an AI Engineer with this interactive step by step guide in 2024. We also have resources and short descriptions attached to the roadmap items so you can get everything you want to learn in one place.'
imageUrl: 'https://roadmap.sh/roadmaps/ai-engineer.png'
datePublished: '2024-10-03'
dateModified: '2024-10-03'
seo:
title: 'AI Engineer Roadmap'
description: 'Learn to become an AI Engineer using this roadmap. Community driven, articles, resources, guides, interview questions, quizzes for modern AI engineering.'
keywords:
- 'ai engineer roadmap 2024'
- 'guide to becoming an ai engineer'
- 'ai engineer roadmap'
- 'ai engineer skills'
- 'become an ai engineer'
- 'ai engineer career path'
- 'skills for ai engineer'
- 'ai engineer quiz'
- 'ai engineer interview questions'
relatedRoadmaps:
- 'ai-data-scientist'
- 'prompt-engineering'
- 'data-analyst'
- 'python'
sitemap:
priority: 1
changefreq: 'monthly'
tags:
- 'roadmap'
- 'main-sitemap'
- 'role-roadmap'
---

@ -0,0 +1,7 @@
# Adding end-user IDs in prompts
Sending end-user IDs in your requests can be a useful tool to help OpenAI monitor and detect abuse. This allows OpenAI to provide your application with more actionable feedback in the event that they detect any policy violations.
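As a hedged TypeScript sketch using the official openai npm package (the model and ID are placeholders), the `user` field is where the end-user identifier goes:

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini', // placeholder model choice
  messages: [{ role: 'user', content: 'Hello!' }],
  // A unique, non-identifying ID for the end user (e.g. a hashed account ID).
  user: 'user_123456',
});
```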
Visit the following resources to learn more:
- [@official@OpenAI Documentation](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids)

@ -0,0 +1,8 @@
# Agents Use Cases
AI Agents allow you to automate complex workflows that involve multiple steps and decisions.
Visit the following resources to learn more:
- [@article@What are AI Agents?](https://aws.amazon.com/what-is/ai-agents/)
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)

@ -0,0 +1,8 @@
# AI Agents
AI Agents are LLM-powered systems that can automate complex workflows involving multiple steps and decisions.
Visit the following resources to learn more:
- [@article@What are AI Agents?](https://aws.amazon.com/what-is/ai-agents/)
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)

@ -0,0 +1,8 @@
# AI Agents
AI Agents are LLM-powered systems that can automate complex workflows involving multiple steps and decisions.
Visit the following resources to learn more:
- [@article@What are AI Agents?](https://aws.amazon.com/what-is/ai-agents/)
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)

@ -0,0 +1,8 @@
# AI Code Editors
AI code editors have first-class support for AI built into the editor. You can use AI to generate code, fix bugs, chat with your codebase, and more.
Visit the following resources to learn more:
- [@website@Cursor](https://cursor.com/)
- [@website@Zed AI](https://zed.dev/ai)

@ -0,0 +1,3 @@
# AI Engineer vs ML Engineer
An AI Engineer differs from an AI Researcher or ML Engineer. AI Engineers focus on leveraging pre-trained models and existing AI technologies to enhance user experiences, without the need to train models from scratch.

@ -0,0 +1,3 @@
# AI Safety and Ethics
Learn about the principles and guidelines for building safe and ethical AI systems.

@ -0,0 +1,3 @@
# AI vs AGI
AI (Artificial Intelligence) refers to systems designed to perform specific tasks, like image recognition or language translation, often excelling in those narrow areas. In contrast, AGI (Artificial General Intelligence) would be a system capable of understanding, learning, and applying intelligence across a wide range of tasks, much like a human, and could adapt to new situations without specific programming.

@ -0,0 +1,3 @@
# Anomaly Detection
Embeddings transform complex data (like text or behavior) into numerical vectors, capturing relationships between data points. These vectors are stored in a vector database, which allows for efficient similarity searches. Anomalies can be detected by measuring the distance between a data point's vector and its nearest neighbors—if a point is significantly distant, it's likely anomalous. This approach is scalable, adaptable to various data types, and effective for tasks like fraud detection, predictive maintenance, and cybersecurity.
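As a toy TypeScript sketch of the distance idea (the vectors and threshold are made up, and a real system would use a vector database over learned embeddings):

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.hypot(...v);
  return dot / (norm(a) * norm(b));
}

// Flag a point as anomalous if it's too dissimilar from its nearest neighbor.
function isAnomalous(point: number[], neighbors: number[][], threshold = 0.8): boolean {
  const best = Math.max(...neighbors.map((n) => cosineSimilarity(point, n)));
  return best < threshold; // low similarity to everything => likely anomaly
}

console.log(isAnomalous([0.9, 0.1], [[0.88, 0.12], [0.91, 0.09]])); // false
console.log(isAnomalous([-0.7, 0.7], [[0.88, 0.12], [0.91, 0.09]])); // true
```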

@ -0,0 +1,7 @@
# Anthropic's Claude
Claude is a family of large language models developed by Anthropic. Claude 3.5 Sonnet is the latest model (at the time of this writing) in the series, known for its advanced reasoning and multi-modality capabilities.
Visit the following resources to learn more:
- [@official@Claude Website](https://claude.ai/)

@ -0,0 +1,3 @@
# Audio Processing
Using Multimodal AI, audio data can be processed with other types of data, such as text, images, or video, to enhance understanding and analysis. For example, it can synchronize audio with corresponding visual inputs, like lip movements in video, to improve speech recognition or emotion detection. This fusion of modalities enables more accurate transcription, better sentiment analysis, and enriched context understanding in applications such as virtual assistants, multimedia content analysis, and real-time communication systems.

@ -0,0 +1,7 @@
# AWS SageMaker
AWS SageMaker is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker takes care of the underlying infrastructure, allowing developers to focus on building and improving their models.
Visit the following resources to learn more:
- [@official@AWS Website](https://aws.amazon.com/sagemaker/)

@ -0,0 +1,7 @@
# Azure AI
Azure AI is a comprehensive set of AI services and tools provided by Microsoft. It includes a range of capabilities such as natural language processing, computer vision, speech recognition, and more. Azure AI is designed to help developers and organizations build, deploy, and scale AI solutions quickly and easily.
Visit the following resources to learn more:
- [@official@Azure Website](https://azure.microsoft.com/en-us/products/ai-services/)

@ -0,0 +1,7 @@
# Benefits of Pre-trained Models
Large language models are not only difficult to train, but they are also expensive. Pre-trained models are a cost-effective solution for developers and organizations looking to leverage the power of AI without the need to train models from scratch.
Visit the following resources to learn more:
- [@article@Why you should use Pre-trained Models](https://cohere.com/blog/pre-trained-vs-in-house-nlp-models)

@ -0,0 +1,3 @@
# Bias and Fairness
Bias and fairness in AI arise when models produce skewed or unequal outcomes for different groups, often reflecting imbalances in the training data. This can lead to discriminatory effects in critical areas like hiring, lending, and law enforcement. Addressing these concerns involves ensuring diverse and representative data, implementing fairness metrics, and ongoing monitoring to prevent biased outcomes. Techniques like debiasing algorithms and transparency in model development help mitigate bias and promote fairness in AI systems.

@ -0,0 +1,7 @@
# Capabilities / Context Length
OpenAI's capabilities include processing complex tasks like language understanding, code generation, and problem-solving. However, context length limits how much information the model can retain and reference during a session, affecting long conversations or documents. Advances aim to increase this context window for more coherent and detailed outputs over extended interactions.
Visit the following resources to learn more:
- [@official@OpenAI Website](https://platform.openai.com/docs/guides/fine-tuning/token-limits)

@ -0,0 +1,7 @@
# Chat Completions API
The Chat Completions API allows developers to create conversational agents by sending user inputs (prompts) and receiving model-generated responses. It supports multiple-turn dialogues, maintaining context across exchanges to deliver relevant responses. This API is often used for chatbots, customer support, and interactive applications where maintaining conversation flow is essential.
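Here's a minimal multi-turn sketch with the official openai npm package in TypeScript (the model name is a placeholder); note that context is maintained by resending the prior messages on each call:

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini', // placeholder model choice
  messages: [
    { role: 'system', content: 'You are a concise support assistant.' },
    { role: 'user', content: 'How do I reset my password?' },
    { role: 'assistant', content: 'Go to Settings > Security > Reset password.' },
    // Prior turns are resent so the model keeps the conversation context.
    { role: 'user', content: 'And if I no longer have access to my email?' },
  ],
});

console.log(completion.choices[0].message.content);
```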
Visit the following resources to learn more:
- [@official@OpenAI Website](https://platform.openai.com/docs/api-reference/chat/completions)

@ -0,0 +1,7 @@
# Chroma
Chroma is a vector database designed to efficiently store, index, and query high-dimensional embeddings. It’s optimized for AI applications like semantic search, recommendation systems, and anomaly detection by allowing fast retrieval of similar vectors based on distance metrics (e.g., cosine similarity). Chroma enables scalable and real-time processing, making it a popular choice for projects involving embeddings from text, images, or other data types.
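As a minimal sketch with the chromadb JavaScript client in TypeScript, assuming a locally running Chroma server and its default embedding function (the collection name and documents are made up):

```typescript
import { ChromaClient } from 'chromadb';

const client = new ChromaClient(); // assumes a local Chroma server

const collection = await client.getOrCreateCollection({ name: 'articles' });

// Store documents; Chroma embeds them with its default embedding function.
await collection.add({
  ids: ['a1', 'a2'],
  documents: ['Databases for vectors', 'Cooking pasta at home'],
});

// Semantic search: nearest neighbors by embedding distance.
const results = await collection.query({
  queryTexts: ['vector similarity search'],
  nResults: 1,
});

console.log(results.documents); // [['Databases for vectors']]
```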
Visit the following resources to learn more:
- [@official@Chroma Website](https://docs.trychroma.com/)

@ -0,0 +1,3 @@
# Chunking
In Retrieval-Augmented Generation (RAG), **chunking** refers to breaking large documents or data into smaller, manageable pieces (chunks) to improve retrieval and generation efficiency. This process helps the system retrieve relevant information more accurately by indexing these chunks in a vector database. During a query, the model retrieves relevant chunks instead of entire documents, which enhances the precision of the generated responses and allows better handling of long-form content within the context length limits.
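As a bare-bones illustration, here's a fixed-size chunking helper in TypeScript (the sizes are arbitrary assumptions; production systems often split on sentence or token boundaries instead):

```typescript
// Split text into fixed-size chunks with overlap so context isn't
// lost at chunk boundaries. Sizes here are arbitrary.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

const doc = 'A long document... '.repeat(100);
console.log(chunkText(doc).length); // number of indexable chunks
```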

@ -0,0 +1,10 @@
# Code Completion Tools
AI Code Completion Tools use AI models to assist with code generation and editing. They help developers write code more quickly and efficiently by suggesting completions, filling in code snippets, and proposing improvements. These tools can also be used to generate documentation, comments, and other code-related content.
Visit the following resources to learn more:
- [@website@GitHub Copilot](https://copilot.github.com/)
- [@website@Codeium](https://codeium.com/)
- [@website@Supermaven](https://supermaven.com/)
- [@website@TabNine](https://www.tabnine.com/)

@ -0,0 +1,7 @@
# Cohere
Cohere is an AI platform that provides natural language processing (NLP) models and tools, enabling developers to integrate powerful language understanding capabilities into their applications. It offers features like text generation, semantic search, classification, and embeddings. Cohere focuses on scalability and ease of use, making it popular for tasks such as content creation, customer support automation, and building search engines with advanced semantic understanding. It also provides a user-friendly API for custom NLP applications.
Visit the following resources to learn more:
- [@website@Cohere](https://cohere.com/)

@ -0,0 +1,3 @@
# Conducting Adversarial Testing
Adversarial testing involves creating malicious inputs to test the robustness of AI models. This includes testing for prompt injection, evasion, and other adversarial attacks.

@ -0,0 +1,3 @@
# Constraining Outputs and Inputs
Constraining outputs and inputs is important for controlling the behavior of AI models. This includes techniques like output filtering, input validation, and rate limiting.

@ -0,0 +1,3 @@
# Cut-off Dates / Knowledge
OpenAI models have a knowledge cutoff date, meaning they only have access to information available up until a specific time. For example, a model's knowledge may only be up to date until September 2023. As a result, it may not be aware of recent developments, events, or newly released technology. Additionally, these models don’t have real-time internet access, so they can't retrieve or update information beyond their training data. This can limit their ability to provide the latest details or react to rapidly changing topics.

@ -0,0 +1,3 @@
# DALL-E API
The DALL-E API allows developers to integrate OpenAI's image generation model into their applications. Using text-based prompts, the API generates unique images that match the descriptions provided by users. This makes it useful for tasks like creative design, marketing, product prototyping, and content creation. The API is highly customizable, enabling developers to adjust parameters such as image size and style. DALL-E excels at creating visually rich content from textual descriptions, expanding the possibilities for AI-driven creative workflows.
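As a hedged sketch with the official openai npm package in TypeScript (the model, prompt, and size are placeholder choices):

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const image = await client.images.generate({
  model: 'dall-e-3', // placeholder model choice
  prompt: 'A watercolor painting of a lighthouse at dawn',
  n: 1,
  size: '1024x1024',
});

console.log(image.data[0].url); // URL of the generated image
```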

@ -0,0 +1,3 @@
# Data Classification
Embeddings are used in data classification by converting data (like text or images) into numerical vectors that capture underlying patterns and relationships. These vector representations make it easier for machine learning models to distinguish between different classes based on the similarity or distance between vectors in high-dimensional space. By training a classifier on these embeddings, tasks like sentiment analysis, document categorization, and image classification can be performed more accurately and efficiently. Embeddings simplify complex data and enhance classification by highlighting key features relevant to each class.

@ -0,0 +1,3 @@
# Development Tools
A lot of developer-related tools have popped up since the AI revolution. AI is being used in code editors, in the terminal, in CI/CD pipelines, and more.

@ -0,0 +1,3 @@
# Embedding
Embedding refers to the conversion or mapping of discrete objects such as words, phrases, or even entire sentences into vectors of real numbers. It's an essential part of data preprocessing where high-dimensional data is transformed into a lower-dimensional equivalent. This dimensional reduction helps to preserve the semantic relationships between objects. In AI engineering, embedding techniques are often used in language-oriented tasks like sentiment analysis, text classification, and Natural Language Processing (NLP) to provide an understanding of the vast linguistic inputs AI models receive.
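As a hedged illustration, here's what generating an embedding looks like with the official openai npm package in TypeScript (the model name is a placeholder; other providers work the same way conceptually):

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.embeddings.create({
  model: 'text-embedding-3-small', // placeholder model choice
  input: 'The quick brown fox jumps over the lazy dog',
});

// A dense vector capturing the semantics of the input text.
const vector = response.data[0].embedding;
console.log(vector.length); // dimensionality of the embedding
```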

@ -0,0 +1,3 @@
# Embedding
Embedding refers to the conversion or mapping of discrete objects such as words, phrases, or even entire sentences into vectors of real numbers. It's an essential part of data preprocessing where high-dimensional data is transformed into a lower-dimensional equivalent. This dimensional reduction helps to preserve the semantic relationships between objects. In AI engineering, embedding techniques are often used in language-oriented tasks like sentiment analysis, text classification, and Natural Language Processing (NLP) to provide an understanding of the vast linguistic inputs AI models receive.

@ -0,0 +1,3 @@
# FAISS
FAISS stands for Facebook AI Similarity Search; it is a library for efficient similarity search and clustering of dense vectors, developed by Facebook's AI team. It allows users to search through billions of feature vectors swiftly and efficiently. As an AI engineer, learning FAISS is beneficial because these vectors represent objects that are typically used in machine learning or AI applications. For instance, in an image recognition task, a dense vector might represent an image's features, and FAISS allows a quick search for similar images in a large database.

@ -0,0 +1,7 @@
# Fine-tuning
OpenAI API allows you to fine-tune and adapt pre-trained models to specific tasks or datasets, improving performance on domain-specific problems. By providing custom training data, the model learns from examples relevant to the intended application, such as specialized customer support, unique content generation, or industry-specific tasks.
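Here's a hedged TypeScript sketch of starting a fine-tuning job with the official openai npm package (the file name and base model are placeholders; training data is a JSONL file you upload first):

```typescript
import fs from 'node:fs';
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Upload training examples (a JSONL file of chat-style messages).
const file = await client.files.create({
  file: fs.createReadStream('training-data.jsonl'), // placeholder file
  purpose: 'fine-tune',
});

// Start the fine-tuning job against a base model (placeholder name).
const job = await client.fineTuning.jobs.create({
  training_file: file.id,
  model: 'gpt-4o-mini-2024-07-18',
});

console.log(job.id, job.status);
```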
Visit the following resources to learn more:
- [@official@OpenAI Docs](https://platform.openai.com/docs/guides/fine-tuning)

@ -0,0 +1,3 @@
# Generation
In this step of implementing RAG, we use the retrieved chunks to generate a response to the user's query using an LLM.

@ -0,0 +1,3 @@
# Google's Gemini
Gemini, formerly known as Bard, is a generative artificial intelligence chatbot developed by Google. Based on the large language model of the same name, it was launched in 2023, having been developed as a direct response to the rise of OpenAI's ChatGPT.

@ -0,0 +1,7 @@
# Hugging Face Hub
Hugging Face Hub is a platform where you can share, access, and collaborate on a wide array of machine learning models, primarily focused on Natural Language Processing (NLP) tasks. It is a central repository that facilitates storage and sharing of models, reducing the time and overhead usually associated with these tasks. For an AI Engineer, leveraging the Hugging Face Hub can accelerate model development and deployment, effectively allowing them to work on structuring efficient AI solutions instead of worrying about model storage and accessibility issues.
Visit the following resources to learn more:
- [@official@Hugging Face](https://huggingface.co/)

@ -0,0 +1,7 @@
# Hugging Face Models
Hugging Face has a wide range of pre-trained models that can be used for a variety of tasks, including language understanding and generation, translation, chatbots, and more. Anyone can create an account and use their models, and the models are organized by task, provider, and other criteria.
Visit the following resources to learn more:
- [@official@Hugging Face](https://huggingface.co/models)

@ -0,0 +1,7 @@
# Hugging Face Models
Hugging Face has a wide range of pre-trained models that can be used for a variety of tasks, including language understanding and generation, translation, chatbots, and more. Anyone can create an account and use their models, and the models are organized by task, provider, and other criteria.
Visit the following resources to learn more:
- [@official@Hugging Face](https://huggingface.co/models)

@ -0,0 +1,7 @@
# Hugging Face Tasks
Hugging Face has a Tasks section that lists common machine learning tasks along with the popular models for each task.
Visit the following resources to learn more:
- [@official@Hugging Face](https://huggingface.co/tasks)

@ -0,0 +1,7 @@
# Hugging Face
Hugging Face is the platform where the machine learning community collaborates on models, datasets, and applications.
Visit the following resources to learn more:
- [@official@Hugging Face](https://huggingface.co/)

@ -0,0 +1,3 @@
# Image Generation
Image Generation often refers to the process of creating new images from an existing dataset or completely from scratch. For an AI Engineer, understanding image generation is crucial as it is one of the key aspects of machine learning and deep learning related to computer vision. It often involves techniques like convolutional neural networks (CNN), generative adversarial networks (GANs), and autoencoders. These technologies are used to generate artificial images that closely resemble original input, and can be applied in various fields such as healthcare, entertainment, security and more.

@ -0,0 +1,3 @@
# Image Understanding
Image Understanding involves extracting meaningful information from images, such as photos or videos. This process includes tasks like image recognition, where an AI system is trained to recognize certain objects within an image, and image segmentation, where an image is divided into multiple regions according to some criteria. For an AI engineer, mastering techniques in Image Understanding is crucial because it forms the basis for more complex tasks such as object detection, facial recognition, or even whole scene understanding, all of which play significant roles in various AI applications. As AI technologies continue evolving, the ability to analyze and interpret visual data becomes increasingly important in fields ranging from healthcare to autonomous vehicles.

@ -0,0 +1,3 @@
# Impact on Product Development
Incorporating Artificial Intelligence (AI) can transform the process of creating, testing, and delivering products. This could range from utilizing AI for enhanced data analysis to inform product design, use of AI-powered automation in production processes, or even AI as a core feature of the product itself.

@@ -0,0 +1,3 @@
# Indexing Embeddings
This step involves converting data (such as text, images, or other content) into numerical vectors (embeddings) using a pre-trained model. These embeddings capture the semantic relationships between data points. Once generated, the embeddings are stored in a vector database, which organizes them in a way that enables efficient retrieval based on similarity. This indexed structure allows fast querying and comparison of vectors, facilitating tasks like semantic search, recommendation systems, and anomaly detection.
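To make the pipeline concrete, here is a minimal sketch using the `sentence-transformers` library with a plain NumPy matrix standing in for the index; a real vector database adds persistence and approximate-nearest-neighbor search on top (the model name is one common choice, not prescribed here):

```python
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Cats are small domesticated carnivores.",
    "The stock market fell sharply today.",
    "Dogs are loyal household pets.",
]

# 1. Convert text into embedding vectors with a pre-trained model
model = SentenceTransformer("all-MiniLM-L6-v2")
index = model.encode(docs, normalize_embeddings=True)  # shape: (3, 384)

# 2. Embed the query the same way
query = model.encode(["animals people keep at home"], normalize_embeddings=True)

# 3. On normalized vectors, cosine similarity reduces to a dot product
scores = index @ query.T
print(docs[int(np.argmax(scores))])  # most similar document
```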

@@ -0,0 +1,8 @@
# Inference SDK
Inference is the process of using a trained model to make predictions on new data. Because this process can be compute-intensive, running it on a dedicated server can be an attractive option. The `huggingface_hub` library provides an easy way to call a service that runs inference for hosted models, with several services you can connect to.
Visit the following resources to learn more:
- [@official@Hugging Face Inference Client](https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client)
- [@official@Hugging Face Inference API](https://huggingface.co/docs/api-inference/en/index)
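A minimal sketch of calling a hosted model through `InferenceClient`; the model id is an assumption, and an API token may be required for higher rate limits:

```python
# pip install huggingface_hub
from huggingface_hub import InferenceClient

client = InferenceClient()  # optionally pass token=... for authenticated calls

# Run text generation against a hosted model
output = client.text_generation(
    "Explain inference in one sentence:",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=60,
)
print(output)
```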

@@ -0,0 +1,3 @@
# Inference
Inference involves using a trained machine learning model to make predictions or decisions on new data. Often used in natural language processing, image recognition, and similar tasks, inference is the stage where an AI system turns what it learned during training into useful outputs. Working with inference involves understanding different models, how they behave, and how to apply them to new data to achieve reliable results.

@@ -0,0 +1,3 @@
# Introduction
An AI Engineer uses pre-trained models and existing AI tools to improve user experiences. They focus on applying AI in practical ways, without building models from scratch. This is different from AI Researchers and ML Engineers, who focus more on creating new models or developing AI theory.

@@ -0,0 +1,3 @@
# Know your Customers / Usecases
Understanding your target customers and use-cases helps you make informed decisions during development and ensures that the final AI solution meets the actual needs of its users. You can use this knowledge to choose the right tools, frameworks, and technologies, design the right architecture, and even prevent abuse.

@@ -0,0 +1,3 @@
# LanceDB
LanceDB is an open-source vector database built on the Lance columnar data format and optimized for AI and machine learning workloads. It can run embedded inside your application without a separate server, stores embeddings alongside the original data, and supports fast vector search and filtering over large datasets. For an AI engineer, LanceDB is worth learning because it integrates with common machine learning frameworks and makes it straightforward to store, query, and manage the embeddings that power semantic search and retrieval-augmented generation.
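A minimal usage sketch with the Python client; the table name, vectors, and metadata below are illustrative:

```python
# pip install lancedb
import lancedb

# Connect to (or create) a local database directory
db = lancedb.connect("./my_lancedb")

# Create a table of vectors with attached metadata
table = db.create_table(
    "docs",
    data=[
        {"vector": [0.1, 0.2, 0.3], "text": "first document"},
        {"vector": [0.9, 0.8, 0.7], "text": "second document"},
    ],
)

# Nearest-neighbor search over the stored vectors
results = table.search([0.1, 0.25, 0.3]).limit(1).to_list()
print(results[0]["text"])
```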

@@ -0,0 +1,3 @@
# LangChain for Multimodal Apps
LangChain is a software framework that facilitates the integration of large language models into applications. Because many modern models accept more than text, LangChain can also be used to build multimodal apps that pass images or audio to a model alongside text, in addition to its usual use-cases such as document analysis and summarization, chatbots, and code analysis.

@@ -0,0 +1,3 @@
# Langchain
LangChain is a software framework that facilitates the integration of large language models into applications. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.
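A minimal sketch of composing a prompt and a model with LangChain's pipe (LCEL) syntax; the package names and model id reflect common conventions at the time of writing and assume an `OPENAI_API_KEY` in the environment:

```python
# pip install langchain-openai langchain-core
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Compose a prompt template and a model into a small chain
prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")

chain = prompt | llm
result = chain.invoke({"text": "LangChain composes prompts, models, and parsers."})
print(result.content)
```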

@@ -0,0 +1,3 @@
# Limitations and Considerations under Pre-trained Models
Pre-trained models are AI models that have already been trained on a large benchmark dataset and provide a starting point for AI developers, saving training time and computational resources. However, they come with limitations and considerations. They can fail to generalize to tasks outside their original context due to issues like dataset bias or overfitting, and using them without understanding their internal workings can lead to problematic consequences. Finally, transfer learning, the mechanism by which pre-trained models are adapted to new tasks, is not always the optimal solution for every AI project. An AI Engineer must keep these factors in mind when working with pre-trained models.

@@ -0,0 +1,7 @@
# Llama Index
LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.
Visit the following resources to learn more:
- [@official@LlamaIndex Official Website](https://llamaindex.ai/)
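A minimal retrieval sketch with `llama-index`; the `data` folder path is a placeholder, and the default setup assumes an `OPENAI_API_KEY` for embeddings and generation:

```python
# pip install llama-index
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder (path is a placeholder)
documents = SimpleDirectoryReader("data").load_data()

# Build a vector index over the documents and query it
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

response = query_engine.query("What do these documents say about pricing?")
print(response)
```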

@@ -0,0 +1,7 @@
# Llama Index
LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.
Visit the following resources to learn more:
- [@official@LlamaIndex Official Website](https://llamaindex.ai/)

@@ -0,0 +1,3 @@
# LLMs
LLMs, or Large Language Models, are AI models trained on large amounts of text data to understand and generate human language. They are the core of applications like ChatGPT and are used for a variety of tasks, including language translation, question answering, and more.

@@ -0,0 +1,3 @@
# Manual Implementation
You can build AI agents manually by coding the logic from scratch, without any frameworks or libraries. For example, you can call the OpenAI API directly and write the looping logic yourself to keep the agent running until it has the final answer, as sketched below.
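A minimal sketch of such a hand-rolled agent loop using OpenAI tool calling; the `get_weather` tool is a hypothetical stand-in for a real API call:

```python
# pip install openai
import json
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def get_weather(city: str) -> str:
    """A stand-in tool; a real agent would call an actual weather API here."""
    return json.dumps({"city": city, "forecast": "sunny, 22C"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# The agent loop: keep calling the model until it stops requesting tools
while True:
    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        print(msg.content)  # final answer
        break
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": get_weather(**args),
        })
```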

@@ -0,0 +1,9 @@
# Maximum Tokens
The maximum number of tokens in the OpenAI API depends on the model you are using.
For example, the `gpt-4o` model has a context window of 128,000 tokens, which bounds the prompt and the completion combined.
Visit the following resources to learn more:
- [@official@OpenAI API Documentation](https://platform.openai.com/docs/api-reference/completions/create)
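You can also cap the generated portion of each request; a minimal sketch, assuming an `OPENAI_API_KEY` in the environment:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()

# max_tokens caps the length of the generated completion; the model's
# context window still limits prompt + completion combined
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain tokens in two sentences."}],
    max_tokens=100,  # generation stops once 100 tokens are produced
)
print(response.choices[0].message.content)
```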

@@ -0,0 +1,3 @@
# Mistral AI
Mistral AI is a French startup founded in 2023, specializing in open-source large language models (LLMs). Created by former Meta and Google DeepMind researchers, it focuses on efficient, customizable AI solutions that promote transparency. Its flagship models, Mistral Large and Mixtral, offer state-of-the-art performance with lower resource requirements, gaining significant attention in the AI field.

@@ -0,0 +1,7 @@
# Hugging Face Models
Hugging Face has a wide range of pre-trained models that can be used for a variety of tasks, including language understanding and generation, translation, chatbots, and more. Anyone can create an account and use their models, and the models are organized by task, provider, and other criteria.
Visit the following resources to learn more:
- [@official@Hugging Face](https://huggingface.co/models)

@@ -0,0 +1,7 @@
# MongoDB Atlas
MongoDB Atlas is a fully managed cloud-based NoSQL database service by MongoDB. It simplifies database deployment and management across platforms like AWS, Azure, and Google Cloud. Using a flexible document model, Atlas automates tasks such as scaling, backups, and security, allowing developers to focus on building applications. With features like real-time analytics and global clusters, it offers a powerful solution for scalable and resilient app development.
Visit the following resources to learn more:
- [@official@MongoDB Atlas Vector Search](https://www.mongodb.com/products/platform/atlas-vector-search)
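A rough sketch of querying Atlas Vector Search with the `$vectorSearch` aggregation stage via `pymongo`; the connection string, index name, field names, and query vector are all placeholders:

```python
# pip install pymongo
from pymongo import MongoClient

# Placeholder connection string; use your own Atlas cluster URI
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["mydb"]["documents"]

# In practice this would be the embedding of the user's query, with the
# same dimensionality as the indexed "embedding" field
query_vector = [0.12, 0.83, 0.41]

# $vectorSearch runs an approximate nearest-neighbor search against a
# pre-created Atlas Vector Search index named "vector_index"
results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "vector_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    }
])
for doc in results:
    print(doc.get("text"))
```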

@@ -0,0 +1,3 @@
# Multimodal AI Usecases
Multimodal AI integrates various data types for diverse applications. In human-computer interaction, it enhances interfaces using speech, gestures, and facial expressions. In healthcare, it combines medical scans and records for accurate diagnoses. For autonomous vehicles, it processes data from sensors for real-time navigation. Additionally, it generates images from text and summarizes videos in content creation, while also analyzing satellite and sensor data for climate insights.

@@ -0,0 +1,3 @@
# Multimodal AI
Multimodal AI refers to artificial intelligence systems capable of processing and integrating multiple types of data inputs simultaneously, such as text, images, audio, and video. Unlike traditional AI models that focus on a single data type, multimodal AI combines various inputs to achieve a more comprehensive understanding and generate more robust outputs. This approach mimics human cognition, which naturally integrates information from multiple senses to form a complete perception of the world. By leveraging diverse data sources, multimodal AI can perform complex tasks like image captioning, visual question answering, and cross-modal content generation.
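As an illustration, a minimal sketch of sending text plus an image to a multimodal model through the OpenAI chat API; the image URL is a placeholder:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()

# One request combining a text part and an image part
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```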

@@ -0,0 +1,7 @@
# Ollama Models
Ollama supports a wide range of language models, including Llama, Phi, Mistral, Gemma, and more.
Visit the following resources to learn more:
- [@official@Ollama Models](https://ollama.com/library)

@@ -0,0 +1,7 @@
# Ollama SDK
The Ollama SDK can be used to develop applications against models running locally; official client libraries are available for Python and JavaScript.
Visit the following resources to learn more:
- [@official@Ollama SDK](https://ollama.com)
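A minimal sketch using the official Python library; it assumes the Ollama server is running locally and that the named model has already been pulled (e.g. with `ollama pull llama3`):

```python
# pip install ollama
import ollama

# Chat with a locally running model (model name is an assumption)
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```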

@@ -0,0 +1,7 @@
# Ollama
Ollama is an open-source tool for running large language models (LLMs) locally on personal computers. It supports various models like Llama 2, Mistral, and Code Llama, bundling weights, configurations, and data into a single package. Ollama offers a user-friendly interface, API access, and integration capabilities, allowing users to leverage AI capabilities while maintaining data privacy and control. It's designed for easy installation and use on macOS and Linux, with Windows support in development.
Visit the following resources to learn more:
- [@official@Ollama](https://ollama.com)

@@ -0,0 +1,3 @@
# OpenAI Assistants API
The OpenAI Assistants API is a tool provided by OpenAI that allows developers to integrate the same kind of AI used in ChatGPT into their own applications, products, or services. It supports dynamic, interactive, and context-aware conversations, which makes it useful for building AI assistants in a variety of applications. In the AI Engineer Roadmap, mastering APIs like this one is a crucial skill, as it lets engineers harness the power and versatility of pre-trained models while offloading the intricacies of model training and maintenance, leaving more time for product development and innovation.
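A minimal sketch of the (beta) Assistants API flow at the time of writing: create an assistant, a thread, and a run, then read back the reply; it assumes an `OPENAI_API_KEY` in the environment:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()

# Create an assistant and a conversation thread
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a helpful math tutor.",
    model="gpt-4o",
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is 12 * 11?"
)

# Run the assistant on the thread and wait for completion
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

# Messages are returned newest-first; print the assistant's reply
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```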
