Clean Backend Roadmap Links / Content (#7076)

* 95 topics complete

* 32 topics

* 8 topics

* Update src/data/roadmaps/backend/content/building-for-scale@SHmbcMRsc3SygEDksJQBD.md

* Update src/data/roadmaps/backend/content/architectural-patterns@tHiUpG9LN35E5RaHddMv5.md

---------

Co-authored-by: Kamran Ahmed <kamranahmed.se@gmail.com>
pull/7095/head
dsh 2 months ago committed by GitHub
parent bcc85dcebe
commit ae58fa2a2a
1. src/data/roadmaps/backend/content/acid@qSAdfaGUfn8mtmDjHJi3z.md (2 lines changed)
2. src/data/roadmaps/backend/content/apache@jjjonHTHHo-NiAf6p9xPv.md (5 lines changed)
3. src/data/roadmaps/backend/content/authentication@PY9G7KQy8bF6eIdr1ydHf.md (13 lines changed)
4. src/data/roadmaps/backend/content/aws-neptune@5xy66yQrz1P1w7n6PcAFq.md (12 lines changed)
5. src/data/roadmaps/backend/content/backpressure@JansCqGDyXecQkD1K7E7e.md (12 lines changed)
6. src/data/roadmaps/backend/content/base@QZwTLOvjUTaSb_9deuxsR.md (6 lines changed)
7. src/data/roadmaps/backend/content/basic-authentication@yRiJgjjv2s1uV9vgo3n8m.md (5 lines changed)
8. src/data/roadmaps/backend/content/bcrypt@dlG1bVkDmjI3PEGpkm1xH.md (4 lines changed)
9. src/data/roadmaps/backend/content/bitbucket@Ry_5Y-BK7HrkIc6X0JG1m.md (7 lines changed)
10. src/data/roadmaps/backend/content/browsers-and-how-they-work@P82WFaTPgQEPNp5IIuZ1Y.md (6 lines changed)
11. src/data/roadmaps/backend/content/building-for-scale@SHmbcMRsc3SygEDksJQBD.md (15 lines changed)
12. src/data/roadmaps/backend/content/c@rImbMHLLfJwjf3l25vBkc.md (7 lines changed)
13. src/data/roadmaps/backend/content/caching@ELj8af7Mi38kUbaPJfCUR.md (4 lines changed)
14. src/data/roadmaps/backend/content/caddy@Op-PSPNoyj6Ss9CS09AXh.md (7 lines changed)
15. src/data/roadmaps/backend/content/cap-theorem@LAdKDJ4LcMaDWqslMvE8X.md (3 lines changed)
16. src/data/roadmaps/backend/content/cassandra@gT6-z2vhdIQDzmR2K1g1U.md (9 lines changed)
17. src/data/roadmaps/backend/content/cdn@Nq2BO53bHJdFT1rGZPjYx.md (6 lines changed)
18. src/data/roadmaps/backend/content/ci--cd@mGfD7HfuP184lFkXZzGjG.md (3 lines changed)
19. src/data/roadmaps/backend/content/circuit-breaker@spkiQTPvXY4qrhhVUkoPV.md (9 lines changed)
20. src/data/roadmaps/backend/content/client-side@KWTbEVX_WxS8jmSaAX3Fe.md (3 lines changed)
21. src/data/roadmaps/backend/content/containerization-vs-virtualization@SGVwJme-jT_pbOTvems0v.md (4 lines changed)
22. src/data/roadmaps/backend/content/cookie-based-auth@ffzsh8_5yRq85trFt9Xhk.md (3 lines changed)
23. src/data/roadmaps/backend/content/cors@LU6WUbkWKbPM1rb2_gEqa.md (4 lines changed)
24. src/data/roadmaps/backend/content/couchdb@qOlNzZ7U8LhIGukb67n7U.md (6 lines changed)
25. src/data/roadmaps/backend/content/cqrs@u8IRw5PuXGUcmxA0YYXgx.md (3 lines changed)
26. src/data/roadmaps/backend/content/csp@HgQBde1zLUFtlwB66PR6_.md (3 lines changed)
27. src/data/roadmaps/backend/content/data-replication@wrl7HHWXOaxoKVlNZxZ6d.md (4 lines changed)
28. src/data/roadmaps/backend/content/database-indexes@y-xkHFE9YzhNIX3EiWspL.md (4 lines changed)
29. src/data/roadmaps/backend/content/design-and-development-principles@9Fpoor-Os_9lvrwu5Zjh-.md (49 lines changed)
30. src/data/roadmaps/backend/content/dns-and-how-it-works@hkxw9jPGYphmjhTjw8766.md (5 lines changed)
31. src/data/roadmaps/backend/content/domain-driven-design@BvHi5obg0L1JDZFKBzx9t.md (12 lines changed)
32. src/data/roadmaps/backend/content/dynamodb@dwfEHInbX2eFiafM-nRMX.md (10 lines changed)
33. src/data/roadmaps/backend/content/elasticsearch@NulaE1isWqn-feYHg4YQT.md (1 line changed)
34. src/data/roadmaps/backend/content/event-sourcing@wqE-mkxvehOzOv8UyE39p.md (7 lines changed)
35. src/data/roadmaps/backend/content/failure-modes@GwApfL4Yx-b5Y8dB9Vy__.md (12 lines changed)
36. src/data/roadmaps/backend/content/firebase@RyJFLLGieJ8Xjt-DlIayM.md (11 lines changed)
37. src/data/roadmaps/backend/content/functional-testing@NAGisfq2CgeK3SsuRjnMw.md (5 lines changed)
38. src/data/roadmaps/backend/content/git@_I1E__wCIVrhjMk6IMieE.md (5 lines changed)
39. src/data/roadmaps/backend/content/github@ptD8EVqwFUYr4W5A_tABY.md (9 lines changed)
40. src/data/roadmaps/backend/content/gitlab@Wcp-VDdFHipwa7hNAp1z_.md (3 lines changed)
41. src/data/roadmaps/backend/content/go@BdXbcz4-ar3XOX0wIKzBp.md (7 lines changed)
42. src/data/roadmaps/backend/content/gof-design-patterns@6XIWO0MoE-ySl4qh_ihXa.md (8 lines changed)
43. src/data/roadmaps/backend/content/graceful-degradation@G9AI_i3MkUE1BsO3_-PH7.md (6 lines changed)
44. src/data/roadmaps/backend/content/graphql@zp3bq38tMnutT2N0tktOW.md (9 lines changed)
45. src/data/roadmaps/backend/content/grpc@J-TOE2lT4At1mSdNoxPS1.md (5 lines changed)
46. src/data/roadmaps/backend/content/hateoas@dLY0KafPstajCcSbslC4M.md (5 lines changed)
47. src/data/roadmaps/backend/content/how-does-the-internet-work@yCnn-NfSxIybUQ2iTuUGq.md (3 lines changed)
48. src/data/roadmaps/backend/content/https@x-WBJjBd8u93ym5gtxGsR.md (10 lines changed)
49. src/data/roadmaps/backend/content/influx-db@XbM4TDImSH-56NsITjyHK.md (9 lines changed)
50. src/data/roadmaps/backend/content/instrumentation@4X-sbqpP0NDhM99bKdqIa.md (7 lines changed)
51. src/data/roadmaps/backend/content/integration-testing@381Kw1IMRv7CJp-Uf--qd.md (4 lines changed)
52. src/data/roadmaps/backend/content/internet@SiYUdtYMDImRPmV2_XPkH.md (6 lines changed)
53. src/data/roadmaps/backend/content/java@ANeSwxJDJyQ-49pO2-CCI.md (4 lines changed)
54. src/data/roadmaps/backend/content/javascript@8-lO-v6jCYYoklEJXULxN.md (4 lines changed)
55. src/data/roadmaps/backend/content/json-apis@sNceS4MpSIjRkWhNDmrFg.md (4 lines changed)
56. src/data/roadmaps/backend/content/jwt@UxS_mzVUjLigEwKrXnEeB.md (4 lines changed)
57. src/data/roadmaps/backend/content/kafka@VoYSis1F1ZfTxMlQlXQKB.md (5 lines changed)
58. src/data/roadmaps/backend/content/learn-about-apis@EwvLPSI6AlZ4TnNIJTZA4.md (5 lines changed)
59. src/data/roadmaps/backend/content/loadshifting@HoQdX7a4SnkFRU4RPQ-D5.md (10 lines changed)
60. src/data/roadmaps/backend/content/long-polling@osvajAJlwGI3XnX0fE-kA.md (6 lines changed)
61. src/data/roadmaps/backend/content/lxc@31ZlpfIPr9-5vYZqvjUeL.md (4 lines changed)
62. src/data/roadmaps/backend/content/mariadb@tD3i-8gBpMKCHB-ITyDiU.md (1 line changed)
63. src/data/roadmaps/backend/content/md5@jWwA6yX4Zjx-r_KpDaD3c.md (3 lines changed)
64. src/data/roadmaps/backend/content/memcached@xPvVwGQw28uMeLYIWn8yn.md (8 lines changed)
65. src/data/roadmaps/backend/content/message-brokers@nJ5FpFgGCRaALcWmAKBKT.md (4 lines changed)
66. src/data/roadmaps/backend/content/microservices@K55h3aqOGe6-hgVhiFisT.md (6 lines changed)
67. src/data/roadmaps/backend/content/migration-strategies@f7iWBkC0X7yyCoP_YubVd.md (12 lines changed)
68. src/data/roadmaps/backend/content/mongodb@28U6q_X-NTYf7OSKHjoWH.md (10 lines changed)
69. src/data/roadmaps/backend/content/monitoring@QvMEEsXh0-rzn5hDGcmEv.md (17 lines changed)
70. src/data/roadmaps/backend/content/monolithic-apps@Ke522R-4k6TDeiDRyZbbU.md (5 lines changed)
71. src/data/roadmaps/backend/content/ms-iis@0NJDgfe6eMa7qPUOI6Eya.md (3 lines changed)
72. src/data/roadmaps/backend/content/ms-sql@dEsTje8kfHwWjCI3zcgLC.md (3 lines changed)
73. src/data/roadmaps/backend/content/mysql@VPxOdjJtKAqmM5V0LR5OC.md (4 lines changed)
74. src/data/roadmaps/backend/content/n1-problem@bQnOAu863hsHdyNMNyJop.md (3 lines changed)
75. src/data/roadmaps/backend/content/neo4j@BTNJfWemFKEeNeTyENXui.md (8 lines changed)
76. src/data/roadmaps/backend/content/nginx@z5AdThp9ByulmM9uekgm-.md (3 lines changed)
77. src/data/roadmaps/backend/content/normalization@Ge2SnKBrQQrU-oGLz6TmT.md (7 lines changed)
78. src/data/roadmaps/backend/content/nosql-databases@F8frGuv1dunOdcVJ_IiGs.md (10 lines changed)
79. src/data/roadmaps/backend/content/oauth@vp-muizdICcmU0gN8zmkS.md (9 lines changed)
80. src/data/roadmaps/backend/content/observability@Z01E67D6KjrShvQCHjGR7.md (8 lines changed)
81. src/data/roadmaps/backend/content/open-api-specs@9cD5ag1L0GqHx4_zxc5JX.md (10 lines changed)
82. src/data/roadmaps/backend/content/openid@z3EJBpgGm0_Uj3ymhypbX.md (6 lines changed)
83. src/data/roadmaps/backend/content/oracle@h1SAjQltHtztSt8QmRgab.md (2 lines changed)
84. src/data/roadmaps/backend/content/orms@Z7jp_Juj5PffSxV7UZcBb.md (5 lines changed)
85. src/data/roadmaps/backend/content/owasp-risks@AAgciyxuDvS2B_c6FRMvT.md (2 lines changed)
86. src/data/roadmaps/backend/content/php@l9Wrq_Ad9-Ju4NIB0m5Ha.md (6 lines changed)
87. src/data/roadmaps/backend/content/pick-a-language@2f0ZO6GJElfZ2Eis28Hzg.md (8 lines changed)
88. src/data/roadmaps/backend/content/postgresql@FihTrMO56kj9jT8O_pO2T.md (6 lines changed)
89. src/data/roadmaps/backend/content/profiling-perfor@SYXJhanu0lFmGj2m2XXhS.md (11 lines changed)
90. src/data/roadmaps/backend/content/python@J_sVHsD72Yzyqb9KCIvAY.md (8 lines changed)
91. src/data/roadmaps/backend/content/rabbitmq@GPFRMcY1DEtRgnaZwJ3vW.md (2 lines changed)
92. src/data/roadmaps/backend/content/real-time-data@5XGvep2qoti31bsyqNzrU.md (10 lines changed)
93. src/data/roadmaps/backend/content/redis@M0iaSSdVPWaCUpyTG50Vf.md (14 lines changed)
94. src/data/roadmaps/backend/content/redis@g8GjkJAhvnSxXTZks0V1g.md (3 lines changed)
95. src/data/roadmaps/backend/content/relational-databases@r45b461NxLN6wBODJ5CNP.md (2 lines changed)
96. src/data/roadmaps/backend/content/repo-hosting-services@NvUcSDWBhzJZ31nzT4UlE.md (8 lines changed)
97. src/data/roadmaps/backend/content/rest@lfNFDZZNdrB0lbEaMtU71.md (4 lines changed)
98. src/data/roadmaps/backend/content/rethinkdb@5T0ljwlHL0545ICCeehcQ.md (7 lines changed)
99. src/data/roadmaps/backend/content/ruby@SlH0Rl07yURDko2nDPfFy.md (2 lines changed)
100. src/data/roadmaps/backend/content/rust@CWwh2abwqx4hAxpAGvhIx.md (6 lines changed)
Some files were not shown because too many files have changed in this diff.

@@ -1,6 +1,6 @@
 # ACID
 
-ACID are the four properties of relational database systems that help in making sure that we are able to perform the transactions in a reliable manner. It's an acronym which refers to the presence of four properties: atomicity, consistency, isolation and durability
+ACID is an acronym representing four key properties that guarantee reliable processing of database transactions. It stands for Atomicity, Consistency, Isolation, and Durability. Atomicity ensures that a transaction is treated as a single, indivisible unit that either completes entirely or fails completely. Consistency maintains the database in a valid state before and after the transaction. Isolation ensures that concurrent transactions do not interfere with each other, appearing to execute sequentially. Durability guarantees that once a transaction is committed, it remains so, even in the event of system failures. These properties are crucial in maintaining data integrity and reliability in database systems, particularly in scenarios involving multiple, simultaneous transactions or where data accuracy is critical, such as in financial systems or e-commerce platforms.
 
 Visit the following resources to learn more:
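The atomicity described in the new text can be sketched with the standard library's `sqlite3`, whose connection context manager commits on success and rolls back on error. Table, names, and amounts are illustrative only:

```python
import sqlite3

# Minimal atomicity sketch: a transfer either fully commits or fully rolls back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")  # aborts the whole transaction
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

assert transfer(conn, "alice", "bob", 30) is True
assert transfer(conn, "alice", "bob", 1000) is False  # debit rolled back atomically
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

After the failed transfer the partial debit is undone, so the balances reflect only the committed transaction.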

@@ -1,9 +1,10 @@
 # Apache
 
-Apache is a free, open-source HTTP server, available on many operating systems, but mainly used on Linux distributions. It is one of the most popular options for web developers, as it accounts for over 30% of all the websites, as estimated by W3Techs.
+Apache, officially known as the Apache HTTP Server, is a free, open-source web server software developed and maintained by the Apache Software Foundation. It's one of the most popular web servers worldwide, known for its robustness, flexibility, and extensive feature set. Apache supports a wide range of operating systems and can handle various content types and programming languages through its modular architecture. It offers features like virtual hosting, SSL/TLS support, and URL rewriting. Apache's configuration files allow for detailed customization of server behavior. While it has faced competition from newer alternatives like Nginx, especially in high-concurrency scenarios, Apache remains widely used due to its stability, comprehensive documentation, and large community support. It's particularly favored for its ability to integrate with other open-source technologies in the LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack.
 
 Visit the following resources to learn more:
 
-- [@article@Apache Server Website](https://httpd.apache.org/)
+- [@official@Apache Server Website](https://httpd.apache.org/)
 - [@video@What is Apache Web Server?](https://www.youtube.com/watch?v=kaaenHXO4t4)
 - [@video@Apache vs NGINX](https://www.youtube.com/watch?v=9nyiY-psbMs)
+- [@feed@Explore top posts about Apache](https://app.daily.dev/tags/apache?ref=roadmapsh)

@@ -1,20 +1,9 @@
 # Authentication
 
-The API authentication process validates the identity of the client attempting to make a connection by using an authentication protocol. The protocol sends the credentials from the remote client requesting the connection to the remote access server in either plain text or encrypted form. The server then knows whether it can grant access to that remote client or not.
-Here is the list of common ways of authentication:
-
-- JWT Authentication
-- Token based Authentication
-- Session based Authentication
-- Basic Authentication
-- OAuth - Open Authorization
-- SSO - Single Sign On
+API authentication is the process of verifying the identity of clients attempting to access an API, ensuring that only authorized users or applications can interact with the API's resources. Common methods include API keys, OAuth 2.0, JSON Web Tokens (JWT), basic authentication, and OpenID Connect. These techniques vary in complexity and security level, from simple token-based approaches to more sophisticated protocols that handle both authentication and authorization. API authentication protects sensitive data, prevents unauthorized access, enables usage tracking, and can provide granular control over resource access. The choice of authentication method depends on factors such as security requirements, types of clients, ease of implementation, and scalability needs. Implementing robust API authentication is crucial for maintaining the integrity, security, and controlled usage of web services and applications in modern, interconnected software ecosystems.
 
 Visit the following resources to learn more:
 
-- [@article@User Authentication: Understanding the Basics & Top Tips](https://swoopnow.com/user-authentication/)
-- [@article@An overview about authentication methods](https://betterprogramming.pub/how-do-you-authenticate-mate-f2b70904cc3a)
 - [@roadmap.sh@SSO - Single Sign On](https://roadmap.sh/guides/sso)
 - [@roadmap.sh@OAuth - Open Authorization](https://roadmap.sh/guides/oauth)
 - [@roadmap.sh@JWT Authentication](https://roadmap.sh/guides/jwt-authentication)
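The token-based approach mentioned in the new description can be sketched with the standard library's `hmac`: the server issues a token by signing an identifier with a secret, and verifies the signature on each request. All names and the token format here are invented for illustration; real systems would use a standard such as JWT:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative value, never hard-coded in practice

def issue_token(user_id: str) -> str:
    # Sign the user id so the token cannot be forged without the secret.
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str) -> bool:
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = issue_token("alice")
```

A tampered token fails verification because its signature no longer matches the signed identifier.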

@@ -1,11 +1,9 @@
 # AWS Neptune
 
-AWS Neptune is a fully managed graph database service designed for applications that require highly connected data.
-It supports two popular graph models: Property Graph and RDF (Resource Description Framework), allowing you to build applications that traverse billions of relationships with millisecond latency.
-Neptune is optimized for storing and querying graph data, making it ideal for use cases like social networks, recommendation engines, fraud detection, and knowledge graphs.
-It offers high availability, automatic backups, and multi-AZ (Availability Zone) replication, ensuring data durability and fault tolerance.
-Additionally, Neptune integrates seamlessly with other AWS services and supports open standards like Gremlin, SPARQL, and Apache TinkerPop, making it flexible and easy to integrate into existing applications.
+Amazon Neptune is a fully managed graph database service provided by Amazon Web Services (AWS). It's designed to store and navigate highly connected data, supporting both property graph and RDF (Resource Description Framework) models. Neptune uses graph query languages like Gremlin and SPARQL, making it suitable for applications involving complex relationships, such as social networks, recommendation engines, fraud detection systems, and knowledge graphs. It offers high availability, with replication across multiple Availability Zones, and supports up to 15 read replicas for improved performance. Neptune integrates with other AWS services, provides encryption at rest and in transit, and offers fast recovery from failures. Its scalability and performance make it valuable for handling large-scale, complex data relationships in enterprise-level applications.
 
 Learn more from the following resources:
 
 - [@official@AWS Neptune Website](https://aws.amazon.com/neptune/)
 - [@video@Getting Started with Neptune Serverless](https://www.youtube.com/watch?v=b04-jjM9t4g)
 - [@article@Setting Up Amazon Neptune Graph Database](https://cliffordedsouza.medium.com/setting-up-amazon-neptune-graph-database-2b73512a7388)

@@ -1,15 +1,9 @@
 # Backpressure
 
-Backpressure is a design pattern that is used to manage the flow of data through a system, particularly in situations where the rate of data production exceeds the rate of data consumption. It is commonly used in cloud computing environments to prevent overloading of resources and to ensure that data is processed in a timely and efficient manner.
-There are several ways to implement backpressure in a cloud environment:
-
-- Buffering: This involves storing incoming data in a buffer until it can be processed, allowing the system to continue receiving data even if it is temporarily unable to process it.
-- Batching: This involves grouping incoming data into batches and processing the batches in sequence, rather than processing each piece of data individually.
-- Flow control: This involves using mechanisms such as flow control signals or windowing to regulate the rate at which data is transmitted between systems.
-
-Backpressure is an important aspect of cloud design, as it helps to ensure that data is processed efficiently and that the system remains stable and available. It is often used in conjunction with other design patterns, such as auto-scaling and load balancing, to provide a scalable and resilient cloud environment.
+Back pressure is a flow control mechanism in systems processing asynchronous data streams, where the receiving component signals its capacity to handle incoming data to the sending component. This feedback loop prevents overwhelming the receiver with more data than it can process, ensuring system stability and optimal performance. In software systems, particularly those dealing with high-volume data or event-driven architectures, back pressure helps manage resource allocation, prevent memory overflows, and maintain responsiveness. It's commonly implemented in reactive programming, message queues, and streaming data processing systems. By allowing the receiver to control the flow of data, back pressure helps create more resilient, efficient systems that can gracefully handle varying loads and prevent cascading failures in distributed systems.
 
 Visit the following resources to learn more:
 
 - [@article@Awesome Architecture: Backpressure](https://awesome-architecture.com/back-pressure/)
 - [@article@Backpressure explained — the resisted flow of data through software](https://medium.com/@jayphelps/backpressure-explained-the-flow-of-data-through-software-2350b3e77ce7)
 - [@video@What is Back Pressure](https://www.youtube.com/watch?v=viTGm_cV7lE)
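The feedback loop the new text describes can be sketched with a bounded queue: `put()` blocks once the buffer is full, so a fast producer is throttled to the consumer's pace. The capacity and sleep time are arbitrary values chosen to make the effect visible:

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=5)  # small capacity to force backpressure
consumed = []

def producer():
    for i in range(20):
        buf.put(i)  # blocks while the queue already holds 5 items
    buf.put(None)   # sentinel: no more data

def consumer():
    while True:
        item = buf.get()
        if item is None:
            break
        time.sleep(0.001)  # simulate slow processing
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

The producer never gets more than five items ahead of the consumer, yet no data is dropped.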

@@ -1 +1,7 @@
 # Base
+
+Oracle Base Database Service enables you to maintain absolute control over your data while using the combined capabilities of Oracle Database and Oracle Cloud Infrastructure. Oracle Base Database Service offers database systems (DB systems) on virtual machines. They are available as single-node DB systems and multi-node RAC DB systems on Oracle Cloud Infrastructure (OCI). You can manage these DB systems by using the OCI Console, the OCI API, the OCI CLI, the Database CLI (DBCLI), Enterprise Manager, or SQL Developer.
+
+Learn more from the following resources:
+
+- [@official@Base Database Website](https://docs.oracle.com/en-us/iaas/base-database/index.html)

@@ -1,11 +1,10 @@
 # Basic authentication
 
-Given the name "Basic Authentication", you should not confuse Basic Authentication with the standard username and password authentication. Basic authentication is a part of the HTTP specification, and the details can be [found in the RFC7617](https://www.rfc-editor.org/rfc/rfc7617.html).
-Because it is a part of the HTTP specifications, all the browsers have native support for "HTTP Basic Authentication".
+Basic Authentication is a simple HTTP authentication scheme built into the HTTP protocol. It works by sending a user's credentials (username and password) encoded in base64 format within the HTTP header. When a client makes a request to a server requiring authentication, the server responds with a 401 status code and a "WWW-Authenticate" header. The client then resends the request with the Authorization header containing the word "Basic" followed by the base64-encoded string of "username:password". While easy to implement, Basic Authentication has significant security limitations: credentials are essentially sent in plain text (base64 is easily decoded), and it doesn't provide any encryption. Therefore, it should only be used over HTTPS connections to ensure the credentials are protected during transmission. Due to its simplicity and lack of advanced security features, Basic Authentication is generally recommended only for simple, low-risk scenarios or as a fallback mechanism.
 
 Visit the following resources to learn more:
 
-- [@roadmap.sh@HTTP Basic Authentication](https://roadmap.sh/guides/http-basic-authentication)
 - [@video@Basic Authentication in 5 minutes](https://www.youtube.com/watch?v=rhi1eIjSbvk)
 - [@video@Illustrated HTTP Basic Authentication](https://www.youtube.com/watch?v=mwccHwUn7Gc)
+- [@feed@Explore top posts about Authentication](https://app.daily.dev/tags/authentication?ref=roadmapsh)
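The header construction the new text describes can be sketched in a few lines with the standard library's `base64`; the credentials are illustrative. Note that base64 is an encoding, not encryption, which is why HTTPS is required in practice:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Join credentials as "username:password" and base64-encode them,
    # as the Authorization header for Basic Authentication expects.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("aladdin", "opensesame")
```

Decoding the token trivially recovers the original credentials, which illustrates why this scheme offers no confidentiality on its own.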

@@ -1,9 +1,9 @@
 # Bcrypt
 
-bcrypt is a password hashing function, that has been proven reliable and secure since it's release in 1999. It has been implemented into most commonly-used programming languages.
+Bcrypt is a password-hashing function designed to securely hash passwords for storage in databases. Created by Niels Provos and David Mazières, it's based on the Blowfish cipher and incorporates a salt to protect against rainbow table attacks. Bcrypt's key feature is its adaptive nature, allowing for the adjustment of its cost factor to make it slower as computational power increases, thus maintaining resistance against brute-force attacks over time. It produces a fixed-size hash output, typically 60 characters long, which includes the salt and cost factor. Bcrypt is widely used in many programming languages and frameworks due to its security strength and relative ease of implementation. Its deliberate slowness in processing makes it particularly effective for password storage, where speed is not a priority but security is paramount.
 
 Visit the following resources to learn more:
 
 - [@article@bcrypts npm package](https://www.npmjs.com/package/bcrypt)
 - [@article@Understanding bcrypt](https://auth0.com/blog/hashing-in-action-understanding-bcrypt/)
-- [@video@bcrypt explained](https://www.youtube.com/watch?v=O6cmuiTBZVs)
+- [@video@bcrypt explained](https://www.youtube.com/watch?v=AzA_LTDoFqY)
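The salt-plus-tunable-cost idea the new text describes can be sketched with the standard library's PBKDF2, used here only as a stand-in since the bcrypt package itself may not be installed; with the real library the calls would be `bcrypt.hashpw`/`bcrypt.checkpw` instead:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple:
    salt = os.urandom(16)  # a random salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return (salt, iterations, digest)  # store all three, like bcrypt's single string

def check_password(password: str, record: tuple) -> bool:
    salt, iterations, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

record = hash_password("hunter2")
```

Raising the iteration count plays the same role as bcrypt's cost factor: it keeps hashing deliberately slow as hardware gets faster.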

@@ -1,15 +1,10 @@
 # Bitbucket
 
-Bitbucket is a Git based hosting and source code repository service that is Atlassian's alternative to other products like GitHub, GitLab etc
-Bitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (customer's on-premise) or Bitbucket Data Centre (number of servers in customers on-premise or cloud environment)
+Bitbucket is a web-based version control repository hosting service owned by Atlassian. It primarily uses Git version control systems, offering both cloud-hosted and self-hosted options. Bitbucket provides features such as pull requests for code review, branch permissions, and inline commenting on code. It integrates seamlessly with other Atlassian products like Jira and Trello, making it popular among teams already using Atlassian tools. Bitbucket supports continuous integration and deployment through Bitbucket Pipelines. It offers unlimited private repositories for small teams, making it cost-effective for smaller organizations. While similar to GitHub in many aspects, Bitbucket's integration with Atlassian's ecosystem and its pricing model for private repositories are key differentiators. It's widely used for collaborative software development, particularly in enterprise environments already invested in Atlassian's suite of products.
 
 Visit the following resources to learn more:
 
 - [@official@Bitbucket Website](https://bitbucket.org/product)
 - [@official@Getting started with Bitbucket](https://bitbucket.org/product/guides/basics/bitbucket-interface)
 - [@article@Using Git with Bitbucket Cloud](https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud)
 - [@official@A brief overview of Bitbucket](https://bitbucket.org/product/guides/getting-started/overview#a-brief-overview-of-bitbucket)
 - [@video@Bitbucket tutorial | How to use Bitbucket Cloud](https://www.youtube.com/watch?v=M44nEyd_5To)
 - [@video@Bitbucket Tutorial | Bitbucket for Beginners](https://www.youtube.com/watch?v=i5T-DB8tb4A)
 - [@feed@Explore top posts about Bitbucket](https://app.daily.dev/tags/bitbucket?ref=roadmapsh)

@@ -1,10 +1,10 @@
 # Browsers
 
-A web browser is a software application that enables a user to access and display web pages or other online content through its graphical user interface.
+Web browsers are software applications that enable users to access, retrieve, and navigate information on the World Wide Web. They interpret and display HTML, CSS, and JavaScript to render web pages. Modern browsers like Google Chrome, Mozilla Firefox, Apple Safari, and Microsoft Edge offer features such as tabbed browsing, bookmarks, extensions, and synchronization across devices. They incorporate rendering engines (e.g., Blink, Gecko, WebKit) to process web content, and JavaScript engines for executing code. Browsers also manage security through features like sandboxing, HTTPS enforcement, and pop-up blocking. They support various web standards and technologies, including HTML5, CSS3, and Web APIs, enabling rich, interactive web experiences. With the increasing complexity of web applications, browsers have evolved to become powerful platforms, balancing performance, security, and user experience in the ever-changing landscape of the internet.
 
 Visit the following resources to learn more:
 
-- [@article@How Browsers Work](https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/)
 - [@article@Role of Rendering Engine in Browsers](https://www.browserstack.com/guide/browser-rendering-engine)
+- [@article@How Browsers Work](https://www.ramotion.com/blog/what-is-web-browser/)
 - [@article@Populating the Page: How Browsers Work](https://developer.mozilla.org/en-US/docs/Web/Performance/How_browsers_work)
 - [@video@How Do Web Browsers Work?](https://www.youtube.com/watch?v=5rLFYtXHo9s)
 - [@feed@Explore top posts about Browsers](https://app.daily.dev/tags/browsers?ref=roadmapsh)

@@ -1,19 +1,8 @@
 # Building for Scale
 
-Speaking in general terms, scalability is the ability of a system to handle a growing amount of work by adding resources to it.
-A software that was conceived with a scalable architecture in mind, is a system that will support higher workloads without any fundamental changes to it, but don't be fooled, this isn't magic. You'll only get so far with smart thinking without adding more sources to it.
-For a system to be scalable, there are certain things you must pay attention to, like:
-
-- Coupling
-- Observability
-- Evolvability
-- Infrastructure
-
-When you think about the infrastructure of a scalable system, you have two main ways of building it: using on-premises resources or leveraging all the tools a cloud provider can give you.
-The main difference between on-premises and cloud resources will be FLEXIBILITY, on cloud providers you don't really need to plan ahead, you can upgrade your infrastructure with a couple of clicks, while with on-premises resources you will need a certain level of planning.
+Speaking in general terms, scalability is the ability of a system to handle a growing amount of work by adding resources to it. A software that was conceived with a scalable architecture in mind, is a system that will support higher workloads without any fundamental changes to it, but don't be fooled, this isn't magic. You'll only get so far with smart thinking without adding more sources to it. When you think about the infrastructure of a scalable system, you have two main ways of building it: using on-premises resources or leveraging all the tools a cloud provider can give you.
+
+The main difference between on-premises and cloud resources will be **flexibility**, on cloud providers you don't really need to plan ahead, you can upgrade your infrastructure with a couple of clicks, while with on-premises resources you will need a certain level of planning.
 
 Visit the following resources to learn more:

@@ -1,11 +1,10 @@
 # C\#
 
-C# (pronounced "C sharp") is a general purpose programming language made by Microsoft. It is used to perform different tasks and can be used to create web apps, games, mobile apps, etc.
+C# (pronounced C-sharp) is a modern, object-oriented programming language developed by Microsoft as part of its .NET framework. It combines the power and efficiency of C++ with the simplicity of Visual Basic, featuring strong typing, lexical scoping, and support for functional, generic, and component-oriented programming paradigms. C# is widely used for developing Windows desktop applications, web applications with ASP.NET, games with Unity, and cross-platform mobile apps using Xamarin. It offers features like garbage collection, type safety, and extensive library support. C# continues to evolve, with regular updates introducing new capabilities such as asynchronous programming, nullable reference types, and pattern matching. Its integration with the .NET ecosystem and Microsoft's development tools makes it a popular choice for enterprise software development and large-scale applications.
 
 Visit the following resources to learn more:
 
-- [@article@C# Learning Path](https://docs.microsoft.com/en-us/learn/paths/csharp-first-steps/?WT.mc_id=dotnet-35129-website)
+- [@course@C# Learning Path](https://docs.microsoft.com/en-us/learn/paths/csharp-first-steps/?WT.mc_id=dotnet-35129-website)
 - [@article@C# on W3 schools](https://www.w3schools.com/cs/index.php)
 - [@article@Introduction to C#](https://docs.microsoft.com/en-us/shows/CSharp-101/?WT.mc_id=Educationalcsharp-c9-scottha)
-- [@video@C# tutorials](https://www.youtube.com/watch?v=gfkTfcpWqAY\&list=PLTjRvDozrdlz3_FPXwb6lX_HoGXa09Yef)
+- [@video@Learn C# Programming – Full Course with Mini-Projects](https://www.youtube.com/watch?v=YrtFtdTTfv0)
 - [@feed@Explore top posts about C#](https://app.daily.dev/tags/c#?ref=roadmapsh)

@@ -1,5 +1,3 @@
 # Caching
 
-Caching is a technique of storing frequently used data or results of complex computations in a local memory, for a certain period. So, next time, when the client requests the same information, instead of retrieving the information from the database, it will give the information from the local memory. The main advantage of caching is that it improves performance by reducing the processing burden.
-NB! Caching is a complicated topic that has obvious benefits but can lead to pitfalls like stale data, cache invalidation, distributed caching etc
+Caching is a technique used in computing to store and retrieve frequently accessed data quickly, reducing the need to fetch it from the original, slower source repeatedly. It involves keeping a copy of data in a location that's faster to access than its primary storage. Caching can occur at various levels, including browser caching, application-level caching, and database caching. It significantly improves performance by reducing latency, decreasing network traffic, and lowering the load on servers or databases. Common caching strategies include time-based expiration, least recently used (LRU) algorithms, and write-through or write-back policies. While caching enhances speed and efficiency, it also introduces challenges in maintaining data consistency and freshness. Effective cache management is crucial in balancing performance gains with the need for up-to-date information in dynamic systems.
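The LRU eviction policy mentioned in the new text can be sketched with an `OrderedDict`: the least recently used entry is dropped once capacity is exceeded. The class and capacity are illustrative (in practice Python offers `functools.lru_cache` for function results):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency of use

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes "b" the least recently used
cache.put("c", 3)  # exceeds capacity, so "b" is evicted
```

After the last `put`, the cache holds "a" and "c"; "b" was evicted because it was used least recently.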

@ -1,8 +1,9 @@
# Caddy
The Caddy web server is an extensible, cross-platform, open-source web server written in Go. It has some really nice features like automatic SSL/HTTPs and a really easy configuration file.
Caddy is a modern, open-source web server written in Go. It's known for its simplicity, automatic HTTPS encryption, and HTTP/2 support out of the box. Caddy stands out for its ease of use, with a simple configuration syntax and the ability to serve static files with zero configuration. It automatically obtains and renews SSL/TLS certificates from Let's Encrypt, making secure deployments straightforward. Caddy supports various plugins and modules for extended functionality, including reverse proxying, load balancing, and dynamic virtual hosting. It's designed with security in mind, implementing modern web standards by default. While it may not match the raw performance of servers like Nginx in extremely high-load scenarios, Caddy's simplicity, built-in security features, and low resource usage make it an attractive choice for many web hosting needs, particularly for smaller to medium-sized projects or developers seeking a hassle-free server setup.
Visit the following resources to learn more:
- [@video@Getting started with Caddy the HTTPS Web Server from scratch](https://www.youtube.com/watch?v=t4naLFSlBpQ)
- [@official@Official Website](https://caddyserver.com/)
- [@opensource@caddyserver/caddy](https://github.com/caddyserver/caddy)
- [@video@How to Make a Simple Caddy 2 Website](https://www.youtube.com/watch?v=WgUV_BlHvj0)

@ -1,11 +1,10 @@
# CAP Theorem
The CAP Theorem, also known as Brewer's Theorem, is a fundamental principle in distributed database systems. It states that in a distributed system, it's impossible to simultaneously guarantee all three of the following properties: Consistency (all nodes see the same data at the same time), Availability (every request receives a response, without guarantee that it contains the most recent version of the data), and Partition tolerance (the system continues to operate despite network failures between nodes). According to the theorem, a distributed system can only strongly provide two of these three guarantees at any given time. This principle guides the design and architecture of distributed systems, influencing decisions on data consistency models, replication strategies, and failure handling. Understanding the CAP Theorem is crucial for designing robust, scalable distributed systems and for choosing appropriate database solutions for specific use cases in distributed computing environments.
Visit the following resources to learn more:
- [@article@What is CAP Theorem?](https://www.bmc.com/blogs/cap-theorem/)
- [@article@CAP Theorem - Wikipedia](https://en.wikipedia.org/wiki/CAP_theorem)
- [@article@An Illustrated Proof of the CAP Theorem](https://mwhittaker.github.io/blog/an_illustrated_proof_of_the_cap_theorem/)
- [@article@CAP Theorem and its applications in NoSQL Databases](https://www.ibm.com/uk-en/cloud/learn/cap-theorem)
- [@video@What is CAP Theorem?](https://www.youtube.com/watch?v=_RbsFXWRZ10)

@ -1,11 +1,10 @@
# Cassandra
Apache Cassandra is a highly scalable, distributed NoSQL database designed to handle large amounts of structured data across multiple commodity servers. It provides high availability with no single point of failure, offering linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure. Cassandra uses a masterless ring architecture, where all nodes are equal, allowing for easy data distribution and replication. It supports flexible data models and can handle both unstructured and structured data. Cassandra excels in write-heavy environments and is particularly suitable for applications requiring high throughput and low latency. Its data model is based on wide column stores, offering a more complex structure than key-value stores. Widely used in big data applications, Cassandra is known for its ability to handle massive datasets while maintaining performance and reliability.
Visit the following resources to learn more:
- [@official@Apache Cassandra](https://cassandra.apache.org/_/index.html)
- [@article@Cassandra - Quick Guide](https://www.tutorialspoint.com/cassandra/cassandra_quick_guide.htm)
- [@video@Apache Cassandra - Course for Beginners](https://www.youtube.com/watch?v=J-cSy5MeMOA)
- [@feed@Explore top posts about Backend Development](https://app.daily.dev/tags/backend?ref=roadmapsh)

@ -1,12 +1,10 @@
# CDN (Content Delivery Network)
A Content Delivery Network (CDN) service aims to provide high availability and performance improvements of websites. This is achieved with fast delivery of website assets and content typically via geographically closer endpoints to the client requests.
Traditional commercial CDNs (Amazon CloudFront, Akamai, Cloudflare, and Fastly) provide servers across the globe which can be used for this purpose. Serving assets and content via a CDN reduces bandwidth on website hosting, provides an extra layer of caching to reduce potential outages, and can improve website security as well.
Visit the following resources to learn more:
- [@article@CloudFlare - What is a CDN? | How do CDNs work?](https://www.cloudflare.com/en-ca/learning/cdn/what-is-a-cdn/)
- [@article@Wikipedia - Content Delivery Network](https://en.wikipedia.org/wiki/Content_delivery_network)
- [@video@What is Cloud CDN?](https://www.youtube.com/watch?v=841kyd_mfH0)
- [@video@What is a Content Delivery Network (CDN)?](https://www.youtube.com/watch?v=Bsq5cKkS33I)
- [@video@What is a CDN and how does it work?](https://www.youtube.com/watch?v=RI9np1LWzqw)

@ -1,6 +1,6 @@
# CI/CD
CI/CD (Continuous Integration/Continuous Delivery) is a set of practices and tools in software development that automate the process of building, testing, and deploying code changes. Continuous Integration involves frequently merging code changes into a central repository, where automated builds and tests are run. Continuous Delivery extends this by automatically deploying all code changes to a testing or staging environment after the build stage. Some implementations include Continuous Deployment, where changes are automatically released to production. CI/CD pipelines typically involve stages like code compilation, unit testing, integration testing, security scans, and deployment. This approach aims to improve software quality, reduce time to market, and increase development efficiency by catching and addressing issues early in the development cycle.
Visit the following resources to learn more:
@ -8,6 +8,5 @@ Visit the following resources to learn more:
- [@video@Automate your Workflows with GitHub Actions](https://www.youtube.com/watch?v=nyKZTKQS_EQ)
- [@article@What is CI/CD?](https://about.gitlab.com/topics/ci-cd/)
- [@article@A Primer: Continuous Integration and Continuous Delivery (CI/CD)](https://thenewstack.io/a-primer-continuous-integration-and-continuous-delivery-ci-cd/)
- [@article@3 Ways to Use Automation in CI/CD Pipelines](https://thenewstack.io/3-ways-to-use-automation-in-ci-cd-pipelines/)
- [@article@Articles about CI/CD](https://thenewstack.io/category/ci-cd/)
- [@feed@Explore top posts about CI/CD](https://app.daily.dev/tags/cicd?ref=roadmapsh)

@ -1,12 +1,9 @@
# Circuit Breaker
The circuit breaker design pattern is a way to protect a system from failures or excessive load by temporarily stopping certain operations if the system is deemed to be in a failed or overloaded state. It is commonly used in cloud computing environments to prevent cascading failures and to improve the resilience and availability of a system. A circuit breaker consists of three states: closed, open, and half-open. In the closed state, the circuit breaker allows operations to proceed as normal. If the system encounters a failure or becomes overloaded, the circuit breaker moves to the open state, and all subsequent operations are immediately stopped. After a specified period of time, the circuit breaker moves to the half-open state, and a small number of operations are allowed to proceed. If these operations are successful, the circuit breaker moves back to the closed state; if they fail, the circuit breaker moves back to the open state.
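The three states above can be sketched as a small class. This is a toy illustration (thresholds and timeouts are arbitrary), not a substitute for a hardened library:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker with closed, open, and half-open states."""

    def __init__(self, failure_threshold=3, recovery_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, operation):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "half-open"  # allow a trial request through
            else:
                raise RuntimeError("circuit open: request rejected")
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"  # stop all operations for a while
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        if self.state == "half-open":
            self.state = "closed"  # trial succeeded: resume normal operation
        return result
```

Wrapping calls to a flaky downstream service in `breaker.call(...)` turns repeated failures into fast rejections instead of piled-up timeouts.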
Visit the following resources to learn more:
- [@article@Circuit Breaker - AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_mitigate_interaction_failure_graceful_degradation.html)
- [@article@Circuit Breaker - Complete Guide](https://mateus4k.github.io/posts/circuit-breakers/)
- [@article@The Circuit Breaker Pattern](https://aerospike.com/blog/circuit-breaker-pattern/)
- [@video@Back to Basics: Static Stability Using a Circuit Breaker Pattern](https://www.youtube.com/watch?v=gy1RITZ7N7s)

@ -1,7 +1,8 @@
# Client Side Caching
Client-side caching is a technique where web browsers or applications store data locally on the user's device to improve performance and reduce server load. It involves saving copies of web pages, images, scripts, and other resources on the client's system for faster access on subsequent visits. Modern browsers implement various caching mechanisms, including HTTP caching (using headers like Cache-Control and ETag), service workers for offline functionality, and local storage APIs. Client-side caching significantly reduces network traffic and load times, enhancing user experience, especially on slower connections. However, it requires careful management to balance improved performance with the need for up-to-date content. Developers must implement appropriate cache invalidation strategies and consider cache-busting techniques for critical updates. Effective client-side caching is crucial for creating responsive, efficient web applications while minimizing server resource usage.
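The freshness check a browser performs with `Cache-Control` can be sketched roughly as follows. This is a simplified model (it only handles `no-store`, `no-cache`, and `max-age`, ignoring the many other directives and `ETag` revalidation):

```python
import time

def is_fresh(cached_at, cache_control, now=None):
    """Decide whether a cached response may be reused without revalidation,
    based on a simplified reading of the Cache-Control header."""
    now = time.time() if now is None else now
    for directive in cache_control.split(","):
        directive = directive.strip().lower()
        if directive in ("no-store", "no-cache"):
            return False  # must not reuse without asking the server again
        if directive.startswith("max-age="):
            try:
                max_age = int(directive.split("=", 1)[1])
            except ValueError:
                return False
            return (now - cached_at) < max_age
    return False  # no freshness information: fall back to revalidation
```

A response cached 500 seconds ago with `max-age=3600` is still fresh; one cached past its `max-age` must be revalidated.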
Visit the following resources to learn more:
- [@video@Everything you need to know about HTTP Caching](https://www.youtube.com/watch?v=HiBDZgTNpXY)
- [@article@Client-side Caching](https://redis.io/docs/latest/develop/use/client-side-caching/)

@ -1,9 +1,9 @@
# Containerization vs. Virtualization
Containerization and virtualization are both technologies for isolating and running multiple applications on shared hardware, but they differ significantly in approach and resource usage. Virtualization creates separate virtual machines (VMs), each with its own operating system, running on a hypervisor. This provides strong isolation but consumes more resources. Containerization, exemplified by Docker, uses a shared operating system kernel to create isolated environments (containers) for applications. Containers are lighter, start faster, and use fewer resources than VMs. They're ideal for microservices architectures and rapid deployment. Virtualization offers better security isolation and is suitable for running different operating systems on the same hardware. Containerization provides greater efficiency and scalability, especially for cloud-native applications. The choice between them depends on specific use cases, security requirements, and infrastructure needs.
Visit the following resources to learn more:
- [@article@Containerization vs. Virtualization: Everything you need to know](https://middleware.io/blog/containerization-vs-virtualization/)
- [@video@Containerization or Virtualization - The Differences ](https://www.youtube.com/watch?v=1WnDHitznGY)
- [@video@Virtual Machine (VM) vs Docker](https://www.youtube.com/watch?v=a1M_thDTqmU)
- [@feed@Explore top posts about Containers](https://app.daily.dev/tags/containers?ref=roadmapsh)

@ -1,7 +1,8 @@
# Cookie-Based Authentication
Cookie-based authentication is a method of maintaining user sessions in web applications. When a user logs in, the server creates a session and sends a unique identifier (session ID) to the client as a cookie. This cookie is then sent with every subsequent request, allowing the server to identify and authenticate the user. The actual session data is typically stored on the server, with the cookie merely serving as a key to access this data. This approach is stateful on the server side and works well for traditional web applications. It's relatively simple to implement and is natively supported by browsers. However, cookie-based authentication faces challenges with cross-origin requests, can be vulnerable to CSRF attacks if not properly secured, and may not be ideal for modern single-page applications or mobile apps. Despite these limitations, it remains a common authentication method, especially for server-rendered web applications.
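The flow above can be sketched with an in-memory session store. This is a minimal illustration (a real server would persist sessions, expire them, and add CSRF protection):

```python
import secrets

SESSIONS = {}  # server-side session store: session_id -> user data

def login(username):
    """Create a session and return the Set-Cookie value the server would send."""
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = {"user": username}
    # HttpOnly blocks JavaScript access; Secure restricts the cookie to HTTPS
    return f"session_id={session_id}; HttpOnly; Secure; SameSite=Lax"

def authenticate(cookie_header):
    """Look up the user for a request's Cookie header, or None if invalid."""
    for part in cookie_header.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "session_id" and value in SESSIONS:
            return SESSIONS[value]["user"]
    return None
```

The cookie itself carries only the random session ID; the user data stays on the server, which is what makes this approach stateful.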
Visit the following resources to learn more:
- [@article@How does cookie based authentication work?](https://stackoverflow.com/questions/17769011/how-does-cookie-based-authentication-work)
- [@video@Session vs Token Authentication in 100 Seconds](https://www.youtube.com/watch?v=UBUNrFtufWo)

@ -1,10 +1,10 @@
# CORS
Cross-Origin Resource Sharing (CORS) is a security mechanism implemented by web browsers to control access to resources (like APIs or fonts) on a web page from a different domain than the one serving the web page. It extends and adds flexibility to the Same-Origin Policy, allowing servers to specify who can access their resources. CORS works through a system of HTTP headers, where browsers send a preflight request to the server hosting the cross-origin resource, and the server responds with headers indicating whether the actual request is allowed. This mechanism helps prevent unauthorized access to sensitive data while enabling legitimate cross-origin requests. CORS is crucial for modern web applications that often integrate services and resources from multiple domains, balancing security needs with the functionality requirements of complex, distributed web systems.
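The preflight exchange can be sketched as a function that builds the response headers a server might return for an `OPTIONS` request. The origin and methods here are hypothetical, and real frameworks handle far more (credentials, allowed headers, `Vary`):

```python
ALLOWED_ORIGINS = {"https://app.example.com"}  # hypothetical trusted origin
ALLOWED_METHODS = {"GET", "POST"}

def preflight_response(origin, requested_method):
    """Build the CORS headers for an OPTIONS preflight request.
    Returns an empty dict when the cross-origin request should be refused."""
    if origin not in ALLOWED_ORIGINS or requested_method not in ALLOWED_METHODS:
        return {}  # no CORS headers: the browser blocks the actual request
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(sorted(ALLOWED_METHODS)),
        "Access-Control-Max-Age": "86400",  # let the browser cache the preflight
    }
```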
Visit the following resources to learn more:
- [@article@Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS)
- [@video@CORS in 100 Seconds](https://www.youtube.com/watch?v=4KHiSt0oLJ0)
- [@video@CORS in 6 minutes](https://www.youtube.com/watch?v=PNtFSVU-YTI)
- [@article@Understanding CORS](https://rbika.com/blog/understanding-cors)

@ -4,8 +4,6 @@ Apache CouchDB is an open-source document-oriented NoSQL database. It uses JSON
Visit the following resources to learn more:
- [@article@CouchDB Documentation](https://docs.couchdb.org/)
- [@official@CouchDB Website](https://couchdb.apache.org/)
- [@video@What is CouchDB?](https://www.youtube.com/watch?v=Mru4sHzIfSA)
- [@feed@Explore top posts about CouchDB](https://app.daily.dev/tags/couchdb?ref=roadmapsh)

@ -1,7 +1,8 @@
# CQRS
CQRS (Command Query Responsibility Segregation) is an architectural pattern that separates read and write operations for a data store. In this pattern, "commands" handle data modification (create, update, delete), while "queries" handle data retrieval. The principle behind CQRS is that for many systems, especially complex ones, the requirements for reading data differ significantly from those for writing data. By separating these concerns, CQRS allows for independent scaling, optimization, and evolution of the read and write sides. This can lead to improved performance, scalability, and security. CQRS is often used in event-sourced systems and can be particularly beneficial in high-performance, complex domain applications. However, it also introduces additional complexity and should be applied judiciously based on the specific needs and constraints of the system.
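The command/query split can be sketched as two classes over a shared store. This is a minimal illustration; in a full CQRS system the read side often has its own denormalized store, updated via events:

```python
class UserCommands:
    """Write side: commands mutate state and return nothing to read from."""
    def __init__(self, store):
        self._store = store

    def register_user(self, user_id, name):
        if user_id in self._store:
            raise ValueError("user already exists")
        self._store[user_id] = {"name": name}

class UserQueries:
    """Read side: queries return data and never mutate state."""
    def __init__(self, store):
        self._store = store

    def get_name(self, user_id):
        record = self._store.get(user_id)
        return record["name"] if record else None

store = {}
commands, queries = UserCommands(store), UserQueries(store)
```

Because the two sides share no interface, each can be scaled, optimized, or replaced independently.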
Visit the following resources to learn more:
- [@article@CQRS Pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs)
- [@video@Learn CQRS Pattern in 5 minutes!](https://www.youtube.com/watch?v=eiut3FIY1Cg)

@ -1,9 +1,10 @@
# Content Security Policy
Content Security Policy (CSP) is a security standard implemented by web browsers to prevent cross-site scripting (XSS), clickjacking, and other code injection attacks. It works by allowing web developers to specify which sources of content are trusted and can be loaded on a web page. CSP is typically implemented through HTTP headers or meta tags, defining rules for various types of resources like scripts, stylesheets, images, and fonts. By restricting the origins from which content can be loaded, CSP significantly reduces the risk of malicious code execution. It also provides features like reporting violations to help developers identify and fix potential security issues. While powerful, implementing CSP requires careful configuration to balance security with functionality, especially for sites using third-party resources or inline scripts.
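A CSP header value is just a semicolon-separated list of directives, which can be assembled like this (the CDN host is hypothetical):

```python
def build_csp(directives):
    """Assemble a Content-Security-Policy header value from a directive map."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

policy = build_csp({
    "default-src": ["'self'"],                            # same-origin only by default
    "script-src": ["'self'", "https://cdn.example.com"],  # hypothetical trusted CDN
    "img-src": ["'self'", "data:"],
})
```

The resulting string would be sent as the `Content-Security-Policy` response header.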
Visit the following resources to learn more:
- [@article@MDN — Content Security Policy (CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP)
- [@article@Google Devs — Content Security Policy (CSP)](https://developers.google.com/web/fundamentals/security/csp)
- [@video@Content Security Policy Explained](https://www.youtube.com/watch?v=-LjPRzFR5f0)
- [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh)

@ -1,8 +1,8 @@
# Data Replication
Data replication is the process of creating and maintaining multiple copies of the same data across different locations or nodes in a distributed system. It enhances data availability, reliability, and performance by ensuring that data remains accessible even if one or more nodes fail. Replication can be synchronous (changes are applied to all copies simultaneously) or asynchronous (changes are propagated after being applied to the primary copy). It's widely used in database systems, content delivery networks, and distributed file systems. Replication strategies include master-slave, multi-master, and peer-to-peer models. While improving fault tolerance and read performance, replication introduces challenges in maintaining data consistency across copies and managing potential conflicts. Effective replication strategies must balance consistency, availability, and partition tolerance, often in line with the principles of the CAP theorem.
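The synchronous/asynchronous distinction can be sketched with toy nodes. This is a deliberately simplified model (no networking, conflicts, or failure handling):

```python
class Replica:
    """A follower node holding a copy of the data."""
    def __init__(self):
        self.data = {}

class Primary:
    """Primary node that replicates each write to its replicas."""
    def __init__(self, replicas, synchronous=True):
        self.data = {}
        self.replicas = replicas
        self.synchronous = synchronous
        self._pending = []  # writes not yet propagated (asynchronous mode)

    def write(self, key, value):
        self.data[key] = value
        if self.synchronous:
            for replica in self.replicas:  # apply everywhere before returning
                replica.data[key] = value
        else:
            self._pending.append((key, value))  # propagate later

    def flush(self):
        """Propagate queued writes, as an async replication worker would."""
        for key, value in self._pending:
            for replica in self.replicas:
                replica.data[key] = value
        self._pending.clear()
```

In asynchronous mode the replica briefly lags the primary, which is exactly the consistency/availability trade-off the CAP theorem describes.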
Visit the following resources to learn more:
- [@article@What is data replication?](https://www.ibm.com/topics/data-replication)
- [@video@What is Data Replication?](https://www.youtube.com/watch?v=iO8a1nMbL1o)

@ -1,9 +1,9 @@
# Database Indexes
Database indexes are data structures that improve the speed of data retrieval operations in a database management system. They work similarly to book indexes, providing a quick way to look up information based on specific columns or sets of columns. Indexes create a separate structure that holds a reference to the actual data, allowing the database engine to find information without scanning the entire table. While indexes significantly enhance query performance, especially for large datasets, they come with trade-offs. They increase storage space requirements and can slow down write operations as the index must be updated with each data modification. Common types include B-tree indexes for general purpose use, bitmap indexes for low-cardinality data, and hash indexes for equality comparisons. Proper index design is crucial for optimizing database performance, balancing faster reads against slower writes and increased storage needs.
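The idea behind a hash index can be sketched in plain Python: precompute a map from column value to row positions, trading extra memory and write work for O(1) equality lookups instead of a full table scan.

```python
class HashIndex:
    """Toy hash index over one column of an in-memory 'table'."""
    def __init__(self, rows, column):
        self.column = column
        self._index = {}
        for row_id, row in enumerate(rows):  # one-time cost, paid again on writes
            self._index.setdefault(row[column], []).append(row_id)

    def lookup(self, value):
        """Return matching row ids without scanning the whole table."""
        return self._index.get(value, [])

rows = [
    {"id": 1, "city": "Paris"},
    {"id": 2, "city": "Lagos"},
    {"id": 3, "city": "Paris"},
]
city_index = HashIndex(rows, "city")
```

Real databases use B-trees rather than plain hashes for most indexes, since B-trees also support range queries and ordered scans.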
Visit the following resources to learn more:
- [@article@Database index - Wikipedia](https://en.wikipedia.org/wiki/Database_index)
- [@article@What is a Database Index?](https://www.codecademy.com/article/sql-indexes)
- [@video@Database Indexing Explained](https://www.youtube.com/watch?v=-qNSXK7s7_w)
- [@feed@Explore top posts about Database](https://app.daily.dev/tags/database?ref=roadmapsh)

@ -1,37 +1,16 @@
# Design and Development Principles
Design and Development Principles are fundamental guidelines that inform the creation of software systems. Key principles include:
1. SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion)
2. DRY (Don't Repeat Yourself)
3. KISS (Keep It Simple, Stupid)
4. YAGNI (You Aren't Gonna Need It)
5. Separation of Concerns
6. Modularity
7. Encapsulation
8. Composition over Inheritance
9. Loose Coupling and High Cohesion
10. Principle of Least Astonishment
These principles aim to create more maintainable, scalable, and robust software. They encourage clean code, promote reusability, reduce complexity, and enhance flexibility. While not rigid rules, these principles guide developers in making design decisions that lead to better software architecture and easier long-term maintenance. Applying these principles helps in creating systems that are easier to understand, modify, and extend over time.
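Composition over inheritance (principle 8) can be illustrated with a small sketch: instead of a `Report` inheriting from a serializer base class, it holds a serializer it delegates to, which also demonstrates loose coupling (the serializer is swappable).

```python
import json

class JsonSerializer:
    """One interchangeable serialization strategy."""
    def serialize(self, data):
        return json.dumps(data)

class Report:
    """The report *has a* serializer rather than *being* one."""
    def __init__(self, data, serializer):
        self.data = data
        self.serializer = serializer  # injected dependency, easy to replace or mock

    def export(self):
        return self.serializer.serialize(self.data)
```

Swapping in a CSV or XML serializer requires no change to `Report`, only a new class with a `serialize` method.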

@ -1,13 +1,10 @@
# DNS
DNS (Domain Name System) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It translates human-readable domain names (like www.example.com) into IP addresses (like 192.0.2.1) that computers use to identify each other. DNS servers distributed worldwide work together to resolve these queries, forming a global directory service. The system uses a tree-like structure with root servers at the top, followed by top-level domain servers (.com, .org, etc.), authoritative name servers for specific domains, and local DNS servers. DNS is crucial for the functioning of the Internet, enabling users to access websites and services using memorable names instead of numerical IP addresses. It also supports email routing, service discovery, and other network protocols.
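The name-to-address translation is available to any program through the system resolver; a minimal sketch of what a browser does before opening a connection:

```python
import socket

def resolve(hostname):
    """Return the unique IP addresses the system resolver finds for a hostname."""
    addresses = set()
    # getaddrinfo asks the OS resolver, which consults caches and DNS servers
    for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
        addresses.add(sockaddr[0])
    return sorted(addresses)
```

Resolving a public name such as `www.example.com` would return its public IPs; `localhost` resolves to the loopback address without any network traffic.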
Visit the following resources to learn more:
- [@article@What is DNS?](https://www.cloudflare.com/en-gb/learning/dns/what-is-dns/)
- [@article@How DNS works (comic)](https://howdns.works/)
- [@article@Understanding Domain names](https://developer.mozilla.org/en-US/docs/Glossary/DNS/)
- [@video@DNS and How does it Work?](https://www.youtube.com/watch?v=Wj0od2ag5sk)
- [@video@DNS Records](https://www.youtube.com/watch?v=7lxgpKh_fRY)
- [@video@Complete DNS mini-series](https://www.youtube.com/watch?v=zEmUuNFBgN8&list=PLTk5ZYSbd9MhMmOiPhfRJNW7bhxHo4q-K)
- [@feed@Explore top posts about DNS](https://app.daily.dev/tags/dns?ref=roadmapsh)

@ -1,17 +1,9 @@
# Domain-Driven Design
Domain-Driven Design (DDD) is a software development approach that focuses on creating a deep understanding of the business domain and using this knowledge to inform the design of software systems. It emphasizes close collaboration between technical and domain experts to develop a shared language (ubiquitous language) and model that accurately represents the core concepts and processes of the business. DDD promotes organizing code around business concepts (bounded contexts), using rich domain models to encapsulate business logic, and separating the domain logic from infrastructure concerns. Key patterns in DDD include entities, value objects, aggregates, repositories, and domain services. This approach aims to create more maintainable and flexible software systems that closely align with business needs and can evolve with changing requirements. DDD is particularly valuable for complex domains where traditional CRUD-based architectures may fall short in capturing the nuances and rules of the business.
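Two of the key patterns named above, value objects and entities, can be sketched for a hypothetical loan domain (the names `Money` and `LoanApplication` are illustrative, not from any particular framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Value object: defined only by its attributes, immutable, no identity."""
    amount: int  # minor units (e.g. cents) to avoid float rounding issues
    currency: str

    def add(self, other):
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

class LoanApplication:
    """Entity: has an identity that persists as its state changes."""
    def __init__(self, application_id, requested):
        self.application_id = application_id
        self.requested = requested
        self.status = "submitted"

    def accept_offer(self):  # domain behaviour lives on the model itself
        self.status = "accepted"
```

Two `Money(1000, "USD")` instances are interchangeable, while two loan applications with equal fields are still distinct things identified by their IDs.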
Visit the following resources to learn more:
- [@article@Domain-Driven Design](https://redis.com/glossary/domain-driven-design-ddd/)
- [@article@Domain-Driven Design: Tackling Complexity in the Heart of Software](https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215)
- [@video@Domain Driven Design: What You Need To Know](https://www.youtube.com/watch?v=4rhzdZIDX_k)
- [@feed@Explore top posts about Domain-Driven Design](https://app.daily.dev/tags/domain-driven-design?ref=roadmapsh)

@ -1,9 +1,9 @@
# DynamoDB
Amazon DynamoDB is a fully managed, serverless NoSQL database service provided by Amazon Web Services (AWS). It offers high-performance, scalable, and flexible data storage for applications of any scale. DynamoDB supports both key-value and document data models, providing fast and predictable performance with seamless scalability. It features automatic scaling, built-in security, backup and restore options, and global tables for multi-region deployment. DynamoDB excels in handling high-traffic web applications, gaming backends, mobile apps, and IoT solutions. It offers consistent single-digit millisecond latency at any scale and supports both strongly consistent and eventually consistent read models. With its integration into the AWS ecosystem, on-demand capacity mode, and support for transactions, DynamoDB is widely used for building highly responsive and scalable applications, particularly those with unpredictable workloads or requiring low-latency data access.
Learn more from the following resources:
- [@official@AWS DynamoDB Website](https://aws.amazon.com/dynamodb/)
- [@video@AWS DynamoDB Tutorial For Beginners](https://www.youtube.com/watch?v=2k2GINpO308)
- [@feed@daily.dev AWS DynamoDB Feed](https://app.daily.dev/tags/aws-dynamodb)

@ -6,4 +6,5 @@ Visit the following resources to learn more:
- [@official@Elasticsearch Website](https://www.elastic.co/elasticsearch/)
- [@official@Elasticsearch Documentation](https://www.elastic.co/guide/index.html)
- [@video@What is Elasticsearch](https://www.youtube.com/watch?v=ZP0NmfyfsoM)
- [@feed@Explore top posts about ELK](https://app.daily.dev/tags/elk?ref=roadmapsh)

@ -1,12 +1,9 @@
# Event Sourcing
Event sourcing is a design pattern in which the state of a system is represented as a sequence of events that have occurred over time. In an event-sourced system, changes to the state of the system are recorded as events and stored in an event store. The current state of the system is derived by replaying the events from the event store. One of the main benefits of event sourcing is that it provides a clear and auditable history of all the changes that have occurred in the system. This can be useful for debugging and for tracking the evolution of the system over time. Event sourcing is often used in conjunction with other patterns, such as Command Query Responsibility Segregation (CQRS) and domain-driven design, to build scalable and responsive systems with complex business logic. It is also useful for building systems that need to support undo/redo functionality or that need to integrate with external systems.
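The core idea — store events, derive state by replay — can be sketched in a few lines. This is a minimal illustration with a hypothetical bank-account domain, not a production event store:

```python
# Append-only event log: state is never stored directly.
event_store = []

def record(event_type, amount):
    event_store.append({"type": event_type, "amount": amount})

def current_balance(events):
    # Derive the current state by replaying the full history.
    balance = 0
    for event in events:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

record("deposited", 100)
record("deposited", 50)
record("withdrawn", 30)
assert current_balance(event_store) == 120
```

Because the log is append-only, the complete history remains available for auditing, debugging, or rebuilding alternative read models.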
Visit the following resources to learn more:
- [@article@Event Sourcing - Martin Fowler](https://martinfowler.com/eaaDev/EventSourcing.html)
- [@video@Event Sourcing 101](https://www.youtube.com/watch?v=lg6aF5PP4Tc)
- [@feed@Explore top posts about Architecture](https://app.daily.dev/tags/architecture?ref=roadmapsh)

@ -1,14 +1,4 @@
# Failure Modes
Database failure modes refer to the various ways in which a database system can malfunction or cease to operate correctly. These include hardware failures (like disk crashes or network outages), software bugs, data corruption, performance degradation due to overload, and inconsistencies in distributed systems. Common failure modes involve data loss, system unavailability, replication lag in distributed databases, and deadlocks. To mitigate these, databases employ strategies such as redundancy, regular backups, transaction logging, and failover mechanisms. Understanding potential failure modes is crucial for designing robust database systems with high availability and data integrity. It informs the implementation of fault tolerance measures, recovery procedures, and monitoring systems to ensure database reliability and minimize downtime in critical applications.
Several distinct failure modes can occur in a database, including:
- Read contention: This occurs when multiple clients or processes are trying to read data from the same location in the database at the same time, which can lead to delays or errors.
- Write contention: This occurs when multiple clients or processes are trying to write data to the same location in the database at the same time, which can lead to delays or errors.
- Thundering herd: This occurs when a large number of clients or processes try to access the same resource simultaneously, which can lead to resource exhaustion and reduced performance.
- Cascade: This occurs when a failure in one part of the database system causes a chain reaction that leads to failures in other parts of the system.
- Deadlock: This occurs when two or more transactions are waiting for each other to release a lock on a resource, leading to a standstill.
- Corruption: This occurs when data in the database becomes corrupted, which can lead to errors or unexpected results when reading or writing to the database.
- Hardware failure: This occurs when hardware components, such as disk drives or memory, fail, which can lead to data loss or corruption.
- Software failure: This occurs when software components, such as the database management system or application, fail, which can lead to errors or unexpected results.
- Network failure: This occurs when the network connection between the database and the client is lost, which can lead to errors or timeouts when trying to access the database.
- Denial of service (DoS) attack: This occurs when a malicious actor attempts to overwhelm the database with requests, leading to resource exhaustion and reduced performance.
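One classic mitigation for the deadlock mode listed above is lock ordering: if every transaction acquires its locks in the same global order, a circular wait cannot form. A minimal sketch (illustrative only, using Python threads rather than a real database):

```python
import threading

# Two shared resources and a fixed global lock order (sorted by name).
locks = {"a": threading.Lock(), "b": threading.Lock()}

def transfer(first, second, result, key):
    # Acquire in sorted order regardless of argument order,
    # so two concurrent transfers can never deadlock each other.
    for name in sorted((first, second)):
        locks[name].acquire()
    try:
        result[key] = "committed"
    finally:
        for name in sorted((first, second), reverse=True):
            locks[name].release()

result = {}
t1 = threading.Thread(target=transfer, args=("a", "b", result, "t1"))
t2 = threading.Thread(target=transfer, args=("b", "a", result, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without the sorted acquisition, t1 holding `a` while waiting for `b`, and t2 holding `b` while waiting for `a`, is exactly the circular wait that produces a deadlock.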

@ -1,6 +1,9 @@
# Firebase
Firebase is a comprehensive mobile and web application development platform owned by Google. It provides a suite of cloud-based services that simplify app development, hosting, and scaling. Key features include real-time database, cloud storage, authentication, hosting, cloud functions, and analytics. Firebase offers real-time synchronization, allowing data to be updated across clients instantly. Its authentication service supports multiple providers, including email/password, social media logins, and phone authentication. The platform's serverless architecture enables developers to focus on front-end development without managing backend infrastructure. Firebase also provides tools for app testing, crash reporting, and performance monitoring. While it excels in rapid prototyping and building real-time applications, its proprietary nature and potential for vendor lock-in are considerations for large-scale or complex applications. Firebase's ease of use and integration with Google Cloud Platform make it popular for startups and projects requiring quick deployment.
Learn more from the following resources:
- [@official@Firebase Website](https://firebase.google.com/)
- [@video@Firebase in 100 seconds](https://www.youtube.com/watch?v=vAoB4VbhRzM)
- [@course@The ultimate guide to Firebase](https://fireship.io/lessons/the-ultimate-beginners-guide-to-firebase/)

@ -1,10 +1,9 @@
# Functional Testing
Functional testing is where software is tested to ensure functional requirements are met. Usually, it is a form of black box testing in which the tester has no understanding of the source code; testing is performed by providing input and comparing expected/actual output. It contrasts with non-functional testing, which includes performance, load, scalability, and penetration testing.
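The input-in, expected-output-out approach can be sketched as follows. This is an illustrative example with a made-up `slugify` requirement; the tests only know the specified behavior, not the implementation:

```python
# System under test: "turn a title into a URL slug".
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Black-box functional tests: supply input, compare expected vs. actual output.
assert slugify("Hello World") == "hello-world"
assert slugify("  Functional   Testing ") == "functional-testing"
```

A non-functional test of the same function would instead measure, say, how fast it handles a million titles.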
Visit the following resources to learn more:
- [@article@What is Functional Testing?](https://www.guru99.com/functional-testing.html)
- [@video@Functional Testing vs Non-Functional Testing](https://www.youtube.com/watch?v=NgQT7miTP9M)
- [@feed@Explore top posts about Testing](https://app.daily.dev/tags/testing?ref=roadmapsh)

@ -1,13 +1,12 @@
# Git
Git is a distributed version control system designed to handle projects of any size with speed and efficiency. Created by Linus Torvalds in 2005, it tracks changes in source code during software development, allowing multiple developers to work together on non-linear development. Git maintains a complete history of all changes, enabling easy rollbacks and comparisons between versions. Its distributed nature means each developer has a full copy of the repository, allowing for offline work and backup. Git's key features include branching and merging capabilities, staging area for commits, and support for collaborative workflows like pull requests. Its speed, flexibility, and robust branching and merging capabilities have made it the most widely used version control system in software development, particularly for open-source projects and team collaborations.
Visit the following resources to learn more:
- [@article@Introduction to Git](https://learn.microsoft.com/en-us/training/modules/intro-to-git/)
- [@roadmap@Learn Git & GitHub](/git-github)
- [@video@Git & GitHub Crash Course For Beginners](https://www.youtube.com/watch?v=SWYqp7iY_Tc)
- [@article@Learn Git with Tutorials, News and Tips - Atlassian](https://www.atlassian.com/git)
- [@article@Git Cheat Sheet](https://cs.fyi/guide/git-cheatsheet)
- [@article@Learn Git Branching](https://learngitbranching.js.org/)
- [@article@Git Tutorial](https://www.w3schools.com/git/)
- [@feed@Explore top posts about Git](https://app.daily.dev/tags/git?ref=roadmapsh)

@ -1,15 +1,12 @@
# GitHub
GitHub is a web-based platform for version control and collaboration using Git. Owned by Microsoft, it provides hosting for software development and offers features beyond basic Git functionality. GitHub includes tools for project management, code review, and social coding. Key features include repositories for storing code, pull requests for proposing and reviewing changes, issues for tracking bugs and tasks, and actions for automating workflows. It supports both public and private repositories, making it popular for open-source projects and private development. GitHub's collaborative features, like forking repositories and inline code comments, facilitate team development and community contributions. With its extensive integrations and large user base, GitHub has become a central hub for developers, serving as a portfolio, collaboration platform, and deployment tool for software projects of all sizes.
Visit the following resources to learn more:
- [@article@GitHub Documentation](https://docs.github.com/en/get-started/quickstart)
- [@article@How to Use Git in a Professional Dev Team](https://ooloo.io/project/github-flow)
- [@roadmap@Learn Git & GitHub](/git-github)
- [@official@GitHub Website](https://github.com)
- [@video@What is GitHub?](https://www.youtube.com/watch?v=w3jLJU7DT5E)
- [@video@Git vs. GitHub: What's the difference?](https://www.youtube.com/watch?v=wpISo9TNjfU)
- [@video@Git and GitHub for Beginners](https://www.youtube.com/watch?v=RGOj5yH7evk)
- [@video@Git and GitHub - CS50 Beyond 2019](https://www.youtube.com/watch?v=eulnSXkhE7I)
- [@article@Learn Git Branching](https://learngitbranching.js.org/?locale=en_us)
- [@feed@Explore top posts about GitHub](https://app.daily.dev/tags/github?ref=roadmapsh)

@ -1,9 +1,10 @@
# GitLab
GitLab is a web-based DevOps platform that provides a complete solution for the software development lifecycle. It offers source code management, continuous integration/continuous deployment (CI/CD), issue tracking, and more, all integrated into a single application. GitLab supports Git repositories and includes features like merge requests (similar to GitHub's pull requests), wiki pages, and issue boards. It emphasizes DevOps practices, providing built-in CI/CD pipelines, container registry, and Kubernetes integration. GitLab offers both cloud-hosted and self-hosted options, giving organizations flexibility in deployment. Its all-in-one approach differentiates it from competitors, as it includes features that might require multiple tools in other ecosystems. GitLab's focus on the entire DevOps lifecycle, from planning to monitoring, makes it popular among enterprises and teams seeking a unified platform for their development workflows.
Visit the following resources to learn more:
- [@official@GitLab Website](https://gitlab.com/)
- [@article@GitLab Documentation](https://docs.gitlab.com/)
- [@video@What is Gitlab and Why Use It?](https://www.youtube.com/watch?v=bnF7f1zGpo4)
- [@feed@Explore top posts about GitLab](https://app.daily.dev/tags/gitlab?ref=roadmapsh)

@ -1,15 +1,12 @@
# Go
Go, also known as Golang, is a statically typed, compiled programming language designed by Google. It combines the efficiency of compiled languages with the ease of use of dynamically typed interpreted languages. Go features built-in concurrency support through goroutines and channels, making it well-suited for networked and multicore systems. It has a simple and clean syntax, fast compilation times, and efficient garbage collection. Go's standard library is comprehensive, reducing the need for external dependencies. The language emphasizes simplicity and readability, with features like implicit interfaces and a lack of inheritance. Go is particularly popular for building microservices, web servers, and distributed systems. Its performance, simplicity, and robust tooling make it a favored choice for cloud-native development, DevOps tools, and large-scale backend systems.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated Go Roadmap](/golang)
- [@official@A Tour of Go – Go Basics](https://go.dev/tour/welcome/1)
- [@official@Go Reference Documentation](https://go.dev/doc/)
- [@article@Go by Example - annotated example programs](https://gobyexample.com/)
- [@article@W3Schools Go Tutorial](https://www.w3schools.com/go/)
- [@article@Making a RESTful JSON API in Go](https://thenewstack.io/make-a-restful-json-api-go/)
- [@article@Go, the Programming Language of the Cloud](https://thenewstack.io/go-the-programming-language-of-the-cloud/)
- [@video@Go Class by Matt](https://www.youtube.com/playlist?list=PLoILbKo9rG3skRCj37Kn5Zj803hhiuRK6)
- [@video@Go Programming – Golang Course with Bonus Projects](https://www.youtube.com/watch?v=un6ZyFkqFKo)
- [@feed@Explore top posts about Golang](https://app.daily.dev/tags/golang?ref=roadmapsh)

@ -1,12 +1,6 @@
# GoF Design Patterns
The Gang of Four (GoF) design patterns are a set of design patterns for object-oriented software development that were first described in the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (also known as the Gang of Four).
The Gang of Four (GoF) Design Patterns are a collection of 23 foundational software design patterns that provide solutions to common object-oriented design problems. These patterns are grouped into three categories: *Creational* (focused on object creation like Singleton and Factory), *Structural* (focused on class and object composition like Adapter and Composite), and *Behavioral* (focused on communication between objects like Observer and Strategy). Each pattern offers a proven template for addressing specific design challenges, promoting code reusability, flexibility, and maintainability across software systems.
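As a concrete taste of the Behavioral category, here is a minimal sketch of the Observer pattern mentioned above (an illustration, not a library API): a subject notifies subscribed observers of events without knowing their concrete types.

```python
# Observer pattern (Behavioral): subscribers register callbacks with a
# subject; the subject broadcasts events to all of them on change.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

received = []
subject = Subject()
subject.subscribe(received.append)   # any callable can observe
subject.notify("price_changed")
assert received == ["price_changed"]
```

The subject depends only on the callable interface, so new observers can be added without modifying it — the decoupling these patterns are designed to provide.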
Learn more from the following links:

@ -1,11 +1,9 @@
# Graceful Degradation
Graceful degradation is a design principle that states that a system should be designed to continue functioning, even if some of its components or features are not available. In the context of web development, graceful degradation refers to the ability of a web page or application to continue functioning, even if the user's browser or device does not support certain features or technologies. Graceful degradation is often used as an alternative to progressive enhancement, a design principle that states that a system should be designed to take advantage of advanced features and technologies if they are available.
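On the backend, the same principle often looks like a fallback path: when a dependency is down, serve something reduced rather than failing. A minimal sketch, with a hypothetical recommendation service standing in for the unavailable component:

```python
def fetch_personalized(user_id):
    # Simulated outage of a downstream recommendation service.
    raise ConnectionError("recommendation service unavailable")

def recommendations(user_id):
    try:
        return fetch_personalized(user_id)
    except ConnectionError:
        # Degraded mode: the page still renders, with generic content.
        return ["top-sellers", "new-arrivals"]

assert recommendations(42) == ["top-sellers", "new-arrivals"]
```

The user experience is reduced but not broken, which is exactly the trade graceful degradation makes.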
Visit the following resources to learn more:
- [@article@What is Graceful Degradation & Why Does it Matter?](https://blog.hubspot.com/website/graceful-degradation)
- [@article@Four Considerations When Designing Systems For Graceful Degradation](https://newrelic.com/blog/best-practices/design-software-for-graceful-degradation)
- [@article@The Art of Graceful Degradation](https://farfetchtechblog.com/en/blog/post/the-art-of-failure-ii-graceful-degradation/)
- [@video@Graceful Degradation - Georgia Tech](https://www.youtube.com/watch?v=Tk7e0LMsAlI)

@ -1,15 +1,10 @@
# GraphQL
GraphQL is a query language for APIs and a runtime for executing those queries, developed by Facebook. Unlike REST, where fixed endpoints return predefined data, GraphQL allows clients to request exactly the data they need, making API interactions more flexible and efficient. It uses a single endpoint and relies on a schema that defines the types and structure of the available data. This approach reduces over-fetching and under-fetching of data, making it ideal for complex applications with diverse data needs across multiple platforms (e.g., web, mobile).
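The "request exactly the fields you need" idea can be illustrated with a toy field resolver. This is not a real GraphQL server (libraries like Graphene or Apollo handle the actual schema and query language); it only mimics how a selection set limits what comes back:

```python
# Full record held by the server.
user_record = {"id": 1, "name": "Ada", "email": "ada@example.com",
               "bio": "...", "posts": []}

def resolve(selection, record):
    # Return only the requested fields -- no over-fetching.
    return {field: record[field] for field in selection}

# Analogous to the GraphQL query: { user { id name } }
assert resolve(["id", "name"], user_record) == {"id": 1, "name": "Ada"}
```

A REST endpoint would typically return the whole record; here the client's selection set drives the shape of the response.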
Visit the following resources to learn more:
- [@roadmap@GraphQL Roadmap](/graphql)
- [@official@GraphQL Official Website](https://graphql.org/)
- [@video@Tutorial - GraphQL Explained in 100 Seconds](https://www.youtube.com/watch?v=eIQh02xuVw4)
- [@feed@Explore top posts about GraphQL](https://app.daily.dev/tags/graphql?ref=roadmapsh)

@ -1,13 +1,10 @@
# gRPC
gRPC is a high-performance, open-source universal RPC framework. RPC stands for Remote Procedure Call (there's an ongoing debate on what the "g" stands for); it is a protocol that allows a program to execute a procedure of another program located on another computer. The great advantage is that the developer doesn't need to code the details of the remote interaction: the remote procedure is called like any other function, while the client and the server can be written in different languages.
Visit the following resources to learn more:
- [@official@gRPC Website](https://grpc.io/)
- [@official@gRPC Docs](https://grpc.io/docs/)
- [@article@What Is GRPC?](https://www.wallarm.com/what/the-concept-of-grpc)
- [@video@What Is GRPC?](https://www.youtube.com/watch?v=hVrwuMnCtok)
- [@feed@Explore top posts about gRPC](https://app.daily.dev/tags/grpc?ref=roadmapsh)

@ -1,5 +1,8 @@
# HATEOAS
HATEOAS (Hypermedia As The Engine Of Application State) is a constraint of RESTful architecture that allows clients to navigate an API dynamically through hypermedia links provided in responses. Instead of hard-coding URLs or endpoints, the client discovers available actions through these links, much like a web browser following links on a webpage. This enables greater flexibility and decouples clients from server-side changes, making the system more adaptable and scalable without breaking existing clients. It's a key element of REST's principle of statelessness and self-descriptive messages.
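A hypothetical HATEOAS-style response might look like the following (the `_links` convention is borrowed from HAL-style APIs; the endpoints are illustrative, not from any real service):

```python
# Server response for an order: the representation carries the links
# the client may follow next, so no endpoint is hard-coded client-side.
order = {
    "id": 7,
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/7"},
        "cancel": {"href": "/orders/7/cancel"},
        "pay":    {"href": "/orders/7/payment"},
    },
}

def next_actions(resource):
    # The client discovers available state transitions from the document.
    return sorted(rel for rel in resource["_links"] if rel != "self")

assert next_actions(order) == ["cancel", "pay"]
```

If the order were already paid, the server would simply omit the `pay` link, and the client would adapt without any code change.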
Learn more from the following resources:
- [@article@What is HATEOAS and why is it important for my REST API?](https://restcookbook.com/Basics/hateoas/)
- [@video@What happened to HATEOAS](https://www.youtube.com/watch?v=HNTSrytKCoQ)

@ -1,6 +1,6 @@
# Internet
The internet is a global network of interconnected computers that communicate using standardized protocols, primarily TCP/IP. When you request a webpage, your device sends a data packet through your internet service provider (ISP) to a DNS server, which translates the website's domain name into an IP address. The packet is then routed across various networks (using routers and switches) to the destination server, which processes the request and sends back the response. This back-and-forth exchange enables the transfer of data like web pages, emails, and files, making the internet a dynamic, decentralized system for global communication.
Visit the following resources to learn more:
@ -9,5 +9,4 @@ Visit the following resources to learn more:
- [@article@How Does the Internet Work?](http://web.stanford.edu/class/msande91si/www-spr04/readings/week1/InternetWhitepaper.htm)
- [@roadmap.sh@Introduction to Internet](/guides/what-is-internet)
- [@video@How does the Internet work?](https://www.youtube.com/watch?v=x3c1ih2NJEg)
- [@video@How the Internet Works in 5 Minutes](https://www.youtube.com/watch?v=7_LPdttKXPc)
- [@video@How does the internet work? (Full Course)](https://www.youtube.com/watch?v=zN8YNNHcaZc)

@ -1,17 +1,11 @@
# HTTPS
HTTPS (Hypertext Transfer Protocol Secure) is an extension of HTTP designed to secure data transmission between a client (e.g., a browser) and a server. It uses encryption through SSL/TLS protocols to ensure data confidentiality, integrity, and authenticity. This prevents sensitive information, like login credentials or payment details, from being intercepted or tampered with by attackers. HTTPS is essential for securing web applications and has become a standard for most websites, especially those handling user data, as it helps protect against man-in-the-middle attacks and eavesdropping.
An HTTPS connection starts with a handshake phase, during which the server and the client agree on how to encrypt the communication; in particular, they choose an encryption algorithm and a secret key. The handshake uses asymmetric cryptography so the two parties can communicate securely even though they have not yet agreed on a secret key. After the handshake, all communication is encrypted with symmetric cryptography, which is much more efficient but requires that both client and server know the secret key.
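On the client side, Python's standard-library `ssl` module configures the TLS layer that HTTPS runs on. A short sketch showing the secure defaults:

```python
import ssl

# create_default_context() enables certificate verification and hostname
# checking -- the properties that defend against man-in-the-middle attacks.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

Libraries like `urllib` and `http.client` accept such a context when opening HTTPS connections; disabling these checks (as some snippets suggest for self-signed certificates) removes the protections described above.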
Visit the following resources to learn more:
- [@article@What is HTTPS?](https://www.cloudflare.com/en-gb/learning/ssl/what-is-https/)
- [@article@Why HTTPS Matters](https://developers.google.com/web/fundamentals/security/encrypt-in-transit/why-https)
- [@article@Enabling HTTPS on Your Servers](https://web.dev/articles/enable-https)
- [@article@How HTTPS works (comic)](https://howhttps.works/)
- [@video@SSL, TLS, HTTP, HTTPS Explained](https://www.youtube.com/watch?v=hExRDVZHhig)
- [@video@HTTPS — Stories from the field](https://www.youtube.com/watch?v=GoXgl9r0Kjk)
- [@article@HTTPS explained with carrier pigeons](https://baida.dev/articles/https-explained-with-carrier-pigeons)
- [@video@HTTP vs HTTPS](https://www.youtube.com/watch?v=nOmT_5hqgPk)

@ -1,11 +1,10 @@
# InfluxDB
InfluxDB is a high-performance, open-source time-series database designed for handling large volumes of timestamped data, such as metrics, events, and real-time analytics. It is optimized for use cases like monitoring, IoT, and application performance management, where data arrives in continuous streams. InfluxDB supports SQL-like queries through its query language (Flux), and it can handle high write and query loads efficiently. Key features include support for retention policies, downsampling, and automatic data compaction, making it ideal for environments that require fast and scalable time-series data storage and retrieval.
Visit the following resources to learn more:
- [@official@InfluxDB Website](https://www.influxdata.com/)
- [@article@Time series database](https://www.influxdata.com/time-series-database/)
- [@video@The Basics of Time Series Data](https://www.youtube.com/watch?v=wBWTj-1XiRU)
- [@feed@Explore top posts about Backend Development](https://app.daily.dev/tags/backend?ref=roadmapsh)

@ -1,14 +1,11 @@
# Instrumentation, Monitoring, and Telemetry
Instrumentation, monitoring, and telemetry are critical components for ensuring system reliability and performance. *Instrumentation* refers to embedding code or tools within applications to capture key metrics, logs, and traces. *Monitoring* involves observing these metrics in real time to detect anomalies, failures, or performance issues, often using dashboards and alerting systems. *Telemetry* is the automated collection and transmission of this data from distributed systems, enabling visibility into system behavior. Together, these practices provide insights into the health, usage, and performance of systems, aiding in proactive issue resolution and optimizing overall system efficiency.
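A minimal instrumentation sketch (illustrative only; real systems use libraries like OpenTelemetry or Prometheus clients): a decorator records call counts and latency per function, producing the metrics a monitoring system would scrape.

```python
import time
from collections import defaultdict

# In-process metrics registry: name -> {calls, total_seconds}.
metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def instrumented(func):
    # Wrap a function so every call is counted and timed.
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            metrics[func.__name__]["calls"] += 1
            metrics[func.__name__]["total_seconds"] += time.perf_counter() - start
    return wrapper

@instrumented
def handle_request():
    return "ok"

handle_request()
handle_request()
assert metrics["handle_request"]["calls"] == 2
```

In a real deployment, telemetry would ship these numbers to a backend, and monitoring would alert when, say, latency per call crosses a threshold.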
Visit the following resources to learn more:
- [@article@What is Instrumentation?](https://en.wikipedia.org/wiki/Instrumentation_\(computer_programming\))
- [@article@What is Monitoring?](https://www.yottaa.com/performance-monitoring-backend-vs-front-end-solutions/)
- [@article@What is Telemetry?](https://www.sumologic.com/insight/what-is-telemetry/)
- [@video@Observability vs. APM vs. Monitoring](https://www.youtube.com/watch?v=CAQ_a2-9UOI)
- [@feed@Explore top posts about Monitoring](https://app.daily.dev/tags/monitoring?ref=roadmapsh)

@ -1,10 +1,10 @@
# Integration Testing
Integration testing focuses on verifying the interactions between different components or modules of a software system to ensure they work together as expected. It comes after unit testing and tests how modules communicate with each other, often using APIs, databases, or third-party services. The goal is to catch issues related to the integration points, such as data mismatches, protocol errors, or misconfigurations. Integration tests help ensure that independently developed components can function seamlessly as part of a larger system, making them crucial for identifying bugs that wouldn't surface in isolated unit tests.
Visit the following resources to learn more:
- [@article@How to Integrate and Test Your Tech Stack](https://thenewstack.io/how-to-integrate-and-test-your-tech-stack/)
- [@video@What is Integration Testing?](https://www.youtube.com/watch?v=kRD6PA6uxiY)
- [@feed@Explore top posts about Testing](https://app.daily.dev/tags/testing?ref=roadmapsh)
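As a minimal sketch of the idea, the test below exercises a data-access function and a service function together against a real (in-memory) SQLite database rather than mocks; the module and function names are our own illustration.

```python
import sqlite3

def make_db():
    """Create the real dependency the modules integrate against."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return db

def add_user(db, name):           # data-access module
    cur = db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    db.commit()
    return cur.lastrowid

def greet_user(db, user_id):      # service module built on the data layer
    row = db.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return f"Hello, {row[0]}!"

def test_add_then_greet():
    db = make_db()
    uid = add_user(db, "Ada")
    # The assertion crosses module boundaries: service + data layer + DB.
    assert greet_user(db, uid) == "Hello, Ada!"

test_add_then_greet()
```

A unit test would stub the database; the integration test deliberately does not, so schema mismatches and SQL errors surface here.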

@ -1,6 +1,6 @@
# Internet
The internet is a global network of interconnected computers that communicate using standardized protocols, primarily TCP/IP. When you request a webpage, your device sends a data packet through your internet service provider (ISP) to a DNS server, which translates the website's domain name into an IP address. The packet is then routed across various networks (using routers and switches) to the destination server, which processes the request and sends back the response. This back-and-forth exchange enables the transfer of data like web pages, emails, and files, making the internet a dynamic, decentralized system for global communication.
Visit the following resources to learn more:
@ -9,5 +9,5 @@ Visit the following resources to learn more:
- [@article@How Does the Internet Work?](http://web.stanford.edu/class/msande91si/www-spr04/readings/week1/InternetWhitepaper.htm)
- [@roadmap.sh@Introduction to Internet](/guides/what-is-internet)
- [@video@Computer Network | Google IT Support Certificate](https://www.youtube.com/watch?v=Z_hU2zm4_S8)
- [@video@How does the internet work? (Full Course)](https://www.youtube.com/watch?v=zN8YNNHcaZc)

@ -1,13 +1,11 @@
# Java
Java is a high-level, object-oriented programming language known for its portability, robustness, and scalability. Developed by Sun Microsystems (now Oracle), Java follows the "write once, run anywhere" principle, allowing code to run on any device with a Java Virtual Machine (JVM). It's widely used for building large-scale enterprise applications, Android mobile apps, and web services. Java features automatic memory management (garbage collection), a vast standard library, and strong security features, making it a popular choice for backend systems, distributed applications, and cloud-based solutions.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated Java Roadmap](/java)
- [@official@Java Website](https://www.java.com/)
- [@article@W3 Schools Tutorials](https://www.w3schools.com/java/)
- [@video@Complete Java course](https://www.youtube.com/watch?v=xk4_1vDrzzo)
- [@feed@Explore top posts about Java](https://app.daily.dev/tags/java?ref=roadmapsh)

@ -1,11 +1,11 @@
# JavaScript
JavaScript is a versatile, high-level programming language primarily used for adding interactivity and dynamic features to websites. It runs in the browser, allowing for client-side scripting that can manipulate HTML and CSS, respond to user events, and interact with web APIs. JavaScript is also used on the server side with environments like Node.js, enabling full-stack development. It supports event-driven, functional, and imperative programming styles, and has a rich ecosystem of libraries and frameworks (like React, Angular, and Vue) that enhance its capabilities and streamline development.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated JavaScript Roadmap](/javascript)
- [@article@The Modern JavaScript Tutorial](https://javascript.info/)
- [@article@Build 30 Javascript projects in 30 days](https://javascript30.com/)
- [@video@JavaScript Crash Course for Beginners](https://youtu.be/hdI2bqOjy3c?t=2)
- [@feed@Explore top posts about JavaScript](https://app.daily.dev/tags/javascript?ref=roadmapsh)

@ -5,5 +5,5 @@ JSON or JavaScript Object Notation is an encoding scheme that is designed to eli
Visit the following resources to learn more:
- [@official@Official Website](https://jsonapi.org/)
- [@official@Official Docs](https://jsonapi.org/implementations/)
- [@article@What is JSON API?](https://medium.com/@niranjan.cs/what-is-json-api-3b824fba2788)
- [@video@JSON API: Explained in 4 minutes](https://www.youtube.com/watch?v=N-4prIh7t38)

@ -1,12 +1,10 @@
# JWT
JWT (JSON Web Token) is an open standard for securely transmitting information between parties as a JSON object. It consists of three parts: a header (which specifies the token type and algorithm used for signing), a payload (which contains the claims or the data being transmitted), and a signature (which is used to verify the token’s integrity and authenticity). JWTs are commonly used for authentication and authorization purposes, allowing users to securely transmit and validate their identity and permissions across web applications and APIs. They are compact, self-contained, and can be easily transmitted in HTTP headers, making them popular for modern web and mobile applications.
Visit the following resources to learn more:
- [@official@jwt.io Website](https://jwt.io/)
- [@official@Introduction to JSON Web Tokens](https://jwt.io/introduction)
- [@video@What is JWT? JSON Web Token Explained](https://www.youtube.com/watch?v=926mknSW9Lo)
- [@feed@Explore top posts about JWT](https://app.daily.dev/tags/jwt?ref=roadmapsh)
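The header.payload.signature structure described above can be reproduced with the standard library alone. This is an educational sketch of HS256 signing, with our own helper names; in a real application, use a vetted library such as PyJWT and also validate claims like `exp`.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = sign_jwt({"sub": "user-1", "role": "admin"}, b"secret")
print(verify_jwt(token, b"secret"))  # {'sub': 'user-1', 'role': 'admin'}
```

Note that the payload is only base64-encoded, not encrypted: anyone can read it, but only the key holder can produce a valid signature.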

@ -1,9 +1,10 @@
# Kafka
Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant data processing. It acts as a message broker, allowing systems to publish and subscribe to streams of records, similar to a distributed commit log. Kafka is highly scalable and can handle large volumes of data with low latency, making it ideal for real-time analytics, log aggregation, and data integration. It features topics for organizing data streams, partitions for parallel processing, and replication for fault tolerance, enabling reliable and efficient handling of large-scale data flows across distributed systems.
Visit the following resources to learn more:
- [@official@Apache Kafka quickstart](https://kafka.apache.org/quickstart)
- [@video@Apache Kafka Fundamentals](https://www.youtube.com/watch?v=B5j3uNBH8X4)
- [@video@Kafka in 100 Seconds](https://www.youtube.com/watch?v=uvb00oaa3k8)
- [@feed@Explore top posts about Kafka](https://app.daily.dev/tags/kafka?ref=roadmapsh)
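To make "topics, partitions, and offsets" concrete, here is a toy in-memory model of those ideas, not the real Kafka client: records with the same key land in the same partition (preserving per-key order), and each record gets an offset in its partition's append-only log.

```python
class MiniTopic:
    """Toy model of a Kafka topic: N append-only partition logs."""

    def __init__(self, partitions: int = 3):
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key: str, value: str):
        # Same key -> same partition, so per-key ordering is preserved.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1   # (partition, offset)

    def consume(self, partition: int, offset: int) -> str:
        # Consumers track their own offsets and read the log by position.
        return self.partitions[partition][offset]

orders = MiniTopic()
p, off = orders.produce("user-1", "order created")
print(orders.consume(p, off))  # order created
```

Real Kafka adds replication, durable storage, and consumer groups on top of exactly this log-per-partition structure.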

@ -1,8 +1,9 @@
# APIs
An API (Application Programming Interface) is a set of defined rules and protocols that allow different software applications to communicate and interact with each other. It provides a standardized way for developers to access and manipulate the functionalities or data of a service, application, or platform without needing to understand its internal workings. APIs can be public or private and are commonly used to integrate disparate systems, facilitate third-party development, and enable interoperability between applications. They typically include endpoints, request methods (like GET, POST, PUT), and data formats (like JSON or XML) to interact with.
Visit the following resources to learn more:
- [@article@What is an API?](https://aws.amazon.com/what-is/api/)
- [@video@What is an API (in 5 minutes)](https://www.youtube.com/watch?v=ByGJQzlzxQg)
- [@feed@daily.dev API Feed](https://app.daily.dev/tags/rest-api)
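The endpoint/method/format triad above can be shown end to end with the standard library: a tiny JSON endpoint and a client calling it. The `/api/status` path and response shape are illustrative assumptions, not any particular service's contract.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/status":           # the endpoint
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)               # the JSON response
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):                # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api/status"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))  # {'status': 'ok'}
server.shutdown()
```

The client only knows the URL, method, and data format; the server's internals could change completely without breaking it, which is the point of an API.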

@ -1,11 +1,7 @@
# Load Shifting
Load shifting is a strategy used to manage and distribute computing or system workloads more efficiently by moving or redistributing the load from peak times to off-peak periods. This approach helps in balancing the demand on resources, optimizing performance, and reducing costs. In cloud computing and data centers, load shifting can involve rescheduling jobs, leveraging different regions or availability zones, or adjusting resource allocation based on real-time demand. By smoothing out peak loads, organizations can enhance system reliability, minimize latency, and better utilize their infrastructure.
Learn more from the following resources:
- [@video@Load Shifting 101](https://www.youtube.com/watch?v=DOyMJEdk5aE)
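A toy scheduler makes the idea tangible: deferrable batch jobs submitted during peak hours are shifted to an off-peak window. The hour ranges below are illustrative assumptions, not a standard policy.

```python
OFF_PEAK = range(0, 6)   # assume 00:00-05:59 is the cheap, quiet window

def schedule(job: str, hour: int) -> dict:
    """Run off-peak jobs immediately; shift peak-time jobs to 00:00."""
    if hour in OFF_PEAK:
        return {"job": job, "run_at": hour, "deferred": False}
    # Shift the work to the start of the next off-peak window.
    return {"job": job, "run_at": 0, "deferred": True}

print(schedule("nightly-report", 14))  # {'job': 'nightly-report', 'run_at': 0, 'deferred': True}
print(schedule("nightly-report", 3))   # {'job': 'nightly-report', 'run_at': 3, 'deferred': False}
```

Real systems apply the same logic with job queues and autoscaling policies instead of a hard-coded hour range.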

@ -2,7 +2,7 @@
Long polling is a technique where the client polls the server for new data. However, if the server does not have any data available for the client, instead of sending an empty response, the server holds the request and waits for some specified period of time for new data to be available. If new data becomes available during that time, the server immediately sends a response to the client, completing the open request. If no new data becomes available and the timeout period specified by the client expires, the server sends a response indicating that fact. The client will then immediately re-request data from the server, creating a new request-response cycle.
Learn more from the following resources:
- [@article@What are Long-Polling, Websockets, Server-Sent Events (SSE) and Comet?](https://stackoverflow.com/questions/11077857/what-are-long-polling-websockets-server-sent-events-sse-and-comet)
- [@article@Long Polling](https://javascript.info/long-polling)
- [@video@What is Long Polling?](https://www.youtube.com/watch?v=LD0_-uIsnOE)
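The server-side behaviour described above, hold the request open until data arrives or a timeout expires, can be sketched with a blocking queue; this is a single-process illustration, not a real HTTP handler.

```python
import queue
import threading

events = queue.Queue()   # stands in for "new data becoming available"

def long_poll(timeout: float) -> dict:
    """Block until data arrives or the timeout expires, as a
    long-polling endpoint would before responding."""
    try:
        return {"data": events.get(timeout=timeout)}
    except queue.Empty:
        return {"data": None}   # timed out; the client re-polls immediately

# Simulate data arriving 0.1s into an already-open request.
threading.Timer(0.1, lambda: events.put("new message")).start()
print(long_poll(timeout=2.0))   # {'data': 'new message'}
print(long_poll(timeout=0.1))   # {'data': None} -- empty timeout response
```

The key difference from plain polling is visible in the first call: the response is sent the moment data exists, not on the next poll interval.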

@ -4,7 +4,7 @@ LXC is an abbreviation used for Linux Containers which is an operating system th
Visit the following resources to learn more:
- [@official@LXC Documentation](https://linuxcontainers.org/lxc/documentation/)
- [@article@What is LXC?](https://linuxcontainers.org/lxc/introduction/)
- [@video@Linux Container (LXC) Introduction](https://youtu.be/_KnmRdK69qM)
- [@video@Getting started with LXC containers](https://youtu.be/CWmkSj_B-wo)

@ -6,6 +6,5 @@ Visit the following resources to learn more:
- [@official@MariaDB website](https://mariadb.org/)
- [@article@MariaDB vs MySQL](https://www.guru99.com/mariadb-vs-mysql.html)
- [@article@W3Schools - MariaDB tutorial](https://www.w3schools.blog/mariadb-tutorial)
- [@video@MariaDB Tutorial For Beginners in One Hour](https://www.youtube.com/watch?v=_AMj02sANpI)
- [@feed@Explore top posts about Infrastructure](https://app.daily.dev/tags/infrastructure?ref=roadmapsh)

@ -1,9 +1,10 @@
# MD5
MD5 (Message-Digest Algorithm 5) is a widely used cryptographic hash function that produces a 128-bit hash value, typically represented as a 32-character hexadecimal number. It was designed to provide a unique identifier for data by generating a fixed-size output (the hash) for any input. While MD5 was once popular for verifying data integrity and storing passwords, it is now considered cryptographically broken and unsuitable for security-sensitive applications due to vulnerabilities that allow for collision attacks (where two different inputs produce the same hash). As a result, MD5 has largely been replaced by more secure hash functions like SHA-256.
Visit the following resources to learn more:
- [@article@Wikipedia - MD5](https://en.wikipedia.org/wiki/MD5)
- [@article@What is MD5?](https://www.techtarget.com/searchsecurity/definition/MD5)
- [@article@Why is MD5 not safe?](https://infosecscout.com/why-md5-is-not-safe/)
- [@video@How the MD5 hash function works](https://www.youtube.com/watch?v=5MiMK45gkTY)
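Python's standard `hashlib` shows the 32-hex-character digest and the fixed-size property directly; as the text notes, treat MD5 only as a checksum, never as a security primitive.

```python
import hashlib

data = b"hello"
# MD5 as a (non-cryptographic) integrity checksum.
print(hashlib.md5(data).hexdigest())    # 5d41402abc4b2a76b9719d911017c592

# A tiny change in input yields a completely different digest.
print(hashlib.md5(b"hellp").hexdigest())

# For anything security-sensitive, prefer SHA-256 instead.
print(hashlib.sha256(data).hexdigest())
```

Both inputs above produce 128-bit (32 hex digit) MD5 outputs regardless of input length, which is what makes hashes useful as fixed-size fingerprints.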

@ -1,13 +1,11 @@
# Memcached
Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the `libevent` library. Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.
Memcached has no internal mechanism to track misses which may happen. However, some third-party utilities provide this functionality.
Visit the following resources to learn more:
- [@article@Memcached, From Wikipedia](https://en.wikipedia.org/wiki/Memcached)
- [@opensource@memcached/memcached](https://github.com/memcached/memcached#readme)
- [@article@Memcached Tutorial](https://www.tutorialspoint.com/memcached/index.htm)
- [@video@Redis vs Memcached](https://www.youtube.com/watch?v=Gyy1SiE8avE)
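The LRU purge behaviour described above, evict the least recently used entry when the table is full, can be modelled in a few lines. This is not the memcached protocol, just a sketch of the eviction policy it applies.

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of memcached's LRU eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                      # miss -> fall back to the DB
        self.store.move_to_end(key)          # mark as recently used
        return self.store[key]

    def set(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # purge the LRU entry

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")               # "a" is now most recently used
cache.set("c", 3)            # table full -> evicts "b", the LRU entry
print(cache.get("b"), cache.get("a"))  # None 1
```

An application layers this in front of the database: check the cache first, and only on a `None` (miss) read the slower backing store and repopulate.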

@ -1,7 +1,9 @@
# Message Brokers
Message brokers are intermediaries that facilitate communication between distributed systems or components by receiving, routing, and delivering messages. They enable asynchronous message passing, decoupling producers (senders) from consumers (receivers), which improves scalability and flexibility. Common functions of message brokers include message queuing, load balancing, and ensuring reliable message delivery through features like persistence and acknowledgment. Popular message brokers include Apache Kafka, RabbitMQ, and ActiveMQ, each offering different features and capabilities suited to various use cases like real-time data processing, event streaming, or task management.
Visit the following resources to learn more:
- [@article@What are message brokers?](https://www.ibm.com/topics/message-brokers)
- [@video@Introduction to Message Brokers](https://www.youtube.com/watch?v=57Qr9tk6Uxc)
- [@video@Kafka vs RabbitMQ](https://www.youtube.com/watch?v=_5mu7lZz5X4)
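The decoupling idea can be shown with a toy in-memory broker: producers publish to a named queue and never talk to consumers directly. This is an illustration of the pattern, not RabbitMQ's or Kafka's API.

```python
from collections import defaultdict, deque

class Broker:
    """Toy broker: named FIFO queues decouple producers from consumers."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def publish(self, topic: str, message: str):
        self.queues[topic].append(message)       # producer side

    def consume(self, topic: str):
        q = self.queues[topic]
        return q.popleft() if q else None        # consumer side

broker = Broker()
broker.publish("orders", "order #1 created")
broker.publish("orders", "order #2 created")
print(broker.consume("orders"))  # order #1 created -- FIFO delivery
```

Because the producer only knows the queue name, consumers can be added, removed, or restarted without the producer changing at all; real brokers add persistence and acknowledgments on top of this shape.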

@ -1,12 +1,12 @@
# Microservices
Microservices is an architectural style that structures an application as a collection of loosely coupled, independently deployable services. Each microservice focuses on a specific business capability and communicates with others via lightweight protocols, typically HTTP or messaging queues. This approach allows for greater scalability, flexibility, and resilience, as services can be developed, deployed, and scaled independently. Microservices also facilitate the use of diverse technologies and languages for different components, and they support continuous delivery and deployment. However, managing microservices involves complexity in terms of inter-service communication, data consistency, and deployment orchestration.
Visit the following resources to learn more:
- [@article@Pattern: Microservice Architecture](https://microservices.io/patterns/microservices.html)
- [@article@What is Microservices?](https://smartbear.com/solutions/microservices/)
- [@article@Microservices 101](https://thenewstack.io/microservices-101/)
- [@article@Articles about Microservices](https://thenewstack.io/category/microservices/)
- [@video@Microservices explained in 5 minutes](https://www.youtube.com/watch?v=lL_j7ilk7rc)
- [@feed@Explore top posts about Microservices](https://app.daily.dev/tags/microservices?ref=roadmapsh)

@ -1,7 +1,17 @@
# Migration Strategies
Migration strategies involve planning and executing the transition of applications, data, or infrastructure from one environment to another, such as from on-premises systems to the cloud or between different cloud providers. Key strategies include:
1. **Rehost (Lift and Shift)**: Moving applications as-is to the new environment with minimal changes, which is often the quickest but may not fully leverage new platform benefits.
2. **Replatform**: Making some optimizations or changes to adapt applications for the new environment, enhancing performance or scalability while retaining most of the existing architecture.
3. **Refactor**: Redesigning and modifying applications to optimize for the new environment, often taking advantage of new features and improving functionality or performance.
4. **Repurchase**: Replacing existing applications with new, often cloud-based, solutions that better meet current needs.
5. **Retain**: Keeping certain applications or systems in their current environment due to specific constraints or requirements.
6. **Retire**: Decommissioning applications that are no longer needed or are redundant.
Each strategy has its own trade-offs in terms of cost, complexity, and benefits, and the choice depends on factors like the application’s architecture, business needs, and resource availability.
Visit the following resources to learn more:
- [@article@Databases as a Challenge for Continuous Delivery](https://phauer.com/2015/databases-challenge-continuous-delivery/)
- [@video@AWS Cloud Migration Strategies](https://www.youtube.com/watch?v=9ziB82V7qVM)
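The six strategies above can be turned into a toy decision helper; the assessment attributes and ordering below are illustrative assumptions, not an official framework.

```python
def pick_strategy(app: dict) -> str:
    """Map a simple app assessment to one of the six migration strategies."""
    if not app["still_needed"]:
        return "retire"                     # decommission redundant apps
    if app["compliance_locked"]:
        return "retain"                     # constraints keep it where it is
    if app["saas_replacement_exists"]:
        return "repurchase"                 # buy instead of migrate
    if app["needs_cloud_native_features"]:
        return "refactor"                   # redesign for the new platform
    if app["minor_tweaks_enough"]:
        return "replatform"                 # small optimizations on the way
    return "rehost"                         # default: lift and shift

legacy_crm = {
    "still_needed": True,
    "compliance_locked": False,
    "saas_replacement_exists": True,
    "needs_cloud_native_features": False,
    "minor_tweaks_enough": False,
}
print(pick_strategy(legacy_crm))  # repurchase
```

In practice, each branch also weighs cost and effort, but encoding the questions makes the trade-offs between the strategies explicit.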

@ -1,13 +1,11 @@
# MongoDB
MongoDB is a NoSQL, open-source database designed for storing and managing large volumes of unstructured or semi-structured data. It uses a document-oriented data model where data is stored in BSON (Binary JSON) format, which allows for flexible and hierarchical data representation. Unlike traditional relational databases, MongoDB doesn't require a fixed schema, making it suitable for applications with evolving data requirements or varying data structures. It supports horizontal scaling through sharding and offers high availability with replica sets. MongoDB is commonly used for applications requiring rapid development, real-time analytics, and large-scale data handling, such as content management systems, IoT applications, and big data platforms.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated MongoDB Roadmap](/mongodb)
- [@official@MongoDB Website](https://www.mongodb.com/)
- [@official@Learning Path for MongoDB Developers](https://learn.mongodb.com/catalog)
- [@article@MongoDB Online Sandbox](https://mongoplayground.net/)
- [@feed@daily.dev MongoDB Feed](https://app.daily.dev/tags/mongodb)

@ -1,18 +1,9 @@
# Monitoring
Monitoring involves continuously observing and tracking the performance, availability, and health of systems, applications, and infrastructure. It typically includes collecting and analyzing metrics, logs, and events to ensure systems are operating within desired parameters. Monitoring helps detect anomalies, identify potential issues before they escalate, and provides insights into system behavior. It often involves tools and platforms that offer dashboards, alerts, and reporting features to facilitate real-time visibility and proactive management. Effective monitoring is crucial for maintaining system reliability, performance, and for supporting incident response and troubleshooting.
- [@article@Observability vs Monitoring?](https://www.dynatrace.com/news/blog/observability-vs-monitoring/)
- [@article@What is APM?](https://www.sumologic.com/blog/the-role-of-apm-and-distributed-tracing-in-observability/)
- [@article@Top monitoring tools 2024](https://thectoclub.com/tools/best-application-monitoring-software/)
- [@video@Grafana Explained in 5 Minutes](https://www.youtube.com/watch?v=lILY8eSspEo)
- [@feed@daily.dev Monitoring Feed](https://app.daily.dev/tags/monitoring)
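The alert-rule idea a tool like Grafana implements can be sketched in a few lines: evaluate a window of metric samples against a threshold, firing only on repeated breaches to avoid paging on a single noisy sample. The function and thresholds are our own illustration.

```python
def evaluate_alert(samples, threshold, min_breaches=3):
    """Fire only if the threshold is breached repeatedly within the
    window, to avoid flapping on one noisy sample."""
    breaches = sum(1 for s in samples if s > threshold)
    return breaches >= min_breaches

# A window of p95 latency samples in milliseconds (illustrative).
latency_ms = [120, 480, 510, 530, 95]
print(evaluate_alert(latency_ms, threshold=400))  # True -> notify on-call
print(evaluate_alert([120, 480, 95], threshold=400))  # False -> just noise
```

Dashboards visualize the same sample windows; alert rules simply automate reading them.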

@ -1,10 +1,9 @@
# Monolithic Apps
Monolithic applications are designed as a single, cohesive unit where all components—such as user interface, business logic, and data access—are tightly integrated and run as a single service. This architecture simplifies development and deployment since the entire application is managed and deployed together. However, it can lead to challenges with scalability, maintainability, and agility as the application grows. Changes to one part of the application may require redeploying the entire system, and scaling might necessitate duplicating the entire application rather than scaling individual components. Monolithic architectures can be suitable for smaller applications or projects with less complex requirements, but many organizations transition to microservices or modular architectures to address these limitations as they scale.
Visit the following resources to learn more:
- [@article@Pattern: Monolithic Architecture](https://microservices.io/patterns/monolithic.html)
- [@article@Monolithic Architecture - Advantages & Disadvantages](https://datamify.medium.com/monolithic-architecture-advantages-and-disadvantages-e71a603eec89)
- [@video@Monolithic vs Microservice Architecture](https://www.youtube.com/watch?v=NdeTGlZ__Do)

@ -1,9 +1,10 @@
# MS IIS
Microsoft Internet Information Services (IIS) is a flexible, secure, and high-performance web server developed by Microsoft for hosting and managing web applications and services on Windows Server. IIS supports a variety of web technologies, including ASP.NET, PHP, and static content. It provides features such as request handling, authentication, SSL/TLS encryption, and URL rewriting. IIS also offers robust management tools, including a graphical user interface and command-line options, for configuring and monitoring web sites and applications. It is commonly used for deploying enterprise web applications and services in a Windows-based environment, offering integration with other Microsoft technologies and services.
Visit the following resources to learn more:
- [@official@Official Website](https://www.iis.net/)
- [@video@Learn Windows Web Server IIS](https://www.youtube.com/watch?v=1VdxPWwtISA)
- [@video@What is IIS?](https://www.youtube.com/watch?v=hPWSqEXOjQY)
- [@feed@Explore top posts about .NET](https://app.daily.dev/tags/.net?ref=roadmapsh)

@ -1,9 +1,10 @@
# MS SQL
Microsoft SQL Server (MS SQL) is a relational database management system developed by Microsoft for managing and storing structured data. It supports a wide range of data operations, including querying, transaction management, and data warehousing. SQL Server provides tools and features for database design, performance optimization, and security, including support for complex queries through T-SQL (Transact-SQL), data integration with SQL Server Integration Services (SSIS), and business intelligence with SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS). It is commonly used in enterprise environments for applications requiring reliable data storage, transaction processing, and reporting.
Visit the following resources to learn more:
- [@roadmap@SQL Roadmap](/sql)
- [@article@MS SQL website](https://www.microsoft.com/en-ca/sql-server/)
- [@article@Tutorials for SQL Server](https://docs.microsoft.com/en-us/sql/sql-server/tutorials-for-sql-server-2016?view=sql-server-ver15)
- [@video@SQL Server tutorial for beginners](https://www.youtube.com/watch?v=-EPMOaV7h_Q)

@ -1,12 +1,12 @@
# MySQL
MySQL is an open-source relational database management system (RDBMS) known for its speed, reliability, and ease of use. It uses SQL (Structured Query Language) for database interactions and supports a range of features for data management, including transactions, indexing, and stored procedures. MySQL is widely used for web applications, data warehousing, and various other applications due to its scalability and flexibility. It integrates well with many programming languages and platforms, and is often employed in conjunction with web servers and frameworks in popular software stacks like LAMP (Linux, Apache, MySQL, PHP/Python/Perl). MySQL is maintained by Oracle Corporation and has a large community and ecosystem supporting its development and use.
Visit the following resources to learn more:
- [@official@MySQL website](https://www.mysql.com/)
- [@article@W3Schools - MySQL tutorial](https://www.w3schools.com/mySQl/default.asp)
- [@video@MySQL tutorial for beginners](https://www.youtube.com/watch?v=7S_tz1z_5bA)
- [@article@MySQL for Developers](https://planetscale.com/courses/mysql-for-developers/introduction/course-introduction)
- [@article@MySQL Tutorial](https://www.mysqltutorial.org/)
- [@video@MySQL Full Course for free](https://www.youtube.com/watch?v=5OdVJbNCSso)
- [@feed@Explore top posts about MySQL](https://app.daily.dev/tags/mysql?ref=roadmapsh)

@ -1,9 +1,10 @@
# N plus one problem
The N+1 query problem happens when your code executes N additional query statements to fetch the same data that could have been retrieved when executing the primary query.
The N+1 problem occurs in database querying when an application performs a query to retrieve a list of items and then issues additional queries to fetch related data for each item individually. This results in inefficiencies because the number of queries grows proportionally with the number of items retrieved. For example, if an application retrieves 10 items and then performs an additional query for each item to fetch related details, it ends up executing 11 queries (1 for the list and 10 for the details), when 2 would have sufficed (1 for the list and 1 batched query for all the details). This can severely impact performance, especially with larger datasets. Solutions to the N+1 problem typically involve optimizing queries to use joins or batching techniques to retrieve related data in fewer, more efficient queries.
Visit the following resources to learn more:
- [@article@In Detail Explanation of N+1 Problem](https://medium.com/doctolib/understanding-and-fixing-n-1-query-30623109fe89)
- [@article@What is the N+1 Problem](https://planetscale.com/blog/what-is-n-1-query-problem-and-how-to-solve-it)
- [@article@Solving N+1 Problem: For Java Backend Developers](https://dev.to/jackynote/solving-the-notorious-n1-problem-optimizing-database-queries-for-java-backend-developers-2o0p)
- [@video@SQLite and the N+1 (no) problem](https://www.youtube.com/watch?v=qPfAQY_RahA)
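The pattern is easiest to see in code. Below is a minimal sketch using Python's built-in `sqlite3` with a hypothetical authors/books schema: first the N+1 shape (one query for the list, one more per row), then the single-join fix.

```python
import sqlite3

# In-memory database with an illustrative authors/books schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO books VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

# N+1: one query for the authors, then one extra query per author.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    conn.execute(
        "SELECT title FROM books WHERE author_id = ?", (author_id,)
    ).fetchall()  # executed N times, one round trip each

# Fix: a single JOIN retrieves the same data in one query.
rows = conn.execute("""
    SELECT a.name, b.title
    FROM authors a JOIN books b ON b.author_id = a.id
""").fetchall()
```

With an ORM, the equivalent fix is usually an eager-loading option rather than a hand-written join.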

@ -1,8 +1,10 @@
# Graph databases
# Neo4j
A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it.
Neo4j is a highly popular open-source graph database designed to store, manage, and query data as interconnected nodes and relationships. Unlike traditional relational databases that use tables and rows, Neo4j uses a graph model where data is represented as nodes (entities) and edges (relationships), allowing for highly efficient querying of complex, interconnected data. It supports Cypher, a declarative query language specifically designed for graph querying, which simplifies operations like traversing relationships and pattern matching. Neo4j is well-suited for applications involving complex relationships, such as social networks, recommendation engines, and fraud detection, where understanding and leveraging connections between data points is crucial.
Visit the following resources to learn more:
- [@article@What is a Graph Database?](https://neo4j.com/developer/graph-database/)
- [@official@Neo4j Website](https://neo4j.com)
- [@video@Neo4j in 100 Seconds](https://www.youtube.com/watch?v=T6L9EoBy8Zk)
- [@video@Neo4j Course for Beginners](https://www.youtube.com/watch?v=_IgbB24scLI)
- [@feed@Explore top posts about Backend Development](https://app.daily.dev/tags/backend?ref=roadmapsh)

@ -1,9 +1,10 @@
# Nginx
NGINX is a powerful web server and uses a non-threaded, event-driven architecture that enables it to outperform Apache if configured correctly. It can also do other important things, such as load balancing, HTTP caching, or be used as a reverse proxy.
Nginx is a high-performance, open-source web server and reverse proxy server known for its efficiency, scalability, and low resource consumption. Originally developed as a web server, Nginx is also commonly used as a load balancer, HTTP cache, and mail proxy. It excels at handling a large number of concurrent connections due to its asynchronous, event-driven architecture. Nginx's features include support for serving static content, handling dynamic content through proxying to application servers, and providing SSL/TLS termination. Its modular design allows for extensive customization and integration with various applications and services, making it a popular choice for modern web infrastructures.
Visit the following resources to learn more:
- [@official@Official Website](https://nginx.org/)
- [@video@NGINX Explained in 100 Seconds](https://www.youtube.com/watch?v=JKxlsvZXG7c)
- [@video@NGINX Tutorial for Beginners](https://www.youtube.com/watch?v=9t9Mp0BGnyI)
- [@feed@Explore top posts about Nginx](https://app.daily.dev/tags/nginx?ref=roadmapsh)

@ -1,12 +1,9 @@
# Database Normalization
Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by Edgar F. Codd as part of his relational model.
Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).
Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by Edgar F. Codd as part of his relational model. Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).
Visit the following resources to learn more:
- [@article@What is Normalization in DBMS (SQL)? 1NF, 2NF, 3NF, BCNF Database with Example](https://www.guru99.com/database-normalization.html)
- [@article@Database normalization](https://en.wikipedia.org/wiki/Database_normalization)
- [@video@Basic Concept of Database Normalization](https://www.youtube.com/watch?v=xoTyrdT9SZI)
- [@video@Complete guide to Database Normalization in SQL](https://www.youtube.com/watch?v=rBPQ5fg_kiY)
- [@feed@Explore top posts about Database](https://app.daily.dev/tags/database?ref=roadmapsh)
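A tiny decomposition example makes the idea concrete. The sketch below (hypothetical customers/orders schema, SQLite for portability) shows how moving repeated customer details into their own table means an update touches exactly one row instead of many.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: customer details would be repeated on every order row.
conn.execute("""CREATE TABLE orders_flat (
    order_id INTEGER PRIMARY KEY,
    customer_name TEXT, customer_email TEXT, item TEXT)""")

# Normalized by decomposition: customer facts are stored once,
# and orders reference them by key.
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        item TEXT);
    INSERT INTO customers VALUES (1, 'Ann', 'ann@example.com');
    INSERT INTO orders VALUES (10, 1, 'book'), (11, 1, 'pen');
""")

# Changing the email now updates exactly one row, and every order
# sees the new value through the join -- no risk of stale copies.
conn.execute("UPDATE customers SET email = 'a@example.com' WHERE id = 1")
rows = conn.execute("""
    SELECT o.order_id, c.email FROM orders o
    JOIN customers c ON c.id = o.customer_id
    ORDER BY o.order_id
""").fetchall()
```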

@ -1,7 +1,13 @@
# NoSQL databases
NoSQL databases offer data storage and retrieval that is modelled differently to "traditional" relational databases. NoSQL databases typically focus more on horizontal scaling, eventual consistency, speed and flexibility and is used commonly for big data and real-time streaming applications.
NoSQL is often described as a BASE system (**B**asically **A**vailable, **S**oft state, **E**ventual consistency) as opposed to SQL/relational which typically focus on ACID (Atomicity, Consistency, Isolation, Durability). Common NoSQL data structures include key-value pair, wide column, graph and document.
NoSQL databases are a category of database management systems designed for handling unstructured, semi-structured, or rapidly changing data. Unlike traditional relational databases, which use fixed schemas and SQL for querying, NoSQL databases offer flexible data models and can be classified into several types:
1. **Document Stores**: Store data in JSON, BSON, or XML formats, allowing for flexible and hierarchical data structures (e.g., MongoDB, CouchDB).
2. **Key-Value Stores**: Store data as key-value pairs, suitable for high-speed read and write operations (e.g., Redis, Riak).
3. **Column-Family Stores**: Store data in columns rather than rows, which is useful for handling large volumes of data and wide columnar tables (e.g., Apache Cassandra, HBase).
4. **Graph Databases**: Optimize the storage and querying of data with complex relationships using graph structures (e.g., Neo4j, Amazon Neptune).
NoSQL databases are often used for applications requiring high scalability, flexibility, and performance, such as real-time analytics, content management systems, and distributed data storage.
Visit the following resources to learn more:

@ -1,15 +1,10 @@
# OAuth
OAuth stands for **O**pen **Auth**orization and is an open standard for authorization. It works to authorize devices, APIs, servers and applications using access tokens rather than user credentials, known as "secure delegated access".
In its most simplest form, OAuth delegates authentication to services like Facebook, Amazon, Twitter and authorizes third-party applications to access the user account **without** having to enter their login and password.
It is mostly utilized for REST/APIs and only provides a limited scope of a user's data.
OAuth is an open standard for authorization that allows third-party applications to access a user's resources without exposing their credentials. It works by issuing access tokens after users grant permission, which applications then use to interact with resource servers on behalf of the user. This process involves a resource owner (the user), a resource server (which holds the data), and an authorization server (which issues tokens). OAuth enables secure, token-based access management, commonly used for granting applications permissions to interact with services like social media accounts or cloud storage.
Visit the following resources to learn more:
- [@article@Okta - What the Heck is OAuth](https://developer.okta.com/blog/2017/06/21/what-the-heck-is-oauth)
- [@article@DigitalOcean - An Introduction to OAuth 2](https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2)
- [@video@What is OAuth really all about](https://www.youtube.com/watch?v=t4-416mg6iU)
- [@video@OAuth 2.0: An Overview](https://www.youtube.com/watch?v=CPbvxxslDTU)
- [@video@OAuth 2 Explained In Simple Terms](https://www.youtube.com/watch?v=ZV5yTm4pT8g)
- [@feed@Explore top posts about OAuth](https://app.daily.dev/tags/oauth?ref=roadmapsh)
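The three roles can be sketched in a few lines of plain Python. This is not a real OAuth implementation (no redirects, no grant types, no token expiry), just a toy model of the core idea: the client holds a scoped token, never the user's password. All class and scope names here are hypothetical.

```python
import secrets

class AuthorizationServer:
    """Issues and validates opaque access tokens (toy model)."""
    def __init__(self):
        self._tokens = {}  # token -> (user, scopes)

    def issue_token(self, user, scopes):
        # In real OAuth this happens only after the resource
        # owner grants consent via one of the defined flows.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (user, scopes)
        return token

    def validate(self, token):
        return self._tokens.get(token)

class ResourceServer:
    """Holds the protected data; trusts only valid, scoped tokens."""
    def __init__(self, auth_server, data):
        self.auth = auth_server
        self.data = data  # user -> protected resource

    def get_profile(self, token):
        claims = self.auth.validate(token)
        if claims is None or "profile:read" not in claims[1]:
            return None  # reject unknown or under-scoped tokens
        user, _scopes = claims
        return self.data[user]

auth = AuthorizationServer()
api = ResourceServer(auth, {"alice": {"name": "Alice"}})

# The third-party app receives a token, never Alice's credentials.
token = auth.issue_token("alice", {"profile:read"})
profile = api.get_profile(token)
```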

@ -1,16 +1,12 @@
# Observability
In software development, observability is the measure of how well we can understand a system from the work it does, and how to make it better.
So what makes a system to be "observable"? It is its ability of producing and collecting metrics, logs and traces in order for us to understand what happens under the hood and identify issues and bottlenecks faster.
You can of course implement all those features by yourself, but there are a lot of softwares out there that can help you with it like Datadog, Sentry and CloudWatch.
Observability refers to the ability to understand and monitor the internal state of a system based on its external outputs, such as metrics, logs, and traces. It encompasses collecting, analyzing, and visualizing data to gain insights into system performance, detect anomalies, and troubleshoot issues. Effective observability involves integrating these data sources to provide a comprehensive view of system behavior, enabling proactive management and rapid response to problems. It helps in understanding complex systems, improving reliability, and optimizing performance by making it easier to identify and address issues before they impact users.
Visit the following resources to learn more:
- [@article@DataDog Docs](https://docs.datadoghq.com/)
- [@article@AWS CloudWatch Docs](https://aws.amazon.com/cloudwatch/getting-started/)
- [@article@Sentry Docs](https://docs.sentry.io/)
- [@video@AWS re:Invent 2017: Improving Microservice and Serverless Observability with Monitor](https://www.youtube.com/watch?v=Wx0SHRb2xcI)
- [@article@Observability and Instrumentation: What They Are and Why They Matter](https://newrelic.com/blog/best-practices/observability-instrumentation)
- [@video@What is observability?](https://www.youtube.com/watch?v=--17See0KHs)
- [@feed@Explore top posts about Observability](https://app.daily.dev/tags/observability?ref=roadmapsh)
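A minimal form of instrumentation can be sketched with the standard library alone: a decorator that emits one structured log line per call, carrying a name and a duration, which is roughly the raw material metrics and traces are built from. The function names below are illustrative; real systems would ship these events to a backend like Datadog, Sentry, or CloudWatch.

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

def observed(fn):
    """Emit a structured log event (a crude metric/span) per call."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            log.info(json.dumps({
                "event": "call",
                "function": fn.__name__,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            }))
    return wrapper

@observed
def handle_request(user_id):
    time.sleep(0.01)  # simulate work
    return {"user": user_id}

result = handle_request(42)
```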

@ -1,12 +1,10 @@
# Open api spec
The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.
An OpenAPI definition can then be used by documentation generation tools to display the API, code generation tools to generate servers and clients in various programming languages, testing tools, and many other use cases.
The OpenAPI Specification (OAS), formerly known as Swagger, is a standard for defining and documenting RESTful APIs. It provides a structured format in YAML or JSON to describe API endpoints, request and response formats, authentication methods, and other metadata. By using OAS, developers can create a comprehensive and machine-readable API description that facilitates client generation, automated documentation, and testing. This specification promotes consistency and clarity in API design, enhances interoperability between different systems, and enables tools to generate client libraries, server stubs, and interactive API documentation.
Visit the following resources to learn more:
- [@article@OpenAPI Specification Website](https://swagger.io/specification/)
- [@article@Open API Live Editor](https://swagger.io/tools/swagger-editor/)
- [@article@Official training guide](https://swagger.io/docs/specification/about/)
- [@official@OpenAPI Specification Website](https://swagger.io/specification/)
- [@official@Open API Live Editor](https://swagger.io/tools/swagger-editor/)
- [@video@OpenAPI 3.0: How to Design and Document APIs with the Latest OpenAPI Specification 3.0](https://www.youtube.com/watch?v=6kwmW_p_Tig)
- [@video@REST API and OpenAPI: It’s Not an Either/Or Question](https://www.youtube.com/watch?v=pRS9LRBgjYg)
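For a feel of the format, here is a minimal illustrative OpenAPI 3.0 fragment (the API title and path are made up) describing a single endpoint, its path parameter, and its response schema:

```yaml
openapi: 3.0.3
info:
  title: Example Users API   # illustrative only
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Fetch a single user
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
```

From a definition like this, tooling can render interactive docs or generate client and server stubs.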

@ -1,13 +1,11 @@
# OpenID
OpenID is a protocol that utilizes the authorization and authentication mechanisms of OAuth 2.0 and is now widely adopted by many identity providers on the Internet.
It solves the problem of needing to share user's personal info between many different web services(e.g. online shops, discussion forums etc.)
OpenID is an open standard for decentralized authentication that allows users to log in to multiple websites and applications using a single set of credentials, managed by an identity provider (IdP). It enables users to authenticate their identity through an external service, simplifying the login process and reducing the need for multiple usernames and passwords. OpenID typically works in conjunction with OAuth 2.0 for authorization, allowing users to grant access to their data while maintaining security. This approach enhances user convenience and streamlines identity management across various platforms.
Visit the following resources to learn more:
- [@official@Official Website](https://openid.net/)
- [@official@What is OpenID](https://openid.net/connect/)
- [@article@OAuth vs OpenID](https://securew2.com/blog/oauth-vs-openid-which-is-better)
- [@article@OpenID Connect Protocol](https://auth0.com/docs/authenticate/protocols/openid-connect-protocol)
- [@video@An Illustrated Guide to OAuth and OpenID Connect](https://www.youtube.com/watch?v=t18YB3xDfXI)
- [@video@OAuth 2.0 and OpenID Connect (in plain English)](https://www.youtube.com/watch?v=996OiexHze0)
- [@feed@Explore top posts about Authentication](https://app.daily.dev/tags/authentication?ref=roadmapsh)

@ -1,6 +1,6 @@
# Oracle
Oracle Database Server or sometimes called Oracle RDBMS or even simply Oracle is a world leading relational database management system produced by Oracle Corporation.
Oracle Database is a highly robust, enterprise-grade relational database management system (RDBMS) developed by Oracle Corporation. Known for its scalability, reliability, and comprehensive features, Oracle Database supports complex data management tasks and mission-critical applications. It provides advanced functionalities like SQL querying, transaction management, high availability through clustering, and data warehousing. Oracle's database solutions include support for various data models, such as relational, spatial, and graph, and offer tools for security, performance optimization, and data integration. It is widely used in industries requiring large-scale, secure, and high-performance data processing.
Visit the following resources to learn more:

@ -1,9 +1,10 @@
# ORMs
Object-Relational Mapping (ORM) is a technique that lets you query and manipulate data from a database using an object-oriented paradigm. When talking about ORM, most people are referring to a library that implements the Object-Relational Mapping technique, hence the phrase "an ORM".
Object-Relational Mapping (ORM) is a programming technique that allows developers to interact with a relational database using object-oriented programming concepts. ORM frameworks map database tables to classes and rows to objects, enabling developers to perform database operations through objects rather than writing raw SQL queries. This abstraction simplifies data manipulation and improves code maintainability by aligning database interactions with the application's object model. ORM tools handle the translation between objects and database schemas, manage relationships, and often provide features like lazy loading and caching. Popular ORM frameworks include Hibernate for Java, Entity Framework for .NET, and SQLAlchemy for Python.
Visit the following resources to learn more:
- [@article@Object Relational Mapping - Wikipedia](https://en.wikipedia.org/wiki/Object–relational_mapping)
- [@article@What is an ORM, how does it work, and how should I use one?](https://stackoverflow.com/a/1279678)
- [@article@What is an ORM](https://www.freecodecamp.org/news/what-is-an-orm-the-meaning-of-object-relational-mapping-database-tools/)
- [@video@Why Use an ORM?](https://www.youtube.com/watch?v=vHt2LC1EM3Q)
- [@feed@Explore top posts about Backend Development](https://app.daily.dev/tags/backend?ref=roadmapsh)
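The mapping idea itself fits in a few lines. The sketch below is a toy repository over SQLite, not a real ORM framework; Hibernate, Entity Framework, or SQLAlchemy add sessions, relationship mapping, lazy loading, caching, and migrations on top of this basic object-to-row translation. All names here are hypothetical.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    """One class per table, one instance per row."""
    id: int
    name: str

class UserRepository:
    """Hides the SQL behind an object-oriented interface."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, user):
        self.conn.execute(
            "INSERT INTO users VALUES (?, ?)", (user.id, user.name))

    def get(self, user_id):
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
        return User(*row) if row else None

conn = sqlite3.connect(":memory:")
repo = UserRepository(conn)
repo.add(User(1, "Ada"))
user = repo.get(1)  # the caller works with objects, not SQL
```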

@ -4,7 +4,7 @@ OWASP or Open Web Application Security Project is an online community that produ
Visit the following resources to learn more:
- [@article@Wikipedia - OWASP](https://en.wikipedia.org/wiki/OWASP)
- [@official@OWASP Website](https://owasp.org/)
- [@opensource@OWASP Application Security Verification Standard](https://github.com/OWASP/ASVS)
- [@article@OWASP Top 10 Security Risks](https://cheatsheetseries.owasp.org/IndexTopTen.html)
- [@article@OWASP Cheatsheets](https://cheatsheetseries.owasp.org/cheatsheets/AJAX_Security_Cheat_Sheet.html)

@ -1,13 +1,11 @@
# PHP
PHP is a general purpose scripting language often used for making dynamic and interactive Web pages. It was originally created by Danish-Canadian programmer Rasmus Lerdorf in 1994. The PHP reference implementation is now produced by The PHP Group and supported by PHP Foundation. PHP supports procedural and object-oriented styles of programming with some elements of functional programming as well.
PHP (Hypertext Preprocessor) is a widely-used, open-source scripting language designed primarily for web development but also applicable for general-purpose programming. It is embedded within HTML to create dynamic web pages and interact with databases, often working with MySQL or other database systems. PHP is known for its simplicity, ease of integration with various web servers, and extensive support for web-related functionalities. Its wide adoption is driven by its role in powering major platforms and content management systems like WordPress, Joomla, and Drupal. PHP's features include server-side scripting, session management, and support for various web protocols and formats.
Visit the following resources to learn more:
- [@official@PHP Website](https://php.net/)
- [@article@Learn PHP - W3Schools](https://www.w3schools.com/php/)
- [@article@PHP - The Right Way](https://phptherightway.com/)
- [@video@PHP for Beginners](https://www.youtube.com/watch?v=U2lQWR6uIuo\&list=PL3VM-unCzF8ipG50KDjnzhugceoSG3RTC)
- [@video@PHP For Absolute Beginners](https://www.youtube.com/watch?v=2eebptXfEvw)
- [@video@Full PHP 8 Tutorial - Learn PHP The Right Way In 2022](https://www.youtube.com/watch?v=sVbEyFZKgqk\&list=PLr3d3QYzkw2xabQRUpcZ_IBk9W50M9pe-)
- [@video@PHP for Beginners](https://www.youtube.com/watch?v=zZ6vybT1HQs)
- [@feed@Explore top posts about PHP](https://app.daily.dev/tags/php?ref=roadmapsh)

@ -1,9 +1,3 @@
# Learn a Language
Even if you’re a beginner the least you would have known is that Web Development is majorly classified into two facets: Frontend Development and Backend Development. And obviously, they both have their respective set of tools and technologies. For instance, when we talk about Frontend Development, there always comes 3 names first and foremost – HTML, CSS, and JavaScript.
In the same way, when it comes to Backend Web Development – we primarily require a backend (or you can say server-side) programming language to make the website function along with various other tools & technologies such as databases, frameworks, web servers, etc.
Pick a language from the given list and make sure to learn its quirks, core details about its runtime e.g. concurrency, memory model etc.
- [@article@Top Languages for job ads](https://www.tiobe.com/tiobe-index/)
Even if you’re a beginner the least you would have known is that Web Development is majorly classified into two facets: Frontend Development and Backend Development. And obviously, they both have their respective set of tools and technologies. For instance, when we talk about Frontend Development, there always comes 3 names first and foremost – HTML, CSS, and JavaScript. In the same way, when it comes to Backend Web Development – we primarily require a backend (or you can say server-side) programming language to make the website function along with various other tools & technologies such as databases, frameworks, web servers, etc.

@ -1,12 +1,12 @@
# PostgreSQL
PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and SQL compliance.
PostgreSQL is an advanced, open-source relational database management system (RDBMS) known for its robustness, extensibility, and standards compliance. It supports a wide range of data types and advanced features, including complex queries, foreign keys, and full-text search. PostgreSQL is highly extensible, allowing users to define custom data types, operators, and functions. It supports ACID (Atomicity, Consistency, Isolation, Durability) properties for reliable transaction processing and offers strong support for concurrency and data integrity. Its capabilities make it suitable for various applications, from simple web apps to large-scale data warehousing and analytics solutions.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated PostgreSQL DBA Roadmap](/postgresql-dba)
- [@official@Official Website](https://www.postgresql.org/)
- [@article@Learn PostgreSQL - Full Tutorial for Beginners](https://www.postgresqltutorial.com/)
- [@video@Learn PostgreSQL Tutorial - Full Course for Beginners](https://www.youtube.com/watch?v=qw--VYLpxG4)
- [@video@Postgres tutorial for Beginners](https://www.youtube.com/watch?v=eMIxuk0nOkU)
- [@video@PostgreSQL in 100 Seconds](https://www.youtube.com/watch?v=n2Fluyr3lbc)
- [@video@Postgres tutorial for Beginners](https://www.youtube.com/watch?v=SpfIwlAYaKk)
- [@feed@Explore top posts about PostgreSQL](https://app.daily.dev/tags/postgresql?ref=roadmapsh)

@ -1,11 +1,8 @@
# Profiling Performance
There are several ways to profile the performance of a database:
Profiling performance involves analyzing a system or application's behavior to identify bottlenecks, inefficiencies, and areas for optimization. This process typically involves collecting detailed information about resource usage, such as CPU and memory consumption, I/O operations, and execution time of functions or methods. Profiling tools can provide insights into how different parts of the code contribute to overall performance, highlighting slow or resource-intensive operations. By understanding these performance characteristics, developers can make targeted improvements, optimize code paths, and enhance system responsiveness and scalability. Profiling is essential for diagnosing performance issues and ensuring that applications meet desired performance standards.
- Monitor system performance: You can use tools like the Windows Task Manager or the Unix/Linux top command to monitor the performance of your database server. These tools allow you to see the overall CPU, memory, and disk usage of the system, which can help identify any resource bottlenecks.
- Use database-specific tools: Most database management systems (DBMSs) have their own tools for monitoring performance. For example, Microsoft SQL Server has the SQL Server Management Studio (SSMS) and the sys.dm_os_wait_stats dynamic management view, while Oracle has the Oracle Enterprise Manager and the v$waitstat view. These tools allow you to see specific performance metrics, such as the amount of time spent waiting on locks or the number of physical reads and writes.
- Use third-party tools: There are also several third-party tools that can help you profile the performance of a database. Some examples include SolarWinds Database Performance Analyzer, Quest Software Foglight, and Redgate SQL Monitor. These tools often provide more in-depth performance analysis and can help you identify specific issues or bottlenecks.
- Analyze slow queries: If you have specific queries that are running slowly, you can use tools like EXPLAIN PLAN or SHOW PLAN in MySQL or SQL Server to see the execution plan for the query and identify any potential issues. You can also use tools like the MySQL slow query log or the SQL Server Profiler to capture slow queries and analyze them further.
- Monitor application performance: If you are experiencing performance issues with a specific application that is using the database, you can use tools like Application Insights or New Relic to monitor the performance of the application and identify any issues that may be related to the database.
Have a look at the documentation for the database that you are using.
Learn more from the following resources:
- [@video@Performance Profiling](https://www.youtube.com/watch?v=MaauQTeGg2k)
- [@article@How to Profile SQL Queries for Better Performance](https://servebolt.com/articles/profiling-sql-queries/)
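At the application level, most languages ship a profiler. As a minimal sketch, Python's built-in `cProfile` can wrap a slow code path (the function below is made up) and report where the time goes:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately naive hot path to profile."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Summarize the most expensive calls by cumulative time.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
report = buffer.getvalue()
```

The report ranks functions by time spent, pointing directly at the code paths worth optimizing.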

@ -1,16 +1,12 @@
# Python
Python is a well known programming language which is both a strongly typed and a dynamically typed language. Being an interpreted language, code is executed as soon as it is written and the Python syntax allows for writing code in functional, procedural or object-oriented programmatic ways.
Python is a high-level, interpreted programming language known for its readability, simplicity, and versatility. Its design emphasizes code readability and a clear, straightforward syntax, making it accessible for both beginners and experienced developers. Python supports multiple programming paradigms, including procedural, object-oriented, and functional programming. It has a rich ecosystem of libraries and frameworks, such as Django and Flask for web development, Pandas and NumPy for data analysis, and TensorFlow and PyTorch for machine learning. Python is widely used in web development, data science, automation, and scripting, and it benefits from a strong community and extensive documentation.
Visit the following resources to learn more:
- [@roadmap@Visit Dedicated Python Roadmap](/python)
- [@official@Python Website](https://www.python.org/)
- [@official@Python Getting Started](https://www.python.org/about/gettingstarted/)
- [@article@Automate the Boring Stuff](https://automatetheboringstuff.com/)
- [@article@Python principles - Python basics](https://pythonprinciples.com/)
- [@article@W3Schools - Python Tutorial ](https://www.w3schools.com/python/)
- [@article@Python Crash Course](https://ehmatthes.github.io/pcc/)
- [@course@Python Full Course for free](https://www.youtube.com/watch?v=ix9cRaBkVe0)
- [@article@An Introduction to Python for Non-Programmers](https://thenewstack.io/an-introduction-to-python-for-non-programmers/)
- [@article@Getting Started with Python and InfluxDB](https://thenewstack.io/getting-started-with-python-and-influxdb/)
- [@feed@Explore top posts about Python](https://app.daily.dev/tags/python?ref=roadmapsh)

@ -1,6 +1,6 @@
# RabbitMQ
With tens of thousands of users, RabbitMQ is one of the most popular open-source message brokers. RabbitMQ is lightweight and easy to deploy on-premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.
RabbitMQ is an open-source message broker that facilitates the exchange of messages between distributed systems using the Advanced Message Queuing Protocol (AMQP). It enables asynchronous communication by queuing and routing messages between producers and consumers, which helps decouple application components and improve scalability and reliability. RabbitMQ supports features such as message durability, acknowledgments, and flexible routing through exchanges and queues. It is highly configurable, allowing for various messaging patterns, including publish/subscribe, request/reply, and point-to-point communication. RabbitMQ is widely used in enterprise environments for handling high-throughput messaging and integrating heterogeneous systems.
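The core benefit, producers and consumers decoupled by a queue, can be sketched without a broker at all using Python's standard library. This is not RabbitMQ (no AMQP, no durability, no routing), just an in-process illustration of the pattern; the message names are made up.

```python
import queue
import threading

broker = queue.Queue()  # stands in for a broker-managed queue
processed = []

def consumer():
    while True:
        message = broker.get()  # blocks until a message arrives
        if message is None:     # sentinel: shut down
            break
        processed.append(message.upper())
        broker.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The producer publishes without knowing who consumes, or when.
for msg in ["order.created", "order.paid"]:
    broker.put(msg)

broker.put(None)  # tell the consumer to stop
worker.join()
```

A real broker adds what this lacks: messages that survive restarts, acknowledgments and redelivery, and routing across processes and machines.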
Visit the following resources to learn more:

@ -1,11 +1,3 @@
# Real Time Data
There are many ways to get real time data from the backend. Some of them are:
- Websockets
- Server Sent Events
- Long Polling
- Short Polling
- [@video@Introduction to HTTP Polling and Web Sockets](https://www.youtube.com/watch?v=OsgrJDMPl58)
- [@article@Introduction to Long Polling](https://www.pubnub.com/guides/long-polling/)
Real-time data refers to information that is processed and made available immediately or with minimal delay, allowing users or systems to react promptly to current conditions. This type of data is essential in applications requiring immediate updates and responses, such as financial trading platforms, online gaming, real-time analytics, and monitoring systems. Real-time data processing involves capturing, analyzing, and delivering information as it is generated, often using technologies like stream processing frameworks (e.g., Apache Kafka, Apache Flink) and low-latency databases. Effective real-time data systems can handle high-speed data flows, ensuring timely and accurate decision-making.
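The difference between short polling and long polling, two of the common delivery techniques mentioned above, can be sketched with a blocking queue. This is an in-process toy, not an HTTP implementation; the event payload is made up.

```python
import queue

events = queue.Queue()

def short_poll():
    """Return immediately, whether or not data exists."""
    try:
        return events.get_nowait()
    except queue.Empty:
        return None  # client must come back and ask again

def long_poll(timeout=0.5):
    """Hold the request open until data arrives or the timeout expires."""
    try:
        return events.get(timeout=timeout)
    except queue.Empty:
        return None

first = short_poll()          # nothing published yet -> empty response
events.put("price=101.5")
second = long_poll()          # returns as soon as the event is available
```

WebSockets and Server-Sent Events go further by keeping a single connection open so the server can push without any polling at all.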

@ -1,12 +1,10 @@
# Key-Value Databases
# Redis
A key-value database (KV database) is a type of database that stores data as a collection of key-value pairs. In a KV database, each piece of data is identified by a unique key, and the value is the data associated with that key.
KV databases are designed for fast and efficient storage and retrieval of data, and they are often used in applications that require high performance and low latency. They are particularly well-suited for storing large amounts of unstructured data, such as log data and user profiles.
Some popular KV databases include Redis, Memcached, and LevelDB. These databases are often used in combination with other types of databases, such as relational databases or document databases, to provide a complete and scalable data storage solution.
Redis is an open-source, in-memory data structure store known for its speed and versatility. It supports various data types, including strings, lists, sets, hashes, and sorted sets, and provides functionalities such as caching, session management, real-time analytics, and message brokering. Redis operates as a key-value store, allowing for rapid read and write operations, and is often used to enhance performance and scalability in applications. It supports persistence options to save data to disk, replication for high availability, and clustering for horizontal scaling. Redis is widely used for scenarios requiring low-latency access to data and high-throughput performance.
Visit the following resources to learn more:
- [@article@Key-Value Databases - Wikipedia](https://en.wikipedia.org/wiki/Key-value_database)
- [@feed@Explore top posts about Backend Development](https://app.daily.dev/tags/backend?ref=roadmapsh)
- [@official@Redis Website](https://redis.io/)
- [@video@Redis in 100 Seconds](https://www.youtube.com/watch?v=G1rOthIU-uo)
- [@course@Redis Crash Course](https://www.youtube.com/watch?v=XCsS_NVAa1g)
- [@feed@Explore top posts about Redis](https://app.daily.dev/tags/redis?ref=roadmapsh)
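The key-value model Redis implements can be sketched in a few lines: string keys map to values, with optional expiry (analogous to Redis's `SET key value EX seconds`). This is only an in-memory illustration; Redis itself is a networked server with many more data types, persistence, and replication:

```python
import time

class TinyKV:
    """Minimal key-value store with optional per-key TTL."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.monotonic() >= expires:
            del self._data[key]  # lazy expiry on read, as Redis also does
            return None
        return value

kv = TinyKV()
kv.set("session:42", "alice")
kv.set("cache:page", "<html>", ttl=0.01)
print(kv.get("session:42"))  # alice
time.sleep(0.02)
print(kv.get("cache:page"))  # None (expired)
```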

@@ -1,9 +1,10 @@
# Redis
Redis is an open source (BSD licensed), in-memory **data structure store** used as a database, cache, message broker, and streaming engine. Redis provides data structures such as [strings](https://redis.io/topics/data-types-intro#strings), [hashes](https://redis.io/topics/data-types-intro#hashes), [lists](https://redis.io/topics/data-types-intro#lists), [sets](https://redis.io/topics/data-types-intro#sets), [sorted sets](https://redis.io/topics/data-types-intro#sorted-sets) with range queries, [bitmaps](https://redis.io/topics/data-types-intro#bitmaps), [hyperloglogs](https://redis.io/topics/data-types-intro#hyperloglogs), [geospatial indexes](https://redis.io/commands/geoadd), and [streams](https://redis.io/topics/streams-intro). Redis has built-in [replication](https://redis.io/topics/replication), [Lua scripting](https://redis.io/commands/eval), [LRU eviction](https://redis.io/topics/lru-cache), [transactions](https://redis.io/topics/transactions), and different levels of [on-disk persistence](https://redis.io/topics/persistence), and provides high availability via [Redis Sentinel](https://redis.io/topics/sentinel) and automatic partitioning with [Redis Cluster](https://redis.io/topics/cluster-tutorial).
Redis is an open-source, in-memory data structure store known for its speed and versatility. It supports various data types, including strings, lists, sets, hashes, and sorted sets, and provides functionalities such as caching, session management, real-time analytics, and message brokering. Redis operates as a key-value store, allowing for rapid read and write operations, and is often used to enhance performance and scalability in applications. It supports persistence options to save data to disk, replication for high availability, and clustering for horizontal scaling. Redis is widely used for scenarios requiring low-latency access to data and high-throughput performance.
Visit the following resources to learn more:
- [@official@Redis Website](https://redis.io/)
- [@video@Redis in 100 Seconds](https://www.youtube.com/watch?v=G1rOthIU-uo)
- [@course@Redis Crash Course](https://www.youtube.com/watch?v=XCsS_NVAa1g)
- [@feed@Explore top posts about Redis](https://app.daily.dev/tags/redis?ref=roadmapsh)

@@ -1,6 +1,6 @@
# Relational Databases
A relational database is **a type of database that stores and provides access to data points that are related to one another**. Relational databases store data in a series of tables. Interconnections between the tables are specified as foreign keys. A foreign key is a unique reference from one row in a relational table to another row in a table, which can be the same table but is most commonly a different table.
Relational databases are a type of database management system (DBMS) that organizes data into structured tables with rows and columns, using a schema to define data relationships and constraints. They employ Structured Query Language (SQL) for querying and managing data, supporting operations such as data retrieval, insertion, updating, and deletion. Relational databases enforce data integrity through keys (primary and foreign) and constraints (such as unique and not-null), and they are designed to handle complex queries, transactions, and data relationships efficiently. Examples of relational databases include MySQL, PostgreSQL, and Oracle Database. They are commonly used for applications requiring structured data storage, strong consistency, and complex querying capabilities.
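The concepts above (tables, keys, constraints, SQL joins) fit in a short, runnable sketch using Python's built-in sqlite3 module; the same SQL ideas carry over to MySQL, PostgreSQL, and Oracle Database (the table and row contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE books (
        id        INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        -- foreign key: each book row references an author row
        author_id INTEGER NOT NULL REFERENCES authors(id)
    );
""")
conn.execute("INSERT INTO authors (id, name) VALUES (1, 'Ursula K. Le Guin')")
conn.execute("INSERT INTO books (title, author_id) VALUES ('The Dispossessed', 1)")

# A JOIN follows the foreign-key relationship across tables.
row = conn.execute("""
    SELECT a.name, b.title
    FROM books b JOIN authors a ON a.id = b.author_id
""").fetchone()
print(row)  # ('Ursula K. Le Guin', 'The Dispossessed')
```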
Visit the following resources to learn more:

@@ -1,10 +1,10 @@
# Repo Hosting Services
When working on a team, you often need a remote place to put your code so others can access it, create their own branches, and create or review pull requests. These services often include issue tracking, code review, and continuous integration features. A few popular choices are GitHub, GitLab, BitBucket, and AWS CodeCommit.
Repo hosting services are platforms that provide storage, management, and collaboration tools for version-controlled code repositories. These services support version control systems like Git, Mercurial, or Subversion, allowing developers to manage and track changes to their codebases, collaborate with others, and automate workflows. Key features often include branching and merging, pull requests, issue tracking, code review, and integration with continuous integration/continuous deployment (CI/CD) pipelines. Popular repo hosting services include GitHub, GitLab, and Bitbucket, each offering various levels of free and paid features tailored to different team sizes and project requirements.
Visit the following resources to learn more:
- [@opensource@GitHub](https://github.com/features/)
- [@article@GitLab](https://about.gitlab.com/)
- [@article@BitBucket](https://bitbucket.org/product/guides/getting-started/overview)
- [@official@GitHub](https://github.com)
- [@official@GitLab](https://about.gitlab.com/)
- [@official@Bitbucket](https://bitbucket.org/product/guides/getting-started/overview)
- [@article@How to choose the best source code repository](https://blockandcapital.com/en/choose-code-repository/)

@@ -1,11 +1,11 @@
# REST
REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other.
A REST API (Representational State Transfer Application Programming Interface) is an architectural style for designing networked applications. It relies on standard HTTP methods (GET, POST, PUT, DELETE) to interact with resources, which are represented as URIs (Uniform Resource Identifiers). REST APIs are stateless, meaning each request from a client to a server must contain all the information needed to understand and process the request. They use standard HTTP status codes to indicate the outcome of requests and often communicate in formats like JSON or XML. REST APIs are widely used due to their simplicity, scalability, and ease of integration with web services and applications.
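A minimal sketch of such an API, using only Python's standard library: a `GET` request for a resource URI returns a JSON representation with a standard status code (the `/users/1` resource and its fields are invented for illustration; production services add routing, the remaining HTTP methods, and fuller error handling):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {"1": {"id": 1, "name": "ada"}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stateless: everything needed to serve the request is in the
        # request itself (here, the resource id in the URI path).
        user = USERS.get(self.path.rsplit("/", 1)[-1])
        if user is None:
            self.send_response(404)  # standard status code for "not found"
            self.end_headers()
            return
        body = json.dumps(user).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/users/1"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # {'id': 1, 'name': 'ada'}
```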
Visit the following resources to learn more:
- [@article@REST Fundamental](https://dev.to/cassiocappellari/fundamentals-of-rest-api-2nag)
- [@article@What is a REST API?](https://www.redhat.com/en/topics/api/what-is-a-rest-api)
- [@article@Roy Fielding's dissertation chapter, Representational State Transfer (REST)](https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm)
- [@article@Learn REST: A RESTful Tutorial](https://restapitutorial.com/)
- [@video@What is a REST API?](https://www.youtube.com/watch?v=-mN3VyJuCjM)
- [@feed@Explore top posts about REST API](https://app.daily.dev/tags/rest-api?ref=roadmapsh)

@@ -1 +1,8 @@
# RethinkDB
RethinkDB is an open-source, distributed NoSQL database designed for real-time applications. It focuses on providing real-time capabilities by allowing applications to automatically receive updates when data changes, using its changefeed feature. RethinkDB's data model is based on JSON documents, and it supports rich queries, including joins, aggregations, and filtering. It offers a flexible schema and supports horizontal scaling through sharding and replication for high availability. Although development on RethinkDB ceased in 2016, its approach to real-time data and powerful querying capabilities make it notable for applications needing immediate data updates and responsiveness.
Learn more from the following resources:
- [@official@RethinkDB Website](https://rethinkdb.com/)
- [@course@RethinkDB Crash Course](https://www.youtube.com/watch?v=pW3PFtchHDc)

@@ -1,6 +1,6 @@
# Ruby
Ruby is a high-level, interpreted programming language that blends Perl, Smalltalk, Eiffel, Ada, and Lisp. Ruby focuses on simplicity and productivity along with a syntax that reads and writes naturally. Ruby supports procedural, object-oriented and functional programming and is dynamically typed.
Ruby is a high-level, object-oriented programming language known for its simplicity, productivity, and elegant syntax. Designed to be intuitive and easy to read, Ruby emphasizes developer happiness and quick development cycles. It supports multiple programming paradigms, including procedural, functional, and object-oriented programming. Ruby is particularly famous for its web framework, Ruby on Rails, which facilitates rapid application development by providing conventions and tools for building web applications efficiently. The language's flexibility, combined with its rich ecosystem of libraries and a strong community, makes it popular for web development, scripting, and prototyping.
Visit the following resources to learn more:

@@ -1,11 +1,11 @@
# Rust
Rust is a modern systems programming language focusing on safety, speed, and concurrency. It accomplishes these goals by being memory safe without using garbage collection.
Rust is a systems programming language known for its focus on safety, performance, and concurrency. It provides fine-grained control over system resources while ensuring memory safety without needing a garbage collector. Rust's ownership model enforces strict rules on how data is accessed and managed, preventing common issues like null pointer dereferences and data races. Its strong type system and modern features, such as pattern matching and concurrency support, make it suitable for a wide range of applications, from low-level systems programming to high-performance web servers and tools. Rust is gaining traction in both industry and open source for its reliability and efficiency.
Visit the following resources to learn more:
- [@article@The Rust Programming Language - online book](https://doc.rust-lang.org/book/)
- [@article@Rust by Example - collection of runnable examples](https://doc.rust-lang.org/stable/rust-by-example/index.html)
- [@official@The Rust Programming Language - online book](https://doc.rust-lang.org/book/)
- [@article@Rust vs. Go: Why They’re Better Together](https://thenewstack.io/rust-vs-go-why-theyre-better-together/)
- [@article@Rust by the Numbers: The Rust Programming Language in 2021](https://thenewstack.io/rust-by-the-numbers-the-rust-programming-language-in-2021/)
- [@video@Learn Rust Programming](https://www.youtube.com/watch?v=BpPEoZW5IiY)
- [@feed@Explore top posts about Rust](https://app.daily.dev/tags/rust?ref=roadmapsh)
