Add new topics to backend roadmap

pull/3174/head
Kamran Ahmed 2 years ago
parent 7a9dd74f21
commit 00b9ad0016
  1. 9
      content/roadmaps/101-backend/content-paths.json
  2. 7
      content/roadmaps/101-backend/content/108-more-about-databases/105-database-indexes.md
  3. 7
      content/roadmaps/101-backend/content/108-more-about-databases/105-read-contention.md
  4. 7
      content/roadmaps/101-backend/content/108-more-about-databases/106-data-replication.md
  5. 11
      content/roadmaps/101-backend/content/108-more-about-databases/106-profiling-performance.md
  6. 9
      content/roadmaps/101-backend/content/108-more-about-databases/107-sharding-strategies.md
  7. 10
      content/roadmaps/101-backend/content/108-more-about-databases/108-cap-theorem.md
  8. 11
      content/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/100-graceful-degradation.md
  9. 14
      content/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/101-throttling.md
  10. 14
      content/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/102-backpressure.md
  11. 14
      content/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/103-loadshifting.md
  12. 10
      content/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/104-circuit-breaker.md
  13. 0
      content/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/readme.md
  14. 1753
      public/project/backend.json

@@ -55,6 +55,8 @@
"more-about-databases:transactions": "/roadmaps/101-backend/content/108-more-about-databases/102-transactions.md",
"more-about-databases:n-plus-one-problem": "/roadmaps/101-backend/content/108-more-about-databases/103-n-plus-one-problem.md",
"more-about-databases:database-normalization": "/roadmaps/101-backend/content/108-more-about-databases/104-database-normalization.md",
"more-about-databases:read-contention": "/roadmaps/101-backend/content/108-more-about-databases/105-read-contention.md",
"more-about-databases:profiling-performance": "/roadmaps/101-backend/content/108-more-about-databases/106-profiling-performance.md",
"scaling-databases": "/roadmaps/101-backend/content/109-scaling-databases/readme.md",
"scaling-databases:database-indexes": "/roadmaps/101-backend/content/109-scaling-databases/100-database-indexes.md",
"scaling-databases:data-replication": "/roadmaps/101-backend/content/109-scaling-databases/101-data-replication.md",
@@ -133,7 +135,12 @@
"web-servers:caddy": "/roadmaps/101-backend/content/122-web-servers/102-caddy.md",
"web-servers:ms-iis": "/roadmaps/101-backend/content/122-web-servers/103-ms-iis.md",
"scalability": "/roadmaps/101-backend/content/123-scalability/readme.md",
"scalability:mitigation-strategies": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies.md",
"scalability:mitigation-strategies": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/readme.md",
"scalability:mitigation-strategies:graceful-degradation": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/100-graceful-degradation.md",
"scalability:mitigation-strategies:throttling": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/101-throttling.md",
"scalability:mitigation-strategies:backpressure": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/102-backpressure.md",
"scalability:mitigation-strategies:loadshifting": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/103-loadshifting.md",
"scalability:mitigation-strategies:circuit-breaker": "/roadmaps/101-backend/content/123-scalability/100-mitigation-strategies/104-circuit-breaker.md",
"scalability:instrumentation-monitoring-telemetry": "/roadmaps/101-backend/content/123-scalability/101-instrumentation-monitoring-telemetry.md",
"scalability:migration-strategies": "/roadmaps/101-backend/content/123-scalability/102-migration-strategies.md",
"scalability:horizontal-vertical-scaling": "/roadmaps/101-backend/content/123-scalability/103-horizontal-vertical-scaling.md",

@@ -1,7 +0,0 @@
# Database Indexes
An index is a data structure built on top of one or more columns of an existing table. It keeps a searchable, ordered summary of the indexed values so that the database can use it as a shortcut: it can look up matching rows directly instead of scanning the entire table.
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://www.freecodecamp.org/news/database-indexing-at-a-glance-bb50809d48bd/'>An in-depth look at Database Indexing</BadgeLink>
<BadgeLink badgeText='Watch' href='https://www.youtube.com/watch?v=-qNSXK7s7_w'>Database Indexing Explained</BadgeLink>

@@ -0,0 +1,7 @@
# Read Contention
Database contention is a situation that occurs when multiple users or processes are trying to access the same resource in a database concurrently, and there are limited resources available to handle these requests. This can cause delays or conflicts as the database tries to manage the competing demands for resources.
Contention can occur at various levels in a database, such as at the table level, the page level, or the row level. For example, if two users are trying to update the same row in a table at the same time, the database may have to resolve the conflict by choosing which update to apply.
Contention can have a negative impact on the performance of a database, as it can lead to delays in processing requests and reduced efficiency. To address contention, database administrators may need to optimize the design of the database, implement locking mechanisms, or increase the availability of resources.
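To make the row-level case concrete, here is a minimal Python sketch (illustrative only, not tied to any particular database): a lock plays the role of the database's row-level lock, serializing the competing read-modify-write updates so no update is lost.

```python
import threading

# Illustrative sketch: the lock stands in for a database row-level lock,
# serializing competing read-modify-write updates to the same "row".
row = {"balance": 0}
row_lock = threading.Lock()

def deposit(amount, times):
    for _ in range(times):
        with row_lock:                         # acquire the "row lock"
            current = row["balance"]           # read
            row["balance"] = current + amount  # write back

threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(row["balance"])  # 40000; without the lock, concurrent updates could be lost
```

The cost of this safety is exactly the contention described above: each writer may have to wait for the lock, which is why heavily contended rows become a bottleneck.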

@@ -1,7 +0,0 @@
# Data Replication
Data replication is the process by which data residing on a physical/virtual server(s) or cloud instance (primary instance) is continuously replicated or copied to a secondary server(s) or cloud instance (standby instance). Organizations replicate data to support high availability, backup, and/or disaster recovery.
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink badgeText='Watch' href='https://youtu.be/fUrKt-AQYtE'>What is Data Replication?</BadgeLink>

@@ -0,0 +1,11 @@
# Profiling Performance
There are several ways to profile the performance of a database:
* Monitor system performance: You can use tools like the Windows Task Manager or the Unix/Linux top command to monitor the performance of your database server. These tools allow you to see the overall CPU, memory, and disk usage of the system, which can help identify any resource bottlenecks.
* Use database-specific tools: Most database management systems (DBMSs) have their own tools for monitoring performance. For example, Microsoft SQL Server has the SQL Server Management Studio (SSMS) and the sys.dm_os_wait_stats dynamic management view, while Oracle has the Oracle Enterprise Manager and the v$waitstat view. These tools allow you to see specific performance metrics, such as the amount of time spent waiting on locks or the number of physical reads and writes.
* Use third-party tools: There are also several third-party tools that can help you profile the performance of a database. Some examples include SolarWinds Database Performance Analyzer, Quest Software Foglight, and Redgate SQL Monitor. These tools often provide more in-depth performance analysis and can help you identify specific issues or bottlenecks.
* Analyze slow queries: If you have specific queries that are running slowly, you can use tools like EXPLAIN in MySQL or SHOWPLAN in SQL Server to see the execution plan for the query and identify any potential issues. You can also use tools like the MySQL slow query log or the SQL Server Profiler to capture slow queries and analyze them further.
* Monitor application performance: If you are experiencing performance issues with a specific application that is using the database, you can use tools like Application Insights or New Relic to monitor the performance of the application and identify any issues that may be related to the database.
Have a look at the documentation for the database that you are using.
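The slow-query-log idea can be sketched in a few lines of Python (illustrative only; threshold and schema are made up): wrap query execution in a timer and record anything slower than a threshold, which is roughly what the MySQL slow query log does for you.

```python
import sqlite3
import time

# Illustrative sketch of a "slow query log": time each query and record
# anything at or above a threshold. The threshold is 0 here so that every
# query is captured for demonstration purposes.
SLOW_THRESHOLD = 0.0  # seconds; a real deployment would use e.g. 1.0
slow_log = []

def profiled_query(conn, sql, params=()):
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed >= SLOW_THRESHOLD:
        slow_log.append((sql, elapsed))
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

profiled_query(conn, "SELECT COUNT(*) FROM t WHERE n % 7 = 0")

for sql, elapsed in slow_log:
    print(f"{elapsed:.6f}s  {sql}")
```

Captured queries can then be fed into EXPLAIN (or your database's equivalent) to find out *why* they are slow.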

@@ -1,9 +0,0 @@
# Sharding Strategies
Sharding is a technique that splits a large dataset into smaller chunks (logical shards) and distributes those chunks across different machines or database nodes in order to spread the traffic load. It is a good mechanism for improving the scalability of an application. Many databases support sharding, but not all.
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://www.geeksforgeeks.org/database-sharding-a-system-design-concept/'>Database Sharding – System Design Interview Concept</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://en.wikipedia.org/wiki/Shard_(database_architecture)'>Wikipedia - Sharding in Database Architectures</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://stackoverflow.blog/2022/03/14/how-sharding-a-database-can-make-it-faster/'>How sharding a database can make it faster</BadgeLink>

@@ -1,10 +0,0 @@
# CAP Theorem
CAP is an acronym that stands for Consistency, Availability and Partition Tolerance. According to the CAP theorem, any distributed system can only guarantee two of the three properties at any point in time; you can't guarantee all three properties at once.
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://www.bmc.com/blogs/cap-theorem/'>What is CAP Theorem?</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://en.wikipedia.org/wiki/CAP_theorem'>CAP Theorem - Wikipedia</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://mwhittaker.github.io/blog/an_illustrated_proof_of_the_cap_theorem/'>An Illustrated Proof of the CAP Theorem</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://www.ibm.com/uk-en/cloud/learn/cap-theorem'>CAP Theorem and its Applications in NoSQL Databases</BadgeLink>
<BadgeLink colorScheme='purple' badgeText='Watch' href='https://www.youtube.com/watch?v=_RbsFXWRZ10'>What is CAP Theorem?</BadgeLink>

@@ -0,0 +1,11 @@
# Graceful Degradation
Graceful degradation is a design principle that states that a system should be designed to continue functioning, even if some of its components or features are not available. In the context of web development, graceful degradation refers to the ability of a web page or application to continue functioning, even if the user's browser or device does not support certain features or technologies.
Graceful degradation is often contrasted with progressive enhancement, a related design principle in which a system is built first around a baseline of widely supported features and then enhanced to take advantage of more advanced features and technologies where they are available.
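The same principle applies on the backend. Here is a minimal Python sketch (all names are illustrative): a product page tries to fetch personalized recommendations and, when that dependency is unavailable, falls back to a static list instead of failing the whole page.

```python
# Illustrative sketch of graceful degradation: if the (hypothetical)
# recommendation service is down, serve a static fallback list rather
# than failing the entire page.
FALLBACK_RECOMMENDATIONS = ["bestseller-1", "bestseller-2"]

def fetch_recommendations(user_id):
    # Stand-in for a call to a recommendation service that is down.
    raise ConnectionError("recommendation service unavailable")

def product_page(user_id):
    try:
        recs = fetch_recommendations(user_id)
        degraded = False
    except ConnectionError:
        recs = FALLBACK_RECOMMENDATIONS  # degrade, don't die
        degraded = True
    return {"product": "widget", "recommendations": recs, "degraded": degraded}

page = product_page(user_id=7)
print(page)
```

The user still gets a working page; only the personalized feature is degraded.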
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://blog.hubspot.com/website/graceful-degradation'>What is Graceful Degradation & Why Does it Matter?</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://newrelic.com/blog/best-practices/design-software-for-graceful-degradation'>Four Considerations When Designing Systems For Graceful Degradation</BadgeLink>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://farfetchtechblog.com/en/blog/post/the-art-of-failure-ii-graceful-degradation/'>The Art of Graceful Degradation</BadgeLink>

@@ -0,0 +1,14 @@
# Throttling
Throttling is a design pattern that is used to limit the rate at which a system or component can be used. It is commonly used in cloud computing environments to prevent overuse of resources, such as compute power, network bandwidth, or storage capacity.
There are several ways to implement throttling in a cloud environment:
* Rate limiting: This involves setting a maximum number of requests that can be made to a system or component within a specified time period.
* Resource allocation: This involves allocating a fixed amount of resources to a system or component and limiting further use once that allocation is exhausted.
* Token bucket: This involves using a "bucket" of tokens to represent the available resources, and then allowing a certain number of tokens to be "consumed" by each request. When the bucket is empty, additional requests are denied until more tokens become available.
Throttling is an important aspect of cloud design, as it helps to ensure that resources are used efficiently and that the system remains stable and available. It is often used in conjunction with other design patterns, such as auto-scaling and load balancing, to provide a scalable and resilient cloud environment.
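The token bucket described above can be sketched in a few lines of Python (an illustrative sketch, not any particular library's API): tokens refill at a fixed rate up to the bucket's capacity, each request consumes one token, and requests that find the bucket empty are throttled.

```python
import time

# Illustrative token-bucket sketch: tokens refill at `refill_rate` per
# second up to `capacity`; each allowed request consumes one token.
class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity        # max tokens the bucket holds
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # bucket empty: throttle this request

bucket = TokenBucket(capacity=5, refill_rate=1)
results = [bucket.allow() for _ in range(8)]
print(results)  # the first 5 pass; the rest are throttled until tokens refill
```

Because unused capacity accumulates as tokens, this scheme tolerates short bursts while still enforcing the average rate.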
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://aws.amazon.com/architecture/well-architected/serverless/patterns/throttling/'>Throttling - AWS Well-Architected Framework</BadgeLink>

@@ -0,0 +1,14 @@
# Backpressure
Backpressure is a design pattern that is used to manage the flow of data through a system, particularly in situations where the rate of data production exceeds the rate of data consumption. It is commonly used in cloud computing environments to prevent overloading of resources and to ensure that data is processed in a timely and efficient manner.
There are several ways to implement backpressure in a cloud environment:
* Buffering: This involves storing incoming data in a buffer until it can be processed, allowing the system to continue receiving data even if it is temporarily unable to process it.
* Batching: This involves grouping incoming data into batches and processing the batches in sequence, rather than processing each piece of data individually.
* Flow control: This involves using mechanisms such as flow control signals or windowing to regulate the rate at which data is transmitted between systems.
Backpressure is an important aspect of cloud design, as it helps to ensure that data is processed efficiently and that the system remains stable and available. It is often used in conjunction with other design patterns, such as auto-scaling and load balancing, to provide a scalable and resilient cloud environment.
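A bounded buffer is the simplest way to see backpressure in action. This Python sketch (illustrative only) has a producer that is faster than the consumer; once the fixed-size queue fills up, further puts are refused, which is the signal for the producer to slow down, retry later, or shed load.

```python
import queue

# Illustrative backpressure sketch: a bounded buffer between a fast
# producer and a slow consumer. When the buffer is full, puts fail,
# pushing the pressure back onto the producer.
buffer = queue.Queue(maxsize=3)

accepted, rejected = 0, 0
for item in range(10):           # fast producer: 10 items arrive at once
    try:
        buffer.put_nowait(item)  # non-blocking put: raises when buffer is full
        accepted += 1
    except queue.Full:
        rejected += 1            # backpressure signal: slow down, retry, or shed

print(f"accepted={accepted} rejected={rejected}")

# The slow consumer drains the buffer at its own pace.
drained = []
while not buffer.empty():
    drained.append(buffer.get_nowait())
print(drained)
```

A blocking `put()` on the same bounded queue is the other classic option: instead of rejecting work, it stalls the producer until the consumer catches up.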
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://aws.amazon.com/architecture/well-architected/serverless/patterns/backpressure/'>Backpressure - AWS Well-Architected Framework</BadgeLink>

@@ -0,0 +1,14 @@
# Load Shifting
Load shifting is a design pattern that is used to manage the workload of a system by shifting the load to different components or resources at different times. It is commonly used in cloud computing environments to balance the workload of a system and to optimize the use of resources.
There are several ways to implement load shifting in a cloud environment:
* Scheduling: This involves scheduling the execution of tasks or workloads to occur at specific times or intervals.
* Load balancing: This involves distributing the workload of a system across multiple resources, such as servers or containers, to ensure that the workload is balanced and that resources are used efficiently.
* Auto-scaling: This involves automatically adjusting the number of resources that are available to a system based on the workload, allowing the system to scale up or down as needed.
Load shifting is an important aspect of cloud design, as it helps to ensure that resources are used efficiently and that the system remains stable and available. It is often used in conjunction with other design patterns, such as throttling and backpressure, to provide a scalable and resilient cloud environment.
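The scheduling variant can be sketched as follows (all names, tasks, and hours are hypothetical): urgent work runs immediately, while deferrable batch work is queued and only executed during a configured off-peak window.

```python
# Illustrative sketch of load shifting via scheduling: urgent work runs
# right away; deferrable work is shifted to an assumed off-peak window.
OFF_PEAK_HOURS = range(1, 5)  # 01:00-04:59, a hypothetical low-traffic window
deferred = []

def submit(task, urgent, current_hour):
    if urgent or current_hour in OFF_PEAK_HOURS:
        return task()          # run now
    deferred.append(task)      # shift the load to off-peak
    return None

def run_deferred():
    results = [task() for task in deferred]
    deferred.clear()
    return results

# During peak hours (14:00) only urgent work runs immediately.
peak_result = submit(lambda: "served request", urgent=True, current_hour=14)
submit(lambda: "rebuilt report", urgent=False, current_hour=14)
print(peak_result, "| deferred:", len(deferred))

# Later, inside the off-peak window, the deferred batch is executed.
batch = run_deferred()
print(batch)
```

The effect is a flatter load curve: peak capacity is reserved for latency-sensitive requests, and cheap off-peak capacity absorbs the batch work.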
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://aws.amazon.com/architecture/well-architected/serverless/patterns/load-shifting/'>Load Shifting - AWS Well-Architected Framework</BadgeLink>

@@ -0,0 +1,10 @@
# Circuit Breaker
The circuit breaker design pattern is a way to protect a system from failures or excessive load by temporarily stopping certain operations if the system is deemed to be in a failed or overloaded state. It is commonly used in cloud computing environments to prevent cascading failures and to improve the resilience and availability of a system.
A circuit breaker consists of three states: closed, open, and half-open. In the closed state, the circuit breaker allows operations to proceed as normal. If the system encounters a failure or becomes overloaded, the circuit breaker moves to the open state, and all subsequent operations are immediately stopped. After a specified period of time, the circuit breaker moves to the half-open state, and a small number of operations are allowed to proceed. If these operations are successful, the circuit breaker moves back to the closed state; if they fail, the circuit breaker moves back to the open state.
The circuit breaker design pattern is useful for protecting a system from failures or excessive load by providing a way to temporarily stop certain operations and allow the system to recover. It is often used in conjunction with other design patterns, such as retries and fallbacks, to provide a more robust and resilient cloud environment.
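The three states above can be sketched in a small Python class (an illustrative sketch with made-up thresholds, not a production implementation): repeated failures trip the breaker open, calls then fail fast, and after a timeout a half-open trial call decides whether to close it again.

```python
import time

# Illustrative circuit-breaker sketch with the three states described
# above: closed (calls pass through), open (calls fail fast), and
# half-open (a trial call decides whether to close again).
class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=0.1):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout  # seconds before trying half-open
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # allow a trial call
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0
            self.state = "closed"  # trial (or normal call) succeeded
            return result

breaker = CircuitBreaker()

def flaky():
    raise IOError("downstream unavailable")

for _ in range(3):  # three consecutive failures trip the breaker
    try:
        breaker.call(flaky)
    except IOError:
        pass

state_after_failures = breaker.state
print(state_after_failures)         # open: further calls now fail fast

time.sleep(0.15)                    # wait out the reset timeout
trial = breaker.call(lambda: "ok")  # half-open trial succeeds -> closed
print(trial, breaker.state)
```

Failing fast while open is the point: the struggling downstream service gets time to recover instead of being hammered by retries.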
<ResourceGroupTitle>Free Content</ResourceGroupTitle>
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://aws.amazon.com/architecture/well-architected/serverless/patterns/circuit-breaker/'>Circuit Breaker - AWS Well-Architected Framework</BadgeLink>

File diff suppressed because it is too large