Add content for performance antipatterns

pull/3331/head
Kamran Ahmed 2 years ago
parent ab36350cdc
commit 6582d65935
  1. src/components/YouTubeBanner.astro (2)
  2. src/roadmaps/system-design/content/116-performance-antipatterns/100-busy-database.md (1)
  3. src/roadmaps/system-design/content/116-performance-antipatterns/101-busy-frontend.md (9)
  4. src/roadmaps/system-design/content/116-performance-antipatterns/102-chatty-io.md (13)
  5. src/roadmaps/system-design/content/116-performance-antipatterns/103-extraneous-fetching.md (1)
  6. src/roadmaps/system-design/content/116-performance-antipatterns/104-improper-instantiation.md (2)
  7. src/roadmaps/system-design/content/116-performance-antipatterns/105-monolithic-persistence.md (3)
  8. src/roadmaps/system-design/content/116-performance-antipatterns/106-no-caching.md (14)
  9. src/roadmaps/system-design/content/116-performance-antipatterns/107-noisy-neighbor.md (5)
  10. src/roadmaps/system-design/content/116-performance-antipatterns/108-retry-storm.md (2)
  11. src/roadmaps/system-design/content/116-performance-antipatterns/109-synchronous-io.md (22)
  12. src/roadmaps/system-design/content/116-performance-antipatterns/index.md (4)

@@ -3,7 +3,7 @@ import Icon from './Icon.astro';
---
<div
class='sticky top-0 border-b border-b-yellow-300 z-10 flex h-[37px]'
class='sticky top-0 border-b border-b-yellow-300 z-20 flex h-[37px]'
youtube-banner
>
<a

@@ -5,4 +5,3 @@ A busy database in system design refers to a database that is handling a high vo
To learn more, visit the following links:
- [Busy Database antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/busy-database/)
- [Database Design](https://www.sciencedirect.com/topics/computer-science/database-design)
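As a rough illustration of the antipattern behind the linked Microsoft article (pushing processing work into the database so it stays busy with more than data access), here is a minimal TypeScript sketch; `runQuery`, the table names and the SQL are hypothetical stand-ins, not code from the roadmap.

```typescript
// Hypothetical stand-in for a SQL client; a real one would talk to the database.
type OrderRow = { id: number; total: number };
async function runQuery(sql: string): Promise<OrderRow[]> {
  console.log('executing:', sql.trim());
  return [{ id: 1, total: 42.5 }, { id: 2, total: 17 }];
}

// Busy-database flavour: formatting and per-row transformation logic is pushed
// into SQL, so the database spends CPU on work that is not data access.
async function reportInDatabase(): Promise<OrderRow[]> {
  return runQuery(`
    SELECT order_id AS id, SUM(price * quantity) AS total
    FROM order_items GROUP BY order_id
    -- ...plus CASE logic, string/XML formatting, etc. done in the database
  `);
}

// Alternative: keep the query simple and do the presentation work in the
// application tier, which is usually easier to scale out than the database.
async function reportInApplication(): Promise<string[]> {
  const rows = await runQuery(
    'SELECT order_id AS id, SUM(price * quantity) AS total FROM order_items GROUP BY order_id'
  );
  return rows.map((r) => `Order ${r.id}: ${r.total.toFixed(2)}`);
}

reportInApplication().then(console.log);
```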

@@ -1,8 +1,11 @@
# Busy Frontend
A busy frontend in system design refers to a frontend that is handling a high volume of requests or traffic, this can occur when a system is experiencing high traffic or when a frontend is not properly optimized for the workload it is handling. This can lead to Performance degradation, Increased resource utilization, Increased error rates, and Poor user experience. To address a busy frontend, a number of approaches can be taken such as Scaling out, Optimizing the code, Caching, and Load balancing.
Performing asynchronous work on a large number of background threads can starve other concurrent foreground tasks of resources, decreasing response times to unacceptable levels.
To learn more, visit the following link:
Resource-intensive tasks can increase the response times for user requests and cause high latency. One way to improve response times is to offload a resource-intensive task to a separate thread. This approach lets the application stay responsive while processing happens in the background. However, tasks that run on a background thread still consume resources. If there are too many of them, they can starve the threads that are handling requests.
This problem typically occurs when an application is developed as a monolithic piece of code, with all of the business logic combined into a single tier shared with the presentation layer.
To learn more about this and how to fix this pattern, visit the following link:
- [Busy Front End antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/busy-front-end/)
- [What is Front end system design?](https://www.youtube.com/watch?v=XPNMiWyHBAU)
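To make the thread-starvation point above concrete, here is a minimal TypeScript sketch of one possible mitigation: capping how much background work may run at once so foreground request handling is not starved. The `createLimiter` helper is a made-up illustration; the linked Microsoft article goes further and moves such work out of the front end into a separate backend/worker tier.

```typescript
// Minimal concurrency limiter: background jobs share a small pool instead of
// spawning unbounded work that competes with foreground request handling.
function createLimiter(maxConcurrent: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  const acquire = (): Promise<void> =>
    new Promise((resolve) => {
      if (active < maxConcurrent) {
        active++;
        resolve();
      } else {
        waiting.push(() => {
          active++;
          resolve();
        });
      }
    });

  const release = (): void => {
    active--;
    waiting.shift()?.(); // hand the freed slot to the next queued job, if any
  };

  return async function run<T>(job: () => Promise<T>): Promise<T> {
    await acquire();
    try {
      return await job();
    } finally {
      release();
    }
  };
}

// Usage: at most 2 resource-intensive background jobs run at a time.
const runInBackground = createLimiter(2);
for (let i = 0; i < 10; i++) {
  runInBackground(async () => {
    await new Promise((r) => setTimeout(r, 100)); // stand-in for heavy work
    console.log('finished background job', i);
  });
}
```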

@@ -1,8 +1,13 @@
# Chat IO
# Chatty I/O
Chat IO in system design refers to the design of a chat system, which allows real-time communication between multiple users. A chat system typically consists of the following components: Client, Server, Messaging protocol, Message store, and Notification. To design a chat system, there are several key considerations to keep in mind such as Scalability, Reliability, and Security.
The cumulative effect of a large number of I/O requests can have a significant impact on performance and responsiveness.
Network calls and other I/O operations are inherently slow compared to compute tasks. Each I/O request typically has significant overhead, and the cumulative effect of numerous I/O operations can slow down the system. Here are some common causes of chatty I/O.
- Reading and writing individual records to a database as distinct requests
- Implementing a single logical operation as a series of HTTP requests
- Reading and writing to a file on disk
To learn more, visit the following links:
- [Chat Applications System Design](https://javascript.plainenglish.io/chat-applications-system-design-6a070c60c8cd)
- [Design A Chat System](https://bytebytego.com/courses/system-design-interview/design-a-chat-system)
- [Chatty I/O antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/chatty-io/)
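A minimal TypeScript sketch of the first cause above (reading individual records as distinct requests) next to the usual fix, batching; `fetchOne` and `fetchMany` are hypothetical stand-ins for real data-access calls.

```typescript
type Product = { id: number; name: string };

// In-memory stand-in for a remote table; each fetch* call below represents
// one network round trip in a real system.
const table = new Map<number, Product>([
  [1, { id: 1, name: 'keyboard' }],
  [2, { id: 2, name: 'mouse' }],
  [3, { id: 3, name: 'monitor' }],
]);

async function fetchOne(id: number): Promise<Product | undefined> {
  return table.get(id); // imagine: SELECT ... WHERE id = ?
}

async function fetchMany(ids: number[]): Promise<Product[]> {
  // imagine: SELECT ... WHERE id IN (...) — one round trip for the whole set
  return ids.map((id) => table.get(id)).filter((p): p is Product => p !== undefined);
}

// Chatty: N round trips for N records.
async function loadProductsChatty(ids: number[]): Promise<Product[]> {
  const out: Product[] = [];
  for (const id of ids) {
    const product = await fetchOne(id);
    if (product) out.push(product);
  }
  return out;
}

// Batched: one logical operation maps to a single I/O request.
async function loadProductsBatched(ids: number[]): Promise<Product[]> {
  return fetchMany(ids);
}

loadProductsBatched([1, 2, 3]).then(console.log);
```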

@@ -12,4 +12,3 @@ Extraneous fetching can lead to a number of issues, such as:
Visit the following links to learn more:
- [Extraneous Fetching antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/extraneous-fetching/)
- [What’s the difference between extraneous and confounding variables?](https://www.scribbr.com/frequently-asked-questions/extraneous-vs-confounding-variables/)
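Assuming the usual reading of the linked Microsoft article (an operation retrieves more data than it actually needs), here is a small TypeScript sketch; the order data and helper functions are hypothetical.

```typescript
type Order = { id: number; customerId: number; total: number; lineItems: string[] };

// In-memory stand-in for a large orders table.
const allOrders: Order[] = [
  { id: 1, customerId: 7, total: 30, lineItems: ['keyboard', 'mouse'] },
  { id: 2, customerId: 9, total: 12, lineItems: ['cable'] },
];

// Extraneous fetching: pull every column of every row over the wire
// (think SELECT * FROM orders), then filter and aggregate in memory.
async function customerSpendWasteful(customerId: number): Promise<number> {
  const everything = await Promise.resolve(allOrders);
  return everything
    .filter((o) => o.customerId === customerId)
    .reduce((sum, o) => sum + o.total, 0);
}

// Better: filter and project at the data store so only the needed values travel
// (think SELECT total FROM orders WHERE customer_id = ?).
async function customerSpend(customerId: number): Promise<number> {
  const totals = await Promise.resolve(
    allOrders.filter((o) => o.customerId === customerId).map((o) => o.total)
  );
  return totals.reduce((sum, t) => sum + t, 0);
}

customerSpend(7).then(console.log); // 30
```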

@@ -3,5 +3,5 @@
Improper instantiation in system design refers to the practice of creating unnecessary instances of an object, class or service, which can lead to performance and scalability issues. This can happen when the system is not properly designed, when the code is not written in an efficient way, or when the code is not optimized for the specific use case.
Learn more from the following links:
- [Improper Instantiation antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/improper-instantiation/)
- [What is Instantiation?](https://www.techtarget.com/whatis/definition/instantiation)
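A minimal TypeScript sketch of the problem: repeatedly constructing an object that is designed to be created once and shared. `SearchClient` is a made-up stand-in for any client that holds connections or other costly state.

```typescript
// Stand-in for an object that is expensive to create (connection pools,
// TLS handshakes, metadata caches, and so on).
class SearchClient {
  constructor() {
    console.log('opening connections...'); // costly one-time setup
  }
  async query(term: string): Promise<string[]> {
    return [`result for ${term}`];
  }
}

// Improper instantiation: a new client per call, so the setup cost is paid on
// every request and sockets/handles can be exhausted under load.
async function searchImproper(term: string): Promise<string[]> {
  const client = new SearchClient();
  return client.query(term);
}

// Better: create the client once and share it; most such clients are designed
// to be long-lived and safe to use concurrently.
const sharedClient = new SearchClient();
async function search(term: string): Promise<string[]> {
  return sharedClient.query(term);
}

search('antipatterns').then(console.log);
```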

@@ -1,8 +1,7 @@
# Monolithic Persistence
Monolithic Persistence in system design refers to the use of a single, monolithic database to store all of the data for an application or system. This approach can be used for simple, small-scale systems but as the system grows and evolves it can become a bottleneck, resulting in poor scalability, limited flexibility, and increased complexity. To address these limitations, a number of approaches can be taken such as Microservices, Sharding, and NoSQL databases.
Monolithic Persistence refers to the use of a single, monolithic database to store all of the data for an application or system. This approach can be used for simple, small-scale systems but as the system grows and evolves it can become a bottleneck, resulting in poor scalability, limited flexibility, and increased complexity. To address these limitations, a number of approaches can be taken such as Microservices, Sharding, and NoSQL databases.
To learn more, visit the following links:
- [Monolithic Persistence antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/monolithic-persistence/)
- [System Design: Monoliths and Microservices](https://dev.to/karanpratapsingh/system-design-monoliths-and-microservices-24jn)
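One hedged TypeScript sketch of the alternatives mentioned above: put data access behind small repository interfaces so that different kinds of data (transactional records vs. high-volume logs) can later live in separate stores without touching business logic. The `DataStore` interface and the repositories are hypothetical, not from the linked articles.

```typescript
// A minimal persistence abstraction; concrete implementations could be a
// relational database, a document store, a log-optimised store, etc.
interface DataStore {
  save(collection: string, record: object): Promise<void>;
}

// Transactional business data and high-volume telemetry get separate
// repositories, so they no longer have to share one monolithic database.
class OrderRepository {
  constructor(private readonly store: DataStore) {}
  saveOrder(order: { id: number; total: number }): Promise<void> {
    return this.store.save('orders', order);
  }
}

class LogRepository {
  constructor(private readonly store: DataStore) {}
  append(entry: { level: string; message: string }): Promise<void> {
    return this.store.save('logs', entry);
  }
}

// Wiring: both can point at the same store today and be split across stores
// later without changing the calling code.
const inMemoryStore = (): DataStore => ({ save: async () => {} });
const orders = new OrderRepository(inMemoryStore());
const logs = new LogRepository(inMemoryStore());
orders.saveOrder({ id: 1, total: 99 });
logs.append({ level: 'info', message: 'order saved' });
```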

@@ -1,15 +1,13 @@
# No Caching
Monolithic persistence in system design refers to the use of a single, monolithic database to store all of the data for an application or system. This approach can be used for simple, small-scale systems, but as the system grows and evolves, it can become a bottleneck, resulting in poor scalability, limited flexibility, and increased complexity.
The No Caching antipattern occurs when a cloud application that handles many concurrent requests repeatedly fetches the same data. This can reduce performance and scalability.
A monolithic persistence can have several disadvantages:
When data is not cached, it can cause a number of undesirable behaviors, including:
- Scalability
- Limited Flexibility
- Increased Complexity
- Single Point of Failure
- Repeatedly fetching the same information from a resource that is expensive to access, in terms of I/O overhead or latency.
- Repeatedly constructing the same objects or data structures for multiple requests.
- Making excessive calls to a remote service that has a service quota and throttles clients past a certain limit.
Learn from the following links:
In turn, these problems can lead to poor response times, increased contention in the data store, and poor scalability.
- [What is Caching in system design?](https://enjoyalgorithms.com/blog/caching-system-design-concept)
- [No Caching antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/no-caching/)
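A minimal cache-aside sketch in TypeScript showing how repeated fetches of the same data can be avoided. The in-memory `Map`, the TTL and `loadProfileFromDb` are illustrative choices, not something prescribed by the linked articles.

```typescript
type Profile = { id: string; name: string };

// Hypothetical slow lookup (database query, remote service call, etc.).
async function loadProfileFromDb(userId: string): Promise<Profile> {
  await new Promise((resolve) => setTimeout(resolve, 100)); // simulate I/O latency
  return { id: userId, name: `user-${userId}` };
}

// Cache-aside: check the cache first, fall back to the source on a miss,
// then populate the cache with a TTL so stale entries eventually expire.
const cache = new Map<string, { value: Profile; expiresAt: number }>();
const TTL_MS = 60_000;

async function getProfile(userId: string): Promise<Profile> {
  const hit = cache.get(userId);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // served from cache, no I/O
  }
  const value = await loadProfileFromDb(userId);
  cache.set(userId, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// The first call pays the latency; repeated calls within the TTL do not.
getProfile('42').then(() => getProfile('42')).then(console.log);
```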

@@ -1,6 +1,6 @@
# Noisy Neighbor
Noisy neighbor in system design refers to a situation in which one or more components of a system are utilizing a disproportionate amount of shared resources, leading to resource contention and reduced performance for other components. This can occur when a system is not properly designed or configured to handle the workload, or when a component is behaving unexpectedly.
Noisy neighbor refers to a situation in which one or more components of a system are utilizing a disproportionate amount of shared resources, leading to resource contention and reduced performance for other components. This can occur when a system is not properly designed or configured to handle the workload, or when a component is behaving unexpectedly.
Examples of noisy neighbor scenarios include:
@@ -10,5 +10,4 @@ Examples of noisy neighbor scenarios include:
Learn from the following links:
- [Noisy Neighbor](https://docs.aws.amazon.com/wellarchitected/latest/saas-lens/noisy-neighbor.html)
- [Get started with Noisy Neighbor antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/noisy-neighbor/noisy-neighbor)
- [Noisy Neighbor antipattern](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/noisy-neighbor/noisy-neighbor)
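One common mitigation, not spelled out in the lines above, is to give every tenant its own request budget so a single tenant cannot monopolise shared capacity. Below is a minimal per-tenant token-bucket sketch in TypeScript; the class name and parameters are hypothetical.

```typescript
// Each tenant gets an independent token bucket: exhausting one bucket throttles
// only that tenant, leaving the shared resource available to the others.
class TenantRateLimiter {
  private readonly buckets = new Map<string, { tokens: number; lastRefill: number }>();

  constructor(private readonly ratePerSecond: number, private readonly burst: number) {}

  allow(tenantId: string): boolean {
    const now = Date.now();
    const bucket = this.buckets.get(tenantId) ?? { tokens: this.burst, lastRefill: now };

    // Refill proportionally to elapsed time, capped at the burst size.
    const elapsedSeconds = (now - bucket.lastRefill) / 1000;
    bucket.tokens = Math.min(this.burst, bucket.tokens + elapsedSeconds * this.ratePerSecond);
    bucket.lastRefill = now;

    const allowed = bucket.tokens >= 1;
    if (allowed) bucket.tokens -= 1; // spend a token for this request
    this.buckets.set(tenantId, bucket);
    return allowed;
  }
}

// Usage: 5 requests/second per tenant with a burst of 10.
const limiter = new TenantRateLimiter(5, 10);
console.log(limiter.allow('tenant-a')); // true until tenant-a's own budget runs out
```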

@@ -1,6 +1,6 @@
# Retry Storm
Retry Storm in system design refers to a situation in which a large number of retries are triggered in a short period of time, leading to a significant increase in traffic and resource usage. This can occur when a system is not properly designed to handle failures or when a component is behaving unexpectedly. This can lead to Performance degradation, Increased resource utilization, Increased network traffic, and Poor user experience. To address retry storms, a number of approaches can be taken such as Exponential backoff, Circuit breaking, and Monitoring and alerting.
Retry Storm refers to a situation in which a large number of retries are triggered in a short period of time, leading to a significant increase in traffic and resource usage. This can occur when a system is not properly designed to handle failures or when a component is behaving unexpectedly. This can lead to Performance degradation, Increased resource utilization, Increased network traffic, and Poor user experience. To address retry storms, a number of approaches can be taken such as Exponential backoff, Circuit breaking, and Monitoring and alerting.
To learn more, visit the following links:
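A minimal TypeScript sketch of the exponential backoff approach mentioned above, with jitter and a retry cap so failed calls back off instead of amplifying into a storm; the function and its defaults are illustrative only.

```typescript
// Retry with a capped number of attempts, exponential backoff and jitter.
async function retryWithBackoff<T>(
  operation: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // give up instead of retrying forever
      const exponential = baseDelayMs * 2 ** (attempt - 1); // 100, 200, 400, ...
      const withJitter = Math.random() * exponential; // spread retries out in time
      await new Promise((resolve) => setTimeout(resolve, withJitter));
    }
  }
}

// Usage with a hypothetical flaky dependency.
retryWithBackoff(async () => {
  if (Math.random() < 0.7) throw new Error('temporarily unavailable');
  return 'ok';
}).then(console.log, console.error);
```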

@@ -1,11 +1,21 @@
# Synchronous IO
# Synchronous I/O
In system design, synchronous IO refers to a type of input/output (IO) operation where the program execution is blocked or halted until the IO operation completes. This means that the program will wait for the IO operation to finish before it can continue executing the next instruction. Synchronous IO can be used in a variety of scenarios, such as:
Blocking the calling thread while I/O completes can reduce performance and affect vertical scalability.
- **Reading and writing files:** When a program needs to read or write a file, it can use synchronous IO to ensure that the operation completes before continuing.
- **Communicating with a database:** When a program needs to query or update a database, it can use synchronous IO to ensure that the operation completes before continuing.
- **Networking:** When a program needs to send or receive data over a network, it can use synchronous IO to ensure that the operation completes before continuing.
A synchronous I/O operation blocks the calling thread while the I/O completes. The calling thread enters a wait state and is unable to perform useful work during this interval, wasting processing resources.
To learn more, visit the following links:
Common examples of I/O include:
- Retrieving or persisting data to a database or any type of persistent storage.
- Sending a request to a web service.
- Posting a message or retrieving a message from a queue.
- Writing to or reading from a local file.
This antipattern typically occurs because:
- It appears to be the most intuitive way to perform an operation.
- The application requires a response from a request.
- The application uses a library that only provides synchronous methods for I/O.
- An external library performs synchronous I/O operations internally. A single synchronous I/O call can block an entire call chain.
- [What is Synchronous I/O antipattern?](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/synchronous-io/)
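To contrast the blocking behaviour described above with its asynchronous counterpart, here is a small Node.js/TypeScript sketch using the standard `fs` APIs; the config-file scenario itself is just an example.

```typescript
import { readFileSync } from 'node:fs';
import { readFile } from 'node:fs/promises';

// Synchronous I/O: the calling thread blocks until the read completes, so a
// server can do nothing else on that thread in the meantime.
function loadConfigBlocking(path: string): string {
  return readFileSync(path, 'utf8');
}

// Asynchronous I/O: the read is handed off to the OS and the thread stays free
// to handle other work; the result arrives via the returned promise.
async function loadConfig(path: string): Promise<string> {
  return readFile(path, 'utf8');
}

// Usage (the file name is arbitrary):
console.log(loadConfigBlocking('package.json').length, 'bytes (blocking)');
loadConfig('package.json').then((text) => console.log(text.length, 'bytes (async)'));
```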

@@ -1,9 +1,8 @@
# Performance Antipatterns
What are performance antipatterns in system design?
Performance antipatterns in system design refer to common mistakes or suboptimal practices that can lead to poor performance in a system. These patterns can occur at different levels of the system and can be caused by a variety of factors such as poor design, lack of optimization, or lack of understanding of the workload.
Examples of performance antipatterns include:
Some examples of performance antipatterns include:
- **N+1 queries:** This occurs when a system makes multiple queries to a database to retrieve related data, instead of using a single query to retrieve all the necessary data.
- **Chatty interfaces:** This occurs when a system makes too many small and frequent requests to an external service or API, instead of making fewer, larger requests.
@@ -13,4 +12,3 @@ Examples of performance antipatterns include:
Learn more from the following links:
- [Performance antipatterns for cloud applications](https://learn.microsoft.com/en-us/azure/architecture/antipatterns/)
- [Guide to Software Performance Antipatterns](http://www.perfeng.com/papers/antipat.pdf)
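As a concrete illustration of the N+1 queries item above, here is a minimal TypeScript sketch; the query helpers are hypothetical stand-ins, where each call represents one database round trip.

```typescript
type Author = { id: number; name: string };
type Post = { id: number; authorId: number; title: string };

// Stand-ins for database queries; each call = one round trip in a real system.
async function queryAuthors(): Promise<Author[]> {
  return [{ id: 1, name: 'Ada' }, { id: 2, name: 'Alan' }];
}
async function queryPostsByAuthor(authorId: number): Promise<Post[]> {
  return [{ id: authorId * 10, authorId, title: `post by author ${authorId}` }];
}
async function queryPostsByAuthors(authorIds: number[]): Promise<Post[]> {
  // A real implementation would issue a single WHERE author_id IN (...) query.
  const nested = await Promise.all(authorIds.map(queryPostsByAuthor));
  return nested.flat();
}

// N+1: one query for the authors, then one additional query per author.
async function loadFeedNPlusOne() {
  const authors = await queryAuthors();
  const feed: Array<{ author: Author; posts: Post[] }> = [];
  for (const author of authors) {
    feed.push({ author, posts: await queryPostsByAuthor(author.id) }); // N extra round trips
  }
  return feed;
}

// Better: two queries total, no matter how many authors there are.
async function loadFeed() {
  const authors = await queryAuthors();
  const posts = await queryPostsByAuthors(authors.map((a) => a.id));
  return authors.map((author) => ({
    author,
    posts: posts.filter((p) => p.authorId === author.id),
  }));
}

loadFeed().then((feed) => console.log(feed.length, 'authors loaded with 2 queries'));
```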