Add content to cloud design patterns

pull/3331/head
Kamran Ahmed 2 years ago
parent a715a85b46
commit ad4f35764d
  1. 16
      src/roadmaps/system-design/content/114-idempotent-operations.md
  2. 7
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/100-asynchronous-request-reply.md
  3. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/101-claim-check.md
  4. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/102-choreography.md
  5. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/103-competing-consumers.md
  6. 3
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/104-pipes-and-filters.md
  7. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/105-priority-queue.md
  8. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/106-publisher-subscriber.md
  9. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/107-queue-based-load-leveling.md
  10. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/108-scheduling-agent-supervisor.md
  11. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/109-sequential-convoy.md
  12. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/100-messaging/index.md
  13. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/100-cache-aside.md
  14. 4
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/101-cqrs.md
  15. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/102-event-sourcing.md
  16. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/103-index-table.md
  17. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/104-materialized-view.md
  18. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/105-sharding.md
  19. 3
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/106-static-content-hosting.md
  20. 7
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/107-valet-key.md
  21. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/101-data-management/index.md
  22. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/100-ambassador.md
  23. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/101-anti-corruption-layer.md
  24. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/102-backends-for-frontend.md
  25. 4
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/103-cqrs.md
  26. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/104-compute-resource-consolidation.md
  27. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/105-external-configuration-store.md
  28. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/106-gateway-aggregation.md
  29. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/107-gateway-offloading.md
  30. 9
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/108-gateway-routing.md
  31. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/109-leader-election.md
  32. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/110-pipes-and-filters.md
  33. 7
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/111-sidecar.md
  34. 3
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/112-static-content-hosting.md
  35. 3
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/113-strangler-fig.md
  36. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/102-design-and-implementation/index.md
  37. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/100-availability/deployment-stamps.md
  38. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/100-availability/geodes.md
  39. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/100-availability/health-endpoint-monitoring.md
  40. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/100-availability/index.md
  41. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/100-availability/queue-based-load-leveling.md
  42. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/100-availability/throttling.md
  43. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/101-high-availability/bulkhead.md
  44. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/101-high-availability/circuit-breaker.md
  45. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/101-high-availability/deployment-stamps.md
  46. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/101-high-availability/geodes.md
  47. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/101-high-availability/health-endpoint-monitoring.md
  48. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/101-high-availability/index.md
  49. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/bulkhead.md
  50. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/circuit-breaker.md
  51. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/compensating-transaction.md
  52. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/health-endpoint-monitoring.md
  53. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/index.md
  54. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/leader-election.md
  55. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/queue-based-load-leveling.md
  56. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/retry.md
  57. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/102-resiliency/scheduler-agent-supervisor.md
  58. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/103-security/gatekeeper.md
  59. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/103-security/index.md
  60. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/103-security/valet-key.md
  61. 2
      src/roadmaps/system-design/content/118-cloud-design-patterns/103-reliability-patterns/index.md
  62. 5
      src/roadmaps/system-design/content/118-cloud-design-patterns/index.md

@ -2,21 +2,7 @@
Idempotent operations are operations that can be applied multiple times without changing the result beyond the initial application. In other words, if an operation is idempotent, it will have the same effect whether it is executed once or multiple times.
For example, consider an HTTP PUT request to update a resource. If the request is idempotent, it will have the same effect whether it is executed once or multiple times, regardless of the state of the resource. In contrast, a non-idempotent operation such as an HTTP POST request, which creates a new resource, will have a different effect each time it is executed. It is also important to understand the benefits of [idempotent](https://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning) operations, especially when using message or task queues that do not guarantee *exactly once* processing. Many queueing systems guarantee *at least once* message delivery or processing. These systems are not completely synchronized, for instance, across geographic regions, which simplifies some aspects of their implementation or design. Designing the operations that a task queue executes to be idempotent allows one to use a queueing system that has accepted this design trade-off.
Idempotent operations are useful in distributed systems, where network failures and other errors may cause the same operation to be executed multiple times. Idempotent operations can help to ensure that the system remains in a consistent state, even in the face of these types of errors.
Examples of idempotent operations are:
- HTTP GET requests
- HTTP PUT requests that update a resource to a specific state
- Database operations such as SELECT statements
Examples of non-idempotent operations are:
- HTTP POST requests that create a new resource (each call creates another one)
- Database operations such as INSERT, or an UPDATE that applies a relative change (for example, incrementing a counter)
To learn more, visit the following links:
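The distinction can be sketched in a few lines of Python (the in-memory `store` dict and helper names are illustrative, standing in for a real database):

```python
# In-memory dict standing in for a database table (illustrative only).
store = {}

def put_user(user_id, profile):
    # Idempotent: replaying this operation leaves the same final state.
    store[user_id] = profile

def post_user(profile):
    # Non-idempotent: every replay creates another resource.
    new_id = len(store) + 1
    store[new_id] = profile
    return new_id

put_user(1, {"name": "Ada"})
put_user(1, {"name": "Ada"})         # duplicate delivery: state unchanged
first = post_user({"name": "Bob"})
second = post_user({"name": "Bob"})  # duplicate delivery: extra resource
```

Replaying the PUT is harmless, which is exactly the property an *at least once* queue relies on; replaying the POST leaks a second resource.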

@ -1,8 +1,7 @@
# Asynchronous Request-Reply
Decouple backend processing from a frontend host, where backend processing needs to be asynchronous, but the frontend still needs a clear response.
Learn more from the following links:
- [Asynchronous Request-Reply pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply)
- [Intro to Asynchronous Request-Response](https://codeopinion.com/asynchronous-request-response-pattern-for-non-blocking-workflows/)
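A minimal in-process sketch of the pattern (threads and a `jobs` dict stand in for a real backend worker and status endpoint; all names are illustrative):

```python
import threading
import time
import uuid

jobs = {}  # job_id -> {"status": ..., "result": ...}; stands in for a status endpoint

def submit(work, payload):
    # Accept the request immediately and hand back a job id,
    # like an HTTP 202 response carrying a status URL.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "running", "result": None}

    def run():
        jobs[job_id]["result"] = work(payload)
        jobs[job_id]["status"] = "done"

    threading.Thread(target=run).start()
    return job_id

def poll(job_id):
    # The client checks status at its own pace instead of blocking.
    return jobs[job_id]["status"], jobs[job_id]["result"]

job = submit(lambda n: n * n, 7)
while poll(job)[0] != "done":
    time.sleep(0.01)
status, result = poll(job)
```

The client keeps control between polls; real implementations often replace polling with a callback or webhook once the work completes.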

@ -1,8 +1,7 @@
# Claim Check
Split a large message into a claim check and a payload. Send the claim check to the messaging platform and store the payload to an external service. This pattern allows large messages to be processed, while protecting the message bus and the client from being overwhelmed or slowed down. This pattern also helps to reduce costs, as storage is usually cheaper than resource units used by the messaging platform.
Learn more from the following links:
- [Claim Check - Cloud Design patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check)
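A toy sketch of the flow (the `blob_store` dict stands in for external object storage, and a plain list stands in for the message bus; all names are hypothetical):

```python
import uuid

blob_store = {}  # stands in for external storage, e.g. an object store

def send(message_bus, payload):
    # Store the heavy payload externally; put only a small claim check on the bus.
    claim = str(uuid.uuid4())
    blob_store[claim] = payload
    message_bus.append({"claim_check": claim})

def receive(message_bus):
    msg = message_bus.pop(0)
    # Redeem the claim check to retrieve the full payload.
    return blob_store[msg["claim_check"]]

bus = []
send(bus, b"x" * 10_000_000)  # the large payload never touches the bus
data = receive(bus)
```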

@ -1,8 +1,7 @@
# Choreography
Have each component of the system participate in the decision-making process about the workflow of a business transaction, instead of relying on a central point of control.
Learn more from the following links:
- [Choreography pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/choreography)
- [Service choreography](https://en.wikipedia.org/wiki/Service_choreography)
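The idea can be sketched with services reacting to events on a shared bus, with no central orchestrator (the service names and event types are invented for illustration):

```python
from collections import defaultdict

handlers = defaultdict(list)
log = []

def subscribe(event_type, handler):
    handlers[event_type].append(handler)

def publish(event_type, data):
    for handler in handlers[event_type]:
        handler(data)

# Each service decides on its own how to react; nothing coordinates them centrally.
def inventory_service(order):
    log.append("reserved")
    publish("stock_reserved", order)

def shipping_service(order):
    log.append("shipped")

subscribe("order_placed", inventory_service)
subscribe("stock_reserved", shipping_service)

publish("order_placed", {"id": 1})
```

The workflow emerges from each service's local reaction; adding a new step means subscribing a new service, not editing a central controller.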

@ -1,8 +1,7 @@
# Competing Consumers
Enable multiple concurrent consumers to process messages received on the same messaging channel. With multiple concurrent consumers, a system can process multiple messages concurrently to optimize throughput, to improve scalability and availability, and to balance the workload.
Learn more from the following links:
- [Competing Consumers pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers)
- [Competing Consumers Pattern - Explained](https://medium.com/event-driven-utopia/competing-consumers-pattern-explained-b338d54eff2b)
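A small sketch using Python's standard `queue` and `threading` modules, with three consumers competing for messages on one channel (the worker names are illustrative):

```python
import queue
import threading

tasks = queue.Queue()   # the shared messaging channel
results = []
lock = threading.Lock()

def consumer(name):
    while True:
        item = tasks.get()
        if item is None:      # sentinel: shut this worker down
            break
        with lock:
            results.append((name, item))
        tasks.task_done()

workers = [threading.Thread(target=consumer, args=(f"w{i}",)) for i in range(3)]
for w in workers:
    w.start()

for n in range(10):           # ten messages, shared across three consumers
    tasks.put(n)
tasks.join()                  # wait until every message is processed

for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()
```

Whichever consumer is free takes the next message, which is what balances the workload automatically.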

@ -1,8 +1,7 @@
# Pipes and Filters
Decompose a task that performs complex processing into a series of separate elements that can be reused. This can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.
Learn more from the following links:
- [Pipe and Filter Architectural Style](https://cs.uwaterloo.ca/~m2nagapp/courses/CS446/1181/Arch_Design_Activity/PipeFilter.pdf)
- [Pipes and Filters pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters)
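A minimal sketch using generator functions as the filters and function composition as the pipes (the filter names are illustrative):

```python
def strip_filter(lines):
    for line in lines:
        yield line.strip()

def nonempty_filter(lines):
    for line in lines:
        if line:
            yield line

def upper_filter(lines):
    for line in lines:
        yield line.upper()

def pipeline(source, *filters):
    # Connect the filters so each one's output feeds the next (the "pipes").
    stream = source
    for f in filters:
        stream = f(stream)
    return stream

raw = ["  hello ", "", " world  "]
out = list(pipeline(raw, strip_filter, nonempty_filter, upper_filter))
```

Each filter is independently testable and reusable; reordering or swapping a stage only changes the pipeline call.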

@ -1,6 +1,6 @@
# Priority Queue
Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. This pattern is useful in applications that offer different service level guarantees to individual clients.
Learn more from the following links:
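A sketch using Python's `heapq` module; the counter breaks ties so equal-priority requests keep FIFO order (the request names are illustrative):

```python
import heapq
import itertools

counter = itertools.count()   # tie-breaker: FIFO within a priority level
pq = []

def enqueue(priority, request):
    heapq.heappush(pq, (priority, next(counter), request))

def dequeue():
    priority, _, request = heapq.heappop(pq)
    return request

enqueue(2, "free-tier job")
enqueue(0, "premium job")     # lower number = higher priority
enqueue(1, "standard job")

order = [dequeue() for _ in range(3)]
```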

@ -1,8 +1,7 @@
# Publisher Subscriber
Enable an application to announce events to multiple interested consumers asynchronously, without coupling the senders to the receivers.
Learn more from the following links:
- [Publisher-Subscriber pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/publisher-subscriber)
- [Publisher Subscriber - Pattern](https://www.enjoyalgorithms.com/blog/publisher-subscriber-pattern)
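A toy in-process broker illustrating the decoupling (the class and topic names are invented for illustration):

```python
class Broker:
    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The publisher knows only the topic, never the subscribers.
        for callback in self.subscribers.get(topic, []):
            callback(message)

broker = Broker()
inbox_a, inbox_b = [], []
broker.subscribe("orders", inbox_a.append)
broker.subscribe("orders", inbox_b.append)
broker.publish("orders", {"id": 42})
```

Subscribers can be added or removed without touching the publisher, which is the decoupling the pattern is after.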

@ -1,8 +1,7 @@
# Queue Based Load Leveling
Use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. This can help to minimize the impact of peaks in demand on availability and responsiveness for both the task and the service.
Learn more from the following links:
- [Queue-Based Load Leveling pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/queue-based-load-leveling)
- [Design Patterns: Queue-Based Load Leveling Pattern](https://blog.cdemi.io/design-patterns-queue-based-load-leveling-pattern/)
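A toy sketch: a bursty producer fills the buffer instantly, while the service drains it at its own sustainable rate (the numbers and names are illustrative):

```python
import queue

buffer = queue.Queue()        # the leveling buffer between task and service

def bursty_producer(n):
    # Requests arrive in a spike, but only land in the queue.
    for i in range(n):
        buffer.put(i)

processed = []

def steady_consumer(batch_size):
    # The service drains work at its own fixed rate per tick.
    for _ in range(batch_size):
        if buffer.empty():
            break
        processed.append(buffer.get())

bursty_producer(100)          # spike of 100 requests
steady_consumer(10)           # service handles 10 per tick
remaining = buffer.qsize()    # the rest sit safely buffered, none dropped
```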

@ -1,6 +1,6 @@
# Scheduling Agent Supervisor
Coordinate a set of distributed actions as a single operation. If any of the actions fail, try to handle the failures transparently, or else undo the work that was performed, so the entire operation succeeds or fails as a whole. This can add resiliency to a distributed system, by enabling it to recover and retry actions that fail due to transient exceptions, long-lasting faults, and process failures.
Learn more from the following links:
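A much-simplified sketch of the supervisor role, retrying steps that fail transiently so the whole operation succeeds or fails as a unit (the retry policy and flaky step are invented for illustration):

```python
def supervisor(steps, max_attempts=3):
    # Run each step; retry transient failures; report overall success/failure.
    for step in steps:
        for attempt in range(1, max_attempts + 1):
            try:
                step()
                break
            except Exception:
                if attempt == max_attempts:
                    return False   # durable fault: the whole operation fails
    return True

calls = {"n": 0}

def flaky_step():
    # Fails twice with a transient fault, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient fault")

ok = supervisor([flaky_step])
```

A real implementation also checkpoints step state so a crashed supervisor can resume, and runs compensating actions to undo completed steps when a later one fails permanently.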

@ -1,6 +1,6 @@
# Sequential Convoy
Sequential Convoy is a pattern that allows for the execution of a series of tasks, or convoy, in a specific order. This pattern can be used to ensure that a set of dependent tasks are executed in the correct order and to handle errors or failures during the execution of the tasks. It can be used in scenarios like workflow and transaction. It can be implemented using a variety of technologies such as state machines, workflows, and transactions.
Learn more from the following links:
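A sketch of per-key ordered processing (the keys and messages are invented; a real implementation would typically use message sessions or partitioned queues):

```python
from collections import defaultdict

convoys = defaultdict(list)   # one FIFO convoy per correlation key

def enqueue(key, message):
    convoys[key].append(message)

def process_convoy(key, handler):
    # Messages that share a key are handled strictly in arrival order;
    # different keys could be processed in parallel.
    while convoys[key]:
        handler(convoys[key].pop(0))

enqueue("order-1", "create")
enqueue("order-2", "create")
enqueue("order-1", "pay")
enqueue("order-1", "ship")

seen = []
process_convoy("order-1", seen.append)
```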

@ -1,8 +1,7 @@
# Messaging
Messaging is a pattern that allows for the communication and coordination between different components or systems, using messaging technologies such as message queues, message brokers, and event buses. This pattern allows for decoupling of the sender and receiver, and can be used to build scalable and flexible systems.
Learn more from the following links:
- [Messaging Cloud Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/category/messaging)
- [Intro to System Design - Message Queues](https://dev.to/karanpratapsingh/system-design-message-queues-k9a)

@ -1,6 +1,6 @@
# Cache Aside
Load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store.
Learn more from the following links:
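The read and update paths can be sketched as follows (dicts stand in for the cache and the data store; all names are illustrative):

```python
cache = {}
db = {"user:1": {"name": "Ada"}}
db_reads = {"count": 0}

def get(key):
    # 1. Try the cache first.
    if key in cache:
        return cache[key]
    # 2. On a miss, load from the data store and populate the cache.
    db_reads["count"] += 1
    value = db[key]
    cache[key] = value
    return value

def update(key, value):
    # Write to the store, then invalidate the now-stale cache entry.
    db[key] = value
    cache.pop(key, None)

get("user:1")   # miss: hits the database
get("user:1")   # hit: served from the cache
```

Invalidating on update (rather than writing the cache directly) is what keeps the cache and store consistent under this pattern.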

@ -1,8 +1,6 @@
# CQRS
CQRS stands for Command and Query Responsibility Segregation, a pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level.
In this pattern, the read and write operations are handled by different components in the system. The write operations, known as commands, are handled by a Command component that updates the state of the system. The read operations, known as queries, are handled by a Query component that retrieves the current state of the system.
Learn more from the following links:
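A highly simplified sketch separating the command path from the query path (updating the read model synchronously here is a simplification; real systems often project it asynchronously):

```python
# Write side: commands mutate state through a command handler.
command_log = []
read_model = {}   # denormalized view optimized for queries

def handle_command(command):
    if command["type"] == "rename_product":
        command_log.append(command)
        # Project the change into the read model (often done asynchronously).
        read_model[command["id"]] = {"name": command["name"]}

# Read side: queries only touch the read model, never the write path.
def query_product(product_id):
    return read_model.get(product_id)

handle_command({"type": "rename_product", "id": 1, "name": "Widget"})
```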

@ -1,8 +1,7 @@
# Event Sourcing
Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialize the domain objects. This can simplify tasks in complex domains, by avoiding the need to synchronize the data model and the business domain, while improving performance, scalability, and responsiveness. It can also provide consistency for transactional data, and maintain full audit trails and history that can enable compensating actions.
Learn more from the following links:
- [Event Sourcing pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing)
- [Overview of Event Sourcing](https://microservices.io/patterns/data/event-sourcing.html)
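A minimal sketch: current state is never stored directly, only derived by replaying the append-only log (the event shapes are invented for illustration):

```python
event_store = []   # append-only system of record

def append_event(event):
    event_store.append(event)

def current_balance():
    # Materialize current state by replaying the full event history.
    balance = 0
    for event in event_store:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

append_event({"type": "deposited", "amount": 100})
append_event({"type": "withdrawn", "amount": 30})
append_event({"type": "deposited", "amount": 5})
```

Because the log is the source of truth, it doubles as a full audit trail, and a mistaken event can be corrected by appending a compensating one rather than editing history.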

@ -1,8 +1,7 @@
# Index Table
Create indexes over the fields in data stores that are frequently referenced by queries. This pattern can improve query performance by allowing applications to more quickly locate the data to retrieve from a data store.
Learn more from the following links:
- [Index Table pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/index-table)
- [Overview of Index Table](https://dev.to/karanpratapsingh/system-design-indexes-2574)
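A sketch of a secondary index over an in-memory record list (the data and field names are invented):

```python
orders = [
    {"id": 1, "customer": "ada", "total": 250},
    {"id": 2, "customer": "bob", "total": 90},
    {"id": 3, "customer": "ada", "total": 40},
]

# Secondary index over a frequently queried field.
by_customer = {}
for position, order in enumerate(orders):
    by_customer.setdefault(order["customer"], []).append(position)

def orders_for(customer):
    # Direct index lookup instead of scanning every record.
    return [orders[i] for i in by_customer.get(customer, [])]

ada_orders = orders_for("ada")
```

The trade-off is the same as for a database index: faster reads in exchange for extra storage and index maintenance on every write.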

@ -1,8 +1,7 @@
# Materialized View
Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. This can help support efficient querying and data extraction, and improve application performance.
Learn more from the following links:
- [Materialized View pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/materialized-view)
- [Overview of Materialized View Pattern](https://medium.com/design-microservices-architecture-with-patterns/materialized-view-pattern-f29ea249f8f8)
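A sketch: the aggregate is computed once on refresh, then read cheaply many times (the data and names are invented):

```python
sales = [
    {"region": "eu", "amount": 100},
    {"region": "us", "amount": 250},
    {"region": "eu", "amount": 50},
]

def refresh_view():
    # Precompute the aggregate the queries need, instead of
    # recomputing it from the raw rows on every read.
    view = {}
    for row in sales:
        view[row["region"]] = view.get(row["region"], 0) + row["amount"]
    return view

totals_by_region = refresh_view()   # refreshed periodically, read cheaply
```

The view is disposable: it can always be rebuilt from the source data, so it trades freshness (it lags until the next refresh) for query speed.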

@ -1,8 +1,7 @@
# Sharding
Sharding is a technique used to horizontally partition a large data set across multiple servers, in order to improve the performance, scalability, and availability of a system. This is done by breaking the data set into smaller chunks, called shards, and distributing the shards across multiple servers. Each shard is self-contained and can be managed and scaled independently of the other shards. Sharding can be used in scenarios like scalability, availability, and geo-distribution. Sharding can be implemented using several different algorithms such as range-based sharding, hash-based sharding, and directory-based sharding.
Learn more from the following links:
- [Sharding pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/sharding)
- [Database Sharding – System Design Interview Concept](https://www.geeksforgeeks.org/database-sharding-a-system-design-concept/)
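A sketch of hash-based sharding, one of the algorithms mentioned above (four in-memory dicts stand in for database servers; a real system would also have to handle resharding when servers are added):

```python
import hashlib

SHARDS = [dict() for _ in range(4)]   # four stand-in database servers

def shard_for(key):
    # Hash-based sharding: a stable hash maps each key to one shard.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % len(SHARDS)

def put(key, value):
    SHARDS[shard_for(key)][key] = value

def get(key):
    return SHARDS[shard_for(key)][key]

for user in ("ada", "bob", "carol", "dan"):
    put(user, {"name": user})
```

Every read and write touches exactly one shard, which is what lets each server be scaled and managed independently.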

@ -1,8 +1,7 @@
# Static Content Hosting
Deploy static content to a cloud-based storage service that can deliver it directly to the client. This can reduce the need for potentially expensive compute instances.
Learn more from the following links:
- [The pros and cons of the Static Content Hosting](https://www.redhat.com/architect/pros-and-cons-static-content-hosting-architecture-pattern)
- [Static Content Hosting pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/static-content-hosting)

# Valet Key

Use a token that provides clients with restricted direct access to a specific resource, in order to offload data transfer from the application. This is particularly useful in applications that use cloud-hosted storage systems or queues, and can minimize cost and maximize scalability and performance.

Learn more from the following links:

- [Valet Key pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/valet-key)
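
A minimal sketch of the idea, assuming an HMAC-signed, expiring token; the token format and secret are invented for illustration, and real systems typically use their storage provider's signed-URL mechanism (such as SAS tokens or pre-signed URLs):

```python
# Issue a "valet key": a signed token granting time-limited access to one resource.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; never hard-code in practice

def issue_valet_key(resource: str, ttl_seconds: int = 300) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{resource}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def check_valet_key(token: str) -> bool:
    resource, expires, sig = token.rsplit("|", 2)
    payload = f"{resource}|{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # Valid only if the signature matches and the key has not expired.
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)

token = issue_valet_key("files/report.pdf")
assert check_valet_key(token)            # valid, unexpired token
assert not check_valet_key(token + "x")  # tampered signature is rejected
```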

# Data Management

Data management is the key element of cloud applications, and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations.

Learn more from the following links:

- [Data management patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/category/data-management)

# Ambassador

Create helper services that send network requests on behalf of a consumer service or application. An ambassador service can be thought of as an out-of-process proxy that is co-located with the client.

This pattern can be useful for offloading common client connectivity tasks such as monitoring, logging, routing, security (such as TLS), and resiliency patterns in a language-agnostic way. It is often used with legacy applications, or other applications that are difficult to modify, in order to extend their networking capabilities. It can also enable a specialized team to implement those features.

To learn more, visit the following links:

- [Ambassador pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/ambassador)
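
A minimal in-process sketch of the idea: the ambassador wraps network calls and applies a retry policy so the application code stays unaware of resiliency concerns. The function names and retry count are illustrative, and a real ambassador runs as a separate co-located process or container:

```python
# The ambassador makes the call on the app's behalf and handles retries.
def flaky_backend(state={"calls": 0}):
    # Stand-in for a remote call that fails twice, then succeeds.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

def ambassador_call(operation, retries: int = 3):
    for attempt in range(retries):
        try:
            return operation()  # resiliency lives here, not in the app
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries: surface the failure

assert ambassador_call(flaky_backend) == "ok"
```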

# Anti-Corruption Layer

Implement a façade or adapter layer between different subsystems that don't share the same semantics. This layer translates requests that one subsystem makes to the other subsystem. Use this pattern to ensure that an application's design is not limited by dependencies on outside subsystems. This pattern was first described by Eric Evans in Domain-Driven Design.

To learn more, visit the following links:
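
A small sketch, assuming a legacy subsystem with cryptic field names: the adapter translates the legacy model into the one the new system uses, so legacy semantics never leak inward:

```python
def legacy_customer():
    # Stand-in for a legacy system returning cryptic, flat fields.
    return {"CUST_NM": "Ada Lovelace", "CUST_TEL": "555-0100"}

def acl_translate(legacy: dict) -> dict:
    # The rest of the application only ever sees this clean model.
    return {"name": legacy["CUST_NM"], "phone": legacy["CUST_TEL"]}

customer = acl_translate(legacy_customer())
assert customer == {"name": "Ada Lovelace", "phone": "555-0100"}
```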

# Backends for Frontends

Create separate backend services to be consumed by specific frontend applications or interfaces. This pattern is useful when you want to avoid customizing a single backend for multiple interfaces. This pattern was first described by Sam Newman.

To learn more, visit the following links:

- [Backends for Frontends pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends)

# CQRS

CQRS stands for Command and Query Responsibility Segregation, a pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level.

Learn more from the following links:
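
A minimal sketch of the separation, with invented class names: commands go to a write model, and a separate read model is kept in sync for queries:

```python
class TodoReadModel:
    """Query side: a denormalized view optimized for reads."""
    def __init__(self):
        self._view = {}

    def project(self, item_id: str, title: str) -> None:
        self._view[item_id] = title  # apply an update from the write side

    def get(self, item_id: str) -> str:
        return self._view[item_id]

class TodoWriteModel:
    """Command side: handles updates, then propagates to the read model."""
    def __init__(self, read_model: TodoReadModel):
        self._items = {}
        self._read_model = read_model

    def handle_add(self, item_id: str, title: str) -> None:
        self._items[item_id] = title               # update write-side state
        self._read_model.project(item_id, title)   # keep the read side in sync

reads = TodoReadModel()
writes = TodoWriteModel(reads)
writes.handle_add("1", "write docs")
assert reads.get("1") == "write docs"
```

In a full CQRS system the read model is often a separate store updated asynchronously, which introduces eventual consistency between commands and queries.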

# Compute Resource Consolidation

Consolidate multiple tasks or operations into a single computational unit. This can increase compute resource utilization, and reduce the costs and management overhead associated with performing compute processing in cloud-hosted applications.

To learn more, visit the following links:

- [Compute Resource Consolidation pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/compute-resource-consolidation)

# External Configuration Store

Move configuration information out of the application deployment package to a centralized location. This can provide opportunities for easier management and control of configuration data, and for sharing configuration data across applications and application instances.

To learn more, visit the following links:

- [External Configuration Store pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/external-configuration-store)

# Gateway Aggregation

Use a gateway to aggregate multiple individual requests into a single request. This pattern is useful when a client must make multiple calls to different backend systems to perform an operation.

To learn more, visit the following links:

- [Gateway Aggregation pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/gateway-aggregation)
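
A sketch of the fan-out, with invented backend functions standing in for real service calls: the client makes one request, and the gateway performs the individual calls and combines the responses:

```python
def user_service(user_id: str) -> dict:
    return {"id": user_id, "name": "Ada"}       # stand-in backend call

def order_service(user_id: str) -> list:
    return [{"order": 101}, {"order": 102}]      # stand-in backend call

def gateway_profile(user_id: str) -> dict:
    # One client request; the gateway fans out to each backend.
    return {
        "user": user_service(user_id),
        "orders": order_service(user_id),
    }

profile = gateway_profile("42")
assert profile["user"]["name"] == "Ada"
assert len(profile["orders"]) == 2
```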

# Gateway Offloading

Offload shared or specialized service functionality to a gateway proxy. This pattern can simplify application development by moving shared service functionality, such as the use of SSL certificates, from other parts of the application into the gateway.

To learn more, visit the following links:

# Gateway Routing

Route requests to multiple services or multiple service instances using a single endpoint. The pattern is useful when you want to:

- Expose multiple services on a single endpoint and route to the appropriate service based on the request
- Expose multiple instances of the same service on a single endpoint for load balancing or availability purposes
- Expose differing versions of the same service on a single endpoint and route traffic across the different versions

To learn more, visit the following links:

- [Gateway Routing pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/gateway-routing)
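
A sketch of single-endpoint routing by path prefix; the route table and service names are illustrative:

```python
# One endpoint, many backends: dispatch by path prefix.
ROUTES = {
    "/orders": "order-service",
    "/users": "user-service",
}

def route(path: str) -> str:
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    return "default-service"  # fallback when no prefix matches

assert route("/orders/42") == "order-service"
assert route("/users/7") == "user-service"
assert route("/health") == "default-service"
```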

# Leader Election

Coordinate the actions performed by a collection of collaborating instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the others. This can help to ensure that instances don't conflict with each other, cause contention for shared resources, or inadvertently interfere with the work that other instances are performing.

To learn more, visit the following links:

- [Overview of Leader Election](https://learn.microsoft.com/en-us/azure/architecture/patterns/leader-election)
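
A toy sketch of lease-based election, where the in-memory dict stands in for a real coordination service (such as ZooKeeper, etcd, or a distributed lock): instances race to acquire the lease, and only the holder acts as leader:

```python
lease = {"holder": None}  # stand-in for shared, strongly consistent storage

def try_acquire(instance_id: str) -> bool:
    # In a real store this would be an atomic compare-and-set.
    if lease["holder"] is None:
        lease["holder"] = instance_id
        return True
    return lease["holder"] == instance_id

assert try_acquire("node-a") is True    # first instance becomes leader
assert try_acquire("node-b") is False   # others defer to the leader
assert try_acquire("node-a") is True    # the leader keeps its lease
```

Real implementations add lease expiry and renewal so a crashed leader is eventually replaced, which is what algorithms like Raft and Paxos handle rigorously.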

# Pipes and Filters

Decompose a task that performs complex processing into a series of separate elements that can be reused. This can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.

To learn more, visit the following links:

- [Pipes and Filters pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters)
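
A minimal sketch: each filter is an independent function, and the pipeline pipes one filter's output into the next. The text-cleaning filters are illustrative:

```python
def strip_whitespace(text: str) -> str:
    return text.strip()

def lowercase(text: str) -> str:
    return text.lower()

def remove_punctuation(text: str) -> str:
    return "".join(ch for ch in text if ch.isalnum() or ch.isspace())

def pipeline(data, *filters):
    for f in filters:      # each filter transforms the previous output
        data = f(data)
    return data

result = pipeline("  Hello, World!  ", strip_whitespace, lowercase, remove_punctuation)
assert result == "hello world"
```

Because each filter has the same shape (input in, output out), filters can be reordered, reused in other pipelines, or deployed as separate stages connected by queues.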

# Sidecar

Deploy components of an application into a separate process or container to provide isolation and encapsulation. This pattern can also enable applications to be composed of heterogeneous components and technologies.

This pattern is named Sidecar because it resembles a sidecar attached to a motorcycle. In the pattern, the sidecar is attached to a parent application and provides supporting features for the application. The sidecar also shares the same lifecycle as the parent application, being created and retired alongside the parent. The sidecar pattern is sometimes referred to as the sidekick pattern and is a decomposition pattern.

To learn more, visit the following links:

- [Sidecar pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/sidecar)

# Static Content Hosting

Deploy static content to a cloud-based storage service that can deliver it directly to the client. This can reduce the need for potentially expensive compute instances.

Learn more from the following links:

- [Static Content Hosting pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/static-content-hosting)

# Strangler Fig

Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. As features from the legacy system are replaced, the new system eventually replaces all of the old system's features, strangling the old system and allowing you to decommission it.

To learn more, visit the following links:

- [What is Strangler fig?](https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig)
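
A sketch of the façade that makes incremental migration possible; the route set and handlers are invented for illustration. Requests for migrated features go to the new system, while everything else still reaches the legacy one:

```python
MIGRATED = {"/orders"}  # features already moved to the new system

def legacy_app(path: str) -> str:
    return f"legacy handled {path}"

def new_app(path: str) -> str:
    return f"new handled {path}"

def facade(path: str) -> str:
    # The façade is the only entry point; routing shifts as migration proceeds.
    return new_app(path) if path in MIGRATED else legacy_app(path)

assert facade("/orders") == "new handled /orders"
assert facade("/invoices") == "legacy handled /invoices"
```

As more paths move into `MIGRATED`, the legacy system shrinks until it can be decommissioned.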

# Design and Implementation

Good design encompasses factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and in other scenarios. Decisions made during the design and implementation phase have a huge impact on the quality and the total cost of ownership of cloud-hosted applications and services.

To learn more, visit the following links:

- [Design and implementation patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/category/design-implementation)

# Deployment Stamps

Deployment Stamps refers to a technique used to manage the deployment of a system across different environments, such as development, staging, and production. A deployment stamp is a set of environment-specific configurations and settings that are applied to the system during the deployment process. It allows you to manage environment-specific configurations, ensure consistency across environments, and simplify the deployment process. It can be implemented in several different ways, such as configuration files, environment variables, and deployment scripts.

To learn more visit the following links:

# Geodes

Geodes refers to a technique of partitioning a large dataset into smaller chunks, called geodes, that can be stored and processed more efficiently. Geodes are similar to shards in database partitioning, but the term is often used in the context of distributed systems and data processing. It allows you to scale the system, improve performance, and balance the load. It can be implemented in several different ways, such as hashing and range-based partitioning.

To learn more visit the following links:

# Health Endpoint Monitoring

Health Endpoint Monitoring refers to a technique for monitoring the health of a system by periodically sending requests to a specific endpoint, called a "health endpoint", on the system. The health endpoint returns a response indicating the current status of the system, such as whether it is running properly or if there are any issues. It allows you to monitor the overall health of the system, gain insight into the system's performance, and automate the process of monitoring. It can be implemented in several different ways, such as periodic requests and event-based monitoring.

To learn more visit the following links:
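
A minimal sketch, with invented check names: the health endpoint aggregates dependency checks into a status, and a monitor polls it (in practice over HTTP):

```python
def health_endpoint() -> dict:
    checks = {
        "database": True,  # stand-ins for real dependency checks
        "cache": True,
    }
    status = "healthy" if all(checks.values()) else "degraded"
    return {"status": status, "checks": checks}

def monitor(endpoint) -> bool:
    response = endpoint()  # a real monitor would issue a periodic HTTP GET
    return response["status"] == "healthy"

assert monitor(health_endpoint) is True
```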

# Availability

Availability refers to the ability of a system to perform its intended function without interruption. High availability is desired as it means that the system is less likely to experience downtime, and when it does, it can quickly recover. To increase the availability of a system, several methods can be used, such as redundancy, load balancing, failover, monitoring, and automated recovery.

To learn more visit the following links:
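
For redundancy specifically, the effect on availability can be quantified with the standard independence formula, 1 - (1 - a)^n; the 99% per-instance figure below is illustrative:

```python
def combined_availability(per_instance: float, replicas: int) -> float:
    # The system is down only when every replica is down simultaneously.
    return 1 - (1 - per_instance) ** replicas

single = combined_availability(0.99, 1)
pair = combined_availability(0.99, 2)
assert round(single, 2) == 0.99
assert round(pair, 4) == 0.9999  # two nines become four nines
```

The formula assumes failures are independent; correlated failures (shared power, shared deployments) reduce the real gain.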

# Queue-Based Load Leveling

Queue-based load leveling refers to a technique for managing the workload of a system by using a queue to buffer incoming requests and process them at a steady pace. By using a queue, the system can handle bursts of incoming requests without being overwhelmed, as well as prevent idle periods where there are not enough requests to keep the system busy. It allows you to smooth out bursts of incoming requests, prevent idle periods, prioritize requests, and monitor requests. It can be implemented in several different ways, such as an in-memory queue or a persistent queue.

To learn more visit the following links:
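
A minimal single-process sketch using Python's `queue.Queue` as the buffer: a burst of requests is absorbed by the queue and drained by a worker at its own pace:

```python
import queue

requests = queue.Queue()

# A burst of incoming requests is buffered, not sent straight to the backend.
for i in range(10):
    requests.put(f"request-{i}")

processed = []
while not requests.empty():
    processed.append(requests.get())  # the worker consumes at a steady pace

assert len(processed) == 10
```

In a distributed system the in-memory queue would be a durable message broker, so the producers and the worker can also scale independently.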

# Throttling

Throttling refers to a technique for limiting the rate at which requests are processed by a system. This is often used to prevent the system from being overwhelmed by a high volume of requests, or to ensure that resources are used efficiently. Throttling can be applied to incoming requests, outgoing requests, or both, and can be implemented at different levels of the system, such as the network, application, or service level. It helps prevent system overload, ensure efficient resource usage, provide Quality of Service (QoS), and prevent Denial of Service (DoS). It can be implemented in several different ways, such as rate limiting, leaky bucket, and token bucket.

To learn more visit the following links:
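
A sketch of the token-bucket variant; the capacity and refill rate are illustrative. Each request consumes a token, and the bucket refills at a fixed rate:

```python
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_rate = refill_rate  # tokens added per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the request is throttled

bucket = TokenBucket(capacity=2, refill_rate=1.0)
assert bucket.allow() and bucket.allow()  # first two requests pass
assert not bucket.allow()                 # third is throttled immediately
```

Unlike a fixed-window counter, the bucket tolerates short bursts up to its capacity while enforcing the average rate.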

# Bulkhead

Bulkhead refers to a technique for isolating different parts of a system to prevent one part from affecting the performance of the whole system. The term "bulkhead" comes from the partitions that separate the compartments of a ship's hull, so that a breach in one compartment does not sink the whole ship. It allows you to isolate critical parts of the system, prevent cascading failures, and provide isolation for different types of requests. It can be implemented in several different ways, such as thread pools, circuit breakers, and workers.

Learn more from the following links:
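
A sketch of the thread-pool variant, with invented service names: each dependency gets its own bounded pool, so exhausting one cannot starve the other:

```python
from concurrent.futures import ThreadPoolExecutor

payments_pool = ThreadPoolExecutor(max_workers=2)  # bulkhead for payments calls
search_pool = ThreadPoolExecutor(max_workers=2)    # bulkhead for search calls

def call_search(query: str) -> str:
    return f"results for {query}"  # stand-in for a remote search call

# Even if every payments worker were blocked on a slow dependency,
# search still has its own dedicated capacity.
future = search_pool.submit(call_search, "books")
assert future.result() == "results for books"

payments_pool.shutdown()
search_pool.shutdown()
```

The same isolation idea applies at coarser granularity: separate connection pools, processes, containers, or even service instances per consumer.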

@ -1,6 +1,6 @@
# Circuit Breaker # Circuit Breaker
Circuit Breaker in system design is a pattern that is used to prevent an application from repeatedly trying to perform an action that is likely to fail. By tripping the circuit breaker when an operation fails a certain number of times, the system can prevent cascading failures, provide fallback behavior, and monitor system health. It can be implemented in several different ways such as State machine, and Hystrix (library for Java). Circuit Breaker is a pattern that is used to prevent an application from repeatedly trying to perform an action that is likely to fail. By tripping the circuit breaker when an operation fails a certain number of times, the system can prevent cascading failures, provide fallback behavior, and monitor system health. It can be implemented in several different ways such as State machine, and Hystrix (library for Java).
Learn more from the following links:

# Deployment Stamps
Deployment Stamps refers to a technique for managing the deployment of a system across different environments, such as development, staging, and production. A deployment stamp is a set of environment-specific configurations and settings that are applied to the system during the deployment process. It makes it possible to manage environment-specific configuration, ensure consistency across environments, and simplify the deployment process. It can be implemented in several different ways, such as configuration files, environment variables, and deployment scripts.
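Under the configuration-per-environment reading above, a stamp can be as simple as a named bundle of settings selected at deploy time (the environment names, settings, and `DEPLOY_ENV` variable below are hypothetical):

```python
import os

# Each "stamp" bundles the settings applied when deploying to one environment.
STAMPS = {
    "development": {"db_url": "localhost:5432",  "debug": True,  "replicas": 1},
    "staging":     {"db_url": "staging-db:5432", "debug": False, "replicas": 2},
    "production":  {"db_url": "prod-db:5432",    "debug": False, "replicas": 4},
}

def load_stamp(env=None):
    """Pick the stamp for the target environment (DEPLOY_ENV by default)."""
    env = env or os.environ.get("DEPLOY_ENV", "development")
    return STAMPS[env]
```

Keeping every environment's settings in one place makes the differences between environments explicit and reviewable.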
To learn more visit the following links:

# Geodes
Geodes refers to a technique of partitioning a large dataset into smaller chunks, called geodes, that can be stored and processed more efficiently. Geodes are similar to shards in database partitioning, but the term is often used in the context of distributed systems and data processing. The pattern makes it possible to scale the system, improve performance, and balance the load. It can be implemented in several different ways, such as hashing and range-based partitioning.
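A sketch of the hash-based option: a stable hash maps each record key to one of N partitions, so the same key always routes to the same chunk (the key format and partition count are illustrative):

```python
import hashlib

def geode_for(key, num_geodes):
    """Stable hash routing: the same key always maps to the same geode,
    so each chunk can be stored and processed independently."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_geodes
```

Range-based partitioning works the same way structurally, but routes by key ranges (e.g. `a-m`, `n-z`) instead of a hash.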
To learn more visit the following links:

# Health Endpoint Monitoring
Health Endpoint Monitoring refers to a technique for monitoring the health of a system by periodically sending requests to a specific endpoint, called a "health endpoint", on the system. The health endpoint returns a response indicating the current status of the system, such as whether it is running properly or whether there are any issues. It makes it possible to monitor the overall health of the system, gain insight into the system's performance, and automate the monitoring process. It can be implemented in several different ways, such as periodic requests and event-based monitoring.
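One minimal shape for the endpoint's handler: run a quick check per component and summarize the result into the kind of payload a monitor would poll (the component names in the usage are invented):

```python
def health_endpoint(checks):
    """Run each named component check (a callable returning True/False)
    and report the overall status plus a per-component breakdown."""
    components = {name: check() for name, check in checks.items()}
    status = "healthy" if all(components.values()) else "degraded"
    return {"status": status, "components": components}
```

In a real service this dict would be serialized as JSON behind a route like `/health`, and the monitor would alert when the status stops being `healthy`.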
To learn more visit the following links:

# High availability
High availability refers to the ability of a system to continue operating even in the event of a failure or outage. This is often achieved by designing the system to be redundant, meaning that multiple copies of the system run at the same time; if one copy fails, the others can take over. It can be achieved using redundancy, load balancing, and failover, and measured using metrics such as Mean Time Between Failures (MTBF), Mean Time To Recovery (MTTR), and availability.
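The MTBF and MTTR metrics mentioned above combine directly into a steady-state availability figure:

```python
def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system that fails every 999 hours and takes 1 hour to recover
# is available 99.9% of the time ("three nines").
```

The formula makes the trade-off visible: you can raise availability either by failing less often (higher MTBF) or by recovering faster (lower MTTR).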
Learn more from the following links:

# Bulkhead
Bulkhead refers to a technique for isolating different parts of a system to prevent one part from affecting the performance of the whole system. The term "bulkhead" refers to the partitions or walls used to separate different parts of the system. The pattern makes it possible to isolate critical parts of the system, prevent cascading failures, and provide isolation for different types of requests. It can be implemented in several different ways, such as thread pools, circuit breakers, and workers.
Learn more from the following links:

# Circuit Breaker
Circuit Breaker is a pattern used to prevent an application from repeatedly trying to perform an action that is likely to fail. By tripping the circuit breaker when an operation fails a certain number of times, the system can prevent cascading failures, provide fallback behavior, and monitor system health. It can be implemented in several different ways, such as with a state machine or with Hystrix (a library for Java).
Learn more from the following links:

# Compensating Transaction
A Compensating Transaction is a mechanism for reversing or undoing the effects of a previously executed transaction in a system. It can be used to ensure that the system remains in a consistent state even if a subsequent transaction fails or is rolled back. Typically used in systems that implement the principles of ACID transactions, it can be implemented in several different ways, such as undo logs and savepoints.
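A saga-style sketch of the idea: each step carries its own compensating action, and a failure unwinds the already-completed steps in reverse order (the booking steps in the usage are invented for illustration):

```python
def run_with_compensation(steps):
    """Run (action, compensation) pairs in order; if any action fails,
    invoke the compensations of the completed steps in reverse."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()   # undo the work of an earlier step
        raise
```

Unlike a database rollback, each compensation is ordinary application logic (cancel a booking, refund a charge), which is why the pattern suits workflows spanning multiple services.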
Learn more from the following resources:

# Health Endpoint Monitoring
Health Endpoint Monitoring refers to a technique for monitoring the health of a system by periodically sending requests to a specific endpoint, called a "health endpoint", on the system. The health endpoint returns a response indicating the current status of the system, such as whether it is running properly or whether there are any issues. It makes it possible to monitor the overall health of the system, gain insight into the system's performance, and automate the monitoring process. It can be implemented in several different ways, such as periodic requests and event-based monitoring.
To learn more visit the following links:

# Resilience
Resilience refers to the ability of a system to withstand and recover from disruptions, failures, or unexpected conditions. It means the system can continue to function and provide service even when faced with stressors such as high traffic, failures, or unexpected changes. Resilience can be achieved by designing the system to be redundant, fault-tolerant, and scalable, with automatic recovery and monitoring and alerting mechanisms. It can be measured by Recovery Time Objective (RTO), Recovery Point Objective (RPO), Mean Time To Failure (MTTF), and Mean Time To Recovery (MTTR).
Learn more from the following links:

# Leader Election
Leader Election is a pattern used to elect a leader among a group of distributed nodes in a system. The leader is responsible for coordinating the activities of the other nodes and making decisions on behalf of the group. Leader Election is important in distributed systems, as it ensures there is a single point of coordination and decision-making, reducing the risk of conflicting actions or duplicate work. It can be used to provide a single point of coordination, fault tolerance, and scalability. Several algorithms, such as Raft, Paxos, and Zab, can be used to implement Leader Election in distributed systems.
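As a toy illustration only (a real system needs Raft, Paxos, or Zab to cope with partitions and message loss), every node can apply the same deterministic rule, so all nodes agree on the leader without extra coordination:

```python
def elect_leader(node_ids, is_alive):
    """Toy election rule: every node independently picks the live node
    with the smallest id, so all nodes agree on the same leader.
    Real elections must also handle partitions and stale views."""
    live = [n for n in node_ids if is_alive(n)]
    return min(live) if live else None
```

When the current leader's health check fails, re-running the rule promotes the next candidate, which is the failover behavior the pattern is after.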
To learn more, visit the following links:

# Queue-Based Load Leveling
Queue-based load leveling refers to a technique for managing the workload of a system by using a queue to buffer incoming requests and process them at a steady pace. By using a queue, the system can handle bursts of incoming requests without being overwhelmed, and can avoid idle periods where there are not enough requests to keep the system busy. The queue also provides a way to prioritize and monitor requests. It can be implemented in several different ways, such as an in-memory queue or a persistent queue.
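A minimal in-memory sketch: a burst of requests is enqueued faster than it is handled, while a single consumer drains the queue at its own steady pace (the sentinel-based shutdown is one common convention):

```python
import queue
import threading

tasks = queue.Queue()   # the buffer that absorbs bursts of requests
processed = []

def worker():
    """Drain the queue one request at a time, at the worker's own pace."""
    while True:
        item = tasks.get()
        if item is None:            # sentinel: no more work
            break
        processed.append(f"handled {item}")

consumer = threading.Thread(target=worker)
consumer.start()

for request_id in range(5):         # a burst of incoming requests
    tasks.put(request_id)
tasks.put(None)                     # signal shutdown after the burst
consumer.join()
```

Swapping the in-memory queue for a persistent broker (and the thread for a pool of workers) gives the durable, scalable version of the same structure.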
To learn more visit the following links:

# Retry
Retry refers to the process of automatically re-executing a failed operation in the hope of getting a successful outcome. Retries are used to handle transient failures such as network errors, temporary unavailability of a service, or other issues that may resolve quickly. Retries can be an effective way of dealing with these types of failures, as they help ensure that the system continues to function even in the face of temporary disruptions.
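A common sketch is retry with exponential backoff; the attempt count and delays below are illustrative, and production implementations usually add jitter and only retry errors known to be transient:

```python
import time

def retry(operation, attempts=3, base_delay=0.1):
    """Re-run operation on failure, doubling the delay between tries."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                      # not transient after all: give up
            time.sleep(base_delay * 2 ** attempt)
```

Backoff matters because immediate retries against an overloaded dependency tend to make the outage worse rather than ride it out.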
Learn more from the following resources:

# Scheduling Agent Supervisor
Scheduling Agent Supervisor is a pattern that allows for the scheduling and coordination of tasks or processes by a central entity, known as the Scheduling Agent. The Scheduling Agent is responsible for scheduling tasks, monitoring their execution, and handling errors or failures. This pattern can be used to build robust and fault-tolerant systems by ensuring that tasks are executed as intended and that any errors or failures are handled appropriately.
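A toy sketch of the supervisory half: tasks that fail are rescheduled a bounded number of times before being marked as failed (the `(name, task)` tuple shape and `max_attempts` policy are assumptions made for the sketch):

```python
def supervise(scheduled, max_attempts=2):
    """Run each scheduled (name, task) pair; the supervisor re-queues a
    failed task until it succeeds or exhausts max_attempts."""
    results = {}
    pending = [(name, task, 1) for name, task in scheduled]
    while pending:
        name, task, attempt = pending.pop(0)
        try:
            results[name] = task()
        except Exception:
            if attempt < max_attempts:
                pending.append((name, task, attempt + 1))  # reschedule
            else:
                results[name] = "failed"   # hand off for compensation/alerting
    return results
```

In the full pattern the supervisor would also trigger compensating actions for tasks that exhaust their attempts, rather than just recording the failure.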
Learn more from the following links:

# Gatekeeper
A Gatekeeper is a pattern used to control access to a system or service. It acts as a central point of control and decision-making for requests to the system or service, and is responsible for enforcing the policies, rules, and constraints that govern access. It can be implemented in different ways, such as an API Gateway, a Service Mesh, or a Load Balancer, and is useful for authentication and authorization, traffic management, and observability.
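One minimal shape: a gatekeeper function that runs every policy against a request before it is forwarded to the backend (the request dict and the `(allowed, reason)` policy signature are assumptions made for the sketch):

```python
def require_token(request):
    """Example policy: the request must carry an auth token."""
    return (bool(request.get("token")), "missing auth token")

def gatekeeper(request, policies):
    """Apply every policy before the request reaches the backend;
    the first policy that rejects the request short-circuits it."""
    for policy in policies:
        allowed, reason = policy(request)
        if not allowed:
            return {"status": 403, "reason": reason}
    return {"status": 200, "reason": "forwarded to backend"}
```

Rate limiting, IP allow-lists, and schema validation fit the same policy shape, which is why a single gatekeeper can centralize all of them.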
Learn more from the following resources:

# Security
Security refers to the measures and techniques used to protect a system from unauthorized access, use, disclosure, disruption, modification, or destruction. It involves identifying and mitigating the risks the system faces and implementing controls to prevent and detect security incidents. Security can be divided into several areas, such as access control, data security, network security, and application security. It can be achieved by implementing best practices and standards such as defense in depth, least privilege, separation of duties, and monitoring and incident response.
Learn more from the following links:

# Valet Key
A valet key is a type of security feature that allows a user to grant limited access to a resource. The name comes from the automotive industry, where a valet key allows a parking attendant to drive and park a car, but not to open the trunk or the glove compartment.
In system design, a valet key can be used to grant a third party limited access to a resource, such as a file or a service. The third party can access the resource, but only with the limited permissions granted by the valet key.
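One way to sketch the software analogue is an HMAC-signed token that names the resource, the permitted operation, and an expiry, so the holder can do exactly that and nothing more (the field layout, separator, and secret are illustrative):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # stays on the issuing server, never shared

def issue_valet_key(resource, permission, ttl_seconds):
    """Mint a token granting one permission on one resource, for a limited time."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{resource}|{permission}|{expires}"
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def check_valet_key(key, resource, permission):
    """Accept the key only for the exact resource and permission it names,
    only while unexpired, and only if the signature is intact."""
    try:
        res, perm, expires, signature = key.split("|")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{res}|{perm}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and res == resource
            and perm == permission
            and int(expires) > time.time())
```

Cloud storage shared-access signatures work on the same principle: the client talks to the storage service directly, but only within the narrow grant encoded in the signed token.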

# Reliability Patterns
Reliability patterns are solutions to common problems that arise when building systems that need to be highly available and fault-tolerant. These patterns provide a way to design and implement systems that can withstand failures, maintain high levels of performance, and recover quickly from disruptions. Some common reliability patterns include Failover, Circuit Breaker, Retry, Bulkhead, Backpressure, Cache-Aside, Idempotent Operations, and Health Endpoint Monitoring.
Learn more from the following links:

# Cloud Design Patterns
Cloud design patterns are solutions to common problems that arise when building systems that run on a cloud platform. These patterns provide a way to design and implement systems that take advantage of the unique characteristics of the cloud, such as scalability, elasticity, and pay-per-use pricing. Some common cloud design patterns include Scalability, Elasticity, Fault Tolerance, Microservices, Serverless, Data Management, Front-end and Back-end Separation, and Hybrid.
To learn more, visit the following links:
- [Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/)