parent
a715a85b46
commit
ad4f35764d
62 changed files with 94 additions and 134 deletions
@@ -1,8 +1,7 @@
-# Asynchronous Request Reply
+# Asynchronous Request-Reply

-Asynchronous Request-Reply in system design refers to a pattern where a client sends a request to a server and the server responds asynchronously, allowing the client to continue processing other tasks or requests without waiting for the server's response. This can improve the performance and scalability of a system by allowing multiple requests to be processed concurrently. It can be implemented using callbacks, promises or event-based models.
+Decouple backend processing from a frontend host, where backend processing needs to be asynchronous, but the frontend still needs a clear response.

Learn more from the following links:

- [Asynchronous Request-Reply pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply)
- [Intro to Asynchronous Request-Response](https://codeopinion.com/asynchronous-request-response-pattern-for-non-blocking-workflows/)
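A minimal in-process sketch of the flow may help: the client gets an immediate acknowledgement plus a job id, then polls a status resource while the work runs in the background. All names here are hypothetical stand-ins (a real system would return HTTP 202 and a status URL):

```python
import threading
import time
import uuid

# In-memory job store standing in for a status endpoint.
jobs = {}

def submit(payload):
    """Accept the request and return immediately with a job id (HTTP 202 style)."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    threading.Thread(target=_process, args=(job_id, payload)).start()
    return job_id

def _process(job_id, payload):
    time.sleep(0.1)  # simulate slow backend work
    jobs[job_id] = {"status": "done", "result": payload.upper()}

def poll(job_id):
    """Client checks the status resource until the work completes."""
    return jobs[job_id]

job = submit("hello")
while poll(job)["status"] != "done":  # the client is free to do other work here
    time.sleep(0.01)
print(poll(job)["result"])  # HELLO
```

The key point is that `submit` returns before the work finishes; the client decides when to check back.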
@@ -1,8 +1,7 @@
# Claim Check

-Claim check in system design is a pattern where large or complex data is replaced with a small token or reference, which is passed along with a message or request. This can help to reduce the size and complexity of messages, and improve the performance and scalability of a system. The large or complex data is stored in a separate location, and a token generator is used to create a unique token for the actual data.
+Split a large message into a claim check and a payload. Send the claim check to the messaging platform and store the payload to an external service. This pattern allows large messages to be processed, while protecting the message bus and the client from being overwhelmed or slowed down. This pattern also helps to reduce costs, as storage is usually cheaper than resource units used by the messaging platform.

Learn more from the following links:

- [An Introduction to Claim-Check Pattern and Its Uses](https://aws.plainenglish.io/an-introduction-to-claim-check-pattern-and-its-uses-b018649a380d)
-- [Claim Check - Cloud Design patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/)
+- [Claim Check - Cloud Design patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check)
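As a rough sketch, with dicts and lists standing in for the external store and the message bus (all names hypothetical):

```python
import uuid

blob_store = {}   # stands in for external storage (e.g., object storage)
message_bus = []  # stands in for the messaging platform

def send_large_message(payload: bytes):
    claim = str(uuid.uuid4())             # the claim check token
    blob_store[claim] = payload           # store the payload externally
    message_bus.append({"claim": claim})  # only the small token travels on the bus
    return claim

def receive():
    msg = message_bus.pop(0)
    return blob_store.pop(msg["claim"])   # redeem the claim check for the payload

send_large_message(b"x" * 1_000_000)
data = receive()
print(len(data))  # 1000000 -- the bus only ever carried a small token
```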
@@ -1,8 +1,7 @@
# Choreography

-Choreography in system design refers to the design and coordination of interactions between autonomous systems or services, without the use of a central controlling entity. Each system or service is responsible for its own behavior and communication with other systems or services, and there is no central point of control or coordination. Choreography can be used to improve the scalability, flexibility, and resilience of a system, by allowing each service to evolve and scale independently. It can be implemented using event-based, message-based or API-based models.
+Have each component of the system participate in the decision-making process about the workflow of a business transaction, instead of relying on a central point of control.

Learn more from the following links:

- [Choreography pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/choreography)
- [Service choreography](https://en.wikipedia.org/wiki/Service_choreography)
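A toy sketch of the idea: each service subscribes to the events it cares about and emits its own, with no orchestrator deciding the sequence (service names and events are invented for illustration):

```python
from collections import defaultdict

handlers = defaultdict(list)  # event name -> subscribed services
log = []

def subscribe(event, handler):
    handlers[event].append(handler)

def publish(event, data):
    for handler in handlers[event]:
        handler(data)

# Each service reacts to events and emits follow-up events; no central coordinator.
def inventory_service(order):
    log.append("inventory reserved")
    publish("inventory.reserved", order)

def shipping_service(order):
    log.append("shipment scheduled")

subscribe("order.placed", inventory_service)
subscribe("inventory.reserved", shipping_service)

publish("order.placed", {"id": 1})
print(log)  # ['inventory reserved', 'shipment scheduled']
```

The workflow emerges from each service's local reactions rather than from a central script.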
@@ -1,8 +1,7 @@
# Competing Consumers

-Competing Consumers in system design is a pattern that allows multiple consumers to process messages concurrently from a shared message queue. This approach can be used to improve the performance and scalability of a system by allowing multiple consumers to process messages in parallel. This pattern can be used in scenarios like load balancing and fault tolerance. It can be implemented using a variety of messaging technologies such as message queues, message brokers, and publish-subscribe systems.
+Enable multiple concurrent consumers to process messages received on the same messaging channel. With multiple concurrent consumers, a system can process multiple messages concurrently to optimize throughput, to improve scalability and availability, and to balance the workload.

Learn more from the following links:

- [Competing Consumers pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers)
- [Competing Consumers Pattern - Explained](https://medium.com/event-driven-utopia/competing-consumers-pattern-explained-b338d54eff2b)
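A minimal single-process sketch using the standard library, with `queue.Queue` standing in for the shared messaging channel (worker counts and data are arbitrary):

```python
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def consumer(worker_id):
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut this worker down
            break
        with lock:
            results.append((worker_id, item * 2))
        tasks.task_done()

# Several consumers compete for messages on the same channel.
workers = [threading.Thread(target=consumer, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for n in range(10):
    tasks.put(n)
tasks.join()                      # wait until every message is processed
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()
print(sorted(v for _, v in results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Which worker handles which message is nondeterministic; adding workers scales throughput without changing the producer.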
@@ -1,8 +1,7 @@
# Pipes and Filters

-Pipes and Filters in system design is a pattern that separates the processing of a task into a series of smaller, independent components, connected together in a pipeline. Each component, or filter, performs a specific task, and the output of one filter is passed as the input to the next filter. This approach can be used to build modular and extensible systems, by allowing filters to be added, removed, or replaced easily. Pipes and Filters pattern can be used in scenarios like data processing and data transformation. It can be implemented using a variety of technologies such as streams, generators, and iterators.
+Decompose a task that performs complex processing into a series of separate elements that can be reused. This can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.

Learn more from the following links:

- [Pipe and Filter Architectural Style](https://cs.uwaterloo.ca/~m2nagapp/courses/CS446/1181/Arch_Design_Activity/PipeFilter.pdf)
- [Pipes and Filters pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters)
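Generators make a compact single-process sketch of the idea: each filter consumes the previous one's output, and any stage can be swapped independently (the stages here are invented examples):

```python
def read_lines(text):            # source
    for line in text.splitlines():
        yield line

def strip_blanks(lines):         # filter 1: drop empty lines
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):             # filter 2: transform each line
    for line in lines:
        yield line.upper()

# Compose the pipeline; the "pipes" are just the generator connections.
pipeline = to_upper(strip_blanks(read_lines("one\n\ntwo\nthree")))
print(list(pipeline))  # ['ONE', 'TWO', 'THREE']
```

In a distributed deployment each filter would be its own service connected by queues, but the composition principle is the same.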
@@ -1,6 +1,6 @@
# Priority Queue

-A priority queue in system design is a data structure that stores items with a priority value, and allows for efficient retrieval and manipulation of the items based on their priority. The items with the highest priority are retrieved first. This pattern is useful in situations where certain items or tasks are more important than others and should be processed first. Priority Queue can be used in scenarios like scheduling and real-time systems. It can be implemented using various data structures such as heap, linked list, and array.
+Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. This pattern is useful in applications that offer different service level guarantees to individual clients.

Learn more from the following links:
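A small sketch with Python's `heapq` (lower number = higher priority; the request names are invented). A counter breaks ties so equal-priority requests stay in arrival order:

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker keeps insertion order within a priority
heap = []

def enqueue(priority, request):
    heapq.heappush(heap, (priority, next(counter), request))

def dequeue():
    priority, _, request = heapq.heappop(heap)
    return request

enqueue(2, "standard report")
enqueue(0, "fraud alert")
enqueue(1, "premium customer query")
print([dequeue() for _ in range(3)])
# ['fraud alert', 'premium customer query', 'standard report']
```

In a messaging system the same effect is often achieved with one queue per priority level, consumed in priority order.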
@@ -1,8 +1,7 @@
# Publisher Subscriber

-Publisher-Subscriber in system design is a pattern that allows multiple subscribers to receive updates from a single publisher, without the publisher and subscribers being aware of each other's existence. This pattern allows for decoupling of the publisher and subscribers, and can be used to build scalable and flexible systems. It can be used in scenarios like event-driven architecture and data streaming. It can be implemented using a variety of technologies such as message queues, message brokers, and event buses.
+Enable an application to announce events to multiple interested consumers asynchronously, without coupling the senders to the receivers.

Learn more from the following links:

- [What is Pub/Sub Messaging?](https://aws.amazon.com/pub-sub-messaging/)
- [Publisher Subscriber - Pattern](https://www.enjoyalgorithms.com/blog/publisher-subscriber-pattern)
- [Publisher-Subscriber pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/publisher-subscriber)
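A minimal in-process broker sketch (topic names and subscribers are hypothetical); note the publisher only knows the topic, never who is listening:

```python
from collections import defaultdict

class Broker:
    """Tiny in-process stand-in for a message broker or event bus."""
    def __init__(self):
        self._topics = defaultdict(list)

    def subscribe(self, topic, callback):
        self._topics[topic].append(callback)

    def publish(self, topic, message):
        # The publisher addresses a topic, not individual subscribers.
        for callback in self._topics[topic]:
            callback(message)

broker = Broker()
audit, billing = [], []
broker.subscribe("order.created", audit.append)
broker.subscribe("order.created", billing.append)
broker.publish("order.created", {"order_id": 7})
print(audit, billing)  # [{'order_id': 7}] [{'order_id': 7}]
```

A real broker would also deliver asynchronously and durably; this sketch only shows the decoupling.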
@@ -1,8 +1,7 @@
# Queue Based Load Leveling

-Queue-based load leveling in system design is a pattern that allows for the buffering of incoming requests, and the processing of those requests at a controlled rate. This pattern can be used to prevent overloading of a system, and to ensure that the system can handle a variable rate of incoming requests. It can be used in scenarios like traffic spikes and variable workloads. It can be implemented using various data structures such as linked list, array, and heap.
+Use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. This can help to minimize the impact of peaks in demand on availability and responsiveness for both the task and the service.

Learn more from the following links:

- [Queue-Based Load Leveling pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/queue-based-load-leveling)
- [Design Patterns: Queue-Based Load Leveling Pattern](https://blog.cdemi.io/design-patterns-queue-based-load-leveling-pattern/)
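As a sketch: a burst of requests is absorbed by the queue, while the service drains it at its own steady rate (timings and counts are arbitrary; `queue.Queue` stands in for a durable message queue):

```python
import queue
import threading
import time

buffer = queue.Queue()        # the leveling queue between task and service
processed = []

def service():
    # The service drains the queue at a fixed rate, regardless of
    # how bursty the producers are.
    while True:
        item = buffer.get()
        if item is None:      # sentinel: stop the service loop
            break
        processed.append(item)
        time.sleep(0.005)     # simulated fixed service rate

worker = threading.Thread(target=service)
worker.start()
for i in range(50):           # a burst of requests arrives all at once
    buffer.put(i)
buffer.put(None)
worker.join()
print(len(processed))  # 50 -- no request was rejected despite the burst
```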
@@ -1,6 +1,6 @@
# Scheduling Agent Supervisor

-Scheduling Agent Supervisor in system design is a pattern that allows for the scheduling and coordination of tasks or processes by a central entity, known as the Scheduling Agent. The Scheduling Agent is responsible for scheduling tasks, monitoring their execution, and handling errors or failures. This pattern can be used to build robust and fault-tolerant systems, by ensuring that tasks are executed as intended and that any errors or failures are handled appropriately.
+Coordinate a set of distributed actions as a single operation. If any of the actions fail, try to handle the failures transparently, or else undo the work that was performed, so the entire operation succeeds or fails as a whole. This can add resiliency to a distributed system, by enabling it to recover and retry actions that fail due to transient exceptions, long-lasting faults, and process failures.

Learn more from the following links:
@@ -1,8 +1,7 @@
# Messaging

-Messaging in system design is a pattern that allows for the communication and coordination between different components or systems, using messaging technologies such as message queues, message brokers, and event buses. This pattern allows for decoupling of the sender and receiver, and can be used to build scalable and flexible systems. Messaging pattern can be used in scenarios like asynchronous communication, loose coupling, and scalability. It can be implemented using a variety of technologies such as message queues, message brokers, and event buses.
+Messaging is a pattern that allows for the communication and coordination between different components or systems, using messaging technologies such as message queues, message brokers, and event buses. This pattern allows for decoupling of the sender and receiver, and can be used to build scalable and flexible systems.

Learn more from the following links:

- [System Design — Message Queues](https://medium.com/must-know-computer-science/system-design-message-queues-245612428a22)
- [Intro to System Design - Message Queues](https://dev.to/karanpratapsingh/system-design-message-queues-k9a)
- [Messaging Cloud Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/category/messaging)
@@ -1,6 +1,6 @@
# Cache Aside

-Cache-Aside in system design is a pattern that allows for the caching of data, in order to improve the performance and scalability of a system. This pattern is typically used in systems where data is read more frequently than it is written. It can be used to reduce the load on a primary data store, and to improve the responsiveness of a system by reducing the latency of data access. Cache-Aside pattern can be used in scenarios like read-heavy workloads and latency-sensitive workloads. It can be implemented using various caching technologies such as in-memory cache, distributed cache, and file-based cache.
+Load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store.

Learn more from the following links:
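The read-through/invalidate-on-write flow can be sketched in a few lines (dicts stand in for the cache and the data store; keys and values are invented):

```python
cache = {}
database = {"user:1": "Ada", "user:2": "Grace"}  # stands in for the data store
db_reads = [0]                                   # counts hits on the data store

def get(key):
    if key in cache:              # 1. try the cache first
        return cache[key]
    value = database[key]         # 2. on a miss, read from the data store
    db_reads[0] += 1
    cache[key] = value            # 3. populate the cache for next time
    return value

def update(key, value):
    database[key] = value
    cache.pop(key, None)          # invalidate so the next read refills the cache

get("user:1")
get("user:1")
print(db_reads[0])  # 1 -- the second read was served from the cache
```

Invalidating rather than updating the cache on write is the usual choice: it avoids writing values that may never be read again.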
@@ -1,8 +1,6 @@
# CQRS

-CQRS (Command Query Responsibility Segregation) in system design is a pattern that separates the responsibilities of handling read and write operations in a system. This pattern allows for the separation of concerns between the read and write operations, and can be used to improve the scalability, performance, and maintainability of a system.
-
-In this pattern, the read and write operations are handled by different components in the system. The write operations, known as commands, are handled by a Command component that updates the state of the system. The read operations, known as queries, are handled by a Query component that retrieves the current state of the system.
+CQRS stands for Command and Query Responsibility Segregation, a pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level.

Learn more from the following links:
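A toy sketch of the separation (all names hypothetical): commands go through a write-side handler, while queries hit a separate read model that is projected from the recorded changes:

```python
# Write side: commands mutate state through a dedicated handler.
events = []
write_model = {}

def handle_command(command):
    if command["type"] == "rename_product":
        write_model[command["id"]] = command["name"]
        events.append(command)        # each change feeds the read side

# Read side: queries hit a separate, query-optimized model.
read_model = {}

def project():
    """Rebuild the read model from the recorded changes."""
    for e in events:
        read_model[e["id"]] = {"id": e["id"], "name": e["name"]}

def query_product(product_id):
    return read_model.get(product_id)

handle_command({"type": "rename_product", "id": 1, "name": "Widget"})
project()
print(query_product(1))  # {'id': 1, 'name': 'Widget'}
```

In production the projection usually runs asynchronously, so reads are eventually consistent with writes.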
@@ -1,8 +1,7 @@
# Event Sourcing

-Event Sourcing in system design is a pattern that stores the state of a system as a sequence of events, rather than the current state. Each change to the state of the system is recorded as an event, which is stored in an event store. The current state of the system can be derived from the events in the event store. Event sourcing can be used for various purposes such as tracking history, reconstructing state, recovering from failures, and auditing. It is often implemented in conjunction with the CQRS (Command Query Responsibility Segregation) pattern, which separates the responsibilities of handling read and write operations in a system.
+Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialize the domain objects. This can simplify tasks in complex domains, by avoiding the need to synchronize the data model and the business domain, while improving performance, scalability, and responsiveness. It can also provide consistency for transactional data, and maintain full audit trails and history that can enable compensating actions.

Learn more from the following links:

- [Event Sourcing pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing)
- [Overview of Event Sourcing](https://microservices.io/patterns/data/event-sourcing.html)
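The append-then-replay idea fits in a few lines (a bank account is the classic toy example; event names and amounts are invented):

```python
event_store = []  # append-only system of record

def append(event):
    event_store.append(event)

def balance():
    """Materialize current state by replaying every event."""
    total = 0
    for event in event_store:
        if event["type"] == "deposited":
            total += event["amount"]
        elif event["type"] == "withdrawn":
            total -= event["amount"]
    return total

append({"type": "deposited", "amount": 100})
append({"type": "withdrawn", "amount": 30})
append({"type": "deposited", "amount": 5})
print(balance())         # 75
print(len(event_store))  # 3 -- the full history is kept for audit
```

Real implementations add snapshots so state is not replayed from the beginning on every read.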
@@ -1,8 +1,7 @@
# Index Table

-An index table in system design is a data structure that allows for efficient lookup of data in a larger data set. It is used to improve the performance of searching, sorting, and retrieving data, by allowing for quick access to specific records or data elements. There are several types of index tables such as B-Tree, Hash table, and Trie, each with its own strengths and weaknesses. Index tables can be used in a variety of scenarios such as searching, sorting, and retrieving.
+Create indexes over the fields in data stores that are frequently referenced by queries. This pattern can improve query performance by allowing applications to more quickly locate the data to retrieve from a data store.

Learn more from the following links:

- [System Design — Indexes](https://medium.com/must-know-computer-science/system-design-indexes-f6ad3de9925d)
- [Overview of Index Table](https://dev.to/karanpratapsingh/system-design-indexes-2574)
- [Index Table pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/index-table)
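A toy illustration (made-up records): the primary store is keyed by id, so queries by another field would need a full scan; a secondary index table over that field turns the query into a lookup:

```python
# Primary store keyed by id; queries by city would need a full scan.
users = {
    1: {"name": "Ada", "city": "London"},
    2: {"name": "Grace", "city": "New York"},
    3: {"name": "Alan", "city": "London"},
}

# Secondary index table over the frequently queried field.
city_index = {}
for user_id, row in users.items():
    city_index.setdefault(row["city"], []).append(user_id)

def find_by_city(city):
    return [users[uid] for uid in city_index.get(city, [])]

print([u["name"] for u in find_by_city("London")])  # ['Ada', 'Alan']
```

The trade-off is the usual one for indexes: faster reads in exchange for extra storage and index maintenance on every write.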
@@ -1,8 +1,7 @@
# Materialized View

-A Materialized View in system design is a pre-computed and stored version of a query result, which is used to improve the performance of frequently executed queries. It can be used to improve the performance of read-heavy workloads, by providing a pre-computed version of the data that can be quickly accessed. Materialized views can be used in scenarios like complex queries, large datasets, and real-time analytics. A materialized view can be created by executing a query and storing the result in a table. The data in the materialized view is typically updated periodically, to ensure that it stays up-to-date with the underlying data.
+Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. This can help support efficient querying and data extraction, and improve application performance.

Learn more from the following links:

- [Materialized View pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/materialized-view)
- [Overview of Materialized View Pattern](https://medium.com/design-microservices-architecture-with-patterns/materialized-view-pattern-f29ea249f8f8)
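A toy sketch (invented orders): the aggregate is computed once into a view, so queries read the precomputed result instead of re-scanning the source data, and the view is refreshed when the source changes:

```python
orders = [
    {"customer": "acme", "total": 120},
    {"customer": "acme", "total": 80},
    {"customer": "globex", "total": 50},
]

# Precomputed aggregate: the materialized view.
revenue_by_customer = {}

def refresh_view():
    """Rebuild the view from the source data."""
    revenue_by_customer.clear()
    for order in orders:
        revenue_by_customer[order["customer"]] = (
            revenue_by_customer.get(order["customer"], 0) + order["total"]
        )

refresh_view()
print(revenue_by_customer["acme"])  # 200 -- read without scanning orders

orders.append({"customer": "acme", "total": 10})
refresh_view()  # in practice, refreshed periodically or on write
print(revenue_by_customer["acme"])  # 210
```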
@@ -1,8 +1,7 @@
# Sharding

-Sharding in system design is a technique used to horizontally partition a large data set across multiple servers, in order to improve the performance, scalability, and availability of a system. This is done by breaking the data set into smaller chunks, called shards, and distributing the shards across multiple servers. Each shard is self-contained and can be managed and scaled independently of the other shards. Sharding can be used in scenarios like scalability, availability, and geo-distribution. Sharding can be implemented using several different algorithms such as range-based sharding, hash-based sharding, and directory-based sharding.
+Sharding is a technique used to horizontally partition a large data set across multiple servers, in order to improve the performance, scalability, and availability of a system. This is done by breaking the data set into smaller chunks, called shards, and distributing the shards across multiple servers. Each shard is self-contained and can be managed and scaled independently of the other shards. Sharding can be used in scenarios like scalability, availability, and geo-distribution. Sharding can be implemented using several different algorithms such as range-based sharding, hash-based sharding, and directory-based sharding.

Learn more from the following links:

- [Database Sharding: Concepts and Examples](https://www.mongodb.com/features/database-sharding-explained)
- [Database Sharding – System Design Interview Concept](https://www.geeksforgeeks.org/database-sharding-a-system-design-concept/)
- [Sharding pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/sharding)
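Hash-based sharding, the simplest of the algorithms mentioned, can be sketched with dicts standing in for the servers (shard count and keys are arbitrary):

```python
import hashlib

NUM_SHARDS = 4
shards = [{} for _ in range(NUM_SHARDS)]  # each dict stands in for one server

def shard_for(key: str) -> int:
    # A stable hash spreads keys evenly and deterministically across shards.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)][key]

for i in range(100):
    put(f"user:{i}", {"id": i})
print(get("user:42"))               # {'id': 42}
print(sum(len(s) for s in shards))  # 100, spread across the 4 shards
```

Note that with plain modulo hashing, changing `NUM_SHARDS` remaps most keys; production systems typically use consistent hashing or a directory to allow resharding.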
@@ -1,8 +1,7 @@
# Static Content Hosting

-Static Content Hosting in system design is a technique used to serve static resources such as images, stylesheets, and JavaScript files, from a dedicated server or service, rather than from the main application server. This approach can be used to improve the performance, scalability, and availability of a system. Static content hosting can be used in scenarios like performance, scalability, and availability. Static content hosting can be implemented using several different techniques such as Content Delivery Network (CDN), Object Storage and File Server.
+Deploy static content to a cloud-based storage service that can deliver them directly to the client. This can reduce the need for potentially expensive compute instances.

Learn more from the following links:

- [The pros and cons of the Static Content Hosting](https://www.redhat.com/architect/pros-and-cons-static-content-hosting-architecture-pattern)
- [Static Content Hosting pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/static-content-hosting)
@@ -1,10 +1,7 @@
# Valet Key

-A valet key in system design is a type of security feature that allows a user to grant limited access to a resource. It is commonly used in the automotive industry, where a valet key is used to allow a valet parking attendant to drive and park a car, but not to access the trunk or the glove compartment of the car.
-
-In system design, a valet key can be used as a security feature to allow a user to grant limited access to a resource, such as a file or a service, to a third party. The third party can access the resource, but only with the limited permissions that have been granted by the valet key.
+Use a token that provides clients with restricted direct access to a specific resource, in order to offload data transfer from the application. This is particularly useful in applications that use cloud-hosted storage systems or queues, and can minimize cost and maximize scalability and performance.

Learn more from the following links:

- [Valet Key pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/valet-key)
- [Explanation of Valet Key](https://www.youtube.com/watch?v=sapu2CE1W8s)
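A rough sketch of how such a token can work, in the spirit of a cloud storage shared-access signature: the application signs a (resource, expiry) pair, and the storage service validates the signature itself without calling back to the application. All names and the signing scheme are simplified illustrations:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # known to the app and the storage service

def issue_valet_key(resource: str, ttl_seconds: int = 60) -> dict:
    """Grant a client time-limited access to one resource, and nothing else."""
    expires = int(time.time()) + ttl_seconds
    message = f"{resource}:{expires}".encode()
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"resource": resource, "expires": expires, "signature": signature}

def storage_accepts(key: dict, requested_resource: str) -> bool:
    """The storage service validates the token; the application is not involved."""
    if requested_resource != key["resource"] or time.time() > key["expires"]:
        return False
    message = f"{key['resource']}:{key['expires']}".encode()
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, key["signature"])

key = issue_valet_key("uploads/report.pdf")
print(storage_accepts(key, "uploads/report.pdf"))  # True
print(storage_accepts(key, "uploads/other.pdf"))   # False -- wrong resource
```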
@@ -1,8 +1,7 @@
# Data Management

-Data management in cloud system design refers to the process of designing, implementing and maintaining the data infrastructure and data management processes in a cloud computing environment. This includes designing and configuring data storage systems, data replication, data backup and disaster recovery, data security and access control, and data governance policies. It's a set of actions that aims to ensure the data is properly managed, stored and protected in a cloud environment.
+Data management is the key element of cloud applications, and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations.

Learn more from the following links:

- [Data Management in the Cloud: Promises, State-of-the-art](https://link.springer.com/article/10.1007/s13222-010-0033-3)
- [Data Management: What It Is, Importance, And Challenges](https://www.tableau.com/learn/articles/what-is-data-management)
- [Data management patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/category/data-management)
@@ -1,8 +1,9 @@
# Ambassador

-Ambassador in system design is a type of software that acts as a facade for other services or applications. It's a reverse proxy and service mesh that allows to control access to services, and provide features such as authentication, rate limiting, and observability. Ambassador can be used to route requests, authenticate and authorize requests, provide observability, and rate limiting. Ambassador is designed to work with Kubernetes and other cloud-native platforms.
+Create helper services that send network requests on behalf of a consumer service or application. An ambassador service can be thought of as an out-of-process proxy that is co-located with the client.
+
+This pattern can be useful for offloading common client connectivity tasks such as monitoring, logging, routing, security (such as TLS), and resiliency patterns in a language agnostic way. It is often used with legacy applications, or other applications that are difficult to modify, in order to extend their networking capabilities. It can also enable a specialized team to implement those features.

To learn more, visit the following links:

- [Design System Ambassadors](https://medium.com/sprout-social-design/design-system-ambassadors-c240e480baf6)
- [Ambassador pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/ambassador)
@@ -1,6 +1,6 @@
# Anti-Corruption Layer

-An Anti-Corruption Layer (ACL) in system design is a software pattern that acts as a buffer between a system and external systems or legacy systems that use incompatible data formats or protocols. Its purpose is to protect the internal system from being affected by changes or inconsistencies in the external systems, and to provide a stable and consistent interface for the internal system to interact with the external systems. It can be used in scenarios like integration with legacy systems, integration with external systems, and isolation of dependencies. An ACL can be implemented using several different techniques such as data mapping, data validation, and error handling.
+Implement a façade or adapter layer between different subsystems that don't share the same semantics. This layer translates requests that one subsystem makes to the other subsystem. Use this pattern to ensure that an application's design is not limited by dependencies on outside subsystems. This pattern was first described by Eric Evans in Domain-Driven Design.

To learn more, visit the following links:
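A toy sketch of the translating layer (the legacy field names and customer model are invented): the rest of the application only ever sees the clean internal model, never the legacy one:

```python
# A legacy system returns data in its own idiosyncratic format.
def legacy_crm_fetch(customer_id):
    return {"CUST_NO": str(customer_id), "NM": "Ada Lovelace", "ACTV": "Y"}

class CustomerAcl:
    """Anti-corruption layer: translates the legacy model into the internal
    model, so legacy semantics never leak into the rest of the application."""

    def get_customer(self, customer_id):
        raw = legacy_crm_fetch(customer_id)
        return {
            "id": int(raw["CUST_NO"]),
            "name": raw["NM"],
            "active": raw["ACTV"] == "Y",
        }

acl = CustomerAcl()
print(acl.get_customer(7))  # {'id': 7, 'name': 'Ada Lovelace', 'active': True}
```

If the legacy format changes, only the ACL needs to be updated.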
@@ -1,8 +1,7 @@
# Backends for Frontend

-Backends for Frontend (BFF) in system design is a pattern that involves creating a separate backend service for each frontend client. This allows each client to have its own API, tailored to its specific needs, while still sharing a common set of underlying services and data. BFF can be used to provide a tailored API, decouple frontend and backend, and reduce complexity. BFF can be implemented using several different techniques such as Microservices and API Gateway.
+Create separate backend services to be consumed by specific frontend applications or interfaces. This pattern is useful when you want to avoid customizing a single backend for multiple interfaces. This pattern was first described by Sam Newman.

To learn more, visit the following links:

- [Why “Backend For Frontend” Application Architecture?](https://www.mobilelive.ca/blog/why-backend-for-frontend-application-architecture)
- [What is Backend for Frontend (BFF) pattern](https://medium.com/mobilepeople/backend-for-frontend-pattern-why-you-need-to-know-it-46f94ce420b0)
- [Backends for Frontends pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends)
@@ -1,8 +1,6 @@
# CQRS

-CQRS (Command Query Responsibility Segregation) in system design is a pattern that separates the responsibilities of handling read and write operations in a system. This pattern allows for the separation of concerns between the read and write operations, and can be used to improve the scalability, performance, and maintainability of a system.
-
-In this pattern, the read and write operations are handled by different components in the system. The write operations, known as commands, are handled by a Command component that updates the state of the system. The read operations, known as queries, are handled by a Query component that retrieves the current state of the system.
+CQRS stands for Command and Query Responsibility Segregation, a pattern that separates read and update operations for a data store. Implementing CQRS in your application can maximize its performance, scalability, and security. The flexibility created by migrating to CQRS allows a system to better evolve over time and prevents update commands from causing merge conflicts at the domain level.

Learn more from the following links:
@@ -1,8 +1,7 @@
# Compute Resource Consolidation

-Compute resource consolidation in system design is the process of combining multiple servers, storage devices, and network resources into a smaller number of more powerful and efficient systems. This approach can be used to reduce costs, improve performance, and simplify the management and maintenance of the IT infrastructure. Compute resource consolidation can be achieved through several different techniques such as Virtualization, Cloud computing, and Containers. It can also be used to reduce costs, improve performance, and simplify management and maintenance.
+Consolidate multiple tasks or operations into a single computational unit. This can increase compute resource utilization, and reduce the costs and management overhead associated with performing compute processing in cloud-hosted applications.

To learn more, visit the following links:

- [Compute Resource Consolidation pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/compute-resource-consolidation)
- [Tutorial - The Compute Resource Consolidation Pattern](https://www.youtube.com/watch?v=XzBmJvu6gpQ)
@@ -1,8 +1,7 @@
# External Configuration Store

-An External Configuration Store (ECS) in system design is a centralized, external location where system configuration settings are stored and managed. It separates the configuration of a system from the code of the system, making it easier to manage and update the configuration settings. It can be used in scenarios such as centralized configuration management, dynamic configuration and environment-specific configuration. It can be implemented using techniques such as environment variables, configuration files, and distributed key-value stores.
+Move configuration information out of the application deployment package to a centralized location. This can provide opportunities for easier management and control of configuration data, and for sharing configuration data across applications and application instances.

To learn more, visit the following links:

- [External Configuration Store pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/external-configuration-store)
- [External Configuration Store Pattern - Azure Cloud Design Patterns](https://www.youtube.com/watch?v=e-x1G4fRzf8)
@@ -1,8 +1,7 @@
# Gateway Aggregation

-Gateway Aggregation in system design is a pattern that involves using a single gateway to aggregate multiple services or microservices into a single endpoint. This allows for a simplified client-side API and can also provide additional functionality such as authentication, rate limiting, and observability. It can be used to simplify the client-side API, provide additional functionality, and decouple the client and services. It can be implemented using techniques such as API Gateway and Service Mesh.
+Use a gateway to aggregate multiple individual requests into a single request. This pattern is useful when a client must make multiple calls to different backend systems to perform an operation.

To learn more, visit the following links:

- [Gateway Aggregation pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/gateway-aggregation)
- [Overview of Gateway Aggregation Pattern](https://medium.com/design-microservices-architecture-with-patterns/gateway-aggregation-pattern-9ff92e1771d0)
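A toy sketch of the fan-out (the three backend services and their responses are invented): the gateway makes the individual calls and the client receives one combined response:

```python
# Backend calls the client would otherwise make one by one.
def fetch_profile(user_id):
    return {"user_id": user_id, "name": "Ada"}

def fetch_orders(user_id):
    return [{"order_id": 1}, {"order_id": 2}]

def fetch_recommendations(user_id):
    return ["book", "keyboard"]

def gateway_dashboard(user_id):
    """The gateway fans out to each backend and returns one aggregated
    response, so the client makes a single request instead of three."""
    return {
        "profile": fetch_profile(user_id),
        "orders": fetch_orders(user_id),
        "recommendations": fetch_recommendations(user_id),
    }

print(gateway_dashboard(7)["profile"]["name"])  # Ada
```

In a real gateway the backend calls would be issued concurrently over the network, which is where the latency saving comes from.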
@ -1,6 +1,6 @@

# Gateway Offloading

Gateway Offloading in system design is a pattern that involves using a gateway to offload certain tasks or processing from the backend services or microservices to the gateway itself. This can be used to reduce the load on the backend services, improve performance, and provide additional functionality such as caching, compression, and encryption. It can be implemented using techniques such as API Gateway and Edge Computing.

Offload shared or specialized service functionality to a gateway proxy. This pattern can simplify application development by moving shared service functionality, such as the use of SSL certificates, from other parts of the application into the gateway.

To learn more, visit the following links:
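To make the idea concrete, here is a small Python sketch in which a gateway wrapper takes over two shared concerns, a (hypothetical) token check and response compression, so the backend handler contains neither; in practice this role is played by a proxy such as a TLS-terminating API gateway:

```python
import gzip

def backend(request):
    # The backend service stays simple: no auth or compression code here.
    return {"status": 200, "body": b"hello from the backend"}

def offload_gateway(handler):
    # The gateway proxy takes on shared concerns (here: a hypothetical
    # token check and response compression) for every service behind it.
    def handle(request):
        if request.get("token") != "secret-token":
            return {"status": 401, "body": b"unauthorized"}
        response = handler(request)
        response["body"] = gzip.compress(response["body"])
        response["headers"] = {"Content-Encoding": "gzip"}
        return response
    return handle

gateway = offload_gateway(backend)
print(gateway({"token": "wrong"})["status"])         # 401
print(gateway({"token": "secret-token"})["status"])  # 200
```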
@ -1,8 +1,11 @@

# Gateway Routing

Gateway Routing in system design is a pattern that involves using a gateway to route requests to the appropriate backend service or microservice. The gateway acts as a single entry point for all incoming requests and routes them to the correct service based on the request's information such as the endpoint, headers, or payload. It can be used to decouple the client and services, provide additional functionality, and scale the system. It can be implemented using techniques such as API Gateway and Service Mesh.

Route requests to multiple services or multiple service instances using a single endpoint. The pattern is useful when you want to:

- Expose multiple services on a single endpoint and route to the appropriate service based on the request
- Expose multiple instances of the same service on a single endpoint for load balancing or availability purposes
- Expose differing versions of the same service on a single endpoint and route traffic across the different versions

To learn more, visit the following links:

- [Gateway Routing pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/gateway-routing)
- [Overview of Gateway Routing Pattern](https://medium.com/design-microservices-architecture-with-patterns/gateway-routing-pattern-f40eb56a2dd9)
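The core of the pattern is a routing table keyed on request attributes. A minimal Python sketch using path prefixes (the service addresses are hypothetical):

```python
# Hypothetical route table: path prefix -> internal backend address.
ROUTES = {
    "/orders": "http://orders-service.internal",
    "/orders/v2": "http://orders-v2.internal",   # a newer service version
    "/customers": "http://customers-service.internal",
}

def route(path):
    # Longest matching prefix wins, which lets specific routes (such as a
    # new version of a service) shadow more general ones.
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix] + path
    raise LookupError(f"no backend registered for {path}")

print(route("/orders/v2/17"))  # http://orders-v2.internal/orders/v2/17
```

Because clients only ever see the single public endpoint, backends can be split, merged, or re-versioned by editing this table alone.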
@ -1,8 +1,7 @@

# Leader Election

Leader Election in system design is a pattern that is used to elect a leader among a group of distributed nodes in a system. The leader is responsible for coordinating the activities of the other nodes and making decisions on behalf of the group. Leader Election is important in distributed systems, as it ensures that there is a single point of coordination and decision-making, reducing the risk of conflicting actions or duplicate work. Leader Election can be used to ensure a single point of coordination, provide fault tolerance, and scalability. There are several algorithms such as Raft, Paxos, and Zab that can be used to implement Leader Election in distributed systems.

Coordinate the actions performed by a collection of collaborating instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the others. This can help to ensure that instances don't conflict with each other, cause contention for shared resources, or inadvertently interfere with the work that other instances are performing.

To learn more, visit the following links:

- [Overview of Leader Election](https://aws.amazon.com/builders-library/leader-election-in-distributed-systems/)
- [What is Leader Election in system design?](https://www.enjoyalgorithms.com/blog/leader-election-system-design)
- [Overview of Leader Election](https://learn.microsoft.com/en-us/azure/architecture/patterns/leader-election)
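The simplest election rule, sketched below in Python, is deterministic: every instance applies the same function to the current membership list, so all survivors agree on the leader without extra coordination. This is only a sketch of the idea; production systems use leases or consensus protocols such as Raft or ZooKeeper to handle partial failures safely:

```python
def elect_leader(alive_instances):
    # All instances run the same deterministic rule ("lowest id wins")
    # over the same membership view, so they agree on one leader.
    return min(alive_instances)

members = {"node-b", "node-a", "node-c"}
leader = elect_leader(members)
print(leader)  # node-a

# If the leader fails, the survivors simply re-run the election.
members.discard(leader)
print(elect_leader(members))  # node-b
```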
@ -1,8 +1,7 @@

# Pipes and Filters

Pipes and Filters in system design is a pattern that is used to decompose a large system into smaller, reusable components that can be combined in different ways to perform different tasks. It is based on the idea of data flowing through a series of connected "pipes", where each "pipe" represents a processing step or "filter" that performs a specific task on the data. It can be used to decompose a large system, increase flexibility and increase reusability. It can be implemented in several ways such as pipeline and Chain of Responsibility pattern.

Decompose a task that performs complex processing into a series of separate elements that can be reused. This can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.

To learn more, visit the following links:

- [Pipe and Filter Architectural Style](https://cs.uwaterloo.ca/~m2nagapp/courses/CS446/1181/Arch_Design_Activity/PipeFilter.pdf)
- [What are Pipes and Filters?](https://syedhasan010.medium.com/pipe-and-filter-architecture-bd7babdb908)
- [Pipe and Filter Architectural Style](https://learn.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters)
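The pattern is easy to sketch in a few lines of Python: each filter is an independent function, and the pipeline just feeds each filter's output into the next (the text-processing filters here are illustrative; in a distributed deployment each filter would be a separately scaled service connected by queues):

```python
def pipeline(*filters):
    # Compose independent, reusable filters; each consumes the output of
    # the previous one, like data flowing through connected pipes.
    def run(data):
        for f in filters:
            data = f(data)
        return data
    return run

# Three small filters for a hypothetical text-processing task.
strip = str.strip
lower = str.lower
tokenize = str.split

process = pipeline(strip, lower, tokenize)
print(process("  Pipes AND Filters  "))  # ['pipes', 'and', 'filters']
```

Because the filters share no state, they can be reordered, reused in other pipelines, or scaled independently.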
@ -1,8 +1,9 @@

# Sidecar

A Sidecar in system design is a pattern that involves running an additional process alongside a primary process, in order to provide additional functionality to the primary process. The sidecar process is typically used to manage cross-cutting concerns such as logging, monitoring, security, or networking. It is used to provide additional functionality, decouple the primary process from the additional functionality, and allow for easier upgrades. It can be implemented in several ways such as Service Mesh and Sidecar Container.

Deploy components of an application into a separate process or container to provide isolation and encapsulation. This pattern can also enable applications to be composed of heterogeneous components and technologies.

This pattern is named Sidecar because it resembles a sidecar attached to a motorcycle. In the pattern, the sidecar is attached to a parent application and provides supporting features for the application. The sidecar also shares the same lifecycle as the parent application, being created and retired alongside the parent. The sidecar pattern is sometimes referred to as the sidekick pattern and is a decomposition pattern.

To learn more, visit the following links:

- [Sidecar pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/sidecar)
- [What is Sidecar Pattern?](https://www.oreilly.com/library/view/designing-distributed-systems/9781491983638/ch02.html)
@ -1,8 +1,7 @@

# Static Content Hosting

Static Content Hosting in system design is a technique used to serve static resources such as images, stylesheets, and JavaScript files, from a dedicated server or service, rather than from the main application server. This approach can be used to improve the performance, scalability, and availability of a system. Static content hosting can be used in scenarios like performance, scalability, and availability. Static content hosting can be implemented using several different techniques such as Content Delivery Network (CDN), Object Storage and File Server.

Deploy static content to a cloud-based storage service that can deliver it directly to the client. This can reduce the need for potentially expensive compute instances.

Learn more from the following links:

- [The pros and cons of the Static Content Hosting](https://www.redhat.com/architect/pros-and-cons-static-content-hosting-architecture-pattern)
- [Static Content Hosting pattern](https://learn.microsoft.com/en-us/azure/architecture/patterns/static-content-hosting)
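The application's only job in this pattern is to emit links that point at the storage service rather than at itself. A tiny Python sketch (the CDN domain is a made-up example):

```python
# Hypothetical storage/CDN origin where the static files are deployed.
STATIC_BASE = "https://cdn.example-static.net"

def static_url(path):
    # Pages generated by the application link straight to the storage
    # service, so images, CSS, and scripts never hit the compute instances.
    return f"{STATIC_BASE}/{path.lstrip('/')}"

print(static_url("/css/site.css"))  # https://cdn.example-static.net/css/site.css
```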
@ -1,8 +1,7 @@

# Strangler fig

Strangler Fig in system design is a pattern that is used to gradually replace a monolithic application with a microservices-based architecture. It's based on the idea of a "strangler fig" vine slowly wrapping around and strangling a tree, gradually replacing it. This pattern can be used to gradually replace a monolithic application, reduce the impact of changes, and preserve existing functionality. It can be implemented in several ways such as API Gateway and Service Mesh.

Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. As features from the legacy system are replaced, the new system eventually replaces all of the old system's features, strangling the old system and allowing you to decommission it.

To learn more, visit the following links:

- [The Strangler fig pattern](https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-aspnet-web-services/fig-pattern.html)
- [What is Strangler fig?](https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig)
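The mechanism behind the migration is a facade in front of both systems that decides, per request, whether the feature has been migrated yet. A minimal Python sketch (the paths are hypothetical examples):

```python
# Routes whose functionality has already been moved to new services.
MIGRATED = {"/billing", "/invoices"}   # hypothetical example paths

def choose_backend(path):
    # The strangler facade sends migrated features to the new services and
    # everything else to the legacy system. As MIGRATED grows, the old
    # system handles less and less, until it can be decommissioned.
    if any(path.startswith(p) for p in MIGRATED):
        return "new-service"
    return "legacy-monolith"

print(choose_backend("/billing/42"))  # new-service
print(choose_backend("/reports"))     # legacy-monolith
```

Migration progress is then just a change to the route set, with no client-visible change at all.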
@ -1,8 +1,7 @@

# Design and Implementation

Design and Implementation in system design refers to the process of creating a system that meets the requirements and goals of the stakeholders. It involves several steps such as requirements gathering, design, implementation, testing, deployment, and maintenance. The design phase involves creating a high-level plan for the system, including the overall architecture, the components that will be used, and the interfaces between them. The implementation phase involves taking the design and creating a working system using the chosen technologies and tools.

Good design encompasses factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and in other scenarios. Decisions made during the design and implementation phase have a huge impact on the quality and the total cost of ownership of cloud-hosted applications and services.

To learn more, visit the following links:

- [What is Design and Implementation?](https://www.marketlinks.org/good-practice-center/value-chain-wiki/design-and-implementation-overview)
- [Overview of System Design and Implementation](https://www.tutorialspoint.com/operating-system-design-and-implementation)
- [Design and implementation patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/category/design-implementation)
@ -1,6 +1,6 @@

# Queue-Based load leveling

Queue-based load leveling in system design refers to a technique for managing the workload of a system by using a queue to buffer incoming requests and process them at a steady pace. By using a queue, the system can handle bursts of incoming requests without being overwhelmed, as well as prevent idle periods where there are not enough requests to keep the system busy. It allows to smooth out bursts of incoming requests, prevent idle periods, Provide a way to prioritize requests, and provide a way to monitor requests. It can be implemented in several different ways such as In-memory queue and Persistent queue.

Queue-based load leveling refers to a technique for managing the workload of a system by using a queue to buffer incoming requests and process them at a steady pace. By using a queue, the system can handle bursts of incoming requests without being overwhelmed, as well as prevent idle periods where there are not enough requests to keep the system busy. It smooths out bursts of incoming requests, prevents idle periods, and provides ways to prioritize and monitor requests. It can be implemented in several different ways, such as with an in-memory queue or a persistent queue.

To learn more, visit the following links:
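A minimal in-process Python sketch of the idea: producers dump a burst of work into a queue, and a single worker drains it at its own steady pace (a real deployment would use a persistent message queue and separate worker processes):

```python
import queue
import threading

tasks = queue.Queue()   # buffers bursts of incoming requests
results = []

def worker():
    # One worker drains the queue at its own steady pace, no matter how
    # bursty the producers are.
    while True:
        item = tasks.get()
        if item is None:   # sentinel: shut down
            break
        results.append(item * 2)   # stand-in for real processing
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(5):       # a sudden burst of requests
    tasks.put(i)
tasks.put(None)
t.join()
print(results)  # [0, 2, 4, 6, 8]
```

The queue decouples the arrival rate from the processing rate: the burst is absorbed by the buffer instead of overwhelming the worker.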
@ -1,6 +1,6 @@

# Throttling

Throttling in system design refers to a technique for limiting the rate at which requests are processed by a system. This is often used to prevent the system from being overwhelmed by a high volume of requests, or to ensure that resources are used efficiently. Throttling can be applied to incoming requests, outgoing requests or both, and can be implemented at different levels of the system, such as at the network, application, or service level. It allows to prevent system overload, ensure efficient resource usage, provide Quality of Service (QoS) and prevent Denial of Service (DoS). It can be implemented in several different ways such as Rate limiting, Leaking bucket and Token bucket.

Throttling refers to a technique for limiting the rate at which requests are processed by a system. This is often used to prevent the system from being overwhelmed by a high volume of requests, or to ensure that resources are used efficiently. Throttling can be applied to incoming requests, outgoing requests, or both, and can be implemented at different levels of the system, such as the network, application, or service level. It helps prevent system overload, ensure efficient resource usage, provide Quality of Service (QoS), and prevent Denial of Service (DoS) attacks. It can be implemented in several different ways, such as rate limiting, leaky bucket, and token bucket algorithms.

To learn more, visit the following links:
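Of the algorithms mentioned above, the token bucket is the easiest to sketch. A minimal Python version (a sketch, not production code: real limiters also need thread safety and per-client buckets):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (a sketch, not production code)."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

The `capacity` bounds how large a burst gets through at once, while `rate` bounds the sustained throughput; rejected requests can be dropped, queued, or answered with an HTTP 429.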
@ -1,6 +1,6 @@

# Resilience

Resilience in system design refers to the ability of a system to withstand and recover from disruptions, failures or unexpected conditions. It means the system can continue to function and provide service even when faced with stressors such as high traffic, failures or unexpected changes. Resilience can be achieved by designing the system to be redundant, fault-tolerant, scalable, having automatic recovery, and monitoring and alerting mechanisms. It can be measured by Recovery Time Objective (RTO), Recovery Point Objective (RPO), Mean time to failure (MTTF), and Mean time to recovery (MTTR).

Resilience refers to the ability of a system to withstand and recover from disruptions, failures, or unexpected conditions. It means the system can continue to function and provide service even when faced with stressors such as high traffic, failures, or unexpected changes. Resilience can be achieved by designing the system with redundancy, fault tolerance, scalability, automatic recovery, and monitoring and alerting mechanisms. It can be measured using metrics such as Recovery Time Objective (RTO), Recovery Point Objective (RPO), Mean Time To Failure (MTTF), and Mean Time To Recovery (MTTR).

Learn more from the following links:
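One small building block of automatic recovery is retrying transient failures with exponential backoff. A hedged Python sketch (the `flaky` operation is a made-up stand-in for a call to an unreliable dependency):

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    # Retry a transient failure with exponential backoff before giving up.
    for n in range(attempts):
        try:
            return operation()
        except Exception:
            if n == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** n))

calls = {"count": 0}

def flaky():
    # Fails twice, then succeeds, simulating a transient outage.
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("temporary failure")
    return "ok"

print(with_retries(flaky))  # ok
```

Retries are only one piece; they are usually combined with timeouts, circuit breakers, and redundancy to reach the RTO/RPO targets above.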
@ -1,6 +1,6 @@

# Queue-Based load leveling

Queue-based load leveling in system design refers to a technique for managing the workload of a system by using a queue to buffer incoming requests and process them at a steady pace. By using a queue, the system can handle bursts of incoming requests without being overwhelmed, as well as prevent idle periods where there are not enough requests to keep the system busy. It allows to smooth out bursts of incoming requests, prevent idle periods, Provide a way to prioritize requests, and provide a way to monitor requests. It can be implemented in several different ways such as In-memory queue and Persistent queue.

Queue-based load leveling refers to a technique for managing the workload of a system by using a queue to buffer incoming requests and process them at a steady pace. By using a queue, the system can handle bursts of incoming requests without being overwhelmed, as well as prevent idle periods where there are not enough requests to keep the system busy. It smooths out bursts of incoming requests, prevents idle periods, and provides ways to prioritize and monitor requests. It can be implemented in several different ways, such as with an in-memory queue or a persistent queue.

To learn more, visit the following links:
@ -1,6 +1,6 @@

# Scheduling Agent Supervisor

Scheduling Agent Supervisor in system design is a pattern that allows for the scheduling and coordination of tasks or processes by a central entity, known as the Scheduling Agent. The Scheduling Agent is responsible for scheduling tasks, monitoring their execution, and handling errors or failures. This pattern can be used to build robust and fault-tolerant systems, by ensuring that tasks are executed as intended and that any errors or failures are handled appropriately.

Scheduling Agent Supervisor is a pattern that allows for the scheduling and coordination of tasks or processes by a central entity, known as the Scheduling Agent. The Scheduling Agent is responsible for scheduling tasks, monitoring their execution, and handling errors or failures. This pattern can be used to build robust and fault-tolerant systems, by ensuring that tasks are executed as intended and that any errors or failures are handled appropriately.

Learn more from the following links:
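A compressed Python sketch of the roles: the scheduler dispatches each workflow step to its agent, records the outcome as state, and re-schedules failed steps up to a retry limit (the step names and failure behaviour are illustrative; a real implementation would persist the state and run the supervisor as a separate recurring process):

```python
def run_workflow(steps, max_retries=2):
    # Scheduler: dispatch each step to its agent and record its state.
    # Supervisor: on failure, re-schedule the step up to a retry limit.
    state = {name: "pending" for name in steps}
    for name, agent in steps.items():
        for attempt in range(max_retries + 1):
            try:
                agent()
                state[name] = "done"
                break
            except Exception:
                state[name] = "failed"
    return state

attempts = {"reserve": 0}

def reserve_stock():
    # Hypothetical agent that fails once, then succeeds.
    attempts["reserve"] += 1
    if attempts["reserve"] == 1:
        raise RuntimeError("remote service unavailable")

def take_payment():
    pass  # always succeeds in this sketch

print(run_workflow({"reserve": reserve_stock, "payment": take_payment}))
```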
@ -1,8 +1,7 @@

# Cloud Design Patterns

Cloud design patterns in system design are solutions to common problems that arise when building systems that run on a cloud platform. These patterns provide a way to design and implement systems that can take advantage of the unique characteristics of the cloud, such as scalability, elasticity, and pay-per-use pricing. Some common cloud design patterns include Scalability, Elasticity, Fault Tolerance, Microservices, Serverless, Data Management, Front-end and Back-end separation and Hybrid.

Cloud design patterns are solutions to common problems that arise when building systems that run on a cloud platform. These patterns provide a way to design and implement systems that can take advantage of the unique characteristics of the cloud, such as scalability, elasticity, and pay-per-use pricing. Common concerns they address include scalability, elasticity, fault tolerance, microservices, serverless computing, data management, separation of front end and back end, and hybrid deployments.

To learn more, visit the following links:

- [AWS Cloud Design Patterns](https://www.bmc.com/blogs/aws-cloud-design-patterns/)
- [Get started with Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/)
- [Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/)