Add content to asynchronism

pull/3331/head
Kamran Ahmed 2 years ago
parent 9a2bc75646
commit 3b05a615d8
  1. src/roadmaps/system-design/content/113-asynchronism/100-message-queues.md (13 changed lines)
  2. src/roadmaps/system-design/content/113-asynchronism/101-task-queues.md (3 changed lines)
  3. src/roadmaps/system-design/content/113-asynchronism/102-back-pressure.md (6 changed lines)
  4. src/roadmaps/system-design/content/113-asynchronism/index.md (6 changed lines)
  5. src/roadmaps/system-design/content/114-idempotent-operations.md (18 changed lines)

@ -7,17 +7,14 @@ Message queues receive, hold, and deliver messages. If an operation is too slow
The user is not blocked and the job is processed in the background. During this time, the client might optionally do a small amount of processing to make it seem like the task has completed. For example, if posting a tweet, the tweet could be instantly posted to your timeline, but it could take some time before your tweet is actually delivered to all of your followers.
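As a rough illustration of this pattern, here is a minimal sketch (assuming a local Redis instance, the `redis-py` client, and a made-up queue name and payload) in which the request handler only enqueues the job, while a separate worker fans the tweet out to followers in the background:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def post_tweet(user_id: int, text: str) -> None:
    # Fast path: the tweet can appear on the author's own timeline immediately ...
    print(f"tweet by user {user_id} added to their own timeline")
    # ... while fan-out to followers is deferred to a background worker.
    r.lpush("fanout_queue", json.dumps({"user_id": user_id, "text": text}))

def fanout_worker() -> None:
    # Runs in a separate process; blocks until a job is available.
    while True:
        _, raw = r.brpop("fanout_queue")
        job = json.loads(raw)
        deliver_to_followers(job["user_id"], job["text"])  # hypothetical helper

def deliver_to_followers(user_id: int, text: str) -> None:
    print(f"delivering tweet from user {user_id} to followers: {text}")
```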
- [Redis](https://redis.io/) is useful as a simple message broker, but messages can be lost.
- [RabbitMQ](https://www.rabbitmq.com/) is popular but requires you to adapt to the AMQP protocol and manage your own nodes.
- [AWS SQS](https://aws.amazon.com/sqs/) is hosted but can have high latency and has the possibility of messages being delivered twice.
- [Apache Kafka](https://kafka.apache.org/) is a distributed event store and stream-processing platform.
To learn more, visit the following links:
- [What is Redis?](https://redis.io/)
- [RabbitMQ in Message Queues](https://www.rabbitmq.com/)
- [Overview of Amazon SQS](https://aws.amazon.com/sqs/)
- [Apache Kafka](https://kafka.apache.org/)

@ -2,9 +2,8 @@
Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally intensive jobs in the background.
[Celery](https://docs.celeryproject.org/en/stable/) has support for scheduling and primarily has Python support.
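For instance, a minimal Celery task might look like the sketch below (the Redis broker URL and the `resize_image` task are assumptions made for illustration; any broker Celery supports would work):

```python
from celery import Celery

# Broker URL is an assumption; Celery also supports RabbitMQ, SQS, etc.
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def resize_image(image_id: int, width: int, height: int) -> str:
    # Computationally intensive work runs on a worker, not in the web request.
    return f"resized image {image_id} to {width}x{height}"

# The caller enqueues the task and returns immediately:
# resize_image.delay(42, 800, 600)
```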
To learn more, visit the following links:
- [Overview of Task Queues](https://github.com/donnemartin/system-design-primer#task%20queues)
- [Celery - Distributed Task Queue](https://docs.celeryq.dev/en/stable/)

@ -1,7 +1,3 @@
# Back Pressure
If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. [Back pressure](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html) can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff).
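As a rough sketch of the idea (the queue size limit, status codes, and retry delays below are arbitrary assumptions, not values from the text): the server rejects work once its bounded queue is full, and the client retries later with exponential backoff.

```python
import queue
import random
import time

# A bounded queue is what applies the back pressure.
job_queue: "queue.Queue[dict]" = queue.Queue(maxsize=1000)

def handle_request(job: dict) -> int:
    """Return an HTTP-style status code for an incoming job."""
    try:
        job_queue.put_nowait(job)   # accept only while there is room
        return 202                  # accepted for background processing
    except queue.Full:
        return 503                  # server busy: ask the client to try again later

def submit_with_backoff(job: dict, max_attempts: int = 5) -> bool:
    """Client-side retry with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        if handle_request(job) != 503:
            return True
        time.sleep((2 ** attempt) + random.random())  # 1s, 2s, 4s, ... plus jitter
    return False
```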
To learn more, visit the following links:
- [Overview of Back Pressure](https://github.com/donnemartin/system-design-primer#back%20pressure)

@ -4,5 +4,9 @@ Asynchronous workflows help reduce request times for expensive operations that w
To learn more, visit the following links:
- [Overview of Asynchronism](https://github.com/donnemartin/system-design-primer#Asynchronism)
- [Asynchronous Thinking for Microservice System Design](https://www.datamachines.io/blog/asynchronous-thinking-for-microservice-system-design)
- [Patterns for microservices - Sync vs Async](https://medium.com/inspiredbrilliance/patterns-for-microservices-e57a2d71ff9e)
- [It's all a numbers game](https://www.youtube.com/watch?v=1KRYH75wgy4)
- [Applying back pressure when overloaded](http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html)
- [Little's law](https://en.wikipedia.org/wiki/Little%27s_law)
- [What is the difference between a message queue and a task queue?](https://www.quora.com/What-is-the-difference-between-a-message-queue-and-a-task-queue-Why-would-a-task-queue-require-a-message-broker-like-RabbitMQ-Redis-Celery-or-IronMQ-to-function)

@ -1,6 +1,22 @@
# Idempotent Operations
Idempotent operations are operations that can be applied multiple times without changing the result beyond the initial application. In other words, if an operation is idempotent, it will have the same effect whether it is executed once or multiple times: making multiple identical requests has the same effect as making a single request.
For example, consider an HTTP PUT request to update a resource. If the request is idempotent, it will have the same effect whether it is executed once or multiple times, regardless of the state of the resource. In contrast, a non-idempotent operation such as an HTTP POST request, which creates a new resource, will have a different effect each time it is executed.
Idempotent operations are useful in distributed systems, where network failures and other errors may cause the same operation to be executed multiple times. Idempotent operations can help to ensure that the system remains in a consistent state, even in the face of these types of errors.
Examples of idempotent operations are:
- HTTP GET requests
- HTTP PUT requests that update a resource to a specific state
- Database operations such as SELECT statements
Examples of non-idempotent operations are:
- HTTP POST requests that create a new resource
- HTTP DELETE requests
- Database operations that modify data such as INSERT, UPDATE, DELETE.
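To make the PUT-versus-POST distinction concrete, here is a small illustrative sketch (the in-memory `users` store and handler names are made up for the example and not tied to any particular framework):

```python
users: dict[int, dict] = {}
next_id = 1

def put_user(user_id: int, data: dict) -> None:
    # Idempotent: repeating the same PUT leaves the system in the same state.
    users[user_id] = data

def post_user(data: dict) -> int:
    # Not idempotent: each POST creates another resource.
    global next_id
    user_id = next_id
    next_id += 1
    users[user_id] = data
    return user_id

put_user(42, {"name": "Ada"})
put_user(42, {"name": "Ada"})   # state unchanged by the repeat
post_user({"name": "Ada"})      # creates user 1
post_user({"name": "Ada"})      # creates user 2: a different result each time
assert len(users) == 3
```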
To learn more, visit the following links:
