Adding content to 105-caching-strategies

content/system-design
syedmouaazfarrukh 2 years ago
parent bf5f25d30e
commit 91d0beaaf2
  1. 17
      src/roadmaps/system-design/content/112-caching/105-caching-strategies/100-cache-aside.md
  2. 14
      src/roadmaps/system-design/content/112-caching/105-caching-strategies/101-write-through.md
  3. 16
      src/roadmaps/system-design/content/112-caching/105-caching-strategies/102-write-behind.md
  4. 11
      src/roadmaps/system-design/content/112-caching/105-caching-strategies/103-refresh-ahead.md
  5. 25
      src/roadmaps/system-design/content/112-caching/105-caching-strategies/index.md

@ -1 +1,16 @@
# Cache-aside
The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:
- Look for entry in cache, resulting in a cache miss
- Load entry from the database
- Add entry to cache
- Return entry
Memcached is generally used in this manner. Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
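The four steps above can be sketched as follows. This is a minimal illustration, not a real Memcached client: the dict-based `cache` and `database` are hypothetical in-memory stand-ins.

```python
# Hypothetical in-memory stand-ins for a real cache (e.g. Memcached) and a database.
cache = {}
database = {"user:1": {"name": "Ada"}}

def get(key):
    """Cache-aside read: the application, not the cache, talks to storage."""
    entry = cache.get(key)       # 1. look for entry in cache
    if entry is None:            #    cache miss
        entry = database[key]    # 2. load entry from the database
        cache[key] = entry       # 3. add entry to cache
    return entry                 # 4. return entry
```

The first `get("user:1")` misses and loads from the database; subsequent calls for the same key are served from the cache.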
Learn more from the following links:
- [Getting started with Cache-aside](https://github.com/donnemartin/system-design-primer#cache-aside)
- [What is Memcached?](https://memcached.org/)

@ -1 +1,13 @@
# Write-through
The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:
- Application adds/updates entry in cache
- Cache synchronously writes entry to data store
- Return
Write-through is slow overall because of the synchronous write, but subsequent reads of just-written data are fast. Users are generally more tolerant of latency when updating data than when reading it. Data in the cache is never stale.
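A minimal sketch of the write-through steps, assuming hypothetical dict-based stand-ins for the cache and the data store:

```python
cache = {}
database = {}

class WriteThroughCache:
    """The application treats the cache as the main data store;
    the cache persists every write synchronously."""

    def put(self, key, value):
        cache[key] = value       # 1. application adds/updates entry in cache
        database[key] = value    # 2. cache synchronously writes entry to the data store
        return value             # 3. return only after both writes complete

    def get(self, key):
        return cache[key]        # just-written data is always present and never stale
```

Because `put` does not return until the data store write completes, reads never see stale data, at the cost of slower writes.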
To learn more, visit the following links:
- [Getting started with Write-through](https://github.com/donnemartin/system-design-primer#write-through)

@ -1 +1,15 @@
# Write-behind
In write-behind, the application does the following:
- Add/update entry in cache
- Asynchronously write entry to the data store, improving write performance
## Disadvantages of write-behind:
- There could be data loss if the cache goes down prior to its contents hitting the data store.
- It is more complex to implement write-behind than it is to implement cache-aside or write-through.
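The pattern and its data-loss risk can be sketched as below. The explicit `write_queue` and `flush` function are a hypothetical simplification; a real write-behind cache drains the queue with a background worker.

```python
from collections import deque

cache = {}
database = {}
write_queue = deque()   # writes waiting to be persisted

def put(key, value):
    cache[key] = value                 # update the cache immediately...
    write_queue.append((key, value))   # ...and queue the data-store write

def flush():
    # In practice a background worker drains this queue asynchronously.
    # If the cache dies before the queue is drained, the queued writes
    # are lost (the data-loss risk noted above).
    while write_queue:
        key, value = write_queue.popleft()
        database[key] = value
```

Between `put` and `flush`, the cache and the data store disagree, which is exactly the window in which a crash loses data.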
To learn more, visit the following links:
- [Getting started with Write-behind](https://github.com/donnemartin/system-design-primer#write-behind)

@ -1 +1,10 @@
# Refresh-ahead
You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration. Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
## Disadvantage of refresh-ahead:
- If the cache cannot accurately predict which items will be needed in the future, refresh-ahead can perform worse than not using it at all.
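A minimal sketch of the idea, with hypothetical `TTL` and `REFRESH_WINDOW` values; a real cache would perform the refresh-ahead reload asynchronously rather than inline as here.

```python
import time

database = {"config": "v1"}
TTL = 60.0              # entry lifetime in seconds (illustrative value)
REFRESH_WINDOW = 10.0   # refresh entries accessed this close to expiry

cache = {}  # key -> (value, expires_at)

def get(key):
    now = time.monotonic()
    value, expires_at = cache.get(key, (None, 0.0))
    if value is None or now >= expires_at or expires_at - now < REFRESH_WINDOW:
        # Miss, expired, or close to expiring: (re)load and reset the TTL.
        # The third condition is the refresh-ahead: a recently accessed
        # entry is refreshed before it expires, so the next read stays fast.
        value = database[key]
        cache[key] = (value, now + TTL)
    return value
```

An entry that is read again shortly before its expiry gets refreshed proactively, so the caller never pays the cost of a cold reload.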
To learn more, visit the following links:
- [Getting started with Refresh-ahead](https://github.com/donnemartin/system-design-primer#refresh-ahead)

@ -1 +1,24 @@
# Caching Strategies
Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher first looks up whether the request has been made before and, if so, returns the previous result, saving the actual execution.
Databases often benefit from a uniform distribution of reads and writes across its partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
## Client caching
Caches can be located on the client side (OS or browser), server side, or in a distinct cache layer.
## CDN caching
CDNs are considered a type of cache.
## Web server caching
Reverse proxies and caches such as Varnish can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
## Database caching
Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
## Application caching
In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so cache invalidation algorithms such as least recently used (LRU) can help invalidate 'cold' entries and keep 'hot' data in RAM.
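The LRU policy mentioned above can be sketched with Python's `OrderedDict`; this is an illustrative in-process implementation, not how Memcached or Redis implement eviction internally.

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache that evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)         # mark as most recently used ('hot')
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used ('cold')
```

Because RAM is limited, keeping the eviction order updated on every access is what lets the cache hold on to 'hot' data while 'cold' entries fall out.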
To learn more, visit the following links:
- [Getting started with Cache](https://github.com/donnemartin/system-design-primer#cache)