ElastiCache

ByteByteGo | Top caching strategies | Memcached vs Redis

Overview

  • Caches are in-memory databases with very high performance and low latency.

  • Supports two open-source engines: Redis & Memcached

  • Benefits

    • reduce the read workload on your databases

    • make your app stateless

    • fully managed by AWS (patching, optimization, setup, configuration, monitoring, failure recovery, and backups)

  • Use cases:

    • User session storage

    • Improve the performance of various data stores (relational DBs, NoSQL DBs, APIs) by caching frequently accessed data.


Caching

Redis vs Memcached

Memcached is designed for simplicity, while Redis offers a rich set of features that make it effective for a wide range of use cases. For high availability, Redis beats Memcached: Redis supports replication and Multi-AZ with automatic failover, while Memcached does not.

  • Redis offers encryption (in transit & at rest) and PCI-DSS compliance.

    • Use cases:

      • Real-time transaction

      • Chat

      • Gaming leaderboard

      • BI & analytics

  • Memcached supports Auto Discovery. Auto Discovery allows applications to automatically identify all of the nodes in a Memcached cluster, simplifying the management of Memcached nodes and allowing your application to scale in and out without manual intervention (see the sketch after this list).

    • You can choose Memcached over Redis if you have the following requirements:

      • You need the simplest model possible.

      • You need to run large nodes with multiple cores or threads.

      • You need the ability to scale out and in, adding and removing nodes as demand on your system increases and decreases.

      • You need to cache objects, such as a database.

      • You need a multithreaded architecture.
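Under the hood, Auto Discovery works by polling the cluster's configuration endpoint, which answers the `config get cluster` command with the current node list. A rough sketch of that call is below; the endpoint name is a placeholder, and a real application would use a cluster-aware client (e.g. the ElastiCache Cluster Client) rather than a raw socket:

```python
import socket

# Placeholder configuration endpoint (the .cfg. hostname, on the standard Memcached port).
CONFIG_ENDPOINT = ("my-cluster.xxxxxx.cfg.use1.cache.amazonaws.com", 11211)

with socket.create_connection(CONFIG_ENDPOINT, timeout=2) as sock:
    sock.sendall(b"config get cluster\r\n")   # Auto Discovery command
    response = sock.recv(4096).decode()

# The response carries a config version plus "hostname|ip|port" entries,
# one per cache node currently in the cluster; clients re-poll periodically
# to pick up nodes as the cluster scales in and out.
print(response)
```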

Write data

Write-through

  • write to both the cache & the DB on every write

  • With a write-through strategy, read requests can be served from ElastiCache efficiently -> faster retrieval for read-intensive workloads (see the sketch after this list).

  • Pros: data consistency, up-to-date data, reduced risk of data loss

  • Cons:

    • higher latency on write operations (every write hits both the cache and the DB)

    • old/stale cache entries still have to be invalidated programmatically.
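A minimal write-through sketch, assuming a redis-py client pointed at the cluster endpoint and a hypothetical `save_to_db()` standing in for the real database write:

```python
import json

import redis

# Redis client for the ElastiCache endpoint (placeholder host).
cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379,
                    decode_responses=True)

def save_to_db(user_id: str, profile: dict) -> None:
    """Placeholder for the real database write (e.g. an RDS UPDATE)."""
    ...

def write_through(user_id: str, profile: dict) -> None:
    save_to_db(user_id, profile)                       # 1. write the source of truth
    cache.set(f"user:{user_id}", json.dumps(profile))  # 2. update the cache in the same request path
```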

Write-around

  • write to the DB only, bypassing the cache

  • Pros: reduces cache pollution (data that is written but rarely read never fills the cache)

  • Cons: the cache is not updated on write -> the next read of that data is a miss, increasing read latency (see the sketch below)
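A write-around sketch under the same assumptions (redis-py client, hypothetical `save_to_db()`); the cache only fills in later via lazy loading on a read miss:

```python
import redis

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379,
                    decode_responses=True)

def save_to_db(key: str, value: dict) -> None:
    """Placeholder for the real database write."""
    ...

def write_around(key: str, value: dict) -> None:
    save_to_db(key, value)   # the database is the only write target
    cache.delete(key)        # drop any stale cached copy; the next read reloads it lazily
```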

Write-back

  • write to the cache first -> asynchronously persist to the DB later

  • Pros: lower write latency

  • Cons: risk of data loss if the cache fails before the write reaches the DB (see the sketch below)
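A write-back (write-behind) sketch under the same assumptions; the DB write is deferred to a background worker, which is exactly where the data-loss risk comes from (anything still queued when the process or cache dies is lost):

```python
import json
import queue
import threading

import redis

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379,
                    decode_responses=True)
pending = queue.Queue()  # writes waiting to be persisted

def save_to_db(key: str, value: dict) -> None:
    """Placeholder for the real database write."""
    ...

def write_back(key: str, value: dict) -> None:
    cache.set(f"item:{key}", json.dumps(value))  # fast, synchronous cache write
    pending.put((key, value))                    # DB write is queued, not awaited

def flush_worker() -> None:
    while True:
        key, value = pending.get()
        save_to_db(key, value)                   # asynchronous persistence
        pending.task_done()

threading.Thread(target=flush_worker, daemon=True).start()
```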

Read data

Lazy loading (Cache Aside)

  • Data is loaded into the cache only when it is first requested (on a cache miss): the application checks the cache, falls back to the DB on a miss, then back-fills the cache (see the sketch below).

  • The application must manage both the cache and the storage, which complicates the code.

  • Ensuring data consistency is challenging because there are no atomic operations spanning the cache and the storage.
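A cache-aside sketch, assuming a redis-py client and a hypothetical `load_from_db()`; the 300-second TTL is an arbitrary choice:

```python
import json

import redis

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379,
                    decode_responses=True)

def load_from_db(user_id: str) -> dict:
    """Placeholder for the real database read."""
    ...

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                    # cache hit
        return json.loads(cached)
    user = load_from_db(user_id)              # cache miss -> read the source of truth
    cache.setex(key, 300, json.dumps(user))   # back-fill with a TTL to limit staleness
    return user
```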

Read through

  • Pros: the application always reads from the cache, and the cache layer itself loads missing data from the backing store -> simpler application code (see the sketch below).
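ElastiCache itself does not fetch from your database on a miss, so a read-through layer is something you (or a caching library) build on top. A minimal sketch with an assumed `load_from_db()` loader:

```python
import json

import redis

class ReadThroughCache:
    """The application only talks to this layer; it loads from the DB on a miss."""

    def __init__(self, loader, ttl: int = 300):
        self._redis = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com",
                                  port=6379, decode_responses=True)
        self._loader = loader  # function that knows how to read the backing store
        self._ttl = ttl

    def get(self, key: str) -> dict:
        cached = self._redis.get(key)
        if cached is not None:
            return json.loads(cached)
        value = self._loader(key)                            # the cache layer loads on a miss
        self._redis.setex(key, self._ttl, json.dumps(value))
        return value

def load_from_db(key: str) -> dict:
    """Placeholder for the real database read."""
    ...

users = ReadThroughCache(loader=load_from_db)
profile = users.get("user:42")  # application code never touches the DB directly
```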

Session store

  • store temporary session data in the cache with a TTL so it expires automatically (see the sketch after this list)

  • backed by ElastiCache's fully managed Redis & Memcached in-memory stores

  • suits data-intensive apps, or improving the performance of existing apps:

    • caching

    • game leaderboards

    • session management
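A session-store sketch with a sliding 30-minute TTL; the key format and TTL are assumptions, and `secrets` generates the session ID:

```python
import json
import secrets

import redis

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379,
                    decode_responses=True)
SESSION_TTL = 30 * 60  # seconds

def create_session(user_id: str) -> str:
    session_id = secrets.token_urlsafe(32)
    cache.setex(f"session:{session_id}", SESSION_TTL, json.dumps({"user_id": user_id}))
    return session_id

def get_session(session_id: str):
    key = f"session:{session_id}"
    data = cache.get(key)
    if data is None:
        return None                    # expired or never existed -> treat as logged out
    cache.expire(key, SESSION_TTL)     # sliding expiration: refresh the TTL on each access
    return json.loads(data)
```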


Metrics


Trivia

  • Why is lazy loading called "lazy"? Because a record is not loaded into the cache until it is needed.

Concepts

  • Cache hit: the data is found in the cache.

  • Cache miss: the data is not found in the cache.
