Dealing with Cache Avalanche to Safeguard Your System


What is Cache Avalanche?

Imagine an online store, where customers flood in to explore products, and information about those products is stored in caches to expedite their experience.

Now, suppose a large number of cache entries, all written with the same TTL, expire at the same moment, just as a flash sale or viral trend drives a sudden spike in demand.

This phenomenon, where a massive number of requests miss the cache and bombard the database at once, overwhelming its resources and causing it to slow down or crash, is known as Cache Avalanche.

In the previous blog, we read about different caching strategies and how they can help improve performance and user experience. But what happens when the cache becomes a liability rather than an asset?

Let’s take a closer look at Cache Avalanche, its causes, and how you can prevent it from wreaking havoc on your system.

The Ripple Effect

Cache Avalanche is like a domino effect. When multiple cache entries expire simultaneously, the sudden influx of requests to regenerate those entries strains the server and database.

If the system isn’t equipped to handle this surge, it can lead to delayed responses, timeouts, or worse — system crashes.

This can have a detrimental impact on user experience, especially during critical moments.

Combating Cache Avalanche

Randomized Expiry Time

The concept here is to introduce an element of randomness into cache expiry times. Rather than setting the same fixed expiry time for all cache entries, implement a dynamic expiry interval.

This means that cache entries will expire at different times, reducing the probability of simultaneous expiration and the subsequent avalanche.

For example, let’s say you have a cache TTL of 30 days. We can randomize the cache expiry time while keeping it within 15% of the original TTL (time-to-live).

The JavaScript code snippet below shows how we can achieve this (don’t worry if you don’t understand the code or the language; the concept is what matters):

const getRandomExpiry = (originalExpiry, deviationPercentage = 15) => {
  // Calculate the range within which the expiry time can vary
  const deviation = originalExpiry * (deviationPercentage / 100);

  // Calculate the minimum and maximum expiry times
  const minExpiry = originalExpiry - deviation;
  const maxExpiry = originalExpiry + deviation;

  // Generate a random expiry time within the specified range
  return Math.random() * (maxExpiry - minExpiry) + minExpiry;
};

const originalTTL = 30 * 24 * 60 * 60 * 1000; // 30 days in milliseconds
const deviationPercentage = 15; // allow ±15% deviation
const randomizedExpiry = getRandomExpiry(originalTTL, deviationPercentage);

console.log(`Original TTL: ${originalTTL} ms`);
console.log(`Randomized Expiry: ${randomizedExpiry} ms`);

The above code snippet generates a random expiry time within 15% of the original TTL. This means that the cache entry will expire at a random time between 25.5 days and 34.5 days.

Cache Preload

Think of cache preload as a proactive measure to thwart Cache Avalanche. By identifying critical or frequently accessed data, you can periodically load this data into the cache before it expires.

This approach is particularly beneficial when you anticipate traffic spikes due to events like flash sales, product launches, or anticipated surges in user activity.

By ensuring that essential data is already cached and ready to serve, you can smoothly navigate through traffic peaks without triggering an avalanche of requests.
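To make this concrete, here’s a minimal sketch of a preload job. The cache client (`cache.set`), the `fetchProduct` database helper, and the list of hot product IDs are all hypothetical stand-ins for whatever your system uses:

// Hypothetical helpers: cache.set(key, value, ttlMs) writes to the cache,
// and fetchProduct(id) reads the product directly from the database.
const HOT_PRODUCT_IDS = ['p-101', 'p-102', 'p-103'];
const TTL_MS = 30 * 60 * 1000;     // cache entries live for 30 minutes
const REFRESH_MS = 20 * 60 * 1000; // refresh every 20 minutes, before expiry

const preloadHotProducts = async () => {
  for (const id of HOT_PRODUCT_IDS) {
    const product = await fetchProduct(id); // fresh data from the database
    await cache.set(`product:${id}`, product, TTL_MS);
  }
};

// Warm the cache on startup, then keep it warm on a schedule.
preloadHotProducts().catch(console.error);
setInterval(() => preloadHotProducts().catch(console.error), REFRESH_MS);

Because the refresh interval is shorter than the TTL, hot entries are rewritten before they ever expire, so traffic spikes hit a warm cache instead of the database.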

Graceful Degradation

Graceful degradation is an ingenious approach that ensures your application doesn’t crumble if the cache becomes temporarily inaccessible. Design your application in such a way that it can gracefully handle situations where cache data is missing or expired.

This might involve having backup mechanisms in place, such as fetching data from the database directly when cache data isn’t available.

This strategy ensures that even if the cache experiences turbulence, your application can still function reasonably well, maintaining user experience without abrupt crashes.

💡

The golden rule here is that your database should always be in a good enough state to serve requests, even if the cache is unavailable.
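A minimal sketch of this fallback pattern might look like the following, assuming a hypothetical `cache.get` client and a `queryDatabase` helper that reads directly from the database:

// Hypothetical helpers: cache.get(key) resolves to the cached value or null,
// and queryDatabase(id) fetches the record directly from the database.
const getProduct = async (id) => {
  let product = null;
  try {
    product = await cache.get(`product:${id}`);
  } catch (err) {
    // Cache is unreachable: log it and fall through to the database.
    console.error('Cache lookup failed, falling back to DB:', err.message);
  }
  if (product !== null) return product;

  // Cache miss or cache outage: serve from the database instead of failing.
  return queryDatabase(id);
};

The key point is that a cache error is caught and absorbed, never propagated to the user; the database remains the source of truth.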

Throttling

Imagine standing in line at a buffet where the host controls how fast people can approach the food table. Similarly, with throttling, you control the rate at which cache regeneration requests are processed.

By limiting the number of requests that can trigger cache regeneration within a specific time frame, you prevent an overwhelming surge of requests that could lead to an avalanche.

Throttling helps manage the load on your server, allowing it to regenerate cache entries at a controlled pace. This ensures that the server doesn’t get overwhelmed and can continue to serve requests without crashing.
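One simple way to sketch this is a concurrency cap on regeneration. Here, `regenerate` is a hypothetical stand-in for whatever rebuilds a cache entry from the database, and requests beyond the cap are skipped rather than queued to keep the example short; in practice you might queue them or serve stale data instead:

// A minimal sketch: allow at most MAX_CONCURRENT cache regenerations at once.
const MAX_CONCURRENT = 5;
let inFlight = 0;

const throttledRegenerate = async (key) => {
  if (inFlight >= MAX_CONCURRENT) {
    return null; // at capacity: skip regeneration for now
  }
  inFlight += 1;
  try {
    return await regenerate(key); // rebuild the entry from the database
  } finally {
    inFlight -= 1;
  }
};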

High Availability Architecture

High availability architecture involves having redundant cache layers across multiple servers or even different geographical locations.

If one cache layer becomes overwhelmed or goes offline, another can seamlessly take over the load, preventing the avalanche from sweeping through your system.

This architecture spreads the load and ensures that your system can withstand sudden spikes in demand without collapsing. It also helps maintain optimal performance and user experience, even during traffic peaks.
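Failover is usually handled at the infrastructure level, but one simplified way to express the idea at the client is to try the primary cache layer and fall back to a replica; `primaryCache` and `replicaCache` below are hypothetical clients, each exposing an async `get`:

// A minimal failover sketch across two redundant cache layers.
const getFromCache = async (key) => {
  try {
    return await primaryCache.get(key);
  } catch (err) {
    // Primary cache layer is down or overloaded: fail over to the replica.
    console.warn('Primary cache unavailable, using replica:', err.message);
    return replicaCache.get(key);
  }
};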

Rate Limiting

Rate limiting is all about maintaining order in the cache renewal process. By enforcing a cap on the number of requests that can trigger cache regeneration within a specified time frame, you ensure that requests are processed in a controlled manner.

This prevents a surge of requests from hitting the server simultaneously and overwhelming its resources. Rate limiting is like having a traffic cop managing the flow of cache-related requests, maintaining order and preventing chaos.
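Here’s a minimal sketch of a fixed-window rate limiter, one of the simplest ways to enforce such a cap (the limit and window values are illustrative):

// A fixed-window rate limiter: allow at most `limit` requests
// per `windowMs` window, rejecting the rest.
const createRateLimiter = (limit, windowMs) => {
  let count = 0;
  let windowStart = Date.now();

  return () => {
    const now = Date.now();
    if (now - windowStart >= windowMs) {
      // New window: reset the counter.
      windowStart = now;
      count = 0;
    }
    if (count >= limit) return false; // over the limit: reject
    count += 1;
    return true; // under the limit: allow
  };
};

// Allow up to 100 cache-regeneration requests per second.
const allowRegeneration = createRateLimiter(100, 1000);

Requests that fail the check can be rejected outright or served stale data, depending on how strict your freshness requirements are.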

Conclusion

Cache Avalanche is a phenomenon that can wreak havoc on your system, leading to delayed responses, timeouts, or even system crashes. This can have a detrimental impact on user experience, especially during critical moments.

By understanding the problem and implementing proactive measures like randomized expiry times, cache preloading, and graceful degradation, you can ensure your applications run seamlessly even during traffic spikes.

Remember, a proactive approach is the key to averting the avalanche and maintaining optimal user experiences.
