Cache memory - An overview

In today's world, the unspoken expectation is to move fast. This is true in our tech world too, where sub-second latencies are the expected norm. In the WWW era, if a page loads slowly, you are bound to lose your potential customer, because they won't wait long.

Instead of always fetching data from storage, we can locally persist a subset of it based on an access pattern such as temporal or spatial locality. This technique is called caching. Is the caching architecture pattern the answer? Let us try to understand the 'cache'.

Let us start with: what is a cache? A cache is a high-speed, expensive piece of memory used to speed up the data retrieval process. The idea behind introducing a cache is that this extremely fast memory stores data that is frequently accessed and, if possible, the data around it. Sound familiar from engineering days or from working with distributed systems?

Let us understand the different levels of cache memory: L1, L2, and L3.

L1: The primary cache sits on the processor chip itself and stores the data most likely to be accessed by the processor.

L2: The secondary cache exists on a separate chip or on the CPU itself. It is bigger than L1 but slower, which allows it to be built into high-performance processors with a large cache size without hurting latency too much.

L3: The L3 cache is usually about double the speed of DRAM (Dynamic Random Access Memory), but it is slower than L1 and L2.

Now that we know about cache memory, we need to access it. Sometimes we will be successful, and sometimes not. Here come the concepts of cache hit and cache miss. Again, sound familiar? What are these?

In simple words: when the processor requests something from the cache and gets it, that is a cache hit. When the processor requests something from the cache and does not get it, that is a cache miss.
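The hit/miss idea can be sketched in a few lines. This is a minimal illustration, not production code; `slow_storage` is a hypothetical stand-in for main memory or a database:

```python
# A plain dict acts as the cache; slow_storage stands in for the slow tier.
slow_storage = {"user:1": "Alice", "user:2": "Bob"}
cache = {}

def read(key):
    if key in cache:              # cache hit: serve from fast memory
        return cache[key], "hit"
    value = slow_storage[key]     # cache miss: go to the slow store
    cache[key] = value            # populate the cache for next time
    return value, "miss"

print(read("user:1"))  # first access is a miss
print(read("user:1"))  # second access is a hit
```

The first lookup pays the cost of the slow store; every repeat of the same key is then served from the cache.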

Is the cache going to stay constant forever? If yes, then we lose its advantages. So what should we do to make space for new data? Simply put, we need some mechanism to discard old information from cache memory. Such mechanisms are called cache replacement policies.

What are these policies? The default one is Least Recently Used (LRU). The LRU policy is based on the assumption that if a location has not been accessed for some time, it is likely to be used less frequently in the future, so we evict the least recently used items first when new ones are added. Then we have cache write policies, such as write-back and write-through.
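As a sketch of how LRU eviction works, here is a minimal cache built on Python's `OrderedDict` (capacity and keys are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None                      # cache miss
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # over capacity: evicts "b", the least recently used
print(cache.get("b"))  # None, since "b" was evicted
```

Note how reading "a" saved it from eviction: recency of access, not insertion, decides what goes.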

In the end, why should we use it?

  • Performance: The foremost advantage of caching is that it enhances the system's performance, by saving cached versions of data and serving them on demand wherever you request them.
  • Access: We can store previously and regularly used data and serve it even when you do not have internet connectivity.
  • Travel time within the network: Caching promotes more effective use of network bandwidth by decreasing the number of "trips" required to request and deliver information.
  • QoS: Caching definitely helps provide a better user experience, which can be a deal breaker for many products. And many more...
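To make the performance benefit concrete, here is a hedged sketch using Python's standard `functools.lru_cache` decorator; `fetch_profile` and the sleep are hypothetical stand-ins for a slow remote call:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def fetch_profile(user_id):
    time.sleep(0.05)  # simulate a slow storage or network round trip
    return {"id": user_id, "name": f"user-{user_id}"}

start = time.perf_counter()
fetch_profile(42)                    # miss: pays the full latency
miss_time = time.perf_counter() - start

start = time.perf_counter()
fetch_profile(42)                    # hit: served from memory
hit_time = time.perf_counter() - start

print(miss_time > hit_time)  # the cached call is far faster
```

One decorator line turns repeated identical requests into in-memory lookups, which is exactly the "fewer trips" win listed above.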

Next time, I'll talk about distributed caches. Till then, happy reading...


Disclaimer: The opinions expressed in this post and all other posts are my personal opinions and do not reflect the opinions of any organization that I'm associated with - either in the past or the present.
