Cache Memory - Coddicted

Cache Memory

A cache is a component that transparently stores data so that future requests for that data can be served faster. The data stored in a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere.

Cache Hit, Cache Miss
If the requested data is contained in the cache (a cache hit occurs), the request can be served by simply reading the cache, which is comparatively fast.

Otherwise (a cache miss occurs), the data has to be recomputed or fetched from its original storage location, which is comparatively slow.

Hence, the greater the number of requests that can be served from the cache, the faster the overall performance becomes.

Operation

  • A cache is made up of a pool of entries. Each entry holds a datum (a piece of data) that is a copy of the same datum stored in some backing store.
  • Each entry also has a tag, which specifies the identity of the datum in the backing store of which the entry is a copy.
  • When the cache client needs to access a datum presumed to exist in the backing store, it first checks the cache. If an entry is found whose tag matches that of the desired datum, the cached datum is used instead. This situation is known as a cache hit.
  • When the cache is consulted and found not to contain a datum with the desired tag, a cache miss occurs. The previously uncached datum fetched from the backing store during miss handling is usually copied into the cache, ready for the next access.
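The lookup sequence above can be sketched with a minimal dictionary-backed cache. The backing store contents and tag names here are illustrative assumptions, not part of any specific hardware design:

```python
# Minimal sketch of the hit/miss flow described above.
# A plain dict stands in for the slower backing store.
backing_store = {"addr1": "alpha", "addr2": "beta"}
cache = {}  # tag -> datum

def read(tag):
    """Return the datum for `tag`, filling the cache on a miss."""
    if tag in cache:                 # cache hit: serve from the cache
        return cache[tag], "hit"
    datum = backing_store[tag]       # cache miss: fetch from backing store
    cache[tag] = datum               # copy into the cache for the next access
    return datum, "miss"

print(read("addr1"))  # first access: miss
print(read("addr1"))  # second access: hit
```

The first access to a tag misses and populates the cache; every later access to the same tag is served from the cache.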

Each cache entry can be pictured as one row of the following table:

Index | Tag | V | M | R | Data
------+-----+---+---+---+-----
      |     |   |   |   |
      |     |   |   |   |
      |     |   |   |   |

V: valid bit
M: modified bit
R: replacement bit

Writing Policies
When a system writes a datum to the cache, it must at some point write that datum to the backing store as well. The timing of this write is controlled by what is known as the writing policy.

Two basic approaches

1. Write-through: the write is done synchronously both to the cache and to the backing store.

2. Write-back (write-behind): initially, the write is done only to the cache. The write to the backing store is postponed until the cache block containing the data is about to be replaced by new content.
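The two policies can be contrasted with a simplified sketch (a model for illustration only, not any specific hardware; the class and variable names are assumptions):

```python
# Simplified sketch contrasting the two write policies.
backing_store = {}

class WriteThroughCache:
    def __init__(self):
        self.cache = {}

    def write(self, tag, datum):
        self.cache[tag] = datum
        backing_store[tag] = datum   # synchronous write to the backing store

class WriteBackCache:
    def __init__(self):
        self.cache = {}
        self.dirty = set()           # tags whose backing-store write is postponed

    def write(self, tag, datum):
        self.cache[tag] = datum      # write only to the cache for now
        self.dirty.add(tag)

    def evict(self, tag):
        if tag in self.dirty:        # flush the postponed write on replacement
            backing_store[tag] = self.cache[tag]
            self.dirty.discard(tag)
        del self.cache[tag]
```

With the write-back sketch, the backing store sees the new value only when the dirty block is finally evicted, whereas the write-through cache updates both on every write.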

Example: A computer uses 32-bit byte addressing and a 2-way set associative cache with a capacity of 32 KB. Each cache block contains 16 bytes. Calculate the number of bits in the TAG, SET and OFFSET fields of a main memory address.

Given:

  • The main memory address uses 32 bits, and memory is byte addressable.
  • An OFFSET part identifies a particular byte within the cache line.
  • A SET part identifies the set that contains the requested data.
  • A TAG part must be saved in each cache line along with its data to distinguish among the different addresses that could be placed in the set.
  • Cache lines/blocks: each cache line consists of a TAG and DATA (valid, modified and replacement bits may also be present).
  • The cache is 2-way set associative with a total capacity of 32 KB.

A 2-way set associative cache can be pictured as follows, with two ways per set:

SET |    Way 1     |    Way 2
    | TAG  | DATA  | TAG  | DATA
----+------+-------+------+------
 0  |      |       |      |
 1  |      |       |      |
 2  |      |       |      |
... |      |       |      |

Each cache block is 16 bytes, so the set size = 16 x 2 = 32 bytes.

No. of sets in cache = total cache size / (block size x associativity)
                     = 32 KB / 32 B = 1024

Hence 10 bits are required to represent the SET.

Since memory is byte addressable, we need 4 bits to address each byte within a block (16 bytes).

Hence 4 bits are required to represent the OFFSET within a block.

No. of TAG bits = bits in the request address - SET - OFFSET
                = 32 - 10 - 4 = 18

Hence 18 bits are required to represent the TAG.

 

TAG | SET | OFFSET
----+-----+-------
 18 |  10 |   4
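The arithmetic above can be verified with a short script; the sample address used for the field decomposition at the end is an arbitrary illustrative value:

```python
from math import log2

ADDRESS_BITS = 32
CACHE_SIZE   = 32 * 1024   # 32 KB total capacity
BLOCK_SIZE   = 16          # bytes per cache block
WAYS         = 2           # 2-way set associative

num_sets    = CACHE_SIZE // (BLOCK_SIZE * WAYS)      # 32 KB / 32 B = 1024
offset_bits = int(log2(BLOCK_SIZE))                  # 4
set_bits    = int(log2(num_sets))                    # 10
tag_bits    = ADDRESS_BITS - set_bits - offset_bits  # 18

print(tag_bits, set_bits, offset_bits)  # prints: 18 10 4

# Decomposing a sample address into its fields:
addr   = 0x12345678
offset = addr & (BLOCK_SIZE - 1)
index  = (addr >> offset_bits) & (num_sets - 1)
tag    = addr >> (offset_bits + set_bits)
```

Recombining the three fields (tag, index, offset) with the same shifts reproduces the original address, which is a quick sanity check on the bit widths.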

