ADVANCED COMPUTER ARCHITECTURE Notes - Memory Hierarchy Design - 4, Study notes for Advanced Computer Architecture. Punjab Technical University (PTU)


Description: Notes on average memory access time, CPU stalls, cache optimizations, reducing cache miss penalty, multi-level caches, critical word first and early restart, merging write buffers, victim caches, larger block size, higher associativity, compiler optimizations, loop interchange, blocking, virtual memory, fast address translation, paged virtual memory, Alpha memory management, and the kernel.
Memory Hierarchy Design
1. How is cache performance evaluated? Explain the various cache optimization categories.
The average memory access time is calculated as follows:
Average memory access time = Hit time + Miss rate × Miss penalty
where Hit time is the time to deliver a block in the cache to the processor (including the time to determine whether the block is in the cache), Miss rate is the fraction of memory references not found in the cache (misses/references), and Miss penalty is the additional time required to service a miss.
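The formula above can be sketched directly. This is a minimal illustration; the hit time, miss rate, and miss penalty values below are assumed example numbers, not figures from the notes.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed example: 1-cycle hit time, 5% miss rate, 100-cycle miss penalty.
print(amat(1, 0.05, 100))  # 1 + 0.05 * 100 = 6.0 cycles
```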
The average memory access time due to cache misses predicts processor performance, with two caveats.
First, there are other reasons for stalls, such as contention for memory from I/O devices.
Second, the CPU stalls during misses, and the memory stall time is strongly correlated with average memory access time:
CPU time = (CPU execution clock cycles + Memory stall clock cycles) × Clock cycle time
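The CPU time equation can be worked through with a small sketch. All counts and the clock cycle time below are assumed example values chosen only for illustration.

```python
def cpu_time(exec_cycles, stall_cycles, clock_cycle_time):
    """CPU time = (execution cycles + memory stall cycles) * clock cycle time."""
    return (exec_cycles + stall_cycles) * clock_cycle_time

# Assumed example: 1e9 execution cycles, 2e8 memory stall cycles,
# 0.5 ns clock cycle -> result in nanoseconds.
print(cpu_time(1_000_000_000, 200_000_000, 0.5))  # 600000000.0 ns (0.6 s)
```

Note that the memory stall cycles here are the same quantity the AMAT formula estimates per memory reference, which is why the two metrics are correlated.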
Seventeen cache optimizations are grouped into four categories:
1. Reducing the miss penalty: multilevel caches, critical word first, read miss before write miss, merging write buffers, and victim caches;
2. Reducing the miss rate: larger block size, larger cache size, higher associativity, pseudo-associativity, and compiler optimizations;
3. Reducing the miss penalty or miss rate via parallelism: nonblocking caches, hardware prefetching, and compiler prefetching;
4. Reducing the time to hit in the cache: small and simple caches, avoiding address translation, and pipelined cache access.
2. Explain the various techniques for reducing cache miss penalty.
There are five optimization techniques to reduce the miss penalty.
i) First Miss Penalty Reduction Technique: Multi-Level Caches
This technique adds another level of cache between the original cache and memory. The first-level cache can be small enough to match the clock cycle time of the fast CPU, and the second-level cache can be large enough to capture many accesses that would otherwise go to main memory, thereby reducing the effective miss penalty.
The average memory access time can be redefined for a two-level cache. Using the subscripts L1 and L2 to refer, respectively, to the first-level and second-level caches, the formula is
Average memory access time = Hit timeL1 + Miss rateL1 × Miss penaltyL1
where Miss penaltyL1 = Hit timeL2 + Miss rateL2 × Miss penaltyL2
so Average memory access time = Hit timeL1 + Miss rateL1 × (Hit timeL2 + Miss rateL2 × Miss penaltyL2)
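The two-level expansion above can be sketched as follows. The hit times, miss rates, and main-memory penalty are assumed example values for illustration only.

```python
def amat_two_level(hit_l1, miss_rate_l1, hit_l2, miss_rate_l2, penalty_l2):
    """Two-level AMAT: the L1 miss penalty is itself an AMAT over L2."""
    miss_penalty_l1 = hit_l2 + miss_rate_l2 * penalty_l2
    return hit_l1 + miss_rate_l1 * miss_penalty_l1

# Assumed example: 1-cycle L1 hit, 4% L1 miss rate; 10-cycle L2 hit,
# 50% local L2 miss rate, 200-cycle main-memory penalty.
print(amat_two_level(1, 0.04, 10, 0.5, 200))  # 1 + 0.04 * (10 + 0.5 * 200) = 5.4 cycles
```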
Local miss rate—This rate is simply the number of misses in a cache divided by the total number of memory accesses to this cache. As you would expect, for the first-level cache it is equal to Miss rateL1, and for the second-level cache it is Miss rateL2.
Global miss rate—The number of misses in the cache divided by the total number of memory accesses generated by the CPU. Using the terms above, the global miss rate for the first-level cache is still just Miss rateL1, but for the second-level cache it is Miss rateL1 × Miss rateL2.
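The distinction between local and global miss rates can be made concrete with a small sketch. The access and miss counts below are assumed example values.

```python
# Assumed example counts for a two-level cache.
cpu_accesses = 1000   # memory references generated by the CPU
l1_misses = 40        # misses in L1 (these are the accesses that reach L2)
l2_misses = 20        # misses in L2

local_l1 = l1_misses / cpu_accesses   # local = global for L1: 0.04
local_l2 = l2_misses / l1_misses      # L2 sees only L1 misses:  0.5
global_l2 = l2_misses / cpu_accesses  # 0.02, i.e. local_l1 * local_l2

print(local_l1, local_l2, global_l2)
```

The local L2 miss rate (50%) looks alarming, but the global rate (2%) shows that only a small fraction of all CPU references actually go to main memory.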
[Figure: memory hierarchy showing the L1 cache, L2 cache, and main memory/disk]
Document information: uploaded by punjabforever, Punjab Technical University (PTU), 01/09/2011.