11.5  N series read caching techniques
The random read performance of a storage system depends on drive count (total number of 
drives in the storage system) and drive rotational speed. Unfortunately, adding more drives to 
boost storage performance also consumes more power, cooling, and space. With 
single-disk capacity growing much more quickly than performance, many applications require 
more disk spindles to achieve optimum performance, even when the additional capacity is not 
needed.
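As a rough illustration of this trade-off, the following sketch estimates how many spindles a purely disk-bound workload needs to reach a target random read rate. The per-drive IOPS figures are assumed ballpark values chosen for illustration only; they are not N series specifications.

import math

# Assumed ballpark random read IOPS per drive; illustrative values only.
DRIVE_RANDOM_IOPS = {
    "7.2K RPM SATA": 75,
    "10K RPM SAS": 140,
    "15K RPM SAS": 180,
}

def spindles_needed(target_iops, drive_type):
    """Number of drives needed to sustain target_iops from disk alone."""
    return math.ceil(target_iops / DRIVE_RANDOM_IOPS[drive_type])

# Example: sustaining 20,000 random read IOPS without any read caching.
for drive_type in DRIVE_RANDOM_IOPS:
    print(drive_type, "->", spindles_needed(20_000, drive_type), "drives")

Read caching shifts a portion of these reads to memory or flash, which is how the techniques in this section reduce the spindle count that a given workload requires.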
11.5.1  Introduction to read caching
Read caching is the process of deciding which data to keep or prefetch into storage system 
memory to satisfy read requests more rapidly. The N series uses a multilevel approach to 
read caching to break the link between random read performance and spindle count. This 
approach provides the following options for delivering low read latency and high read 
throughput while minimizing the number of disk spindles that you need:
•  Read caching in system memory (the system buffer cache) provides the first-level read 
cache and is used in all current N series storage systems.
•  Flash Cache (PAM II) provides an optional second-level read cache to supplement system 
memory.
•  FlexCache creates a separate caching tier within your storage infrastructure to satisfy read 
throughput requirements in the most data-intensive environments.
The system buffer cache and Flash Cache increase read performance within a storage 
system. FlexCache scales read performance beyond the boundaries of any single system's 
performance capabilities.
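The following sketch illustrates the lookup order that this multilevel design implies: the system buffer cache is consulted first, then the flash tier, and disk is read only on a miss in both. It is a simplified conceptual model with invented class and function names, not a description of the actual Data ONTAP or Flash Cache implementation.

from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache standing in for one caching tier (illustrative only)."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()

    def get(self, block_id):
        if block_id not in self.blocks:
            return None
        self.blocks.move_to_end(block_id)           # mark as most recently used
        return self.blocks[block_id]

    def put(self, block_id, data):
        """Insert a block; return the evicted (block_id, data) pair, if any."""
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            return self.blocks.popitem(last=False)  # evict least recently used
        return None

def read_block(block_id, buffer_cache, flash_cache, read_from_disk):
    """Conceptual multilevel read: system memory, then flash, then disk."""
    data = buffer_cache.get(block_id)
    if data is not None:
        return data                                 # first-level hit (system buffer cache)
    data = flash_cache.get(block_id)
    if data is None:
        data = read_from_disk(block_id)             # miss in both tiers: read from disk
    evicted = buffer_cache.put(block_id, data)      # keep the block in memory for reuse
    if evicted is not None:
        flash_cache.put(*evicted)                   # demote the displaced block to flash
    return data

Populating the flash tier with blocks displaced from memory mirrors the general idea of a second-level cache supplementing the buffer cache, although the real admission and eviction policies are more sophisticated than this sketch.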
N series deduplication and other storage efficiency technologies eliminate duplicate blocks 
from disk storage. These functions ensure that valuable cache space is not wasted storing 
multiple copies of the same data blocks. Both the system buffer cache and Flash Cache 
benefit from this "cache amplification" effect. The percentage of cache hits increases and 
average latency improves as more shared blocks are cached. N series FlexShare software 
can also be used to prioritize some workloads over others and modify caching behavior to 
meet specific objectives.
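A small worked example makes the amplification effect concrete. The block counts and sharing ratio below are assumed values chosen only to show the direction of the effect, not measured N series figures.

# Illustrative only: how block sharing stretches a fixed-size read cache.
cache_blocks = 100_000           # blocks the cache can hold (assumed)
working_set_blocks = 1_000_000   # logical blocks the workload reads (assumed)
sharing_ratio = 4                # assumed: 4 logical blocks map to 1 deduplicated physical block

# Without block sharing, every logical block needs its own cache slot.
coverage_without_sharing = cache_blocks / working_set_blocks

# With sharing, one cached physical block satisfies reads for every logical
# block that maps to it, so the same cache covers more of the working set.
coverage_with_sharing = min(1.0, cache_blocks * sharing_ratio / working_set_blocks)

print("Working set covered without sharing: {:.0%}".format(coverage_without_sharing))
print("Working set covered with sharing:    {:.0%}".format(coverage_with_sharing))

A larger fraction of the working set held in cache translates directly into more cache hits and lower average read latency, which is the effect described above.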
11.5.2  Read caching in system memory
Read caching features the following distinct aspects:
•  Keeping "valuable" data in system memory
•  Prefetching data into system memory before it is requested
Deciding which data to keep in system memory
The simplest means of accelerating read performance is to cache data in system memory 
after it arrives there. If another request for the same data is received, that request can then be 
satisfied from memory rather than having to reread it from disk. However, for each block in the 
system buffer cache, Data ONTAP must determine the potential "value" of the block. The 
following questions must be addressed for each data block (a simplified sketch of this 
decision follows the list):
•  Is the data likely to be reused?
•  How long should the data stay in memory?
•  Will the data change before it can be reused?
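The sketch below turns these questions into a simple retention score, purely to show the kind of trade-off a block cache must make. The fields, weights, and decision rule are invented for illustration and do not describe the actual Data ONTAP buffer cache algorithm.

from dataclasses import dataclass

@dataclass
class CachedBlock:
    """Per-block hints a cache might track (illustrative fields only)."""
    block_id: int
    recent_reads: int          # how often the block was read recently (reuse likelihood)
    seconds_since_use: float   # how long the block has been idle in memory
    likely_to_change: bool     # will the data change before it can be reused?

def retention_score(block):
    """Higher score means the block is more worth keeping (illustrative heuristic)."""
    score = block.recent_reads / (1.0 + block.seconds_since_use)
    if block.likely_to_change:
        score *= 0.5           # a block that will soon change is a poor read cache candidate
    return score

def choose_eviction_victim(cached_blocks):
    """Evict the block whose cached copy is least valuable."""
    return min(cached_blocks, key=retention_score)

# Example: the idle block that is about to change is evicted first.
blocks = [
    CachedBlock(1, recent_reads=12, seconds_since_use=2.0, likely_to_change=False),
    CachedBlock(2, recent_reads=1, seconds_since_use=300.0, likely_to_change=True),
]
print("Evict block", choose_eviction_victim(blocks).block_id)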