The N series reduces this penalty by buffering NVRAM-protected writes in memory, and then 
writing full RAID stripes plus parity whenever possible. This process eliminates the need to read 
parity data before writing, and requires only a single parity calculation for a full stripe of 
data blocks. WAFL does not overwrite existing blocks when they are modified, and it can write 
data and metadata to any location. In other data layouts, modified data blocks are typically 
overwritten in place, and metadata must often reside at fixed locations.
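The following Python sketch illustrates the principle; it is not N series code. A full-stripe write computes parity from the new data blocks alone, whereas a partial-stripe update must first read the old data block and the old parity from disk. Real RAID-DP adds a second, diagonal parity and operates on WAFL blocks; single XOR row parity over short byte strings is used here only to keep the example small.

from functools import reduce

def xor_blocks(blocks):
    # XOR a list of equal-length byte blocks together (row parity).
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def full_stripe_write(new_data_blocks):
    # Full stripe: parity is computed from the new data alone, so no disk
    # reads are needed before writing data plus parity in one pass.
    parity = xor_blocks(new_data_blocks)
    return new_data_blocks, parity

def partial_stripe_write(old_data, old_parity, new_data):
    # Partial update: the old data block and the old parity must be READ
    # first; new parity = old parity XOR old data XOR new data.
    new_parity = xor_blocks([old_parity, old_data, new_data])
    return new_data, new_parity

stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
data, parity = full_stripe_write(stripe)
print("full-stripe parity:", parity.hex())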
This approach offers much better write performance, even for double-parity RAID (RAID 6). 
Unlike other RAID 6 implementations, RAID-DP performs so well that it is the default option 
for N series storage systems. Tests show that random write performance declines only 2% 
versus the N series RAID 4 implementation. By comparison, another major storage vendor's 
RAID 6 random write performance decreases by 33% relative to RAID 5 on the same system. 
RAID 4 and RAID 5 are single-parity RAID implementations. RAID 4 uses a designated parity 
disk; RAID 5 distributes parity information across all disks in a RAID group.
11.3  NVRAM and system memory
Caching technologies decouple storage performance from the number of disks in the 
underlying disk array, which can substantially reduce cost. The N series platform was a 
pioneer in the development of innovative read and write caching technologies. N series 
storage systems use NVRAM to journal incoming write requests, which allows them to 
commit write requests to nonvolatile memory and respond to the writing hosts without 
delay. Caching writes this early in the stack allows the N series to optimize writes to disk, 
even when writing to double-parity RAID. Most other storage vendors cache writes at the 
device driver level.
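The following sketch outlines the write-journaling idea in a simplified form. It is illustrative only; the class and method names (NvramJournal, journal_write, consistency_point) are hypothetical and do not correspond to Data ONTAP internals. The essential point is that the host is acknowledged as soon as the write is journaled in nonvolatile memory, and many journaled writes are later flushed to disk together as large, parity-friendly writes.

class NvramJournal:
    def __init__(self, capacity_entries=1024):
        self.entries = []                 # stands in for battery-backed NVRAM
        self.capacity = capacity_entries

    def journal_write(self, volume, offset, data):
        # Record the write in NVRAM; the caller can acknowledge the host now.
        self.entries.append((volume, offset, data))
        return "ACK"                      # host sees low write latency

    def consistency_point(self, flush_to_disk):
        # Later, drain the journal as large, optimized disk writes
        # (for example, grouped into full RAID stripes).
        batch = self.entries
        self.entries = []
        flush_to_disk(batch)

journal = NvramJournal()
journal.journal_write("vol0", 0, b"payload")
journal.consistency_point(lambda batch: print(f"flushing {len(batch)} writes"))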
The N series uses a multilevel approach to read caching. The first-level read cache is 
provided by the system buffer cache. Special algorithms decide which data to retain in 
memory and which data to prefetch to optimize this function. The N series Flash Cache 
provides an optional second-level cache. It accepts blocks as they are evicted from the buffer 
cache, creating a large, low-latency block pool that satisfies read requests. By reducing the 
number of spindles that are needed for a specific level of performance, Flash Cache can 
reduce your storage costs by 50% or more, allowing you to replace high-performance disks 
with more economical options.
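A simplified model of this two-level arrangement is sketched below. It is illustrative only: a small LRU first level stands in for the system buffer cache, and blocks evicted from it are demoted into a larger second level that stands in for Flash Cache. The class and method names are assumptions for the example, not N series interfaces.

from collections import OrderedDict

class TwoLevelReadCache:
    def __init__(self, l1_size=4, l2_size=16):
        self.l1 = OrderedDict()   # small, fast: system buffer cache
        self.l2 = OrderedDict()   # larger, low latency: flash
        self.l1_size, self.l2_size = l1_size, l2_size

    def read(self, block_id, read_from_disk):
        if block_id in self.l1:                      # first-level hit
            self.l1.move_to_end(block_id)
            return self.l1[block_id]
        if block_id in self.l2:                      # second-level hit: promote
            data = self.l2.pop(block_id)
        else:
            data = read_from_disk(block_id)          # miss: go to the spindles
        self._insert_l1(block_id, data)
        return data

    def _insert_l1(self, block_id, data):
        self.l1[block_id] = data
        self.l1.move_to_end(block_id)
        if len(self.l1) > self.l1_size:              # evict from level 1 ...
            victim_id, victim = self.l1.popitem(last=False)
            self.l2[victim_id] = victim              # ... demote into level 2
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)

cache = TwoLevelReadCache()
cache.read("block-7", lambda b: f"data-for-{b}")     # miss: reads disk
cache.read("block-7", lambda b: f"data-for-{b}")     # first-level hit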
Both buffer cache and Flash Cache benefit from a cache amplification effect that occurs when 
N series deduplication or FlexClone technologies are used. Behavior can be further tuned 
and priorities can be set by using N series FlexShare to create different classes of service.
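The cache amplification effect can be pictured with the following sketch, which is illustrative rather than an N series implementation: when deduplication or FlexClone causes several logical blocks to share one physical block, caching that single physical block satisfies reads for all of the logical blocks that reference it. The mapping table and cache here are toy structures.

logical_to_physical = {
    ("vm1.vmdk", 0): "phys-42",   # three logical blocks deduplicated
    ("vm2.vmdk", 0): "phys-42",   # to the same physical block
    ("vm3.vmdk", 0): "phys-42",
}

cache = {}

def read(logical_block, fetch_physical):
    phys = logical_to_physical[logical_block]
    if phys not in cache:                     # only the first read hits disk
        cache[phys] = fetch_physical(phys)
    return cache[phys]

read(("vm1.vmdk", 0), lambda p: f"data-for-{p}")   # disk read, cached once
read(("vm2.vmdk", 0), lambda p: f"data-for-{p}")   # cache hit
read(("vm3.vmdk", 0), lambda p: f"data-for-{p}")   # cache hit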
Traditionally, storage performance was closely tied to spindle count. The primary means of 
boosting storage performance was to add more disks or higher-performance disks. However, the 
intelligent use of caching can dramatically improve storage performance for various 
applications.
From the beginning, the N series platform pioneered innovative approaches to read and write 
caching. These approaches allow you to do more with less hardware and at lower cost. N 
series caching technologies can help you in the following ways:
•  Increase I/O throughput while decreasing I/O latency (the time that is needed to satisfy an I/O request)
•  Decrease storage capital and operating costs for a specific level of performance
•  Eliminate much of the manual performance tuning that is necessary in traditional storage environments