170  IBM System Storage N series Hardware Guide
► Improves response times. Both block-oriented SAN protocols (Fibre Channel protocol, 
iSCSI, and FCoE) and file-oriented NAS storage protocols (CIFS and NFS) require an 
acknowledgement from the storage system that a write was completed. To reply to a write 
request, a storage system without any NVRAM must complete the following steps:
a. Update its in-memory data structures.
b. Allocate disk space for new data.
c. Wait for all modified data to reach disk.
A storage system with an NVRAM write cache runs the same steps, but copies modified 
data into NVRAM instead of waiting for disk writes. Data ONTAP can reply to a write 
request much more quickly because it must update only its in-memory data structures and 
log the request in NVRAM. It does not have to allocate disk space for new data or copy 
modified data and metadata to disk before replying.
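The two write paths described above can be sketched as a toy model. The class and function names (`Disk`, `write_without_nvram`, `write_with_nvram`) are illustrative only, not Data ONTAP interfaces; the point is where the acknowledgment happens relative to the disk I/O:

```python
class Disk:
    """Minimal stand-in for persistent storage (illustrative only)."""
    def __init__(self):
        self.blocks = []

    def allocate(self, size):
        # b. allocate disk space for new data
        self.blocks.append(None)
        return len(self.blocks) - 1

    def write(self, block, data):
        # c. the caller waits until modified data reaches disk
        self.blocks[block] = data


def write_without_nvram(request, memory_state, disk):
    """No write cache: reply only after the data is on disk."""
    memory_state.append(request)           # a. update in-memory structures
    block = disk.allocate(len(request))    # b. allocate disk space
    disk.write(block, request)             # c. wait for the disk write
    return "ACK"                           # reply only now


def write_with_nvram(request, memory_state, nvram_log):
    """NVRAM write cache: log the request and reply immediately."""
    memory_state.append(request)           # a. update in-memory structures
    nvram_log.append(request)              # log the request; no disk I/O yet
    return "ACK"                           # reply right away
```

The fast path performs no disk allocation or write before acknowledging; flushing the logged requests to disk happens later, at a consistency point.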
► Optimizes disk writes. Journaling all write data immediately and acknowledging the client 
or host not only improves response times, but gives Data ONTAP more time to schedule 
and optimize disk writes. Storage systems that cache writes in the disk driver layer must 
accelerate processing in all the intervening layers to provide a quick response to the host 
or client, which leaves them less time to optimize disk writes.
For more information about how Data ONTAP benefits from NVRAM, see IBM System 
Storage N series File System Design for an NFS File Server, REDP-4086, which is available 
at this website:
http://www.redbooks.ibm.com/abstracts/redp4086.html?Open
11.4.2  NVRAM operation
No matter how large a write cache is or how it is used, eventually data must be written to disk. 
Data ONTAP divides its NVRAM into two separate buffers. When one buffer is full, that 
triggers disk write activity to flush all the cached writes to disk and create a consistency point. 
Meanwhile, the second buffer continues to collect incoming writes until it is full, and then the 
process reverts to the first buffer. This approach to caching writes, in combination with 
WAFL, is closely integrated with N series RAID 4 and RAID-DP. It allows the N series to schedule 
writes such that disk write performance is optimized for the underlying RAID array. The 
combination of N series NVRAM and WAFL in effect turns a set of random writes into 
sequential writes.
The controller contains a special region of RAM called NVRAM. It is non-volatile because it 
is battery-backed. Therefore, if a sudden power failure strikes the system, the data that is 
stored in NVRAM is not lost. 
After data gets to an N series storage system, it is treated in the same way whether it came 
through a SAN or NAS connection. As I/O requests come into the system, they first go to 
RAM. The RAM on an N series system is used as in any other system; it is where Data 
ONTAP does active processing. As write requests come in, the operating system also 
logs them in NVRAM. 
NVRAM is logically divided into two halves so that as one half is emptying out, the incoming 
requests fill up the other half. As soon as one half of NVRAM fills, WAFL forces a 
consistency point (CP) and writes the contents of that half of NVRAM to the storage 
media. A fully loaded system performs back-to-back CPs, continuously filling and 
flushing both halves of the NVRAM. 
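The double-buffered flow above can be sketched as follows. `NvramWriteCache`, `half_size`, and `flush_to_disk` are illustrative names under assumed semantics (a CP fires when the active half fills), not Data ONTAP interfaces:

```python
class NvramWriteCache:
    """Toy model of the two NVRAM halves and consistency points."""

    def __init__(self, half_size, flush_to_disk):
        self.half_size = half_size
        self.flush_to_disk = flush_to_disk   # callback that writes one batch to disk
        self.halves = [[], []]               # the two halves of NVRAM
        self.active = 0                      # half currently collecting writes

    def log_write(self, request):
        """Log a write and acknowledge immediately."""
        self.halves[self.active].append(request)
        if len(self.halves[self.active]) >= self.half_size:
            self._consistency_point()
        return "ACK"                         # host is acknowledged right away

    def _consistency_point(self):
        """Swap halves, then flush the full half to disk as one batch."""
        full = self.active
        self.active = 1 - self.active        # new writes go to the other half
        self.flush_to_disk(self.halves[full])
        self.halves[full].clear()
```

Because each CP flushes a whole half as one batch, many small random writes reach the disk layer as a single scheduled burst, which is the effect the text describes of turning random writes into sequential ones.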
Upon receipt from the host, WAFL logs writes in NVRAM and immediately sends an 
acknowledgment (ACK) back to the host. At that point, from the host's perspective, the data 
is written to storage. In fact, however, the data might still be held only in NVRAM.