Replication reports its data transfer amounts based on the number of 256K chunks of data it needs to copy. For a first
replication of a volume whose data was written sequentially, the reported size of the volume, the size of the replication
source snapshot, and the amount of data transferred will all match. However, if the data is written to the volume sparsely
in small blocks, the source snapshot size may not match the number of bytes transferred by the replication.
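As a rough illustration of this chunk-granularity accounting, the sketch below (an assumption about the general bookkeeping, not Acuity's actual code) shows how any 4K write marks an entire 256K chunk as In Use, so the same amount of written data can produce very different used-space figures depending on where the writes land:

    CHUNK = 256 * 1024  # illustrative: Acuity tracks volume space in 256K chunks

    def used_space_bytes(write_offsets, write_size=4096):
        # Any write that touches a chunk marks the whole 256K chunk as In Use,
        # so sparse small writes inflate the reported size.
        touched = set()
        for off in write_offsets:
            first = off // CHUNK
            last = (off + write_size - 1) // CHUNK
            touched.update(range(first, last + 1))
        return len(touched) * CHUNK

    # 1MB written as 256 back-to-back 4K writes touches 4 chunks -> 1MB reported:
    seq = used_space_bytes(range(0, 1024 * 1024, 4096))
    # The same 256 writes scattered one per chunk touch 256 chunks -> 64MB reported:
    sparse = used_space_bytes(range(0, 256 * CHUNK, CHUNK))
    print(seq // 2**20, "MB vs", sparse // 2**20, "MB")  # 1 MB vs 64 MB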
The example below illustrates these space reporting concepts using two example volumes, both 500GB. The first is written
sequentially with about 259GB of data; the second is written with about 10GB of 4K random data. A first replication is
then performed for each volume, creating a source snapshot, a target volume, and a target snapshot.
Parameter                    Vol-1-Seq    Vol-2-Rand   Comment
Iometer write type           256K seq     4K rand
Amount written               ~259GB       ~10GB
Source Volume Size (MB)      258,656      306,396      HighWaterMark from volume
Source Snapshot Size (MB)    258,805      498,833      SizeUsed from snapshot
Target Volume Size (MB)      258,656      306,392      HighWaterMark from volume
Target Snapshot Size (MB)    258,805      498,832      SizeUsed from snapshot
Replicate transferred (MB)   258,794      306,396      From replication statistics
For the sequentially written volume, the amount of data originally written is about 259 GB. The reported volume size,
snapshot sizes, and replicated size all show approximately the same 259 GB. The very slight differences are due to data
alignment factors and the methods used to report the sizes. These numbers are all intuitive: Pivot3 has written a fixed
amount of data and then replicated it to an identical target volume, so all the sizes match.
For small block random data, the numbers are wildly different. The customer-written data (~10 GB) sprinkles its 4K writes
randomly throughout the volume address space. Internally, Acuity manages the volume address space in 256K chunks, so
each 4K write causes an entire 256K chunk to be reported as In Use. This is why the Source Volume Size is reported as
306,396 MB. A small amount of sparsely or randomly written customer data can therefore expand to a much larger amount
of used space reported for the volume, because the used space is computed in units of 256K chunks.
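A back-of-the-envelope model shows why the inflation is so large. Assuming the 4K writes land uniformly at random (a simplification; the real pattern depends on the workload), the expected number of chunks marked In Use can be estimated as follows:

    def expected_in_use_mb(volume_mb, written_mb, io_kb=4, chunk_kb=256):
        # Number of 4K writes issued and number of 256K chunks in the volume.
        n = written_mb * 1024 // io_kb
        N = volume_mb * 1024 // chunk_kb
        # Chance a given chunk is never hit by any of the n uniform random writes.
        untouched = (1 - 1 / N) ** n
        # Expected In Use space, counted in whole 256K chunks, converted to MB.
        return N * (1 - untouched) * chunk_kb / 1024

    for written_mb in (8_000, 10_000):
        print(written_mb, "MB written ->",
              round(expected_in_use_mb(500_000, written_mb)), "MB In Use")

Under this uniform model, roughly 8-10 GB of random 4K writes already marks about 320,000-360,000 MB of the 500,000 MB address space In Use, the same order of magnitude as the 306,396 MB in the table; the exact figure depends on the actual write distribution.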
Then take a snapshot of the data. The snapshot retains its information in 1MB page units. Even though only about 60% of
the 256K chunks were written to, almost every 1MB page was involved, so the SizeUsed for the snapshot shows 498,832 MB
consumed (out of 500,000).
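The jump from ~60% of chunks to nearly 100% of pages follows from the coarser granularity: each 1MB page spans four 256K chunks, and the page is consumed if any one of them was written. Treating the chunk states as roughly independent (an approximation), the page-level usage can be estimated from the chunk-level figure in the table:

    chunk_in_use = 306_396 / 500_000             # ~61% of chunks In Use (table above)
    page_untouched = (1 - chunk_in_use) ** 4     # all four chunks in a page untouched
    pages_used_mb = 500_000 * (1 - page_untouched)
    print(f"{chunk_in_use:.0%} of chunks -> {pages_used_mb:,.0f} MB of pages")

This approximation yields roughly 489,000 MB of pages consumed, close to the 498,832 MB the snapshot actually reports; the small gap reflects the independence simplification.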
So while only about 60% of the chunks were consumed, over 99% of the pages were consumed. For replication, snapshots
are examined based on 256K chunks, and every 256K chunk that has been written to is sent, which equates to 306,396 MB.
This disagrees with the snapshot size because the snapshot size is computed based on pages.
In conclusion, the way data usage is reported for volumes, snapshots, and replication varies depending on how the data is
written. Sequential writes result in a good match between the data usage reports and the actual amount of data written.
Small random or sparse block writes can result in data usage reports that exceed the expected size because the internal
Acuity mechanisms report size in units of 256K chunks or 1MB pages.
