5.2 RAID 10 Notes
- If you force a failed RAID 10 online, ASM erroneously shows two
drives rebuilding (the two underlying member drives), not one.
- You cannot change the priority of a RAID 10 verify. Setting the
priority when you start the verify has no effect; the priority is
still shown as high. Changing the priority of a running verify
changes the displayed priority only until a rescan is performed,
after which the priority is shown as high again.
- Performing a Verify or Verify/Fix on a RAID 10 displays the same
message text in the event log: "Build/Verify started on second
level logical drive of 'LogicalDrive_0.'" You may see the message
three times for a Verify, but only once for a Verify/Fix.
5.3 RAID x0 Notes
- To create a RAID x0 with an odd number of drives (15, 25, etc.),
specify an odd number of second-level devices in the Advanced
settings for the array. For a 25-drive RAID 50, for instance, the
default configuration uses only 24 drives (see the sketch at the
end of this section).
NOTE: This differs from the BIOS utility, which creates RAID x0
arrays with an odd number of drives by default.
- After a leg of a second-level logical drive is built or verified,
the status of the second-level logical drive is displayed as
"Quick Initialized".
5.4 RAID Volume Notes
- In ASM, a failed RAID Volume composed of two RAID 1 logical
drives is erroneously reported as a failed RAID 10. A failed RAID
Volume composed of two RAID 5 logical drives is erroneously
reported as a failed RAID 50.
5.5 JBOD Notes
- In this release, ASM deletes partitioned JBODs without issuing
a warning message.
- When migrating a JBOD to a Simple Volume, the disk must be quiescent
(no I/O load). Otherwise, the migration will fail with an I/O Read
error.
5.6 Hybrid RAID Notes
- ASM supports Hybrid RAID 1 and RAID 10 logical drives composed
of hard disk drives (HDDs) and Solid State Drives (SSDs). For a
Hybrid RAID 10, you must select an equal number of SSDs and HDDs
in "every other drive" order, that is: SSD, HDD, SSD, HDD, and so
on (see the sketch at the end of this section). Failure to select
drives in this order creates a standard logical drive that does
not take advantage of SSD performance.
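The required selection order can be checked mechanically: the
selection must contain at least four drives and alternate types
starting with an SSD, which also guarantees equal SSD and HDD
counts. The Python sketch below is illustrative only; the
drive-type labels and the helper name are assumptions, not an
ASM interface.

  # Illustrative check of the "every other drive" selection order
  # for a Hybrid RAID 10: even drive count, SSD first, strictly
  # alternating types.
  def is_valid_hybrid_raid10_order(drive_types):
      if len(drive_types) < 4 or len(drive_types) % 2 != 0:
          return False
      if drive_types[0] != "SSD":  # documented order starts with an SSD
          return False
      return all(drive_types[i] != drive_types[i + 1]
                 for i in range(len(drive_types) - 1))

  print(is_valid_hybrid_raid10_order(["SSD", "HDD", "SSD", "HDD"]))  # True
  print(is_valid_hybrid_raid10_order(["SSD", "SSD", "HDD", "HDD"]))  # False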
5.7 RAID-Level Migration (RLM) Notes
- We strongly recommend that you use the default 256KB stripe
size for all RAID-level migrations. Choosing a different stripe
size may crash the system.
- If a disk error occurs when migrating a 2TB RAID 0 to RAID 5
(for example, bad blocks), ASM displays a message that the RAID 5
logical drive is reconfiguring, even though the migration has
failed and no RAID-level migration task is running. To recreate
the logical drive, fix or replace the bad disk, delete the RAID 5
logical drive in ASM, then retry the migration.
- When migrating a RAID 5EE, be careful not to remove and re-insert
a drive in the array. If you do, the drive will not be included
when the array is rebuilt. The migration will stop and the drive