DellRAID
Our RAID disks are really configured as Just a Bunch Of Disks (JBOD), since it is easier to manage the disks from the file system: ZFS and Linux RAID want to talk to the disks directly, and it is easier to move such disks around.
We've encapsulated all the m620 setup code in idrac-m620-setup.sh, which is a sequence of functions you can selectively execute, including idrac.
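The idrac helper appears to wrap racadm against a given node's iDRAC; a minimal sketch of such a wrapper might look like the following (the hostname pattern and credential variable are placeholders, not the project's actual values; the real definition lives in idrac-m620-setup.sh):
# Hypothetical sketch only: forward "idrac <node> <racadm args...>" to that node's iDRAC.
idrac() {
  local node=$1; shift
  racadm -r "idrac${node}.example.local" -u root -p "$IDRAC_PASS" "$@"
}
With something like that in place, idrac 1 getsysinfo would run racadm getsysinfo against node 1's iDRAC.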
RAID And Storage Configuration using RACADM Commands in iDRAC7 is a short manual documenting the RAID commands; the RACADM Command Line Reference Guide for iDRAC7 1.50.50 and CMC 4.5 (p. 106) is the complete reference.
We first have to reset the RAID config:
$ idrac 1 storage resetconfig:RAID.Integrated.1-1
$ idrac 1 jobqueue create RAID.Integrated.1-1 -r pwrcycle -s TIME_NOW -e TIME_NA
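The reset and power cycle take a few minutes; assuming the wrapper forwards subcommands straight to racadm, the job queue can be polled until the reset job reports completed:
$ idrac 1 jobqueue view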
After this completes, we'll follow ThornLabs instructions:
$ for p in $(idrac 1 raid get pdisks); do
    idrac 1 raid createvd:RAID.Integrated.1-1 -rl r0 -wp wt -rp nra -ss 64k "-pdkey:$p"
  done
$ idrac 1 job RAID.Integrated.1-1
This sets up all the physical disks (pdisks) as independent virtual disks with RAID0 (r0), write through (wt), no read ahead (nra), and a 64k stripe size (which seems to be what people recommend for SSDs).
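For reference, a single iteration of the loop expands to something like the line below; the physical-disk key shown is only an illustrative example of the FQDD format, the real keys come from idrac 1 raid get pdisks:
$ idrac 1 raid createvd:RAID.Integrated.1-1 -rl r0 -wp wt -rp nra -ss 64k -pdkey:Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1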
Wait a bit and verify that the vdisks have been created:
$ idrac 1 raid get vdisks -o
Disk.Virtual.0:RAID.Integrated.1-1
Status = Ok
DeviceDescription = Virtual Disk 0 on Integrated RAID Controller 1
Name = Virtual Disk 0
RollupStatus = Ok
State = Online
OperationalState = Not applicable
Layout = Raid-0
Size = 931.00 GB
SpanDepth = 1
AvailableProtocols = SATA
MediaType = SSD
ReadPolicy = No Read Ahead
WritePolicy = Write Through
StripeSize = 64K
DiskCachePolicy = Enabled
BadBlocksFound = NO
Secured = NO
RemainingRedundancy = 0
EnhancedCache = Not Applicable
T10PIStatus = Disabled
BlockSizeInBytes = 512
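To spot-check just the cache and stripe settings across every vdisk at once, filtering on properties should also work (racadm supports -o -p with a comma-separated property list; the wrapper is assumed to pass it through):
$ idrac 1 raid get vdisks -o -p ReadPolicy,WritePolicy,StripeSize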
Testing performance of PERC H310 (non-RAID) vs motherboard S110:
# create a scratch LV on the centos VG, format it, and mount it
lvcreate -y --wipesignatures y -L 20G -n test centos
mkfs.xfs /dev/mapper/centos-test
mount /dev/mapper/centos-test /mnt
cd /mnt
# sync on close (every 10G with enough RAM (64G))
i=$(date +%s); dd bs=1M count=10240 if=/dev/zero of=test conv=fdatasync; echo $(( $(date +%s) - $i ))
rm -f test
sync
# sync on write (every 1M)
i=$(date +%s); dd bs=1M count=10240 if=/dev/zero of=test oflag=dsync; echo $(( $(date +%s) - $i ))
# clean up
cd
umount /mnt
lvremove /dev/mapper/centos-test
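A small helper like the one below (a sketch, not part of our scripts) makes it easier to repeat both write tests on each controller; it relies on dd printing its own throughput summary on the last line of stderr:
# Hypothetical convenience wrapper around the two dd runs above.
bench() {
  local dir=${1:-/mnt}
  cd "$dir" || return 1
  for mode in conv=fdatasync oflag=dsync; do
    echo "== dd $mode =="
    dd bs=1M count=10240 if=/dev/zero of=test "$mode" 2>&1 | tail -n 1
    rm -f test
    sync
  done
  cd - > /dev/null
}
# Usage: bench /mnt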
All tests below use write through and no read ahead on SSD, writing 10 GB (1024^3-byte GB) per run; each line lists the fdatasync result, then the dsync result.
- H310, E5-2650 v2 2.6GHz, 64GB RAM, EVO 850 256GB: 384MBps (26s), 156MBps (64s)
- S110, E5-2660 v2 2.2GHz, 128GB RAM, EVO 860 256GB: 222MBps (45s), 131MBps (76s)
- S110, E5-2660 v2 2.2GHz, 64GB RAM, EVO 850 256GB: 222MBps (45s), 129MBps (77s)
- S110, E5-2660 v2 2.2GHz, 64GB RAM, EVO 850 2TB, mdadm RAID1: 232MBps (43s), 85MBps (117s)
References on the performance of hardware RAID vs mdadm:
- https://serverfault.com/a/685328
- http://en.community.dell.com/support-forums/servers/f/906/t/19475037
- https://delightlylinux.wordpress.com/2016/05/24/motherboard-raid-or-linux-mdadm-which-is-faster/
- https://linuxaria.com/pills/how-to-properly-use-dd-on-linux-to-benchmark-the-write-speed-of-your-disk
- https://romanrm.net/dd-benchmark