NDS-4600 - All That Is Known
This document, which covers what we know of the NDS-4600, was written with help from /u/Offspring.
Thanks to a note and a good find by /u/DerUlmer, I can now say there is a known way to change the zoning on the NDS-4600; see the "Changing the zoning" section.
https://blog.carlesmateo.com/2019/06/07/dealing-with-performance-degradation-on-zfs-draid-rebuilds-when-migrating-from-a-single-processor-to-a-multiprocessor-platform/
Note: I've not tested this (yet).
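If you do change the zoning, one quick way to confirm what each host actually ends up seeing is to walk the SES enclosure entries in sysfs. This is only a rough sketch of my own, not part of the zoning procedure itself; it assumes the standard Linux /sys/class/enclosure layout, and slot naming varies between enclosure firmwares, so treat it as a starting point:

```python
#!/usr/bin/env python3
# Sketch: list which enclosure slots map to which block devices on this host.
# Assumes the standard Linux sysfs layout /sys/class/enclosure/<enc>/<slot>/device/block/<sdX>.
import glob

for path in sorted(glob.glob("/sys/class/enclosure/*/*/device/block/*")):
    parts = path.split("/")
    enclosure, slot, dev = parts[4], parts[5], parts[-1]
    print(f"enclosure {enclosure}  slot {slot!r}: /dev/{dev}")
```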
My zero-U PDUs get covered up because my rack isn't especially deep.
One issue I've recently run into with a failed SATA drive in one of my NDS-4600 units is that Linux frequently tries to recover the drive by resetting the bus. This takes out a few other disks in the group with it. The resulting IO timeouts cause problems for my Ceph OSDs using those disks.
It should be noted that only some types of disk failure cause this: the Linux kernel only resorts to host bus resets in certain cases (I think), and I suspect the errors on the other disks are caused by the failing drive itself.
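I haven't settled on a fix, but one knob worth experimenting with is the per-device SCSI command timeout, so a misbehaving drive gets failed faster instead of the kernel spending a long time in error handling (which is when the resets seem to hit the neighbouring disks). The sysfs path and the 30-second default are standard Linux; the value below is a guess, not something I've validated against Ceph:

```python
#!/usr/bin/env python3
# Untested sketch: shorten the SCSI command timeout on all sd* devices.
# Requires root. Default is usually 30 seconds; 10 is an arbitrary example value.
import glob

NEW_TIMEOUT = "10"  # seconds

for path in glob.glob("/sys/block/sd*/device/timeout"):
    with open(path) as f:
        old = f.read().strip()
    with open(path, "w") as f:
        f.write(NEW_TIMEOUT)
    print(f"{path}: {old} -> {NEW_TIMEOUT}")
```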
Previously I posted about the used 60-bay DAS units I recently acquired and racked. Since then I've figured out the basics of using them and have them up and working.
Guess what has telnet enabled, plus something listening on TCP/1138. Telnet is obvious, but I'm still working out what's on TCP/1138.
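A trivial way to check for those two ports from a connected host is a quick TCP connect test. The address below is a placeholder, not the shelf's real management IP, and this says nothing about what actually speaks on TCP/1138:

```python
#!/usr/bin/env python3
# Quick connect test for the telnet port and the mystery TCP/1138 port.
import socket

SHELF = "192.0.2.10"  # placeholder: substitute your NDS-4600 management IP

for port in (23, 1138):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((SHELF, port)) == 0 else "closed/filtered"
        print(f"TCP/{port}: {state}")
```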
These two NDS-4600-JD-05 units each have space for 60 3.5" drives, with four 6 Gbps SAS ports on each of the two controllers. The plan is to connect two R610s (eventually R620s) to each of them, with the DAS units partitioned so that each of the R610s/R620s gets 30 disks (well, 15 disks on each of a pair of redundant 6 Gbps SAS lines). There will be one Ceph OSD per disk. Half of the 30 disks will be 8TB and half will be either 3TB or 2TB disks.
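Before creating OSDs on a host, it's worth sanity-checking that it really sees the expected split of drive sizes. Here's a small sketch of my own that buckets the visible disks by capacity from sysfs (sector counts are in 512-byte units); the 8TB/3TB/2TB buckets just reflect my drive mix and aren't anything the shelf enforces:

```python
#!/usr/bin/env python3
# Sketch: count visible disks by rough capacity before creating Ceph OSDs.
import glob
from collections import Counter

sizes = Counter()
for path in glob.glob("/sys/block/sd*/size"):
    with open(path) as f:
        sectors = int(f.read().strip())     # 512-byte sectors
    tb = sectors * 512 / 1e12               # approximate decimal terabytes
    sizes[f"{tb:.0f}TB"] += 1

for size, count in sorted(sizes.items()):
    print(f"{size}: {count} disks")
```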