Ceph: What Drive Sizes To Use

Drive Counts

  • 43 × 2TB
  • 1 × 2.5TB
  • 23 × 3TB
  • 42 × 8TB (In Ceph)
  • 30 × 8TB (Not In Ceph)

Why

Why these drives? They're the main data drives I've had and used for years. The only ones I'm going to remove soon are a subset of the 2TB drives with high spin time; some of them have more than 8.5 years of it. I'll probably remove any disk with more than 6 years of spin time as a preventive measure.
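
A quick way to spot the high-spin-time disks is SMART attribute 9 (Power_On_Hours). Here's a minimal sketch, assuming smartmontools is installed and the data drives all show up as /dev/sd? (adjust the glob for your setup, and note that some vendors format the raw value differently):

  # Print power-on hours for each drive; 6 years is roughly 52,560 hours.
  for d in /dev/sd?; do
      hours=$(smartctl -A "$d" | awk '/Power_On_Hours/ {print $10}')
      echo "$d: $hours hours"
  done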

Ceph: osd_memory_target

A couple of months back I changed the value of "osd_memory_target" for all of my OSDs from 4GiB to 1.5GiB. That change has stopped all RAM-related issues on my cluster. While I suspect (though can't prove) a small performance drop, it's well worth the trade in my case.
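
For anyone wanting to do the same, the change can be made cluster-wide at runtime through the central config database. A sketch, assuming a reasonably recent Ceph release; the value is given in bytes, and 1.5GiB works out to 1610612736:

  # Set osd_memory_target for all OSDs to 1.5GiB (1610612736 bytes).
  ceph config set osd osd_memory_target 1610612736

  # Verify the value a specific OSD resolves (osd.0 is just an example).
  ceph config get osd.0 osd_memory_target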

NDS-4600 - SATA Drive Failures In Linux

One issue I've recently run into with a failed SATA drive in one of my NDS-4600 units is that Linux repeatedly tries to recover the drive by resetting the bus, which takes out a few other disks in the same group with it. The resulting IO timeouts cause problems for the Ceph OSDs using those disks.

It should be noted that only some types of disk failures cause this. The Linux kernel only resorts to host bus resets in certain cases (I think), and I suspect the failing disk itself is what triggers the errors on the other disks.
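
One knob that may help here (a sketch, not something I've verified) is the kernel's per-device SCSI command timeout in sysfs, which defaults to 30 seconds. Raising it gives the neighbouring disks a better chance of riding out a bus reset instead of timing out:

  # Check the current SCSI command timeout (in seconds) for one disk.
  cat /sys/block/sdb/device/timeout

  # Raise it to 120 seconds for every sd? disk (sdb above and the glob
  # here are examples; in practice, target the disks behind the NDS-4600).
  for t in /sys/block/sd?/device/timeout; do
      echo 120 > "$t"
  done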
