Ceph With Many OSDs

While setting up my Ceph cluster on a set of Dell R710s, one of which has 60 disks attached, I found that I needed to raise fs.aio-max-nr to around 1,000,000 and to disable SELinux. Once that was done, the normal cephadm OSD install worked great, even with 60 disks.

$ cat /etc/sysctl.d/99-osd.conf

# For OSDs
fs.aio-max-nr=1000000
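
For reference, the rest of the prep might look something like this (run as root). It is only a minimal sketch: the exact SELinux handling is up to you, and the cephadm step assumes you want an OSD on every unused device the host can see.

# Load the new sysctl setting without a reboot
$ sysctl --system

# Switch SELinux to permissive now and disable it at the next boot
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Have cephadm create an OSD on every available, unused device
$ ceph orch apply osd --all-available-devices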

Ceph Orch Error When Adding Host

[ceph: root@xxxxxx0 /]# ceph orch host add xxxxxxxxxxx1
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
    wrapper_copy = lambda *l_args,
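
The traceback above is truncated, so this is only a guess at the cause: one common reason ceph orch host add fails is that the cluster's SSH public key has not yet been copied to the new host, which cephadm requires before it can manage a host. A minimal sketch of that check, using the same hostnames as above:

# On the bootstrap node, outside the cephadm shell
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@xxxxxxxxxxx1

# Then retry from the cephadm shell
[ceph: root@xxxxxx0 /]# ceph orch host add xxxxxxxxxxx1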

Two NDS-4600-JD-05

These two NDS-4600-JD-05 units each have space for 60 3.5" drives and expose four 6 Gbps SAS ports on each of their two controllers. The plan is to connect two R610s (eventually R620s) to each of them, with the DAS units partitioned so that each R610/R620 gets 30 disks (15 disks on each of a pair of redundant 6 Gbps SAS links). There will be one Ceph OSD on each of the 30 disks. Half of the 30 disks will be 8TB and half will be either 3TB or 2TB disks.
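
Since which disks each server sees is decided by the DAS partitioning rather than by Ceph, the one-OSD-per-disk layout itself can be expressed as a cephadm OSD service spec. This is only a sketch; the service_id, file name, and the rotational filter are illustrative assumptions:

$ cat osd_spec.yml
service_type: osd
service_id: das_osds
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1

$ ceph orch apply osd -i osd_spec.yml

The mixed 8TB and 2TB/3TB drives simply come up as OSDs with proportionally different CRUSH weights, so the spec does not need any per-size handling.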
