My Ceph cluster at home isn't designed for performance, and it isn't designed for maximum availability. It's designed for a low cost per TiB while still maintaining usability and decent disk-level redundancy. Here is some recent tuning to help with performance and corruption prevention.
Previously I posted about the used 60-bay DAS units I recently acquired and racked. Since then I've figured out the basics of using them and have them up and working.
```
[ceph: root@xxxxxx0 /]# ceph orch host add xxxxxxxxxxx1
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
    wrapper_copy = lambda *l_args,
```
- One IP per mgmt card
- Basic telnet
  - No login
  - Can control various backplane options
  - Working on finding out how to control backplane mode
- Not TLS
  - openssl s_client causes it to restart
  - Need to fuzz this
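The "not TLS" observation above can be checked without crashing the card via `openssl s_client`; here's a minimal sketch in Python that just attempts a handshake and reports whether one completes (the host and port you point it at are up to you — nothing below is specific to these cards):

```python
import socket
import ssl

def speaks_tls(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the service completes a TLS handshake, False otherwise."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only care whether TLS is spoken at all
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

A plain-telnet service should fail the handshake and return `False`; of course, if the firmware restarts on any unexpected bytes, even a single ClientHello may be enough to bounce it.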
Guess what has telnet enabled and something listening on TCP/1138. Telnet is obvious, but I'm still working out what TCP/1138 is.
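A first step on a mystery port like this is a banner grab: connect and see whether the service says anything unprompted. A minimal sketch (it assumes nothing about the protocol on TCP/1138):

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0, nbytes: int = 256) -> bytes:
    """Connect, wait briefly for any unsolicited data, and return what arrived."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(nbytes)
        except socket.timeout:
            return b""  # silent service: it expects the client to talk first
```

If the port is silent, the next step is sending a few common probes (a newline, an HTTP request line, etc.) and reading again — carefully, given how this firmware reacts to unexpected input.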