Repeater Update | 449.375- 110.9 Hz | Changed Room
I've turned off the VA-Sterling room for now, as my node now defaults to the Virginia room along with a few other nodes in the state.
Sterling, VA WIRES-X Node & Repeater Is Up: 449.375- 110.9 Hz
The Sterling, VA, USA repeater and WIRES-X node are now up and operational.
It is a full-duplex WIRES-X node, C4FM repeater, and FM repeater in grid square FM19ha, run by KG4TIH.
Scrubbing vs. Deep Scrubbing
Ceph has two forms of scrubbing that it runs periodically: Scrub and Deep Scrub.
A Scrub is basically an fsck for replicated objects: it ensures that each object's replicas all exist and are the latest version.
A Deep Scrub is a full checksum validation of all data.
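Both can also be triggered manually per placement group from the ceph CLI if you don't want to wait for the schedule; the PG ID 2.1f below is just a placeholder:

$ ceph pg scrub 2.1f          # light scrub: compare object sizes/metadata across replicas
$ ceph pg deep-scrub 2.1f     # deep scrub: read every object and verify checksums
$ ceph osd set nodeep-scrub   # temporarily pause deep scrubbing cluster-wide
$ ceph osd unset nodeep-scrub # resume deep scrubbing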
Ceph For Media Storage (Big But Slow I/O)
My Ceph cluster at home isn't designed for performance. It's not designed for maximum availability. It's designed for a low cost per TiB while still maintaining usability and decent disk-level redundancy. Here is some recent tuning to help with performance and corruption prevention...
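The excerpt trails off before the actual tuning, so as a rough sketch of the low-cost-per-TiB idea only: an erasure-coded data pool yields far more usable capacity than 3x replication at the cost of some write performance. The profile/pool names and the k=4, m=2, pg_num=128 values below are made up for illustration, not taken from the post:

$ ceph osd erasure-code-profile set media_ec k=4 m=2 crush-failure-domain=osd   # osd = disk-level redundancy
$ ceph osd pool create cephfs_media_data 128 128 erasure media_ec
$ ceph osd pool set cephfs_media_data allow_ec_overwrites true   # required before CephFS can use an EC pool for data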
70cm Sterling Wires-X Node Online
Sterling, VA now has a Wires-X node (100% digital with full control) on 439.800 MHz.
Note: If/when coordinated frequencies come in and the associated repeater is up, the frequencies will change.
23 TiB On CephFS & Growing
Original post on Reddit
Hardware
Previously I posted about the used 60-bay DAS units I recently acquired and racked. Since then I've figured out the basics of using them and have them up and working.
"error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]"
I was getting this from podman on a CentOS 8 box:
"error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]"
It was fixed by killing off all podman and /usr/bin/conmon processes running as the user I was executing the commands as. Note: don't do that with killall as root unless you limit it to your own user.
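A generic way to do that safely, assuming you're logged in as the affected non-root user:

$ pkill -u "$USER" podman               # kill only this user's podman processes
$ pkill -u "$USER" -f /usr/bin/conmon   # kill only this user's conmon processes
# or, if running killall as root, scope it to the user:
# killall -u someuser conmon            # 'someuser' is a placeholder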
The underlying error may have been running out of file descriptors.
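If you want to confirm that, the per-process limit and current usage can be checked like this (the PID is a placeholder for a running podman or conmon process):

$ cat /proc/<pid>/limits | grep open   # "Max open files" for that process
$ ls /proc/<pid>/fd | wc -l            # FDs it currently has open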
Ceph With Many OSDs
While setting up my Ceph cluster on a set of Dell R710s, one of which has 60 disks attached, I found that I needed to raise fs.aio-max-nr to around 1,000,000. SELinux also needed to be disabled. Once that was done, the normal cephadm OSD install worked great, even with 60 disks.
$ cat /etc/sysctl.d/99-osd.conf
# For OSDs
fs.aio-max-nr=1000000
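To apply the sysctl without a reboot, plus one common way to handle the SELinux part (the post doesn't say exactly how it was disabled, so treat this as a sketch):

$ sysctl -p /etc/sysctl.d/99-osd.conf   # load fs.aio-max-nr immediately
$ setenforce 0                          # permissive until the next reboot
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # persistent across reboots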
Ceph Orch Error When Adding Host
[ceph: root@xxxxxx0 /]# ceph orch host add xxxxxxxxxxx1
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
    wrapper_copy = lambda *l_args,