Mitigating A Series of Unfortunate Cosmic Events


Well, after a bit of research, I determined that re-sharing a remote NFS mount over Samba wasn’t the best idea. Without getting into too much detail, from what I’ve read, people who have tried usually end up with file-locking and synchronization issues between the SMB server and the NFS server.

Now, I was willing to accept that risk because, technically, the NFS export is running on the VM host and the Samba server would be running in a VM on that same host. So as far as network latency is concerned, I figured the way I’d be implementing it would mitigate the risk of synchronization issues.

What I really wanted was to set up a distributed file system using GlusterFS. The problem is that I don’t have the bare metal available for a proper three-node setup.

Then came a series of hardware failures that I’m officially blaming on the Cosmos. (Act of God or cosmic rays? I’m not a religious person, but one way or another, the odds of having four hard drive failures, on different systems, in different locations [office, my mini dev-ops lab (spare bedroom), and studio in the basement], on different media types [mechanical and SSD], all within 4-5 weeks [yeah, March sucked] are so unlikely that I can’t help but blame it on something out of this realm.)

I survived these events without any catastrophic data loss, mainly because of my data hoarding and not so much because I’m super excellent at keeping regular backups. Yes, sorry folks, I’m human too, and have procrastinated too long on setting up a proper backup solution.

Now, in all fairness, I was in the middle of implementing an automated strategy, but some of these unforeseen hardware failures left me scrambling to throw files onto whatever systems had the available space until I could finish implementing my storage and backup strategy.

So my data is more of a mess, and one of the servers I had planned on using as a node for my GlusterFS experiment is running in a degraded state because it only has 3 of its 4 drives available. I’m not going to buy a single replacement drive for that server, because in the near future I’m going to replace all the drives with larger-capacity ones.

However, learning the way of a DevOp isn’t cheap, and neither are everyday living expenses. With this past year and a half devoted to brushing up on software development and learning a crap ton of new skills I didn’t plan on, with no real income, savings tend to start shrinking like a shriveled raisin that used to be a plump-fat-ass grape.

So I can’t be dropping fat-stacks on countless server upgrades. Those of you in the ‘DevOp’ scene know: there’s always something else you need to add to your setup that will make all the difference.

With an upcoming series I’m putting together about Virtualization for the Home Office, I’ve already spent more than I should have on gear, so buying another 5-10 hard drives for storage nodes in a GlusterFS ‘test’ is not a top priority. Mainly due to the emphasized word in that last sentence: it’s a test, meaning not imperative to operations.

What is imperative is having network storage available for me and Mrs. Me to continue developing cool shit together. If we can’t do that, then what’s the point? So I’ve decided that, for now, I’m just going to set up an SMB/Samba server on my most recent virtualization host.
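For reference, the starting point for a server like this is a bare-bones share definition in /etc/samba/smb.conf. This is just a sketch of the general shape, not my actual config; the share name, path, and user below are placeholders for illustration:

```ini
[global]
   workgroup = WORKGROUP
   server string = Home office file server
   security = user

[shared]
   ; path and valid users are hypothetical examples
   path = /tank/shared
   browseable = yes
   read only = no
   valid users = brando
```

After editing, `testparm` will sanity-check the config, and `smbpasswd -a brando` adds a Samba password for an existing system user before restarting the smbd service.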

The silver lining is that it’s the first time I’ve used ZFS on a system, and I like what I see. In that regard it won’t be the typical Samba server I always set up.

That was part of what bothered me. I’ve set up Samba servers before, but I’m actively trying to learn new skills and techniques for managing these systems using the hardware I have available. (So no Geo-Replication for me yet.) At least this server’s underlying file system is one I haven’t used before and can gain more experience with. It doesn’t hurt that ZFS has lots of options that will further lessen the likelihood of a data-loss event.
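As a sketch of the kind of ZFS options I mean, the commands below cover compression, snapshots, and scrubs. The pool and dataset names (`tank/shared`) are hypothetical; adjust them to your own layout:

```shell
# Enable cheap inline compression on the dataset backing the share
zfs set compression=lz4 tank/shared

# Take a snapshot before big changes; snapshots are the first line
# of defense against accidental deletion or overwrites
zfs snapshot tank/shared@before-reorg

# List snapshots, and roll back if something goes sideways
zfs list -t snapshot tank/shared
zfs rollback tank/shared@before-reorg

# Periodically scrub the pool to catch silent corruption early
zpool scrub tank
zpool status tank
```

Scheduling the snapshot and scrub commands from cron is the usual low-effort way to make this automatic.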

So with this update complete, I’m heading over to set up the server and document how I went about it. Once that’s polished up into a presentable format, I’ll let y’all know.

Peace out for now. Stay Safe out there. -Brando

Author: Brando
