At the time of writing, I've been working on the server migration for roughly 10 hours straight. I figured I'd give you an idea of what I've been up to during these hours :)
I won't do a full build log like I did with willow, as it's, generally speaking, much the same process in terms of hardware. I took the old server out of the rack case, cleaned it, and moved the new server into its place. This particular build had a slight snag, which caused a delay. Basically, deeproot has a lot of network interfaces (5 of them). Given that the X99 platform doesn't have integrated graphics at all, I also needed a graphics card for this setup.
Now, the GPU I had (an Nvidia GT 710) is passively cooled but has a pretty large heatsink. Large enough to actually stop me from installing it in the PCI-e slot I needed to use (the last one, in order to not block another slot), because the server case is built a bit differently from a tower.
So, I spent some time trying to bend the cooling fins out of the way, but eventually I stopped wasting time with that and used the GPU from the old deeproot. If that one goes bad at some point, it's easy enough to buy a new one.
Either way, during this process deeproot got restless and decided to attack me, drawing blood. (What really happened was that I accidentally bumped my head into a sharp corner on one of the rack rails.) No niggling injury or significant damage was done though. :)
I moved the case back into the rack, and it had now been 3.5 hours. Wow, time goes fast.
Next up, I spent some time reorganising the power cords, which were a messy tangle of cables and extension cords. I had picked up a "Power Distribution Unit" just for this purpose. Effectively, it's a power strip with a fancy name because it's related to 19" server gear.
After this, the hardware work was completed:
Next up, software. Going into this migration, I knew that I had a bunch of stuff remaining to do. In the old setup, deeproot had three primary purposes:
1. Firewall - Keeping things out that don't belong in the server network, and redirecting good traffic to the correct place
2. DNS - Making sure the servers know each other's names and addresses
3. Mail - Sending outgoing mail: signup confirmations, PM notifications (for people who have them enabled), and forum thread notifications.
The short of it is that I've spent a lot of time on all three of these, for different reasons.
Let's start with mail, because it should have been easy.
Before I even started this migration, I had outgoing mail configured and working on a new virtual machine (puggy). After the migration, I wanted the web site to use the new virtual server as its "forwarder", but for some reason it didn't work. I spent maybe 2 hours troubleshooting this, thinking that there was a DNS problem, thinking there was a firewall issue, thinking I mixed up the cables, tracing MAC addresses, looking at interface statuses... And then I noticed one interface saying "Status: Disconnected". This was the network card that was supposed to carry backend traffic between the servers (including email to be sent out). Go back and look at the bottom right of the image above. That white network cable in the corner. Does it appear to be sticking out a bit too much? How about on this closeup?
Yup. That was it. Pushed it in a couple of millimeters and the NIC status went to "Up", and shortly thereafter, email worked. I had to change some configuration in Postfix (the software I use for email) to make it listen on the proper interface, but the major part was the cable. Go figure.
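For the curious, the relevant knob is inet_interfaces in main.cf. This is just a sketch of the kind of change involved; the addresses below are placeholders, not my real backend network:

    # /etc/postfix/main.cf (sketch -- the 10.10.10.x addresses are made up)
    inet_interfaces = 127.0.0.1, 10.10.10.5     # listen on loopback plus the backend NIC
    mynetworks = 127.0.0.0/8, 10.10.10.0/24     # machines allowed to relay through puggy

One quirk worth knowing: changing inet_interfaces needs a full stop/start of Postfix, a plain reload isn't enough.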
DNS then. Oh my, what a mess DNS is. I won't go through all my problems here; suffice it to say I got to spend quite some time configuring BIND (the DNS software I, and most of the Internet, use). Compared to the previous configuration (also with BIND), I now have ACLs and Views set up to be able to separate my different networks (I'll detail the network setup in another blog post).
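The gist of ACLs and views, for those who haven't used them: you group client addresses into named ACLs, then serve a different version of the zone data depending on which ACL the client matches. A rough sketch (the ACL name, address range and file names are placeholders, not my actual config):

    // named.conf (sketch)
    acl "backend" { 10.10.10.0/24; };           // the server-to-server network

    view "internal" {
        match-clients { backend; localhost; };
        recursion yes;                          // internal machines may use it as a resolver
        zone "fumbbl.com" {
            type master;
            file "zones/fumbbl.com.internal";   // answers with backend addresses
        };
    };

    view "external" {
        match-clients { any; };
        recursion no;                           // everyone else only gets authoritative answers
        zone "fumbbl.com" {
            type master;
            file "zones/fumbbl.com.external";   // answers with public addresses
        };
    };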
Finally, the firewall. This has been a long journey over the last week. I've spent all my spare time on firewall configuration, trying to get things set up. The core problem came from me switching firewall software. Before, I was using a Linux package called "Shorewall", a configuration interface on top of the iptables firewall built into Linux. Icepelt, the new virtual machine that does firewalling (and DNS and DHCP), is running pfSense, a very popular FreeBSD-based firewall suite.
pfSense works very differently from Shorewall (or iptables in general), and it has taken me many, many hours (40+) to get a grip on it, trying various configurations and restarting over and over again until I ended up with a setup I'm comfortable with.
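To give a feel for the difference: Shorewall ultimately compiles its configuration down to iptables rules like the first line below, while pfSense sits on top of the BSD packet filter (pf), where a comparable rule looks like the second. Both lines are purely illustrative; neither is taken from my actual ruleset:

    # Linux/iptables (what Shorewall generates behind the scenes)
    iptables -A FORWARD -i eth1 -s 10.10.10.0/24 -j ACCEPT

    # FreeBSD/pf (what pfSense manages for you via its GUI)
    pass in quick on em1 inet from 10.10.10.0/24 to any keep state

In pfSense you rarely touch pf directly; rules are defined per interface in the web GUI and are stateful by default, which is part of what took getting used to.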
It's not a bad platform; in fact, it's nice to have a GUI this time around instead of having to look at an endless list of firewall rules. It's quite different though, and has plenty of strange quirks.
Even after hours on end of preparation, I still had to do the final config in the actual environment of the site. This took a while to sort out, but the "live work" has reinforced my belief that my understanding of pfSense is sound and that the setup works well.
So what's next?
Well, I still have some residual stuff to work on. My primary concern for now is that "Hyper-V Manager" refuses to connect to (new) deeproot. This is something that worked before I moved deeproot into place, but doesn't anymore. I have no clue why at this point, but I will be poking around with it until I get it working again. It's technically possible to do it all using PowerShell, but I'd rather not :)
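If it does come to PowerShell, the fallback would look roughly like this (a sketch, assuming WinRM is reachable and the Hyper-V module is available on the machine I'm managing from; the VM name is just an example):

    # Hyper-V Manager connects over WinRM on recent Windows versions,
    # so checking that WinRM answers at all is a reasonable first step
    Test-WSMan deeproot

    # List and control VMs without the GUI
    Get-VM -ComputerName deeproot
    Start-VM -Name icepelt -ComputerName deeproot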
After that, I need to document this new network setup I have, so I know where things are connected and what's what.
And then it's back to FUMBBL code; getting BowlBot on Discord to announce Blackbox draws is high on my list.
(Ok, so that took like an hour to write... Why can't things be quicker? :)