We're less than a week into the new year and the January fundraiser I recently posted about, but I wanted to post an update as there have been some things going on. Those of you who frequent our
Discord, or dropped by my
Twitch channel the other day, will know that I've been building and installing one of the two servers I am aiming for.
I can't express how thankful I am for the support so far, and for how the community steps up every time there is a need like this. Thank you everyone!
I'll go through some tech stuff below for those of you who are interested (again, some of this is a repeat from the last post), but before I get into that I want to address what's next for this fundraiser. I am still trying to raise the funds to purchase the second machine (a storage server), and anyone who donates before the end of the month will have their account marked with the "Saved Karina" icon. We're honestly not far away from the second server (roughly $900 at the time of writing), and even if this goal isn't reached by the end of the month, donations made after that will still go towards it.
Anyway, moving on to the techy stuff:
The first machine, which was already funded and purchased, is specced as follows:
- Intel 13700K
- 128GB DDR5 memory
- 2x 250GB M.2 NVMe drives in a mirrored software RAID setup (specifically WD Blue SN580)
- 4x 2TB SATA SSD drives for VM storage (PNY CS900)
- 2x 1TB M.2 NVMe PCIe Gen 5 drives to increase write speed (Crucial T700); more on this later.
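For reference, the mirrored boot drives are a standard Linux software-RAID setup. A minimal sketch of how such a mirror is created with mdadm (the device names here are hypothetical placeholders; in practice the hypervisor's installer can handle this for you):

```shell
# Create a two-drive RAID-1 (mirror) array from the NVMe drives.
# Device names are hypothetical -- check yours with `lsblk` first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Watch the initial sync and verify the array state.
cat /proc/mdstat
mdadm --detail /dev/md0
```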
As you can see, this isn't enterprise grade (which I mentioned in the previous post as well), which is a way to cut costs and keep power consumption and noise to a minimum, as the servers run in my home (cue the questions about considering cloud; the short answer is "try speccing up a machine with 128GB RAM and 4TB of storage and see how much that costs").
On this machine, I have installed a hypervisor called XCP-ng. After a couple of days of playing around with it, I am quite happy with how it works and performs. There are plenty of features left for me to fully understand, but what I've seen so far looks great. It's easy to set up new virtual machines, it's easy to manage them and move them around between different storage systems, and the overall setup and configuration feel solid.
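Most day-to-day management happens through the web UI, but XCP-ng also ships a command-line tool, `xe`. A small sketch of the kind of operations I mean (the VM and host names are hypothetical placeholders):

```shell
# List the virtual machines known to the pool.
xe vm-list

# Live-migrate a running VM to another host in the pool
# (vm and host values are hypothetical placeholders).
xe vm-migrate vm=my-vm host=host2
```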
Since the second server I'm aiming for is intended for storage, I have a temporary setup in the current machine. While it's a temporary solution, it's set up in a way that will be easy to transfer to the second server once it's in place.
Storage is always a fairly complicated consideration for me, because resilience is absolutely critical. I need the site to continue running if/when a drive dies, without having to resort to restoring from backups. Historically, I have run various solutions for this (for example software RAID, Windows Storage Spaces, and BTRFS), but with this new setup I wanted to try ZFS. ZFS is very popular, has been around for a long time, and adds some features which aren't available in traditional RAID.
One such feature is data checksumming, which allows the system to correctly handle situations where a drive is failing and returning corrupt data. With traditional RAID, there is no way for the system to know which copy of the data is correct, and in the worst case you can lose your data. With ZFS, every block has a checksum, which means the system can look at the two copies, identify which one is valid, and repair the error automatically.
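In practice, this self-healing is exercised with a scrub, which reads every block in the pool and verifies it against its checksum. A minimal sketch (the pool name `tank` is a hypothetical placeholder):

```shell
# Read and verify every block in the pool against its checksum;
# mirrored copies are used to repair any corruption found.
zpool scrub tank

# The CKSUM column in the status output counts checksum errors
# detected per device.
zpool status tank
```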
Another benefit of ZFS is that it has multiple levels of caching and performance features. For example, it caches data in system memory so that the most commonly read data is served basically instantly, without touching the drives at all (this is one of the reasons it's a good idea to have a separate storage server). There is also what's called a "log" device, which can be used to speed up synchronous writes. This is what those two PCIe Gen 5 drives are doing in my setup. They're an order of magnitude faster than the SATA drives, and allow the system to acknowledge writes safely without just buffering data in RAM (which is possible to configure as well, but is a bad idea if you lose power unexpectedly). You'll notice that I picked up two of these drives, which is again for resiliency: they work as a mirrored pair, so if one of them fails there is no data loss. Similarly, the four SATA drives are set up as two mirrored pairs (in RAID terms, this layout is often called RAID-10).
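Put together, a layout like the one described above can be created in a single command. A hedged sketch, not my exact configuration (the pool name and device names are hypothetical placeholders):

```shell
# Two mirrored pairs of SATA SSDs striped together (RAID-10 style),
# plus a mirrored pair of fast NVMe drives as the log (SLOG) device.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    log mirror /dev/nvme2n1 /dev/nvme3n1
```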
As a last example of its features, ZFS supports compression (LZ4, which isn't super effective in terms of compression ratio, but is very, very fast). Early numbers indicate that this is giving me roughly 1.7x the storage capacity of the physical drives.
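Compression is enabled per pool or dataset, and the achieved ratio can be read back as a property. A small sketch (the pool name is again a hypothetical placeholder):

```shell
# Enable LZ4 compression on the pool (inherited by child datasets).
zfs set compression=lz4 tank

# Check the achieved compression ratio, reported as e.g. "1.70x".
zfs get compressratio tank
```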
While all of this was new tech to me, I did enough "homework" before buying anything that I had a general understanding of how I wanted to configure things. I feel confident that the new setup will serve FUMBBL well, and will allow smoother and easier hardware migrations in the future (XCP-ng lets me move virtual machines across both storage systems and hypervisors in a transparent way, which is incredibly cool).
Thank you again for your immense support!