Blog

  • Project Luna

    A lot can happen over the course of three months. My original 7-year hardware freeze starting June 1st, 2025 quickly evolved into a 9-year plan due to an attempt to achieve “high availability” in the homelab. Some aspects of this were already in place – two separate network-attached storage (NAS) virtual appliances, for instance. But the combination of wanting to test the VMware (now Broadcom) Cloud Foundation stack and comparing vSAN to Ceph cluster storage (more to come on this in future blog posts) in pursuit of highly available VMs led to yet more changes at the hardware level. As of this blog post on September 1st, 2025 – three months since the last post – the homelab looks a lot different, but I think it is much better for the long haul.

    The idea of waiting 9 years (to roughly coincide with my 50th birthday) is to stretch out the original 7-year plan. The core hardware and software stack will, for the most part, remain in a supported software state through those 9 years. That meant pushing the envelope to get hardware that is as new as reasonable right now (still in the grey area between prosumer and enterprise) while still optimizing cost and energy consumption along the way. No jet-engine rack servers or enterprise network switches for me. This hardware runs about 10 feet from my office desk, so it has to be nearly silent. Constant fan whine and huge energy bills are both non-starters.

    But the combination of “high availability” as a general design goal alongside maintainability led down an interesting path. All of my designs had been – with a few exceptions – focused on my own individual ability to maintain a working homelab. But what if the homelab were designed so that maintenance and troubleshooting tasks could be performed by a spouse or kids? “The Internet is down again” is something any homelabber sharing a network connection with others in a household has dreaded. What broke this time? These problems are often relatively easy to fix for the more technically focused, but wouldn’t it be great for the other members of one’s household to have the same level of self-service troubleshooting that already exists with a standard consumer router (turn it off and back on)? That’s the genesis of Project Luna – a moonshot to see if I could build a highly available and easy-to-troubleshoot homelab that I could essentially hand off to others for day-to-day operations and maintenance. Not the experimental bits, just the resources needed by others in the household (Internet, Plex, etc.).

    Project Luna is a redesign of the homelab, segmenting the “home” and “lab” sections of the hardware and networking. When an experiment goes awry, it shouldn’t take down Plex. When the Internet connection needs to be reset, it shouldn’t involve a reboot of the virtual machine host running TrueNAS. When equipment needs to be power cycled, it shouldn’t involve tracing cables or pulling plugs out of the backs of devices. And when a piece of equipment fails (permanently), is there a backup system ready to take its place right away?
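
    To make the “home” versus “lab” split a little more concrete before the next post, here is a rough sketch of how I think about the two failure domains. The VLAN IDs, subnets, and service names below are purely hypothetical placeholders for illustration, not the actual design:

        # Purely illustrative sketch of Project Luna's failure domains.
        # VLAN IDs, subnets, and service names are hypothetical placeholders.
        FAILURE_DOMAINS = {
            "home": {
                "purpose": "services the household depends on day to day",
                "vlan": 10,                    # hypothetical
                "subnet": "192.168.10.0/24",   # hypothetical
                "services": ["internet/router", "wifi", "plex", "dns"],
                "reset_by": "anyone in the household (labeled power switch)",
            },
            "lab": {
                "purpose": "experiments that are allowed to break",
                "vlan": 20,                    # hypothetical
                "subnet": "192.168.20.0/24",   # hypothetical
                "services": ["vsan testing", "ceph testing", "vcf stack"],
                "reset_by": "me only",
            },
        }

        def blast_radius(service: str) -> str:
            """Return which failure domain a misbehaving service can affect."""
            for domain, spec in FAILURE_DOMAINS.items():
                if service in spec["services"]:
                    return domain
            return "unknown"

        # A misbehaving Ceph experiment should never show up as a "home" outage.
        assert blast_radius("ceph testing") == "lab"

    The only rule that really matters in this sketch is that nothing in the “home” domain should ever depend on anything in the “lab” domain.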

    The next post will outline the redundant network and router design, with a focus on energy efficiency and failure domains.

  • The Great Reset

    The past decade has been a whirlwind learning experience for me when it comes to technology. This blog post on June 1, 2025 marks the conclusion of that phase of learning. What started off as a way to host iTunes music and movies for sharing at home turned into a self-hosted homelab that consumed an ever-increasing amount of time and effort to learn and optimize. Fortunately, the power bill to sustain this learning process has remained consistently manageable.

    The choices I made for the final iteration of the homelab won’t make much sense without some context, so here’s the background in a nutshell:

    • Interested in computers and tech in general as a kid
    • First home computer: Macintosh Performa 636 CD
    • Dell Dimension running Windows ME was first home PC
    • Started college with a 12″ PowerBook G4
    • Built first DIY Windows PC in college (based on Athlon 64)
    • Worked in student tech support center in college (part-time)
    • Used Unix in Astronomy undergrad program for data analysis
    • Started an Astronomy PhD but changed career track to tech
    • Worked as contractor, then university employee for help desk
    • Earned an MBA for career advancement while working
    • Started job as IT manager at large real estate company in 2013
    • Started new job in government as IT manager in 2016

    It was after starting the IT manager job in 2016 that I got to see the full scope of an IT operations environment. Previously, I had just worked in one silo of a larger organization. The other parts of the IT infrastructure were always “handled by another team.” The scale of the organization I started working for in 2016 meant it was responsible for the entire IT infrastructure stack – network, servers, email, business applications, desktop support, etc.

    With responsibility for such a wide scope and varied areas of technical expertise, it became necessary (at least in my own perception) to learn as much as I could about those different areas in order to best lead a team and manage an IT infrastructure that supported important government functions. This is really the point where casual tinkering with computers (my longtime hobby and career) took on a new urgency to learn more, faster. The homelab quickly became the place where I sank an increasing number of hours into trying different hardware and software combinations.

    For those unfamiliar with the terms, a quick aside on the terminology (I learned of this distinction only recently, and at the time of writing can’t find the original source to provide attribution, unfortunately):

    • Homelab – a place to learn skills for IT jobs by building and maintaining a mirror or scaled-down version of an enterprise IT operation.
    • Self-hosted – rather than utilizing (primarily) paid or free cloud-hosted services, building and maintaining services that are self-managed and controlled (e.g. a self-hosted Plex server vs. Netflix).

    The homelab I have today originally started (with iTunes Home Sharing) as just a way to self-host. In part, this was a cost-saving measure to avoid having to rent or pay per view for media. I could build my own collection and have (in my own opinion) a better selection of media without having to pay for the privilege of watching it. Before the age of streaming services (like Netflix), this was a major difference in how content was consumed. Most people would just buy DVDs or Blu-rays and watch them by inserting each disc into a player connected to a single TV. With iTunes and alternatives like Xbox Media Center (XBMC, from which Plex was later derived), one could convert physical DVD and Blu-ray media into digital files that could then be streamed to other devices in the home – sometimes to more than one device at a time!

    While a quaint idea now, it was a strong incentive for the tech-curious to eschew the mainstream media platforms and build something to accomplish the same end goal. Media self-hosting was what originally got me involved, but there are a multitude of other self-hosted services, many of which I took on as additional projects over time. All of the self-hosted services I eventually came to rely on became the imaginary “critical business services” that my homelab had to support. It was this continued delicate balance – experimenting with work-related or work-adjacent technology while still maintaining those self-hosted services (and drawing on copious amounts of patience from my spouse during maintenance and troubleshooting outages) – that really explains the full motivation for the past decade of my homelab.

    A chronology of every homelab iteration I’ve used over time would be tedious to read, let alone write. So for the purposes of this blog, I am simply going to showcase the current “best approximation of an imperfect ideal” and then look back on all the attempts that didn’t work or were replaced for one reason or another. The central critique of anything I write here is going to be the assumption that I’ve somehow magically arrived at an optimal point at all. With technology improving all the time, why would I suddenly want to freeze my homelab in amber? With time, and frankly with getting older, the never-ending experiment (the tagline for this blog) has to have some finite end, or at least a way to scale back the time investment.

    Unlike most individual tech blogs, this blog is more of a retrospective / memoir than an ongoing web-log (blog). There will still be (I hope) new things I can learn and experiment with, and I’ll add those experiences to this blog. But I’ve made a promise to myself to avoid any hardware changes for as long as I can – the general goal I’ve set for leaving things in amber (on the hardware side at least) is 2032, roughly 7 years from now. Security updates and software upgrades can’t be left unattended for 7 years, so those will continue at a normal pace. But my hope is that the current homelab has enough hardware and redundancy in place to last 7 years without any major overhauls or hardware upgrades. For a tinkerer such as myself, this is going to be a challenge in itself.

    The title of this blog post, “The Great Reset”, is less about changing anything on the homelab itself – that has been iterated on up until now – and more about a reset in priorities and a focus on other things in life generally. This blog post, or something like it, was originally supposed to happen in January 2025 (as a New Year’s Resolution of sorts). The past 6 months of “one more thing” or “maybe if I change it to work this way” are evidence of just how hard it was, and will be, for me to hit pause on the hardware tinkering. But here we are, and hopefully my resolve to stay the course with my current homelab configuration will last longer than previous attempts (which rarely made it even a month before everything was rebuilt or reconfigured yet again).