Today I wondered: what makes a good computing infrastructure setup? Of course, context matters here. Does one measure good infrastructure in terms of “enterprise-grade”, or prefer just a well-working home setup? Or can one actually transfer ideas from enterprises to their smaller-scale siblings?

Just to clarify: I will focus on the environment at home, since these kinds of fast-moving thought experiments don’t go well with enterprises.

What does one require?

I had to think about what kinds of requirements I actually have. Which kind of infrastructure do I use at home?

  1. Basic internet access for
    • Performing study-related tasks
    • Entertainment
    • Whatever you need the internet for
    • Both my desktop and mobile devices
  2. Basic local infrastructure for
    • Performing backups of both my desktop and mobile machine
    • Streaming audio to my analogue HiFi system
  3. Advanced local infrastructure for
    • Running and testing computing projects
    • Learning about the latest hype cycle subject
    • Fun and profit

Another important global requirement is energy efficiency: power costs are quite high in Germany.

Requirement decomposition

Let’s break down the different tiers into what should actually be accomplished and how.

Tier 1: Basic internet access

  • DHCP service
  • Traffic routing/packet forwarding to my ISP
  • Local DNS resolver
  • Wireless- and Ethernet-based network ports to connect my devices to

Tier 1 is instantly solved by tons of off-the-shelf components, also called home routers. I opted for a very basic one by Ubiquiti, and it actually does its job very reliably, no hiccups yet. Sadly, I can’t really say that about my WiFi access point (which is also why I’m not listing it here; it’s running OpenWrt and branded as a travel router). Since I’ve been very satisfied with my current Ubiquiti router, I have been looking into the UniFi AP FlexHD, which is still quite pricey but has an awesome form factor, looks good on a desk and is powered via PoE, or the UniFi AP AC Lite, which is less expensive, a bit slower on 5 GHz and can be powered via PoE passthrough from the ER-X.

Tier 2: Basic local infrastructure

  • NFS / TimeMachine storage server
  • AirPlay-supporting digital-to-analogue interface

The first requirement is accomplished by my Synology DS213+, which I grabbed from eBay (and modded a bit by replacing the default fan). The AirPlay streaming service is provided by a Raspberry Pi 4 with a HiFiBerry extension board and an awesome tool called shairport-sync.
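For reference, shairport-sync is configured via a libconfig-style file. A minimal sketch of what mine looks like; the advertised name and the ALSA device name are assumptions about my particular setup, not universal defaults:

```
// /etc/shairport-sync.conf -- minimal sketch
general = {
    name = "HiFi";                        // name shown in the AirPlay picker
};
alsa = {
    output_device = "hw:sndrpihifiberry"; // HiFiBerry card exposed by ALSA
};
```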

The only problem with the current configuration is that backing up my Mac over WiFi is quite slow due to the aforementioned access point.

Tier 3: Advanced local infrastructure

I have two machines that serve as the compute basis for this tier, with a total of 16 cores, 32 GB of RAM and about 1 TB of storage. The requirements:

  • Dynamically allocate storage and compute as needed
  • Flexible networking
  • Low configuration and maintenance overhead
  • Support for multiple tenants for sharing with friends and family
  • Setting up reference routes in the local network using DNS
  • Only-run-what-you-use: when no compute is required, the setup should not draw a lot of power

The only thing on this list that works reliably is the reference route aspect: the DNS resolver for my home network is split off from the ER-X router and also runs on the RPi mentioned in the Tier 2 section. The RPi runs a CoreDNS instance sourcing its configuration from local storage, plus a frontend application I built called Koala.
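The shape of that CoreDNS setup is roughly the following Corefile sketch. The zone name, zone file path and upstream address are placeholders, not my actual values:

```
# Serve the local zone from a file on disk, forward everything
# else upstream (e.g. to the ER-X or the ISP resolver).
home.example {
    file /etc/coredns/zones/home.example.db
    log
}

. {
    forward . 192.168.1.1
    cache 30
}
```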

Dynamically allocating storage and compute as needed can be solved either from an application perspective, where I just supply application blueprints to my compute environment, or through dynamic provisioning of virtual machines, which is more flexible but also more complicated. If one chose dynamic application provisioning, I guess you could probably just set up a Kubernetes cluster. But since running virtual machines is fun (and also more flexible, did I mention that?), a good off-the-shelf solution seems to be Proxmox. A few years back, I also tried to build a basic provisioning frontend but never finished it. Maybe it’s time to look into fjell again, since I would love to have a more refined DigitalOcean-like user experience at home.
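As a sketch of what “dynamic VM provisioning” means in Proxmox terms, here is the lifecycle via its `qm` CLI. The VM ID, the `local-lvm` storage pool and the `vmbr0` bridge are assumptions about a default-ish Proxmox install:

```
# Create a small VM: 2 cores, 2 GB RAM, 16 GB disk, one NIC on vmbr0.
qm create 110 --name sandbox --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:16
qm start 110

# Tear it down again when the compute is no longer needed.
qm stop 110
qm destroy 110
```

A frontend like the one I have in mind would essentially drive these operations (via the Proxmox API rather than the CLI) behind a nicer UI.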

Flexible networking is a hard one to solve. Managing virtual networks via VXLAN manually has been a horrible task in my experience, but maybe Proxmox can help me with that too.
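To illustrate why the manual route is painful, here is roughly what plumbing a single VXLAN segment by hand with iproute2 looks like, per host, per peer. The VNI, underlay interface and peer address are placeholder values:

```
# Create a VXLAN interface with VNI 100 on the underlay NIC.
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789

# Without multicast, every remote peer needs a manual FDB entry.
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.168.1.21

ip link set vxlan100 up
ip addr add 10.100.0.1/24 dev vxlan100
```

Every new host or peer means repeating and synchronising this state everywhere, which is exactly the bookkeeping I would rather delegate to Proxmox.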

Multi-tenancy I should be able to accomplish by setting up identity provider infrastructure. I looked into the ORY ecosystem, but it seemed quite heavyweight and a bit too much for the use case I’m looking at here. At some point I found dex, a project by the folks from CoreOS (now Red Hat, now IBM). While I still need to look into the templating and theming (’cause consistent user experience, duh), I have tried to set it up with a local version of Koala and it seemed to work quite well. And the connector interface seems fairly straightforward.
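A dex setup along these lines boils down to a small YAML config: one upstream connector plus a static client per local application. The issuer URL, client IDs and secrets below are placeholders, not my real values:

```yaml
issuer: https://auth.home.example
storage:
  type: sqlite3
  config:
    file: /var/dex/dex.db
connectors:
  - type: github
    id: github
    name: GitHub
    config:
      clientID: $GITHUB_CLIENT_ID
      clientSecret: $GITHUB_CLIENT_SECRET
      redirectURI: https://auth.home.example/callback
staticClients:
  - id: koala
    name: Koala
    secret: some-long-random-secret
    redirectURIs:
      - https://koala.home.example/oauth/callback
```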

Only-run-what-you-use should probably be renamed to Power over Ethernet: for PoE-powered devices, cutting power at the switch port means they draw nothing when not in use.
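For the machines that aren’t PoE-powered, a software-side complement (my assumption, not part of the setup above) would be to suspend them and wake them on demand with Wake-on-LAN. The magic packet is simple enough to build by hand; the MAC address here is a made-up example:

```python
import socket

def magic_packet(mac: str) -> bytes:
    # A Wake-on-LAN magic packet is 6 bytes of 0xFF followed by the
    # target MAC address repeated 16 times (102 bytes total).
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 48-bit MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Send the packet as a UDP broadcast on the local segment.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

if __name__ == "__main__":
    print(len(magic_packet("00:11:22:33:44:55")))  # 102
```

The NIC has to support WoL and have it enabled in firmware, of course, but it would let a frontend power the compute boxes up only when someone actually requests a VM.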

Low maintenance overhead should be quite easy to achieve: since I only have two machines, the possibility of any kind of hardware failure is quite low compared to any large-scale system.


I should probably go buy a better WiFi access point. And maybe building a DigitalOcean-like interface for provisioning virtual machines, networks and storage at home is worth it?