# Fully Automated Luxury Servers
Ansible playbooks to set up my servers.
## Topology
It's very simple. So simple in fact that I won't even bother with a diagram. There are three servers:
- Orca: Home server that hosts my private services.
- Plankton: A small VPS somewhere that runs public services, like my blog, code hosting, and email sending.
- Oyster: A Beelink ME Mini that acts as a remote backup server, and lives at my parents' place.
Orca and Plankton each run the current version of Fedora Server ("current" here changes once the OpenZFS package is available for that version), and Oyster runs FreeBSD.
## Usage
The tasks in here should all be tagged, but otherwise are fairly simple
playbooks. I configure my SSH to run on an alternative port, and make everything
available through Tailscale. That is handled in a separate playbook, with manual
inventories passed in. This basic stuff is handled in `setup.yml`. Assuming that
you can connect to this host via SSH:
```shell
ansible-playbook \
  -JK \
  -i "198.55.100.71," \
  setup.yml \
  -e "tailscale_auth_key=tskey-auth-..."
```
Once set up, the `main.yml` playbook can take over. It is highly customised for
my exact needs, but you might find something useful in the `roles/` directory.
## Roles
My hope is that all of the roles in `roles/` are readable and
straightforward enough, but here is a primer.
There are some fundamental roles that are written to be imported, and others that are called directly by the overall playbook. The former are helpers, and the latter actually install things that I might want to use. The helpers are:
- `acme_proxy`: Configures a virtual host in an existing Caddy installation to proxy only ACME requests down to a private (but accessible from the Caddy instance) host. This is the workhorse behind me getting real TLS certificates for my local services.
- `containerised_service`: Sets up a container image to be run by Podman in a given user account. Mostly a passthrough for the existing Podman modules, but it supports some Systemd defaults that I use.
- `service_user`: Creates a Linux user account, and ensures that they can run Systemd units on startup by enabling linger.
- `volume`: Navigates configuring a folder to be used as a mounted volume on a container under SELinux.
- `yum_repo`: Installs and enables a yum repo for custom software.
- `zfs`: Basic wrappers to create a ZFS dataset and mount it somewhere.
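A service role built on these helpers would pull them in with `ansible.builtin.import_role`. The sketch below is purely illustrative: the helper role names come from this repo, but every variable name is a guess at their interfaces, not the real thing.

```yaml
# Hypothetical tasks/main.yml for a service role built on the helper roles.
# All parameter names here are illustrative assumptions.
- name: Create a dedicated account for the service
  ansible.builtin.import_role:
    name: service_user
  vars:
    service_user_name: myservice

- name: Prepare an SELinux-labelled volume for the container
  ansible.builtin.import_role:
    name: volume
  vars:
    volume_path: /srv/myservice/data
    volume_owner: myservice

- name: Run the container under Podman with Systemd defaults
  ansible.builtin.import_role:
    name: containerised_service
  vars:
    containerised_service_user: myservice
    containerised_service_image: docker.io/example/myservice:latest
```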
There are several services that I run on bare metal because they concern themselves with platform-wide matters:
- `caddy`: Used as the TLS-terminating reverse proxy for basically every back end service hosted here.
- `clickhouse`: I don't run databases in containers. I use Clickhouse as a storage backend for observability data. For now, Gigapipe is the only real client.
- `firewall`: Ensures that firewalld is installed and working. I open a very few ports and set the default zone to public.
- `node_exporter`: For basic metrics about my hosts. This only runs node_exporter on each host, on a particular port. Scraping is handled separately.
- `opendkim`: Configures DKIM for a Postfix installation. It's very strictly set up for my exact use case, but the components are all here for something a little more generic.
- `podman`: Pretty much just installs Podman and any other required tools.
- `postfix`: I can send email from `@max.plumbing` addresses. The email is sent from Plankton, and all DNS/whatever is configured appropriately. Orca is configured as a relay service. This role takes an arbitrary map of config options and does the right thing. You can see the differences in `main.yml`.
- `postgresql`: I still don't run databases in containers. As much as I can, I prefer my downstream services to talk to this single PostgreSQL cluster that I can manage, observe, and back up centrally.
- `restic`: Systemwide backup. This role installs and configures Restic, and exposes a `target.yml` to allow downstream services to add directories to the backup target list.
- `selinux`: Just makes sure we are enforcing, and that the management utilities that Ansible requires are installed.
- `tailscale`: Installs and configures Tailscale. This role expects the variable `tailscale_auth_key` to be passed in from the command line as in the example above.
- `zrepl`: I use Zrepl for backups. This role installs and configures zrepl according to configuration passed in its parameters.
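For the `postfix` role's "arbitrary map of config options", an invocation might look something like the following. This is a sketch only: the variable name and the chosen options are illustrative assumptions, not the role's actual interface.

```yaml
# Hypothetical use of the postfix role. The variable name (postfix_config)
# and these option values are illustrative guesses, not taken from the repo.
- role: postfix
  vars:
    postfix_config:
      inet_interfaces: all
      smtpd_tls_security_level: may
```

The appeal of a flat key/value map is that each entry can be applied verbatim with `postconf`, so the role never needs to know about individual Postfix settings.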
Most of the services I run end up looking fairly similar. They create a user, create configuration files, create volumes, configure the service, and then add some reverse-proxy information to the Caddy config. These use the above helper roles with some templates. There are a few that deserve special mention:
- `coredns`: A small handy Go DNS server. Runs in HA across multiple hosts and is configured within Tailscale to respond to `max.plumbing` requests, which is a real life domain that I actually bought for this.
- `time_machine`: A *completely* working Linux Samba config for Time Machine with multiple macOS devices. As of early 2026, I have disabled this on my computer because it kept restarting the backup and eating all of my disk space. It works just fine on my partner's laptop.
- `user_host`: Sets up a directory in a user's home directory to host a static website. This powers my blog. Ensures that the home directory is traversable, and that SELinux and Systemd all play nicely.
- `vmagent`: On FreeBSD hosts, this role installs and runs vmagent as a bare-metal service. On Linux hosts, it runs in a container. It collects metrics from predetermined ports, and ships them out to a centralised service.
- `vector`: Vector runs on Orca, and is the centralised service mentioned above. It pushes to whatever backends I so choose.
## Services
- FreshRSS
- Forgejo
- Gigapipe
- Grafana
- FoundryVTT
- Linkwarden
- Nextcloud
- Pocket ID
- Woodpecker
## Not Quite Fully Automated, Actually
When you're managing a fleet of three nodes, there are certain decisions that land differently to how they might if you're managing a fleet orders of magnitude larger. For example:
- I have taken very few steps to automate OS installation. As computers I actually own, Orca and Oyster were installed manually. Plankton is treated as disposable, and so there is a cloud-init file in this repo. You can't have it, but it's not exotic, I promise.
- No file system stuff. My home server is complex! It's got a ZFS pool and an XFS boot volume. There are mounts and things in fstab that Ansible is not going anywhere near.
- No public DNS management. Even dealing with this manually I've screwed it up twice already. My private DNS is fully automated, though.
These steps should be performed before running anything listed in `main.yml`,
but after running things in `setup.yml`.
### Plankton
This section is easy. Nothing is changed. Everything is as it comes. You can set up a VPS somewhere, give it your SSH key, point this at it and you'll have a copy of a server that does what I need.
### Orca
This is where things get just a touch more complex. Primarily around storage. Orca has an NVMe SSD and two 12TB spinning HDDs. The SSD is managed exclusively by LVM.
#### ZFS
To configure my ZPool, I installed ZFS following the instructions in the OpenZFS docs, and then:
```shell
zpool create storage mirror sda sdb
```
Since initial setup, I did in fact build a `zfs` role that provisions datasets
on the pool with options. I didn't retrofit anything, but it handles
encryption the way I want, too.
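As a rough sketch, provisioning a dataset through that role might look like this. The parameter names are illustrative guesses at the role's interface, not its actual contract, and the ZFS property values are just examples.

```yaml
# Hypothetical invocation of the zfs helper role; zfs_dataset, zfs_mountpoint,
# and zfs_options are illustrative parameter names, not the role's real API.
- name: Provision an encrypted dataset for a service
  ansible.builtin.import_role:
    name: zfs
  vars:
    zfs_dataset: storage/myservice
    zfs_mountpoint: /srv/myservice
    zfs_options:
      encryption: aes-256-gcm
      keyformat: passphrase
      atime: "off"
```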
#### PostgreSQL
For PostgreSQL backups, Fedora gives us the default directory
`/var/lib/pgsql/backups`. I saw no reason to change this:
```shell
zfs create storage/pgsql-backups
zfs set mountpoint=/var/lib/pgsql/backups storage/pgsql-backups
zfs set atime=off storage/pgsql-backups
zfs set recordsize=1M storage/pgsql-backups
chown -R postgres:postgres /var/lib/pgsql/backups
chmod -R 770 /var/lib/pgsql/backups
```
After this, all ZFS stuff has been automated in Ansible.
In addition to this, I moved my PostgreSQL raw storage off my boot volume:
```shell
lvcreate -L 64G fedora -n pgsql-data
mkfs.xfs /dev/fedora/pgsql-data
```
Once I had this set up, I configured `/etc/fstab` to mount it automatically. To
do this, I found the UUID of the device by running `lsblk -f`. From there, I
added an entry to my fstab:

```
UUID={{uuid}} /var/lib/pgsql/data xfs defaults 0 2
```
After this, Systemd needs to update:
```shell
systemctl daemon-reload
systemctl start var-lib-pgsql-data.mount
```
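The mount unit's name is derived from the mount point: Systemd drops the leading slash and replaces the remaining slashes with dashes (`systemd-escape -p --suffix=mount <path>` prints the canonical name, and also handles special characters). For a plain path like this one, the mangling is simple enough to mimic by hand:

```shell
# Derive the .mount unit name for a simple path the same way systemd does:
# strip the leading "/", then replace each remaining "/" with "-".
# (For paths containing special characters, use systemd-escape instead.)
path="/var/lib/pgsql/data"
unit="$(printf '%s' "${path#/}" | tr '/' '-').mount"
echo "$unit"
# var-lib-pgsql-data.mount
```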
I manage database accounts and databases manually. I have no need for automation or
idempotence anywhere near that. To do this, run `psql` while logged in
as `postgres`. Then:
```sql
CREATE ROLE dbname LOGIN;
\password dbname
<enter password; save it>
CREATE DATABASE dbname OWNER dbname;
```
#### Clickhouse
As with PostgreSQL, Clickhouse gets a ZFS dataset. However, rather than just
being for backups, it's also part of a tiered storage arrangement. In addition
to the ZFS dataset, there is also a logical volume just for Clickhouse data.
These are mounted at `/var/lib/clickhouse/data/zfs` and
`/var/lib/clickhouse/data/fast` respectively.
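Wiring those two mounts into Clickhouse means declaring them as disks and grouping them into a storage policy in the server config (Clickhouse accepts YAML as well as XML for this). The sketch below is a minimal example of such a policy, assuming disk, volume, and policy names that are not taken from this repo.

```yaml
# Hypothetical Clickhouse storage configuration fragment. The two paths match
# the mounts above; the disk/volume/policy names are illustrative assumptions.
storage_configuration:
  disks:
    fast:
      path: /var/lib/clickhouse/data/fast/
    slow:
      path: /var/lib/clickhouse/data/zfs/
  policies:
    tiered:
      volumes:
        hot:
          disk: fast
        cold:
          disk: slow
```

A table then opts in with `SETTINGS storage_policy = 'tiered'`, after which parts can live on either tier.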
Also as with Postgres, accounts are managed directly:
```sql
CREATE DATABASE dbname;
CREATE USER dbname IDENTIFIED WITH sha256_password BY '<enter password>';
GRANT ALL ON "dbname".* TO dbname;
```