Fully Automated Luxury Servers

Ansible playbooks to set up my servers.

Topology

It's very simple. So simple in fact that I won't even bother with a diagram. There are two servers:

  • Orca: Home server that hosts my private services.
  • Plankton: A small VPS somewhere that runs public services, like my blog, code hosting, and email sending.

Both of these run a currently supported version of Fedora Server. The Ansible code contained within assumes this and is not tested for portability anywhere else.

Usage

The tasks in here should all be tagged, but otherwise these are fairly simple playbooks. I configure SSH to run on an alternative port and make everything available through Tailscale; that bootstrap work lives in setup.yml, a separate playbook run with a manual inventory passed in. Assuming you can connect to the host over SSH:

ansible-playbook \
    -JK \
    -i "198.55.100.71," \
    setup.yml \
    -e "tailscale_auth_key=tskey-auth-..."

Once set up, the main.yml playbook can take over. It is highly customised for my exact needs, but you might find something useful in the roles/ directory.
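
If you only want to poke at one role, the tags make that easy. A minimal sketch of a targeted run (the tag name here is an assumption for illustration, not taken from this repo's playbooks):

# Run only the tasks tagged 'caddy' from main.yml, prompting for the become password
# (the tag is illustrative; check the playbook for the real tag names)
ansible-playbook -i inventory.ini main.yml --tags caddy -K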

Roles

My hope is that all of the roles in roles/ are readable and straightforward enough, but here is a primer:

  • acme_proxy: Configures a virtual host in an existing Caddy installation to proxy only ACME requests down to a private (but accessible from the Caddy instance) host. This is the workhorse behind me getting real TLS certificates for my local services.
  • caddy: Installs Caddy with a really basic Caddyfile that just imports everything from conf.d. Later services add stuff here. This is not a container, this runs on the host.
  • coredns: Installs CoreDNS, a small and handy DNS server written in Go. It runs as a rootless Podman container (using the volume and service_user roles) and exposes itself on a custom port. It then uses the socat role to proxy that to :53, so the container never needs the capability to bind low ports itself.
  • firewall: Sets up Firewalld. Makes sure the default zone is public, adds some essential services.
  • forgejo: Installs and configures Forgejo as a rootless Podman container, exposing both HTTP and SSH on a custom port. It registers a virtual host with Caddy, and uses the socat role to proxy SSH from port 22.
  • nextcloud: Installs and configures Nextcloud as a rootless Podman container. This runs the fpm-alpine image built by the community, and vends its own virtual host for Caddy. It uses slirp4netns to allow host loopback, enabling connections to PostgreSQL and Postfix (for sending email).
  • opendkim: Configures DKIM for a Postfix installation. It's very strictly set up for my exact use case, but the components are all here for something a little more generic.
  • podman: Pretty much just installs Podman and any other required tools.
  • postfix: Installs and configures Postfix to run on the host.
  • postgresql: Installs and configures PostgreSQL (17) to run on the host. It initialises the database and configures backups with WAL archiving.
  • selinux: Ensures that SELinux is enforcing and that the package Ansible needs to manage it is installed.
  • service_user: Creates a user with a home directory and with linger enabled, so they can run Systemd units.
  • socat: Sets up a system-level Systemd service to run socat, allowing port forwarding from standard low-number ports to custom high-number ports (see the sketch after this list).
  • ssh_close: Closes the default SSH port 22 and closes it in the firewall. This should be run after ssh_open, in a separate Ansible playbook.
  • ssh_open: Opens a custom port for SSH access. This allows connections on the new port, which then allows ssh_close to be run.
  • tailscale: Installs and configures Tailscale. This role expects the variable tailscale_auth_key to be passed in from the command line as in the example above.
  • user_host: Sets up a directory in a user's home directory to host a static website. This powers my blog. Ensures that the home directory is traversable, and that SELinux and Systemd all play nicely.
  • volume: Sets up a directory to be used as a volume mount. This is more particular than the :z/:Z labels Podman gives you. It explicitly sets the types and contexts of all files, as well as ownership. This took me so long to get right.
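
For anyone unfamiliar with the socat trick mentioned above, it boils down to a single forwarding process per port. A minimal sketch (the port numbers are illustrative, not the values the roles actually template in):

# Accept connections on the privileged port and hand them to a rootless service on a high port
# (22 -> 2222 is illustrative; each role sets its own real ports)
socat TCP-LISTEN:22,fork,reuseaddr TCP:127.0.0.1:2222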

Not Quite Fully Automated, Actually

When you're managing a fleet of two nodes, there are certain decisions that fall differently to how they might if you're managing a fleet orders of magnitude larger. For example:

  • I have taken no steps to automate OS installation: there's no cloud-init, there are no pre-made images. I just installed Fedora on my home server and set up a VPS.
  • No file system stuff. My home server is complex! It's got a ZFS pool and an XFS boot volume. There are mounts and things in fstab that Ansible is not going anywhere near.
  • No public DNS management. Even dealing with this manually I've screwed it up twice already. My private DNS is fully automated, though.

These steps should be performed before running anything listed in main.yml, but after running things in setup.yml.

Plankton

This section is easy. Nothing is changed. Everything is as it comes. You can set up a VPS somewhere, give it your SSH key, point this at it and you'll have a copy of a server that does what I need.

Orca

This is where things get just a touch more complex, primarily around storage. Orca has an NVMe SSD and two 12TB spinning HDDs. The SSD is managed exclusively by LVM.

ZFS

To configure my ZPool, I installed ZFS following the instructions in the OpenZFS docs, and then:

zpool create storage mirror sda sdb
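
A quick sanity check afterwards, using nothing beyond the standard tooling:

# Confirm the mirror is online and healthy
zpool status storage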

PostgreSQL

For PostgreSQL backups, Fedora gives us the default directory /var/lib/pgsql/backups. I saw no reason to change this:

zfs create storage/pgsql-backups
zfs set mountpoint=/var/lib/pgsql/backups storage/pgsql-backups
zfs set atime=off storage/pgsql-backups
zfs set recordsize=1M storage/pgsql-backups
chown -R postgres:postgres /var/lib/pgsql/backups
chmod -R 770 /var/lib/pgsql/backups
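
To double-check the dataset afterwards (optional, and just a standard query, not something the playbooks run):

# Show the properties set above
zfs get mountpoint,atime,recordsize storage/pgsql-backups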

After this, all ZFS stuff has been automated in Ansible.

In addition to this, I moved my PostgreSQL raw storage off my boot volume:

lvcreate -L 64G fedora -n pgsql-data
mkfs.xfs /dev/fedora/pgsql-data

Once I had this set up, I configured /etc/fstab to mount it automatically. To do this, I found the UUID of the device by running lsblk -f. From there, I added an entry to my fstab:

UUID={{uuid}} /var/lib/pgsql/data     xfs     defaults        0 2
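
If you only want the UUID, blkid can print it directly (an alternative to lsblk -f, not what I originally ran):

# Print just the UUID of the new logical volume
blkid -s UUID -o value /dev/fedora/pgsql-data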

With the fstab entry in place, Systemd needs to pick up the new mount unit:

systemctl daemon-reload
systemctl start var-lib-pgsql-data.mount
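
To confirm the mount landed where expected (again, just a standard check):

# Should show the new logical volume mounted at /var/lib/pgsql/data
findmnt /var/lib/pgsql/data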

I manage database accounts and databases manually. I have no need for automation or idempotence anywhere near that. To do this, run psql while logged in as the postgres user. Then:

CREATE ROLE dbname LOGIN;
\password dbname
<enter password; save it>
CREATE DATABASE dbname OWNER dbname;
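
To confirm the new role can actually connect, something like the following works, assuming password authentication is enabled for TCP connections in pg_hba.conf (adjust for your setup):

# Connect over loopback as the new role; prompts for the password saved above
psql -h 127.0.0.1 -U dbname -d dbname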