# Homelab Nomad
My configuration for creating my home Nomad cluster and deploying services to it.
This repo is not intended as a set of general-purpose templates, but rather to fit my specific needs. That said, I have tried to make things as useful as possible for anyone wanting to use or modify it.
## Running

```bash
make all
```
## Design
Both Ansible and Terraform are used as part of this configuration. All hosts must be reachable over SSH before running any of it.
To begin, Ansible runs a playbook to set up the cluster. This includes installing Nomad, bootstrapping the cluster and ACLs, setting up NFS shares, creating Nomad Host Volumes, and setting up Wesher as a Wireguard mesh between hosts.
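For reference, host volumes are declared in each client's Nomad agent configuration. A minimal sketch, assuming a hypothetical volume name and path (the playbook presumably renders something equivalent on each host):

```hcl
client {
  enabled = true

  # Hypothetical volume; the path must already exist on the host.
  host_volume "example-data" {
    path      = "/srv/volumes/example"
    read_only = false
  }
}
```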
After this is complete, Nomad variables must be set so that services can be deployed and configured correctly. These are populated based on the sample file.
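Jobs can then read those variables at render time through Nomad's template runtime. A minimal sketch, assuming a hypothetical variable path and key:

```hcl
template {
  # Renders a secret from Nomad Variables into the task environment.
  # "nomad/jobs/example" and "db_password" are illustrative names.
  data = <<-EOF
    {{- with nomadVar "nomad/jobs/example" }}
    DB_PASSWORD={{ .db_password }}
    {{- end }}
  EOF

  destination = "secrets/app.env"
  env         = true
}
```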
Finally, the Terraform configuration can be applied, deploying all services to the cluster.
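Terraform submits job specs to the cluster through the official Nomad provider. A minimal sketch, with a hypothetical job file name:

```hcl
# Submits a Nomad job spec from a local file; "example.nomad" is illustrative.
resource "nomad_job" "example" {
  jobspec = file("${path.module}/example.nomad")
}
```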
The configuration of new services is intended to be as templated as possible, avoiding changes in multiple places. For example, most services are configured with a template that provides reverse proxy, DNS records, database tunnels, database bootstrapping, metrics scraping, and authentication. The only real exception is backups, which require a distinct job file for now.
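To illustrate the idea, here is a hypothetical Terraform module call for such a templated service; the module path, input names, and values are illustrative and do not reflect this repo's actual interface:

```hcl
# Hypothetical templated-service invocation: one block wires up the
# container, reverse proxy hostname, metrics scraping, and auth.
module "photoprism" {
  source = "./services/service"

  name         = "photoprism"
  image        = "photoprism/photoprism:latest"
  ingress_host = "photos.example.com"
  metrics_port = 2342
  use_postgres = true
  use_authelia = true
}
```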
## What does it do?
- Nomad cluster for scheduling and configuring all services
- Blocky DNS servers with integrated ad blocking; these also provide service discovery
- Prometheus with autodiscovery of service metrics (see the tag sketch after this list)
- Loki and Promtail aggregating logs
- Minitor for service availability checks
- Grafana providing dashboards, alerting, and log searching
- Photoprism for photo management
- Remote and shared volumes over NFS
- Authelia for OIDC and Proxy based authentication with 2FA
- Sonarr and Lidarr for multimedia management
- Automated block-based backups using Restic
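As an example of the metrics autodiscovery mentioned above, a Nomad service can expose scrape hints as tags. A sketch using a common tagging convention, which may not match this repo's exact tag names:

```hcl
service {
  name = "example"
  port = "http"

  # Hints that Prometheus relabeling rules can use to discover and
  # scrape this service; the tag keys here are a common convention.
  tags = [
    "prometheus.scrape=true",
    "prometheus.metrics-path=/metrics",
  ]
}
```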
## Step by step
- Update hosts in `ansible_playbooks/ansible_hosts.yml`
- Update `ansible_playbooks/setup-cluster.yml`
  - Update backup DNS server
  - Update NFS shares from NAS
  - Update volumes to make sure they are valid paths (see the job sketch after this list)
- Create `ansible_playbooks/vars/nomad_vars.yml` based on the sample file. TODO: This is quite specific and probably impossible without more documentation
- Run `make all`
- Update your network DNS settings to use the new servers' IP addresses
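For the volume step above, a job consumes a host volume roughly like this; the job, volume, and image names are hypothetical:

```hcl
job "example" {
  datacenters = ["dc1"]

  group "app" {
    # Claims the host volume declared in the client config.
    volume "data" {
      type      = "host"
      source    = "example-data"
      read_only = false
    }

    task "app" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }

      # Mounts the claimed volume into the task's filesystem.
      volume_mount {
        volume      = "data"
        destination = "/data"
      }
    }
  }
}
```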