Compare commits


62 Commits
lego ... main

Author SHA1 Message Date
IamTheFij bdfde48bec Add some more monitors to nomad minitor 2024-05-06 14:29:17 -07:00
IamTheFij 9af55580e7 Update diun config to read from task socket 2024-05-01 10:18:54 -07:00
IamTheFij b9c35bf18f Add ability to set task identities for service module 2024-05-01 10:18:24 -07:00
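A sketch of the task-identity pattern behind these two commits, as it appears in the lego job further down (both values are taken from that diff):

# Inject a workload identity into the task environment so the process
# can call the Nomad API over the task's injected unix socket.
identity {
  env = true
}

env = {
  NOMAD_ADDR = "unix:///secrets/api.sock"
}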
IamTheFij e7f740a2d9 Add languagetool server 2024-05-01 09:43:28 -07:00
IamTheFij 57efee14e9 Update Ansible inventory to split node roles
Splits servers and clients to their own groups so that plays can target
specific roles.

Previously, everything was "both", but I want to add another server for
recovery purposes without hosting containers on it.
2024-05-01 09:40:21 -07:00
IamTheFij c711c25737 Always use CF for dns when renewing lego certs
Makes it more resilient if my servers are down, but also cuts out a hop
because CF is the nameserver as well.
2024-04-27 19:33:10 -07:00
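The renewal job simply pins its resolver; a sketch of the network block from the lego jobspec further down:

network {
  dns {
    # Resolve via Cloudflare so renewals still work when the cluster's
    # own DNS is down; Cloudflare also hosts the zone, cutting a hop.
    servers = ["1.1.1.1", "1.0.0.1"]
  }
}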
IamTheFij 24122c2a3e Split fixers to their own groups
Allow them to deploy as different allocs on different hosts
2024-04-22 09:07:03 -07:00
IamTheFij 13121862ec Add new host on qnap nas 2024-04-22 09:06:33 -07:00
IamTheFij 28da3f425b Move nomad default interface to host vars 2024-04-22 09:06:11 -07:00
IamTheFij 2d59886378 Update diun to include ability to read nomad socket 2024-04-17 10:46:28 -07:00
IamTheFij da0f52dab3 Improve change detection for cluster bootstrap 2024-04-17 10:46:10 -07:00
IamTheFij beac302a53 Upgrade nomad to 1.7.6 2024-04-17 10:45:27 -07:00
IamTheFij 5edcb86e7e Remove traefik grafana dashboard
Now in data backups rather than git.
2024-03-26 14:56:14 -07:00
IamTheFij 3dcd4c44b3 Tune memory after reviewing grafana 2024-03-26 09:48:31 -07:00
IamTheFij e6653f6495 Migrate sonarr to postgresql
And increase postgresql memory to accommodate
2024-03-25 16:05:58 -07:00
IamTheFij a9a919b8f2 Increase priority for services with higher resources
Photoprism requires lots of mem and Sonarr a specific volume
2024-03-22 21:09:19 -07:00
IamTheFij cc66bfdbcb Update photoprism 2024-03-22 21:07:55 -07:00
IamTheFij b02050112e Tune some service memory 2024-03-22 21:07:07 -08:00
IamTheFij d5c2a0d185 Use default diun for syslogng 2024-03-22 21:05:53 -07:00
IamTheFij 6a3ae49d8e Update terraform modules 2024-03-11 22:02:07 -07:00
IamTheFij 75ee09d7e6 Remove bazarr
Plex does this automatically now
2024-02-20 10:13:40 -08:00
IamTheFij 8b90aa0d74 Add 1.1.1.1 dns back to blocky for better resilience 2024-02-20 10:10:41 -08:00
IamTheFij 62e120ce51 Add radarr 2024-02-20 10:09:48 -08:00
IamTheFij 5fb510202d Fix indent for Authelia rules 2024-02-20 10:05:25 -08:00
IamTheFij 64a085ef80 Restart failing services
Restart services that fail checks
2024-02-18 07:49:16 -08:00
IamTheFij f2f415aeac Fix traefik metrics 2024-02-18 07:47:31 -08:00
IamTheFij bb291b1f01 Move databases to their own tf files and improve first start 2024-02-13 12:05:55 -08:00
IamTheFij 056eac976c lldap: Make it work on first bootstrap
Can't use the job id for creating the variables and permissions because we end up
with circular dependencies. The job won't return until it's successful in Nomad, and it won't
start in Nomad without access to the variables.
2024-02-13 12:05:21 -08:00
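The workaround, condensed from the lldap Terraform diff further down, is to pin the ACL policy to a literal job ID instead of referencing the nomad_job resource, so the policy can exist before the job has ever become healthy:

resource "nomad_acl_policy" "lldap_ldap_secrets" {
  name        = "lldap-secrets-ldap"
  description = "Give access to LDAP secrets"
  rules_hcl   = <<EOH
namespace "default" {
  variables {
    path "secrets/ldap" {
      capabilities = ["read"]
    }
  }
}
EOH

  job_acl {
    # job_id = resource.nomad_job.lldap.id  # would be circular
    job_id = "lldap"
  }
}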
IamTheFij 198f96f3f7 Add back other traefik ports and metrics 2024-02-13 12:03:03 -08:00
IamTheFij 6b5adbdf39 Remove 404 block list 2024-02-13 12:02:35 -08:00
IamTheFij 77ef4b4167 Use quad9 encrypted dns 2024-02-13 12:02:14 -08:00
IamTheFij b35b8cecd5 Blocky: Remove mysql and redis configs from stunnel if server isn't found 2024-02-13 12:01:45 -08:00
IamTheFij b9dfeff6d8 Have blocky use router for upstream in nomad 2024-02-13 12:01:08 -08:00
IamTheFij 2ff954b4b5 Bump nomad 2024-02-13 12:00:43 -08:00
IamTheFij 2528dafcc6 Make nomad restart playbook more resilient 2024-02-13 12:00:24 -08:00
IamTheFij 0e168376b8 Add terraform destroy to makefile 2024-02-13 11:59:47 -08:00
IamTheFij a16dc204fe Run dummy backup more frequently to make graphs easier to read 2024-01-24 20:10:14 -08:00
IamTheFij 93d340c182 Make sure gitea ingress uses system wesher config
It was always using wesher
2024-01-23 12:09:59 -08:00
IamTheFij 37ee67b2e6 fix: Add job_id output to services
This should be earlier in history
2024-01-23 12:09:29 -08:00
IamTheFij 35dfeb3093 Add service healthchecks 2024-01-23 12:08:47 -08:00
IamTheFij 0a2eace3dd Fix lldap secrets 2024-01-23 12:07:42 -08:00
IamTheFij 6fe1b200f2 Update loki 2024-01-23 12:06:25 -08:00
IamTheFij c5d5ab42b8 Add some nomad actions for backups to test different formatting 2024-01-23 12:05:56 -08:00
IamTheFij efe7864cc9 Delay shutdowns of backup jobs to reduce killing those in progress 2024-01-23 12:05:20 -08:00
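The mechanism is a task-level shutdown_delay, added in the backup jobspec diff further down:

task "backup" {
  driver = "docker"

  # Hold the kill signal for up to five minutes on shutdown so an
  # in-flight restic run can finish instead of being interrupted.
  shutdown_delay = "5m"
}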
IamTheFij 9ba74ce698 Use return vars for service acl 2024-01-16 14:16:21 -08:00
IamTheFij 4fe3d46d5f Add external service acls for authelia 2024-01-16 14:15:56 -08:00
IamTheFij cf8bde7920 Add external traefik routes to nomad vars 2024-01-16 14:15:18 -08:00
IamTheFij bc87688f1a Move ldap secrets 2024-01-16 14:14:39 -08:00
IamTheFij 3491c1f679 Add refresh make target 2024-01-16 14:04:44 -08:00
IamTheFij 7b019e0787 Add auth to sonarr 2024-01-08 14:57:06 -08:00
IamTheFij 0f19e2433f Upgrade sonarr to version 4 2024-01-08 10:14:53 -08:00
IamTheFij c01d45c7a2 Upgrade grafana to version 10 2024-01-08 10:11:42 -08:00
IamTheFij d07afe2319 Update traffic routes to handle null IPs
E.g., 0.0.0.0 for blocked domains
2024-01-06 16:23:45 -08:00
IamTheFij b025e4a87e Add repo unlock via Nomad action to backups 2024-01-06 16:22:20 -08:00
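Nomad 1.7 actions make the unlock invokable on a running allocation; one of the variants from the backup jobspec diff further down:

action "unlockhc" {
  command = "/bin/resticscheduler"
  args    = ["-once", "-unlock", "all", "/local/node-jobs.hcl"]
}

It can then be run with something like "nomad action -job backup unlockhc" (that invocation is an assumption, not part of the diff).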
IamTheFij 9be16fef1f Upgrade traefik to 2.10 2024-01-04 13:25:10 -08:00
IamTheFij c26da678b3 Small traefik cleanup
Remove fallback DNS since we only care about internal DNS

Use loopback address for accessing Nomad UI
2024-01-04 13:24:49 -08:00
IamTheFij 6b9533ef71 Run traefik on multiple hosts 2024-01-04 13:24:15 -08:00
IamTheFij 0bd995ec2b Traefik: Use nomad vars for dynamic certs
Rather than having Traefik handle cert fetching, instead
it is delegated to a separate job so that multiple Traefik
instances can share certs
2024-01-04 10:55:49 -08:00
IamTheFij 0d340f3349 Periodic job to renew lego certs and store them in Nomad Variables
This will allow multiple instances of Traefik to serve certs.
2024-01-04 10:53:25 -08:00
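On the consuming side, each Traefik allocation renders the shared certificate straight out of Nomad Variables; a sketch from the traefik.nomad diff further down:

template {
  data        = <<EOF
{{- with nomadVar "secrets/certs/_lego/certificates/__thefij_rocks_crt" }}{{ .contents }}{{ end -}}
EOF
  destination = "${NOMAD_SECRETS_DIR}/certs/_.thefij.rocks.crt"
  change_mode = "noop"
}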
IamTheFij bcad131aa7 Use job id for lldap acls 2024-01-04 10:53:23 -08:00
IamTheFij cda2842f8f Switch to image containing stunnel
Rather than installing stunnel on container startup, use an image with
it pre-installed. This avoids DNS issues breaking the container on
startup.
2024-01-03 13:50:49 -08:00
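The swap, repeated across every stunnel sidecar in the diffs below, replaces a runtime "apk add stunnel" wrapper script with a prebuilt image:

config {
  # Before: image = "alpine:3.17" plus a start.sh running "apk add
  # stunnel", which failed whenever DNS was broken at container start.
  image = "iamthefij/stunnel:latest"
  args  = ["${NOMAD_TASK_DIR}/stunnel.conf"]
}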
IamTheFij 9544222961 Bump to 1.7.2 2023-12-29 20:47:58 -08:00
54 changed files with 1120 additions and 1381 deletions

View File

@ -132,7 +132,7 @@
"filename": "core/authelia.yml",
"hashed_secret": "a32b08d97b1615dc27f58b6b17f67624c04e2c4f",
"is_verified": false,
"line_number": 185,
"line_number": 189,
"is_secret": false
}
],
@ -187,5 +187,5 @@
}
]
},
"generated_at": "2023-08-24T20:00:24Z"
"generated_at": "2024-02-20T18:04:29Z"
}

View File

@ -2,39 +2,39 @@
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/nomad" {
version = "2.0.0"
version = "2.2.0"
hashes = [
"h1:lIHIxA6ZmfyTGL3J9YIddhxlfit4ipSS09BLxkwo6L0=",
"zh:09b897d64db293f9a904a4a0849b11ec1e3fff5c638f734d82ae36d8dc044b72",
"zh:435cc106799290f64078ec24b6c59cb32b33784d609088638ed32c6d12121199",
"zh:7073444bd064e8c4ec115ca7d9d7f030cc56795c0a83c27f6668bba519e6849a",
"h1:BAjqzVkuXxHtRKG+l9unaZJPk2kWZpSTCEcQPRcl2so=",
"zh:052f909d25121e93dc799290216292fca67943ccde12ba515068b838a6ff8c66",
"zh:20e29aeb9989f7a1e04bb4093817c7acc4e1e737bb21a3066f3ea46f2001feff",
"zh:2326d101ef427599b72cce30c0e0c1d18ae783f1a897c20f2319fbf54bab0a61",
"zh:3420cbe4fd19cdc96d715d0ae8e79c272608023a76033bbf582c30637f6d570f",
"zh:41ec570f87f578f1c57655e2e4fbdb9932d94cf92dc9cd11828cccedf36dd4a4",
"zh:5f90dcc58e3356ffead82ea211ecb4a2d7094d3c2fbd14ff85527c3652a595a2",
"zh:64aaa48609d2db868fcfd347490df0e12c6c3fcb8e4f12908c5d52b1a0adf73f",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:79d238c35d650d2d83a439716182da63f3b2767e72e4cbd0b69cb13d9b1aebfc",
"zh:7ef5f49344278fe0bbc5447424e6aa5425ff1821d010d944a444d7fa2c751acf",
"zh:92179091638c8ba03feef371c4361a790190f9955caea1fa59de2055c701a251",
"zh:a8a34398851761368eb8e7c171f24e55efa6e9fdbb5c455f6dec34dc17f631bc",
"zh:b38fd5338625ebace5a4a94cea1a28b11bd91995d834e318f47587cfaf6ec599",
"zh:b71b273a2aca7ad5f1e07c767b25b5a888881ba9ca93b30044ccc39c2937f03c",
"zh:cd14357e520e0f09fb25badfb4f2ee37d7741afdc3ed47c7bcf54c1683772543",
"zh:e05e025f4bb95138c3c8a75c636e97cd7cfd2fc1525b0c8bd097db8c5f02df6e",
"zh:86b4923e10e6ba407d1d2aab83740b702058e8b01460af4f5f0e4008f40e492c",
"zh:ae89dcba33097af33a306344d20e4e25181f15dcc1a860b42db5b7199a97c6a6",
"zh:ce56d68cdfba60891765e94f9c0bf69eddb985d44d97db9f91874bea027f08e2",
"zh:e993bcde5dbddaedf3331e3014ffab904f98ab0f5e8b5d6082b7ca5083e0a2f1",
]
}
provider "registry.terraform.io/hashicorp/random" {
version = "3.5.1"
version = "3.6.0"
hashes = [
"h1:VSnd9ZIPyfKHOObuQCaKfnjIHRtR7qTw19Rz8tJxm+k=",
"zh:04e3fbd610cb52c1017d282531364b9c53ef72b6bc533acb2a90671957324a64",
"zh:119197103301ebaf7efb91df8f0b6e0dd31e6ff943d231af35ee1831c599188d",
"zh:4d2b219d09abf3b1bb4df93d399ed156cadd61f44ad3baf5cf2954df2fba0831",
"zh:6130bdde527587bbe2dcaa7150363e96dbc5250ea20154176d82bc69df5d4ce3",
"zh:6cc326cd4000f724d3086ee05587e7710f032f94fc9af35e96a386a1c6f2214f",
"h1:R5Ucn26riKIEijcsiOMBR3uOAjuOMfI1x7XvH4P6B1w=",
"zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d",
"zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211",
"zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829",
"zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d",
"zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:b6d88e1d28cf2dfa24e9fdcc3efc77adcdc1c3c3b5c7ce503a423efbdd6de57b",
"zh:ba74c592622ecbcef9dc2a4d81ed321c4e44cddf7da799faa324da9bf52a22b2",
"zh:c7c5cde98fe4ef1143bd1b3ec5dc04baf0d4cc3ca2c5c7d40d17c0e9b2076865",
"zh:dac4bad52c940cd0dfc27893507c1e92393846b024c5a9db159a93c534a3da03",
"zh:de8febe2a2acd9ac454b844a4106ed295ae9520ef54dc8ed2faf29f12716b602",
"zh:eab0d0495e7e711cca367f7d4df6e322e6c562fc52151ec931176115b83ed014",
"zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17",
"zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21",
"zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839",
"zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0",
"zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c",
"zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e",
]
}

View File

@ -87,6 +87,16 @@ apply:
		-auto-approve \
		-var "nomad_secret_id=$(shell jq -r .SecretID nomad_bootstrap.json)" \
.PHONY: refresh
refresh:
	@terraform refresh \
		-var "nomad_secret_id=$(shell jq -r .SecretID nomad_bootstrap.json)" \
.PHONY: destroy
destroy:
	@terraform destroy \
		-var "nomad_secret_id=$(shell jq -r .SecretID nomad_bootstrap.json)" \
.PHONY: clean
clean:
	env VIRTUAL_ENV=$(VENV) $(VENV)/bin/ansible-playbook -vv \

View File

@ -1,59 +1,72 @@
---
all:
children:
servers:
hosts:
n1.thefij:
nomad_node_role: both
nomad_reserved_memory: 1024
# nomad_meta:
# hw_transcode.device: /dev/dri
# hw_transcode.type: intel
nfs_mounts:
- src: 10.50.250.2:/srv/volumes
path: /srv/volumes/moxy
opts: proto=tcp,rw
nomad_unique_host_volumes:
- name: mysql-data
path: /srv/volumes/mysql
owner: "999"
group: "100"
mode: "0755"
read_only: false
- name: postgres-data
path: /srv/volumes/postgres
owner: "999"
group: "999"
mode: "0755"
read_only: false
n2.thefij:
nomad_node_role: both
nomad_node_class: ingress
nomad_reserved_memory: 1024
nfs_mounts:
- src: 10.50.250.2:/srv/volumes
path: /srv/volumes/moxy
opts: proto=tcp,rw
nomad_unique_host_volumes:
- name: nextcloud-data
path: /srv/volumes/nextcloud
owner: "root"
group: "bin"
mode: "0755"
read_only: false
- name: sonarr-data
path: /srv/volumes/sonarr
owner: "root"
group: "bin"
mode: "0755"
read_only: false
pi4:
nomad_node_role: both
nomad_reserved_memory: 512
nomad_meta:
hw_transcode.device: /dev/video11
hw_transcode.type: raspberry
hosts:
n1.thefij:
nomad_node_class: ingress
nomad_reserved_memory: 1024
# nomad_meta:
# hw_transcode.device: /dev/dri
# hw_transcode.type: intel
nfs_mounts:
- src: 10.50.250.2:/srv/volumes
path: /srv/volumes/moxy
opts: proto=tcp,rw
nomad_unique_host_volumes:
- name: mysql-data
path: /srv/volumes/mysql
owner: "999"
group: "100"
mode: "0755"
read_only: false
- name: postgres-data
path: /srv/volumes/postgres
owner: "999"
group: "999"
mode: "0755"
read_only: false
n2.thefij:
nomad_node_class: ingress
nomad_reserved_memory: 1024
nfs_mounts:
- src: 10.50.250.2:/srv/volumes
path: /srv/volumes/moxy
opts: proto=tcp,rw
nomad_unique_host_volumes:
- name: nextcloud-data
path: /srv/volumes/nextcloud
owner: "root"
group: "bin"
mode: "0755"
read_only: false
pi4:
nomad_node_class: ingress
nomad_reserved_memory: 512
nomad_meta:
hw_transcode.device: /dev/video11
hw_transcode.type: raspberry
qnomad.thefij:
ansible_host: 192.168.2.234
nomad_reserved_memory: 1024
# This VM uses a non-standard interface
nomad_network_interface: ens3
nomad_instances:
children:
servers: {}
nomad_instances:
vars:
nomad_network_interface: eth0
children:
nomad_servers: {}
nomad_clients: {}
nomad_servers:
hosts:
nonopi.thefij:
ansible_host: 192.168.2.170
n1.thefij: {}
n2.thefij: {}
pi4: {}
qnomad.thefij: {}
nomad_clients:
hosts:
n1.thefij: {}
n2.thefij: {}
pi4: {}
qnomad.thefij: {}

View File

@ -14,8 +14,14 @@
state: restarted
become: true
- name: Start Docker
systemd:
name: docker
state: started
become: true
- name: Start Nomad
systemd:
name: nomad
state: stopped
state: started
become: true

View File

@ -1,6 +1,6 @@
---
- name: Recover Nomad
hosts: nomad_instances
hosts: nomad_servers
any_errors_fatal: true
tasks:

View File

@ -14,7 +14,7 @@
line: "nameserver {{ non_nomad_dns }}"
- name: Install Docker
hosts: nomad_instances
hosts: nomad_clients
become: true
vars:
docker_architecture_map:
@ -44,7 +44,7 @@
# state: present
- name: Create NFS mounts
hosts: nomad_instances
hosts: nomad_clients
become: true
vars:
shared_nfs_mounts:
@ -112,9 +112,15 @@
- name: nzbget-config
path: /srv/volumes/nas-container/nzbget
read_only: false
- name: sonarr-config
path: /srv/volumes/nas-container/sonarr
read_only: false
- name: lidarr-config
path: /srv/volumes/nas-container/lidarr
read_only: false
- name: radarr-config
path: /srv/volumes/nas-container/radarr
read_only: false
- name: bazarr-config
path: /srv/volumes/nas-container/bazarr
read_only: false
@ -131,9 +137,10 @@
roles:
- name: ansible-nomad
vars:
nomad_version: "1.7.1-1"
nomad_version: "1.7.6-1"
nomad_install_upgrade: true
nomad_allow_purge_config: true
nomad_node_role: "{% if 'nomad_clients' in group_names %}{% if 'nomad_servers' in group_names %}both{% else %}client{% endif %}{% else %}server{% endif %}"
# Where nomad gets installed to
nomad_bin_dir: /usr/bin
@ -187,7 +194,8 @@
nomad_bind_address: 0.0.0.0
# Default interface for binding tasks
nomad_network_interface: eth0
# This is now set at the inventory level
# nomad_network_interface: eth0
# Create networks for binding task ports
nomad_host_networks:
@ -206,7 +214,7 @@
enabled: true
- name: Bootstrap Nomad ACLs and scheduler
hosts: nomad_instances
hosts: nomad_servers
tasks:
- name: Start Nomad
@ -236,6 +244,7 @@
run_once: true
ignore_errors: true
register: bootstrap_result
changed_when: bootstrap_result is succeeded
- name: Save bootstrap result
copy:
@ -267,13 +276,15 @@
- list
environment:
NOMAD_TOKEN: "{{ read_secretid.stdout }}"
run_once: true
register: policies
run_once: true
changed_when: false
- name: Copy policy
copy:
src: ../acls/nomad-anon-policy.hcl
dest: /tmp/anonymous.policy.hcl
delegate_to: "{{ play_hosts[0] }}"
run_once: true
register: anon_policy
@ -293,6 +304,18 @@
delegate_to: "{{ play_hosts[0] }}"
run_once: true
- name: Read scheduler config
command:
argv:
- nomad
- operator
- scheduler
- get-config
- -json
run_once: true
register: scheduler_config
changed_when: false
- name: Enable service scheduler preemption
command:
argv:
@ -300,12 +323,24 @@
- operator
- scheduler
- set-config
- -preempt-system-scheduler=true
- -preempt-service-scheduler=true
environment:
NOMAD_TOKEN: "{{ read_secretid.stdout }}"
delegate_to: "{{ play_hosts[0] }}"
run_once: true
when: (scheduler_config.stdout | from_json)["SchedulerConfig"]["PreemptionConfig"]["ServiceSchedulerEnabled"] is false
- name: Enable system scheduler preemption
command:
argv:
- nomad
- operator
- scheduler
- set-config
- -preempt-system-scheduler=true
environment:
NOMAD_TOKEN: "{{ read_secretid.stdout }}"
run_once: true
when: (scheduler_config.stdout | from_json)["SchedulerConfig"]["PreemptionConfig"]["SystemSchedulerEnabled"] is false
# - name: Set up Nomad backend and roles in Vault
# community.general.terraform:

View File

@ -9,8 +9,6 @@ nomad/jobs/authelia:
db_user: VALUE
email_sender: VALUE
jwt_secret: VALUE
lldap_admin_password: VALUE
lldap_admin_user: VALUE
oidc_clients: VALUE
oidc_hmac_secret: VALUE
oidc_issuer_certificate_chain: VALUE
@ -92,17 +90,15 @@ nomad/jobs/immich:
db_name: VALUE
db_pass: VALUE
db_user: VALUE
nomad/jobs/ipdvr/radarr:
db_pass: VALUE
db_user: VALUE
nomad/jobs/lego:
acme_email: VALUE
domain_lego_dns: VALUE
usersfile: VALUE
nomad/jobs/lidarr:
db_name: VALUE
db_pass: VALUE
db_user: VALUE
nomad/jobs/lldap:
admin_email: VALUE
admin_password: VALUE
admin_user: VALUE
db_name: VALUE
db_pass: VALUE
db_user: VALUE
@ -123,21 +119,32 @@ nomad/jobs/photoprism:
nomad/jobs/postgres-server:
superuser: VALUE
superuser_pass: VALUE
nomad/jobs/radarr:
db_name: VALUE
db_pass: VALUE
db_user: VALUE
nomad/jobs/redis-authelia:
allowed_psks: VALUE
nomad/jobs/redis-blocky:
allowed_psks: VALUE
nomad/jobs/rediscommander:
redis_stunnel_psk: VALUE
nomad/jobs/sonarr:
db_name: VALUE
db_pass: VALUE
db_user: VALUE
nomad/jobs/traefik:
acme_email: VALUE
domain_lego_dns: VALUE
external: VALUE
usersfile: VALUE
nomad/jobs/unifi-traffic-route-ips:
unifi_password: VALUE
unifi_username: VALUE
nomad/oidc:
secret: VALUE
secrets/ldap:
admin_email: VALUE
admin_password: VALUE
admin_user: VALUE
secrets/mysql:
mysql_root_password: VALUE
secrets/postgres:

View File

@ -62,6 +62,8 @@ job "backup%{ if batch_node != null }-oneoff-${batch_node}%{ endif }" {
task "backup" {
driver = "docker"
shutdown_delay = "5m"
volume_mount {
volume = "all-volumes"
destination = "/data"
@ -69,7 +71,7 @@ job "backup%{ if batch_node != null }-oneoff-${batch_node}%{ endif }" {
}
config {
image = "iamthefij/resticscheduler:0.3.1"
image = "iamthefij/resticscheduler:0.4.0"
ports = ["metrics"]
args = [
%{ if batch_node != null ~}
@ -85,6 +87,21 @@ job "backup%{ if batch_node != null }-oneoff-${batch_node}%{ endif }" {
]
}
action "unlockenv" {
command = "sh"
args = ["-c", "/bin/resticscheduler -once -unlock all $${NOMAD_TASK_DIR}/node-jobs.hcl"]
}
action "unlocktmpl" {
command = "/bin/resticscheduler"
args = ["-once", "-unlock", "all", "{{ env 'NOMAD_TASK_DIR' }}/node-jobs.hcl"]
}
action "unlockhc" {
command = "/bin/resticscheduler"
args = ["-once", "-unlock", "all", "/local/node-jobs.hcl"]
}
env = {
RCLONE_CHECKERS = "2"
RCLONE_TRANSFERS = "2"
@ -139,7 +156,7 @@ ${file("${module_path}/${job_file}")}
# Dummy job to keep task healthy on node without any stateful services
job "Dummy" {
schedule = "0 0 1 1 0"
schedule = "@daily"
config {
repo = "/local/dummy-repo"
@ -173,8 +190,8 @@ job "Dummy" {
}
config {
image = "alpine:3.17"
args = ["/bin/sh", "$${NOMAD_TASK_DIR}/start.sh"]
image = "iamthefij/stunnel:latest"
args = ["$${NOMAD_TASK_DIR}/stunnel.conf"]
}
resources {
@ -182,15 +199,6 @@ job "Dummy" {
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel {{ env "NOMAD_TASK_DIR" }}/stunnel.conf
EOF
destination = "$${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no

backups/jobs/radarr.hcl (new file, 64 lines)
View File

@ -0,0 +1,64 @@
job "radarr" {
schedule = "@daily"
config {
repo = "s3://backups-minio.agnosticfront.thefij:8443/nomad/radarr"
passphrase = env("BACKUP_PASSPHRASE")
options {
InsecureTls = true
}
}
task "Backup main database" {
postgres "Backup database" {
hostname = env("POSTGRES_HOST")
port = env("POSTGRES_PORT")
username = env("POSTGRES_USER")
password = env("POSTGRES_PASSWORD")
database = "radarr"
no_tablespaces = true
dump_to = "/data/nas-container/radarr/Backups/dump-radarr.sql"
}
}
task "Backup logs database" {
postgres "Backup database" {
hostname = env("POSTGRES_HOST")
port = env("POSTGRES_PORT")
username = env("POSTGRES_USER")
password = env("POSTGRES_PASSWORD")
database = "radarr-logs"
no_tablespaces = true
dump_to = "/data/nas-container/radarr/Backups/dump-radarr-logs.sql"
}
}
backup {
paths = ["/data/nas-container/radarr"]
backup_opts {
Exclude = [
"radarr_backup_*.zip",
"/data/nas-container/radarr/MediaCover",
"/data/nas-container/radarr/logs",
]
Host = "nomad"
}
restore_opts {
Host = ["nomad"]
# Because path is absolute
Target = "/"
}
}
forget {
KeepLast = 2
KeepDaily = 30
KeepWeekly = 8
KeepMonthly = 6
KeepYearly = 2
Prune = true
}
}

View File

@ -11,25 +11,37 @@ job "sonarr" {
}
task "Backup main database" {
sqlite "Backup database" {
path = "/data/sonarr/sonarr.db"
dump_to = "/data/sonarr/Backups/sonarr.db.bak"
postgres "Backup database" {
hostname = env("POSTGRES_HOST")
port = env("POSTGRES_PORT")
username = env("POSTGRES_USER")
password = env("POSTGRES_PASSWORD")
database = "sonarr"
no_tablespaces = true
dump_to = "/data/nas-container/sonarr/Backups/dump-sonarr.sql"
}
}
task "Backup logs database" {
sqlite "Backup database" {
path = "/data/sonarr/logs.db"
dump_to = "/data/sonarr/Backups/logs.db.bak"
postgres "Backup database" {
hostname = env("POSTGRES_HOST")
port = env("POSTGRES_PORT")
username = env("POSTGRES_USER")
password = env("POSTGRES_PASSWORD")
database = "sonarr-logs"
no_tablespaces = true
dump_to = "/data/nas-container/sonarr/Backups/dump-sonarr-logs.sql"
}
}
backup {
paths = ["/data/sonarr"]
paths = ["/data/nas-container/sonarr"]
backup_opts {
Exclude = [
"sonarr_backup_*.zip",
"/data/nas-container/sonarr/MediaCover",
"/data/nas-container/sonarr/logs",
"*.db",
"*.db-shm",
"*.db-wal",

View File

@ -2,39 +2,39 @@
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/nomad" {
version = "2.0.0"
version = "2.1.1"
hashes = [
"h1:lIHIxA6ZmfyTGL3J9YIddhxlfit4ipSS09BLxkwo6L0=",
"zh:09b897d64db293f9a904a4a0849b11ec1e3fff5c638f734d82ae36d8dc044b72",
"zh:435cc106799290f64078ec24b6c59cb32b33784d609088638ed32c6d12121199",
"zh:7073444bd064e8c4ec115ca7d9d7f030cc56795c0a83c27f6668bba519e6849a",
"h1:liQBgBXfQEYmwpoGZUfSsu0U0t/nhvuRZbMhaMz7wcQ=",
"zh:28bc6922e8a21334568410760150d9d413d7b190d60b5f0b4aab2f4ef52efeeb",
"zh:2d4283740e92ce1403875486cd5ff2c8acf9df28c190873ab4d769ce37db10c1",
"zh:457e16d70075eae714a7df249d3ba42c2176f19b6750650152c56f33959028d9",
"zh:49ee88371e355c00971eefee6b5001392431b47b3e760a5c649dda76f59fb8fa",
"zh:614ad3bf07155ed8a5ced41dafb09042afbd1868490a379654b3e970def8e33d",
"zh:75be7199d76987e7549e1f27439922973d1bf27353b44a593bfbbc2e3b9f698f",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:79d238c35d650d2d83a439716182da63f3b2767e72e4cbd0b69cb13d9b1aebfc",
"zh:7ef5f49344278fe0bbc5447424e6aa5425ff1821d010d944a444d7fa2c751acf",
"zh:92179091638c8ba03feef371c4361a790190f9955caea1fa59de2055c701a251",
"zh:a8a34398851761368eb8e7c171f24e55efa6e9fdbb5c455f6dec34dc17f631bc",
"zh:b38fd5338625ebace5a4a94cea1a28b11bd91995d834e318f47587cfaf6ec599",
"zh:b71b273a2aca7ad5f1e07c767b25b5a888881ba9ca93b30044ccc39c2937f03c",
"zh:cd14357e520e0f09fb25badfb4f2ee37d7741afdc3ed47c7bcf54c1683772543",
"zh:e05e025f4bb95138c3c8a75c636e97cd7cfd2fc1525b0c8bd097db8c5f02df6e",
"zh:888e14a24410d56b37212fbea373a3e0401d0ff8f8e4f4dd00ba8b29de9fed39",
"zh:aa261925e8b152636a0886b3a2149864707632d836e98a88dacea6cfb6302082",
"zh:ac10cefb4064b3bb63d4b0379624a416a45acf778eac0004466f726ead686196",
"zh:b1a3c8b4d5b2dc9b510eac5e9e02665582862c24eb819ab74f44d3d880246d4f",
"zh:c552e2fe5670b6d3ad9a5faf78e3a27197eeedbe2b13928d2c491fa509bc47c7",
]
}
provider "registry.terraform.io/hashicorp/random" {
version = "3.5.1"
version = "3.6.0"
hashes = [
"h1:VSnd9ZIPyfKHOObuQCaKfnjIHRtR7qTw19Rz8tJxm+k=",
"zh:04e3fbd610cb52c1017d282531364b9c53ef72b6bc533acb2a90671957324a64",
"zh:119197103301ebaf7efb91df8f0b6e0dd31e6ff943d231af35ee1831c599188d",
"zh:4d2b219d09abf3b1bb4df93d399ed156cadd61f44ad3baf5cf2954df2fba0831",
"zh:6130bdde527587bbe2dcaa7150363e96dbc5250ea20154176d82bc69df5d4ce3",
"zh:6cc326cd4000f724d3086ee05587e7710f032f94fc9af35e96a386a1c6f2214f",
"h1:R5Ucn26riKIEijcsiOMBR3uOAjuOMfI1x7XvH4P6B1w=",
"zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d",
"zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211",
"zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829",
"zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d",
"zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:b6d88e1d28cf2dfa24e9fdcc3efc77adcdc1c3c3b5c7ce503a423efbdd6de57b",
"zh:ba74c592622ecbcef9dc2a4d81ed321c4e44cddf7da799faa324da9bf52a22b2",
"zh:c7c5cde98fe4ef1143bd1b3ec5dc04baf0d4cc3ca2c5c7d40d17c0e9b2076865",
"zh:dac4bad52c940cd0dfc27893507c1e92393846b024c5a9db159a93c534a3da03",
"zh:de8febe2a2acd9ac454b844a4106ed295ae9520ef54dc8ed2faf29f12716b602",
"zh:eab0d0495e7e711cca367f7d4df6e322e6c562fc52151ec931176115b83ed014",
"zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17",
"zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21",
"zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839",
"zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0",
"zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c",
"zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e",
]
}

View File

@ -49,7 +49,7 @@ module "authelia" {
mount = false
},
{
data = "{{ with nomadVar \"nomad/jobs/authelia\" }}{{ .lldap_admin_password }}{{ end }}"
data = "{{ with nomadVar \"secrets/ldap\" }}{{ .admin_password }}{{ end }}"
dest_prefix = "$${NOMAD_SECRETS_DIR}"
dest = "ldap_password.txt"
mount = false
@ -105,6 +105,43 @@ module "authelia" {
]
}
resource "nomad_acl_policy" "authelia" {
name = "authelia"
description = "Give access to shared authelia variables"
rules_hcl = <<EOH
namespace "default" {
variables {
path "authelia/*" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
job_id = module.authelia.job_id
}
}
# Give access to ldap secrets
resource "nomad_acl_policy" "authelia_ldap_secrets" {
name = "authelia-secrets-ldap"
description = "Give access to LDAP secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/ldap" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
job_id = module.authelia.job_id
}
}
resource "nomad_acl_auth_method" "nomad_authelia" {
name = "authelia"
type = "OIDC"

View File

@ -89,8 +89,8 @@ authentication_backend:
groups_filter: (member={dn})
## The username and password of the admin user.
{{ with nomadVar "nomad/jobs/authelia" }}
user: uid={{ .lldap_admin_user }},ou=people,{{ with nomadVar "nomad/jobs" }}{{ .ldap_base_dn }}{{ end }}
{{ with nomadVar "secrets/ldap" }}
user: uid={{ .admin_user }},ou=people,{{ with nomadVar "nomad/jobs" }}{{ .ldap_base_dn }}{{ end }}
{{ end }}
# password set using secrets file
# password: <secret>
@ -151,6 +151,10 @@ access_control:
networks: 192.168.5.0/24
rules:
{{ range nomadVarList "authelia/access_control/service_rules" }}{{ with nomadVar .Path }}
- domain: '{{ .name }}.{{ with nomadVar "nomad/jobs" }}{{ .base_hostname }}{{ end }}'
{{ .rule.Value | indent 6 }}
{{ end }}{{ end }}
## Rules applied to everyone
- domain: '*.{{ with nomadVar "nomad/jobs" }}{{ .base_hostname }}{{ end }}'
networks:

View File

@ -32,7 +32,9 @@ job "blocky" {
dns {
# Set explicit DNS servers because tasks, by default, use this task
servers = ["1.1.1.1", "1.0.0.1"]
servers = [
"192.168.2.1",
]
}
}
@ -74,8 +76,8 @@ job "blocky" {
resources {
cpu = 50
memory = 50
memory_max = 100
memory = 75
memory_max = 150
}
template {
@ -116,9 +118,9 @@ job "blocky" {
}
config {
image = "alpine:3.17"
image = "iamthefij/stunnel:latest"
args = ["$${NOMAD_TASK_DIR}/stunnel.conf"]
ports = ["tls"]
args = ["/bin/sh", "$${NOMAD_TASK_DIR}/start.sh"]
}
resources {
@ -126,36 +128,27 @@ job "blocky" {
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel {{ env "NOMAD_TASK_DIR" }}/stunnel.conf
EOF
destination = "$${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no
foreground = yes
delay = yes
{{ range nomadService 1 (env "NOMAD_ALLOC_ID") "mysql-tls" -}}
[mysql_client]
client = yes
accept = 127.0.0.1:3306
{{ range nomadService 1 (env "NOMAD_ALLOC_ID") "mysql-tls" -}}
connect = {{ .Address }}:{{ .Port }}
{{- end }}
PSKsecrets = {{ env "NOMAD_SECRETS_DIR" }}/mysql_stunnel_psk.txt
{{- end }}
{{ range nomadService 1 (env "NOMAD_ALLOC_ID") "redis-blocky" -}}
[redis_client]
client = yes
accept = 127.0.0.1:6379
{{ range nomadService 1 (env "NOMAD_ALLOC_ID") "redis-blocky" -}}
connect = {{ .Address }}:{{ .Port }}
{{- end }}
PSKsecrets = {{ env "NOMAD_SECRETS_DIR" }}/stunnel_psk.txt
{{- end }}
EOF
destination = "$${NOMAD_TASK_DIR}/stunnel.conf"
}

View File

@ -5,12 +5,24 @@ ports:
bootstrapDns:
- upstream: 1.1.1.1
- upstream: 1.0.0.1
- upstream: 9.9.9.9
- upstream: 149.112.112.112
upstreams:
groups:
default:
- https://dns.quad9.net/dns-query
- tcp-tls:dns.quad9.net
- https://one.one.one.one/dns-query
- tcp-tls:one.one.one.one
cloudflare:
- 1.1.1.1
- 1.0.0.1
- 2606:4700:4700::1111
- 2606:4700:4700::1001
- https://one.one.one.one/dns-query
- tcp-tls:one.one.one.one
quad9:
- 9.9.9.9
- 149.112.112.112
@ -18,6 +30,13 @@ upstreams:
- 2620:fe::9
- https://dns.quad9.net/dns-query
- tcp-tls:dns.quad9.net
quad9-secured:
- 9.9.9.11
- 149.112.112.11
- 2620:fe::11
- 2620:fe::fe:11
- https://dns11.quad9.net/dns-query
- tcp-tls:dns11.quad9.net
quad9-unsecured:
- 9.9.9.10
- 149.112.112.10
@ -53,7 +72,7 @@ blocking:
- http://sysctl.org/cameleon/hosts
- https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt
- https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt
- https://hosts-file.net/ad_servers.txt
# - https://hosts-file.net/ad_servers.txt
smarttv:
- https://perflyst.github.io/PiHoleBlocklist/SmartTV.txt
# - https://perflyst.github.io/PiHoleBlocklist/regex.list

View File

@ -40,8 +40,8 @@ job "grafana" {
}
config {
image = "alpine:3.17"
args = ["/bin/sh", "$${NOMAD_TASK_DIR}/start.sh"]
image = "iamthefij/stunnel:latest"
args = ["$${NOMAD_TASK_DIR}/stunnel.conf"]
}
resources {
@ -49,15 +49,6 @@ job "grafana" {
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel {{ env "NOMAD_TASK_DIR" }}/stunnel.conf
EOF
destination = "$${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no
@ -142,7 +133,7 @@ SELECT 'NOOP';
driver = "docker"
config {
image = "grafana/grafana:9.4.2"
image = "grafana/grafana:10.0.10"
args = ["--config", "$${NOMAD_ALLOC_DIR}/config/grafana.ini"]
ports = ["web"]
}

View File

@ -270,6 +270,10 @@ api_url = https://authelia.{{ with nomadVar "nomad/jobs" }}{{ .base_hostname }}{
login_attribute_path = preferred_username
groups_attribute_path = groups
name_attribute_path = name
# Role attribute path is not working
role_attribute_path = contains(groups[*], 'admin') && 'Admin' || contains(groups[*], 'grafana-admin') && 'Admin' || contains(groups[*], 'grafana-editor') && 'Editor' || contains(groups[*], 'developer') && 'Editor'
allow_assign_grafana_admin = true
skip_org_role_sync = true
use_pkce = true
;team_ids =

View File

@ -1,783 +0,0 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "7.5.5"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "panel",
"id": "piechart",
"name": "Pie chart v2",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "singlestat",
"name": "Singlestat",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"description": "Traefik dashboard prometheus",
"editable": true,
"gnetId": 4475,
"graphTooltip": 0,
"id": null,
"iteration": 1620932097756,
"links": [],
"panels": [
{
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 10,
"title": "$backend stats",
"type": "row"
},
{
"cacheTimeout": null,
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"decimals": 0,
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 1
},
"id": 2,
"interval": null,
"links": [],
"maxDataPoints": 3,
"options": {
"displayLabels": [],
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"values": [
"value",
"percent"
]
},
"pieType": "pie",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"text": {}
},
"targets": [
{
"exemplar": true,
"expr": "traefik_service_requests_total{service=\"$service\"}",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{method}} : {{code}}",
"refId": "A"
}
],
"title": "$service return code",
"type": "piechart"
},
{
"cacheTimeout": null,
"colorBackground": false,
"colorValue": false,
"colors": [
"#299c46",
"rgba(237, 129, 40, 0.89)",
"#d44a3a"
],
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {},
"overrides": []
},
"format": "ms",
"gauge": {
"maxValue": 100,
"minValue": 0,
"show": false,
"thresholdLabels": false,
"thresholdMarkers": true
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 1
},
"id": 4,
"interval": null,
"links": [],
"mappingType": 1,
"mappingTypes": [
{
"name": "value to text",
"value": 1
},
{
"name": "range to text",
"value": 2
}
],
"maxDataPoints": 100,
"nullPointMode": "connected",
"nullText": null,
"postfix": "",
"postfixFontSize": "50%",
"prefix": "",
"prefixFontSize": "50%",
"rangeMaps": [
{
"from": "null",
"text": "N/A",
"to": "null"
}
],
"sparkline": {
"fillColor": "rgba(31, 118, 189, 0.18)",
"full": false,
"lineColor": "rgb(31, 120, 193)",
"show": true
},
"tableColumn": "",
"targets": [
{
"exemplar": true,
"expr": "sum(traefik_service_request_duration_seconds_sum{service=\"$service\"}) / sum(traefik_service_requests_total{service=\"$service\"}) * 1000",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "",
"refId": "A"
}
],
"thresholds": "",
"title": "$service response time",
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
{
"op": "=",
"text": "N/A",
"value": "null"
}
],
"valueName": "avg"
},
{
"aliasColors": {},
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 8
},
"hiddenSeries": false,
"id": 3,
"legend": {
"alignAsTable": true,
"avg": true,
"current": false,
"max": true,
"min": true,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": false,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.5.5",
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"exemplar": true,
"expr": "sum(rate(traefik_service_requests_total{service=\"$service\"}[5m]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "Total requests $service",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Total requests over 5min $service",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"collapsed": false,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 15
},
"id": 12,
"panels": [],
"title": "Global stats",
"type": "row"
},
{
"aliasColors": {},
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 16
},
"hiddenSeries": false,
"id": 5,
"legend": {
"alignAsTable": true,
"avg": false,
"current": true,
"max": true,
"min": true,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": false,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.5.5",
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "rate(traefik_entrypoint_requests_total{entrypoint=~\"$entrypoint\",code=\"200\"}[5m])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{method}} : {{code}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Status code 200 over 5min",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 16
},
"hiddenSeries": false,
"id": 6,
"legend": {
"alignAsTable": true,
"avg": false,
"current": true,
"max": true,
"min": true,
"rightSide": true,
"show": true,
"total": false,
"values": true
},
"lines": false,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.5.5",
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "rate(traefik_entrypoint_requests_total{entrypoint=~\"$entrypoint\",code!=\"200\"}[5m])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{ method }} : {{code}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Others status code over 5min",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"cacheTimeout": null,
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"decimals": 0,
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 23
},
"id": 7,
"interval": null,
"links": [],
"maxDataPoints": 3,
"options": {
"displayLabels": [],
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"values": [
"value"
]
},
"pieType": "pie",
"reduceOptions": {
"calcs": [
"sum"
],
"fields": "",
"values": false
},
"text": {}
},
"targets": [
{
"exemplar": true,
"expr": "sum(rate(traefik_service_requests_total[5m])) by (service) ",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{ service }}",
"refId": "A"
}
],
"title": "Requests by service",
"type": "piechart"
},
{
"cacheTimeout": null,
"datasource": "${DS_PROMETHEUS}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"decimals": 0,
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 23
},
"id": 8,
"interval": null,
"links": [],
"maxDataPoints": 3,
"options": {
"displayLabels": [],
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"values": [
"value"
]
},
"pieType": "pie",
"reduceOptions": {
"calcs": [
"sum"
],
"fields": "",
"values": false
},
"text": {}
},
"targets": [
{
"exemplar": true,
"expr": "sum(rate(traefik_entrypoint_requests_total{entrypoint =~ \"$entrypoint\"}[5m])) by (entrypoint) ",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{ entrypoint }}",
"refId": "A"
}
],
"title": "Requests by protocol",
"type": "piechart"
}
],
"schemaVersion": 27,
"style": "dark",
"tags": [
"traefik",
"prometheus"
],
"templating": {
"list": [
{
"allValue": null,
"current": {},
"datasource": "${DS_PROMETHEUS}",
"definition": "label_values(service)",
"description": null,
"error": null,
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "service",
"options": [],
"query": {
"query": "label_values(service)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {},
"datasource": "${DS_PROMETHEUS}",
"definition": "",
"description": null,
"error": null,
"hide": 0,
"includeAll": true,
"label": null,
"multi": true,
"name": "entrypoint",
"options": [],
"query": {
"query": "label_values(entrypoint)",
"refId": "Prometheus-entrypoint-Variable-Query"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "Traefik",
"uid": "qPdAviJmz",
"version": 10
}

core/lego.nomad (new file, 97 lines)
View File

@ -0,0 +1,97 @@
variable "lego_version" {
default = "4.14.2"
type = string
}
variable "nomad_var_dirsync_version" {
default = "0.0.2"
type = string
}
job "lego" {
type = "batch"
periodic {
cron = "@weekly"
prohibit_overlap = true
}
group "main" {
network {
dns {
servers = ["1.1.1.1", "1.0.0.1"]
}
}
task "main" {
driver = "exec"
config {
# image = "alpine:3"
command = "/bin/bash"
args = ["${NOMAD_TASK_DIR}/start.sh"]
}
artifact {
source = "https://github.com/go-acme/lego/releases/download/v${var.lego_version}/lego_v${var.lego_version}_linux_${attr.cpu.arch}.tar.gz"
}
artifact {
source = "https://git.iamthefij.com/iamthefij/nomad-var-dirsync/releases/download/v${var.nomad_var_dirsync_version}/nomad-var-dirsync-linux-${attr.cpu.arch}.tar.gz"
}
template {
data = <<EOH
#! /bin/sh
set -ex
cd ${NOMAD_TASK_DIR}
echo "Read certs from nomad vars"
${NOMAD_TASK_DIR}/nomad-var-dirsync-linux-{{ env "attr.cpu.arch" }} -root-var=secrets/certs read .
action=run
if [ -f /.lego/certificates/_.thefij.rocks.crt ]; then
action=renew
fi
echo "Attempt to $action certificates"
${NOMAD_TASK_DIR}/lego \
--accept-tos --pem \
--email=iamthefij@gmail.com \
--domains="*.thefij.rocks" \
--dns="cloudflare" \
$action \
--$action-hook="${NOMAD_TASK_DIR}/nomad-var-dirsync-linux-{{ env "attr.cpu.arch" }} -root-var=secrets/certs write .lego" \
EOH
destination = "${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOH
{{ with nomadVar "nomad/jobs/lego" -}}
CF_DNS_API_TOKEN={{ .domain_lego_dns }}
CF_ZONE_API_TOKEN={{ .domain_lego_dns }}
{{- end }}
EOH
destination = "secrets/cloudflare.env"
env = true
}
env = {
NOMAD_ADDR = "unix:///secrets/api.sock"
}
identity {
env = true
}
resources {
cpu = 50
memory = 100
}
}
}
}

core/lego.tf (new file, 23 lines)
View File

@ -0,0 +1,23 @@
resource "nomad_job" "lego" {
jobspec = file("${path.module}/lego.nomad")
}
resource "nomad_acl_policy" "secrets_certs_write" {
name = "secrets-certs-write"
description = "Write certs to secrets store"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/certs/*" {
capabilities = ["write", "read"]
}
path "secrets/certs" {
capabilities = ["write", "read"]
}
}
}
EOH
job_acl {
job_id = "lego/*"
}
}

View File

@ -3,31 +3,27 @@ auth_enabled: false
server:
http_listen_port: 3100
ingester:
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 5m
chunk_retain_period: 30s
max_transfer_retries: 0
common:
ring:
instance_addr: 127.0.0.1
kvstore:
store: inmemory
replication_factor: 1
path_prefix: /tmp/loki
schema_config:
configs:
- from: 2018-04-15
store: boltdb
- from: 2020-05-15
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 168h
period: 24h
storage_config:
boltdb:
directory: {{ env "NOMAD_TASK_DIR" }}/index
boltdb_shipper:
active_index_directory: {{ env "NOMAD_TASK_DIR" }}/index
filesystem:
directory: {{ env "NOMAD_TASK_DIR" }}/chunks
@ -38,8 +34,8 @@ limits_config:
reject_old_samples_max_age: 168h
chunk_store_config:
max_look_back_period: 0s
max_look_back_period: 168h
table_manager:
retention_deletes_enabled: false
retention_period: 0s
retention_deletes_enabled: true
retention_period: 168h

View File

@ -3,15 +3,17 @@ module "loki" {
detach = false
name = "loki"
image = "grafana/loki:2.2.1"
image = "grafana/loki:2.8.7"
args = ["--config.file=$${NOMAD_TASK_DIR}/loki-config.yml"]
service_port = 3100
ingress = true
use_wesher = var.use_wesher
service_check = {
path = "/ready"
}
sticky_disk = true
# healthcheck = "/ready"
templates = [
{
data = file("${path.module}/loki-config.yml")

View File

@ -24,7 +24,8 @@ job "nomad-client-stalker" {
resources {
cpu = 10
memory = 10
memory = 15
memory_max = 30
}
}
}

View File

@ -27,9 +27,7 @@ job "syslogng" {
driver = "docker"
meta = {
"diun.sort_tags" = "semver"
"diun.watch_repo" = true
"diun.include_tags" = "^[0-9]+\\.[0-9]+\\.[0-9]+$"
}
config {

View File

@ -2,20 +2,20 @@
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/nomad" {
version = "1.4.17"
version = "2.1.0"
hashes = [
"h1:iPylWr144mqXvM8NBVMTm+MS6JRhqIihlpJG91GYDyA=",
"zh:146f97eacd9a0c78b357a6cfd2cb12765d4b18e9660a75500ee3e748c6eba41a",
"zh:2eb89a6e5cee9aea03a96ea9f141096fe3baf219b2700ce30229d2d882f5015f",
"zh:3d0f971f79b615c1014c75e2f99f34bd4b4da542ca9f31d5ea7fadc4e9de39c1",
"zh:46099a750c752ce05aa14d663a86478a5ad66d95aff3d69367f1d3628aac7792",
"zh:71e56006b013dcfe1e4e059b2b07148b44fcd79351ae2c357e0d97e27ae0d916",
"zh:74febd25d776688f0558178c2f5a0e6818bbf4cdaa2e160d7049da04103940f0",
"h1:ek0L7fA+4R1/BXhbutSRqlQPzSZ5aY/I2YfVehuYeEU=",
"zh:39ba4d4fc9557d4d2c1e4bf866cf63973359b73e908cce237c54384512bdb454",
"zh:40d2b66e3f3675e6b88000c145977c1d5288510c76b702c6c131d9168546c605",
"zh:40fbe575d85a083f96d4703c6b7334e9fc3e08e4f1d441de2b9513215184ebcc",
"zh:42ce6db79e2f94557fae516ee3f22e5271f0b556638eb45d5fbad02c99fc7af3",
"zh:4acf63dfb92f879b3767529e75764fef68886521b7effa13dd0323c38133ce88",
"zh:72cf35a13c2fb542cd3c8528826e2390db9b8f6f79ccb41532e009ad140a3269",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:af18c064a5f0dd5422d6771939274841f635b619ab392c73d5bf9720945fdb85",
"zh:c133d7a862079da9f06e301c530eacbd70e9288fa2276ec0704df907270ee328",
"zh:c894cf98d239b9f5a4b7cde9f5c836face0b5b93099048ee817b0380ea439c65",
"zh:c918642870f0cafdbe4d7dd07c909701fc3ddb47cac8357bdcde1327bf78c11d",
"zh:f8f5655099a57b4b9c0018a2d49133771e24c7ff8262efb1ceb140fd224aa9b6",
"zh:8b8bcc136c05916234cb0c3bcc3d48fda7ca551a091ad8461ea4ab16fb6960a3",
"zh:8e1c2f924eae88afe7ac83775f000ae8fd71a04e06228edf7eddce4df2421169",
"zh:abc6e725531fc06a8e02e84946aaabc3453ecafbc1b7a442ea175db14fd9c86a",
"zh:b735fcd1fb20971df3e92f81bb6d73eef845dcc9d3d98e908faa3f40013f0f69",
"zh:ce59797282505d872903789db8f092861036da6ec3e73f6507dac725458a5ec9",
]
}

View File

@ -20,7 +20,7 @@ job "traefik" {
}
group "traefik" {
count = 1
count = 2
network {
port "web" {
@ -39,12 +39,13 @@ job "traefik" {
static = 2222
}
port "metrics" {}
dns {
servers = [
"192.168.2.101",
"192.168.2.102",
"192.168.2.30",
"192.168.2.170",
]
}
}
@ -54,39 +55,42 @@ job "traefik" {
sticky = true
}
service {
name = "traefik"
provider = "nomad"
port = "web"
check {
type = "http"
path = "/ping"
port = "web"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.traefik.entryPoints=websecure",
"traefik.http.routers.traefik.service=api@internal",
]
}
task "traefik" {
driver = "docker"
meta = {
"diun.sort_tags" = "semver"
"diun.watch_repo" = true
"diun.include_tags" = "^[0-9]+\\.[0-9]+$"
service {
name = "traefik"
provider = "nomad"
port = "web"
check {
type = "http"
path = "/ping"
interval = "10s"
timeout = "2s"
}
tags = [
"traefik.enable=true",
"traefik.http.routers.traefik.entryPoints=websecure",
"traefik.http.routers.traefik.service=api@internal",
]
}
service {
name = "traefik-metrics"
provider = "nomad"
port = "metrics"
tags = [
"prometheus.scrape",
]
}
config {
image = "traefik:2.9"
image = "traefik:2.10"
ports = ["web", "websecure"]
ports = ["web", "websecure", "syslog", "gitssh", "metrics"]
network_mode = "host"
mount {
@ -100,6 +104,12 @@ job "traefik" {
target = "/etc/traefik/usersfile"
source = "secrets/usersfile"
}
mount {
type = "bind"
target = "/etc/traefik/certs"
source = "secrets/certs"
}
}
template {
@ -122,12 +132,9 @@ job "traefik" {
[entryPoints.websecure]
address = ":443"
[entryPoints.websecure.http.tls]
certResolver = "letsEncrypt"
[[entryPoints.websecure.http.tls.domains]]
main = "*.<< with nomadVar "nomad/jobs" >><< .base_hostname >><< end >>"
[entryPoints.metrics]
address = ":8989"
address = ":<< env "NOMAD_PORT_metrics" >>"
[entryPoints.syslogtcp]
address = ":514"
@ -157,31 +164,9 @@ job "traefik" {
exposedByDefault = false
defaultRule = "Host(`{{normalize .Name}}.<< with nomadVar "nomad/jobs" >><< .base_hostname >><< end >>`)"
[providers.nomad.endpoint]
address = "http://<< env "attr.unique.network.ip-address" >>:4646"
<< if nomadVarExists "nomad/jobs/traefik" ->>
[certificatesResolvers.letsEncrypt.acme]
email = "<< with nomadVar "nomad/jobs/traefik" >><< .acme_email >><< end >>"
# Store in /local because /secrets doesn't persist with ephemeral disk
storage = "/local/acme.json"
[certificatesResolvers.letsEncrypt.acme.dnsChallenge]
provider = "cloudflare"
resolvers = ["1.1.1.1:53", "8.8.8.8:53"]
delayBeforeCheck = 0
<<- end >>
address = "http://127.0.0.1:4646"
EOH
destination = "local/config/traefik.toml"
}
template {
data = <<EOH
{{ with nomadVar "nomad/jobs/traefik" -}}
CF_DNS_API_TOKEN={{ .domain_lego_dns }}
CF_ZONE_API_TOKEN={{ .domain_lego_dns }}
{{- end }}
EOH
destination = "secrets/cloudflare.env"
env = true
destination = "${NOMAD_TASK_DIR}/config/traefik.toml"
}
template {
@ -192,23 +177,39 @@ CF_ZONE_API_TOKEN={{ .domain_lego_dns }}
entryPoints = ["websecure"]
service = "nomad"
rule = "Host(`nomad.{{ with nomadVar "nomad/jobs" }}{{ .base_hostname }}{{ end }}`)"
[http.routers.hass]
{{ with nomadVar "nomad/jobs/traefik" }}{{ with .external }}{{ with .Value | parseYAML -}}
{{ range $service, $url := . }}
[http.routers.{{ $service }}]
entryPoints = ["websecure"]
service = "hass"
rule = "Host(`hass.{{ with nomadVar "nomad/jobs" }}{{ .base_hostname }}{{ end }}`)"
service = "{{ $service }}"
rule = "Host(`{{ $service }}.{{ with nomadVar "nomad/jobs" }}{{ .base_hostname }}{{ end }}`)"
{{ end }}
{{- end }}{{ end }}{{ end }}
[http.services]
[http.services.nomad]
[http.services.nomad.loadBalancer]
[[http.services.nomad.loadBalancer.servers]]
url = "http://127.0.0.1:4646"
[http.services.hass]
[http.services.hass.loadBalancer]
[[http.services.hass.loadBalancer.servers]]
url = "http://192.168.3.65:8123"
{{ with nomadVar "nomad/jobs/traefik" }}{{ with .external }}{{ with .Value | parseYAML -}}
{{ range $service, $url := . }}
[http.services.{{ $service }}]
[http.services.{{ $service }}.loadBalancer]
[[http.services.{{ $service }}.loadBalancer.servers]]
url = "{{ $url }}"
{{ end }}
{{- end }}{{ end }}{{ end }}
EOH
destination = "local/config/conf/route-hashi.toml"
destination = "${NOMAD_TASK_DIR}/config/conf/route-hashi.toml"
change_mode = "noop"
splay = "1m"
wait {
min = "10s"
max = "20s"
}
}
template {
@ -244,7 +245,39 @@ CF_ZONE_API_TOKEN={{ .domain_lego_dns }}
{{ end -}}
{{- end }}
EOH
destination = "local/config/conf/route-syslog-ng.toml"
destination = "${NOMAD_TASK_DIR}/config/conf/route-syslog-ng.toml"
change_mode = "noop"
splay = "1m"
wait {
min = "10s"
max = "20s"
}
}
template {
data = <<EOF
{{- with nomadVar "secrets/certs/_lego/certificates/__thefij_rocks_crt" }}{{ .contents }}{{ end -}}
EOF
destination = "${NOMAD_SECRETS_DIR}/certs/_.thefij.rocks.crt"
change_mode = "noop"
}
template {
data = <<EOF
{{- with nomadVar "secrets/certs/_lego/certificates/__thefij_rocks_key" }}{{ .contents }}{{ end -}}
EOF
destination = "${NOMAD_SECRETS_DIR}/certs/_.thefij.rocks.key"
change_mode = "noop"
}
template {
data = <<EOH
[[tls.certificates]]
certFile = "/etc/traefik/certs/_.thefij.rocks.crt"
keyFile = "/etc/traefik/certs/_.thefij.rocks.key"
EOH
destination = "${NOMAD_TASK_DIR}/config/conf/dynamic-tls.toml"
change_mode = "noop"
}
@ -258,7 +291,7 @@ CF_ZONE_API_TOKEN={{ .domain_lego_dns }}
{{- end }}
{{- end }}
EOH
destination = "local/config/conf/middlewares.toml"
destination = "${NOMAD_TASK_DIR}/config/conf/middlewares.toml"
change_mode = "noop"
}
@ -268,7 +301,7 @@ CF_ZONE_API_TOKEN={{ .domain_lego_dns }}
{{ .usersfile }}
{{- end }}
EOH
destination = "secrets/usersfile"
destination = "${NOMAD_SECRETS_DIR}/usersfile"
change_mode = "noop"
}

View File

@ -1,3 +1,23 @@
resource "nomad_job" "traefik" {
jobspec = file("${path.module}/traefik.nomad")
}
resource "nomad_acl_policy" "treafik_secrets_certs_read" {
name = "traefik-secrets-certs-read"
description = "Read certs from secrets store"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/certs/*" {
capabilities = ["read"]
}
path "secrets/certs" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
job_id = resource.nomad_job.traefik.id
}
}

View File

@ -70,10 +70,12 @@ job "lldap" {
data = <<EOH
ldap_base_dn = "{{ with nomadVar "nomad/jobs" }}{{ .ldap_base_dn }}{{ end }}"
{{ with nomadVar "nomad/jobs/lldap" -}}
{{ with nomadVar "secrets/ldap" -}}
ldap_user_dn = "{{ .admin_user }}"
ldap_user_email = "{{ .admin_email }}"
{{ end -}}
{{ with nomadVar "nomad/jobs/lldap" -}}
[smtp_options]
from = "{{ .smtp_from }}"
reply_to = "{{ .smtp_reply_to }}"
@ -109,7 +111,7 @@ user = "{{ .user }}"
}
template {
data = "{{ with nomadVar \"nomad/jobs/lldap\" }}{{ .admin_password }}{{ end }}"
data = "{{ with nomadVar \"secrets/ldap\" }}{{ .admin_password }}{{ end }}"
destination = "$${NOMAD_SECRETS_DIR}/user_pass.txt"
change_mode = "restart"
}
@ -193,9 +195,9 @@ SELECT 'NOOP';
}
config {
image = "alpine:3.17"
image = "iamthefij/stunnel:latest"
args = ["$${NOMAD_TASK_DIR}/stunnel.conf"]
ports = ["tls"]
args = ["/bin/sh", "$${NOMAD_TASK_DIR}/start.sh"]
}
resources {
@ -203,15 +205,6 @@ SELECT 'NOOP';
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel {{ env "NOMAD_TASK_DIR" }}/stunnel.conf
EOF
destination = "$${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no
@ -222,7 +215,7 @@ delay = yes
accept = {{ env "NOMAD_PORT_tls" }}
connect = 127.0.0.1:{{ env "NOMAD_PORT_ldap" }}
ciphers = PSK
PSKsecrets = {{ env "NOMAD_TASK_DIR" }}/stunnel_psk.txt
PSKsecrets = {{ env "NOMAD_SECRETS_DIR" }}/stunnel_psk.txt
[mysql_client]
client = yes
@ -241,7 +234,7 @@ PSKsecrets = {{ env "NOMAD_SECRETS_DIR" }}/mysql_stunnel_psk.txt
{{ with nomadVar .Path }}{{ .psk }}{{ end }}
{{ end -}}
EOF
destination = "$${NOMAD_TASK_DIR}/stunnel_psk.txt"
destination = "$${NOMAD_SECRETS_DIR}/stunnel_psk.txt"
}
template {

View File

@ -9,6 +9,42 @@ resource "nomad_job" "lldap" {
detach = false
}
# Give access to ldap secrets
resource "nomad_acl_policy" "lldap_ldap_secrets" {
name = "lldap-secrets-ldap"
description = "Give access to LDAP secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/ldap/*" {
capabilities = ["read"]
}
path "secrets/ldap" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
# job_id = resource.nomad_job.lldap.id
job_id = "lldap"
}
}
# Create self-scoped PSK so that the config is valid at first start
resource "random_password" "lldap_ldap_psk" {
length = 32
override_special = "!@#%&*-_="
}
resource "nomad_variable" "lldap_ldap_psk" {
path = "secrets/ldap/allowed_psks/ldap"
items = {
psk = "lldap:${resource.random_password.lldap_ldap_psk.result}"
}
}
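The psk item uses stunnel's PSKsecrets format of identity:key, so the file rendered from these variables (one pair per line) would look roughly like the sketch below; the key value is hypothetical. The same self-scoped pattern repeats for mysql and postgres further down.
# Sketch of the rendered stunnel_psk.txt (hypothetical key):
lldap:u8Qzx3kPwR9fTbaN1cVeG5hJ0mYd7sLq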
# Give access to smtp secrets
resource "nomad_acl_policy" "lldap_smtp_secrets" {
name = "lldap-secrets-smtp"
@ -24,6 +60,7 @@ namespace "default" {
EOH
job_acl {
# job_id = resource.nomad_job.lldap.id
job_id = "lldap"
group = "lldap"
task = "lldap"
@ -45,6 +82,7 @@ namespace "default" {
EOH
job_acl {
# job_id = resource.nomad_job.lldap.id
job_id = "lldap"
group = "lldap"
task = "bootstrap"
@ -77,27 +115,9 @@ namespace "default" {
EOH
job_acl {
# job_id = resource.nomad_job.lldap.id
job_id = "lldap"
group = "lldap"
task = "stunnel"
}
}
# Give access to all ldap secrets
resource "nomad_acl_policy" "secrets_ldap" {
name = "secrets-ldap"
description = "Give access to Postgres secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/ldap/*" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
job_id = resource.nomad_job.lldap.id
}
}

View File

@ -1,62 +0,0 @@
resource "nomad_job" "mysql-server" {
jobspec = file("${path.module}/mysql.nomad")
# Block until deployed as there are services dependent on this one
detach = false
}
resource "nomad_acl_policy" "secrets_mysql" {
name = "secrets-mysql"
description = "Give access to MySQL secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/mysql/*" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
job_id = resource.nomad_job.mysql-server.id
}
}
resource "nomad_job" "postgres-server" {
jobspec = file("${path.module}/postgres.nomad")
# Block until deployed as there are services dependent on this one
detach = false
}
resource "nomad_acl_policy" "secrets_postgres" {
name = "secrets-postgres"
description = "Give access to Postgres secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/postgres/*" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
job_id = resource.nomad_job.postgres-server.id
}
}
resource "nomad_job" "redis" {
for_each = toset(["blocky", "authelia"])
jobspec = templatefile("${path.module}/redis.nomad",
{
name = each.key,
}
)
# Block until deployed as there are services dependent on this one
detach = false
}

View File

@ -81,9 +81,9 @@ MYSQL_ROOT_PASSWORD={{ .mysql_root_password }}
driver = "docker"
config {
image = "alpine:3.17"
image = "iamthefij/stunnel:latest"
args = ["${NOMAD_TASK_DIR}/stunnel.conf"]
ports = ["tls"]
args = ["/bin/sh", "${NOMAD_TASK_DIR}/start.sh"]
}
resources {
@ -91,15 +91,6 @@ MYSQL_ROOT_PASSWORD={{ .mysql_root_password }}
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel ${NOMAD_TASK_DIR}/stunnel.conf
EOF
destination = "${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no

41
databases/mysql.tf Normal file
View File

@ -0,0 +1,41 @@
resource "nomad_job" "mysql-server" {
jobspec = file("${path.module}/mysql.nomad")
# Block until deployed as there are services dependent on this one
detach = false
}
resource "nomad_acl_policy" "secrets_mysql" {
name = "secrets-mysql"
description = "Give access to MySQL secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/mysql" {
capabilities = ["read"]
}
path "secrets/mysql/*" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
# job_id = resource.nomad_job.mysql-server.id
job_id = "mysql-server"
}
}
# Create self-scoped PSK so that the config is valid at first start
resource "random_password" "mysql_mysql_psk" {
length = 32
override_special = "!@#%&*-_="
}
resource "nomad_variable" "mysql_mysql_psk" {
path = "secrets/mysql/allowed_psks/mysql"
items = {
psk = "mysql:${resource.random_password.mysql_mysql_psk.result}"
}
}

View File

@ -73,7 +73,8 @@ POSTGRES_PASSWORD={{ .superuser_pass }}
resources {
cpu = 500
memory = 500
memory = 700
memory_max = 1200
}
}
@ -81,9 +82,9 @@ POSTGRES_PASSWORD={{ .superuser_pass }}
driver = "docker"
config {
image = "alpine:3.17"
image = "iamthefij/stunnel:latest"
args = ["${NOMAD_TASK_DIR}/stunnel.conf"]
ports = ["tls"]
args = ["/bin/sh", "${NOMAD_TASK_DIR}/start.sh"]
}
resources {
@ -91,15 +92,6 @@ POSTGRES_PASSWORD={{ .superuser_pass }}
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel ${NOMAD_TASK_DIR}/stunnel.conf
EOF
destination = "${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no

41
databases/postgres.tf Normal file
View File

@ -0,0 +1,41 @@
resource "nomad_job" "postgres-server" {
jobspec = file("${path.module}/postgres.nomad")
# Block until deployed as there are services dependent on this one
detach = false
}
resource "nomad_acl_policy" "secrets_postgres" {
name = "secrets-postgres"
description = "Give access to Postgres secrets"
rules_hcl = <<EOH
namespace "default" {
variables {
path "secrets/postgres" {
capabilities = ["read"]
}
path "secrets/postgres/*" {
capabilities = ["read"]
}
}
}
EOH
job_acl {
# job_id = resource.nomad_job.postgres-server.id
job_id = "postgres-server"
}
}
# Create self-scoped PSK so that the config is valid at first start
resource "random_password" "postgres_postgres_psk" {
length = 32
override_special = "!@#%&*-_="
}
resource "nomad_variable" "postgres_postgres_psk" {
path = "secrets/postgres/allowed_psks/postgres"
items = {
psk = "postgres:${resource.random_password.postgres_postgres_psk.result}"
}
}

View File

@ -35,7 +35,7 @@ job "redis-${name}" {
resources {
cpu = 100
memory = 128
memory = 64
memory_max = 512
}
}
@ -44,23 +44,14 @@ job "redis-${name}" {
driver = "docker"
config {
image = "alpine:3.17"
image = "iamthefij/stunnel:latest"
args = ["$${NOMAD_TASK_DIR}/stunnel.conf"]
ports = ["tls"]
args = ["/bin/sh", "$${NOMAD_TASK_DIR}/start.sh"]
}
resources {
cpu = 100
memory = 100
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel $${NOMAD_TASK_DIR}/stunnel.conf
EOF
destination = "$${NOMAD_TASK_DIR}/start.sh"
cpu = 50
memory = 15
}
template {

12
databases/redis.tf Normal file
View File

@ -0,0 +1,12 @@
resource "nomad_job" "redis" {
for_each = toset(["blocky", "authelia"])
jobspec = templatefile("${path.module}/redis.nomad",
{
name = each.key,
}
)
# Block until deployed as there are services dependent on this one
detach = false
}
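With for_each over the two names, Terraform renders the jobspec once per key and tracks each instance separately:
# Resulting resource instances (sketch):
#   nomad_job.redis["blocky"]   -> Nomad job "redis-blocky"
#   nomad_job.redis["authelia"] -> Nomad job "redis-authelia"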

View File

@ -2,39 +2,39 @@
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/nomad" {
version = "2.0.0"
version = "2.1.1"
hashes = [
"h1:lIHIxA6ZmfyTGL3J9YIddhxlfit4ipSS09BLxkwo6L0=",
"zh:09b897d64db293f9a904a4a0849b11ec1e3fff5c638f734d82ae36d8dc044b72",
"zh:435cc106799290f64078ec24b6c59cb32b33784d609088638ed32c6d12121199",
"zh:7073444bd064e8c4ec115ca7d9d7f030cc56795c0a83c27f6668bba519e6849a",
"h1:liQBgBXfQEYmwpoGZUfSsu0U0t/nhvuRZbMhaMz7wcQ=",
"zh:28bc6922e8a21334568410760150d9d413d7b190d60b5f0b4aab2f4ef52efeeb",
"zh:2d4283740e92ce1403875486cd5ff2c8acf9df28c190873ab4d769ce37db10c1",
"zh:457e16d70075eae714a7df249d3ba42c2176f19b6750650152c56f33959028d9",
"zh:49ee88371e355c00971eefee6b5001392431b47b3e760a5c649dda76f59fb8fa",
"zh:614ad3bf07155ed8a5ced41dafb09042afbd1868490a379654b3e970def8e33d",
"zh:75be7199d76987e7549e1f27439922973d1bf27353b44a593bfbbc2e3b9f698f",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:79d238c35d650d2d83a439716182da63f3b2767e72e4cbd0b69cb13d9b1aebfc",
"zh:7ef5f49344278fe0bbc5447424e6aa5425ff1821d010d944a444d7fa2c751acf",
"zh:92179091638c8ba03feef371c4361a790190f9955caea1fa59de2055c701a251",
"zh:a8a34398851761368eb8e7c171f24e55efa6e9fdbb5c455f6dec34dc17f631bc",
"zh:b38fd5338625ebace5a4a94cea1a28b11bd91995d834e318f47587cfaf6ec599",
"zh:b71b273a2aca7ad5f1e07c767b25b5a888881ba9ca93b30044ccc39c2937f03c",
"zh:cd14357e520e0f09fb25badfb4f2ee37d7741afdc3ed47c7bcf54c1683772543",
"zh:e05e025f4bb95138c3c8a75c636e97cd7cfd2fc1525b0c8bd097db8c5f02df6e",
"zh:888e14a24410d56b37212fbea373a3e0401d0ff8f8e4f4dd00ba8b29de9fed39",
"zh:aa261925e8b152636a0886b3a2149864707632d836e98a88dacea6cfb6302082",
"zh:ac10cefb4064b3bb63d4b0379624a416a45acf778eac0004466f726ead686196",
"zh:b1a3c8b4d5b2dc9b510eac5e9e02665582862c24eb819ab74f44d3d880246d4f",
"zh:c552e2fe5670b6d3ad9a5faf78e3a27197eeedbe2b13928d2c491fa509bc47c7",
]
}
provider "registry.terraform.io/hashicorp/random" {
version = "3.5.1"
version = "3.6.0"
hashes = [
"h1:VSnd9ZIPyfKHOObuQCaKfnjIHRtR7qTw19Rz8tJxm+k=",
"zh:04e3fbd610cb52c1017d282531364b9c53ef72b6bc533acb2a90671957324a64",
"zh:119197103301ebaf7efb91df8f0b6e0dd31e6ff943d231af35ee1831c599188d",
"zh:4d2b219d09abf3b1bb4df93d399ed156cadd61f44ad3baf5cf2954df2fba0831",
"zh:6130bdde527587bbe2dcaa7150363e96dbc5250ea20154176d82bc69df5d4ce3",
"zh:6cc326cd4000f724d3086ee05587e7710f032f94fc9af35e96a386a1c6f2214f",
"h1:R5Ucn26riKIEijcsiOMBR3uOAjuOMfI1x7XvH4P6B1w=",
"zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d",
"zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211",
"zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829",
"zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d",
"zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055",
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
"zh:b6d88e1d28cf2dfa24e9fdcc3efc77adcdc1c3c3b5c7ce503a423efbdd6de57b",
"zh:ba74c592622ecbcef9dc2a4d81ed321c4e44cddf7da799faa324da9bf52a22b2",
"zh:c7c5cde98fe4ef1143bd1b3ec5dc04baf0d4cc3ca2c5c7d40d17c0e9b2076865",
"zh:dac4bad52c940cd0dfc27893507c1e92393846b024c5a9db159a93c534a3da03",
"zh:de8febe2a2acd9ac454b844a4106ed295ae9520ef54dc8ed2faf29f12716b602",
"zh:eab0d0495e7e711cca367f7d4df6e322e6c562fc52151ec931176115b83ed014",
"zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17",
"zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21",
"zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839",
"zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0",
"zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c",
"zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e",
]
}

View File

@ -1,58 +0,0 @@
module "bazarr" {
source = "./service"
name = "bazarr"
image = "lscr.io/linuxserver/bazarr:1.2.4"
resources = {
cpu = 150
memory = 400
}
ingress = true
service_port = 6767
use_wesher = var.use_wesher
use_postgres = true
postgres_bootstrap = {
enabled = true
}
env = {
PGID = 100
PUID = 1001
TZ = "America/Los_Angeles"
}
host_volumes = [
{
name = "bazarr-config"
dest = "/config"
read_only = false
},
{
name = "media-write"
dest = "/media"
read_only = false
},
]
templates = [
{
data = <<EOF
{{ with nomadVar "nomad/jobs/bazarr" -}}
POSTGRES_ENABLED=True
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
POSTGRES_DATABASE={{ .db_name }}
POSTGRES_USERNAME={{ .db_user }}
POSTGRES_PASSWORD={{ .db_pass }}
{{- end }}
EOF
dest_prefix = "$${NOMAD_SECRETS_DIR}/"
dest = "env"
env = true
mount = false
},
]
}

View File

@ -2,7 +2,7 @@ module "diun" {
source = "./service"
name = "diun"
image = "crazymax/diun:4.26"
image = "crazymax/diun:4.27"
args = ["serve", "--log-level=debug"]
sticky_disk = true
@ -16,10 +16,13 @@ module "diun" {
DIUN_DEFAULTS_INCLUDETAGS = "^\\d+(\\.\\d+){0,2}$"
# Nomad API
# TODO: Use socket in $NOMAD_SECRETS_DIR/api.sock when we can assign workload ACLs with Terraform to
# allow read access. Will need to update template to allow passing token by env
NOMAD_ADDR = "http://$${attr.unique.network.ip-address}:4646/"
DIUN_PROVIDERS_NOMAD = true
NOMAD_ADDR = "unix:///secrets/api.sock"
DIUN_PROVIDERS_NOMAD = true
DIUN_PROVIDERS_NOMAD_SECRETID = "$${NOMAD_TOKEN}"
}
task_identity = {
env = true
}
templates = [
@ -36,3 +39,16 @@ module "diun" {
},
]
}
resource "nomad_acl_policy" "diun_query_jobs" {
name = "diun-query-jobs"
description = "Allow diun to query jobs"
rules_hcl = <<EOH
namespace "default" {
capabilities = ["list-jobs", "read-job"]
}
EOH
job_acl {
job_id = module.diun.job_id
}
}
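For context, a minimal sketch of what the task_identity flag above is assumed to render in the jobspec (matching the service module template): with env = true, Nomad injects NOMAD_TOKEN into the task environment, which diun then presents as its secret ID over the task API socket.
identity {
  env  = true   # inject NOMAD_TOKEN into the task environment
  file = false  # no token file in NOMAD_SECRETS_DIR needed here
}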

View File

@ -17,12 +17,16 @@ module "gitea" {
ingress = true
service_port = 3000
use_wesher = var.use_wesher
ports = [
{
name = "ssh"
to = 22
}
]
service_check = {
path = "/api/healthz"
}
custom_services = [
{

25
services/languagetool.tf Normal file
View File

@ -0,0 +1,25 @@
module "languagetool" {
source = "./service"
name = "languagetool"
image = "ghcr.io/erikvl87/docker-languagetool/languagetool:4.8"
ingress = true
service_port = 8010
use_wesher = var.use_wesher
env = {
Java_Xmx = "512m"
}
service_check = {
path = "/v2/healthcheck"
}
# Optionally, a volume over NFS could host n-gram datasets
# https://github.com/Erikvl87/docker-languagetool/pkgs/container/docker-languagetool%2Flanguagetool#using-n-gram-datasets
resources = {
cpu = 100
memory = 512
}
}

View File

@ -40,9 +40,4 @@ module "lidarr" {
cpu = 500
memory = 1500
}
stunnel_resources = {
cpu = 100
memory = 100
}
}

View File

@ -21,15 +21,6 @@ monitors:
- '/app/scripts/curl_ok.sh'
- 'https://grafana.thefij.rocks'
- name: Plex
command:
- 'curl'
- '--silent'
- '--show-error'
- '-o'
- '/dev/null'
- 'http://192.168.2.10:32400'
- name: NZBget
command:
- '/app/scripts/curl_ok.sh'
@ -45,6 +36,11 @@ monitors:
- '/app/scripts/curl_ok.sh'
- 'https://lidarr.thefij.rocks'
- name: Radarr
command:
- '/app/scripts/curl_ok.sh'
- 'https://radarr.thefij.rocks'
- name: Authelia
command:
- '/app/scripts/curl_ok.sh'
@ -55,6 +51,20 @@ monitors:
- '/app/scripts/curl_ok.sh'
- 'https://photoprism.thefij.rocks'
- name: Prometheus
command:
- '/app/scripts/curl_ok.sh'
- 'https://prometheus.thefij.rocks'
- name: Plex
command:
- 'curl'
- '--silent'
- '--show-error'
- '-o'
- '/dev/null'
- 'http://192.168.2.10:32400'
alerts:
log:
command:

View File

@ -1,12 +1,13 @@
module "minitor" {
source = "./service"
name = "minitor"
image = "iamthefij/minitor-go:1.4.1"
args = ["-metrics", "-config=$${NOMAD_TASK_DIR}/config.yml"]
service_port = 8080
use_wesher = var.use_wesher
prometheus = true
name = "minitor"
image = "iamthefij/minitor-go:1.4.1"
args = ["-metrics", "-config=$${NOMAD_TASK_DIR}/config.yml"]
service_port = 8080
service_check = null
use_wesher = var.use_wesher
prometheus = true
env = {
TZ = "America/Los_Angeles",

View File

@ -7,8 +7,7 @@ job "fixers" {
prohibit_overlap = true
}
group "main" {
group "orphaned_services" {
task "orphaned_services" {
driver = "docker"
@ -25,8 +24,15 @@ job "fixers" {
identity {
env = true
}
}
resources {
cpu = 50
memory = 100
}
}
}
group "missing_services" {
task "missing_services" {
driver = "docker"
@ -43,6 +49,11 @@ job "fixers" {
identity {
env = true
}
resources {
cpu = 50
memory = 100
}
}
}
}

View File

@ -2,27 +2,24 @@ module "photoprism_module" {
source = "./service"
name = "photoprism"
image = "photoprism/photoprism:221118-jammy"
image = "photoprism/photoprism:231128"
image_pull_timeout = "10m"
# constraints = [{
# attribute = "$${meta.hw_transcode.type}"
# # operator = "is_set"
# value = "raspberry"
# }]
priority = 60
# docker_devices = [{
# host_path = "$${meta.hw_transcode.device}"
# container_path = "$${meta.hw_transcode.device}"
# }]
resources = {
cpu = 2000
memory = 2500
cpu = 1500
memory = 2200
memory_max = 4000
}
stunnel_resources = {
cpu = 100
memory = 100
}
sticky_disk = true
host_volumes = [
{

59
services/radarr.tf Normal file
View File

@ -0,0 +1,59 @@
module "radarr" {
source = "./service"
name = "radarr"
image = "lscr.io/linuxserver/radarr:5.2.6"
ingress = true
service_port = 7878
use_wesher = var.use_wesher
ingress_middlewares = [
"authelia@nomad"
]
use_postgres = true
postgres_bootstrap = {
enabled = true
databases = [
"radarr",
"radarr-logs",
]
}
env = {
PGID = 100
PUID = 1001
TZ = "America/Los_Angeles"
}
host_volumes = [
{
name = "radarr-config"
dest = "/config"
read_only = false
},
{
name = "media-write"
dest = "/media"
read_only = false
},
]
resources = {
cpu = 500
memory = 500
memory_max = 700
}
}
resource "nomad_variable" "authelia_service_rules_radarr" {
path = "authelia/access_control/service_rules/radarr"
items = {
name = "radarr"
rule = <<EOH
policy: bypass
resources:
- '^/api([/?].*)?$'
EOH
}
}

View File

@ -8,6 +8,7 @@ resource "nomad_job" "service" {
args = var.args
env = var.env
task_meta = var.task_meta
task_identity = var.task_identity
group_meta = var.group_meta
job_meta = var.job_meta
constraints = var.constraints
@ -15,6 +16,7 @@ resource "nomad_job" "service" {
service_port = var.service_port
service_port_static = var.service_port_static
service_check = var.service_check
ports = var.ports
sticky_disk = var.sticky_disk
resources = var.resources
@ -59,7 +61,7 @@ namespace "default" {
EOH
job_acl {
job_id = var.name
job_id = resource.nomad_job.service.id
group = var.name
task = "mysql-bootstrap"
}
@ -97,7 +99,7 @@ namespace "default" {
EOH
job_acl {
job_id = var.name
job_id = resource.nomad_job.service.id
group = var.name
task = "stunnel"
}
@ -119,7 +121,7 @@ namespace "default" {
EOH
job_acl {
job_id = var.name
job_id = resource.nomad_job.service.id
group = var.name
task = "postgres-bootstrap"
}
@ -157,7 +159,7 @@ namespace "default" {
EOH
job_acl {
job_id = var.name
job_id = resource.nomad_job.service.id
group = var.name
task = "stunnel"
}
@ -195,7 +197,7 @@ namespace "default" {
EOH
job_acl {
job_id = var.name
job_id = resource.nomad_job.service.id
group = var.name
task = "stunnel"
}
@ -217,7 +219,7 @@ namespace "default" {
EOH
job_acl {
job_id = var.name
job_id = resource.nomad_job.service.id
group = var.name
task = var.name
}

View File

@ -0,0 +1,3 @@
output "job_id" {
value = resource.nomad_job.service.id
}
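Exposing the job ID lets callers attach ACL policies to the rendered job, as the diun policy above does with job_id = module.diun.job_id.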

View File

@ -73,43 +73,7 @@ job "${name}" {
source = "${host_volume.name}"
}
%{~ endfor ~}
%{~ if service_port != null }
service {
name = "${replace(name, "_", "-")}"
provider = "nomad"
port = "main"
tags = [
%{~ if prometheus == true ~}
"prometheus.scrape",
%{~ endif ~}
%{~ if ingress ~}
"traefik.enable=true",
"traefik.http.routers.${name}.entryPoints=websecure",
%{~ if try(ingress_rule, null) != null ~}
"traefik.http.routers.${name}.rule=${ingress_rule}",
%{~ endif ~}
%{~ for middleware in ingress_middlewares ~}
"traefik.http.routers.${name}.middlewares=${middleware}",
%{~ endfor ~}
%{~ endif ~}
%{~ for tag in service_tags ~}
"${tag}",
%{~ endfor ~}
]
}
%{~ endif ~}
%{~ for custom_service in custom_services ~}
service {
name = "${custom_service.name}"
provider = "nomad"
port = "${custom_service.port}"
tags = ${jsonencode(custom_service.tags)}
}
%{~ endfor ~}
task "${name}" {
driver = "docker"
%{~ if length(task_meta) > 0 }
@ -119,7 +83,63 @@ job "${name}" {
%{ endfor ~}
}
%{~ endif ~}
%{~ if service_port != null }
service {
name = "${replace(name, "_", "-")}"
provider = "nomad"
port = "main"
tags = [
%{~ if prometheus == true ~}
"prometheus.scrape",
%{~ endif ~}
%{~ if ingress ~}
"traefik.enable=true",
"traefik.http.routers.${name}.entryPoints=websecure",
%{~ if try(ingress_rule, null) != null ~}
"traefik.http.routers.${name}.rule=${ingress_rule}",
%{~ endif ~}
%{~ for middleware in ingress_middlewares ~}
"traefik.http.routers.${name}.middlewares=${middleware}",
%{~ endfor ~}
%{~ endif ~}
%{~ for tag in service_tags ~}
"${tag}",
%{~ endfor ~}
]
%{~ if service_check != null ~}
check {
%{~ if service_check.name != "" ~}
name = "${service_check.name}"
%{~ endif ~}
%{~ if service_check.name != "" ~}
port = "${service_check.port}"
%{~ endif ~}
type = "${service_check.type}"
path = "${service_check.path}"
interval = "${service_check.interval}"
timeout = "${service_check.timeout}"
check_restart {
limit = 5
grace = "90s"
}
}
%{~ endif ~}
}
%{~ endif ~}
%{~ for custom_service in custom_services ~}
service {
name = "${custom_service.name}"
provider = "nomad"
port = "${custom_service.port}"
tags = ${jsonencode(custom_service.tags)}
}
%{~ endfor ~}
config {
image = "${image}"
%{~if image_pull_timeout != null ~}
@ -205,6 +225,12 @@ EOF
%{~ endif ~}
}
%{~ endif ~}
%{~ if task_identity != null }
identity {
env = ${task_identity.env}
file = ${task_identity.file}
}
%{~ endif ~}
}
%{~ if mysql_bootstrap != null }
task "mysql-bootstrap" {
@ -354,8 +380,8 @@ $$;
}
config {
image = "alpine:3.17"
args = ["/bin/sh", "$${NOMAD_TASK_DIR}/start.sh"]
image = "iamthefij/stunnel:latest"
args = ["$${NOMAD_TASK_DIR}/stunnel.conf"]
}
resources {
@ -366,15 +392,6 @@ $$;
%{~ endif ~}
}
template {
data = <<EOF
set -e
apk add stunnel
exec stunnel {{ env "NOMAD_TASK_DIR" }}/stunnel.conf
EOF
destination = "$${NOMAD_TASK_DIR}/start.sh"
}
template {
data = <<EOF
syslog = no

View File

@ -21,7 +21,6 @@ variable "priority" {
description = "Scheduler priority of the service"
}
variable "image" {
type = string
description = "Image that should be run"
@ -39,6 +38,15 @@ variable "task_meta" {
description = "Meta attributes to attach to the task"
}
variable "task_identity" {
description = "Task workload identity"
type = object({
env = optional(bool, false)
file = optional(bool, false)
})
default = null
}
variable "group_meta" {
type = map(string)
default = {}
@ -110,7 +118,7 @@ variable "stunnel_resources" {
default = {
cpu = 50
memory = 50
memory = 15
memory_max = null
}
@ -268,3 +276,17 @@ variable "use_wesher" {
description = "Indicates whether or not services should expose themselves on the wesher network"
default = true
}
variable "service_check" {
description = "Health check for main ingress service"
type = object({
name = optional(string, "")
port = optional(string, "")
path = optional(string, "/")
interval = optional(string, "30s")
timeout = optional(string, "2s")
type = optional(string, "http")
})
default = {}
}
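Because every attribute is optional with a default, callers can set only the fields they care about; a hypothetical invocation (the service name, image, and port here are illustrative, not from the source):
module "example" {
  source       = "./service"
  name         = "example"
  image        = "nginx:1.25"
  service_port = 80

  service_check = {
    path     = "/healthz"
    interval = "10s"
    # name, port, timeout, and type fall back to the optional() defaults
  }
}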

View File

@ -2,11 +2,25 @@ module "sonarr" {
source = "./service"
name = "sonarr"
image = "lscr.io/linuxserver/sonarr:3.0.10"
image = "lscr.io/linuxserver/sonarr:4.0.2"
priority = 55
ingress = true
service_port = 8989
use_wesher = var.use_wesher
ingress_middlewares = [
"authelia@nomad"
]
use_postgres = true
postgres_bootstrap = {
enabled = true
databases = [
"sonarr",
"sonarr-logs",
]
}
env = {
PGID = 100
@ -16,7 +30,7 @@ module "sonarr" {
host_volumes = [
{
name = "sonarr-data"
name = "sonarr-config"
dest = "/config"
read_only = false
},
@ -33,3 +47,15 @@ module "sonarr" {
memory_max = 700
}
}
resource "nomad_variable" "authelia_service_rules_sonarr" {
path = "authelia/access_control/service_rules/sonarr"
items = {
name = "sonarr"
rule = <<EOH
policy: bypass
resources:
- '^/api([/?].*)?$'
EOH
}
}

View File

@ -13,7 +13,7 @@ job "unifi-traffic-route-ips" {
driver = "docker"
config {
image = "iamthefij/unifi-traffic-routes:0.0.1"
image = "iamthefij/unifi-traffic-routes:0.0.2"
}
env = {