initial infra commit

Jeremie Fraeys 2026-01-19 15:02:13 -05:00
parent 1d2f8e6141
commit 997aff6be3
No known key found for this signature in database
53 changed files with 3101 additions and 0 deletions

.env.example Normal file

@@ -0,0 +1,17 @@
ANSIBLE_PRIVATE_KEY_FILE=
TF_VAR_region=ca-central
TF_VAR_instance_type=g6-nanode-1
TF_VAR_image=linode/debian13
TF_VAR_ssh_port=22
TF_VAR_timezone=America/Toronto
TF_VAR_add_cloudflare_ips=false
TF_VAR_enable_cloudflare_dns=false
TF_VAR_enable_services_wildcard=true
TF_VAR_object_storage_bucket=
TF_VAR_object_storage_region=us-east-1
S3_BUCKET=
S3_REGION=us-east-1
S3_ENDPOINT=https://us-east-1.linodeobjects.com

.gitignore vendored Normal file

@@ -0,0 +1,22 @@
.terraform/
**/.terraform/
*.tfstate
*.tfstate.*
crash.log
terraform.tfvars
terraform/tfplan
.env
.env.*
!.env.example
.DS_Store
**/.DS_Store
.vault_pass
secrets/.vault_pass
inventory/hosts.yml
inventory/host_vars/web.yml
secrets/*
!secrets/vault.example.yml

README.md Normal file

@@ -0,0 +1,201 @@
# infra
## Overview
This repo manages two hosts:
- `web` (`jfraeys.com`)
- `services` (`services.jfraeys.com`)
The routing convention is `service.server.jfraeys.com`.
Examples:
- `grafana.jfraeys.com` -> services host
- `git.jfraeys.com` -> services host
Traefik runs on both servers and routes only the services running on that server.
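As a sketch, per-service routing is expressed with Traefik Docker labels. The snippet below is illustrative only; the real labels live in each role's compose template (e.g. `roles/grafana`), and the router rule and certresolver names follow what the playbooks assert:

```yaml
# Illustrative only; the roles/* compose templates are authoritative.
services:
  grafana:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.jfraeys.com`)"
      - "traefik.http.routers.grafana.tls.certresolver=cloudflare"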
## Quickstart
This repo is intended to be driven by `setup.sh`:
```bash
./setup.sh
```
What it does:
- Applies Terraform from `terraform/`
- Writes `inventory/hosts.yml` and `inventory/host_vars/web.yml` (gitignored)
- Runs `playbooks/services.yml` and `playbooks/app.yml`
If you want Terraform only:
```bash
./setup.sh --no-ansible
```
## Prereqs (local)
- `terraform`
- `ansible`
- SSH access to the hosts
If your SSH key is passphrase-protected, you must load it into your agent before running Ansible non-interactively:
```bash
# macOS (--apple-use-keychain is Apple's OpenSSH extension); elsewhere use plain `ssh-add`
ssh-add --apple-use-keychain ~/.ssh/id_ed25519
```
## DNS (Cloudflare)
Create A/CNAME records that point to the correct server IP.
Recommended:
- `jfraeys.com` -> A record to web server IPv4
- `services.jfraeys.com` -> A record to services server IPv4
- `grafana.jfraeys.com` -> A/CNAME to services
- `git.jfraeys.com` -> A/CNAME to services
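When `TF_VAR_enable_cloudflare_dns=true`, records can instead be managed from Terraform. A hedged sketch with the Cloudflare provider follows; the resource name and `var.cloudflare_zone_id` are assumptions, not necessarily what `terraform/` defines, and the record argument is `value` in provider v4 (`content` in v5):

```hcl
# Hypothetical example; the repo's terraform/ configuration is authoritative.
resource "cloudflare_record" "grafana" {
  zone_id = var.cloudflare_zone_id # assumed variable name
  name    = "grafana"
  type    = "CNAME"
  value   = "services.jfraeys.com"
  proxied = false
}
```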
## TLS
Traefik obtains certificates from Let's Encrypt via the Cloudflare DNS-01 challenge.
You must provide a Cloudflare API token in your local environment when running Ansible:
- `CF_DNS_API_TOKEN` (preferred)
- or `TF_VAR_cloudflare_api_token`
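For reference, a DNS-01 resolver in Traefik's static configuration looks roughly like this. This is a sketch: the resolver name and email match `group_vars`, but the storage path is an assumption and `roles/traefik` is authoritative:

```yaml
# Sketch of a Traefik static-config excerpt; roles/traefik is authoritative.
certificatesResolvers:
  cloudflare:
    acme:
      email: admin@jfraeys.com
      storage: /letsencrypt/acme.json  # assumed path
      dnsChallenge:
        provider: cloudflare  # reads CF_DNS_API_TOKEN from the environment
```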
## SSO (Authelia OIDC)
Authelia is exposed at:
- `https://auth.jfraeys.com` (issuer)
- `https://auth.jfraeys.com/.well-known/openid-configuration` (discovery)
Grafana is configured via `roles/grafana` using the Generic OAuth provider.
Forgejo is configured via `roles/forgejo` using the Forgejo admin CLI with `--provider=openidConnect` and `--auto-discover-url`.
Note: Forgejo pages that ask for an "OpenID URI" are legacy OpenID 2.0 and are not used for OIDC.
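For orientation, Grafana's Generic OAuth settings map onto Authelia roughly as below. A sketch only: `roles/grafana` renders the real values, the client secret is supplied from the vault, and the endpoint paths are Authelia's standard OIDC endpoints:

```ini
; Sketch of grafana.ini settings; roles/grafana is authoritative.
[auth.generic_oauth]
enabled = true
name = Authelia
client_id = grafana
scopes = openid profile email groups
auth_url = https://auth.jfraeys.com/api/oidc/authorization
token_url = https://auth.jfraeys.com/api/oidc/token
api_url = https://auth.jfraeys.com/api/oidc/userinfo
use_pkce = true
```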
## Secrets (Ansible Vault)
Secrets are stored in `secrets/vault.yml` (encrypted).
Create your vault from the template:
- `secrets/vault.example.yml` -> `secrets/vault.yml`
Run playbooks with either:
- `--ask-vault-pass`
- or a local password file (not committed): `--vault-password-file .vault_pass`
Notes:
- `secrets/vault.yml` is intentionally gitignored
- `inventory/hosts.yml` and `inventory/host_vars/web.yml` are generated by `setup.sh` and intentionally gitignored
## Playbooks
- `playbooks/services.yml`: deploy observability + forgejo on `services`
- `playbooks/app.yml`: deploy app-side dependencies on `web`
- `playbooks/test_config.yml`: smoke test host config and deployed stacks
- `playbooks/deploy.yml`: legacy/all-in-one deploy for the services host (no tags)
## Configuration split
- Vault (`secrets/vault.yml`): secrets (API tokens, passwords, access keys, and sensitive Terraform `TF_VAR_*` values)
- `.env`: non-secret configuration (still treated as sensitive), such as region/instance type and non-secret endpoints
## Linode Object Storage (demo apps)
If you already have a Linode Object Storage bucket, demo apps can use it via the S3-compatible API.
Recommended env vars (see `.env.example`):
- `S3_BUCKET`
- `S3_ENDPOINT` (example: `https://us-east-1.linodeobjects.com`)
- `S3_REGION`
Secrets (store in `secrets/vault.yml`):
- `S3_ACCESS_KEY_ID`
- `S3_SECRET_ACCESS_KEY`
Create a dedicated access key for demos and scope permissions as tightly as possible.
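The endpoint fallback used by `playbooks/test_config.yml` (an explicit `S3_ENDPOINT` wins; otherwise the endpoint is derived from the region) can be sketched as:

```python
def s3_endpoint(region: str, override: str = "") -> str:
    """Mirror the fallback in playbooks/test_config.yml: an explicit
    S3_ENDPOINT takes precedence; otherwise derive it from the region."""
    return override or f"https://{region}.linodeobjects.com"
```

For example, `s3_endpoint("us-east-1")` yields the default Linode endpoint shown in `.env.example`.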
## Grafana provisioning
Grafana is provisioned with Prometheus and Loki datasources via the Grafana provisioning mechanism (no manual UI setup required).
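Provisioned datasources are plain YAML files under Grafana's provisioning directory; the shape is roughly as below. A sketch only: the container URLs are assumptions based on the stack's service names and the ports used elsewhere in this repo (`9090`, `3100`):

```yaml
# Sketch of a datasource provisioning file; roles/grafana is authoritative.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090  # assumed container DNS name
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100  # assumed container DNS name
```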
## Host vars
Set `inventory/host_vars/web.yml`:
- `public_ipv4`: public IPv4 of `jfraeys.com`
This is used to allowlist Loki (`services:3100`) to only the web host.
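The generated file is a small YAML document, e.g. (the address below is a documentation placeholder, not a real host):

```yaml
# inventory/host_vars/web.yml (generated by setup.sh; gitignored)
public_ipv4: 203.0.113.10  # placeholder from the TEST-NET-3 range
```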
## Forgejo Actions runner (web host)
A Forgejo runner is deployed on the `web` host (`roles/forgejo_runner`).
- Requires `FORGEJO_RUNNER_REGISTRATION_TOKEN` in `secrets/vault.yml`.
- Uses a single generic `docker` label by default.
- The role auto re-registers the runner if labels change.
To force re-registration (e.g. after deleting the runner in the Forgejo UI):
```bash
ansible-playbook playbooks/app.yml \
--vault-password-file secrets/.vault_pass \
--limit web \
--tags forgejo_runner \
-e forgejo_runner_force_reregister=true
```
## Deploy
Services:
```bash
ansible-playbook playbooks/services.yml --ask-vault-pass
```
Web:
```bash
ansible-playbook playbooks/app.yml --ask-vault-pass
```
## Terraform
`./setup.sh` will export `TF_VAR_*` from `secrets/vault.yml` (prompting for vault password if needed) and then run Terraform with a saved plan.
## Notes
- Loki is exposed on `services:3100` but allowlisted in UFW to `web` only.
- Watchtower is enabled with label-based updates.
- Airflow/Spark are intentionally optional and can be enabled later via `deploy_airflow` / `deploy_spark`.
## Role layout
Services host (`services`):
- `roles/traefik`
- `roles/exporters` (node-exporter + cAdvisor)
- `roles/prometheus`
- `roles/loki`
- `roles/grafana`
- `roles/forgejo`
- `roles/watchtower`
Web host (`web`):
- `roles/traefik`
- `roles/app_core` (optional shared Postgres/Redis)
- `roles/forgejo_runner`

ansible.cfg Normal file

@@ -0,0 +1,10 @@
[defaults]
inventory = inventory/
remote_user = ansible
host_key_checking = True
roles_path = roles
interpreter_python = /usr/bin/python3
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=~/.ansible/cp/ansible-ssh-%%h-%%p-%%r -o StrictHostKeyChecking=accept-new -o IdentitiesOnly=yes


@@ -0,0 +1,11 @@
traefik_acme_email: "admin@jfraeys.com"
traefik_certresolver: "cloudflare"
ansible_port: "{{ lookup('env', 'TF_VAR_ssh_port') | default(22, true) }}"
ansible_ssh_private_key_file: "{{ lookup('env', 'ANSIBLE_PRIVATE_KEY_FILE') | default('~/.ssh/id_ed25519', true) }}"
grafana_hostname: "grafana.jfraeys.com"
forgejo_hostname: "git.jfraeys.com"
auth_hostname: "auth.jfraeys.com"
lldap_base_dn: "dc=jfraeys,dc=com"

playbooks/app.yml Normal file

@@ -0,0 +1,18 @@
---
- hosts: web_hosts
become: true
pre_tasks:
- name: Load vault vars if present
include_vars:
file: "{{ playbook_dir }}/../secrets/vault.yml"
when: (lookup('ansible.builtin.fileglob', playbook_dir ~ '/../secrets/vault.yml', wantlist=True) | length) > 0
tags: always
roles:
- role: docker
tags: [docker]
- role: traefik
tags: [traefik]
- role: app_core
tags: [app_core]
- role: forgejo_runner
tags: [forgejo_runner]

playbooks/deploy.yml Normal file

@@ -0,0 +1,26 @@
---
- name: Deploy all services
hosts: services_hosts
become: true
pre_tasks:
- name: Load vault vars if present
include_vars:
file: ../secrets/vault.yml
ignore_errors: true
roles:
- docker
- traefik
- lldap
- authelia
- exporters
- prometheus
- loki
- grafana
- forgejo
- watchtower
- role: airflow
when: deploy_airflow | default(false)
- role: spark
when: deploy_spark | default(false)

playbooks/hardening.yml Normal file

@@ -0,0 +1,6 @@
---
- name: Hardening
hosts: all
become: true
roles:
- hardening

playbooks/services.yml Normal file

@@ -0,0 +1,127 @@
---
- hosts: services_hosts
become: true
pre_tasks:
- name: Load vault vars if present
include_vars:
file: "{{ playbook_dir }}/../secrets/vault.yml"
when: (lookup('ansible.builtin.fileglob', playbook_dir ~ '/../secrets/vault.yml', wantlist=True) | length) > 0
tags: always
roles:
- role: docker
tags: [docker]
- role: traefik
tags: [traefik]
- role: lldap
tags: [lldap]
- role: authelia
tags: [authelia]
- role: exporters
tags: [exporters]
- role: prometheus
tags: [prometheus]
- role: loki
tags: [loki]
- role: grafana
tags: [grafana]
- role: forgejo
tags: [forgejo]
- role: watchtower
tags: [watchtower]
post_tasks:
- name: Read Grafana Traefik router rule label
shell: |
set -euo pipefail
id=$(docker compose ps -q grafana)
docker inspect ${id} | python3 -c 'import json,sys; d=json.load(sys.stdin)[0]; print(d.get("Config",{}).get("Labels",{}).get("traefik.http.routers.grafana.rule",""))'
args:
chdir: /opt/grafana
register: grafana_router_rule
changed_when: false
tags: [grafana]
- name: Fail if Grafana Traefik router rule label is not configured as expected
assert:
that:
- grafana_router_rule.stdout == ("Host(`" ~ grafana_hostname ~ "`)")
fail_msg: "Grafana Traefik router rule label mismatch. expected=Host(`{{ grafana_hostname }}`) got={{ grafana_router_rule.stdout | default('') }}. If you used --start-at-task, rerun the play without it so docker compose can recreate the container with updated labels."
tags: [grafana]
- name: Trigger Traefik certificate request for Grafana hostname
command: curl -k -s -o /dev/null -w "%{http_code}" --resolve "{{ grafana_hostname }}:443:127.0.0.1" "https://{{ grafana_hostname }}/"
register: grafana_tls_warmup
changed_when: false
retries: 30
delay: 2
until: grafana_tls_warmup.stdout != '000'
tags: [grafana]
- name: Wait for Traefik certificate SAN to include Grafana hostname
shell: |
set -euo pipefail
echo | openssl s_client -servername "{{ grafana_hostname }}" -connect 127.0.0.1:443 2>/dev/null | openssl x509 -noout -text | grep -q "DNS:{{ grafana_hostname }}"
register: grafana_origin_tls
changed_when: false
retries: 90
delay: 5
until: grafana_origin_tls.rc == 0
tags: [grafana]
- name: Trigger Traefik certificate request for Forgejo hostname
command: curl -k -s -o /dev/null -w "%{http_code}" --resolve "{{ forgejo_hostname }}:443:127.0.0.1" "https://{{ forgejo_hostname }}/"
register: forgejo_tls_warmup
changed_when: false
retries: 30
delay: 2
until: forgejo_tls_warmup.stdout != '000'
tags: [forgejo]
- name: Read Forgejo Traefik router rule label
shell: |
set -euo pipefail
id=$(docker compose ps -q forgejo)
docker inspect ${id} | python3 -c 'import json,sys; d=json.load(sys.stdin)[0]; print(d.get("Config",{}).get("Labels",{}).get("traefik.http.routers.forgejo.rule",""))'
args:
chdir: /opt/forgejo
register: forgejo_router_rule
changed_when: false
tags: [forgejo]
- name: Fail if Forgejo Traefik router rule label is not configured as expected
assert:
that:
- forgejo_router_rule.stdout == ("Host(`" ~ forgejo_hostname ~ "`)")
fail_msg: "Forgejo Traefik router rule label mismatch. expected=Host(`{{ forgejo_hostname }}`) got={{ forgejo_router_rule.stdout | default('') }}. If you used --start-at-task, rerun the play without it so docker compose can recreate the container with updated labels."
tags: [forgejo]
- name: Wait for Traefik certificate SAN to include Forgejo hostname
shell: |
set -euo pipefail
echo | openssl s_client -servername "{{ forgejo_hostname }}" -connect 127.0.0.1:443 2>/dev/null | openssl x509 -noout -text | grep -q "DNS:{{ forgejo_hostname }}"
register: forgejo_origin_tls
changed_when: false
retries: 90
delay: 5
until: forgejo_origin_tls.rc == 0
tags: [forgejo]
- name: Trigger Traefik certificate request for Authelia hostname
command: curl -k -s -o /dev/null -w "%{http_code}" --resolve "{{ auth_hostname }}:443:127.0.0.1" "https://{{ auth_hostname }}/"
register: authelia_tls_warmup
changed_when: false
retries: 30
delay: 2
until: authelia_tls_warmup.stdout != '000'
tags: [authelia]
- name: Wait for Traefik certificate SAN to include Authelia hostname
shell: |
set -euo pipefail
echo | openssl s_client -servername "{{ auth_hostname }}" -connect 127.0.0.1:443 2>/dev/null | openssl x509 -noout -text | grep -q "DNS:{{ auth_hostname }}"
register: authelia_origin_tls
changed_when: false
retries: 90
delay: 5
until: authelia_origin_tls.rc == 0
tags: [authelia]

playbooks/test_config.yml Normal file

@@ -0,0 +1,399 @@
---
- name: Test Deployment Configuration
hosts: all
become: true
tasks:
- name: Load vault vars if present
include_vars:
file: "{{ playbook_dir }}/../secrets/vault.yml"
no_log: true
when: (lookup('ansible.builtin.fileglob', playbook_dir ~ '/../secrets/vault.yml', wantlist=True) | length) > 0
- name: Check SSH service status
command: systemctl is-active sshd
register: ssh_status
changed_when: false
- debug:
msg: "SSH service is {{ ssh_status.stdout | default('') }}"
- name: Check SSH Port Configuration
command: sshd -T
register: ssh_port
changed_when: false
failed_when: false
- debug:
msg: "SSH port configured as {{ (ssh_port.stdout | default('') | regex_search('(?m)^port\\s+([0-9]+)$', '\\1')) | default('Unknown') }}"
- name: Check Docker version
command: docker --version
register: docker_version
changed_when: false
- debug:
msg: "Docker Version: {{ docker_version.stdout }}"
- name: Check Docker Compose version (hyphen)
command: docker-compose --version
register: docker_compose_version_hyphen
failed_when: false
changed_when: false
- name: Check Docker Compose version (docker compose)
command: docker compose version
register: docker_compose_version_space
failed_when: false
changed_when: false
- name: Display Docker Compose version
debug:
msg: >
{% if docker_compose_version_hyphen.stdout %}
Docker Compose version (docker-compose): {{ docker_compose_version_hyphen.stdout }}
{% elif docker_compose_version_space.stdout %}
Docker Compose version (docker compose): {{ docker_compose_version_space.stdout }}
{% else %}
Docker Compose not found
{% endif %}
- name: Check Ansible version
command: ansible --version
register: ansible_version
changed_when: false
failed_when: false
- debug:
msg: "Ansible Version: {{ (ansible_version.stdout | default('')) .split('\n')[0] if (ansible_version.stdout | default('') | length) > 0 else 'Not installed' }}"
- name: Check UFW status
command: ufw status verbose
register: ufw_status
changed_when: false
- debug:
msg: "UFW Status: {{ ufw_status.stdout }}"
- name: Check Fail2ban service status
command: systemctl is-active fail2ban
register: fail2ban_status
changed_when: false
failed_when: false
- debug:
msg: "Fail2ban is {{ fail2ban_status.stdout }}"
- name: Display logrotate custom config
command: cat /etc/logrotate.d/custom
register: logrotate_config
changed_when: false
failed_when: false
- debug:
msg: "Logrotate custom config:\n{{ logrotate_config.stdout | default('No custom logrotate config found') }}"
- name: Check running Docker containers
command: docker ps
register: docker_ps
changed_when: false
- debug:
msg: "Docker containers:\n{{ docker_ps.stdout }}"
- name: Determine host role
set_fact:
is_services_host: "{{ 'services_hosts' in group_names }}"
is_web_host: "{{ 'web_hosts' in group_names }}"
- name: Define expected stacks for services host
set_fact:
expected_stacks:
- { name: traefik, dir: /opt/traefik }
- { name: lldap, dir: /opt/lldap }
- { name: authelia, dir: /opt/authelia }
- { name: exporters, dir: /opt/exporters }
- { name: prometheus, dir: /opt/prometheus }
- { name: loki, dir: /opt/loki }
- { name: grafana, dir: /opt/grafana }
- { name: forgejo, dir: /opt/forgejo }
- { name: watchtower, dir: /opt/watchtower }
when: is_services_host
- name: Define expected stacks for web host
set_fact:
expected_stacks:
- { name: traefik, dir: /opt/traefik }
- { name: app_core, dir: /opt/app }
when: is_web_host
- name: Check that expected compose directories exist
stat:
path: "{{ item.dir }}/docker-compose.yml"
register: compose_files
loop: "{{ expected_stacks | default([]) }}"
changed_when: false
- name: Fail if any compose file is missing
assert:
that:
- item.stat.exists
fail_msg: "Missing docker-compose.yml for {{ item.item.name }} at {{ item.item.dir }}/docker-compose.yml"
loop: "{{ compose_files.results | default([]) }}"
when: expected_stacks is defined
- name: Read expected services per stack
command: docker compose config --services
args:
chdir: "{{ item.dir }}"
register: stack_expected
loop: "{{ expected_stacks | default([]) }}"
changed_when: false
- name: Read service status/health per stack (docker inspect)
shell: |
set -euo pipefail
ids=$(docker compose ps -q)
if [ -z "${ids}" ]; then
exit 0
fi
{% raw %}docker inspect --format '{{ index .Config.Labels "com.docker.compose.service" }} {{ .State.Status }} {{ if .State.Health }}{{ .State.Health.Status }}{{ else }}none{{ end }}' ${ids}{% endraw %}
args:
chdir: "{{ item.dir }}"
register: stack_status
loop: "{{ expected_stacks | default([]) }}"
changed_when: false
failed_when: false
- name: Assert all services in each stack are running (and healthy if healthcheck exists)
assert:
that:
- (expected | difference(running_services)) | length == 0
- bad_health_services | length == 0
fail_msg: >-
Stack {{ stack.name }} service status unhealthy.
Missing running={{ expected | difference(running_services) }}.
Bad health={{ bad_health_services }}.
Expected={{ expected }}
Inspect={{ status_lines }}
loop: "{{ (expected_stacks | default([])) | zip(stack_expected.results, stack_status.results) | list }}"
vars:
stack: "{{ item.0 }}"
expected: "{{ item.1.stdout_lines | default([]) }}"
status_lines: "{{ item.2.stdout_lines | default([]) }}"
running_services: >-
{{ status_lines
| map('regex_findall', '^(\S+)\s+running\s+')
| select('truthy')
| map('first')
| list }}
ok_services: >-
{{ status_lines
| map('regex_findall', '^(\S+)\s+running\s+(?:healthy|none)\s*$')
| select('truthy')
| map('first')
| list }}
bad_health_services: >-
{{ (running_services | default([])) | difference(ok_services | default([])) }}
when: expected_stacks is defined
- name: Ensure proxy network exists
command: docker network inspect proxy
register: proxy_network
changed_when: false
- name: Ensure monitoring network exists on services host
command: docker network inspect monitoring
register: monitoring_network
changed_when: false
when: is_services_host
- name: Check Prometheus readiness on services host
command: docker compose exec -T prometheus wget -qO- http://127.0.0.1:9090/-/ready
args:
chdir: /opt/prometheus
register: prometheus_ready
changed_when: false
when: is_services_host
- name: Fail if Prometheus is not ready
assert:
that:
- prometheus_ready.stdout | default('') in ['Prometheus is Ready.', 'Prometheus Server is Ready.']
fail_msg: "Prometheus readiness check failed. Output={{ prometheus_ready.stdout | default('') }}"
when: is_services_host
- name: Check Grafana health on services host
command: docker compose exec -T grafana wget -qO- http://127.0.0.1:3000/api/health
args:
chdir: /opt/grafana
register: grafana_health
changed_when: false
failed_when: false
when: is_services_host
- name: Fail if Grafana health endpoint is not reachable
assert:
that:
- grafana_health.rc == 0
fail_msg: "Grafana health endpoint check failed (inside container). rc={{ grafana_health.rc }} output={{ grafana_health.stdout | default('') }}"
when: is_services_host
- name: Check Loki readiness on services host
uri:
url: http://127.0.0.1:3100/ready
method: GET
status_code: [200, 503]
register: loki_ready
until: loki_ready.status == 200
retries: 30
delay: 2
changed_when: false
when: is_services_host
- name: Check Traefik dynamic config contains Grafana router rule
shell: |
set -euo pipefail
grep -Fq 'Host(`{{ grafana_hostname }}`)' /opt/traefik/dynamic/base.yml
register: grafana_router_rule
changed_when: false
failed_when: false
when: is_services_host
- name: Fail if Grafana Traefik router rule is not configured as expected
assert:
that:
- grafana_router_rule.rc == 0
fail_msg: "Grafana Traefik router rule mismatch in /opt/traefik/dynamic/base.yml. expected=Host(`{{ grafana_hostname }}`)"
when: is_services_host
- name: Check Traefik dynamic config contains Forgejo router rule
shell: |
set -euo pipefail
grep -Fq 'Host(`{{ forgejo_hostname }}`)' /opt/traefik/dynamic/base.yml
register: forgejo_router_rule
changed_when: false
failed_when: false
when: is_services_host
- name: Fail if Forgejo Traefik router rule is not configured as expected
assert:
that:
- forgejo_router_rule.rc == 0
fail_msg: "Forgejo Traefik router rule mismatch in /opt/traefik/dynamic/base.yml. expected=Host(`{{ forgejo_hostname }}`)"
when: is_services_host
- name: Check Traefik dynamic config contains Authelia router rule
shell: |
set -euo pipefail
grep -Fq 'Host(`{{ auth_hostname }}`)' /opt/traefik/dynamic/base.yml
register: authelia_router_rule
changed_when: false
failed_when: false
when: is_services_host
- name: Fail if Authelia Traefik router rule is not configured as expected
assert:
that:
- authelia_router_rule.rc == 0
fail_msg: "Authelia Traefik router rule mismatch in /opt/traefik/dynamic/base.yml. expected=Host(`{{ auth_hostname }}`)"
when: is_services_host
- name: Check Traefik serves a valid TLS certificate for Grafana hostname (origin)
shell: |
set -euo pipefail
echo | openssl s_client -servername "{{ grafana_hostname }}" -connect 127.0.0.1:443 2>/dev/null | grep -q "Verify return code: 0 (ok)"
register: grafana_origin_tls
changed_when: false
retries: 30
delay: 2
until: grafana_origin_tls.rc == 0
when: is_services_host
- name: Check Traefik serves a valid TLS certificate for Forgejo hostname (origin)
shell: |
set -euo pipefail
echo | openssl s_client -servername "{{ forgejo_hostname }}" -connect 127.0.0.1:443 2>/dev/null | grep -q "Verify return code: 0 (ok)"
register: forgejo_origin_tls
changed_when: false
retries: 30
delay: 2
until: forgejo_origin_tls.rc == 0
when: is_services_host
- name: Check Traefik serves a valid TLS certificate for Authelia hostname (origin)
shell: |
set -euo pipefail
echo | openssl s_client -servername "{{ auth_hostname }}" -connect 127.0.0.1:443 2>/dev/null | grep -q "Verify return code: 0 (ok)"
register: authelia_origin_tls
changed_when: false
retries: 30
delay: 2
until: authelia_origin_tls.rc == 0
when: is_services_host
- name: Check Authelia OIDC discovery issuer (origin)
shell: |
set -euo pipefail
curl -k -sS --resolve "{{ auth_hostname }}:443:127.0.0.1" "https://{{ auth_hostname }}/.well-known/openid-configuration" \
| python3 -c 'import json,sys; print(json.load(sys.stdin).get("issuer",""))'
register: authelia_oidc_issuer
changed_when: false
retries: 30
delay: 2
until: authelia_oidc_issuer.stdout | default('') | length > 0
when: is_services_host
- name: Fail if Authelia OIDC discovery issuer is not configured as expected
assert:
that:
- authelia_oidc_issuer.stdout == ("https://" ~ auth_hostname)
fail_msg: "Authelia OIDC issuer mismatch. expected=https://{{ auth_hostname }} got={{ authelia_oidc_issuer.stdout | default('') }}"
when: is_services_host
- name: Check LLDAP web UI is reachable on services host
uri:
url: http://127.0.0.1:17170/
method: GET
status_code: [200, 302]
register: lldap_web
changed_when: false
when: is_services_host
- name: Read object storage configuration from controller environment
set_fact:
s3_bucket: "{{ lookup('env', 'S3_BUCKET') | default('', true) }}"
s3_region: "{{ lookup('env', 'S3_REGION') | default(lookup('env', 'TF_VAR_object_storage_region'), true) | default('us-east-1', true) }}"
changed_when: false
- name: Compute object storage endpoint from controller environment
set_fact:
s3_endpoint: "{{ lookup('env', 'S3_ENDPOINT') | default('https://' ~ s3_region ~ '.linodeobjects.com', true) }}"
changed_when: false
- name: Smoke test Linode Object Storage credentials (head-bucket)
command: >-
docker run --rm
-e AWS_ACCESS_KEY_ID
-e AWS_SECRET_ACCESS_KEY
-e AWS_DEFAULT_REGION
-e AWS_EC2_METADATA_DISABLED=true
amazon/aws-cli:2.15.57
s3api head-bucket --bucket {{ s3_bucket | quote }} --endpoint-url {{ s3_endpoint | quote }}
environment:
AWS_ACCESS_KEY_ID: "{{ S3_ACCESS_KEY_ID | default('') }}"
AWS_SECRET_ACCESS_KEY: "{{ S3_SECRET_ACCESS_KEY | default('') }}"
AWS_DEFAULT_REGION: "{{ s3_region }}"
register: s3_head_bucket
changed_when: false
no_log: true
when:
- (s3_bucket | default('') | length) > 0
- (S3_ACCESS_KEY_ID | default('') | length) > 0
- (S3_SECRET_ACCESS_KEY | default('') | length) > 0
- name: Fail if object storage smoke test failed
assert:
that:
- s3_head_bucket.rc == 0
fail_msg: "Object storage smoke test failed (head-bucket). Check S3_BUCKET/S3_REGION/S3_ENDPOINT and S3_ACCESS_KEY_ID/S3_SECRET_ACCESS_KEY in vault."
when:
- (s3_bucket | default('') | length) > 0
- (S3_ACCESS_KEY_ID | default('') | length) > 0
- (S3_SECRET_ACCESS_KEY | default('') | length) > 0
- name: Check Loki is reachable from web host (allowlist)
uri:
url: "http://{{ hostvars['services'].ansible_host }}:3100/ready"
method: GET
status_code: 200
register: loki_from_web_ready
when: is_web_host


@@ -0,0 +1,4 @@
---
- name: Airflow role placeholder
debug:
msg: "Airflow role is not implemented yet (deploy_airflow is optional)."


@@ -0,0 +1,59 @@
---
- name: Read Postgres password
set_fact:
app_core_postgres_password: "{{ POSTGRES_PASSWORD | default(lookup('env', 'POSTGRES_PASSWORD')) }}"
- name: Read S3 configuration (optional)
set_fact:
app_core_s3_bucket: "{{ S3_BUCKET | default(lookup('env', 'S3_BUCKET')) | default('') }}"
app_core_s3_region: "{{ S3_REGION | default(lookup('env', 'S3_REGION')) | default('us-east-1') }}"
app_core_s3_endpoint: "{{ S3_ENDPOINT | default(lookup('env', 'S3_ENDPOINT')) | default('') }}"
app_core_s3_access_key_id: "{{ S3_ACCESS_KEY_ID | default(lookup('env', 'S3_ACCESS_KEY_ID')) | default('') }}"
app_core_s3_secret_access_key: "{{ S3_SECRET_ACCESS_KEY | default(lookup('env', 'S3_SECRET_ACCESS_KEY')) | default('') }}"
no_log: true
- name: Fail if Postgres password is missing
fail:
msg: "POSTGRES_PASSWORD is required"
when: app_core_postgres_password | length == 0
- name: Create app directory
file:
path: /opt/app
state: directory
- name: Write app environment file (optional)
copy:
dest: /opt/app/app.env
mode: '0600'
content: |
S3_BUCKET={{ app_core_s3_bucket }}
S3_REGION={{ app_core_s3_region }}
S3_ENDPOINT={{ app_core_s3_endpoint | default('https://' ~ app_core_s3_region ~ '.linodeobjects.com') }}
S3_ACCESS_KEY_ID={{ app_core_s3_access_key_id }}
S3_SECRET_ACCESS_KEY={{ app_core_s3_secret_access_key }}
when:
- (app_core_s3_bucket | length) > 0
- (app_core_s3_access_key_id | length) > 0
- (app_core_s3_secret_access_key | length) > 0
no_log: true
- name: Copy Docker Compose file for app
template:
src: docker-compose.yml.j2
dest: /opt/app/docker-compose.yml
- name: Ensure app network exists
command: docker network inspect app
register: app_network
changed_when: false
failed_when: false
- name: Create app network if missing
command: docker network create app
when: app_network.rc != 0
- name: Deploy app stack
command: docker compose up -d
args:
chdir: /opt/app


@@ -0,0 +1,29 @@
services:
postgres:
image: postgres:16
environment:
POSTGRES_PASSWORD: "{{ app_core_postgres_password }}"
POSTGRES_USER: "app"
POSTGRES_DB: "app"
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- app
restart: unless-stopped
redis:
image: redis:7
command: ["redis-server", "--appendonly", "yes"]
volumes:
- redis_data:/data
networks:
- app
restart: unless-stopped
volumes:
postgres_data:
redis_data:
networks:
app:
external: true


@@ -0,0 +1,120 @@
---
- name: Read LLDAP admin password (for Authelia LDAP bind)
set_fact:
lldap_admin_password: "{{ LLDAP_ADMIN_PASSWORD | default(lookup('env', 'LLDAP_ADMIN_PASSWORD')) }}"
no_log: true
- name: Fail if LLDAP admin password is missing
fail:
msg: "LLDAP_ADMIN_PASSWORD is required"
when: lldap_admin_password | length == 0
- name: Read Authelia identity validation reset password JWT secret
set_fact:
authelia_reset_password_jwt_secret: "{{ AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET | default(lookup('env', 'AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET')) }}"
no_log: true
- name: Fail if Authelia identity validation reset password JWT secret is missing
fail:
msg: "AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET is required"
when: authelia_reset_password_jwt_secret | length == 0
- name: Read Authelia session secret
set_fact:
authelia_session_secret: "{{ AUTHELIA_SESSION_SECRET | default(lookup('env', 'AUTHELIA_SESSION_SECRET')) }}"
no_log: true
- name: Fail if Authelia session secret is missing
fail:
msg: "AUTHELIA_SESSION_SECRET is required"
when: authelia_session_secret | length == 0
- name: Read Authelia storage encryption key
set_fact:
authelia_storage_encryption_key: "{{ AUTHELIA_STORAGE_ENCRYPTION_KEY | default(lookup('env', 'AUTHELIA_STORAGE_ENCRYPTION_KEY')) }}"
no_log: true
- name: Fail if Authelia storage encryption key is missing
fail:
msg: "AUTHELIA_STORAGE_ENCRYPTION_KEY is required"
when: authelia_storage_encryption_key | length == 0
- name: Read Authelia OIDC HMAC secret
set_fact:
authelia_oidc_hmac_secret: "{{ AUTHELIA_OIDC_HMAC_SECRET | default(lookup('env', 'AUTHELIA_OIDC_HMAC_SECRET')) }}"
no_log: true
- name: Fail if Authelia OIDC HMAC secret is missing
fail:
msg: "AUTHELIA_OIDC_HMAC_SECRET is required"
when: authelia_oidc_hmac_secret | length == 0
- name: Read Authelia OIDC private key
set_fact:
authelia_oidc_private_key_pem: "{{ AUTHELIA_OIDC_PRIVATE_KEY_PEM | default(lookup('env', 'AUTHELIA_OIDC_PRIVATE_KEY_PEM')) }}"
no_log: true
- name: Fail if Authelia OIDC private key is missing
fail:
msg: "AUTHELIA_OIDC_PRIVATE_KEY_PEM is required"
when: authelia_oidc_private_key_pem | length == 0
- name: Read OIDC client secret for Grafana
set_fact:
authelia_oidc_grafana_client_secret_plain: "{{ AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET | default(lookup('env', 'AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET')) }}"
no_log: true
- name: Fail if OIDC client secret for Grafana is missing
fail:
msg: "AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET is required"
when: authelia_oidc_grafana_client_secret_plain | length == 0
- name: Read OIDC client secret for Forgejo
set_fact:
authelia_oidc_forgejo_client_secret_plain: "{{ AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET | default(lookup('env', 'AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET')) }}"
no_log: true
- name: Fail if OIDC client secret for Forgejo is missing
fail:
msg: "AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET is required"
when: authelia_oidc_forgejo_client_secret_plain | length == 0
- name: Generate OIDC client secret digest for Grafana
command: >-
docker run --rm authelia/authelia:latest authelia crypto hash generate pbkdf2 --variant sha512 --iterations 310000 --password {{ authelia_oidc_grafana_client_secret_plain | quote }}
register: authelia_oidc_grafana_client_secret_hash_cmd
changed_when: false
no_log: true
- name: Generate OIDC client secret digest for Forgejo
command: >-
docker run --rm authelia/authelia:latest authelia crypto hash generate pbkdf2 --variant sha512 --iterations 310000 --password {{ authelia_oidc_forgejo_client_secret_plain | quote }}
register: authelia_oidc_forgejo_client_secret_hash_cmd
changed_when: false
no_log: true
- name: Set OIDC client secret digests
set_fact:
authelia_oidc_grafana_client_secret_hash: "{{ authelia_oidc_grafana_client_secret_hash_cmd.stdout | trim | regex_replace('^Digest:\\s*', '') }}"
authelia_oidc_forgejo_client_secret_hash: "{{ authelia_oidc_forgejo_client_secret_hash_cmd.stdout | trim | regex_replace('^Digest:\\s*', '') }}"
no_log: true
- name: Create Authelia directory
file:
path: /opt/authelia
state: directory
- name: Copy Authelia configuration
template:
src: configuration.yml.j2
dest: /opt/authelia/configuration.yml
- name: Copy Docker Compose file for Authelia
template:
src: docker-compose.yml.j2
dest: /opt/authelia/docker-compose.yml
- name: Deploy Authelia
command: docker compose up -d --force-recreate
args:
chdir: /opt/authelia
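A quick local sketch of the digest post-processing in "Set OIDC client secret digests": `authelia crypto hash generate` prints its result as `Digest: <hash>`, and only the hash itself belongs in the client configuration. The sample output string below is made up for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sample of what `authelia crypto hash generate pbkdf2` prints.
sample_output='Digest: $pbkdf2-sha512$310000$c2FsdA$aGFzaA'

# Same effect as the play's regex_replace('^Digest:\s*', '').
digest="${sample_output#Digest: }"
printf '%s\n' "$digest"
```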

@@ -0,0 +1,72 @@
server:
address: 'tcp://:9091/'
log:
level: 'info'
identity_validation:
reset_password:
jwt_secret: "{{ authelia_reset_password_jwt_secret }}"
session:
secret: "{{ authelia_session_secret }}"
cookies:
- domain: 'jfraeys.com'
authelia_url: 'https://{{ auth_hostname }}'
storage:
encryption_key: "{{ authelia_storage_encryption_key }}"
local:
path: '/config/db.sqlite3'
notifier:
filesystem:
filename: '/config/notification.txt'
authentication_backend:
ldap:
implementation: 'lldap'
address: 'ldap://lldap:3890'
base_dn: '{{ lldap_base_dn }}'
    user: 'uid=admin,ou=people,{{ lldap_base_dn }}'
password: "{{ lldap_admin_password }}"
access_control:
default_policy: 'one_factor'
identity_providers:
oidc:
hmac_secret: "{{ authelia_oidc_hmac_secret }}"
jwks:
- algorithm: 'RS256'
use: 'sig'
key: |
{% for line in authelia_oidc_private_key_pem.splitlines() %}
{{ line }}
{% endfor %}
clients:
- client_id: 'grafana'
client_name: 'Grafana'
client_secret: "{{ authelia_oidc_grafana_client_secret_hash }}"
redirect_uris:
- 'https://{{ grafana_hostname }}/login/generic_oauth'
scopes:
- 'openid'
- 'profile'
- 'email'
- 'groups'
authorization_policy: 'one_factor'
require_pkce: true
- client_id: 'forgejo'
client_name: 'Forgejo'
client_secret: "{{ authelia_oidc_forgejo_client_secret_hash }}"
redirect_uris:
- 'https://{{ forgejo_hostname }}/user/oauth2/authelia/callback'
scopes:
- 'openid'
- 'email'
- 'profile'
- 'groups'
authorization_policy: 'one_factor'
require_pkce: true

@@ -0,0 +1,12 @@
services:
authelia:
image: authelia/authelia:latest
volumes:
- /opt/authelia:/config
networks:
- proxy
restart: unless-stopped
networks:
proxy:
external: true

roles/docker/tasks/main.yml
@@ -0,0 +1,174 @@
---
- name: Check if Docker is installed
command: docker --version
register: docker_installed
changed_when: false
failed_when: false
- name: Check if Docker Compose (v2) is installed
command: docker compose version
register: docker_compose_installed
changed_when: false
failed_when: false
when: ansible_facts['os_family'] == "Debian"
- name: Install Docker APT repo dependencies
apt:
name:
- ca-certificates
- curl
- gnupg
state: present
update_cache: true
when: ansible_facts['os_family'] == "Debian" and (docker_installed.rc != 0 or (docker_compose_installed is defined and docker_compose_installed.rc != 0))
- name: Determine Docker repository codename and architecture
set_fact:
docker_repo_codename: "{{ 'bookworm' if ansible_facts['distribution_release'] in ['trixie'] else ansible_facts['distribution_release'] }}"
docker_repo_arch: "{{ 'amd64' if ansible_facts['architecture'] == 'x86_64' else ('arm64' if ansible_facts['architecture'] in ['aarch64', 'arm64'] else ansible_facts['architecture']) }}"
when: ansible_facts['os_family'] == "Debian" and (docker_installed.rc != 0 or (docker_compose_installed is defined and docker_compose_installed.rc != 0))
- name: Ensure Docker apt keyrings directory exists
file:
path: /etc/apt/keyrings
state: directory
mode: "0755"
when: ansible_facts['os_family'] == "Debian" and (docker_installed.rc != 0 or (docker_compose_installed is defined and docker_compose_installed.rc != 0))
- name: Install Docker GPG key
get_url:
url: https://download.docker.com/linux/debian/gpg
dest: /etc/apt/keyrings/docker.asc
mode: "0644"
when: ansible_facts['os_family'] == "Debian" and (docker_installed.rc != 0 or (docker_compose_installed is defined and docker_compose_installed.rc != 0))
- name: Add Docker apt repository
apt_repository:
repo: "deb [arch={{ docker_repo_arch }} signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian {{ docker_repo_codename }} stable"
state: present
filename: docker
when: ansible_facts['os_family'] == "Debian" and (docker_installed.rc != 0 or (docker_compose_installed is defined and docker_compose_installed.rc != 0))
- name: Install Docker on Linux (Debian)
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
state: present
update_cache: true
register: docker_ce_install
ignore_errors: true
when: ansible_facts['os_family'] == "Debian" and (docker_installed.rc != 0 or (docker_compose_installed is defined and docker_compose_installed.rc != 0))
- name: Fallback - install Docker from Debian repos if docker-ce is unavailable
apt:
name:
- docker.io
state: present
update_cache: true
when: ansible_facts['os_family'] == "Debian" and (docker_ce_install is defined and docker_ce_install is failed)
- name: Ensure Docker CLI plugins directory exists
file:
path: /usr/local/lib/docker/cli-plugins
state: directory
mode: "0755"
when: ansible_facts['os_family'] == "Debian" and (docker_ce_install is defined and docker_ce_install is failed)
- name: Fallback - install Docker Compose v2 plugin binary
get_url:
url: "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-{{ 'x86_64' if ansible_facts['architecture'] == 'x86_64' else 'aarch64' }}"
dest: /usr/local/lib/docker/cli-plugins/docker-compose
mode: "0755"
when: ansible_facts['os_family'] == "Debian" and (docker_ce_install is defined and docker_ce_install is failed)
- name: Check if Docker Desktop is running on macOS
  # pgrep exits non-zero when no matching process exists
  command: pgrep -x Docker
register: docker_desktop_running
ignore_errors: true
when: ansible_facts['os_family'] == "Darwin"
- name: Notify if Docker Desktop is not running
debug:
msg: "Docker Desktop is not running. Please start Docker Desktop."
when: ansible_facts['os_family'] == "Darwin" and docker_desktop_running is defined and docker_desktop_running.rc != 0
- name: Start and enable Docker service on Linux
service:
name: docker
state: started
enabled: true
when: ansible_facts['os_family'] == "Debian"
- name: Ensure /etc/docker exists
file:
path: /etc/docker
state: directory
mode: "0755"
when: ansible_facts['os_family'] == "Debian"
- name: Check if Docker daemon.json exists
stat:
path: /etc/docker/daemon.json
register: docker_daemon_json_stat
when: ansible_facts['os_family'] == "Debian"
- name: Read existing Docker daemon.json
slurp:
path: /etc/docker/daemon.json
register: docker_daemon_json_slurp
when:
- ansible_facts['os_family'] == "Debian"
- docker_daemon_json_stat.stat.exists
- name: Parse existing Docker daemon.json
set_fact:
docker_daemon_json_current: "{{ (docker_daemon_json_slurp.content | b64decode) | from_json }}"
when:
- ansible_facts['os_family'] == "Debian"
- docker_daemon_json_stat.stat.exists
- name: Set empty Docker daemon.json config when missing
set_fact:
docker_daemon_json_current: {}
when:
- ansible_facts['os_family'] == "Debian"
- not docker_daemon_json_stat.stat.exists
- name: Build desired Docker daemon.json config
set_fact:
docker_daemon_json_desired: >-
{{
docker_daemon_json_current
| combine({
'log-driver': 'json-file',
'log-opts': (docker_daemon_json_current['log-opts'] | default({}))
| combine({
'max-size': '10m',
'max-file': '5'
})
}, recursive=True)
}}
when: ansible_facts['os_family'] == "Debian"
- name: Write Docker daemon.json
copy:
dest: /etc/docker/daemon.json
content: "{{ docker_daemon_json_desired | to_nice_json }}"
owner: root
group: root
mode: "0644"
register: docker_daemon_json_write
when: ansible_facts['os_family'] == "Debian"
- name: Restart Docker when daemon.json changes
service:
name: docker
state: restarted
when:
- ansible_facts['os_family'] == "Debian"
- docker_daemon_json_write is changed
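The daemon.json handling above preserves any existing settings and layers log rotation on top. A standalone sketch of the same merge, assuming `python3` is on PATH (the sample existing config is hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical existing daemon.json content with unrelated keys.
existing='{"storage-driver": "overlay2", "log-opts": {"max-size": "5m"}}'

# Merge log rotation settings on top, keeping everything else.
merged=$(printf '%s' "$existing" | python3 -c '
import json, sys
cfg = json.load(sys.stdin)
cfg["log-driver"] = "json-file"
opts = cfg.get("log-opts", {})
opts.update({"max-size": "10m", "max-file": "5"})
cfg["log-opts"] = opts
print(json.dumps(cfg, sort_keys=True))
')
printf '%s\n' "$merged"
```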

@@ -0,0 +1,25 @@
---
- name: Create exporters directory
file:
path: /opt/exporters
state: directory
- name: Ensure monitoring network exists
command: docker network inspect monitoring
register: monitoring_network
changed_when: false
failed_when: false
- name: Create monitoring network if missing
command: docker network create monitoring
when: monitoring_network.rc != 0
- name: Copy Docker Compose file for exporters
template:
src: docker-compose.yml.j2
dest: /opt/exporters/docker-compose.yml
- name: Deploy exporters
command: docker compose up -d
args:
chdir: /opt/exporters

@@ -0,0 +1,31 @@
services:
node-exporter:
image: prom/node-exporter:v1.7.0
command:
- --path.rootfs=/host
pid: host
volumes:
- /:/host:ro,rslave
networks:
- internal
restart: unless-stopped
labels:
- com.centurylinklabs.watchtower.enable=true
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.49.1
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
networks:
- internal
restart: unless-stopped
labels:
- com.centurylinklabs.watchtower.enable=true
networks:
internal:
external: true
name: monitoring

@@ -0,0 +1,55 @@
---
- name: Read OIDC client secret for Forgejo
set_fact:
forgejo_oidc_client_secret: "{{ AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET | default(lookup('env', 'AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET')) }}"
no_log: true
- name: Fail if OIDC client secret for Forgejo is missing
fail:
msg: "AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET is required"
when: forgejo_oidc_client_secret | length == 0
- name: Create Forgejo directory
file:
path: /opt/forgejo
state: directory
- name: Copy Docker Compose file for Forgejo
template:
src: docker-compose.yml.j2
dest: /opt/forgejo/docker-compose.yml
- name: Deploy Forgejo
command: docker compose up -d --force-recreate
args:
chdir: /opt/forgejo
- name: Run Forgejo database migrations
command: docker exec --user 1000:1000 forgejo-forgejo-1 forgejo migrate
changed_when: false
- name: Configure Forgejo OIDC auth source (Authelia)
shell: |
set -euo pipefail
cid=$(docker ps -q --filter name=forgejo-forgejo-1 | head -n1)
if [ -z "$cid" ]; then
exit 1
fi
if docker exec --user 1000:1000 "$cid" forgejo admin auth list | grep -q "authelia"; then
exit 0
fi
docker exec --user 1000:1000 "$cid" forgejo admin auth add-oauth \
--provider=openidConnect \
--name=authelia \
--key=forgejo \
--secret="$FORGEJO_OIDC_CLIENT_SECRET" \
--auto-discover-url=https://{{ auth_hostname }}/.well-known/openid-configuration \
--scopes='openid email profile groups' \
--group-claim-name=groups \
--admin-group=admins
changed_when: false
environment:
FORGEJO_OIDC_CLIENT_SECRET: "{{ forgejo_oidc_client_secret }}"
no_log: true

@@ -0,0 +1,38 @@
services:
forgejo:
image: codeberg.org/forgejo/forgejo:9
environment:
FORGEJO__server__DOMAIN: "{{ forgejo_hostname }}"
FORGEJO__server__ROOT_URL: "https://{{ forgejo_hostname }}/"
FORGEJO__server__SSH_DOMAIN: "{{ forgejo_hostname }}"
FORGEJO__server__SSH_PORT: "2222"
FORGEJO__server__DISABLE_SSH: "false"
FORGEJO__actions__ENABLED: "true"
FORGEJO__service__ALLOW_ONLY_EXTERNAL_REGISTRATION: "true"
FORGEJO__service__DISABLE_REGISTRATION: "false"
FORGEJO__service__SHOW_REGISTRATION_BUTTON: "false"
FORGEJO__database__DB_TYPE: sqlite3
volumes:
- forgejo_data:/data
ports:
- "2222:22"
networks:
- proxy
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.docker.network=proxy
- traefik.http.routers.forgejo.rule=Host(`{{ forgejo_hostname }}`)
- traefik.http.routers.forgejo.entrypoints=websecure
- traefik.http.routers.forgejo.tls=true
- traefik.http.routers.forgejo.tls.certresolver={{ traefik_certresolver }}
- traefik.http.routers.forgejo.middlewares=security-headers@file,compress@file
- traefik.http.services.forgejo.loadbalancer.server.port=3000
- com.centurylinklabs.watchtower.enable=true
volumes:
forgejo_data:
networks:
proxy:
external: true

@@ -0,0 +1,5 @@
---
forgejo_runner_force_reregister: false
forgejo_runner_labels:
- docker:docker://ghcr.io/catthehacker/ubuntu:act-22.04

@@ -0,0 +1,93 @@
---
- name: Read Forgejo runner registration token
set_fact:
forgejo_runner_registration_token: "{{ FORGEJO_RUNNER_REGISTRATION_TOKEN | default(lookup('env', 'FORGEJO_RUNNER_REGISTRATION_TOKEN')) }}"
no_log: true
- name: Compute Forgejo runner labels
set_fact:
forgejo_runner_labels_csv: "{{ forgejo_runner_labels | join(',') }}"
- name: Fail if Forgejo runner registration token is missing
fail:
msg: "FORGEJO_RUNNER_REGISTRATION_TOKEN is required"
when: forgejo_runner_registration_token | length == 0
- name: Create Forgejo runner directories
file:
path: "{{ item }}"
state: directory
owner: "1000"
group: "1000"
mode: "0775"
loop:
- /opt/forgejo-runner
- /opt/forgejo-runner/data
- /opt/forgejo-runner/data/.cache
- name: Copy Docker Compose file for Forgejo runner
template:
src: docker-compose.yml.j2
dest: /opt/forgejo-runner/docker-compose.yml
- name: Force runner re-registration (reset local registration state)
file:
path: "{{ item }}"
state: absent
loop:
- /opt/forgejo-runner/data/.runner
- /opt/forgejo-runner/data/.labels
when: forgejo_runner_force_reregister | bool
- name: Check whether Forgejo runner is already registered
stat:
path: /opt/forgejo-runner/data/.runner
register: forgejo_runner_registration
- name: Check whether Forgejo runner labels file exists
stat:
path: /opt/forgejo-runner/data/.labels
register: forgejo_runner_labels_file
- name: Read previously applied Forgejo runner labels (if any)
slurp:
src: /opt/forgejo-runner/data/.labels
register: forgejo_runner_labels_previous
when: forgejo_runner_labels_file.stat.exists
- name: Determine whether Forgejo runner labels changed
set_fact:
forgejo_runner_labels_changed: >-
{{ (forgejo_runner_labels_previous.content | default('') | b64decode | trim) != (forgejo_runner_labels_csv | trim) }}
- name: Remove runner registration when labels changed
file:
path: /opt/forgejo-runner/data/.runner
state: absent
  when: forgejo_runner_labels_changed | bool
- name: Register Forgejo runner (one-time)
command: >-
docker compose run --rm runner forgejo-runner register
--no-interactive
--instance https://{{ forgejo_hostname }}/
--token {{ forgejo_runner_registration_token }}
--name {{ inventory_hostname }}
--labels {{ forgejo_runner_labels_csv }}
args:
chdir: /opt/forgejo-runner
  when: (not forgejo_runner_registration.stat.exists) or (forgejo_runner_labels_changed | bool)
no_log: true
- name: Persist applied Forgejo runner labels
copy:
dest: /opt/forgejo-runner/data/.labels
content: "{{ forgejo_runner_labels_csv }}"
owner: "1000"
group: "1000"
mode: "0644"
- name: Deploy Forgejo runner
command: docker compose up -d --force-recreate
args:
chdir: /opt/forgejo-runner
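The re-registration logic above hinges on comparing the computed label CSV with the last applied set persisted in `.labels`. A self-contained sketch of that comparison (a temp directory stands in for `/opt/forgejo-runner/data`; the label values are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

data_dir=$(mktemp -d)
# Previously applied labels, as persisted by the play.
printf '%s' 'docker:docker://ghcr.io/catthehacker/ubuntu:act-22.04' > "${data_dir}/.labels"
# Newly computed label CSV (here: a hypothetical image bump).
desired='docker:docker://ghcr.io/catthehacker/ubuntu:act-24.04'

previous=$(cat "${data_dir}/.labels" 2>/dev/null || true)
if [ "$previous" != "$desired" ]; then
  labels_changed=true   # would trigger removal of .runner and re-registration
else
  labels_changed=false
fi
printf '%s\n' "$labels_changed"
rm -rf "${data_dir}"
```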

@@ -0,0 +1,13 @@
services:
runner:
image: data.forgejo.org/forgejo/runner:11
environment:
DOCKER_HOST: unix:///var/run/docker.sock
user: "0:0"
volumes:
- ./data:/data
- /var/run/docker.sock:/var/run/docker.sock
restart: unless-stopped
command: forgejo-runner daemon
labels:
- com.centurylinklabs.watchtower.enable=true

@@ -0,0 +1,44 @@
---
- name: Read Grafana admin password
  set_fact:
    grafana_admin_password: "{{ GRAFANA_ADMIN_PASSWORD | default(lookup('env', 'GRAFANA_ADMIN_PASSWORD')) }}"
  no_log: true
- name: Fail if Grafana admin password is missing
fail:
msg: "GRAFANA_ADMIN_PASSWORD is required"
when: grafana_admin_password | length == 0
- name: Create Grafana directory
file:
path: /opt/grafana
state: directory
- name: Create Grafana provisioning directory
file:
path: /opt/grafana/provisioning/datasources
state: directory
- name: Ensure monitoring network exists
command: docker network inspect monitoring
register: monitoring_network
changed_when: false
failed_when: false
- name: Create monitoring network if missing
command: docker network create monitoring
when: monitoring_network.rc != 0
- name: Copy Docker Compose file for Grafana
template:
src: docker-compose.yml.j2
dest: /opt/grafana/docker-compose.yml
- name: Copy Grafana datasources provisioning
template:
src: datasources.yml.j2
dest: /opt/grafana/provisioning/datasources/datasources.yml
- name: Deploy Grafana
command: docker compose up -d --force-recreate
args:
chdir: /opt/grafana

@@ -0,0 +1,15 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
url: http://prometheus:9090
isDefault: true
editable: false
- name: Loki
type: loki
access: proxy
url: http://loki:3100
editable: false

@@ -0,0 +1,52 @@
services:
grafana:
image: grafana/grafana:10.2.3
environment:
GF_SECURITY_ADMIN_USER: admin
GF_SECURITY_ADMIN_PASSWORD: "{{ grafana_admin_password }}"
GF_SERVER_ROOT_URL: "https://{{ grafana_hostname }}"
GF_AUTH_GENERIC_OAUTH_ENABLED: 'true'
GF_AUTH_GENERIC_OAUTH_NAME: 'Authelia'
GF_AUTH_GENERIC_OAUTH_ICON: 'signin'
GF_AUTH_GENERIC_OAUTH_CLIENT_ID: 'grafana'
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: "{{ AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET | default(lookup('env', 'AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET')) }}"
GF_AUTH_GENERIC_OAUTH_SCOPES: 'openid profile email groups'
GF_AUTH_GENERIC_OAUTH_EMPTY_SCOPES: 'false'
GF_AUTH_GENERIC_OAUTH_AUTH_URL: 'https://{{ auth_hostname }}/api/oidc/authorization'
GF_AUTH_GENERIC_OAUTH_TOKEN_URL: 'https://{{ auth_hostname }}/api/oidc/token'
GF_AUTH_GENERIC_OAUTH_API_URL: 'https://{{ auth_hostname }}/api/oidc/userinfo'
GF_AUTH_OAUTH_ALLOW_INSECURE_EMAIL_LOOKUP: 'true'
GF_AUTH_GENERIC_OAUTH_LOGIN_ATTRIBUTE_PATH: 'preferred_username || sub'
GF_AUTH_GENERIC_OAUTH_GROUPS_ATTRIBUTE_PATH: 'groups'
GF_AUTH_GENERIC_OAUTH_NAME_ATTRIBUTE_PATH: 'name'
GF_AUTH_GENERIC_OAUTH_EMAIL_ATTRIBUTE_PATH: "email || (preferred_username && join('@', [preferred_username, 'jfraeys.com'])) || (sub && join('@', [sub, 'jfraeys.com']))"
GF_AUTH_GENERIC_OAUTH_USE_PKCE: 'true'
GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP: 'true'
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH: "contains(groups[*], 'admins') && 'Admin' || 'Viewer'"
volumes:
- grafana_data:/var/lib/grafana
- ./provisioning:/etc/grafana/provisioning:ro
networks:
- monitoring
- proxy
restart: unless-stopped
labels:
- traefik.enable=true
- traefik.docker.network=proxy
- traefik.http.routers.grafana.rule=Host(`{{ grafana_hostname }}`)
- traefik.http.routers.grafana.entrypoints=websecure
- traefik.http.routers.grafana.tls=true
- traefik.http.routers.grafana.tls.certresolver={{ traefik_certresolver }}
- traefik.http.routers.grafana.middlewares=security-headers@file,compress@file
- traefik.http.services.grafana.loadbalancer.server.port=3000
- com.centurylinklabs.watchtower.enable=true
volumes:
grafana_data:
networks:
monitoring:
external: true
name: monitoring
proxy:
external: true

@@ -0,0 +1,5 @@
---
- name: Restart rsyslog
service:
name: rsyslog
state: restarted

@@ -0,0 +1,58 @@
---
- name: Install rsyslog
apt:
name: rsyslog
state: present
update_cache: true
- name: Ensure rsyslog is enabled and running
service:
name: rsyslog
state: started
enabled: true
- name: Configure rsyslog to write UFW kernel logs to /var/log/ufw.log
copy:
dest: /etc/rsyslog.d/20-ufw.conf
owner: root
group: root
mode: "0644"
content: |
:msg, contains, "[UFW " -/var/log/ufw.log
& stop
notify: Restart rsyslog
- name: Ensure /var/log/ufw.log exists
file:
path: /var/log/ufw.log
state: touch
owner: root
group: adm
mode: "0640"
- name: Configure logrotate for /var/log/ufw.log
copy:
dest: /etc/logrotate.d/ufw
owner: root
group: root
mode: "0644"
content: |
/var/log/ufw.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 root adm
sharedscripts
postrotate
systemctl reload rsyslog > /dev/null 2>&1 || true
endscript
}
- name: Set UFW logging level to low
command: ufw logging low
register: ufw_logging
changed_when: "'Logging enabled' in ufw_logging.stdout or 'Logging:' in ufw_logging.stdout"
failed_when: false

@@ -0,0 +1,45 @@
---
- name: Read LLDAP admin password
set_fact:
lldap_admin_password: "{{ LLDAP_ADMIN_PASSWORD | default(lookup('env', 'LLDAP_ADMIN_PASSWORD')) }}"
no_log: true
- name: Fail if LLDAP admin password is missing
fail:
msg: "LLDAP_ADMIN_PASSWORD is required"
when: lldap_admin_password | length == 0
- name: Read LLDAP JWT secret
set_fact:
lldap_jwt_secret: "{{ LLDAP_JWT_SECRET | default(lookup('env', 'LLDAP_JWT_SECRET')) }}"
no_log: true
- name: Fail if LLDAP JWT secret is missing
fail:
msg: "LLDAP_JWT_SECRET is required"
when: lldap_jwt_secret | length == 0
- name: Read LLDAP key seed
set_fact:
lldap_key_seed: "{{ LLDAP_KEY_SEED | default(lookup('env', 'LLDAP_KEY_SEED')) }}"
no_log: true
- name: Fail if LLDAP key seed is missing
fail:
msg: "LLDAP_KEY_SEED is required"
when: lldap_key_seed | length == 0
- name: Create LLDAP directory
file:
path: /opt/lldap
state: directory
- name: Copy Docker Compose file for LLDAP
template:
src: docker-compose.yml.j2
dest: /opt/lldap/docker-compose.yml
- name: Deploy LLDAP
command: docker compose up -d --force-recreate
args:
chdir: /opt/lldap

@@ -0,0 +1,23 @@
services:
lldap:
image: lldap/lldap:stable
environment:
LLDAP_JWT_SECRET: "{{ lldap_jwt_secret }}"
LLDAP_KEY_SEED: "{{ lldap_key_seed }}"
LLDAP_LDAP_BASE_DN: "{{ lldap_base_dn }}"
LLDAP_LDAP_USER_DN: "admin"
LLDAP_LDAP_USER_PASS: "{{ lldap_admin_password }}"
volumes:
- lldap_data:/data
ports:
- "127.0.0.1:17170:17170"
networks:
- proxy
restart: unless-stopped
volumes:
lldap_data:
networks:
proxy:
external: true

roles/loki/tasks/main.yml
@@ -0,0 +1,60 @@
---
- name: Read web public IPv4 from inventory
set_fact:
loki_web_public_ipv4: "{{ (hostvars.get('web', {})).get('public_ipv4', '') }}"
- name: Warn if web public IPv4 is not set (skipping Loki allowlist)
debug:
msg: "web public_ipv4 is not set in inventory; skipping Loki UFW allowlist/deny rules."
when: loki_web_public_ipv4 | length == 0
- name: Ensure UFW is installed
apt:
name: ufw
state: present
- name: Enable UFW
command: ufw --force enable
changed_when: false
- name: Allowlist Loki from web host (insert rule at top)
command: "ufw insert 1 allow from {{ loki_web_public_ipv4 }} to any port 3100 proto tcp"
register: ufw_allow_loki
changed_when: "'Rule inserted' in ufw_allow_loki.stdout or 'Rules updated' in ufw_allow_loki.stdout"
when: loki_web_public_ipv4 | length > 0
- name: Deny Loki from everyone else
command: ufw deny 3100/tcp
register: ufw_deny_loki
changed_when: "'Rule inserted' in ufw_deny_loki.stdout or 'Rules updated' in ufw_deny_loki.stdout"
when: loki_web_public_ipv4 | length > 0
- name: Create Loki directory
file:
path: /opt/loki
state: directory
- name: Ensure monitoring network exists
command: docker network inspect monitoring
register: monitoring_network
changed_when: false
failed_when: false
- name: Create monitoring network if missing
command: docker network create monitoring
when: monitoring_network.rc != 0
- name: Copy Loki configuration
template:
src: loki-config.yml.j2
dest: /opt/loki/loki-config.yml
- name: Copy Docker Compose file for Loki
template:
src: docker-compose.yml.j2
dest: /opt/loki/docker-compose.yml
- name: Deploy Loki
command: docker compose up -d
args:
chdir: /opt/loki

@@ -0,0 +1,22 @@
services:
loki:
image: grafana/loki:2.9.4
command: -config.file=/etc/loki/config.yml
ports:
- "3100:3100"
volumes:
- ./loki-config.yml:/etc/loki/config.yml:ro
- loki_data:/loki
networks:
- monitoring
restart: unless-stopped
labels:
- com.centurylinklabs.watchtower.enable=true
volumes:
loki_data:
networks:
monitoring:
external: true
name: monitoring

@@ -0,0 +1,31 @@
auth_enabled: false
server:
http_listen_port: 3100
common:
path_prefix: /loki
storage:
filesystem:
chunks_directory: /loki/chunks
rules_directory: /loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
schema_config:
configs:
- from: 2020-10-24
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
ruler:
storage:
type: local
local:
directory: /loki/rules

@@ -0,0 +1,30 @@
---
- name: Create Prometheus directory
file:
path: /opt/prometheus
state: directory
- name: Ensure monitoring network exists
command: docker network inspect monitoring
register: monitoring_network
changed_when: false
failed_when: false
- name: Create monitoring network if missing
command: docker network create monitoring
when: monitoring_network.rc != 0
- name: Copy Prometheus configuration
template:
src: prometheus.yml.j2
dest: /opt/prometheus/prometheus.yml
- name: Copy Docker Compose file for Prometheus
template:
src: docker-compose.yml.j2
dest: /opt/prometheus/docker-compose.yml
- name: Deploy Prometheus
command: docker compose up -d
args:
chdir: /opt/prometheus

@@ -0,0 +1,23 @@
services:
prometheus:
image: prom/prometheus:v2.49.1
command:
- --config.file=/etc/prometheus/prometheus.yml
- --storage.tsdb.path=/prometheus
- --storage.tsdb.retention.time=15d
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus_data:/prometheus
networks:
- monitoring
restart: unless-stopped
labels:
- com.centurylinklabs.watchtower.enable=true
volumes:
prometheus_data:
networks:
monitoring:
external: true
name: monitoring

@@ -0,0 +1,15 @@
global:
scrape_interval: 15s
scrape_configs:
- job_name: prometheus
static_configs:
- targets: ['prometheus:9090']
- job_name: services-node
static_configs:
- targets: ['node-exporter:9100']
- job_name: services-cadvisor
static_configs:
- targets: ['cadvisor:8080']

@@ -0,0 +1,4 @@
---
- name: Spark role placeholder
debug:
msg: "Spark role is not implemented yet (deploy_spark is optional)."

@@ -0,0 +1,5 @@
---
- name: Restart Traefik
docker_container:
name: traefik
state: restarted

@@ -0,0 +1,134 @@
---
- name: Create permanent directory for Traefik Docker Compose
  file:
    path: /opt/traefik
    state: directory
  when: not (use_temp_dir | default(false))
- name: Create temporary directory for Traefik Docker Compose (for testing)
  tempfile:
    state: directory
    suffix: traefik
  register: traefik_tempdir
  when: use_temp_dir | default(false)
- name: Determine Traefik directory
  set_fact:
    traefik_dir: "{{ traefik_tempdir.path if use_temp_dir | default(false) else '/opt/traefik' }}"
- name: Read Cloudflare DNS API token
  set_fact:
    traefik_cloudflare_dns_api_token: >-
      {{
        CF_DNS_API_TOKEN
        | default(lookup('env', 'CF_DNS_API_TOKEN'), true)
        | default(TF_VAR_cloudflare_api_token | default(''), true)
        | default(lookup('env', 'TF_VAR_cloudflare_api_token'), true)
      }}
  no_log: true
- name: Fail if Cloudflare DNS API token is missing
  fail:
    msg: "CF_DNS_API_TOKEN (recommended) or TF_VAR_cloudflare_api_token is required for Traefik DNS-01"
  when: traefik_cloudflare_dns_api_token | length == 0
- name: Copy Docker Compose file for Traefik
template:
src: home-docker-compose.yml.j2
dest: "{{ traefik_dir }}/docker-compose.yml"
- name: Create Traefik subdirectories
file:
path: "{{ traefik_dir }}/{{ item }}"
state: directory
loop:
- letsencrypt
- dynamic
- name: Ensure ACME storage file exists
file:
path: "{{ traefik_dir }}/letsencrypt/acme.json"
state: touch
mode: "0600"
- name: Copy base dynamic configuration
copy:
dest: "{{ traefik_dir }}/dynamic/base.yml"
content: |
http:
routers:
authelia:
rule: "Host(`{{ auth_hostname }}`)"
entryPoints:
- websecure
tls:
certResolver: "{{ traefik_certresolver }}"
service: authelia
middlewares:
- security-headers
- compress
grafana:
rule: "Host(`{{ grafana_hostname }}`)"
entryPoints:
- websecure
tls:
certResolver: "{{ traefik_certresolver }}"
service: grafana
middlewares:
- security-headers
- compress
forgejo:
rule: "Host(`{{ forgejo_hostname }}`)"
entryPoints:
- websecure
tls:
certResolver: "{{ traefik_certresolver }}"
service: forgejo
middlewares:
- security-headers
- compress
services:
authelia:
loadBalancer:
servers:
- url: "http://authelia:9091"
grafana:
loadBalancer:
servers:
- url: "http://grafana:3000"
forgejo:
loadBalancer:
servers:
- url: "http://forgejo:3000"
middlewares:
security-headers:
headers:
frameDeny: true
contentTypeNosniff: true
browserXssFilter: true
referrerPolicy: "no-referrer"
compress:
compress: {}
- name: Ensure proxy network exists
command: docker network inspect proxy
register: proxy_network
changed_when: false
failed_when: false
- name: Create proxy network if missing
command: docker network create proxy
when: proxy_network.rc != 0
- name: Deploy Traefik container
command: docker compose up -d --force-recreate
args:
chdir: "{{ traefik_dir }}"

@@ -0,0 +1,31 @@
services:
traefik:
image: traefik:v2.11.10
command:
- --api.dashboard=true
- --providers.file.directory=/etc/traefik/dynamic
- --providers.file.watch=true
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --certificatesresolvers.{{ traefik_certresolver }}.acme.email={{ traefik_acme_email }}
- --certificatesresolvers.{{ traefik_certresolver }}.acme.storage=/letsencrypt/acme.json
- --certificatesresolvers.{{ traefik_certresolver }}.acme.dnschallenge=true
- --certificatesresolvers.{{ traefik_certresolver }}.acme.dnschallenge.provider=cloudflare
- --certificatesresolvers.{{ traefik_certresolver }}.acme.dnschallenge.resolvers=1.1.1.1:53,8.8.8.8:53
ports:
- "80:80"
- "443:443"
environment:
- CF_DNS_API_TOKEN={{ traefik_cloudflare_dns_api_token }}
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- {{ traefik_dir }}/letsencrypt:/letsencrypt
- {{ traefik_dir }}/dynamic:/etc/traefik/dynamic
networks:
- proxy
restart: always
networks:
proxy:
external: true

@@ -0,0 +1,2 @@
---
use_temp_dir: "{{ inventory_hostname == 'localhost' }}"

@@ -0,0 +1,39 @@
---
- name: Create Watchtower directory
file:
path: /opt/watchtower
state: directory
- name: Copy Docker Compose file for Watchtower
template:
src: docker-compose.yml.j2
dest: /opt/watchtower/docker-compose.yml
- name: Deploy Watchtower
command: docker compose up -d
args:
chdir: /opt/watchtower
- name: Wait for Watchtower service to be running
command: docker compose ps --services --filter status=running
args:
chdir: /opt/watchtower
register: watchtower_running
changed_when: false
retries: 10
delay: 3
until: "'watchtower' in (watchtower_running.stdout | default(''))"
- name: Read Watchtower logs if not running
command: docker compose logs --no-color --tail=200
args:
chdir: /opt/watchtower
register: watchtower_logs
changed_when: false
failed_when: false
when: "'watchtower' not in (watchtower_running.stdout | default(''))"
- name: Fail if Watchtower is not running
fail:
msg: "Watchtower is not running. docker compose ps output: {{ watchtower_running.stdout | default('') }}\n\nLogs:\n{{ watchtower_logs.stdout | default('') }}\n{{ watchtower_logs.stderr | default('') }}"
when: "'watchtower' not in (watchtower_running.stdout | default(''))"
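The wait task above is a retries/until poll: re-run `docker compose ps` until the service name shows up in the running set. The same pattern in plain shell, with a stub standing in for the compose call so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub for `docker compose ps --services --filter status=running`;
# here it reports the service as already running.
check() { printf 'watchtower\n'; }

retries=10
delay=0  # the play uses 3s; zero here so the sketch finishes instantly
running=false
for _ in $(seq "$retries"); do
  if check | grep -qx 'watchtower'; then
    running=true
    break
  fi
  sleep "$delay"
done
printf '%s\n' "$running"
```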

@@ -0,0 +1,9 @@
services:
watchtower:
image: containrrr/watchtower:1.7.1
command: --label-enable --cleanup --interval 3600
environment:
DOCKER_API_VERSION: "1.44"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
restart: unless-stopped

@@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -euo pipefail
rand_hex() {
local bytes="$1"
openssl rand -hex "${bytes}"
}
LLDAP_ADMIN_PASSWORD=$(rand_hex 16)
LLDAP_JWT_SECRET=$(rand_hex 32)
LLDAP_KEY_SEED=$(rand_hex 32)
AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET=$(rand_hex 32)
AUTHELIA_SESSION_SECRET=$(rand_hex 32)
AUTHELIA_STORAGE_ENCRYPTION_KEY=$(rand_hex 32)
AUTHELIA_OIDC_HMAC_SECRET=$(rand_hex 32)
AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET=$(rand_hex 20)
AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET=$(rand_hex 20)
OIDC_PRIVATE_KEY_PEM=$(openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 2>/dev/null)
cat <<EOF
---
LLDAP_ADMIN_PASSWORD: "${LLDAP_ADMIN_PASSWORD}"
LLDAP_JWT_SECRET: "${LLDAP_JWT_SECRET}"
LLDAP_KEY_SEED: "${LLDAP_KEY_SEED}"
AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET: "${AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET}"
AUTHELIA_SESSION_SECRET: "${AUTHELIA_SESSION_SECRET}"
AUTHELIA_STORAGE_ENCRYPTION_KEY: "${AUTHELIA_STORAGE_ENCRYPTION_KEY}"
AUTHELIA_OIDC_HMAC_SECRET: "${AUTHELIA_OIDC_HMAC_SECRET}"
AUTHELIA_OIDC_PRIVATE_KEY_PEM: |
$(printf '%s\n' "$OIDC_PRIVATE_KEY_PEM" | sed 's/^/ /')
AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET: "${AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET}"
AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET: "${AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET}"
EOF
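As a sanity check on `rand_hex` above: `openssl rand -hex N` emits two hex characters per random byte, so the 16-byte client secrets are 32 characters long and the 32-byte keys are 64:

```shell
#!/usr/bin/env bash
set -euo pipefail

rand_hex() { openssl rand -hex "$1"; }

secret=$(rand_hex 16)   # e.g. an OIDC client secret
key=$(rand_hex 32)      # e.g. a session secret or encryption key
printf '%s %s\n' "${#secret}" "${#key}"
```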

secrets/vault.example.yml
@@ -0,0 +1,22 @@
---
GRAFANA_ADMIN_PASSWORD:
POSTGRES_PASSWORD:
S3_ACCESS_KEY_ID:
S3_SECRET_ACCESS_KEY:
TF_VAR_linode_token:
TF_VAR_root_pass:
TF_VAR_user_password:
TF_VAR_ssh_public_key:
TF_VAR_cloudflare_api_token:
TF_VAR_cloudflare_zone_id:
LLDAP_ADMIN_PASSWORD:
LLDAP_JWT_SECRET:
LLDAP_KEY_SEED:
AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET:
AUTHELIA_SESSION_SECRET:
AUTHELIA_STORAGE_ENCRYPTION_KEY:
AUTHELIA_OIDC_HMAC_SECRET:
AUTHELIA_OIDC_PRIVATE_KEY_PEM:
AUTHELIA_OIDC_GRAFANA_CLIENT_SECRET:
AUTHELIA_OIDC_FORGEJO_CLIENT_SECRET:
FORGEJO_RUNNER_REGISTRATION_TOKEN:

setup.sh Executable file

@@ -0,0 +1,134 @@
#!/usr/bin/env bash
set -euo pipefail
vault_args=()
temp_vault_pass_file=""
cleanup() {
if [[ -n "${temp_vault_pass_file}" ]] && [[ -f "${temp_vault_pass_file}" ]]; then
rm -f "${temp_vault_pass_file}"
fi
}
trap cleanup EXIT
ansible_extra_args=()
terraform_apply_args=()
terraform_passthrough=()
run_ansible=true
if [[ "${1:-}" == "--no-ansible" ]]; then
run_ansible=false
shift
fi
if [[ "${1:-}" == "--" ]]; then
shift
if [[ "${1:-}" == "terraform" ]]; then
shift
terraform_passthrough=("$@")
else
case "${1:-}" in
output|state|workspace|providers|version|validate|fmt|taint|untaint|graph|show|console|import)
terraform_passthrough=("$@")
;;
*)
terraform_apply_args=("$@")
;;
esac
fi
fi
if [[ -f ".env" ]]; then
set -a
source .env
set +a
fi
if [[ -f "secrets/vault.yml" ]]; then
if [[ -f "secrets/.vault_pass" ]]; then
vault_args+=(--vault-password-file "secrets/.vault_pass")
elif [[ -f ".vault_pass" ]]; then
vault_args+=(--vault-password-file ".vault_pass")
else
read -rsp "Vault password: " vault_password
echo
temp_vault_pass_file=$(mktemp)
chmod 600 "${temp_vault_pass_file}"
printf '%s' "${vault_password}" > "${temp_vault_pass_file}"
unset vault_password
vault_args+=(--vault-password-file "${temp_vault_pass_file}")
fi
if (( ${#vault_args[@]} )); then
vault_plain=$(ansible-vault view secrets/vault.yml "${vault_args[@]}")
else
vault_plain=$(ansible-vault view secrets/vault.yml)
fi
while IFS= read -r line; do
[[ -z "${line}" ]] && continue
[[ "${line}" == "---" ]] && continue
[[ "${line}" != TF_VAR_*:* ]] && [[ "${line}" != CF_DNS_API_TOKEN:* ]] && [[ "${line}" != S3_ACCESS_KEY_ID:* ]] && [[ "${line}" != S3_SECRET_ACCESS_KEY:* ]] && continue
key="${line%%:*}"
value="${line#*:}"
value="${value# }"
[[ -z "${value}" ]] && continue
escaped=$(printf '%q' "${value}")
eval "export ${key}=${escaped}"
done <<< "${vault_plain}"
if [[ -z "${CF_DNS_API_TOKEN:-}" ]] && [[ -n "${TF_VAR_cloudflare_api_token:-}" ]]; then
export CF_DNS_API_TOKEN="${TF_VAR_cloudflare_api_token}"
fi
fi
terraform -chdir=terraform init
if (( ${#terraform_passthrough[@]} )); then
terraform -chdir=terraform "${terraform_passthrough[@]}"
exit 0
fi
if (( ${#terraform_apply_args[@]} )); then
terraform -chdir=terraform apply "${terraform_apply_args[@]}"
else
terraform -chdir=terraform plan -out=tfplan
terraform -chdir=terraform apply tfplan
fi
rm -f terraform/tfplan
web_ipv4=$(terraform -chdir=terraform output -raw web_ip)
services_ipv4=$(terraform -chdir=terraform output -raw services_ip)
ssh_user=${TF_VAR_user:-ansible}
mkdir -p inventory/host_vars
cat > inventory/hosts.yml <<EOF
all:
children:
web_hosts:
hosts:
web:
ansible_host: ${web_ipv4}
ansible_port: ${TF_VAR_ssh_port:-22}
ansible_user: ${ssh_user}
services_hosts:
hosts:
services:
ansible_host: ${services_ipv4}
ansible_port: ${TF_VAR_ssh_port:-22}
ansible_user: ${ssh_user}
EOF
cat > inventory/host_vars/web.yml <<EOF
public_ipv4: ${web_ipv4}
EOF
if [[ "${run_ansible}" == "true" ]]; then
if [[ -n "${vault_args+x}" ]] && (( ${#vault_args[@]} )); then
ansible_extra_args=("${vault_args[@]}")
fi
ansible-playbook playbooks/services.yml ${ansible_extra_args[@]+"${ansible_extra_args[@]}"}
ansible-playbook playbooks/app.yml ${ansible_extra_args[@]+"${ansible_extra_args[@]}"}
fi
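The vault-export loop in setup.sh only forwards `TF_VAR_*` and the S3/Cloudflare keys into the environment; everything else in the vault stays out of Terraform's reach. A minimal sketch of that loop against inline sample data (keys and values here are illustrative, not from a real vault; the real script also escapes values with `printf '%q'` before exporting):

```shell
#!/usr/bin/env bash
# Sketch of setup.sh's vault-export filter: only TF_VAR_* and a few
# whitelisted keys are exported; other vault entries are skipped.
vault_plain='---
TF_VAR_region: ca-central
GRAFANA_ADMIN_PASSWORD: ignored
S3_ACCESS_KEY_ID: AKIAEXAMPLE'
while IFS= read -r line; do
  [[ -z "${line}" || "${line}" == "---" ]] && continue
  [[ "${line}" != TF_VAR_*:* && "${line}" != S3_ACCESS_KEY_ID:* ]] && continue
  key="${line%%:*}"
  value="${line#*:}"
  value="${value# }"          # drop the space after the YAML colon
  [[ -z "${value}" ]] && continue
  export "${key}=${value}"
done <<< "${vault_plain}"
echo "${TF_VAR_region} ${S3_ACCESS_KEY_ID}"   # prints: ca-central AKIAEXAMPLE
```

Note the Grafana password is filtered out even though it is present in the sample input.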

stackscripts/essentials.sh Normal file

@@ -0,0 +1,212 @@
#!/usr/bin/env bash
exec > >(tee -i /var/log/stackscript.log) 2>&1
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
# <UDF name="NAME" label="Node name" />
# <UDF name="GROUP" label="Create group (Optional)" />
# <UDF name="SSH_USER" label="Create a non-root user" />
# <UDF name="USER_PASSWORD" label="Non-root user password" />
# <UDF name="SSH_PUBLIC_KEY" label="SSH public key for the non-root user" default="" />
# <UDF name="SSH_PORT" label="Set SSH server port" default="22" />
# <UDF name="TIMEZONE" label="Set timezone" default="UTC" />
# <UDF name="ADD_CLOUDFLARE_IPS" label="Add Cloudflare IPs to UFW" type="boolean" default="false" />
touch ~/.hushlogin
echo "Updating system..."
apt-get update
apt-get install -y sudo openssh-server
echo "Setting hostname to $NAME"
hostnamectl set-hostname "${NAME}" || true
: "${SSH_USER:=ansible}"
: "${USER_PASSWORD:=}"
echo "Creating user $SSH_USER"
if ! id -u "${SSH_USER}" >/dev/null 2>&1; then
useradd -m -s /bin/bash "${SSH_USER}"
fi
if [ -n "${USER_PASSWORD}" ]; then
echo "${SSH_USER}:${USER_PASSWORD}" | chpasswd
fi
groupadd -f sudo
usermod -aG sudo "${SSH_USER}"
mkdir -p /etc/sudoers.d
cat > "/etc/sudoers.d/90-${SSH_USER}" <<EOF
${SSH_USER} ALL=(ALL) NOPASSWD:ALL
EOF
chmod 440 "/etc/sudoers.d/90-${SSH_USER}"
USER_HOME=$(getent passwd "${SSH_USER}" | cut -d: -f6)
if [ -z "${USER_HOME}" ]; then
echo "Unable to resolve home directory for user ${SSH_USER}" >&2
exit 1
fi
if [ -n "${GROUP:-}" ]; then
groupadd -f "${GROUP}"
usermod -aG "${GROUP}" "${SSH_USER}"
fi
# SSH setup
echo "Configuring SSH..."
mkdir -p "${USER_HOME}"/.ssh
for i in $(seq 1 60); do
if [ -s /root/.ssh/authorized_keys ]; then
break
fi
sleep 2
done
if [ -s /root/.ssh/authorized_keys ]; then
cp /root/.ssh/authorized_keys "${USER_HOME}"/.ssh/authorized_keys
else
if [ -n "${SSH_PUBLIC_KEY:-}" ]; then
printf '%s\n' "${SSH_PUBLIC_KEY}" > "${USER_HOME}"/.ssh/authorized_keys
else
echo "No /root/.ssh/authorized_keys and no SSH_PUBLIC_KEY provided" >&2
exit 1
fi
fi
if [ -n "${SSH_PUBLIC_KEY:-}" ]; then
if ! grep -qF "${SSH_PUBLIC_KEY}" "${USER_HOME}"/.ssh/authorized_keys; then
printf '%s\n' "${SSH_PUBLIC_KEY}" >> "${USER_HOME}"/.ssh/authorized_keys
fi
fi
chown -R "${SSH_USER}:${SSH_USER}" "${USER_HOME}"/.ssh
chmod 700 "${USER_HOME}"/.ssh
chmod 600 "${USER_HOME}"/.ssh/authorized_keys
chown "${SSH_USER}:${SSH_USER}" "${USER_HOME}"
chmod 755 "${USER_HOME}"
chmod go-w "${USER_HOME}"
mkdir -p /etc/ssh/sshd_config.d
cat > /etc/ssh/sshd_config.d/99-infra.conf <<EOF
Port ${SSH_PORT}
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
UsePAM yes
ClientAliveInterval 180
LoginGraceTime 30
MaxAuthTries 3
MaxSessions 10
MaxStartups 10:30:60
StrictModes yes
AuthorizedKeysFile .ssh/authorized_keys
EOF
systemctl restart ssh
systemctl enable ssh
echo "Installing essentials..."
apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" upgrade
apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ufw fail2ban htop unzip logrotate curl gnupg python3-pip
# Firewall
echo "Configuring UFW firewall..."
ufw default deny incoming
ufw default allow outgoing
ufw allow "${SSH_PORT}/tcp"
ufw limit "${SSH_PORT}/tcp"
ufw allow 80/tcp
ufw allow 443/tcp
if [ "${ADD_CLOUDFLARE_IPS}" = "true" ]; then
CF_IPS=(173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18
108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22 198.41.128.0/17
162.158.0.0/15 104.16.0.0/13 104.24.0.0/14 172.64.0.0/13 131.0.72.0/22
2400:cb00::/32 2606:4700::/32 2803:f800::/32 2405:b500::/32 2405:8100::/32
2a06:98c0::/29 2c0f:f248::/32)
for ip in "${CF_IPS[@]}"; do
ufw allow from "${ip}"
done
fi
ufw --force enable
ufw logging low
mkdir -p /etc/sysctl.d
cat > /etc/sysctl.d/99-console-quiet.conf <<EOF
kernel.printk = 3 4 1 3
EOF
sysctl --system
# Timezone
echo "Setting timezone to ${TIMEZONE}"
timedatectl set-timezone "${TIMEZONE}"
# Docker
echo "Installing Docker..."
apt-get install -y ca-certificates software-properties-common
install -m 0755 -d /etc/apt/keyrings
# Docker repo for Debian (the default image is linode/debian13);
# use /etc/os-release since lsb_release is not installed by default.
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
> /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
usermod -aG docker "${SSH_USER}"
systemctl enable docker
# Docker Compose (v2 plugin installed above)
# Ansible
echo "Installing Ansible..."
# Debian 12+ marks the system Python as externally managed (PEP 668)
pip3 install --break-system-packages ansible
# Fail2ban
echo "Configuring Fail2ban..."
cat > /etc/fail2ban/jail.d/sshd.local <<EOF
[sshd]
enabled = true
port = ${SSH_PORT}
maxretry = 3
bantime = 1h
findtime = 10m
EOF
systemctl enable fail2ban
systemctl start fail2ban
# Logrotate
cat > /etc/logrotate.d/custom <<EOF
/var/log/custom/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 root utmp
sharedscripts
postrotate
systemctl reload rsyslog > /dev/null 2>/dev/null || true
endscript
}
EOF
# Optional: NTP (Systemd handles this well now)
timedatectl set-ntp true
# Cleanup
echo "Cleaning up..."
history -c
rm -f /root/.bash_history "${USER_HOME}/.bash_history" || true
unset NAME GROUP SSH_USER USER_PASSWORD SSH_PUBLIC_KEY SSH_PORT TIMEZONE ADD_CLOUDFLARE_IPS
echo "StackScript complete. Server ready."
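The `grep -qF` guard around the authorized_keys append is what keeps the script safe to re-run: the key is only written when it is not already present. A standalone sketch (the key string is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the idempotent key append: re-running the script must not
# duplicate SSH_PUBLIC_KEY in authorized_keys.
set -euo pipefail
auth_keys=$(mktemp)
key="ssh-ed25519 AAAAC3NotARealKey demo@example"
for _ in 1 2 3; do   # simulate three boots/re-runs
  if ! grep -qF "${key}" "${auth_keys}"; then
    printf '%s\n' "${key}" >> "${auth_keys}"
  fi
done
count=$(grep -c '' "${auth_keys}")   # line count of the file
echo "${count}"   # prints 1
rm -f "${auth_keys}"
```

`-F` matters here: public keys contain `+` and `/`, which plain `grep` would treat as regex metacharacters.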

stackscripts/services.sh Normal file

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
exec > >(tee -i /var/log/stackscript.log) 2>&1
set -euo pipefail
# UDF fields
# <UDF name="NAME" label="Node name" />
# <UDF name="GROUP" label="Create group (Optional)" default="" />
# <UDF name="SSH_USER" label="Non-root username" />
# <UDF name="USER_PASSWORD" label="Password for non-root user" />
# <UDF name="SSH_PUBLIC_KEY" label="SSH public key for the non-root user" default="" />
# <UDF name="SSH_PORT" label="SSH Port" default="22" />
# <UDF name="TIMEZONE" label="Timezone" default="America/Toronto" />
# <UDF name="ADD_CLOUDFLARE_IPS" label="Allow Cloudflare IPs through firewall?" type="boolean" default="false" />
source <ssinclude StackScriptID="1">
source <ssinclude StackScriptID="__ESSENTIALS_STACKSCRIPT_ID__">
touch ~/.hushlogin
echo "Services StackScript completed successfully!"
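The `__ESSENTIALS_STACKSCRIPT_ID__` placeholder is filled in by the `replace()` call on `linode_stackscript.services` in terraform/main.tf. A shell analogue of that substitution (the ID value is illustrative; the real one comes from `linode_stackscript.essentials.id`):

```shell
#!/usr/bin/env bash
# Analogue of terraform's replace() on the services StackScript source.
set -euo pipefail
template='source <ssinclude StackScriptID="__ESSENTIALS_STACKSCRIPT_ID__">'
essentials_id=12345   # illustrative ID, not a real StackScript
rendered=${template//__ESSENTIALS_STACKSCRIPT_ID__/${essentials_id}}
echo "${rendered}"   # prints: source <ssinclude StackScriptID="12345">
```

This is why the placeholder must appear verbatim in the file: Terraform does a plain string replace, not a template render.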


@@ -0,0 +1,47 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/cloudflare/cloudflare" {
version = "4.52.5"
constraints = "~> 4.0"
hashes = [
"h1:+rfzF+16ZcWZWnTyW/p1HHTzYbPKX8Zt2nIFtR/+f+E=",
"zh:1a3400cb38863b2585968d1876706bcfc67a148e1318a1d325c6c7704adc999b",
"zh:4c5062cb9e9da1676f06ae92b8370186d98976cc4c7030d3cd76df12af54282a",
"zh:52110f493b5f0587ef77a1cfd1a67001fd4c617b14c6502d732ab47352bdc2f7",
"zh:5aa536f9eaeb43823aaf2aa80e7d39b25ef2b383405ed034aa16a28b446a9238",
"zh:5cc39459a1c6be8a918f17054e4fbba573825ed5597dcada588fe99614d98a5b",
"zh:629ae6a7ba298815131da826474d199312d21cec53a4d5ded4fa56a692e6f072",
"zh:719cc7c75dc1d3eb30c22ff5102a017996d9788b948078c7e1c5b3446aeca661",
"zh:8698635a3ca04383c1e93b21d6963346bdae54d27177a48e4b1435b7f731731c",
"zh:890df766e9b839623b1f0437355032a3c006226a6c200cd911e15ee1a9014e9f",
"zh:8a9993f1dcadf1dd6ca43b23348abe374605d29945a2fafc07fb3457644e6a54",
"zh:b1b9a1e6bcc24d5863a664a411d2dc906373ae7a2399d2d65548ce7377057852",
"zh:b270184cdeec277218e84b94cb136fead753da717f9b9dc378e51907f3f00bb0",
"zh:dff2bc10071210181726ce270f954995fe42c696e61e2e8f874021fed02521e5",
"zh:e8e87b40b6a87dc097b0fdc20d3f725cec0d82abc9cc3755c1f89f8f6e8b0036",
"zh:ee964a6573d399a5dd22ce328fb38ca1207797a02248f14b2e4913ee390e7803",
]
}
provider "registry.terraform.io/linode/linode" {
version = "2.41.2"
constraints = "~> 2.0"
hashes = [
"h1:GZjEpAHVD35fcAdrOzIC2TLDJPgg5TjnxSuoOqw/GnQ=",
"zh:04b3e099349777d46c23242b1b217577c00a22a8a282759b0ea10f39fbe5295e",
"zh:24b6a94a309c6887a5e0080cd1c389874c93e35013774c30648d8d6f871cccf7",
"zh:522e2ca78c4c96cdfd96982acaca8f5d1886cc14cdb0d2355dfa6b0a9d12a19c",
"zh:590de3a70478c991d403ed8159c401d864927b3e62ac37aaec2e8a3c557f4c8a",
"zh:6534425a180d9962170b6a9b4f0c80a755d1ef9a9b4b5458fd979a0524e27fd0",
"zh:a0143448cf3f8f03ced3d8f64b58ce862da096d2af76a60b5918dc9179a495e6",
"zh:b593fe9f060e413a304de88ada4a22d9937549b1df0d4fe86d6c205bc2df5ece",
"zh:c05503fad80e9e83283a04d063b36cec0e5b573ce9ebc3be4977728ae4fe6f45",
"zh:d06165ad07b60507b72197b83499d565588a7c3dcae1563dbf7d1512878e5cd8",
"zh:e5ff60aed05b8cd5fc8b39f5a05fe5b9657dd8b78bcee940d3b594eb15c52fd7",
"zh:ed1ffe36c000df9116dfde52f6b0994c2af39d8836e1d9ee1bed07f6cc502552",
"zh:ed86e977142f90b5be547efe61d5ff4042c234816e67c0e5a7e252c5fba7e357",
"zh:eda5a32dd2dd3fea914d28daee0b56201785026a989c7f3541e38d0782277683",
"zh:ee9d3f51b28d0d44a30f91462ad94371bd64210d050fec5765f1c8dafc9ee35d",
]
}

terraform/main.tf Normal file

@@ -0,0 +1,275 @@
terraform {
required_version = ">= 1.5.0"
required_providers {
linode = {
source = "linode/linode"
version = "~> 2.0"
}
cloudflare = {
source = "cloudflare/cloudflare"
version = "~> 4.0"
}
}
}
provider "linode" {
token = var.linode_token
}
provider "cloudflare" {
api_token = var.cloudflare_api_token
}
resource "linode_stackscript" "essentials" {
label = "essentials"
description = "Baseline server init (SSH hardening, UFW, Docker, Ansible, etc.)"
images = [var.image]
rev_note = "managed by terraform"
script = file("${path.module}/../stackscripts/essentials.sh")
}
resource "linode_stackscript" "services" {
label = "services"
description = "Services node init (runs essentials + services specific steps)"
images = [var.image]
rev_note = "managed by terraform"
script = replace(
file("${path.module}/../stackscripts/services.sh"),
"__ESSENTIALS_STACKSCRIPT_ID__",
tostring(linode_stackscript.essentials.id)
)
}
resource "linode_instance" "web" {
label = var.web_label
region = var.region
type = var.instance_type
image = var.image
root_pass = var.root_pass
authorized_keys = [var.ssh_public_key]
stackscript_id = linode_stackscript.essentials.id
stackscript_data = {
NAME = var.web_label
GROUP = var.group
SSH_USER = var.user
USER_PASSWORD = var.user_password
SSH_PUBLIC_KEY = var.ssh_public_key
SSH_PORT = var.ssh_port
TIMEZONE = var.timezone
ADD_CLOUDFLARE_IPS = var.add_cloudflare_ips
}
lifecycle {
ignore_changes = [
root_pass,
stackscript_id,
stackscript_data,
]
}
}
resource "linode_instance" "services" {
label = var.services_label
region = var.region
type = var.instance_type
image = var.image
root_pass = var.root_pass
authorized_keys = [var.ssh_public_key]
stackscript_id = linode_stackscript.services.id
stackscript_data = {
NAME = var.services_label
GROUP = var.group
SSH_USER = var.user
USER_PASSWORD = var.user_password
SSH_PUBLIC_KEY = var.ssh_public_key
SSH_PORT = var.ssh_port
TIMEZONE = var.timezone
ADD_CLOUDFLARE_IPS = var.add_cloudflare_ips
}
lifecycle {
ignore_changes = [
root_pass,
stackscript_id,
stackscript_data,
]
}
}
resource "cloudflare_record" "root_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "@"
type = "A"
content = sort(tolist(linode_instance.web.ipv4))[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "root_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "@"
type = "AAAA"
content = split("/", linode_instance.web.ipv6)[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "www_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "www"
type = "A"
content = sort(tolist(linode_instance.web.ipv4))[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "www_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "www"
type = "AAAA"
content = split("/", linode_instance.web.ipv6)[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "services_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "services"
type = "A"
content = sort(tolist(linode_instance.services.ipv4))[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "services_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "services"
type = "AAAA"
content = split("/", linode_instance.services.ipv6)[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "grafana_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "grafana"
type = "A"
content = sort(tolist(linode_instance.services.ipv4))[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "grafana_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "grafana"
type = "AAAA"
content = split("/", linode_instance.services.ipv6)[0]
ttl = 1
proxied = true
}
resource "cloudflare_record" "auth_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "auth"
type = "A"
content = sort(tolist(linode_instance.services.ipv4))[0]
ttl = 1
proxied = false
}
resource "cloudflare_record" "auth_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "auth"
type = "AAAA"
content = split("/", linode_instance.services.ipv6)[0]
ttl = 1
proxied = false
}
resource "cloudflare_record" "git_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "git"
type = "A"
content = sort(tolist(linode_instance.services.ipv4))[0]
ttl = 1
proxied = false
}
resource "cloudflare_record" "git_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "git"
type = "AAAA"
content = split("/", linode_instance.services.ipv6)[0]
ttl = 1
proxied = false
}
resource "cloudflare_record" "mail_a" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "mail"
type = "A"
content = sort(tolist(linode_instance.web.ipv4))[0]
ttl = var.cloudflare_ttl
proxied = false
}
resource "cloudflare_record" "mail_aaaa" {
count = var.enable_cloudflare_dns ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "mail"
type = "AAAA"
content = split("/", linode_instance.web.ipv6)[0]
ttl = var.cloudflare_ttl
proxied = false
}
resource "cloudflare_record" "services_wildcard_a" {
count = (var.enable_cloudflare_dns && var.enable_services_wildcard) ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "*.services"
type = "A"
content = sort(tolist(linode_instance.services.ipv4))[0]
ttl = 1
proxied = false
}
resource "cloudflare_record" "services_wildcard_aaaa" {
count = (var.enable_cloudflare_dns && var.enable_services_wildcard) ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "*.services"
type = "AAAA"
content = split("/", linode_instance.services.ipv6)[0]
ttl = 1
proxied = false
}
resource "cloudflare_record" "blizzard_cname" {
count = (var.enable_cloudflare_dns && length(var.object_storage_bucket) > 0 && length(var.object_storage_region) > 0) ? 1 : 0
zone_id = var.cloudflare_zone_id
name = "blizzard"
type = "CNAME"
content = "${var.object_storage_bucket}.${var.object_storage_region}.linodeobjects.com"
ttl = var.cloudflare_ttl
proxied = false
}
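One caveat on the `sort(tolist(linode_instance.*.ipv4))[0]` pattern used throughout: Terraform's `sort()` orders strings lexicographically, so on an instance with more than one IPv4 address the `[0]` element is not necessarily the numerically lowest. A shell analogue with illustrative addresses:

```shell
#!/usr/bin/env bash
# Lexicographic string sort, matching what terraform's sort() does
# when handed IP addresses as strings.
set -euo pipefail
first=$(printf '%s\n' '9.0.0.1' '100.0.0.1' | sort | head -n 1)
echo "${first}"   # prints 100.0.0.1, since "1" sorts before "9"
```

With a single public IPv4 per Nanode this is harmless; it only matters if extra addresses are ever attached.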

terraform/outputs.tf Normal file

@@ -0,0 +1,31 @@
output "web_ip" {
value = sort(tolist(linode_instance.web.ipv4))[0]
}
output "web_ipv6" {
value = linode_instance.web.ipv6
}
output "web_status" {
value = linode_instance.web.status
}
output "services_ip" {
value = sort(tolist(linode_instance.services.ipv4))[0]
}
output "services_ipv6" {
value = linode_instance.services.ipv6
}
output "services_status" {
value = linode_instance.services.status
}
output "essentials_stackscript_id" {
value = linode_stackscript.essentials.id
}
output "services_stackscript_id" {
value = linode_stackscript.services.id
}

terraform/variables.tf Normal file

@@ -0,0 +1,109 @@
variable "linode_token" {
type = string
sensitive = true
}
variable "region" {
type = string
default = "ca-central"
}
variable "instance_type" {
type = string
default = "g6-nanode-1"
}
variable "image" {
type = string
default = "linode/debian13"
}
variable "ssh_public_key" {
type = string
}
variable "root_pass" {
type = string
sensitive = true
}
variable "web_label" {
type = string
default = "web"
}
variable "services_label" {
type = string
default = "services"
}
variable "user" {
type = string
default = "ansible"
}
variable "user_password" {
type = string
sensitive = true
}
variable "group" {
type = string
default = ""
}
variable "ssh_port" {
type = number
default = 22
}
variable "timezone" {
type = string
default = "America/Toronto"
}
variable "add_cloudflare_ips" {
type = bool
default = false
}
variable "cloudflare_api_token" {
type = string
sensitive = true
default = ""
}
variable "cloudflare_zone_id" {
type = string
default = ""
}
variable "enable_cloudflare_dns" {
type = bool
default = false
}
variable "enable_services_wildcard" {
type = bool
default = false
}
variable "cloudflare_ttl" {
type = number
default = 300
}
variable "cloudflare_proxied" {
type = bool
default = false
}
variable "object_storage_bucket" {
type = string
default = ""
}
variable "object_storage_region" {
type = string
default = "us-east-1"
}