GitOps at Home: ArgoCD + Gitea and the Monorepo That Runs Everything
Fifth post in the k3s homelab series. Previously: CGNAT tunneling, LUKS + Dropbear + RAID6, multi-arch scheduling, and self-healing automation.
Every service in my cluster lives in a single git repo. Push to main, ArgoCD syncs, the cluster converges. No SSH, no manual helm commands, no "I'll fix it in prod". Here's how the monorepo works and why I chose this shape.
The repo structure
One repo, everything in it:
```
k3s.crisidev.org/
├── charts/              # Helm charts, one per service domain
│   ├── system/          # Bootstrap: MetalLB, cert-manager, NFS CSI, descheduler
│   ├── monitoring/      # Prometheus, Alertmanager, Loki, Grafana, Alloy
│   ├── arr/             # Sonarr, Radarr, Prowlarr, Bazarr (reusable macros)
│   ├── jellyfin/        # Media server + Jellyseerr
│   ├── pangolin/        # Edge stack: pangolin, gerbil, traefik-edge, CrowdSec
│   ├── gitops/          # ArgoCD + Gitea
│   └── ...              # 20+ more charts
├── clusters/home/       # ArgoCD ApplicationSets
├── provision/           # Ansible playbooks for bare-metal setup
├── hack/                # Deployment scripts, escape hatches
└── secrets.yaml         # SOPS-encrypted secrets (single file)
```
Each chart maps to a Kubernetes namespace. charts/monitoring/ deploys to the monitoring namespace. charts/arr/ deploys to arr. The system chart is special: it deploys to kube-system and bootstraps cluster-level infrastructure that everything else depends on.
Why a monorepo
I tried multi-repo once. Separate repos for charts, provisioning, and config. It lasted two weeks. The problem: a single change often touches multiple layers. Adding a new service means a Helm chart, an ArgoCD Application, a secrets entry, maybe a Grafana dashboard and a Glance bookmark. In a monorepo, that's one commit. In multi-repo, it's four PRs across four repos that need to land in the right order.
The monorepo also makes grep work. "Where is this secret referenced?" One search, full answer. "What changed when the media pipeline broke?" One git log, full picture.
ArgoCD: the sync loop
ArgoCD watches the repo and reconciles the cluster to match. Each service gets an ArgoCD Application, defined as YAML in clusters/home/:
```yaml
# clusters/home/apps/jellyfin.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jellyfin
  namespace: argocd
spec:
  source:
    repoURL: <gitea-repo-url>
    path: charts/jellyfin
    helm:
      valueFiles:
        - values.yaml
        - secrets://../../secrets.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: jellyfin
```
The secrets:// prefix tells the helm-secrets ArgoCD plugin to decrypt the secrets file before passing it to Helm. ArgoCD sees the decrypted values, Helm renders the templates, and the manifests get applied.
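If you're wiring this up yourself: the helm-secrets docs have you install sops, age, and the plugin into argocd-repo-server, then allow the scheme in argocd-cm. A minimal sketch, assuming the age key is already mounted in the repo server:

```yaml
# argocd-cm: allow the secrets:// value-file scheme (sketch; assumes
# helm-secrets, sops, and the age key are installed in argocd-repo-server)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  helm.valuesFileSchemes: secrets+gpg-import, secrets+age-import, secrets, https
```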
Applications are organized in three tiers with explicit dependencies:
```
Tier 1 (infrastructure): cert-manager, database
          ↓
Tier 2 (platform): monitoring, gitops, pangolin, garage
          ↓
Tier 3 (apps): jellyfin, arr, qbittorrent, tdarr, glance, ...
```
Tier 3 apps can't sync until Tier 2 is healthy, and Tier 2 waits for Tier 1. This prevents the classic "app deployed before its database exists" race condition.
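One way to express this kind of ordering with ArgoCD is sync-wave annotations on the Application manifests; a minimal sketch, assuming an app-of-apps root syncs them (wave numbers illustrative):

```yaml
# sketch: tier ordering via sync waves, assuming the Application manifests
# are themselves synced by a parent app-of-apps (wave values illustrative)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jellyfin
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "3"   # Tier 3: applied after waves 1 and 2
```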
Gitea: self-hosted git
I run Gitea in the cluster for git hosting. It's lightweight (a single Go binary), supports organizations and teams, and, most importantly, its built-in Actions CI drives multi-arch container image builds on native runners.
Multi-arch CI with native runners
Each architecture gets its own Gitea Actions runner, running on a node of that architecture:
```yaml
# Runner per architecture (DinD sidecar pattern)
runners:
  amd64:
    nodeSelector:
      kubernetes.io/arch: amd64
    labels:
      - linux-amd64   # matches runs-on in the workflow below
  arm64:
    nodeSelector:
      kubernetes.io/arch: arm64
    labels:
      - linux-arm64
```
A typical CI workflow builds on both runners in parallel, pushes arch-specific tags, then merges them into a multi-arch manifest:
```yaml
# .gitea/workflows/build.yaml (simplified)
jobs:
  build:
    strategy:
      matrix:
        arch: [amd64, arm64]
    runs-on: linux-${{ matrix.arch }}
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ matrix.arch }} .
      - run: docker push myapp:${{ matrix.arch }}

  manifest:
    needs: build
    runs-on: linux-amd64
    steps:
      - run: crane index append -t myapp:latest -m myapp:amd64 -m myapp:arm64
```
No QEMU emulation, no cross-compilation. Each runner builds natively on its own architecture. The crane tool merges the per-arch images into a single manifest that works on any node.
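A quick sanity check that the merged index really covers both platforms (assuming jq is available):

```bash
# list the platforms contained in the multi-arch index
crane manifest myapp:latest | jq '.manifests[].platform'
```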
The bootstrap problem
There's a chicken-and-egg issue: ArgoCD deploys charts, but ArgoCD itself is a chart. Who deploys ArgoCD?
The system chart. It's the one chart that's always deployed manually.
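Under the hood this is just helm-secrets driving Helm against the cluster directly, since ArgoCD isn't running yet. A minimal sketch, with flags assumed rather than copied from the real hack/ script:

```bash
# bootstrap: install the system chart without ArgoCD in the picture
# (illustrative; the actual hack/ script may use different flags)
helm secrets upgrade --install system charts/system \
  --namespace kube-system \
  -f charts/system/values.yaml \
  -f secrets.yaml
```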
The system chart bootstraps everything ArgoCD needs: the ArgoCD subchart itself, cert-manager, MetalLB, the NFS CSI driver, priority classes, the descheduler. Once the system chart is up, ArgoCD takes over and manages everything else, including itself (it syncs its own chart from the gitops Application).
This means the cluster can be rebuilt from scratch with a single command after the nodes are provisioned: install the system chart, wait for ArgoCD to sync, done.
Encrypted secrets
All secrets live in a single SOPS-encrypted secrets.yaml at the repo root. One file, one encryption key, one place to look. SOPS encrypts values but leaves keys in plaintext, so git diff still shows you which secrets changed even though you can't read the values.
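For a feel of what a diff shows, the encrypted file looks roughly like this (service name and ciphertext abbreviated):

```yaml
# keys stay greppable, values are ciphertext
jellyfin:
  apiKey: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
  age:
    - recipient: age1...
```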
The system chart renders the decrypted secrets into a Kubernetes Secret that ArgoCD's helm-secrets plugin can reference. When I update a secret, I re-run install system to push the new values, and ArgoCD picks them up on the next sync.
The escape hatch
GitOps is great until it isn't. Sometimes you need to deploy right now, not wait for a commit → push → sync cycle. The hack/ directory has scripts for that.
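They're thin wrappers over helm secrets; the moral equivalent of the following (script names and flags are illustrative, not the literal contents of hack/):

```bash
# deploy a chart right now, bypassing the ArgoCD loop (illustrative)
helm secrets upgrade --install jellyfin charts/jellyfin \
  --namespace jellyfin -f charts/jellyfin/values.yaml -f secrets.yaml

# render a chart locally to see what Helm would apply (illustrative)
helm secrets template charts/jellyfin \
  -f charts/jellyfin/values.yaml -f secrets.yaml
```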
These scripts use helm secrets directly: same encrypted secrets file, same chart, just without the ArgoCD loop. After using the escape hatch, ArgoCD will show the release as "OutOfSync" until the next commit catches up. That's fine: it's a signal, not a problem.
I use the escape hatch for two things: debugging (template a chart to see what Helm would render) and emergencies (deploy a fix before pushing to git). For everything else, push to main and let ArgoCD handle it.
The result
27 ArgoCD Applications, all synced from a single monorepo. Three-tier dependency ordering. Multi-arch CI builds on native runners. Encrypted secrets in git. A bootstrap chart that can rebuild the cluster from scratch. And an escape hatch for when you need to move faster than GitOps allows.
The monorepo isn't the trendy choice. Platform engineering blogs will tell you to split everything into microservices repos with separate release cycles. But for a homelab where one person manages everything, the monorepo is the right tool. One repository means one grep and one git log to audit a change, which is simpler than coordinating across a dozen repos for a cluster with only one operator.
Next up: the monitoring stack, with Prometheus, Loki, Grafana, and the alerting pipeline that pages my phone.