Locking Down Newt: Egress Controls for Tunnel Agents You Can Actually Trust
Seventh post in the k3s homelab series. Previously: monitoring, GitOps, automation, scheduling, LUKS + Dropbear, and CGNAT tunneling.
In the first post I set up Pangolin and Newt to tunnel 40 services through a VPS. It works great. But there's a problem I glossed over: the newt agent can reach anything on your network.
Think about that for a second. Newt establishes a WireGuard tunnel back to a Pangolin server. Whatever Pangolin tells it to expose, it exposes. If someone compromises your Pangolin instance, or if there's a bug in newt, or if you misconfigure a resource, the tunnel agent becomes an open door into your entire LAN or cluster network.
This is fine for my own self-hosted Pangolin where I control both sides. But what about hosted Pangolin services where someone else runs the server? You're installing a tunnel agent that phones home to infrastructure you don't control. Without egress restrictions, that agent can reach every pod in your cluster, every device on your LAN, every internal service.
The fix is simple: don't give newt access to anything you didn't explicitly allow.
The threat model
Let's be concrete about what we're defending against:
- Compromised Pangolin server: an attacker gains control of the Pangolin instance and pushes malicious resource configurations to your newt agent, telling it to proxy traffic to internal services you never intended to expose
- Newt vulnerability: a bug in the newt agent itself that allows arbitrary network access from the tunnel
- Misconfiguration: you accidentally create a Pangolin resource pointing to an internal service (database, admin panel, monitoring) that should never be internet-facing
- Overly broad access: the default newt deployment can reach anything, so a single mistake cascades
The principle is straightforward: newt should only be able to talk to DNS, the Pangolin server, and the specific services you're exposing. Everything else gets dropped.
I'll show two approaches: NetworkPolicy for Kubernetes, and iptables for Docker Compose.
Kubernetes: NetworkPolicy
If you're running newt in a Kubernetes cluster (k3s, EKS, GKE, anything with a NetworkPolicy controller), this is the clean approach. NetworkPolicies are declarative, version-controlled, and the CNI enforces them at the network level.
The strategy
Deploy newt in its own namespace with a NetworkPolicy that:
- Denies all ingress and egress by default
- Allows DNS: egress to kube-system on port 53 (CoreDNS)
- Allows Pangolin: egress to the Pangolin server IP (all ports, since it uses HTTPS + WebSocket + WireGuard)
- Allows specific services: egress to the exact pods/namespaces/IPs you're exposing, on the exact ports
Anything not in that list? Dropped by the CNI. Newt can't reach your database namespace, your monitoring stack, your CI runner: nothing you didn't explicitly punch a hole for.
The Helm chart
I built a wrapper chart that pulls in the official newt chart as a dependency and adds the NetworkPolicy on top. Here's the structure:
chart/
  Chart.yaml            # depends on fosrl/newt
  values.yaml           # NetworkPolicy config + newt subchart values
  templates/
    _helpers.tpl
    networkpolicy.yaml
    resources.yaml      # Secret + PVC
The Chart.yaml pulls in the upstream newt chart:
apiVersion: v2
name: newt-tunnel
description: Newt tunnel agent with NetworkPolicy isolation
type: application
version: 0.1.0
dependencies:
  - name: newt
    version: "*"
    repository: "https://charts.fossorial.io"
The NetworkPolicy
This is the core of the whole thing:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: newt-tunnel-netpol
spec:
  podSelector:
    matchLabels:
      newt.instance: newt
  policyTypes:
    - Ingress
    - Egress
  egress:
    # DNS: CoreDNS in kube-system
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Pangolin server: all traffic (HTTPS, WebSocket, WireGuard)
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32 # replace with your Pangolin server IP
    # Gitea in another namespace
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gitea
          podSelector:
            matchLabels:
              app.kubernetes.io/component: gitea
      ports:
        - port: 3000
          protocol: TCP
  ingress: [] # deny all ingress
The podSelector matches the newt pod using the label the official chart sets. The policyTypes list includes both Ingress and Egress, which means the default for both is deny all: only the explicit rules above are allowed.
The targetPort gotcha
This trips everyone up. When you configure a Pangolin resource in the UI, you use the Kubernetes service DNS name and service port:
gitea-http.gitea.svc.cluster.local:3000
But in the NetworkPolicy, you need the container targetPort, not the service port. Why? Because kube-proxy rewrites the destination via DNAT before the NetworkPolicy evaluates:
newt pod → ClusterIP:80 → [kube-proxy DNAT] → pod IP:3000 → [NetworkPolicy evaluates here]
By the time the CNI checks the NetworkPolicy, the destination is already pod-ip:3000, not clusterip:80. If your NetworkPolicy only allows port 80, the traffic gets dropped.
Find the targetPort:
kubectl get svc gitea-http -n gitea -o jsonpath='{.spec.ports[0].targetPort}'
# prints the container port (3000 here)
Use that in your NetworkPolicy. Use the service port in Pangolin. Two different ports for two different things.
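To make the two-ports distinction concrete, here's what a Service with differing ports looks like. The names and values are hypothetical, matching the DNAT trace above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gitea-http
  namespace: gitea
spec:
  selector:
    app.kubernetes.io/component: gitea
  ports:
    - port: 80         # service port: what Pangolin resources reference
      targetPort: 3000 # container port: what the NetworkPolicy must allow
```

If `targetPort` is omitted, it defaults to the same value as `port`, which is why this gotcha only bites when the two differ.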
Configuring services in values.yaml
The chart makes adding services declarative:
networkPolicy:
  enabled: true
  pangolinIP: 203.0.113.10/32 # your Pangolin server IP

  extraEgress:
    # Kubernetes service in another namespace
    - name: gitea
      namespace: gitea
      labels:
        app.kubernetes.io/component: gitea
      port: 3000 # targetPort!
      protocol: TCP

    # Home Assistant in its own namespace
    - name: homeassistant
      namespace: homeassistant
      port: 8123
      protocol: TCP

    # External device on your LAN (not in the cluster)
    - name: printer
      cidr: 10.0.1.200/32
      port: 631
      protocol: TCP
Each entry generates a separate egress rule in the NetworkPolicy template. The `namespace` + optional `labels` form targets cluster-internal services; the `cidr` form targets external IPs on your LAN. Then deploy:
helm dependency update ./chart
helm upgrade --install newt-tunnel ./chart \
  --namespace newt --create-namespace \
  -f chart/values.yaml
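Behind the values file, the `networkpolicy.yaml` template is essentially a range loop over `extraEgress`. A simplified sketch of the rule generation (my actual template adds metadata and helpers omitted here):

```yaml
{{- range .Values.networkPolicy.extraEgress }}
- to:
    {{- if .cidr }}
    - ipBlock:
        cidr: {{ .cidr }}
    {{- else }}
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: {{ .namespace }}
      {{- if .labels }}
      podSelector:
        matchLabels:
          {{- toYaml .labels | nindent 10 }}
      {{- end }}
    {{- end }}
  ports:
    - port: {{ .port }}
      protocol: {{ .protocol | default "TCP" }}
{{- end }}
```

Putting the `namespaceSelector` and `podSelector` inside the same `to` element is deliberate: that combination ANDs the two selectors, so only matching pods in the named namespace are reachable.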
Verifying it works
The easiest test: exec into the newt pod and try to reach something that should be blocked:
# This should work (allowed service)
kubectl exec -n newt deploy/newt -- curl -s --max-time 5 \
  http://gitea-http.gitea.svc.cluster.local:3000
# (returns HTML)

# This should fail (not in the NetworkPolicy)
kubectl exec -n newt deploy/newt -- curl -s --max-time 5 \
  http://grafana.monitoring.svc.cluster.local:3000
# (times out)
If the second command times out, your NetworkPolicy is working. Newt can't reach grafana even though it's in the same cluster.
Docker Compose: iptables
If you're running newt outside Kubernetes (a NAS, a Raspberry Pi, a VM), Docker Compose with iptables is the way to go. It's more manual than NetworkPolicy, but equally effective.
The strategy
- Run newt in a Docker Compose stack with a dedicated network and a fixed IP
- Create a custom iptables chain that only applies to traffic from that IP
- Allow DNS, the Pangolin server, and your specific services
- Drop everything else
The key insight: we use a custom chain (NEWT-EGRESS) jumped to from DOCKER-USER, so we never touch Docker's own iptables rules. Your existing Docker networking keeps working. Only traffic from the newt container IP hits our chain.
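Stripped to its essence, the wiring is just three rules. This is a sketch of the firewall config, not the full script; the IP and chain name match the Compose setup in the next section:

```
# Create the custom chain and route newt's traffic through it
iptables -N NEWT-EGRESS
iptables -I DOCKER-USER 1 -s 172.30.0.10 -j NEWT-EGRESS

# Everything the chain doesn't explicitly RETURN gets dropped
iptables -A NEWT-EGRESS -j DROP
```

All other containers skip the jump rule entirely because their source IP doesn't match.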
The Docker Compose file
services:
  newt:
    image: docker.io/fosrl/newt:latest
    container_name: newt-tunnel
    restart: unless-stopped
    cap_add:
      - NET_RAW
    environment:
      - PANGOLIN_ENDPOINT=https://your-pangolin-server.example.com
      - NEWT_ID=your-newt-id
      - NEWT_SECRET=your-newt-secret
    volumes:
      - newt-config:/root/.config/newt-client
    networks:
      newt:
        ipv4_address: 172.30.0.10 # fixed IP for stable iptables rules

volumes:
  newt-config:

networks:
  newt:
    ipam:
      config:
        - subnet: 172.30.0.0/24
The fixed IP (172.30.0.10) is important. Without it, Docker assigns a random IP on each restart, and your iptables rules break.
The iptables script
Here's the script that creates the egress rules. It's designed to be idempotent; re-running it flushes only our custom chain, never DOCKER-USER:
#!/usr/bin/env bash
set -eo pipefail

CHAIN="NEWT-EGRESS"
NEWT_IP="172.30.0.10"
PANGOLIN_IP="203.0.113.10" # replace with your Pangolin server IP
ALLOWED_TARGETS=()
REMOVE=false
DRY_RUN=false

while [[ $# -gt 0 ]]; do
  case "$1" in
    --pangolin-ip) PANGOLIN_IP="$2"; shift 2 ;;
    --newt-ip)     NEWT_IP="$2"; shift 2 ;;
    --allow)       ALLOWED_TARGETS+=("$2"); shift 2 ;; # host:port:proto
    --remove)      REMOVE=true; shift ;;
    --dry-run)     DRY_RUN=true; shift ;;
    *) echo "unknown flag: $1" >&2; exit 1 ;;
  esac
done

# Wrapper so --dry-run prints rules instead of applying them
ipt() {
  if $DRY_RUN; then echo "iptables $*"; else iptables "$@"; fi
}

# Resolve hostname to IP (pass-through if already an IP)
resolve() {
  if [[ "$1" =~ ^[0-9]+(\.[0-9]+){3}$ ]]; then
    echo "$1"
    return
  fi
  local ip
  ip=$(getent ahosts "$1" | awk '{print $1; exit}')
  [[ -n "$ip" ]] || { echo "cannot resolve $1" >&2; exit 1; }
  echo "WARNING: resolved $1 -> $ip (point-in-time snapshot)" >&2
  echo "$ip"
}

# Remove mode
if $REMOVE; then
  ipt -D DOCKER-USER -s "$NEWT_IP" -j "$CHAIN" 2>/dev/null || true
  ipt -F "$CHAIN" 2>/dev/null || true
  ipt -X "$CHAIN" 2>/dev/null || true
  exit 0
fi

# Create or flush custom chain
if iptables -nL "$CHAIN" >/dev/null 2>&1; then
  ipt -F "$CHAIN"
else
  ipt -N "$CHAIN"
fi

# Jump from DOCKER-USER (only newt traffic)
if ! iptables -C DOCKER-USER -s "$NEWT_IP" -j "$CHAIN" 2>/dev/null; then
  ipt -I DOCKER-USER 1 -s "$NEWT_IP" -j "$CHAIN"
fi

# Allow established connections
ipt -A "$CHAIN" -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN

# Allow DNS
ipt -A "$CHAIN" -p udp --dport 53 -j RETURN
ipt -A "$CHAIN" -p tcp --dport 53 -j RETURN

# Allow Pangolin server
ipt -A "$CHAIN" -d "$PANGOLIN_IP" -j RETURN

# Allow explicit targets
for target in "${ALLOWED_TARGETS[@]}"; do
  IFS=':' read -r host port proto <<< "$target"
  ip=$(resolve "$host")
  ipt -A "$CHAIN" -d "$ip" -p "${proto:-tcp}" --dport "$port" -j RETURN
done

# Drop everything else
ipt -A "$CHAIN" -j DROP
Usage
# Allow newt to reach Gitea on your LAN
sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 --allow 10.0.1.50:3000:tcp

# Multiple services
sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 \
  --allow 10.0.1.50:3000:tcp \
  --allow 10.0.1.60:8123:tcp \
  --allow 10.0.1.200:631:tcp

# Hostname instead of IP (resolved at apply time)
sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 --allow gitea.local:3000:tcp

# Preview without applying
sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 --allow 10.0.1.50:3000:tcp --dry-run
How the chain works
Let's trace a packet from the newt container:
newt container (172.30.0.10)
  → DOCKER-USER chain
  → matches source IP → jump to NEWT-EGRESS
  → ESTABLISHED,RELATED? → RETURN (allow)
  → DNS port 53? → RETURN (allow)
  → destination is Pangolin IP? → RETURN (allow)
  → destination matches --allow rule? → RETURN (allow)
  → DROP (everything else)
RETURN means "go back to DOCKER-USER and continue with the rest of Docker's rules." DROP means the packet is silently discarded. The newt container never sees a rejection; the connection just times out.
Hostname resolution
The --allow flag accepts hostnames in addition to IPs. When you pass gitea.local:3000:tcp, the script resolves it via getent ahosts at apply time and writes the resolved IP into the iptables rule.
A couple things to keep in mind:
- iptables only understands IPs: the resolution is a point-in-time snapshot
- If the target's IP changes (DHCP renewal, DNS update), the rule goes stale; re-run the script
- The script prints a warning for each resolved hostname so you know what IP it's using
- For production, prefer static IPs
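The parsing behind `--allow` is just an IFS split on the colon-separated triple, so you can sanity-check a target string in the shell before applying any rules. A standalone snippet, no iptables required:

```shell
# Split an --allow target into its host:port:proto fields
echo "gitea.local:3000:tcp" | {
  IFS=':' read -r host port proto
  echo "host=$host port=$port proto=$proto"
}
```

This is also a quick way to confirm you haven't swapped the port and protocol fields before handing the string to the script.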
Persisting across reboots
iptables rules don't survive a reboot. Options:
- iptables-persistent (Debian/Ubuntu): `sudo apt install iptables-persistent && sudo netfilter-persistent save`
- systemd service: a oneshot unit that runs the script at boot
- cron: `@reboot /path/to/iptables-rules.sh --pangolin-ip ... --allow ...`
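For the systemd route, a minimal oneshot unit could look like this. The path, flags, and unit name are placeholders for your setup:

```
# /etc/systemd/system/newt-egress.service
[Unit]
Description=Newt egress iptables rules
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/path/to/iptables-rules.sh --pangolin-ip 203.0.113.10 --allow 10.0.1.50:3000:tcp

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now newt-egress.service`. Ordering it after docker.service matters: the DOCKER-USER chain only exists once the Docker daemon has set up its iptables rules.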
Cleanup
# Remove iptables rules
sudo ./iptables-rules.sh --remove

# Stop newt
docker compose down
Comparing the two approaches
| | Kubernetes NetworkPolicy | Docker iptables |
|---|---|---|
| Enforcement | CNI-level (kernel) | iptables (kernel) |
| Declarative | Yes (YAML in git) | Script-based |
| Survives restart | Yes (K8s reconciles) | Needs persistence |
| Targets cluster services | namespaceSelector + podSelector | IP:port only |
| Targets LAN devices | ipBlock CIDR | IP:port |
| Port gotcha | targetPort, not service port | Direct port |
| Idempotent | Yes | Yes (flushes custom chain) |
Both approaches achieve the same result: newt can only reach what you explicitly allow. Pick the one that matches your deployment.
What does the final state look like?
After applying egress controls, here's what newt can and can't do:
| Traffic | Allowed? | Why |
|---|---|---|
| DNS queries | Yes | Needed for service discovery |
| Pangolin server (all ports) | Yes | Control plane + WireGuard tunnel |
| gitea-http.gitea:3000 | Yes | Explicitly allowed |
| homeassistant:8123 | Yes | Explicitly allowed |
| grafana.monitoring:3000 | No | Not in the allow list |
| postgres.databases:5432 | No | Not in the allow list |
| 10.0.1.1 (your router) | No | Not in the allow list |
| Anything else | No | Default deny |
If someone compromises the Pangolin server and pushes a resource pointing to your database, the connection gets dropped at the kernel level before it reaches the first TCP SYN. The tunnel agent becomes a controlled pipe, not an open door.
Should you bother?
If you run your own Pangolin server on your own infrastructure and you're the only admin, the risk is lower. You control both sides.
But if you're using a hosted Pangolin service, or if you share admin access, or if your Pangolin instance is internet-facing (which it has to be), then yes: absolutely lock down newt's egress. It takes 10 minutes to set up and it means the difference between "a compromise of Pangolin gives access to one service" and "a compromise of Pangolin gives access to everything."
Defense in depth isn't paranoia; it's engineering.