โ—index ๐Ÿ”newt-egress.md ๐Ÿท๏ธtags ๐Ÿ‘คabout

🔐 Locking Down Newt: Egress Controls for Tunnel Agents You Can Actually Trust

Seventh post in the k3s homelab series. Previously: monitoring, GitOps, automation, scheduling, LUKS + Dropbear, and CGNAT tunneling.

In the first post I set up Pangolin and Newt to tunnel 40 services through a VPS. It works great 🚀. But there's a problem I glossed over: the newt agent can reach anything on your network.

Think about that for a second. Newt establishes a WireGuard tunnel back to a Pangolin server. Whatever Pangolin tells it to expose, it exposes. If someone compromises your Pangolin instance, or if there's a bug in newt, or if you misconfigure a resource, the tunnel agent becomes an open door into your entire LAN or cluster network.

This is fine for my own self-hosted Pangolin where I control both sides. But what about hosted Pangolin services where someone else runs the server? You're installing a tunnel agent that phones home to infrastructure you don't control. Without egress restrictions, that agent can reach every pod in your cluster, every device on your LAN, every internal service.

The fix is simple: don't give newt access to anything you didn't explicitly allow.

The threat model

Let's be concrete about what we're defending against:

  1. Compromised Pangolin server: an attacker gains control of the Pangolin instance and pushes malicious resource configurations to your newt agent, telling it to proxy traffic to internal services you never intended to expose
  2. Newt vulnerability: a bug in the newt agent itself that allows arbitrary network access from the tunnel
  3. Misconfiguration: you accidentally create a Pangolin resource pointing to an internal service (database, admin panel, monitoring) that should never be internet-facing
  4. Overly broad access: the default newt deployment can reach anything, so a single mistake cascades

The principle is straightforward: newt should only be able to talk to DNS, the Pangolin server, and the specific services you're exposing. Everything else gets dropped.

I'll show two approaches: NetworkPolicy for Kubernetes, and iptables for Docker Compose.

Kubernetes: NetworkPolicy

If you're running newt in a Kubernetes cluster (k3s, EKS, GKE, anything with a NetworkPolicy controller), this is the clean approach. NetworkPolicies are declarative, version-controlled, and the CNI enforces them at the network level.

The strategy

Deploy newt in its own namespace with a NetworkPolicy that:

  1. Denies all ingress and egress by default
  2. Allows DNS: egress to kube-system on port 53 (CoreDNS)
  3. Allows Pangolin: egress to the Pangolin server IP (all ports, since it uses HTTPS + WebSocket + WireGuard)
  4. Allows specific services: egress to the exact pods/namespaces/IPs you're exposing, on the exact ports

Anything not in that list? Dropped by the CNI. Newt can't reach your database namespace, your monitoring stack, or your CI runner: nothing you didn't explicitly punch a hole for.

The Helm chart

I built a wrapper chart that pulls in the official newt chart as a dependency and adds the NetworkPolicy on top. Here's the structure:

```text
chart/
  Chart.yaml          # depends on fosrl/newt
  values.yaml         # NetworkPolicy config + newt subchart values
  templates/
    _helpers.tpl
    networkpolicy.yaml
    resources.yaml    # Secret + PVC
```

The Chart.yaml pulls in the upstream newt chart:

```yaml
apiVersion: v2
name: newt-tunnel
description: Newt tunnel agent with NetworkPolicy isolation
type: application
version: 0.1.0
dependencies:
  - name: newt
    version: "*"
    repository: "https://charts.fossorial.io"
```
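Since newt comes in as a dependency rather than vendored files, pull it into charts/ once before the first install (standard Helm dependency workflow):

```bash
# Fetch the newt subchart declared in Chart.yaml into charts/
❯❯❯ helm dependency update .
```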

The NetworkPolicy

This is the core of the whole thing:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: newt-tunnel-netpol
spec:
  podSelector:
    matchLabels:
      newt.instance: newt
  policyTypes:
    - Ingress
    - Egress
  egress:
    # DNS - CoreDNS in kube-system
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    # Pangolin server - all traffic (HTTPS, WebSocket, WireGuard)
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32  # replace with your Pangolin server IP
    # Gitea in another namespace
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gitea
          podSelector:
            matchLabels:
              app.kubernetes.io/component: gitea
      ports:
        - port: 3000
          protocol: TCP
  ingress: []  # deny all ingress
```

The podSelector matches the newt pod using the label the official chart sets. The policyTypes list includes both Ingress and Egress, which means the default for both is deny all; only the explicit rules above are allowed.

The targetPort gotcha

This trips everyone up. When you configure a Pangolin resource in the UI, you use the Kubernetes service DNS name and service port:

```text
gitea-http.gitea.svc.cluster.local:3000
```

But in the NetworkPolicy, you need the container targetPort, not the service port. Why? Because kube-proxy rewrites the destination via DNAT before the NetworkPolicy is evaluated:

```text
newt pod → ClusterIP:80 → [kube-proxy DNAT] → pod IP:3000 → [NetworkPolicy evaluates here]
```

By the time the CNI checks the NetworkPolicy, the destination is already pod-ip:3000, not clusterip:80. If your NetworkPolicy only allows port 80, the traffic gets dropped.
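To make the distinction concrete, here's a hypothetical Service where the two ports differ (your actual Service may map them 1:1, in which case the gotcha stays invisible until it doesn't):

```yaml
# Hypothetical Service where service port and targetPort differ.
apiVersion: v1
kind: Service
metadata:
  name: gitea-http
  namespace: gitea
spec:
  selector:
    app.kubernetes.io/component: gitea
  ports:
    - port: 80          # service port: use this in the Pangolin resource
      targetPort: 3000  # container port: use this in the NetworkPolicy
```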

Find the targetPort:

โฏ_bashโ€บ2 lines
  1โฏโฏโฏ kubectl get svc gitea-http -n gitea -o jsonpath='{.spec.ports[0].targetPort}'
  23000

Use that in your NetworkPolicy. Use the service port in Pangolin. Two different ports for two different things.

Configuring services in values.yaml

The chart makes adding services declarative:

```yaml
networkPolicy:
  enabled: true
  pangolinIP: 203.0.113.10/32  # your Pangolin server IP

  extraEgress:
    # Kubernetes service in another namespace
    - name: gitea
      namespace: gitea
      labels:
        app.kubernetes.io/component: gitea
      port: 3000     # targetPort!
      protocol: TCP

    # Home Assistant in its own namespace
    - name: homeassistant
      namespace: homeassistant
      port: 8123
      protocol: TCP

    # External device on your LAN (not in the cluster)
    - name: printer
      cidr: 10.0.1.200/32
      port: 631
      protocol: TCP
```

Each entry generates a separate egress rule in the NetworkPolicy template. Entries with namespace (plus optional labels) target cluster-internal services; entries with cidr target external IPs on your LAN.
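The loop that turns those entries into rules looks roughly like this — a simplified sketch of the egress section of templates/networkpolicy.yaml, not the full file:

```yaml
# templates/networkpolicy.yaml (simplified sketch of the egress loop)
    {{- range .Values.networkPolicy.extraEgress }}
    # {{ .name }}
    - to:
        {{- if .cidr }}
        - ipBlock:
            cidr: {{ .cidr }}
        {{- else }}
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: {{ .namespace }}
          {{- with .labels }}
          podSelector:
            matchLabels:
              {{- toYaml . | nindent 14 }}
          {{- end }}
        {{- end }}
      ports:
        - port: {{ .port }}
          protocol: {{ .protocol | default "TCP" }}
    {{- end }}
```

Then deploy: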

โฏ_bashโ€บ4 lines
  1โฏโฏโฏ helm install newt-tunnel . --namespace newt-tunnel --create-namespace
  2โฏโฏโฏ kubectl get networkpolicy -n newt-tunnel
  3NAME                  POD-SELECTOR        AGE
  4newt-tunnel-netpol    newt.instance=newt   5s

Verifying it works

The easiest test: exec into the newt pod and try to reach something that should be blocked:

โฏ_bashโ€บ8 lines
  1# This should work (allowed service)
  2โฏโฏโฏ kubectl exec -n newt-tunnel deploy/newt -- wget -qO- --timeout=3 http://gitea-http.gitea:3000
  3# (returns HTML)
  4
  5# This should fail (not in the NetworkPolicy)
  6โฏโฏโฏ kubectl exec -n newt-tunnel deploy/newt -- wget -qO- --timeout=3 http://grafana.monitoring:3000
  7wget: download timed out
  8command terminated with exit code 1

If the second command times out, your NetworkPolicy is working. Newt can't reach Grafana even though it's in the same cluster.

Docker Compose: iptables

If you're running newt outside Kubernetes (a NAS, a Raspberry Pi, a VM), Docker Compose with iptables is the way to go. It's more manual than NetworkPolicy, but equally effective.

The strategy

  1. Run newt in a Docker Compose stack with a dedicated network and a fixed IP
  2. Create a custom iptables chain that only applies to traffic from that IP
  3. Allow DNS, the Pangolin server, and your specific services
  4. Drop everything else

The key insight: we use a custom chain (NEWT-EGRESS) jumped to from DOCKER-USER, so we never touch Docker's own iptables rules. Your existing Docker networking keeps working. Only traffic from the newt container IP hits our chain.
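Distilled to its essentials, the wiring is just two commands (the full script below adds idempotency, cleanup, and the allow rules):

```bash
# Create our chain, then route only traffic sourced from the newt
# container IP (fixed in the compose file below) through it.
iptables -N NEWT-EGRESS
iptables -I DOCKER-USER -s 172.30.0.10 -j NEWT-EGRESS
```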

The Docker Compose file

```yaml
services:
  newt:
    image: docker.io/fosrl/newt:latest
    container_name: newt-tunnel
    restart: unless-stopped
    cap_add:
      - NET_RAW
    environment:
      - PANGOLIN_ENDPOINT=https://your-pangolin-server.example.com
      - NEWT_ID=your-newt-id
      - NEWT_SECRET=your-newt-secret
    volumes:
      - newt-config:/root/.config/newt-client
    networks:
      newt:
        ipv4_address: 172.30.0.10  # fixed IP for stable iptables rules

volumes:
  newt-config:

networks:
  newt:
    ipam:
      config:
        - subnet: 172.30.0.0/24
```

The fixed IP (172.30.0.10) is important. Without it, Docker assigns a random IP on each restart, and your iptables rules break.
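After bringing the stack up, it's worth confirming the container actually got that address:

```bash
❯❯❯ docker compose up -d
❯❯❯ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newt-tunnel
172.30.0.10
```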

The iptables script

Here's the script that creates the egress rules. It's designed to be idempotent: re-running it flushes only our custom chain, never DOCKER-USER:

โฏ_bashโ€บ84 lines
  1#!/usr/bin/env bash
  2set -euo pipefail
  3
  4CHAIN="NEWT-EGRESS"
  5NEWT_IP="172.30.0.10"
  6PANGOLIN_IP="203.0.113.10"  # replace with your Pangolin server IP
  7ALLOWED_TARGETS=()
  8REMOVE=false
  9DRY_RUN=false
 10
 11while [[ $# -gt 0 ]]; do
 12    case "$1" in
 13        --newt-ip)     NEWT_IP="$2"; shift 2 ;;
 14        --pangolin-ip) PANGOLIN_IP="$2"; shift 2 ;;
 15        --allow)       ALLOWED_TARGETS+=("$2"); shift 2 ;;
 16        --remove)      REMOVE=true; shift ;;
 17        --dry-run)     DRY_RUN=true; shift ;;
 18        *)             echo "Unknown option: $1"; exit 1 ;;
 19    esac
 20done
 21
 22run() {
 23    if $DRY_RUN; then echo "  [dry-run] iptables $*"
 24    else iptables "$@"; fi
 25}
 26
 27# Resolve hostname to IP (pass-through if already an IP)
 28resolve_host() {
 29    local host="$1"
 30    if [[ "$host" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
 31        echo "$host"; return
 32    fi
 33    local resolved
 34    resolved=$(getent ahosts "$host" 2>/dev/null | awk '/STREAM/{print $1; exit}')
 35    if [[ -z "$resolved" ]]; then
 36        echo "Error: could not resolve '$host'" >&2; exit 1
 37    fi
 38    echo "  WARNING: resolved '$host' -> $resolved (re-run if IP changes)" >&2
 39    echo "$resolved"
 40}
 41
 42# Remove mode
 43if $REMOVE; then
 44    echo "Removing $CHAIN..."
 45    iptables -C DOCKER-USER -s "$NEWT_IP" -j "$CHAIN" 2>/dev/null && run -D DOCKER-USER -s "$NEWT_IP" -j "$CHAIN"
 46    iptables -L "$CHAIN" -n >/dev/null 2>&1 && run -F "$CHAIN" && run -X "$CHAIN"
 47    echo "Done."; exit 0
 48fi
 49
 50# Create or flush custom chain
 51if iptables -L "$CHAIN" -n >/dev/null 2>&1; then
 52    run -F "$CHAIN"
 53else
 54    run -N "$CHAIN"
 55fi
 56
 57# Jump from DOCKER-USER (only newt traffic)
 58if ! iptables -C DOCKER-USER -s "$NEWT_IP" -j "$CHAIN" 2>/dev/null; then
 59    run -I DOCKER-USER -s "$NEWT_IP" -j "$CHAIN"
 60fi
 61
 62# Allow established connections
 63run -A "$CHAIN" -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
 64
 65# Allow DNS
 66run -A "$CHAIN" -p udp --dport 53 -j RETURN
 67run -A "$CHAIN" -p tcp --dport 53 -j RETURN
 68
 69# Allow Pangolin server
 70run -A "$CHAIN" -d "$PANGOLIN_IP" -j RETURN
 71
 72# Allow explicit targets
 73for target in "${ALLOWED_TARGETS[@]}"; do
 74    IFS=':' read -r host port proto <<< "$target"
 75    ip=$(resolve_host "$host")
 76    echo "  Allow $proto to $ip:$port"
 77    run -A "$CHAIN" -d "$ip" -p "$proto" --dport "$port" -j RETURN
 78done
 79
 80# Drop everything else
 81run -A "$CHAIN" -j DROP
 82
 83echo "Rules applied:"
 84iptables -L "$CHAIN" -n -v --line-numbers

Usage

โฏ_bashโ€บ14 lines
  1# Allow newt to reach Gitea on your LAN
  2โฏโฏโฏ sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 --allow 10.0.1.50:3000:tcp
  3
  4# Multiple services
  5โฏโฏโฏ sudo ./iptables-rules.sh \
  6    --pangolin-ip 203.0.113.10 \
  7    --allow 10.0.1.50:3000:tcp \
  8    --allow 10.0.1.100:8123:tcp
  9
 10# Hostname instead of IP (resolved at apply time)
 11โฏโฏโฏ sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 --allow gitea.local:3000:tcp
 12
 13# Preview without applying
 14โฏโฏโฏ sudo ./iptables-rules.sh --pangolin-ip 203.0.113.10 --allow 10.0.1.50:3000:tcp --dry-run

How the chain works

Let's trace a packet from the newt container:

```text
newt container (172.30.0.10)
  → DOCKER-USER chain
    → matches source IP → jump to NEWT-EGRESS
      → ESTABLISHED,RELATED? → RETURN (allow)
      → DNS port 53? → RETURN (allow)
      → destination is Pangolin IP? → RETURN (allow)
      → destination matches --allow rule? → RETURN (allow)
      → DROP (everything else)
```

RETURN means "go back to DOCKER-USER and continue with the rest of Docker's rules." DROP means the packet is silently discarded. The newt container never sees a rejection; the connection just times out.
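To watch the rules matching in practice, check the per-rule packet counters; hits on the final DROP rule mean newt tried to reach something outside the allow list:

```bash
# -v shows packet/byte counters per rule; watch the DROP line grow
❯❯❯ sudo iptables -L NEWT-EGRESS -n -v --line-numbers
```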

Hostname resolution

The --allow flag accepts hostnames in addition to IPs. When you pass gitea.local:3000:tcp, the script resolves it via getent ahosts at apply time and writes the resolved IP into the iptables rule.

A couple things to keep in mind:

  • iptables only understands IPs โ€” the resolution is a point-in-time snapshot
  • If the target's IP changes (DHCP renewal, DNS update), the rule goes stale โ€” re-run the script
  • The script prints a warning for each resolved hostname so you know what IP it's using
  • For production, prefer static IPs

Persisting across reboots

iptables rules don't survive a reboot. Options:

  • iptables-persistent (Debian/Ubuntu): sudo apt install iptables-persistent && sudo netfilter-persistent save
  • systemd service: a oneshot unit that runs the script at boot
  • cron: @reboot /path/to/iptables-rules.sh --pangolin-ip ... --allow ...

Cleanup

โฏ_bashโ€บ5 lines
  1# Remove iptables rules
  2โฏโฏโฏ sudo ./iptables-rules.sh --remove
  3
  4# Stop newt
  5โฏโฏโฏ docker compose down

Comparing the two approaches

|                          | Kubernetes NetworkPolicy        | Docker iptables            |
|--------------------------|---------------------------------|----------------------------|
| Enforcement              | CNI-level (kernel)              | iptables (kernel)          |
| Declarative              | Yes (YAML in git)               | Script-based               |
| Survives restart         | Yes (K8s reconciles)            | Needs persistence          |
| Targets cluster services | namespaceSelector + podSelector | IP:port only               |
| Targets LAN devices      | ipBlock CIDR                    | IP:port                    |
| Port gotcha              | targetPort, not service port    | Direct port                |
| Idempotent               | Yes                             | Yes (flushes custom chain) |

Both approaches achieve the same result: newt can only reach what you explicitly allow. Pick the one that matches your deployment.

What does the final state look like?

After applying egress controls, here's what newt can and can't do:

| Traffic                     | Allowed? | Why                              |
|-----------------------------|----------|----------------------------------|
| DNS queries                 | Yes      | Needed for service discovery     |
| Pangolin server (all ports) | Yes      | Control plane + WireGuard tunnel |
| gitea-http.gitea:3000       | Yes      | Explicitly allowed               |
| homeassistant:8123          | Yes      | Explicitly allowed               |
| grafana.monitoring:3000     | No       | Not in the allow list            |
| postgres.databases:5432     | No       | Not in the allow list            |
| 10.0.1.1 (your router)      | No       | Not in the allow list            |
| Anything else               | No       | Default deny                     |

If someone compromises the Pangolin server and pushes a resource pointing to your database, the very first TCP SYN is dropped at the kernel level and the connection never gets established. The tunnel agent becomes a controlled pipe, not an open door.

Should you bother?

If you run your own Pangolin server on your own infrastructure and you're the only admin, the risk is lower. You control both sides.

But if you're using a hosted Pangolin service, or if you share admin access, or if your Pangolin instance is internet-facing (which it has to be), then yes, absolutely lock down newt's egress. It takes 10 minutes to set up, and it's the difference between "a compromise of Pangolin gives access to one service" and "a compromise of Pangolin gives access to everything" 🔒.

Defense in depth isn't paranoia; it's engineering.

Discuss or comment on Mastodon →