# Real-World Examples

These walkthroughs have been tested end-to-end against a production headscale 0.28 cluster with the Tailscale Kubernetes operator. Each section shows the exact commands and manifests that were used.
## Prerequisites

- headscale ≥ 0.28 running and reachable
- headtotails deployed with Gateway API / Ingress routing (`/api/v2` and `/oauth/token` → headtotails, `/` → headscale) on a shared hostname
- Tailscale Kubernetes operator installed with `OPERATOR_LOGIN_SERVER` pointing at your headscale URL
- A `Tailnet` CR and `ProxyClass` CR applied (see the Kubernetes guide)
- The `operator-oauth` secret in the `tailscale` namespace matching headtotails' `OAUTH_CLIENT_ID`/`OAUTH_CLIENT_SECRET`
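For orientation, the two CRs might look roughly like the sketch below. This is illustrative only: the `loginUrl` field and the `headscale` ProxyClass name come from this guide, but the API group/version and everything else should be taken from the Kubernetes guide's actual schema.

```yaml
# Illustrative sketch only; consult the Kubernetes guide for the real schema.
apiVersion: tailscale.com/v1alpha1   # assumed API group/version
kind: Tailnet
metadata:
  name: headscale
spec:
  loginUrl: https://headscale.example.com   # your shared hostname
---
apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
  name: headscale   # referenced by tailscale.com/proxy-class annotations
spec: {}            # intentionally minimal; see the ProxyClass notes at the end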
## Expose a Service onto the tailnet

The simplest use case: put an in-cluster Service on your tailnet so any device can reach it by name. The operator handles auth key creation, proxy pod lifecycle, and device registration automatically via headtotails.
### 1. Create the namespace

```shell
kubectl create namespace demo
```
### 2. Deploy the workload and annotated Service

```shell
kubectl apply -f examples/03-expose-service.yaml
```

The manifest deploys a `traefik/whoami` pod and a Service with `tailscale.com` annotations:

```yaml
annotations:
  tailscale.com/expose: "true"
  tailscale.com/proxy-class: headscale
  tailscale.com/tags: "tag:k8s,tag:demo"
  tailscale.com/hostname: "demo-whoami"
```
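In context, those annotations sit on an ordinary ClusterIP Service. A minimal sketch follows; the selector label and port are assumptions for illustration, and the shipped `examples/03-expose-service.yaml` is authoritative:

```yaml
# Sketch of the annotated Service; examples/03-expose-service.yaml is authoritative.
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: demo
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/proxy-class: headscale
    tailscale.com/tags: "tag:k8s,tag:demo"
    tailscale.com/hostname: "demo-whoami"
spec:
  selector:
    app: whoami        # assumed label on the traefik/whoami pod
  ports:
    - port: 80
      targetPort: 80   # traefik/whoami serves HTTP on 80 by default
```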
### 3. Verify

```shell
# The operator creates a proxy StatefulSet — watch for it:
kubectl get pods -A | grep whoami

# Check the device registered in headscale:
kubectl exec -n headscale deployment/headscale -- \
  headscale nodes list | grep whoami
```

You should see the proxy pod `ts-whoami-*` running in the `tailscale` namespace and the device online in headscale with your configured tags.
### What happens under the hood

- The operator calls `POST /api/v2/oauth/token` on headtotails to get a bearer token
- The operator calls `POST /api/v2/tailnet/-/keys` to create a pre-authorized auth key
- The operator spawns a proxy pod with the auth key — it registers with headscale as a WireGuard peer
- The operator calls `POST /api/v2/device/{id}/tags` to apply the requested ACL tags
- Traffic to `demo-whoami` on the tailnet is forwarded to the in-cluster Service
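The same flow can be exercised by hand with curl against a live deployment. This is a hedged sketch: the endpoint paths come from the steps above, but the exact request and response bodies are assumptions modeled on the Tailscale API shape that headtotails emulates.

```shell
# Hand-driven version of the operator's flow (request bodies are assumptions).
HT="https://headscale.example.com"   # your shared hostname

# 1. OAuth client-credentials exchange for a bearer token
TOKEN=$(curl -s -X POST "$HT/api/v2/oauth/token" \
  -d client_id="$OAUTH_CLIENT_ID" \
  -d client_secret="$OAUTH_CLIENT_SECRET" | jq -r .access_token)

# 2. Create a pre-authorized, tagged auth key
curl -s -X POST "$HT/api/v2/tailnet/-/keys" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"capabilities":{"devices":{"create":{"preauthorized":true,"tags":["tag:k8s","tag:demo"]}}}}'

# 3. After the proxy registers, apply tags to the device
curl -s -X POST "$HT/api/v2/device/<ID>/tags" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"tags":["tag:k8s","tag:demo"]}'
```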
## Set up an exit node

Route all traffic from any tailnet device through the cluster. This uses the `Connector` CRD — just a few lines of YAML.
### 1. Apply the Connector

```shell
kubectl apply -f examples/06-exit-node.yaml
```

The manifest:

```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: exit-node
  namespace: demo
spec:
  proxyClass: headscale
  tags:
    - "tag:k8s"
    - "tag:exit"
  exitNode: true
```
### 2. Approve the exit-node routes in headscale

The proxy advertises `0.0.0.0/0` and `::/0`, but headscale requires explicit approval:

```shell
# Find the node ID:
kubectl exec -n headscale deployment/headscale -- \
  headscale nodes list | grep exit-node

# Approve the routes (replace 24 with your node ID):
kubectl exec -n headscale deployment/headscale -- \
  headscale nodes approve-routes \
    --identifier 24 \
    --routes "0.0.0.0/0,::/0"

# Confirm routes are serving:
kubectl exec -n headscale deployment/headscale -- \
  headscale nodes list-routes | grep exit-node
```
### 3. Use the exit node from a client

```shell
# From any device on your tailnet:
tailscale set --exit-node=exit-node-connector

# Verify — your public IP should now be the cluster's egress IP:
curl -s https://ifconfig.me

# Stop using the exit node:
tailscale set --exit-node=
```
## Expose cluster networks via subnet router

Advertise pod and service CIDRs so tailnet devices can reach cluster-internal IPs directly — useful for debugging, monitoring dashboards, or database access.
### 1. Apply the Connector

```shell
kubectl apply -f examples/05-subnet-router.yaml
```

Edit the routes to match your cluster's CIDR ranges:

```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: subnet-router
  namespace: demo
spec:
  proxyClass: headscale
  tags:
    - "tag:k8s"
    - "tag:subnet"
  subnetRouter:
    routes:
      - "10.0.0.0/16"   # pod CIDR
      - "10.96.0.0/12"  # service CIDR
```
### 2. Approve the routes

```shell
# Find the node ID:
kubectl exec -n headscale deployment/headscale -- \
  headscale nodes list | grep subnet-router

# Approve (replace ID and CIDRs with yours):
kubectl exec -n headscale deployment/headscale -- \
  headscale nodes approve-routes \
    --identifier <ID> \
    --routes "10.0.0.0/16,10.96.0.0/12"
```
## Important notes

### ProxyClass configuration

Do not set `TS_EXTRA_ARGS` in your ProxyClass. Newer tailscale proxy images use `TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR` internally, which conflicts with the `TS_EXTRA_ARGS`, `TS_HOSTNAME`, and `TS_AUTHKEY` environment variables. The Tailnet CR's `loginUrl` field handles the login server configuration for all proxies.
### Pre-auth key expiry

headtotails defaults to a 1-hour expiry when the operator creates auth keys without specifying `expirySeconds`. This prevents zero-time expiration issues with headscale's key validation.
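If you create keys by hand and want a longer lifetime, pass `expirySeconds` explicitly. A hedged sketch follows; the request body shape is an assumption modeled on the Tailscale key-creation API, and `$HT`/`$TOKEN` stand for your headtotails URL and a bearer token.

```shell
# Request a key with an explicit 24-hour expiry (body shape is an assumption).
curl -s -X POST "$HT/api/v2/tailnet/-/keys" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"capabilities":{"devices":{"create":{"preauthorized":true,"tags":["tag:k8s"]}}},"expirySeconds":86400}'
```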
### Route approval

headscale requires manual route approval via the CLI. When the operator creates a Connector (exit node or subnet router), the proxy will advertise the routes, but they won't be active until you run `headscale nodes approve-routes`. This is a headscale security feature — there is no API endpoint to auto-approve routes.
## Clean up

```shell
# Remove all demo resources:
kubectl delete connector exit-node subnet-router -n demo --ignore-not-found
kubectl delete -f examples/03-expose-service.yaml --ignore-not-found
kubectl delete namespace demo
```