Introduction
Policy-based routing (PBR) is a powerful technique for controlling how traffic leaves a host or router based on attributes beyond the destination address. When combined with WireGuard — a modern, performant, and simple VPN — PBR enables granular control over which traffic flows through specific tunnels, which remains on the local network, and which exits via the ISP. This article presents a practical, implementation-focused guide to PBR with WireGuard on Linux, including multiple approaches (ip rule/ip route, fwmark + iptables/nftables, and integration with common systems such as wg-quick, systemd-networkd, and OpenWrt). The target audience is site operators, enterprise network engineers, and developers who need deterministic, flexible VPN routing policies.
Why Policy-Based Routing with WireGuard
WireGuard itself establishes encrypted tunnels and configures simple point-to-point routes. However, complex environments often require decisions like:
- Route only certain subnets or ports through the VPN (split-tunneling).
- Send traffic from specific local IPs or containers through different WireGuard peers.
- Fail over to different tunnels based on destination or performance.
- Ensure return traffic follows the same path (important for asymmetric routing).
PBR solves these needs by allowing route selection based on source address, fwmark, incoming interface, and other criteria. For WireGuard, this means you can run multiple tunnels side by side (wg0, wg1) or mix direct-to-internet and VPN-bound flows without breaking connectivity.
Foundational Concepts and Linux Primitives
Routing tables and ip rule
Linux supports multiple routing tables. The default table (main) handles ordinary routing lookups. PBR uses ip rule to select a routing table based on packet attributes. A typical construct for source-based routing is:
ip rule add from 10.0.2.0/24 table 200
Then populate table 200 with routes that send traffic via wg0. Rule ordering is controlled with the pref value (lower numbers are evaluated first).
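For example, assuming wg0 is already up and reusing table 200 from the rule above, a minimal sketch plus verification looks like:
ip route add default dev wg0 table 200   # default route consulted only for traffic matching the rule
ip rule show                             # confirm rule ordering (pref values)
ip route show table 200                  # inspect the contents of the new table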
fwmark + netfilter
Marking packets in the netfilter framework (iptables or nftables) provides flexibility: you can mark by incoming interface, conntrack state, UID (for per-user routing), port, or L7 attributes if using helper software. Marks are then matched by ip rule with “fwmark” to select routing tables. Example ip rule: ip rule add fwmark 0x1 table 201.
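As a hedged sketch in nftables (the table and chain names are arbitrary), marking HTTPS traffic and routing it through wg0 via table 201:
nft add table inet mangle
nft 'add chain inet mangle prerouting { type filter hook prerouting priority mangle; }'
nft add rule inet mangle prerouting tcp dport 443 meta mark set 0x1
ip rule add fwmark 0x1 table 201
ip route add default dev wg0 table 201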
Routing and connection tracking considerations
Asymmetric routing causes reply packets to leave via the wrong path and be dropped by stateful firewalls or remote endpoints. Ensuring both forward and return traffic use the same WireGuard interface is crucial. Use policies that account for the connection a packet belongs to (e.g., restoring marks on established connections) to maintain symmetry.
Approach 1 — Source-Based Routing (Simplest)
Best for environments where clients/containers have dedicated source IPs and you want all traffic from those IPs to traverse a specific WireGuard peer.
Steps (overview)
- Create WireGuard interface(s) e.g., wg0 with local address 10.200.0.2/24.
- Assign client VMs/containers static IPs in 10.200.0.0/24 or a LAN subnet.
- Add an ip rule: ip rule add from 10.200.0.0/24 table 200 pref 100.
- Add routes in table 200: ip route add default dev wg0 table 200 or ip route add default via 10.200.0.1 dev wg0 table 200.
This pattern ensures packets with a source matching the specified prefix are looked up in table 200 and egress via wg0. It’s simple and effective for container hosts, VPSs, or multi-homed routers.
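Putting the steps together as a minimal sketch (the addresses, config path, and table 200 follow the example above; the config file is in plain wg(8) format, not wg-quick format):
ip link add wg0 type wireguard
wg setconf wg0 /etc/wireguard/wg0.conf   # keys and peers (placeholder path)
ip addr add 10.200.0.2/24 dev wg0
ip link set wg0 up
ip route add default dev wg0 table 200
ip rule add from 10.200.0.0/24 table 200 pref 100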
Approach 2 — fwmark with iptables/nftables (Flexible)
If you need port-based or UID-based policies, marking is the way. Use nftables or iptables to mark packets, then route based on that mark.
Marking with iptables (example)
To route all traffic to TCP 443 via wg1, you might:
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j MARK --set-mark 0x1
Then bind the mark to a table:
ip rule add fwmark 1 lookup 301
ip route add default dev wg1 table 301
Ensure sysctl net.ipv4.conf.all.rp_filter is set appropriately (usually 2 for loose mode, or 0 to disable) because strict reverse-path filtering can drop packets when the policy routes them over an interface the kernel would not otherwise have chosen.
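For example (the kernel applies the stricter of the "all" and per-interface values; interface names are illustrative):
sysctl -w net.ipv4.conf.all.rp_filter=2    # 2 = loose reverse-path filtering, 0 = disabled
sysctl -w net.ipv4.conf.eth0.rp_filter=2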
Marking by UID
To ensure all traffic from a specific application user goes through a particular tunnel:
iptables -t mangle -A OUTPUT -m owner --uid-owner vpnuser -j MARK --set-mark 0x2
Then add ip rule and table entries for fwmark 2. This is useful on multi-tenant hosts or when isolating services.
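For example (table 302, the priority, and wg1 as the target tunnel are illustrative choices):
ip rule add fwmark 0x2 table 302 pref 120
ip route add default dev wg1 table 302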
Approach 3 — Interface-Based and Policy Chains
Packets arriving on a specific interface (e.g., eth0 or docker0) can be routed differently. Use the mangle PREROUTING hook to apply complex rules such as excluding certain IPs from the VPN while marking others. Combine this with conntrack to preserve marks for established connections so that reply packets follow the same policy.
- mangle PREROUTING: mark packets based on source/destination/port.
- mangle OUTPUT: mark locally generated packets (remember that locally generated traffic never traverses PREROUTING).
- connmark: save and restore marks across connection lifetimes to ensure symmetric routing (CONNMARK in iptables).
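A sketch combining these hooks (the interface, user, and mark values are illustrative): restore any saved mark first, classify only unmarked packets, then save the result so replies inherit the same mark.
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -i docker0 -m mark --mark 0 -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
iptables -t mangle -A OUTPUT -m owner --uid-owner vpnuser -m mark --mark 0 -j MARK --set-mark 0x2
iptables -t mangle -A OUTPUT -j CONNMARK --save-mark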
WireGuard-Specific Integration Tips
wg-quick caveats
wg-quick brings up WireGuard interfaces, automatically installs routes for each peer's AllowedIPs, and can run extra commands through the PostUp and PostDown directives in /etc/wireguard/<interface>.conf. However, that automatic route handling may conflict with custom PBR tables. Either keep AllowedIPs narrow (avoid broad entries like 0.0.0.0/0), or set Table = off and manage the routing tables yourself from PostUp/PostDown.
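A minimal wg-quick config sketch along these lines (keys, endpoint, addresses, and table 200 are placeholders), with Table = off so wg-quick installs no routes of its own:
[Interface]
Address = 10.200.0.2/24
PrivateKey = <host private key>
Table = off
PostUp = ip route add default dev %i table 200; ip rule add from 10.200.0.0/24 table 200 pref 100
PostDown = ip rule del from 10.200.0.0/24 table 200 pref 100; ip route flush table 200

[Peer]
PublicKey = <peer public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0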
Persisting policies
System reboots need policy persistence. Options:
- Place ip rule and ip route commands in /etc/wireguard/<interface>.conf via PostUp/PostDown.
- Create systemd units that run after network-online.target to apply rules (a sketch follows this list).
- On OpenWrt, use UCI network/firewall sections (or a firewall include script such as /etc/firewall.user on older iptables-based releases) to apply the marks and ip rules.
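A sketch of such a systemd unit (binary paths, the subnet, and table 200 are assumptions; adjust paths to your distribution):
# /etc/systemd/system/wg0-pbr.service
[Unit]
Description=Policy routing rules for wg0
After=network-online.target wg-quick@wg0.service
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip route add default dev wg0 table 200
ExecStart=/usr/sbin/ip rule add from 10.200.0.0/24 table 200 pref 100
ExecStop=/usr/sbin/ip rule del from 10.200.0.0/24 table 200 pref 100
ExecStop=/usr/sbin/ip route flush table 200

[Install]
WantedBy=multi-user.target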
Multi-peer failover and metrics
When multiple WireGuard peers are available, implement health checks that modify ip rule priorities or ip route next-hops. A small supervisor script can monitor latency or packet loss and switch the default route in the relevant table or adjust fwmark-based decisions. Use ip route replace (or delete and re-add ip rules) to minimize disruption. For advanced setups, BFD-like monitoring and dynamic routing via routing daemons (FRRouting) can be employed, but remember that simplicity is part of WireGuard's appeal.
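A minimal supervisor sketch (the probe address, interfaces, and table 200 are assumptions):
#!/bin/sh
# Swap table 200's default route between wg0 and wg1 based on reachability of a probe address.
PROBE=10.200.0.1
while sleep 10; do
    if ping -c 3 -W 2 -I wg0 "$PROBE" >/dev/null 2>&1; then
        ip route replace default dev wg0 table 200
    else
        ip route replace default dev wg1 table 200
    fi
done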
Practical Examples
Per-container VPN routing on a host
Host has Docker containers with IP range 172.18.0.0/16. To route container traffic via a WireGuard interface (wg0 in this example):
- Create table 220: ip route add default dev wg0 table 220
- Add ip rule: ip rule add from 172.18.0.0/16 table 220
- On container restart, ensure IPs remain in that range, or use Docker macvlan/bridge static addressing.
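To verify which path a given container source takes, ip route get can evaluate the policy (the container address is illustrative):
ip route get 1.1.1.1 from 172.18.0.5 iif docker0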
Port-based split tunneling
Route HTTPS to VPN, everything else direct:
- iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark (restore any mark saved for established connections first)
- iptables -t mangle -A PREROUTING -m mark --mark 0 -p tcp --dport 443 -j MARK --set-mark 0x10
- iptables -t mangle -A PREROUTING -j CONNMARK --save-mark (save the mark so reply packets follow the same policy)
- ip rule add fwmark 0x10 table 310
- ip route add default dev wg-https table 310
Common Pitfalls and Troubleshooting
Reverse Path Filtering: rp_filter can drop packets when source-based routing changes the egress interface. Set net.ipv4.conf.all.rp_filter=2 (loose) or 0, and adjust per-interface settings if needed; the kernel applies the stricter of the "all" and per-interface values.
Firewall blocks: Ensure iptables/nftables rules allow WireGuard peer endpoints and established/related traffic. When marking packets, ensure the firewall doesn’t inadvertently reject them.
DNS leaks: If only some traffic goes through VPN, your DNS queries might leak. Use a DNS server reachable via the desired path, or set per-namespace resolv.conf for containerized apps.
wg-quick overwrites: If using wg-quick, be aware of its behavior adding routes for AllowedIPs. Use specific AllowedIPs (not 0.0.0.0/0) or disable automatic routing and manage routes separately.
Advanced: Namespaces, systemd, and Automation
Network namespaces provide clean separation. You can put an application into a namespace and attach a WireGuard interface directly to that namespace, avoiding PBR complexity entirely for per-application routing. systemd’s NetworkNamespacePath and PrivateNetwork directives let you orchestrate this at unit level.
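A sketch of this pattern (the namespace name, addresses, and config path are placeholders; the config is in plain wg(8) format): create and configure the interface in the default namespace so the encrypted traffic uses the normal uplink, then move it into the application's namespace.
ip netns add vpnapp
ip link add wg0 type wireguard
wg setconf wg0 /etc/wireguard/wg0.conf
ip link set wg0 netns vpnapp
ip -n vpnapp addr add 10.200.0.2/24 dev wg0
ip -n vpnapp link set lo up
ip -n vpnapp link set wg0 up
ip -n vpnapp route add default dev wg0
ip netns exec vpnapp /usr/bin/some-application   # hypothetical application binary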
Automation strategies include:
- systemd service that sets ip rules/routes after WireGuard is up.
- small wrapper that polls wg(8) output (e.g., latest handshake times) or watches rtnetlink events to dynamically adjust policies.
- use tools like nftables sets and iproute2 scripts for high-performance matching and fewer rules.
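For example, an nftables set lets one rule mark traffic to many destination prefixes (the table, set name, and prefixes are illustrative):
nft add table inet pbr
nft 'add set inet pbr vpn_dsts { type ipv4_addr; flags interval; }'
nft add element inet pbr vpn_dsts '{ 203.0.113.0/24, 198.51.100.0/24 }'
nft 'add chain inet pbr prerouting { type filter hook prerouting priority mangle; }'
nft add rule inet pbr prerouting ip daddr @vpn_dsts meta mark set 0x1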
Security and Operational Best Practices
- Keep WireGuard keys secure and rotate periodically.
- Limit AllowedIPs to only what’s necessary to reduce unintended routing.
- Log policy decisions in a controlled way to aid debugging (use conntrack and fwmark logging judiciously).
- Test failover behavior under controlled conditions to ensure connections don’t break unexpectedly.
Conclusion
Policy-based routing combined with WireGuard provides a flexible, high-performance toolkit for implementing split tunnels, per-host/per-app VPN routing, and complex multi-tunnel topologies. By leveraging Linux primitives — ip rule/ip route, fwmark with iptables/nftables, and network namespaces — you can achieve deterministic routing policies that meet enterprise needs. Start with a simple source-based policy, then introduce marking strategies for greater granularity, and automate persistence with systemd or WireGuard PostUp/PostDown hooks.
For practical deployments and additional resources, visit the site: Dedicated-IP-VPN — https://dedicated-ip-vpn.com/