Introduction
Implementing per-user bandwidth limits in an IKEv2 VPN environment requires careful planning across several layers: authentication and session management, packet classification and marking, traffic shaping/policing, and monitoring. IKEv2 handles key exchange and IP address assignment, but it doesn’t natively control per-user throughput. To enforce reliable limits you must integrate IKEv2 with system-level traffic control (tc), packet-marking (iptables or nftables), and optionally RADIUS/CoA for dynamic control. This article walks through architectural options, concrete Linux-based examples, and operational considerations for production deployments.
Design approaches: where to impose limits
There are multiple places you can apply per-user bandwidth controls. The common approaches include:
- Per-user IP-based shaping: Each VPN user is assigned a unique virtual IP (common in IKEv2). Traffic is classified by source/destination IP and shaped accordingly.
- Packet marking and tc filters: Mark packets using iptables/nftables (fwmark) or use netfilter connection tracking to identify flows, then map marks to tc classes for shaping.
- RADIUS-driven policing: Use RADIUS to supply per-session limits and implement CoA (Change of Authorization) to update policies dynamically.
- Per-namespace or veth interfaces: Put each user into a separate network namespace or veth pair, and apply qdiscs directly to that interface (useful for strict isolation).
- BPF/eBPF or hardware offload: Use tc with BPF classifiers for high performance, or offload to programmable NICs when available.
Each approach has tradeoffs in complexity, scalability and enforcement precision. IP-based shaping is simplest; namespaces and per-interface qdiscs provide the most isolation but add resource overhead for many users.
Prerequisites and assumptions
Examples below assume a Linux VPN gateway running an IKEv2 implementation such as strongSwan or libreswan, with virtual IP assignment (e.g., via IKEv2 virtual IPs). The gateway has a single external egress interface (e.g., eth0). Adjust interface names for your environment.
- Linux kernel with tc (iproute2), iptables or nftables, and optional ipset support.
- IKEv2 configured to assign unique virtual IPs per user (common with EAP authentication and dynamic virtual IP pools; note that XAUTH is an IKEv1 mechanism).
- Optional FreeRADIUS server for centralized policy/rate control and CoA support.
Why rely on packet marking + tc?
tc supports classful queuing disciplines such as HTB (often combined with SFQ leaf qdiscs) and policing (tc police) that can enforce bandwidth limits precisely. However, tc needs a way to associate packets with users: that’s where iptables/nftables marking comes in. This separation lets you express per-user policies without creating thousands of tc qdiscs on the physical interface, provided you use classes and filters efficiently.
Concrete example: IP-based per-user limits with iptables + tc
This approach presumes each user gets a virtual IP in the 10.10.0.0/24 range. We’ll mark packets by source IP and map marks to HTB classes on the egress device.
Step 1 — Mark packets with iptables
On the VPN gateway, use the mangle table to apply fwmarks per user IP. For many users, group IPs into one ipset per rate class and mark each set with a single rule; per-IP rules are fine when users are few.
Example commands (one rule per user IP):
<code>
iptables -t mangle -A PREROUTING -s 10.10.0.42 -j MARK --set-mark 100
iptables -t mangle -A PREROUTING -s 10.10.0.43 -j MARK --set-mark 101
</code>
Better: use ipset to group many IPs per rate-class and then mark by matching the ipset.
Example ipset + iptables:
<code>
ipset create bw_1k hash:ip
ipset add bw_1k 10.10.0.42
iptables -t mangle -A PREROUTING -m set --match-set bw_1k src -j MARK --set-mark 100
</code>
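If you prefer nftables, a roughly equivalent marking setup looks like this (a sketch; the table, chain, and set names are arbitrary choices, not fixed conventions):
<code>
# dedicated table with a set of user IPs and a prerouting hook
nft add table inet vpnshape
nft add set inet vpnshape bw_1k '{ type ipv4_addr; }'
nft add element inet vpnshape bw_1k '{ 10.10.0.42 }'
nft add chain inet vpnshape prerouting '{ type filter hook prerouting priority mangle; }'
# mark every packet sourced from an IP in the set
nft add rule inet vpnshape prerouting ip saddr @bw_1k meta mark set 100
</code>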
Step 2 — Create HTB qdisc and classes
On the egress interface (eth0) create a root htb qdisc and child classes for each bandwidth profile.
Example:
<code>
# root qdisc
tc qdisc add dev eth0 root handle 1: htb default 999
# create a parent class
tc class add dev eth0 parent 1: classid 1:1 htb rate 1000mbit ceil 1000mbit
# create per-profile classes
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 5mbit
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 10mbit
tc class add dev eth0 parent 1:1 classid 1:999 htb rate 100mbit ceil 1000mbit
</code>
Step 3 — Add filters to map fwmarks to classes
Use the fw filter to direct marked packets to the appropriate class.
Example:
<code>
tc filter add dev eth0 parent 1: protocol ip handle 100 fw flowid 1:10
tc filter add dev eth0 parent 1: protocol ip handle 101 fw flowid 1:11
</code>
Now packets marked 100 go to the 5Mbps class, and 101 to 10Mbps.
Dynamic, RADIUS-driven enforcement
In larger deployments you want central control: when a user authenticates via RADIUS, the RADIUS server supplies the limit and the VPN gateway applies the policy dynamically. Two common patterns:
- Reply attributes: Use vendor or standard attributes in the RADIUS reply to convey a bandwidth class label or numeric limit. The VPN gateway runs a script on session creation that parses these attributes and updates ipset/fwmark rules and tc classes (see the sketch after this list).
- CoA (Change of Authorization): FreeRADIUS can send CoA to modify an existing session’s policy (e.g., increase or decrease bandwidth). The gateway must accept CoA and reconfigure marks or classes accordingly.
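For the reply-attribute pattern, a FreeRADIUS users-file entry could carry the class in the standard Filter-Id attribute; this is only a sketch, and many deployments use vendor-specific attributes instead:
<code>
# users file entry (illustrative); a session script on the gateway maps "bw-gold" to an ipset/fwmark
alice  Cleartext-Password := "s3cret"
       Filter-Id = "bw-gold"
</code>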
Example flow:
- User authenticates → RADIUS responds with Attribute “X-BW-Class: gold”.
- strongSwan calls a connection script that maps “gold” to fwmark 200 and adds the user’s virtual IP to ipset_gold.
- tc filters already map fwmark 200 to a specific class. No restart required.
This enables per-session enforcement with centralized policy but requires scripting integration with your IKE daemon.
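As a sketch of the CoA side (the attribute names, gateway address, and shared secret are placeholders, and the gateway must run a Dynamic Authorization listener), FreeRADIUS’s radclient can push a new class mid-session:
<code>
# move alice to the "bw-silver" class via CoA on the standard port 3799
echo 'User-Name = "alice", Filter-Id = "bw-silver"' | \
  radclient -x 192.0.2.1:3799 coa s3cret
</code>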
Alternative: per-namespace / per-veth isolation
For strict enforcement and simpler accounting, create a network namespace (or veth pair) per user and apply a qdisc directly to that veth; a minimal sketch follows the pros/cons below. This prevents users from affecting each other and eliminates complex filter lookups. But it scales poorly: thousands of namespaces consume resources and increase management complexity.
- Pros: clean isolation, no complex marking/filtering, per-interface qdisc is precise.
- Cons: high resource use, more complex orchestration, may not be suitable for large user bases.
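A minimal sketch of the per-user veth pattern (interface names, addresses, and the 5 Mbps rate are illustrative; routing the user’s virtual IP through the veth is omitted):
<code>
# namespace and veth pair for one user
ip netns add user42
ip link add veth-u42 type veth peer name veth-u42-ns
ip link set veth-u42-ns netns user42
ip addr add 10.10.0.1/30 dev veth-u42
ip link set veth-u42 up
ip netns exec user42 ip addr add 10.10.0.2/30 dev veth-u42-ns
ip netns exec user42 ip link set veth-u42-ns up
# shaping toward the namespace caps this user's download at 5 Mbps
tc qdisc add dev veth-u42 root tbf rate 5mbit burst 32kbit latency 50ms
</code>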
Scaling considerations
When designing for hundreds or thousands of concurrent VPN users consider:
- Number of tc classes/filters: Each class consumes kernel resources. Group users into tiers (e.g., 1Mbps, 5Mbps, 10Mbps) rather than one class per user unless absolutely required.
- Use ipset: Avoid thousands of iptables rules; use ipset to maintain large sets of IPs then mark with a single rule per set.
- eBPF and cls_bpf: For high throughput, use eBPF-based classifiers with tc to reduce overhead and implement complex matching logic at kernel speed.
- Asymmetric routing and NAT: Shape on the egress interface where congestion actually occurs. For NATed setups, mark packets in mangle PREROUTING before NAT rewrites addresses, and use CONNMARK to persist marks across a connection if needed (see the CONNMARK sketch after this list).
- Monitoring and accounting: Export per-class/counter stats to Prometheus or use iptables counters to alert on policy violations or saturation.
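The usual CONNMARK pattern for persisting a mark across a connection looks like this (a sketch; the IP and mark values follow the earlier examples):
<code>
# restore any existing connection mark onto each packet
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
# classify only packets that don't already carry a mark
iptables -t mangle -A PREROUTING -m mark --mark 0 -s 10.10.0.42 -j MARK --set-mark 100
# save the packet mark back to the connection for subsequent packets
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark
</code>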
Common pitfalls and troubleshooting
Below are frequent issues when enforcing per-user limits on IKEv2 VPNs:
- MTU and fragmentation: VPN overhead reduces the effective MTU. Lower the tunnel MTU or enable MSS clamping (see the example after this list) so throughput isn’t degraded by fragmentation and retransmissions.
- Wrong match direction: filters apply to egress or ingress, and the distinction matters. Most deployments shape egress on the ISP-facing interface and police ingress to prevent bursts.
- Marks lost by NAT or conntrack: Ensure marking occurs before NAT if you match on original source IP. Use PREROUTING mangle for marking incoming packets from VPN clients.
- Latency and loss from policing: strict policing (tc police) drops excess packets, which triggers retransmissions; prefer token-bucket shaping (HTB) for smoother behavior.
- StrongSwan integration: Use conn scripts or updown hooks to add user IPs to ipsets when connections come up and remove them on disconnect.
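For the MTU point above, MSS clamping on the gateway is the usual fix; a minimal example:
<code>
# clamp TCP MSS to the path MTU for forwarded VPN traffic
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
</code>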
Example integration with strongSwan
strongSwan supports connection up/down scripts. Configure an updown script that:
- Reads the assigned virtual IP and username.
- Determines the bandwidth class from a local mapping or RADIUS reply attributes.
- Adds the IP to the corresponding ipset and optionally creates per-session firewall marks.
- On disconnect, removes the IP from ipset and clears state.
This way the dynamic lifecycle of VPN sessions is tied to your traffic control policy automatically.
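A minimal updown script sketch (the class lookup and ipset name are assumptions; strongSwan exports the PLUTO_* variables to the script configured via leftupdown):
<code>
#!/bin/sh
# /usr/local/bin/vpn-updown.sh -- invoked by strongSwan on session up/down
# PLUTO_PEER_SOURCEIP holds the assigned virtual IP
CLASS_SET="bw_1k"   # illustrative: resolve from a local map or RADIUS reply attribute

case "$PLUTO_VERB" in
  up-client)
    ipset add "$CLASS_SET" "$PLUTO_PEER_SOURCEIP"
    ;;
  down-client)
    ipset del "$CLASS_SET" "$PLUTO_PEER_SOURCEIP"
    ;;
esac
</code>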
Monitoring and validation
Validate enforcement by generating traffic from a test client and observing tc class counters and iptables/nftables counters. Useful commands:
- tc -s class show dev eth0
- tc -s qdisc show dev eth0
- iptables -t mangle -L -v -n
- ipset list <setname>
Automate synthetic tests that saturate a user’s throughput and check that observed bandwidth remains within configured limits. Also monitor latency and packet loss as user count scales.
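A simple validation run might look like this (the iperf3 server address is an assumption about your test setup):
<code>
# from a VPN client assigned 10.10.0.42, saturate the tunnel for 30 seconds
iperf3 -c 192.0.2.10 -t 30
# on the gateway, watch the HTB class counters while the test runs
watch -n 1 'tc -s class show dev eth0'
</code>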
Conclusion
Enforcing per-user bandwidth on IKEv2 VPNs blends VPN session lifecycle management with robust Linux traffic control. For most deployments, the balanced approach is: assign users unique virtual IPs, mark packets via ipset+iptables (or nftables), and map marks to HTB classes on the egress device. Use RADIUS/CoA for central policy and consider eBPF or hardware offload for high-scale, high-performance scenarios. Be mindful of MTU, NAT, and asymmetric routing, and prefer shaping (HTB) over strict policing to minimize application-layer impacts.
For implementation guides, script examples, and managed hosting insights specific to IKEv2 environments, visit Dedicated-IP-VPN at https://dedicated-ip-vpn.com/.