Introduction

Implementing per-user bandwidth control for L2TP VPNs is essential for operators who need to enforce fair usage, implement tiered plans, or prevent a small number of clients from saturating uplinks. L2TP (Layer 2 Tunneling Protocol) typically pairs with PPP authentication and provides each user with a PPP interface on the VPN server. That per-user interface is a convenient anchor for traffic shaping, accounting, and policy application.

Design considerations

Before diving into configuration, plan around the following points:

  • Granularity: Do you need per-user (per-account), per-IP, or per-session control?
  • Direction: Shaping egress is straightforward; controlling ingress requires policing with IFB or shaping on the opposite interface.
  • Dynamic vs static: Are bandwidth limits static per plan, or pushed dynamically from an authentication backend like RADIUS?
  • Scalability: How many concurrent sessions will you handle? Per-session tc classes scale linearly in complexity.
  • Accuracy vs overhead: HTB + SFQ/fq_codel combinations provide good fairness with modest overhead; iptables hashing/marking + tc filters scale better for many users.

How L2TP/PPP exposes per-user endpoints

Most Linux L2TP stacks (e.g., xl2tpd, typically combined with libreswan or strongSwan for the IPsec layer) create individual PPP network interfaces: ppp0, ppp1, etc. The PPP layer provides hooks such as ip-up and ip-down scripts and environment variables (e.g., IFNAME, IPLOCAL, IPREMOTE, PEERNAME). Use these hooks to apply per-session traffic control at session bring-up.
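These hooks are straightforward to exercise from a small script skeleton. A minimal sketch follows; the file name 50-shaper and the log message are illustrative, and the wrapper function takes the pppd variables as arguments so the logic stays easy to test:

```shell
#!/bin/sh
# Hypothetical /etc/ppp/ip-up.d/50-shaper skeleton. pppd exports IFNAME,
# IPREMOTE and PEERNAME into the environment of ip-up scripts; the real
# hook would pass those straight through to this function.
on_session_up() {
    ifname="$1"    # PPP interface, e.g. ppp0
    ipremote="$2"  # address assigned to the client
    peername="$3"  # authenticated username
    echo "session up: user=${peername} if=${ifname} ip=${ipremote}"
    # ...apply per-user tc classes and iptables marks here (see the
    # following sections)...
}

# In the real hook:  on_session_up "$IFNAME" "$IPREMOTE" "$PEERNAME"
on_session_up ppp0 10.10.0.2 alice
```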

Basic approach

The typical implementation flow is:

  • Authenticate user via PPP (local file, RADIUS, LDAP).
  • On session establishment (ip-up), read username and assigned IP.
  • Create or attach to a traffic-control class/queue for that user.
  • Mark packets in iptables (or use interface-based filters) and add tc filters to map marks to classes.
  • On session teardown (ip-down), remove class and filter.

Example: using ip-up/ip-down scripts + tc + iptables MARK

The following shows a minimal flow you can adapt. Place the script in your distribution’s PPP hook directory, typically /etc/ppp/ip-up.d/.

Key points:

  • Use a parent HTB qdisc on the physical egress interface (e.g., eth0) as the shaping anchor.
  • Create per-user HTB classes with rate/ceil parameters.
  • Use iptables to mark packets originating from the user’s ppp interface (or source IP), then create tc filters to map marks to the HTB class.

Setup parent qdisc (once)

On the server egress interface (replace eth0, and set appropriate bandwidth):

tc qdisc add dev eth0 root handle 1: htb default 100

tc class add dev eth0 parent 1: classid 1:1 htb rate 500mbit ceil 500mbit

tc class add dev eth0 parent 1:1 classid 1:100 htb rate 50mbit ceil 500mbit

This creates a root HTB with a catch-all class 1:100 (the target of "default 100") for unclassified traffic. Individual users get child classes under 1:1.

ip-up script logic

When a PPP session is brought up, pppd exports variables such as IFNAME, IPLOCAL, IPREMOTE, and PEERNAME to the hook scripts. Example sequence:

1. Compute a unique classid for the session (e.g., hash username or use a counter).

2. Create HTB child class: tc class add dev eth0 parent 1:1 classid 1:10X htb rate 5mbit ceil 5mbit (replace 10X with generated id).

3. Attach a leaf qdisc like fq_codel or SFQ: tc qdisc add dev eth0 parent 1:10X handle 10X: fq_codel.

4. Mark packets from the PPP session using the iptables mangle table: iptables -t mangle -A POSTROUTING -o eth0 -s ${IPREMOTE} -j MARK --set-mark ${MARKID}.

5. Add a tc filter mapping the fw mark to the class: tc filter add dev eth0 parent 1:0 protocol ip handle ${MARKID} fw flowid 1:10X.
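The five steps above can be sketched as one ip-up fragment. This version only prints the tc/iptables commands it would run (pipe its output to sh as root to apply them); the parent HTB from the previous section is assumed to exist, the id scheme (ppp0 -> 101, keeping 1:100 free as the default class) is one possible convention, and the fixed 5mbit rate stands in for a lookup in your plan database:

```shell
#!/bin/sh
# Sketch ip-up fragment: prints the shaping commands for one session.
# IFNAME/IPREMOTE come from pppd; the defaults here are for illustration.
DEV=eth0
RATE=5mbit
IFNAME="${IFNAME:-ppp0}"
IPREMOTE="${IPREMOTE:-10.10.0.2}"

# 1. Derive classid/mark from the PPP unit number: ppp0 -> 101, ppp1 -> 102
#    (1:100 stays reserved for the catch-all default class).
id=$((101 + ${IFNAME#ppp}))

# 2. Per-user HTB child class.
echo "tc class add dev $DEV parent 1:1 classid 1:$id htb rate $RATE ceil $RATE"
# 3. Leaf qdisc for fairness among the user's own flows.
echo "tc qdisc add dev $DEV parent 1:$id handle $id: fq_codel"
# 4. Mark the user's packets by source address.
echo "iptables -t mangle -A POSTROUTING -o $DEV -s $IPREMOTE -j MARK --set-mark $id"
# 5. Map the fwmark to the class (explicit prio makes later deletion easy).
echo "tc filter add dev $DEV parent 1:0 protocol ip prio 1 handle $id fw flowid 1:$id"
```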

Notes:

  • Use distinct MARK ids per user, and take care to avoid collisions with marks used by other firewall or routing rules.
  • For symmetric shaping (both directions), apply similar constructs on the server ingress using IFB devices (see next section).

Handling inbound traffic (policing) with IFB

Because you can only shape egress on a given interface, inbound client traffic that arrives on eth0 must be redirected to an Intermediate Functional Block (IFB) to be shaped. Steps:

  • Load the IFB module and bring up an IFB device: modprobe ifb numifbs=0; ip link add ifb0 type ifb; ip link set dev ifb0 up. (A plain modprobe ifb already creates ifb0 and ifb1, in which case skip the ip link add.)
  • Attach a qdisc on eth0 ingress that redirects to ifb0 using tc action mirred: tc qdisc add dev eth0 handle ffff: ingress and tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0.
  • On ifb0, set up HTB classes and filters similar to egress; mark packets based on destination IP or connection tracking.
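The per-user part of the ingress side can be sketched as below, mirroring the egress setup. The commands are printed rather than executed, and the eth0 -> ifb0 redirect from the steps above is assumed to be in place. Note that inbound packets are addressed *to* the client, so the filter matches on destination IP; iptables marks cannot be used here because the ingress hook runs before netfilter sets them:

```shell
#!/bin/sh
# Ingress shaping sketch for one user on ifb0 (commands printed, not run).
IPREMOTE="${IPREMOTE:-10.10.0.2}"
id=101   # same id scheme as on the egress side (ppp0 -> 101)

echo "tc qdisc add dev ifb0 root handle 1: htb default 100"
echo "tc class add dev ifb0 parent 1: classid 1:1 htb rate 500mbit ceil 500mbit"
echo "tc class add dev ifb0 parent 1:1 classid 1:100 htb rate 50mbit ceil 500mbit"
echo "tc class add dev ifb0 parent 1:1 classid 1:$id htb rate 5mbit ceil 5mbit"
# Match the client's address directly; fwmarks are not yet set at ingress.
filter="tc filter add dev ifb0 parent 1:0 protocol ip prio 1 u32 match ip dst $IPREMOTE/32 flowid 1:$id"
echo "$filter"
```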

Dynamic bandwidth policies via RADIUS

If you’re using RADIUS for authentication (common with xl2tpd + FreeRADIUS), you can push bandwidth attributes at authentication time. Popular options:

  • Use vendor-specific attributes (e.g., Mikrotik-Rate-Limit or Ascend rate-limit attributes) if client/server understand them.
  • Use RADIUS to populate session attributes and write scripts to apply limits in ip-up based on RADIUS reply (e.g., via radattr or environment variables).

FreeRADIUS can call executables or write files with per-session limits. Your ip-up script can then read those and create appropriate tc classes.
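As a sketch of the ip-up side, the function below parses a radattr-style file (pppd's radattr plugin writes one "Attribute Value" pair per line to /var/run/radattr.pppN) and pulls out a Mikrotik-Rate-Limit value such as "5M/5M". The attribute name and value format depend on your RADIUS dictionary and NAS vendor, so treat both as assumptions:

```shell
#!/bin/sh
# Extract a rate-limit attribute from a radattr-style file. The attribute
# name Mikrotik-Rate-Limit and its rx/tx format are dictionary-dependent
# assumptions; adapt to whatever your RADIUS server actually sends.
parse_rate_limit() {
    awk '$1 == "Mikrotik-Rate-Limit" { print $2; exit }' "$1"
}

# Example against a fake radattr file:
tmp=$(mktemp)
printf 'Framed-Protocol PPP\nMikrotik-Rate-Limit 5M/5M\n' > "$tmp"
rate=$(parse_rate_limit "$tmp")
echo "$rate"     # the ip-up script would map this onto tc rate/ceil
rm -f "$tmp"
```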

Scaling strategies

For a small number of users, creating one class per session is fine. For larger deployments, consider:

  • Per-plan classes: Instead of one class per user, create classes per subscription tier and use hashing filters to map users into fairness buckets.
  • Use iptables CONNMARK: Save/restore marks via connmark across NAT and routing contexts to ensure consistent classification.
  • Aggregate policing: Combine both per-user and aggregate shaping: per-user ceil shapes, aggregate rate limiter for the whole pool to protect the uplink.
  • Offload-capable hardware: Use routers/switches with QoS features where possible to reduce CPU overhead on the VPN server.
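The CONNMARK pattern mentioned above follows a standard three-rule shape: restore any existing connection mark first, classify only still-unmarked (new) traffic, then save the packet mark back onto the connection so NAT and routing changes don't lose the classification. A sketch, printed rather than executed, with the address and mark id as placeholders:

```shell
#!/bin/sh
# CONNMARK classify-once pattern (example rules; adjust address/mark id).
rules="iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -m mark --mark 0 -s 10.10.0.2 -j MARK --set-mark 101
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark"
printf '%s\n' "$rules"
```

Ordering matters: the restore rule must come first so established connections skip re-classification, and the save rule last so a freshly set mark is persisted.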

Monitoring and accounting

Monitoring usage and enforcement health is crucial. Useful tools and approaches:

  • Use tc -s class show dev eth0 to inspect class counters.
  • iptables -t mangle -L -v to view mark packet counters.
  • Netdata, collectd, Prometheus exporters (node_exporter, tc_exporter) for time-series data.
  • rrdtool/vnStat for long-term per-interface metrics.
  • Use RADIUS accounting for per-user usage records, storing data centrally for billing or analytics.

Security and robustness

Consider these hardening steps:

  • Validate user-supplied identifiers before using them in shell commands to avoid injection vulnerabilities.
  • Enforce limits on number of classes created to prevent resource exhaustion; cap sessions per user and total queue entries.
  • Implement cleanup on ip-down and ensure orphaned classes/filters are removed on service restart.
  • Use namespaces or cgroups if isolating per-tenant resources is required.
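The first point above deserves a concrete shape: never interpolate PEERNAME (or any peer-supplied string) into a shell command unchecked. A minimal whitelist check, with the accepted character set as an assumption you may want to widen:

```shell
#!/bin/sh
# Accept only a conservative character set for usernames before using
# them in commands or file names; reject empty strings outright.
is_safe_user() {
    case "$1" in
        *[!a-zA-Z0-9._-]*|"") return 1 ;;  # empty or suspicious characters
        *) return 0 ;;
    esac
}

is_safe_user "alice"          && echo "alice: ok"
is_safe_user 'bob;rm -rf /'   || echo "injection attempt rejected"
```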

Troubleshooting tips

Typical issues and where to look:

  • Traffic not matching class: verify iptables marks are set (iptables -t mangle -L -v) and tc filter uses correct handle/fw matches.
  • Ingress shaping ineffective: confirm IFB redirection is active and filters exist on ifb device.
  • High CPU: simplify qdisc (use SFQ over fq_codel if CPU is constrained), or offload QoS to dedicated hardware.
  • Class/mark collisions: keep a registry of used mark/class ids and reuse them deterministically per-user or per-session id.

Example end-to-end sequence (summary)

1. User authenticates via PPP/RADIUS.
2. ip-up runs: create HTB class 1:10X, add fq_codel, add an iptables mangle rule to mark packets for IPREMOTE, add a tc filter mapping mark -> class.
3. For ingress, ensure IFB redirection and the corresponding filters are configured.
4. Monitor via tc/iptables counters and RADIUS accounting.
5. On ip-down, remove the iptables rules, tc filters, and classes.
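The teardown step can be sketched as an ip-down counterpart to the bring-up fragment. It prints the commands rather than running them, undoes the rules in reverse order, and assumes the same illustrative id scheme (ppp0 -> 101) and an explicit prio 1 on the fw filter so it can be deleted precisely:

```shell
#!/bin/sh
# ip-down teardown sketch (commands printed, not executed).
DEV=eth0
IFNAME="${IFNAME:-ppp0}"
IPREMOTE="${IPREMOTE:-10.10.0.2}"
id=$((101 + ${IFNAME#ppp}))

echo "iptables -t mangle -D POSTROUTING -o $DEV -s $IPREMOTE -j MARK --set-mark $id"
echo "tc filter del dev $DEV parent 1:0 protocol ip prio 1 handle $id fw"
# Deleting the class also removes its attached leaf qdisc.
echo "tc class del dev $DEV parent 1:1 classid 1:$id"
```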

Conclusion

Implementing per-user bandwidth control for L2TP VPNs on Linux relies on leveraging the PPP session lifecycle and combining iptables marking with tc shaping (HTB + fq_codel/SFQ) and IFB for ingress policing. For small deployments, per-session classes provide precise control. For larger scale, use per-plan aggregates, hashing, and RADIUS-driven dynamic policies. Pay attention to resource usage, security of scripts, and robust cleanup to maintain stable operation.

For practical templates, scripts and a managed solution tailored to Dedicated IP deployments, visit Dedicated-IP-VPN.