Deploying a modern proxy stack for secure, flexible traffic routing can be a routine task when you use containerization wisely. This guide walks you through a practical, production-ready way to run V2Ray inside Docker using Docker Compose, with automated TLS handled by Caddy. It focuses on clear configuration examples, security considerations, and operational tips so webmasters, enterprise admins, and developers can get a robust service up quickly and maintain it reliably.

Why use Docker Compose for V2Ray?

Docker Compose brings repeatability and portability to multi-container deployments. With Compose you can:

  • Version-control configuration as code.
  • Isolate runtime concerns (proxy process vs. TLS reverse proxy vs. logs).
  • Easily scale or roll out updates with minimal downtime.

Using a dedicated TLS reverse proxy (Caddy) allows automatic HTTPS provisioning and simplifies firewall rules — Caddy handles certificates via Let’s Encrypt, so the V2Ray container can run on an unexposed internal port.

High-level architecture

The architecture recommended in this guide for a production-grade deployment:

  • Caddy: public-facing TLS reverse proxy with automatic certificate issuance and ACME renewal.
  • V2Ray: internal service exposing a WebSocket (ws) or HTTP/2 transport over an internal port.
  • Docker network: private bridge network so services communicate securely inside the host.

Prerequisites

Before you start, ensure you have:

  • A Linux host (Debian/Ubuntu/CentOS/AlmaLinux) with Docker and Docker Compose installed.
  • A registered domain name pointing an A record to your server’s public IP.
  • Root or sudo access to manage Docker and firewall rules.
  • Familiarity with JSON and basic Linux commands.
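
To confirm these prerequisites are in place, a rough check along the following lines can help; your.domain.example is the placeholder domain used throughout this guide.

<pre># Verify Docker and the Compose plugin are installed
docker --version
docker compose version    # older hosts may still use: docker-compose --version

# Verify the domain's A record resolves to this server's public IP
getent hosts your.domain.example
curl -4 -s https://ifconfig.me; echo    # or any "what is my IP" service; compare with the address above
</pre>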

Example Docker Compose setup

Below is a minimal but production-oriented docker-compose.yml. It creates a private network, runs Caddy to terminate TLS and proxy to V2Ray, and persists essential data.

docker-compose.yml

<pre>version: "3.8"

services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    depends_on:
      - v2ray
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - v2net

  v2ray:
    image: v2fly/v2fly-core:latest
    restart: unless-stopped
    volumes:
      - ./config.json:/etc/v2ray/config.json:ro
      - v2ray_logs:/var/log/v2ray
    networks:
      - v2net
    expose:
      - "10000" # internal only

volumes:
  caddy_data:
  caddy_config:
  v2ray_logs:

networks:
  v2net:
    driver: bridge
</pre>

Explanation

In this compose file:

  • Ports 80 and 443 on the host are bound to Caddy, which performs ACME and TLS termination.
  • The V2Ray container only exposes an internal port (10000) via Docker network — it’s not published to host interfaces.
  • Volumes persist Caddy data (certificates) and V2Ray logs/configuration for safe upgrades.
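
Once docker-compose.yml, the Caddyfile, and config.json (both covered below) are in place, the stack can be started and inspected with standard Compose commands, for example:

<pre># Start (or update) the stack in the background
docker compose up -d

# Confirm both containers are running and watch their output
docker compose ps
docker compose logs -f caddy v2ray
</pre>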

Caddyfile configuration

Create a Caddyfile next to your compose file. This instructs Caddy to accept TLS traffic for your domain and reverse proxy to V2Ray over WebSocket.

Example Caddyfile

<pre>your.domain.example {
    encode gzip

    # V2Ray WebSocket traffic is matched on the /ray path prefix
    route /ray* {
        reverse_proxy v2ray:10000 {
            header_up Host {http.request.host}
            header_up X-Real-IP {http.request.remote.host}
            # Caddy proxies WebSocket upgrades automatically; no extra
            # transport configuration is needed here.
        }
    }

    # Optional: respond on the root path so the domain serves something benign
    respond / "OK" 200
}
</pre>

Notes:

  • The path prefix /ray (or any custom path) is used to separate V2Ray WebSocket traffic from other HTTP routes.
  • Caddy will automatically request and renew certificates for your.domain.example.
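
After editing the Caddyfile, it can be validated and reloaded in place rather than restarting the container; one way to do this, using the service name from the compose file above, is:

<pre># Check the Caddyfile for syntax errors inside the running container
docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile

# Apply changes without downtime
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
</pre>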

V2Ray configuration (config.json)

V2Ray core uses a JSON configuration. For WebSocket transport behind a reverse proxy, a minimal inbound and outbound configuration looks like this.

config.json

<pre>{
  "log": {
    "loglevel": "warning",
    "access": "/var/log/v2ray/access.log",
    "error": "/var/log/v2ray/error.log"
  },
  "inbounds": [
    {
      "port": 10000,
      "listen": "0.0.0.0",
      "protocol": "vmess",
      "settings": {
        "clients": [
          {
            "id": "PUT-YOUR-UUID-HERE",
            "alterId": 0,
            "level": 0
          }
        ]
      },
      "streamSettings": {
        "network": "ws",
        "wsSettings": {
          "path": "/ray"
        }
      },
      "sniffing": {
        "enabled": true,
        "destOverride": ["http", "tls"]
      }
    }
  ],
  "outbounds": [
    {
      "protocol": "freedom",
      "settings": {}
    },
    {
      "protocol": "blackhole",
      "settings": {},
      "tag": "blocked"
    }
  ],
  "routing": {
    "rules": [
      {
        "type": "field",
        "domain": ["geosite:cn"],
        "outboundTag": "blocked"
      }
    ]
  }
}
</pre>

Important points:

  • Replace PUT-YOUR-UUID-HERE with a freshly generated UUID. Since the UUID is effectively the client credential, generate it locally (see the commands after this list) rather than with an online tool.
  • The path must match the path used by Caddy (/ray in examples).
  • alterId should be 0 on modern cores (0 selects the VMess AEAD handshake); verify that your clients are recent enough to support it.
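
For example, a UUID can be generated on the host, and the finished config.json can be sanity-checked with the core’s built-in test mode; the exact subcommand depends on the core version shipped in the image you run.

<pre># Generate a client UUID locally
uuidgen
# or, on any Linux host:
cat /proc/sys/kernel/random/uuid

# Check the mounted configuration for errors (v5.x cores)
docker compose exec v2ray v2ray test -c /etc/v2ray/config.json
# older v4.x cores use: v2ray -test -config /etc/v2ray/config.json
</pre>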

Client configuration pointers

On the client side, configure your V2Ray client to connect to your.domain.example on port 443 using WebSocket transport with the same path /ray and the UUID you configured. TLS must be enabled at the client because Caddy terminates TLS publicly.

Security considerations

  • Firewall: expose only ports 80 and 443 (plus SSH) on the host and block all other inbound ports. Example with UFW:
    • ufw allow 22/tcp (or your custom SSH port, so you don’t lock yourself out)
    • ufw allow 80/tcp
    • ufw allow 443/tcp
    • ufw enable
  • Credentials: never embed UUIDs or private keys in public repositories. Use environment-based injection or keep config files out of VCS.
  • Log management: rotate logs and limit retention (a simple rotation sketch follows this list). The dedicated log volume also makes it easy to ship logs to a centralized stack such as ELK or to chart metrics with Prometheus and Grafana.
  • Container updates: pin image versions for stability, or adopt an automated CI/CD rollout with canary testing. Avoid running the :latest tag in critical production without testing.
  • Rate limiting and abuse: consider Caddy middleware, fail2ban, or additional WAF rules if you expect attack traffic.
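
As a crude illustration of the log-rotation point above, the files inside the container can simply be emptied on a schedule (for example from cron on the host); a host-side logrotate rule with copytruncate pointed at the v2ray_logs volume is the more robust option.

<pre># Empty the logs in place so V2Ray keeps writing to the same file descriptors;
# run periodically (e.g. weekly) from the compose project directory.
docker compose exec v2ray sh -c '> /var/log/v2ray/access.log; > /var/log/v2ray/error.log'
</pre>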

Operational tips

  • Health checks: add a Docker healthcheck to the V2Ray container so failures are surfaced; note that plain Docker only marks a container as unhealthy, so automatic restarts on failed checks require an orchestrator (e.g. Swarm) or a helper container such as autoheal.
  • Monitoring: expose metrics via an exporter and integrate with Prometheus for uptime and traffic metrics.
  • Backups: back up the Caddy volume (/data) regularly to preserve certificates, and store copies of your V2Ray config securely.
  • Testing: use curl -vI https://your.domain.example/ray to verify TLS and that Caddy is responding; then test the client for actual proxying.
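
Expanding on the testing tip above: a plain HTTPS request to the WebSocket path will usually get an error status back from V2Ray (often 400), which is still useful because it confirms DNS, TLS, and the Caddy route; a bare WebSocket upgrade goes one step further.

<pre># TLS and routing check: even an error response from /ray proves that Caddy
# terminated TLS and proxied the request to the backend.
curl -vI https://your.domain.example/ray

# Optional: send a raw WebSocket upgrade request to confirm the path is wired
# through; any response other than a TLS or DNS failure is a good sign.
curl -i -N --max-time 10 \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://your.domain.example/ray
</pre>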

Example: adding a simple healthcheck

Add to the v2ray service in compose:

<pre>healthcheck:
  test: ["CMD", "grep", "v2ray", "/var/log/v2ray/error.log"]
  interval: 30s
  timeout: 10s
  retries: 3
</pre>

Adjust the check to match how V2Ray logs or exposes status on your image.
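
Once a healthcheck is defined, its current state can be read from the host, for example:

<pre># Show the health status recorded for the v2ray service's container
docker inspect --format '{{.State.Health.Status}}' $(docker compose ps -q v2ray)

# Include the output of recent probes
docker inspect --format '{{json .State.Health}}' $(docker compose ps -q v2ray)
</pre>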

Troubleshooting

Common issues and how to address them:

  • Certificate not issued: make sure your DNS A record is correct and ports 80/443 are reachable from the internet. Check Caddy’s output with docker compose logs caddy.
  • Client cannot connect: verify the path and UUID match exactly. Check V2Ray logs to see if inbound handshakes are arriving.
  • 404 from Caddy: ensure your Caddyfile route matches the path prefix and there’s no conflicting host block.
  • High latency: examine network MTU and congestion; check container CPU/memory; view docker stats and host resource usage.
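
A few host-side commands cover most of the diagnostics mentioned above:

<pre># Recent logs from each service
docker compose logs --tail=100 caddy
docker compose logs --tail=100 v2ray

# Container resource usage and host listening ports
docker stats --no-stream
ss -tlnp | grep -E ':80|:443'
</pre>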

Scaling and high-availability

For enterprise scenarios, consider:

  • Deploying a load balancer with multiple V2Ray instances behind it (sticky sessions when using WebSocket).
  • Using an orchestrator like Docker Swarm or Kubernetes for automatic scheduling, rolling updates, and service discovery.
  • Separating logs and metrics into centralized systems for easier troubleshooting and capacity planning.
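
As a rough single-host starting point, Compose itself can run several V2Ray replicas behind the same Caddy instance, since the v2ray service publishes no host ports and Docker’s embedded DNS spreads connections across replicas in a crude round-robin; treat this as a sketch, not a substitute for a real load balancer or orchestrator.

<pre># Run three V2Ray replicas on the internal network
docker compose up -d --scale v2ray=3

# Confirm the replicas are running
docker compose ps v2ray
</pre>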

With the Compose stack above, you have a secure, maintainable baseline to deploy V2Ray quickly while leveraging Caddy’s automated TLS. It strikes a practical balance between security, ease of deployment, and operational readiness.

For more in-depth tutorials and configuration examples tailored to different client protocols and enterprise deployments, visit Dedicated-IP-VPN at https://dedicated-ip-vpn.com/.