Trojan Rust

Server Setup

Setting up and deploying the trojan-rs server

The trojan-rs server accepts Trojan protocol connections over TLS, authenticates clients, and proxies traffic to the destination requested by the client. Non-Trojan traffic is forwarded to a configurable fallback address.

Running the Server

With a Config File

trojan server -c config.toml

With CLI Options

trojan server --listen 0.0.0.0:443 \
  --tls-cert /etc/trojan/cert.pem \
  --tls-key /etc/trojan/key.pem \
  --password "your-password" \
  --fallback 127.0.0.1:80

CLI options override config file values when both are provided.

How It Works

Client ──TLS──► trojan-server ──► Target
                      │
                      └── (invalid auth) ──► Fallback HTTP server

  1. Client establishes a TLS connection
  2. Server reads the Trojan header (the 56-character hex-encoded SHA-224 password hash, followed by the proxied request)
  3. If authentication succeeds, the server proxies traffic to the requested destination
  4. If authentication fails or the header is invalid, traffic is forwarded to the fallback address

Because invalid or non-Trojan connections simply receive the fallback server's response, the server is indistinguishable from a normal HTTPS server to outside observers.
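
As an illustration of step 2 above, the authentication check boils down to comparing the first 56 bytes of the stream against the hex-encoded SHA-224 digest of each configured password and verifying the CRLF that follows. The sketch below is not the actual trojan-rs code; the sha2 and hex crates and the known_hashes set are assumptions of this sketch:

use sha2::{Digest, Sha224};
use std::collections::HashSet;

// Hex-encoded SHA-224 of a password, as it appears in the Trojan header (56 bytes).
fn password_hash(password: &str) -> String {
    hex::encode(Sha224::digest(password.as_bytes()))
}

// True if the first 56 bytes of `header` match a known password hash and are
// followed by CRLF, i.e. the peer is a legitimate Trojan client.
fn is_authenticated(header: &[u8], known_hashes: &HashSet<String>) -> bool {
    if header.len() < 58 {
        return false; // too short to hold hash + CRLF
    }
    let hash = match std::str::from_utf8(&header[..56]) {
        Ok(h) => h,
        Err(_) => return false, // not ASCII hex, so not a Trojan header
    };
    &header[56..58] == b"\r\n" && known_hashes.contains(hash)
}

A failed check does not produce an error response; the connection is simply handed to the fallback, as described in the next section.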

Fallback Configuration

The fallback address receives all non-Trojan traffic. Typically this is a web server:

[server]
fallback = "127.0.0.1:80"
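
The handoff itself is simple in principle: whatever the server already read while inspecting the (non-)Trojan header is replayed to the fallback, then the two streams are relayed byte for byte. A minimal sketch with tokio (the function name and structure are assumptions of this sketch, not the real code):

use tokio::io::{copy_bidirectional, AsyncRead, AsyncWrite, AsyncWriteExt};
use tokio::net::TcpStream;

// Forward a connection that failed Trojan authentication to the fallback,
// replaying the bytes that were already consumed while reading the header.
async fn forward_to_fallback<S>(mut client: S, peeked: &[u8], fallback_addr: &str) -> std::io::Result<()>
where
    S: AsyncRead + AsyncWrite + Unpin,
{
    let mut fallback = TcpStream::connect(fallback_addr).await?;
    fallback.write_all(peeked).await?;                      // replay what was already read
    copy_bidirectional(&mut client, &mut fallback).await?;  // then relay both directions
    Ok(())
}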

Fallback Connection Pool

For lower latency on fallback connections, enable the warm pool:

[server.fallback_pool]
max_idle = 10
max_age_secs = 30
fill_batch = 5
fill_delay_ms = 100

The pool pre-connects to the fallback server in the background. Connections are used once and not returned to the pool.
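
One way such a pool can be structured (a simplified sketch under assumed names, not the actual implementation): a background task opens up to fill_batch connections at a time, pausing fill_delay_ms between batches, and the accept path pops the freshest entry that has not exceeded max_age_secs.

use std::collections::VecDeque;
use std::time::{Duration, Instant};
use tokio::net::TcpStream;

// A pre-established fallback connection and the moment it was opened.
struct Warm { stream: TcpStream, opened_at: Instant }

// Simplified warm pool: connections are handed out once and never returned.
struct FallbackPool { idle: VecDeque<Warm>, max_age: Duration }

impl FallbackPool {
    // Take a connection that is still fresh, discarding any that are too old.
    fn take(&mut self) -> Option<TcpStream> {
        while let Some(warm) = self.idle.pop_front() {
            if warm.opened_at.elapsed() <= self.max_age {
                return Some(warm.stream);
            } // else: past max_age_secs, drop it and try the next one
        }
        None
    }

    // Background refill: open up to `fill_batch` connections, capped at `max_idle`.
    async fn refill(&mut self, addr: &str, fill_batch: usize, max_idle: usize) {
        for _ in 0..fill_batch {
            if self.idle.len() >= max_idle { break; }
            if let Ok(stream) = TcpStream::connect(addr).await {
                self.idle.push_back(Warm { stream, opened_at: Instant::now() });
            }
        }
    }
}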

TCP Tuning

[server.tcp]
no_delay = true            # Disable Nagle's algorithm for lower latency
keepalive_secs = 300       # TCP keepalive interval
reuse_port = false         # SO_REUSEPORT for multi-process load balancing
fast_open = false          # TCP Fast Open (requires kernel support)
fast_open_qlen = 5         # TFO queue length
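
These settings correspond to standard socket options. As an illustration only (not the trojan-rs source), here is how a listener could apply them with the socket2 crate (its "all" feature is assumed for set_reuse_port); TCP Fast Open is omitted because it needs a raw setsockopt on most platforms:

use socket2::{Domain, Protocol, Socket, TcpKeepalive, Type};
use std::net::SocketAddr;
use std::time::Duration;

// Build a listening socket with the [server.tcp] options shown above.
fn build_listener(addr: SocketAddr, keepalive_secs: u64, reuse_port: bool) -> std::io::Result<Socket> {
    let socket = Socket::new(Domain::for_address(addr), Type::STREAM, Some(Protocol::TCP))?;
    // no_delay: disable Nagle's algorithm (TCP_NODELAY)
    socket.set_nodelay(true)?;
    // keepalive_secs: start sending keepalive probes after this much idle time
    socket.set_tcp_keepalive(&TcpKeepalive::new().with_time(Duration::from_secs(keepalive_secs)))?;
    // reuse_port: allow several processes to bind the same port (SO_REUSEPORT, Unix only)
    #[cfg(unix)]
    if reuse_port {
        socket.set_reuse_port(true)?;
    }
    socket.bind(&addr.into())?;
    socket.listen(1024)?; // listener backlog (see Resource Limits below)
    Ok(socket)
}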

SO_REUSEPORT

Enable reuse_port = true to run multiple server processes on the same port. The kernel distributes connections across processes. This is useful for multi-core scaling on Linux.

TCP Fast Open

TCP Fast Open reduces connection latency by sending data in the SYN packet. Requires kernel support (net.ipv4.tcp_fastopen sysctl on Linux).

Rate Limiting

Protect against connection floods with per-IP rate limiting:

[server.rate_limit]
max_connections_per_ip = 100
window_secs = 60
cleanup_interval_secs = 300

Connections exceeding the limit are rejected before the TLS handshake.
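
Conceptually this is a fixed-window counter per source address. A small sketch of the idea (types and field names are assumptions, not the real implementation):

use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Fixed-window connection counter per source IP.
struct RateLimiter {
    max_per_ip: u32,                            // max_connections_per_ip
    window: Duration,                           // window_secs
    counters: HashMap<IpAddr, (Instant, u32)>,  // (window start, count)
}

impl RateLimiter {
    // True if a new connection from `ip` should be accepted.
    fn allow(&mut self, ip: IpAddr) -> bool {
        let now = Instant::now();
        let entry = self.counters.entry(ip).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        if entry.1 >= self.max_per_ip {
            return false; // over the limit: reject before the TLS handshake
        }
        entry.1 += 1;
        true
    }

    // Run every cleanup_interval_secs to drop counters for idle clients.
    fn cleanup(&mut self) {
        let now = Instant::now();
        let window = self.window;
        self.counters.retain(|_, entry| now.duration_since(entry.0) < window);
    }
}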

Connection Limits

[server]
max_connections = 10000    # Maximum concurrent connections

When the limit is reached, new connections are rejected until existing ones close.
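
A common way to enforce a cap like this in async Rust is a semaphore with max_connections permits. The accept loop below is an illustrative sketch (assuming tokio), not the actual trojan-rs accept path:

use std::sync::Arc;
use tokio::net::TcpListener;
use tokio::sync::Semaphore;

async fn accept_loop(listener: TcpListener, max_connections: usize) -> std::io::Result<()> {
    let limit = Arc::new(Semaphore::new(max_connections));
    loop {
        let (stream, peer) = listener.accept().await?;
        // Try to claim a slot; if none is free, reject by dropping the connection.
        let Ok(permit) = limit.clone().try_acquire_owned() else {
            drop(stream);
            continue;
        };
        tokio::spawn(async move {
            let _permit = permit; // slot is released when this task ends
            // ... TLS handshake, Trojan authentication and relaying happen here ...
            let _ = (stream, peer);
        });
    }
}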

Signal Handling

The server handles Unix signals:

  • SIGTERM / SIGINT — Graceful shutdown
  • SIGHUP — Reload configuration (re-reads config file, updates auth passwords)

# Reload configuration
kill -HUP $(pidof trojan)

# Graceful shutdown
kill -TERM $(pidof trojan)
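
For reference, a signal loop of this shape can be written with tokio's signal support (sketch only; it assumes tokio's "signal" feature, and the reload and shutdown bodies are placeholders):

use tokio::signal::unix::{signal, SignalKind};

async fn signal_loop() -> std::io::Result<()> {
    let mut hup = signal(SignalKind::hangup())?;
    let mut term = signal(SignalKind::terminate())?;
    let mut int = signal(SignalKind::interrupt())?;
    loop {
        tokio::select! {
            _ = hup.recv() => {
                // SIGHUP: re-read the config file and swap in the new passwords
                println!("reload requested");
            }
            _ = term.recv() => {
                // SIGTERM: stop accepting, drain existing connections, then exit
                println!("graceful shutdown");
                break;
            }
            _ = int.recv() => {
                // SIGINT: same as SIGTERM
                println!("graceful shutdown");
                break;
            }
        }
    }
    Ok(())
}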

Resource Limits

Fine-tune buffer sizes and backlog:

[server.resource_limits]
relay_buffer_size = 8192      # TCP relay buffer size in bytes
tcp_send_buffer = 0           # SO_SNDBUF in bytes (0 = OS default)
tcp_recv_buffer = 0           # SO_RCVBUF in bytes (0 = OS default)
connection_backlog = 1024     # TCP listener backlog
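
tcp_send_buffer, tcp_recv_buffer and connection_backlog also map to plain socket calls, while relay_buffer_size sizes the buffer used when copying data between the two sockets and is not a socket option. A short illustrative sketch with socket2, continuing the listener example above (function name assumed):

use socket2::Socket;

// Apply the [server.resource_limits] options; 0 means "leave the OS default".
fn apply_resource_limits(socket: &Socket, send_buf: usize, recv_buf: usize, backlog: i32) -> std::io::Result<()> {
    if send_buf > 0 {
        socket.set_send_buffer_size(send_buf)?;  // SO_SNDBUF
    }
    if recv_buf > 0 {
        socket.set_recv_buffer_size(recv_buf)?;  // SO_RCVBUF
    }
    socket.listen(backlog)?;                     // connection_backlog
    Ok(())
}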
