Strengthen the mesh.
Run a relay node.
Lumamesh is a decentralised network — no company runs it, no server owns it. Every node someone adds makes it harder to stop. If enough people run nodes, the network becomes as unstoppable as BitTorrent or Bitcoin.
A relay node is a single Go binary — one UDP port, zero database, no HTTP by default. It only handles signaling (introducing browsers to each other). Once two browsers have exchanged SDP the relay drops out completely — your node never sees the actual data.
You can run one on a spare machine at home, a Raspberry Pi, or a $3/mo VPS. The whole setup takes about 5 minutes.
One-command install
Clone the repo and run the installer. It builds the binary, generates your identity files, installs a systemd service, backs up your keys, and encodes your relay config — all in one pass.
git clone https://github.com/lumamesh/lumamesh.git
cd lumamesh
PUBLIC_IP=YOUR.PUBLIC.IP bash install.sh
The installer writes a full log to operator/install.log and saves your config values to operator/setup.env for reference.
Not sure of your public IP? Run curl https://api.ipify.org — or leave PUBLIC_IP unset and the installer will detect and confirm it for you.
What the installer does
- Detects your public IP (or uses the one you provide)
- Checks Go is installed (installs via snap if not)
- Runs go test ./... — all tests must pass
- Builds the lumamesh-relay binary
- Generates your DTLS cert, ICE password, and Ed25519 node identity key
- Encodes your relay config → operator/relay.txt
- Installs and starts a systemd service (auto-restart on reboot)
- Backs up identity files to operator/keys-backup/
- Runs a health check and prints your next steps
Identity files — back these up
These four files in pion-server/ are your node's entire identity. Copy operator/keys-backup/ somewhere offline (USB, encrypted cloud):
server.crt # DTLS certificate (fingerprint changes if regenerated)
server.key # DTLS private key
server.key.icepwd # ICE password
node.key # Ed25519 node identity (nk changes if regenerated)
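One way to take that offline copy, assuming the default operator/keys-backup/ directory the installer creates:

```shell
# Archive the backed-up identity files into a single dated tarball
tar -czf lumamesh-keys-$(date +%F).tar.gz -C operator keys-backup
# Copy the tarball to offline media, e.g. a mounted USB stick:
# cp lumamesh-keys-*.tar.gz /media/usb/
```

The exact destination is up to you; the important part is that a copy exists somewhere the server itself cannot destroy.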
Manual install (step by step)
If you prefer to do it yourself or are on a non-systemd system:
1. Build
git clone https://github.com/lumamesh/lumamesh.git
cd lumamesh/pion-server
go test ./...
go build -o lumamesh-relay .
2. Open the UDP port
sudo ufw allow 3478/udp
3. Run
PUBLIC_IP=YOUR.PUBLIC.IP UDP_PORT=3478 ./lumamesh-relay
Run as a systemd service
# /etc/systemd/system/lumamesh-relay.service
[Unit]
Description=Lumamesh Relay
After=network-online.target
Wants=network-online.target
[Service]
WorkingDirectory=/opt/lumamesh
ExecStart=/opt/lumamesh/lumamesh-relay
Environment=PUBLIC_IP=YOUR.PUBLIC.IP
Environment=UDP_PORT=3478
Environment=HEALTH_LISTEN=127.0.0.1:7401
Restart=always
RestartSec=3
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/lumamesh
PrivateTmp=true
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now lumamesh-relay
sudo journalctl -u lumamesh-relay -f
Verify it's working
# Health check (requires HEALTH_LISTEN)
curl http://127.0.0.1:7401/healthz
# {"nodeId":"...","ok":true,"ts":...}
# Mesh state
curl http://127.0.0.1:7401/statsz | jq
Mesh with other nodes (optional)
Multiple relays sharing a MESH_SALT gossip room hints over TCP so browsers always find each other regardless of which node they land on.
# Node A
PUBLIC_IP=203.0.113.10 MESH_SALT=pick-a-long-random-string \
MESH_LISTEN=0.0.0.0:7400 MESH_PEERS=203.0.113.20:7400 ./lumamesh-relay
# Node B
PUBLIC_IP=203.0.113.20 MESH_SALT=pick-a-long-random-string \
MESH_LISTEN=0.0.0.0:7400 MESH_PEERS=203.0.113.10:7400 ./lumamesh-relay
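Any sufficiently long random string works as the salt. One way to generate one, assuming openssl is available:

```shell
# Generate a 64-character random hex string to use as MESH_SALT
openssl rand -hex 32
```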
All mesh nodes must share the exact same MESH_SALT value. Different salts = rooms never converge.
Add your node to the network
Once your relay is running, open a pull request on GitHub to add it to the public relay list at lumamesh.com/relay.txt. Include your node's ip (or host), port, fingerprint, and nk — all printed by the relay on startup.
The curated list is just the bootstrap entry point. Once a browser connects to any node it automatically discovers all others via the nodes gossip action and caches them locally. The more nodes exist, the harder the network is to stop. There is no central authority — anyone can add a node, and the network routes around failures automatically.
Environment variables
| Var | Default | Purpose |
|---|---|---|
| PUBLIC_IP | 127.0.0.1 | Public IPv4 browsers dial. Required for a public node; the loopback default is only useful for local testing. |
| PUBLIC_HOST | — | DNS name (resolved client-side via DoH). Use for dynamic DNS. |
| UDP_PORT | 3478 | Browser-facing UDP port. |
| ICE_UFRAG | luma | ICE username fragment. |
| ICE_PWD | auto | ICE password. Auto-generated and persisted if unset. |
| CERT_FILE | server.crt | DTLS cert path. |
| KEY_FILE | server.key | DTLS key path. |
| NODE_KEY | node.key | Ed25519 identity path. |
| HEALTH_LISTEN | — | host:port for /healthz + /statsz. |
| MESH_SALT | — | Enables mesh gossip. Identical across all mesh nodes. |
| MESH_LISTEN | — | host:port to accept gossip peers (TCP). |
| MESH_PEERS | — | Comma-separated host:port of peer nodes to dial. |
| MAX_SESSIONS | 1000 | Concurrent browser sessions. |
| MAX_ROOM_SIZE | 250 | Members per room. |
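Putting several of these together, a hypothetical invocation for a public node that exposes the health endpoints and joins a mesh (the addresses and salt are placeholders, not real values):

```shell
# Hypothetical full invocation: public relay with health endpoints and mesh gossip.
# Replace the IPs, peer address, and salt with your own deployment's values.
PUBLIC_IP=203.0.113.10 \
UDP_PORT=3478 \
HEALTH_LISTEN=127.0.0.1:7401 \
MESH_SALT=pick-a-long-random-string \
MESH_LISTEN=0.0.0.0:7400 \
MESH_PEERS=203.0.113.20:7400 \
./lumamesh-relay
```

Unset variables simply fall back to the defaults in the table above, so a minimal node needs only PUBLIC_IP.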