Simple question, difficult solution. I can't work it out. I have a server at home with a site-to-site VPN to a server in the cloud. The server in the cloud has a public IP.
I want people to access the server in the cloud, which should forward their traffic through the VPN to the server at home. I have tried this and it works. I've tried nginx streams, frp and also HAProxy. They all work, but in the home server's logs I can only see connections coming from the site-to-site VPN, not the clients' actual source IPs.
Is there any solution (program/Docker image) that will take a port and forward it to another host (or maybe to another program listening on the host) while preserving or re-adding the real source IP? The whole idea is that in the server logs I want to see people's real IP addresses, not the cloud server's private VPN IP.
Not that I'm aware of. Most methods require some kind of out-of-band way to send the client's real IP to the server, e.g. X-Forwarded-For headers, Proxy Protocol, etc.
If your backend app supports proxy protocol, you may be able to use HAProxy in front on the VPS and use proxy protocol from there to the backend. Nginx may also support this for streams (I don't recall if it does or not since I mainly use HAProxy for that).
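Roughly, the HAProxy side of that would look something like this on the VPS (the port and the home server's VPN IP are placeholders for your own values):

# Fragment of haproxy.cfg on the VPS; 443 and 10.8.0.2 are placeholders
frontend public_in
    bind *:443
    mode tcp
    default_backend home_server

backend home_server
    mode tcp
    # send-proxy prepends a PROXY protocol header carrying the real client IP
    server home 10.8.0.2:443 send-proxy

The backend service then has to be configured to expect the PROXY header on that listener, otherwise it will see garbage at the start of each connection. For what it's worth, nginx's stream module can do the same with a proxy_protocol on; directive in the stream server block.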
Barring that, there is one more way, but it's less clean.
You can use iptables on the VPS to do a prerouting DNAT port forward. The only catch to this is that the VPN endpoint that hosts the service must have its default gateway set to the VPN IP of the VPS, and you have to have a MASQUERADE rule so traffic from the VPN can route out of the VPS. I run two services in this configuration, and it works well.
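Roughly (the port, the home server's VPN IP and the VPN subnet below are placeholders for your own values):

# IP forwarding must be enabled on the VPS (sysctl net.ipv4.ip_forward=1)
# DNAT traffic arriving on the public port to the home server's VPN IP
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2:443
# MASQUERADE so traffic from the VPN subnet can route back out to the internet
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE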
Where eth0 is the internet-facing interface of your VPS.
Edit: One more catch to the port forward method. This forward happens before the traffic hits your firewall chain on the VPS, so you'd need to implement any firewalls on the backend server.
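Since the backend now sees the real client IPs, you can filter on them there instead; purely as an illustration (port and range are placeholders):

# On the home server: block the forwarded port except for a trusted range
iptables -A INPUT -p tcp --dport 443 ! -s 203.0.113.0/24 -j DROP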
Forgot to ask: Is your server a VPN client to the VPS or a VPN server with the VPS as a client? In my config, the VPS is the VPN server.
Not sure about the netplan config (all my stuff is Debian and uses old-school /etc/network/interfaces), but you'd need logic like this:
Server is VPN client of the VPS:
routes:
  # Ensure your VPS is reachable via your default gateway
  - to: <vps public ip>
    via: <your local gateway>
  # Route all other traffic via the VPS's VPN IP
  - to: 0.0.0.0/0
    via: <vps vpn ip>
You may also need to explicitly add a route to your local subnet via your eth0 IP/dev. If the VPS is a client to the server at home, then I'm not sure if this would work or not.
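Doing that extra local-subnet route by hand rather than via netplan would be something like (subnet and interface are placeholders):

# Keep the local LAN reachable directly via the physical interface
ip route add 192.168.0.0/24 dev eth0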
Sorry this is so vague. I have this setup for 2 services, and they're both inside Docker with their own networks and routing tables; I don't have to make any accommodations on the host.
Everything I use is in Docker too; I'd much rather use Docker than mess around with host files, but I don't mind trying it out. If you have an image you could share, I'd appreciate it.
Anyway, neither is really a client or a server, since I just used ZeroTier as a quick setup. On my other infra I use WireGuard with the VPS as the server (that setup works well, but I only reverse proxy HTTP stuff there, so X-Forwarded-For covers it).
I've no experience with ZeroTier, but I use a combo of WireGuard and OpenVPN. I use OpenVPN inside the Docker containers since it's easier to containerize than WireGuard.
Inside the Docker container, I have the following logic:
supervisord starts openvpn along with the other services in the container (yeah, yeah, it's not "the docker way" and I don't care)
OpenVPN is configured with an "up" and "down" script
When OpenVPN completes the tunnel setup, it runs the up script which does the following:
# Get the current default route / Docker gateway IP
export DOCKER_GW=$(ip route | grep default | cut -d' ' -f 3)
# Delete the default route so the VPN can replace it.
ip route del default via $DOCKER_GW;
# Add a static route through the Docker gateway only for the VPN server IP address
ip route add $VPN_SERVER_IP via $DOCKER_GW || true
# Add a static route for the local LAN subnet as well
ip route add $LAN_SUBNET via $DOCKER_GW || true
LAN_SUBNET is my local network (e.g. 192.168.0.0/24) and VPN_SERVER_IP is the public IP of the VPS (1.2.3.4/32). I pass those in as environment variables via docker-compose.
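The script gets hooked in through OpenVPN's normal client-config directives, roughly like this (paths are placeholders):

# In the container's OpenVPN client config
script-security 2
up /etc/openvpn/route-up.sh
down /etc/openvpn/route-down.sh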
The VPN server pushes the default routes to the client (0.0.0.0/1 via <VPS VPN IP> and 128.0.0.0/1 via <VPS VPN IP>).
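On the server side, that split default is what OpenVPN's redirect-gateway push produces:

# In the OpenVPN server config on the VPS; def1 yields the two /1 routes
push "redirect-gateway def1"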
Again, sorry this is all generic, but since you're using different mechanisms, you'll need to adapt the basic logic.
Just to confirm, is the -o eth0 in the second command essentially the interface where all the traffic is coming in? I've set up a quick WireGuard VPN with Docker and set up the client so that it routes ALL traffic through the VPN. Doing something like curl ifconfig.me now shows the public IP of the VPS... this is good. But it seems like the iptables commands aren't working for me.
You may need to move the logic from netplan to a script that gets executed when the VPN is brought up. Otherwise, it will likely fail since it won't have the VPN tunnel interface up to route traffic to.
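If you're using wg-quick for that WireGuard client, one way to do that is PostUp hooks in its config (just a sketch; the IPs and interface are placeholders):

[Interface]
# ...existing Address/PrivateKey lines...
# Keep the VPS's public IP reachable via the local gateway, and the LAN reachable directly
PostUp = ip route add 203.0.113.10/32 via 192.168.0.1 || true
PostUp = ip route add 192.168.0.0/24 dev eth0 || true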