# Intro

This is a small post about tunnelling `UDP` over `TCP` and `WireGuard Site-to-Site VPN` configurations. I just configured a tiny private network with remote access capabilities and I thought it would be practical to share some of my learnings. It also helps me personally to do some documentation so I later remember exactly what went on (now I can delete the information from my attic-space 🧠).

```
"I can't believe THAT!" said Alice.

"Can't you?" the Queen said in a pitying tone. "Try again; draw a long breath, and shut your eyes."

Alice laughed. "There's no use trying," she said "one CAN'T believe impossible things."

"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."
```

# Infrastructure

First it's good to have some idea of the setup. We have a `router` that is internet facing; on the `LAN` we have a bunch of resources (compute, tv, whatever). The `pfSense` firewall also lives on the `LAN`, and its `WAN` port (`igc0`) has an address on the `LAN` subnet. The `pfSense` has two private networks (`igc1` and `igc2`) that have some stuff connected to them, but they are separated from each other and from the `LAN`. We have this kind of `double-NAT` scenario going on.

![[tunnel-01.svg]]

Ok, this is all fine and it is easy to configure some rules on `pfSense` that give selective access to the private networks. What we really want to understand here is how we can expose `igc1` using `wireguard` to facilitate remote access. The additional detail here is that we can spin up a `micro cloud vm` that can act as a public endpoint for our `wireguard` service. Btw, you can launch a `B1ls` instance on `Azure` for `$3.80/m`.

# Such tunnel, much stability

Ok first, I want to show you a solution that you will see described online but is actually bad! Once configured on `pfSense`, `WireGuard` will be listening on `UDP 51820`. We just need to get that port to the `Azure VM`. We can set up something like this through a shell on the `pfSense` appliance:

```bash
#!/bin/sh

SSH_TUNNEL_CMD="ssh -i /jumanji/tunnel.pem -N -f -R 15182:localhost:15182 [email protected]"
SOCAT_CMD="/usr/local/bin/socat TCP-LISTEN:15182,fork UDP4:localhost:51820"

# Check if the SSH tunnel is running, otherwise start it
if ! pgrep -f "$SSH_TUNNEL_CMD" > /dev/null; then
    $SSH_TUNNEL_CMD
fi

# Check if socat is running, otherwise start it
if ! pgrep -f "$SOCAT_CMD" > /dev/null; then
    $SOCAT_CMD &
fi
```

We reverse tunnel `TCP 15182` on the `Azure` server back to `TCP 15182` on `pfSense`. We then use `socat` to listen on `TCP 15182` and pass any traffic to `WireGuard` on `UDP 51820`. The reason we have this in a shell script is that we schedule a health-check every 5 minutes with `cron` (there is also a cron app in the package manager):

```
*/5 * * * * /jumanji/ssh_socat.sh
```

Then on the `Azure VM` we do something similar, but only with `socat`:

```bash
#!/bin/sh

# Check if socat is running, otherwise start it
if ! pgrep -f "socat UDP4-LISTEN:51820,fork TCP:localhost:15182" > /dev/null; then
    # Listen for wireguard traffic on the public interface and forward it into the SSH tunnel
    socat UDP4-LISTEN:51820,fork TCP:localhost:15182 &
fi
```

Here we listen with `socat` on `UDP 51820` and send it down the tunnel. Again, we use `cron` to health-check our "service". We encapsulate the `UDP` in `TCP` and then unwrap it again. This sort of works fine: we can externally connect to the `WireGuard VPN` and we can talk to the network on `igc1`.
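If you want to sanity-check the moving parts, something along these lines should work (a rough sketch; it assumes `sockstat` on the `FreeBSD`-based appliance and `iproute2`'s `ss` on the `VM`):

```bash
# On pfSense: the reverse tunnel and the TCP->UDP relay should both be running
pgrep -fl "ssh -i /jumanji/tunnel.pem"
sockstat -4 -l | grep 15182

# On the Azure VM: socat should be listening for WireGuard clients on UDP 51820
ss -ulpn | grep 51820
```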
![[tunnel-02.png]]

This seems pretty workable until you try to do something more complex, for example loading an internal webpage. At that point it becomes quite unstable. I wasn't really sure why, but while reading a `ServerFault` post about someone doing something similar I saw this comment.

![[tunnel-03.png]]

This is a `5-head` take. I talked to the `AI's` and they seem to think this is a plausible explanation: `TCP` is a byte stream with no datagram boundaries, so `WireGuard` packets can end up `merged`, `split`, or `corrupted` on the way through. They recommended a few options:

- `udptunnel` is apparently designed in a way that preserves packet boundaries, so it could be used as an alternative. We would need to compile it for the appliance.
- `udp2raw`, similar story; this actually looks more promising because there are BSD release packages.
- Implement custom data framing by rewriting parts of `socat` (the `AI's` think time grows on trees, clearly).

We have some options, but all of these still wrap and unwrap `UDP` traffic. Additionally, we don't know how well the solutions work until we try them. The `AI's`, while well intentioned, may be predicting tokens in an optimistic way (`remind yourself that overconfidence is a slow and insidious killer`). Before we start compiling code on `FreeBSD`, let's take a step back and consider the scenario again.

# Site-to-Site WireGuard

I've done a lot of infrastructure work, professionally, but not exactly the way this is set up. I would consider this an atypical scenario because of the `double-NAT` behind a `dynamic IP` situation. However, thinking a bit more clearly, there is a direct business parallel. Companies may have a requirement to securely connect two (or more) disparate network resources, literally `Site-to-Site`.

![[tunnel-04.svg]]

Here, typically, the edges each have a public IP and use some kind of port forwarding to the internal VPN endpoint. Both sides will have a fully featured VPN and register the other site as a client. Both sites will also have traffic rules that describe how clients on the local site can talk to networks on the remote site and how remote clients can route traffic on the local site.

We can actually implement this with `WireGuard`. One of our `Sites` doesn't have a public IP, but that doesn't matter. The only requirement is that at least one party (remember we can add more `Sites` to this mesh later if we want) has a public endpoint, in our case the `Azure VM`.
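Before writing any config we need key material on the `Azure VM`; the standard `wg(8)` subcommands generate it (the filenames are just my choice, and the pre-shared key will come from `pfSense` later):

```bash
# Generate the Azure Site's WireGuard keypair
umask 077
wg genkey | tee azure_site.key | wg pubkey > azure_site.pub
```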
With the keys in hand, we create a `WireGuard` configuration for the `Azure Site`, something like this:

```
[Interface]
# Azure Site server's private key
PrivateKey = ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒
# Azure server's WireGuard IP address
Address = 10.9.0.2/32
# Port WireGuard will listen on
ListenPort = 51820
# PostUp commands: executed after the interface is brought up
PostUp = sysctl -w net.ipv4.ip_forward=1; iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# PostDown commands: executed before the interface is brought down
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# pfSense Site
PublicKey = ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒
PresharedKey = ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒
AllowedIPs = 10.9.0.0/24, 10.1.0.0/24
```

Notice a few things here:

- The `PostUp` commands facilitate traffic routing over the `WireGuard` interface
- There is a `[Peer]` configured for the `pfSense Site`
- The `[Peer]` has an `AllowedIPs` setting that describes which networks it can address

When you initially set this up you will, of course, not have the `Peer` information until you create the `WireGuard` service on `pfSense`.

![[tunnel-05.png]]

Take note of the `Public key` here; you need it for the `[Peer]` section on the `Azure VM`. This is pretty standard, so let's take a look at the `AZURE_SITE` peer configuration.

![[tunnel-06.png]]

The key point here is that we uncheck the `Dynamic Endpoint` box, which lets us specify the public IP address and port of the `Azure Site`. Take note of the `Pre-shared key`, if generated; it has to go into the `[Peer]` section on the `Azure VM`. At this point you should first start/restart the `Azure Site WireGuard` and then `disable->enable` the `pfSense Site WireGuard`. Once you do that, you should see that `pfSense` has connected to the `Azure Site WireGuard`.

![[tunnel-07.png]]

Perfect. Finally, you may have noticed that the `pfSense Site` addresses traffic for `10.9.0.0/24` (the `WireGuard` subnet) and `10.1.0.0/24` (the `igc1` subnet). To allow us to route traffic to `10.1.0.0/24` we need to add a firewall rule on `pfSense`.

![[tunnel-08.png]]

In `Destination`, from the dropdown, pick the network that maps to the local subnet (`igc1` in my case). Notice here that I have a unified interface for `WireGuard`; this means that if I make multiple tunnels, all tunnels will be able to route to `igc1`. If you wish to avoid this behaviour, you can go to `VPN->WireGuard->Settings` and change `Interface Group Membership` from `All Tunnels` to `Only Unassigned Tunnels`. Then, once you assign an interface to your `WireGuard` tunnel, you will be able to create rules that map only to a specific tunnel.

The steps are a bit condensed for brevity, but that is all you need to do; any clients that now connect to the `Azure Site` will be able to route traffic to the `pfSense Site` and, in my case, to `igc1`.

![[tunnel-09.png]]

### Notes

Just keep in mind that you should add `client profiles` to the `Azure Site`, not the `pfSense Site`, along the lines of the sketch below.
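For reference, a road-warrior client entry on the `Azure Site` might look like this (the `10.9.0.5` address is a hypothetical placeholder, and the keys come from whatever the client generates):

```
[Peer]
# Laptop client (hypothetical)
PublicKey = <client public key>
PresharedKey = <client pre-shared key>
# A single WireGuard address for this client
AllowedIPs = 10.9.0.5/32
```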
You add these in the normal way, appending peers below the existing ones. Later, if needed, you can add more sites to the mesh. So long as the remote site connects to the `Azure VM`, you can configure routing like we did before. Finally, here one of the sites is, of course, a `pfSense` appliance, but you can do this purely through configs on both ends, like we did on the `Azure VM`. You don't need the `pfSense` middleware layer, that's just my setup.
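For example, a second site running plain `Linux` could be joined with a config along these lines (the addresses and placeholders are illustrative, not from my setup):

```
[Interface]
# Site B (hypothetical Linux box)
PrivateKey = <site B private key>
Address = 10.9.0.3/32

[Peer]
# Azure Site acting as the hub
PublicKey = <Azure Site public key>
PresharedKey = <pre-shared key>
Endpoint = <Azure public IP>:51820
# Reach the WireGuard subnet and igc1 through the hub
AllowedIPs = 10.9.0.0/24, 10.1.0.0/24
# Keep the NAT mapping alive from behind NAT
PersistentKeepalive = 25
```

One detail to keep in mind: for two spoke sites to reach each other through the hub, the `Azure Site`'s `AllowedIPs` for each `[Peer]` must also cover the other site's networks, since `WireGuard` routes by `AllowedIPs`.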