
Optimizing OpenVPN Throughput

Believe it or not, the default OpenVPN configuration is likely not optimized for our link. It probably works, but its throughput can often be improved if we take the time to tune it.

For proper optimization of the link, both ends of the tunnel should be known beforehand. That means when we are using OpenVPN in server mode, serving different clients that we have no control over, the best we can do is optimize our own end of the tunnel and use sensible defaults suitable for most clients. In this post, however, our main interest is OpenVPN p2p or point-to-point mode, which you would most likely use to connect different parts of your networks together.

comp-lzo

OpenVPN can use LZO to compress each packet individually. If we load the compression module on one side, it must be loaded on the other side as well. The comp-lzo mode, however, does not need to match. It is generally best to set comp-lzo to adaptive and let OpenVPN decide whether compression should be used or not. There are circumstances, however, where using the module could hurt: in the case of incompressible data (e.g., transferring a zip file), comp-lzo can backfire by increasing the final packet size by 1 byte. We can check the effectiveness of LZO compression over time by sending the SIGUSR2 signal to the OpenVPN process (if it is running as a daemon, the output goes to the syslog file). With those statistics you can decide whether compression is useful for your link or not.
It is also worth noting that although the LZO algorithm is fast, it still uses some resources. The algorithm is such that compressing a block requires relatively more resources than decompressing it.
If we decide not to use compression, we should prevent loading the module completely by not including the comp-lzo line at all. Including it, even when set to no, reduces your effective tun-mtu size by 1 byte.
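Assuming we do want compression, a minimal sketch of the relevant line (identical on both peers, per the discussion above) would be:

```
# Load the LZO module; adaptive mode lets OpenVPN periodically test
# whether compression is actually shrinking this link's traffic.
comp-lzo adaptive
```

To judge its effectiveness later, send SIGUSR2 to the running process (e.g. kill -USR2 $(pidof openvpn)) and compare the pre-compress/post-compress byte counters in the statistics output.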

Cipher algorithm and size

Different ciphers have different speeds on different hardware (e.g., an AES-NI-capable CPU). This is a hard topic to cover, as it is up to you to decide whether you want to sacrifice stronger encryption for a faster tunnel, or use a smaller key size to reduce the CPU load. There are countless articles about OpenSSL ciphers, their speed, and their strength; do a Google search and get familiar with the subject. As a side note, to compare cipher speeds on your platform, take a look at the openssl speed -h command.
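For example, if the benchmarks show AES-128 is fast on your hardware, a sketch of the config line (which must match on both peers) could be:

```
# AES-128-CBC: a common strength/speed trade-off,
# hardware-accelerated on AES-NI capable CPUs.
cipher AES-128-CBC
```

When benchmarking, prefer openssl speed -evp aes-128-cbc over the plain form, since the -evp variant goes through the EVP interface and exercises hardware acceleration if it is available.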

sndbuf and rcvbuf

There have been reports of speed improvement in some circumstances when these values are set to 0, which leaves socket buffer sizing to the operating system [1].
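A minimal sketch of that setting:

```
# 0 = let the operating system manage the socket buffer sizes:
sndbuf 0
rcvbuf 0
```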

fast-io

This little flag, which is supported on non-Windows systems, improves CPU usage when dealing with UDP packets by using non-blocking write operations. It only applies to UDP tunnels.
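Enabling it is a single line (UDP tunnels on non-Windows systems only):

```
# Use non-blocking UDP writes; ignored for TCP tunnels:
fast-io
```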

MTU adjustments

OpenVPN UDP packets should not be fragmented, so we need to ensure we are not sending a packet larger than what can travel to the other end of the tunnel. TCP tunnels usually do not require such adjustments.

link-mtu

This is the maximum size of the final UDP packet after encapsulation, minus the headers. For example, if your link MTU is 1500, the correct value for link-mtu would be 1472 (1500 – 20 (IP header) – 8 (UDP header)). The OpenVPN manual says it is best not to set this value directly. In my experience, however, this is in fact the best way to adjust your tun/tap link MTU properly, and the tun-mtu value (which we will discuss later) will be derived from it. The default value of link-mtu, on the other hand, is derived from tun-mtu and is bigger than 1500.
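The arithmetic above is trivial but easy to get backwards, so here is a small sketch of it in Python (the constants assume IPv4 over UDP, as in the example):

```python
# link-mtu = physical interface MTU minus the outer IPv4 and UDP headers.
IPV4_HEADER = 20  # bytes, without IP options
UDP_HEADER = 8    # bytes

def link_mtu(physical_mtu: int) -> int:
    """Largest encapsulated OpenVPN packet that avoids IP fragmentation."""
    return physical_mtu - IPV4_HEADER - UDP_HEADER

print(link_mtu(1500))  # standard Ethernet -> 1472
```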

tun-mtu

The actual MTU of the OpenVPN tun/tap device. This defaults to 1500.

We can only specify either link-mtu or tun-mtu and not both. The other one will be calculated internally by OpenVPN. One other thing to note is that link-mtu applies to final packets (after encryption and encapsulation) while tun-mtu applies to the unencrypted packets which are about to enter the tun/tap device.

A tun-mtu of 1500 is ideal if your physical link MTU can handle it, as it provides maximum compatibility between routers along the way. However, this is not always the case. OpenVPN is supposed to be able to discover this and act accordingly, but the whole thing collapses if you have a broken PMTUD along the path.
In that case, manual intervention to adjust the MTU is required. In another post I will talk about ways to find the correct MTU of a path, but assuming you already know the correct value, subtract 28 bytes from it, set that as the link-mtu value, and let OpenVPN calculate the right tun-mtu for you. Again, remember that the calculated tun-mtu value applies to packets before compression/encapsulation, and its size highly depends on other factors like the cipher algorithm, key size, compression module, etc.

On very fast links, setting tun-mtu to a high value could potentially help [2].

fragment

This option should generally be avoided when possible, as it adds 4 bytes of overhead to each packet. But it is there as the last resort when no other option works. With this option, OpenVPN internally fragments packets into chunks no bigger than the set value and sends them over the link. The other end receives and reassembles them to recreate the originally sent packet. This is the only instance I know of where a single packet could result in more than one OpenVPN UDP packet being sent over the link.
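If nothing else works, a sketch of this last-resort setup (1300 is a hypothetical value, chosen comfortably below the problematic path MTU):

```
# Internally fragment anything that would encapsulate to more than
# 1300 bytes; costs 4 bytes of overhead on every packet.
fragment 1300
```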

mssfix

This option only applies to TCP connections inside the tunnel. Maximum Segment Size (MSS) is yet another feature of TCP, negotiated between peers during the TCP handshake via SYN packets. It is the maximum size of the payload each TCP packet can carry; it does not take the IP and TCP header sizes into account. This option can be used on a link with broken PMTUD to at least make TCP connections possible.
Even though MSS itself is a TCP feature, this OpenVPN option targets the encapsulated UDP packets. Meaning, it changes the MSS value of the TCP connections inside the tunnel in such a way that after encryption/encapsulation, the resulting UDP packet size (minus IP/UDP headers) would not exceed the mssfix value.
So on an optimized link, mssfix is either disabled (set to 0) or its value is the same as link-mtu's.
As a side note, mssfix applies to both sending AND receiving SYN packets, so it is not an ideal solution for asymmetric links… but that's for another post.
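Tying this to the earlier example: if your link-mtu were 1372 (a hypothetical value), an optimized pairing along those lines might look like:

```
# Clamp the MSS of tunneled TCP connections so the encapsulated
# UDP packets never exceed the link MTU:
link-mtu 1372
mssfix 1372
```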