OpenVidu 2.2.0: TURN made easy

OpenVidu · Jun 27, 2018

A while ago we published a post about the direction the OpenVidu platform was heading after release 2.0.0 (you can get into context by reading that post right here). Well, one of the main goals we wanted to achieve in the following iterations was to increase the success rate of media connections when establishing OpenVidu Sessions. Developers who have worked with WebRTC will know for sure what a hell it can be to successfully establish a media connection across networks protected by NATs and firewalls.

And what’s the point of a media communication technology over the Internet if it cannot support such a common scenario for Internet users? Out there, many companies and public organizations protect their private networks by blocking all ports and protocols except just a few of them (probably TCP 80, 443 and some others such as 53 for DNS). WebRTC relies on STUN and TURN servers to overcome these situations, but successfully integrating these services into your application’s system is not easy at all. Sure, on the client side it is as simple as configuring RTCPeerConnection objects with the proper ICE servers. Since the inception of OpenVidu, OpenVidu Browser has been collecting free STUN servers when creating WebRTC connections by using the freeice package. And with release 2.0.0 we provided the ability to manually configure any STUN or TURN server thanks to the method OpenVidu.setAdvancedConfiguration({iceServers: []}). So, up to this point users had two options to set their STUN/TURN configuration: “freeice” or “manual”. The first one is pretty straightforward but has no TURN support (for some reason nobody offers a TURN relay server for free… note the irony of that phrase). The second one is pretty powerful but was very poorly integrated into OpenVidu’s ecosystem. Why should OpenVidu users take direct control of this configuration when the platform could handle it automatically? That’s where this iteration comes into play.

WebRTC testing over protected networks

The first step in our approach to seamlessly integrate STUN/TURN into the OpenVidu platform is designing an effective, well-contained, highly customizable, easy-to-deploy testing environment. Sounds too good to be true? As so many times before, Docker is the answer.

Docker to the rescue

The idea is pretty simple: we can extend an Ubuntu image with a Chrome browser, and by including a noVNC service we can handle it from the host machine. Finally, by installing iptables and writing some pre-established network restrictions we can force the container to ignore (DROP, in iptables jargon) any packet coming from or going to a certain port, or using a certain protocol. This way we don’t need to use different machines (physical or virtual) and mess around with their own iptables rules. We just launch the container, use it and destroy it whenever we want, and any OS network configuration changed inside it will never affect our host machine.

OpenVidu dockerized testing environment for simulating network restrictions. The firewall configured inside the container thanks to iptables will always affect exclusively that container itself.
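As a reference, such a container can be launched with something along these lines. The image name is just a placeholder for the Ubuntu + Chrome + noVNC image described above, and NET_ADMIN is the capability needed to apply iptables rules inside the container:

# Placeholder image name; --cap-add=NET_ADMIN lets us run iptables inside the container
docker run -d --name chrome-firewalled \
  --cap-add=NET_ADMIN \
  -p 6080:6080 -p 5900:5900 \
  ubuntu-chrome-novnc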

The implementation has been a little more complex than the idea, of course. The main issue was checking that the iptables restrictions worked as they should in theory: Docker itself modifies iptables in the host machine in order to work, and customizing the rules inside a container could just mess them up. But after performing some test batteries and using Wireshark to analyse the traffic from/to the docker0 network interface, it became clear that the iptables restrictions were actually working as expected. The most restricted scenario was configured by setting the following iptables rules:

# Drop all UDP packets
sudo iptables -A OUTPUT -p udp -j DROP
sudo iptables -A INPUT -p udp -j DROP
# Accept TCP packets on ports 80 (http), 443 (https), 53 (dns), 4444/6080/5900 (noVNC), 4200/4443 (testing app and OpenVidu Server) and 3478 (COTURN)
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 53 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 4444 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 6080 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 5900 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 4200 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 4443 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --dport 3478 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 80 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 443 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 53 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 4444 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 6080 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 5900 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 4200 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 4443 -j ACCEPT
sudo iptables -A OUTPUT -o eth0 -p tcp --sport 3478 -j ACCEPT
# Drop every other TCP packet
sudo iptables -A OUTPUT -o eth0 -p tcp -j DROP

The Docker container was then dropping every network packet except those TCP packets with source or destination port 80, 443, 53, 4444, 6080, 5900, 4200, 4443 and 3478. So, no WebRTC connections through UDP.
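We used Wireshark for this analysis, but a quick tcpdump on the default docker0 bridge gives an equivalent sanity check (assuming a standard Docker bridge setup):

# With the rules above applied inside the container, no UDP traffic should show up here
sudo tcpdump -i docker0 udp
# And TCP traffic should only appear on the allowed ports (80, 443, 53, 4444, 6080, 5900, 4200, 4443, 3478)
sudo tcpdump -i docker0 tcp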

First results

After serving an instance of OpenVidu Server (port 4443) and KMS on the host machine, as well as a videoconference application (OpenVidu TestApp, through port 4200), everything was ready to run the Docker container and connect to the application to test the media transmission. Inside the container, Chrome was launched with the following command to avoid certificate issues and problems accessing physical devices:

google-chrome --start-maximized --disable-infobars --no-first-run --ignore-certificate-errors --use-fake-device-for-media-stream --use-fake-ui-for-media-stream

By connecting with noVNC to http://localhost:6080/vnc.html through a Chrome tab in the host machine we are able to handle the Chrome instance running inside the container. We just have to replace any localhost URL with 172.17.0.1 when connecting to the test application served by the host, as that is the bridge IP Docker provides to reach the host from inside the container.

Use of OpenVidu TestApp in a dockerized Chrome while being served from the Docker host. Its use in a browser on the host would be the same, but both highlighted URLs would instead be https://localhost:4200/ and https://localhost:4443/

No surprises: a 1 to 1 scenario doesn’t work at all if no COTURN server is configured.

With iptables restrictions, no remote videos are able to reach the dockerized browser from the external KMS. We can only see the local videos being played.

If we start the container without running the iptables commands, remote connections work fine. Wireshark tells us that every media connection is established over UDP.

But this is the same result we obtain if only UDP packets are dropped (no TCP ports blocked at all). This time Wireshark captures media packets over TCP, yet no TURN server is configured in our system. How is this possible?

ICE-TCP and its magic

ICE-TCP is a mechanism by which media is sent over TCP, but not through TURN. It has been supported by Chrome for a few years now, and it is available in Firefox since version 54. It is not a widely known part of the WebRTC lifecycle (it takes place during the ICE candidate and SDP negotiation process), and we certainly were confused for a while trying to understand how there could be a successful WebRTC transmission with UDP blocked and no TURN server at all. The answer was ICE-TCP: modern browsers will try to connect directly to the other endpoint of a WebRTC connection through TCP if UDP is blocked, without needing the intervention of a TURN relay server. This is in fact very positive: it means the load on our TURN server will be reduced on many occasions.
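You can actually watch these TCP candidates being gathered with a few generic WebRTC lines in the browser console (this is not OpenVidu code, and the public STUN URL is just an example):

var pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
pc.createDataChannel("probe"); // force ICE gathering without requesting media
pc.onicecandidate = (event) => {
  if (event.candidate) {
    // Each candidate line states its transport: "... udp ..." or "... tcp ... tcptype ..."
    console.log(event.candidate.candidate);
  }
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));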

After learning that ICE-TCP came into play in our system, we proceeded to block almost every TCP port as stated above. That way no WebRTC connection could be established with the KMS without a TURN server.

Adding a COTURN to the system

The last necessary step was to deploy this testing scenario in the cloud and run some real tests by adding a COTURN server. COTURN is the most popular open source STUN/TURN server implementation.

For a real testing setup we deployed our server-side stack on an Amazon EC2 instance, including a COTURN server to guarantee the WebRTC connection establishment in any case, including the scenario where we blocked every port except 8 or 9 TCP ports.

The image above portrays the possible WebRTC connections that could be established in our system: we want to connect our browser with our KMS instance, but the COTURN may act as a relay server in case the two peers cannot be directly connected. This is for sure the case in our super-network-restricted Docker container, which is why we couldn’t get this scenario to work in our local setup.
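As a reference, the COTURN server on the EC2 instance can be launched with long-term credentials and a static user along these lines (every value here is a placeholder, not our actual deployment):

# Placeholder values: USER, PASS, realm and the public IP are illustrative
sudo turnserver \
  --listening-port=3478 \
  --lt-cred-mech \
  --user=USER:PASS \
  --realm=openvidu \
  --external-ip=aws.public.ip \
  --fingerprint \
  --verbose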

First we configure the system like this:

  • KMS is configured to use our COTURN just as a STUN server (because they are deployed on the same machine, our KMS should never need to use the COTURN as a relay server: there is no NAT between them). A sketch of this KMS configuration is shown right after the snippet below.
  • Our dockerized Chrome is configured to use the COTURN server, which has been set up with a static user and password. We can do this thanks to the OpenVidu Browser API. Every user connecting to a session will end up initializing their RTCPeerConnection objects with the TURN server IP and credentials:
var OV = new OpenVidu();
OV.setAdvancedConfiguration({ iceServers: [
  { urls: "stun:aws.public.ip:3478" },
  {
    urls: [
      "turn:aws.public.ip:3478",
      "turn:aws.public.ip:3478?transport=tcp"
    ],
    username: "USER",
    credential: "PASS"
  }
]});
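For reference, on the KMS side this STUN setting typically lives in Kurento’s WebRtcEndpoint.conf.ini configuration file; something along these lines, where the address is a placeholder for the AWS public IP:

stunServerAddress=aws.public.ip
stunServerPort=3478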

And… this works great. Now our only-8-open-TCP-ports browser is able to send/receive WebRTC media streams to/from the KMS deployed in AWS. If we inspect the traffic with Wireshark on our host, we can see that every packet is now sent through TCP to AWS port 3478 (the default port for COTURN servers, and one of the ports we had opened in our container).

So, last step. We don’t want any fixed user and password hanging around our client-side code. Our COTURN is a very precious resource and no one should be able to use it without our permission. How have we implemented this feature in OpenVidu platform? See it below.

Dynamic TURN credentials system integrated in OpenVidu user’s lifecycle

COTURN provides a native tool for Linux systems to handle this kind of behavior: turnadmin. It is installed by default along with COTURN itself. We then just needed three small additions to the OpenVidu platform to support dynamic TURN credentials generation:

  • A database for storing TURN credentials: COTURN by default expects a SQLite file as its persistence mechanism, but after experiencing some serious cache problems it became mandatory to install a proper database. COTURN/turnadmin support PostgreSQL, MySQL, Redis and MongoDB. Redis was the chosen one for its simplicity and lightness.
  • OpenVidu Server: every time a user connects to an OpenVidu session, OpenVidu Server must generate and store one new credential making use of turnadmin, so that whenever the user later asks to send or receive media it will be possible for him/her to connect to the COTURN. The COTURN server will look up the Redis database and grant access to the user when it finds the created credentials. This also means OpenVidu Server must now send the credentials to the user (we simply append them to the user’s token, generated with the OpenVidu Server REST API and used on the client side to connect to an OpenVidu session). And of course, OpenVidu Server deletes the TURN credentials whenever the user leaves the session, also thanks to turnadmin (see the turnadmin sketch after the snippet below).
  • OpenVidu Browser: only a simple change: when a user consumes a token in the JavaScript code to connect to a session, the library must store the TURN credentials appended to it (if any). This way we can later internally configure the RTCPeerConnection objects to use the COTURN server with those credentials, once the user asks to send or receive a media stream.
var OV = new OpenVidu();
var session = OV.initSession();
// 'token' generated and returned by the application's backend
session.connect(token, () => {
  // OpenVidu Browser internally stores the TURN credentials appended to the 'token'
  // parameter, to use them later when initializing RTCPeerConnection objects
});
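To give an idea of what OpenVidu Server runs under the hood, adding and removing a long-term user with turnadmin against a Redis database looks roughly like this (user, password, realm and the Redis connection string are placeholders):

# Create the credentials when the user connects to the session
turnadmin -a -u GENERATED_USER -p GENERATED_PASS -r openvidu \
  -N "ip=127.0.0.1 dbname=0 port=6379 connect_timeout=30"
# Delete them when the user leaves the session
turnadmin -d -u GENERATED_USER -r openvidu \
  -N "ip=127.0.0.1 dbname=0 port=6379 connect_timeout=30"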

And that’s it. Now every new OpenVidu user (actually, every OpenVidu token) will be paired with dynamically generated TURN credentials, and those will only be valid as long as the user remains connected to the session.

Our media transmission works on difficult networks and our COTURN server is properly secured. Also, we have built a very flexible setup to extend our Continuous Integration environment to test WebRTC under different network configurations. And best of all, developers using OpenVidu don’t have to worry about anything regarding STUN and TURN: we have successfully automated the configuration and use of the COTURN server in the OpenVidu platform.

Stay tuned for next iterations! You can follow us on Twitter and a star on GitHub is always welcome :)
