Simulate high latency network using containers and “tc”

With containers, it is easy to simulate a high-latency network on a single machine.

In this post, we will see how.

Preparation

We will make the following files.

  • docker-compose.yml
  • client/Dockerfile
  • server/Dockerfile

docker-compose.yml

Since we are going to modify network-related stuff, NET_ADMIN capability is needed.

services:
  client:
    hostname: wansim-client
    container_name: wansim-client
    build: ./client
    tty: true
    cap_add:
    - NET_ADMIN
  server:
    hostname: wansim-server
    container_name: wansim-server
    build: ./server
    tty: true
    cap_add:
    - NET_ADMIN        

For both the client and the server, I am using the same custom "nettools" image.

client/Dockerfile

FROM alpine:3.22.2
# --no-cache avoids creating the apk cache, so no cleanup step is needed.
RUN apk add --no-cache \
 iputils \
 fping \
 httping \
 tcpdump \
 iproute2-tc \
 mtr \
 ngrep \
 nmap \
 netcat-openbsd \
 nmap-ncat \
 openssl \
 curl \
 wget \
 bind-tools \
 zsh \
 iperf3

server/Dockerfile

FROM alpine:3.22.2
# --no-cache avoids creating the apk cache, so no cleanup step is needed.
RUN apk add --no-cache \
 iputils \
 fping \
 httping \
 tcpdump \
 iproute2-tc \
 mtr \
 ngrep \
 nmap \
 netcat-openbsd \
 nmap-ncat \
 openssl \
 curl \
 wget \
 bind-tools \
 zsh \
 iperf3

Adding Latencies

Now, we are ready.

First, let’s get the two containers running with the following command.

docker compose up -d        

Execute the following two commands.

These add a 100ms delay to the outbound traffic of each container.

docker exec wansim-client tc qdisc add dev eth0 root netem delay 100ms
docker exec wansim-server tc qdisc add dev eth0 root netem delay 100ms        
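
To confirm the commands took effect, we can ask tc to show the root qdisc on each container's eth0; the output should list a netem entry with the configured delay:

```shell
# Show the root qdisc on eth0 inside each container.
docker exec wansim-client tc qdisc show dev eth0
docker exec wansim-server tc qdisc show dev eth0
```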

Let’s see whether it worked. Since each direction adds 100ms, we expect the RTT between the two containers to be about 200ms.
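
The RTT can be checked with a simple ping from one container to the other (Compose's default network provides DNS, so the server is reachable by its hostname):

```shell
# Measure the round-trip time from client to server; expect roughly 200ms.
docker exec wansim-client ping -c 3 wansim-server
```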

It looks good!


What is happening

When we create a Docker container, a pair of virtual network interfaces (a veth pair) is created. One end belongs to the host machine, and the other belongs to the container (network namespaces are used here). The host-side end is connected to a virtual bridge, which enables the containers to communicate with each other. For more information, see the Docker networking documentation.
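
The veth pairs can be inspected from the Docker host with iproute2. A quick sketch, assuming a standard bridge setup (note that Compose creates its own bridge, so the name will be something like br-<id> rather than docker0):

```shell
# List all veth interfaces; each "veth…@if…" entry is the host-side
# end of a container's veth pair.
ip link show type veth

# List the interfaces attached to a given bridge (substitute your
# Compose network's bridge name for docker0).
ip link show master docker0
```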

On Linux machines, outbound network traffic first goes into a “queue.” Usually, packets are sent out to the outside world as fast as possible, in FIFO order.

The behavior of this queue is configurable. For example, we can add latency (as we did in this post), or we can randomly discard a portion of the traffic. This mechanism is called a “queueing discipline,” or “qdisc.” The “tc qdisc … netem …” command we executed configures exactly this: netem (network emulator) is a qdisc designed for emulating network conditions such as delay and loss.
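
netem can emulate more than a fixed delay. As a sketch, assuming the containers from this post are still running, the qdisc we installed can be changed in place to add jitter and random packet loss:

```shell
# "delay 100ms 20ms" adds 100ms of delay with ±20ms of jitter;
# "loss 1%" randomly drops about 1% of outbound packets.
docker exec wansim-client tc qdisc change dev eth0 root netem delay 100ms 20ms loss 1%
```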

qdiscs are configured per network interface. In our example, we configured the qdisc attached to the eth0 network interface. By default, a Docker container has an eth0 virtual network interface, which is connected to the host machine.

Note that we ran the tc command in each of the two containers. This is because a qdisc only affects outbound traffic. We ran it in both so that each direction gets 100ms of latency, which better reflects real-world environments.
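
When you are done experimenting, the emulated latency can be removed by deleting the root qdisc on each container, which restores the default queueing behavior:

```shell
# Remove the netem qdisc and return to the default qdisc.
docker exec wansim-client tc qdisc del dev eth0 root
docker exec wansim-server tc qdisc del dev eth0 root
```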

