Multicast in AWS with Transit Gateways
Public cloud has been missing capabilities needed to perform high-quality, low-latency live video production: for example, bandwidth QoS guarantees, accurate packet pacing, and multicast IP. At re:Invent 2019 last week, AWS announced multicast IP support via Transit Gateways, so I thought I'd try it out.
First, you should understand what a Transit Gateway is. AWS defines it as "a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks." But now a Transit Gateway can also route multicast IP between network interfaces both inside a VPC and between VPCs and your on-premises networks.
[update: IGMP is now supported by Transit Gateway - this article was originally written before that development, so it uses manual group membership]
Here is a quick example of setting up AWS multicast between three EC2 instances with a Transit Gateway (these steps assume that you understand basic AWS Console interaction and how to launch EC2 instances). Do everything in the US-East region, as that is currently the only region with multicast support.
- Go ahead and use your default VPC to launch three Amazon Linux 2 AMI instances. I suggest t3a.nano instances; these are Nitro instances, so you don't have to disable the Source/Dest check on their network interfaces. (They are not in the free tier, but cost less than half a cent per hour on-demand.) Of the three EC2 instances, one will be a source of multicast IP, and two will be receivers (two simultaneous receivers "prove" it isn't unicast). Make sure you are auto-assigning public IPs so that you can SSH into them, and in “Configure Security Group” add a rule to allow inbound UDP on port 20000 from 0.0.0.0/0.
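For those who prefer the CLI, the launch step can be sketched roughly like this. The security group, AMI, and key-pair IDs below are placeholders you would substitute with your own; this assumes the AWS CLI is configured for us-east-1, and that you are launching into a default VPC (whose subnets auto-assign public IPs):

```shell
# Allow inbound UDP on port 20000 (sg-... is a placeholder for your security group).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp --port 20000 --cidr 0.0.0.0/0

# Launch three t3a.nano Amazon Linux 2 instances.
# ami-... and my-key are placeholders for your region's AL2 AMI and your key pair.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3a.nano \
    --count 3 \
    --key-name my-key \
    --security-group-ids sg-0123456789abcdef0
```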
- In VPC Dashboard, create a Transit Gateway. You can use the default ASN. Be sure to check “Multicast support.” It will take a while for the Transit Gateway to launch before you can add the attachment in the next step.
- Staying in the VPC Dashboard, create a Transit Gateway Attachment to the Transit Gateway ID of the Transit Gateway you just created. Select Attachment Type "VPC", and the VPC ID of the default VPC where you launched the EC2 instances. It will show the various AZs and subnets that make up the VPC; you may want to leave them all checked to make sure you are including the subnet of your EC2 instances.
- Click on “Transit Gateway Multicast” and create a Transit Gateway Multicast Domain with the Transit Gateway ID of the Transit Gateway you created. It will take a while for it to launch before you can create the associations in the next step.
- Once the Transit Gateway Multicast Domain has launched, click on the “Associations” tab, and the “Create association” button. Choose the attachment you made, and the subnet where the EC2 instances were launched (you may need to check back at the EC2 Dashboard to determine which subnet you launched your EC2 instances in).
- On your Transit Gateway Multicast Domain, click on its Groups tab and click “Add member”. For the Group IP address use 239.0.0.1, and choose the network interfaces on your three EC2 instances. (You may need to go back to the EC2 Dashboard, select each of your instances in turn, and click on its "eth0" network interface to get this info.) You can add all three network interfaces to the Group in one shot by checking all of them, or add them one at a time. Click “Add members” to add them.
- Pick one of the EC2 instances as the source of the multicast IP, and under the Transit Gateway Multicast Domain's "Groups" tab press the “Add Source” button. Again enter multicast IP 239.0.0.1 and select a network interface on the source EC2 instance.
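The console steps above also have AWS CLI equivalents, if you want to script the setup. A rough sketch follows; every tgw-, vpc-, subnet-, and eni- ID is a placeholder you would replace with the IDs returned by the earlier commands and by your own VPC:

```shell
# 1. Create a Transit Gateway with multicast support enabled.
aws ec2 create-transit-gateway --options MulticastSupport=enable

# 2. Attach the VPC (and the subnet your instances are in) to the Transit Gateway.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0

# 3. Create the multicast domain.
aws ec2 create-transit-gateway-multicast-domain \
    --transit-gateway-id tgw-0123456789abcdef0

# 4. Associate the attachment's subnet with the multicast domain.
aws ec2 associate-transit-gateway-multicast-domain \
    --transit-gateway-multicast-domain-id tgw-mcast-domain-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0

# 5. Register all three instances' network interfaces as members of 239.0.0.1.
aws ec2 register-transit-gateway-multicast-group-members \
    --transit-gateway-multicast-domain-id tgw-mcast-domain-0123456789abcdef0 \
    --group-ip-address 239.0.0.1 \
    --network-interface-ids eni-0aaaaaaaaaaaaaaa0 eni-0bbbbbbbbbbbbbbb0 eni-0ccccccccccccccc0

# 6. Register the sending instance's network interface as the group source.
aws ec2 register-transit-gateway-multicast-group-sources \
    --transit-gateway-multicast-domain-id tgw-mcast-domain-0123456789abcdef0 \
    --group-ip-address 239.0.0.1 \
    --network-interface-ids eni-0aaaaaaaaaaaaaaa0
```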
Now log into your receiver EC2 instances, and on each of them run this line:
sudo tcpdump ip dst 239.0.0.1
Now your receivers are awaiting IP multicast traffic.
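If you'd rather see the datagram payloads than just packet headers, socat (assuming you install it first, e.g. with sudo yum install socat) can join the group locally and print whatever arrives on port 20000:

```shell
# Join multicast group 239.0.0.1 on eth0 and dump incoming
# UDP datagrams on port 20000 to stdout.
socat -u UDP4-RECV:20000,ip-add-membership=239.0.0.1:eth0 STDOUT
```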
Scapy is an easy way to craft and send the proper multicast packets we need for this demo. Log into your source EC2 instance, install scapy, and send some multicast packets:
sudo yum install git
git clone https://github.com/secdev/scapy.git
cd scapy
sudo python setup.py install
sudo scapy

Then, at the scapy prompt:

p = Ether()/IP(dst="239.0.0.1")/UDP(dport=20000)
sendp(p, count=10)
You should then see output from tcpdump on the receiving EC2 instances showing the reception of the multicast IP packets.
Based on tcpdump timestamps, I'm currently seeing about 300 μs of latency from the source EC2 instance to destination instances in the same VPC. Reception-time differences between receiving instances in the same VPC as the sender are around 20 μs, and between a receiver in the sender's VPC and a receiver in a different VPC and AZ they are around 200 μs. Of course, this is at low bit rates and on the tiniest of Nitro instances, but it shows promise for many interesting use cases.
Those timings assume synchronized clocks between the EC2 instances. On Amazon Linux 2, the default chrony configuration is already set up to use the Amazon Time Sync Service, although it isn't exactly clear how precise that is. Running chronyc tracking shows the clock status and estimated error on each instance.
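To confirm that chrony is actually synchronizing against the Amazon Time Sync Service, which is reachable at the link-local address 169.254.169.123, you can run on each instance:

```shell
# Show the current reference source, offset, and estimated error.
chronyc tracking

# Verify the Amazon Time Sync Service address appears as a source.
chronyc sources | grep 169.254.169.123
```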
In conclusion, it appears that yes, you can now send multicast IP inside a public cloud!
Very good article. But I have two points which should be mentioned. 1. It only works on a t3 instance. I have tried it with a t2.micro and could not make it work. After I changed them to t3.nano it worked perfectly. 2. We do not get real multicast. You can only define one single source, which can send messages; the others can only receive messages. Unfortunately I need a configuration where all instances can send and receive.
I'm trying to follow your steps to get NDI to work between instances but unfortunately multicast transports gateways aren't available in my region (Middle East - Bahrain). Are there any workarounds to get something unicast/multicast that uses mDNS to work between AWS EC2 instances?
Thanks for writing a very thoughtful article, Thomas!
Cool. I’d be concerned about cost. Don’t forget to unsubscribe IGMP.
This is a very interesting development! I hope it can reach its full potential.