SIP + Serverless

One of the cloud trends today is serverless computing. It certainly lowers the threshold for getting a service up and running without having to bother with infrastructure. It also forces you to create a design that scales elastically, thereby making sure you can utilize the full power of cloud infrastructure.

In between trips to the mountains in Northern Sweden, I have spent some time validating design ideas I have had in mind for a while, putting the famous "How hard can it be..." statement to the test.

What about setting up an elastic SIP service running serverless? The traditional tools in the toolbox have been missing a piece that would allow you to do this completely serverless. Services like AWS Lambda only had an HTTP interface, so to deploy a service exposing other protocols you had to run something like an EC2 instance. That is cloud indeed, but not serverless by definition. You could also use something like Amazon Elastic Container Service and run your SIP service in a container, but that service also required you to set up a fleet of EC2 instances to host the containers.

Just in time, as I had some time for my own projects, Amazon released AWS Fargate, which allows you to run containers without having to bother with EC2 instances. Yes! Finally an option to realise the idea of a truly serverless SIP service. Fargate is currently only available in the North Virginia region, but will spread to other Amazon regions later.

It would have been easy to just spin up a container running your favourite SIP server, but what would that prove? As said, the exciting opportunities with serverless are elastic scalability and service resiliency. So that is what I wanted to explore.

To make sure the service was elastic and could grow and shrink based on demand, I first configured a SIP server running in a container. A key consideration is to not maintain registration or dialog state in the server. These were instead stored in Amazon DynamoDB (serverless, of course). This ensures that any SIP request can be served by any of the running containers. The containers were deployed as ECS tasks in Fargate.
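The stateless design above can be sketched roughly as follows. Note that the table name, attribute layout, and example URIs are illustrative assumptions; the article does not publish the actual schema:

```python
import time

# Hypothetical table name -- an assumption, not the article's actual schema.
TABLE_NAME = "sip_registrations"

def build_registration_item(aor, contact_uri, expires_s, now=None):
    """Build a DynamoDB item for a SIP REGISTER binding.

    Keyed on the address-of-record (AOR) so that any container can look
    up the binding; an expiry attribute lets DynamoDB's TTL feature
    remove stale registrations automatically.
    """
    now = int(time.time()) if now is None else now
    return {
        "aor": {"S": aor},                          # partition key
        "contact": {"S": contact_uri},              # where to route requests
        "expires_at": {"N": str(now + expires_s)},  # DynamoDB TTL attribute
    }

# Writing the binding would then be a single PutItem call (requires AWS
# credentials, so it is shown but not executed here):
#
#   import boto3
#   boto3.client("dynamodb").put_item(
#       TableName=TABLE_NAME,
#       Item=build_registration_item("sip:alice@example.com",
#                                    "sip:alice@203.0.113.5:5060", 3600),
#   )

item = build_registration_item("sip:alice@example.com",
                               "sip:alice@203.0.113.5:5060", 3600, now=1000)
print(item["aor"]["S"], item["expires_at"]["N"])
```

Because the item is keyed on the AOR alone, a REGISTER handled by one container and a later INVITE handled by another both resolve the same binding.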

Another nut to crack was how to distribute inbound SIP traffic across the deployed containers. In a traditional web application, one of the load balancers provided by Amazon could be used. This does not work for SIP, since some devices need to use UDP for signalling, which was not supported by these load balancers at the time of writing. This was solved by using DNS for load sharing and resiliency (RFC 3263). Amazon provides a DNS service called Route 53, which served the purpose.
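RFC 3263 load sharing boils down to publishing SRV records and letting SIP clients pick a server by priority and weight. A minimal sketch of the client-side selection order, simplified to a deterministic sort rather than RFC 2782's weighted-random pick within each priority, with made-up hostnames:

```python
def srv_order(records):
    """Order SRV records for server selection: lowest priority first;
    within the same priority, highest weight first.

    This is a simplification -- RFC 2782 mandates a weighted-random
    selection within a priority, which is what spreads load across
    equally-preferred servers.

    Each record is a (priority, weight, port, target) tuple, as returned
    by a DNS SRV query for e.g. _sip._udp.example.com.
    """
    return sorted(records, key=lambda r: (r[0], -r[1]))

# Two Fargate tasks sharing priority 10, plus a lower-preference backup:
records = [
    (20, 0, 5060, "backup.example.com."),
    (10, 60, 5060, "task-a.example.com."),
    (10, 40, 5060, "task-b.example.com."),
]
for prio, weight, port, target in srv_order(records):
    print(prio, target)
```

The same records also give resiliency for free: if the first target fails, a compliant client simply tries the next one in the ordered list.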

Update: AWS network load balancers now support UDP. I have not had the opportunity to investigate their applicability to SIP. Another factor to consider when moving to AWS network load balancers is how they handle TLS certificates. The solution proposed in this article is proven to work for both UDP and TLS. If you have experimented with SIP and network load balancers, for the new UDP support as well as TLS, please share your experiences by commenting on the article.

Amazon's monitoring service, CloudWatch, can be configured with rules that trigger certain activities based on log events. In this case, events from Fargate indicating that a task has been started or stopped were used to trigger a Lambda function, which updates the DNS records for the SIP domain with the IP addresses of the running SIP server containers.
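The glue described above might look roughly like this. The record name, hosted zone ID, and IP addresses are illustrative assumptions; only the change-batch shape follows the Route 53 `ChangeResourceRecordSets` API:

```python
def build_dns_change(task_ips, record_name="sip.example.com.", ttl=60):
    """Build a Route 53 change batch that UPSERTs one A record per
    running SIP container, so DNS always reflects the live task set.

    A short TTL matters here: it bounds how long clients keep resolving
    to a container that has been stopped.
    """
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip} for ip in sorted(task_ips)],
            },
        }]
    }

# A Lambda function triggered by ECS task-state-change events would gather
# the IPs of RUNNING tasks and push the batch (needs AWS credentials, so
# the boto3 call is shown but not executed):
#
#   import boto3
#   def handler(event, context):
#       ips = current_task_ips()          # hypothetical helper, e.g. via
#                                         # ecs.list_tasks / describe_tasks
#       boto3.client("route53").change_resource_record_sets(
#           HostedZoneId="ZEXAMPLE123",   # hypothetical zone id
#           ChangeBatch=build_dns_change(ips),
#       )

batch = build_dns_change({"203.0.113.5", "203.0.113.6"})
print(batch["Changes"][0]["ResourceRecordSet"]["ResourceRecords"])
```

Using UPSERT keeps the Lambda idempotent: replaying the same task event simply rewrites the record set to the same value.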

Using these building blocks, it was possible to create a truly serverless SIP infrastructure. The next step will be to explore the auto-scaling functions of Fargate, as well as to set up some incredible load test scenarios you could only dream of if you were running the servers in a traditional environment... Why? Because you can!

Hi Jorgen, I found this post very interesting! I would love to see the GitHub repository to explore in more detail.


AWS Network Load Balancer now supports UDP. I have not had time to investigate its applicability to SIP. Another complicating factor is the use of TLS certificates for SIP. The Route 53 approach has been proven to serve almost 40 million SIP requests over the last 12 months, so I see no urgency to change what is working. But if I find a reason, I will give it a try. For example, I gave up my own service discovery built on Route 53 and Lambda when I was about to fix a bug in that implementation, and moved to Cloud Map instead. If anyone from AWS reads this, please launch it for the Stockholm region as well!


Do the latest Application Load Balancer or Network Load Balancer work well for load balancing SIP traffic? As in, can we not use them to route traffic to EC2 instances with Asterisk servers installed on them?


There are unfortunately quite a few reasons why the LB doesn't work for VoIP: VoIP requires UDP, tying a load balancer to autoscaling is more difficult because of VoIP's session model, and SIP sessions have quite a few more specifics beyond what HTTP can do.

Interesting approach. I'm curious as to which SIP server you're using? What is the performance like? We've had issues with VoIP on EC2 caused by CPU contention, but does Fargate scale quickly enough to handle the load?


More articles by Jorgen Bjorkner
