AWS Lambda: Cold Starts!

If you want to improve time-to-market, cost efficiency, flexibility, and scalability, along with better resource utilization, serverless solutions like AWS Lambda or Azure Functions are good options to try, depending on the cloud platform you are using. Not so far down memory lane, I got a chance to review a system that was using AWS Lambda to its optimum.

The team there was facing some issues; one of them was frequent cold starts. For those of you who are new to serverless programming or the AWS Lambda world, a cold start happens when no warm instance of a Lambda function is available, typically because existing instances have been terminated, and a new one has to be initialized before the request can be served. The whole idea of serverless solutions (technically) is to have a lightweight piece of code that can be executed on demand, without you having to spend days and nights managing these instances. Instances can get terminated for quite a number of reasons. One reason I have come across is prolonged inactivity of a function, making it stale and eventually resulting in its termination. Other reasons include resource constraints, which are often the result of mistakes in resource allocation or missteps in calculating the resource floor. It is also worth noting, and may be deemed common sense, that if a function's configuration or code is changed frequently, you can certainly expect more cold starts.
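You can see this lifecycle for yourself from inside a function: module-level code runs once per execution environment, so a module-level flag flips exactly on cold starts. Here is a minimal sketch; the handler name and the return shape are my own, not from any particular codebase:

```python
import time

# Module-level code runs once per execution environment,
# i.e. exactly once per cold start.
_init_started = time.time()
_cold = True

def handler(event, context):
    global _cold
    was_cold = _cold
    _cold = False  # every later invocation on this instance is "warm"
    return {
        "cold_start": was_cold,
        # Rough age of this execution environment, in seconds.
        "instance_age_s": round(time.time() - _init_started, 3),
    }
```

Logging `cold_start` like this (or watching the `Init Duration` field that Lambda itself writes to CloudWatch Logs) is a cheap way to measure how often cold starts actually hit you before reaching for a mitigation.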

Why are cold starts a problem? With a cold start, a new instance of the function needs to be created, and the initialization work of spinning it up and loading the function's code and dependencies can take some time, resulting in delays in processing requests.
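The delay comes from work done before the handler ever runs. A small sketch makes that visible: here `slow_dependency_load` is a hypothetical stand-in for importing a heavy SDK or loading a model file at module scope, so its cost is paid during every cold start but skipped on warm invocations:

```python
import time

def slow_dependency_load():
    # Hypothetical stand-in for a heavy import or a model/config load.
    time.sleep(0.2)
    return object()

# Eager, module-level init: this cost is part of every cold start.
_start = time.time()
_client = slow_dependency_load()
init_cost = time.time() - _start

def handler(event, context):
    # Warm invocations reuse _client and never pay the load again.
    return {"init_cost_s": round(init_cost, 3)}
```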

How do we mitigate cold starts? There are quite a few ways:

- Provisioned concurrency: always keep a set number of instances in warm condition, so they are ready to service requests at any time. This solution makes me think, though: if you need to provision the concurrency anyway, aren't you better off using some other solution, maybe Kafka? (Yes, it's a pain to set up Kafka if you do not want to use managed clusters; I know you are jumping out of your seat now, as this is a different dimension completely.)

- Keep the function warm/awake/alive: one way of doing this is by using CloudWatch Events to schedule regular invocations of the function.

- Keep the function's initialization lean and light: how to do this goes to the developers and the architecture team, but having fewer dependencies, or smaller, leaner functions that do not need heavy initialization procedures, is the way to go for most use cases.

- Increase the memory allocation: some may call it a lazy way to handle the situation, but it may be one of the first fixes you reach for in a production environment. Higher memory comes with proportionally more CPU power, which can reduce the function's initialization time and in turn the impact of cold starts.

- Put an Application Load Balancer or API Gateway in front of your Lambda, as it can handle the initial request and the connection to the Lambda function.
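For the keep-warm approach, the handler should recognize the scheduled ping and return immediately so the warm-up invocations stay cheap. A minimal sketch, assuming the pings come from a CloudWatch Events / EventBridge scheduled rule (such events carry `"source": "aws.events"`); the response bodies here are placeholders of my own:

```python
def handler(event, context):
    # Scheduled keep-warm pings from a CloudWatch Events / EventBridge
    # rule arrive with "source": "aws.events". Short-circuit them so the
    # warm-up invocation does no real work.
    if event.get("source") == "aws.events":
        return {"warmed": True}

    # Normal request path (placeholder body).
    return {"statusCode": 200, "body": "handled real request"}
```

The scheduling side is just a rule with a rate expression (e.g. every 5 minutes) targeting the function; note this keeps only as many instances warm as the rule invokes concurrently.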

I do not claim that the above laundry list is complete, and I am more than happy to hear what you have to say about your own experience with cold starts.

By Vikrant Kulkarni
