Shrinking C++ Lambda Deployments: A Layer-Based Packaging Strategy for Custom Runtimes
When you build AWS Lambda functions in C++ using the custom runtime (provided.al2), every deployment zip ships the compiled binary alongside every shared library it depends on — the AWS SDK, libcurl, zlib, libstdc++, and dozens of transitive system libraries. For a single function, the ~50MB zip is tolerable. When you're running twenty or thirty microservices built from the same codebase and Docker image, you're deploying the same 50MB of identical libraries thirty times over.
This article describes a two-part packaging strategy that splits Lambda deployment into a shared layer (common libraries, deployed once) and a per-service package (binary + delta libraries, typically a few hundred KB). The result is faster deployments, lower S3 storage costs, and a cleaner separation between infrastructure and application code.
The Problem
The standard AWS Lambda C++ packaging approach uses the aws_lambda_package_target CMake function from the aws-lambda-cpp runtime. It runs ldd against your binary, copies every shared library dependency into a zip alongside a bootstrap script, and produces a self-contained deployment artifact.
This works well for a single function. But in a microservice architecture where every service links against the same AWS SDK, the same serialization libraries, and the same system libraries — all built from the same Docker image — the overlap is nearly 100%. Each service's zip is ~50MB, of which ~49.5MB is identical across all services.
The deployment cost adds up: every deploy uploads ~50MB per service, a full rollout of twenty services pushes roughly a gigabyte through CI, and each retained function version adds the same again to S3 storage.
The Solution: Layer + Delta Packaging
Lambda layers are zip archives that Lambda extracts to /opt before your function runs. A function can reference up to five layers. The key insight: if the shared libraries live in a layer, the per-service zip only needs to contain the binary and any libraries unique to that service.
This entire strategy relies on one critical invariant: every service is built inside the same Docker image. The layer is built from that image, and every service is built from that image. Because ldd resolves the same shared libraries in the same paths with the same versions every time, the manifest is deterministic — a library filename in the layer is guaranteed to match the library a service links against. If different services were built with different compilers, different SDK versions, or different system packages, the filenames might match but the binaries wouldn't, and you'd get runtime crashes.
In our case, all builds — CI/CD and local development — use the same Docker image (public.ecr.aws/q6u2b1h2/switched-on-systems/code-catalyst/cpp:latest). This is enforced by the CodeCatalyst workflow configuration and the CLion Docker toolchain. The Docker image pins the compiler version, the AWS SDK version, and every system library. When the image is updated, the layer is rebuilt, the manifest changes, and the diff is visible in the pull request.
The strategy has two parts: build a shared layer containing the common libraries once (Part 1), and package each service with only its binary and delta libraries (Part 2).
Part 1: Build the Layer
A reference binary links every library that services commonly depend on. It's never deployed — it exists solely so ldd can discover the full transitive dependency tree:
// packaging/layer_reference.cpp
// Note: the includes alone create no link-time references. If the
// toolchain links with --as-needed (the default on some distros),
// unreferenced libraries are dropped from the binary and ldd will not
// see them — either disable --as-needed or reference a symbol from
// each library here.
#include <aws/lambda-runtime/runtime.h>
#include <aws/core/Aws.h>
#include <aws/s3/S3Client.h>
#include <aws/eventbridge/EventBridgeClient.h>
#include <zlib.h>
#include <curl/curl.h>
int main() { return 0; }
A shell script runs ldd against this binary, copies every shared library into a zip, and writes a manifest — a sorted list of library filenames:
# Collect all shared library dependencies
for lib in $(ldd "$REF_BINARY" | awk '{print $(NF-1)}'); do
    [ ! -f "$lib" ] && continue            # skip vdso / unresolved entries
    filename=$(basename "$lib")
    [[ "$filename" == ld-* ]] && continue  # skip the dynamic loader
    cp "$lib" "$PKG_DIR/lib/"
    MANIFEST="$MANIFEST$filename"$'\n'
done
echo -n "$MANIFEST" | sort > "$OUTPUT_DIR/layer-libs.txt"
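To see what the parsing step does, here is a self-contained sketch that exercises the same awk expression and filters against synthetic ldd output (the filenames and paths are hypothetical, not from a real binary):

```shell
#!/bin/bash
# Synthetic ldd output: a vdso entry (no real path), a normal shared
# library, and the dynamic loader — only libfoo should reach the manifest.
ldd_output='linux-vdso.so.1 (0x00007ffd1a3f2000)
	libfoo.so.2 => /tmp/manifest-demo/libfoo.so.2 (0x00007f32a4c00000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f32a4e00000)'

mkdir -p /tmp/manifest-demo && touch /tmp/manifest-demo/libfoo.so.2

MANIFEST=""
for lib in $(echo "$ldd_output" | awk '{print $(NF-1)}'); do
    [ ! -f "$lib" ] && continue            # vdso line yields a non-path token
    filename=$(basename "$lib")
    [[ "$filename" == ld-* ]] && continue  # skip the dynamic loader
    MANIFEST="$MANIFEST$filename"$'\n'
done
echo -n "$MANIFEST" | sort
```

Running it prints only `libfoo.so.2`: the vdso token fails the `-f` test, and the loader is caught by the `ld-*` filter on its basename.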
The output is two files: the layer zip itself, with every collected library under lib/, and layer-libs.txt, the sorted manifest of library filenames the layer contains.
The manifest is the contract between the layer and the service packager. It's deterministic — the same Docker image always produces the same manifest — and it's reviewable in pull requests when the build image changes.
A CMake function wraps this into a build target:
function(cloud_acute_lambda_layer)
    add_executable(cloud-acute-layer-reference
        ${CLOUD_ACUTE_PACKAGING_DIR}/layer_reference.cpp)
    target_link_libraries(cloud-acute-layer-reference PRIVATE
        cloud-acute-service cloud-acute-utility-aws cloud-acute-logging
        AWS::aws-lambda-runtime ${AWSSDK_LINK_LIBRARIES} ZLIB::ZLIB)
    add_custom_target(cloud-acute-lambda-layer
        COMMAND ${CLOUD_ACUTE_PACKAGING_DIR}/layer_packager
            $<TARGET_FILE:cloud-acute-layer-reference>
            ${CMAKE_CURRENT_SOURCE_DIR}/layer
        DEPENDS cloud-acute-layer-reference)
endfunction()
Build it with:
cmake --build build --target cloud-acute-lambda-layer
Part 2: Package the Service
Each service uses a different packager that reads the layer manifest and excludes any library already in the layer:
# Read layer manifest into an associative array
declare -A LAYER_LIBS
while IFS= read -r lib; do
    [ -n "$lib" ] && LAYER_LIBS["$lib"]=1
done < "$LAYER_MANIFEST"

# Collect only libraries NOT in the layer
for lib in $(ldd "$PKG_BIN_PATH" | awk '{print $(NF-1)}'); do
    [ ! -f "$lib" ] && continue
    filename=$(basename "$lib")
    [[ "$filename" == ld-* ]] && continue
    if [[ -v "LAYER_LIBS[$filename]" ]]; then
        EXCLUDED=$((EXCLUDED + 1))
        continue
    fi
    cp "$lib" "$PKG_DIR/lib/"
    INCLUDED=$((INCLUDED + 1))
done
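The exclusion logic can be exercised without a real binary or layer. This sketch (hypothetical library names; requires bash 4.3+ for the `-v` array test, like the original) feeds a synthetic manifest and dependency list through the same associative-array lookup:

```shell
#!/bin/bash
# Hypothetical layer manifest: what the layer already ships.
printf 'libaws-cpp-sdk-core.so\nlibz.so.1\n' > /tmp/layer-libs-demo.txt

declare -A LAYER_LIBS
while IFS= read -r lib; do
    [ -n "$lib" ] && LAYER_LIBS["$lib"]=1
done < /tmp/layer-libs-demo.txt

# Synthetic service dependencies: two covered by the layer, one delta.
INCLUDED=0 EXCLUDED=0 DELTA=""
for filename in libaws-cpp-sdk-core.so libz.so.1 libmyservice-dto.so; do
    if [[ -v "LAYER_LIBS[$filename]" ]]; then
        EXCLUDED=$((EXCLUDED + 1))   # already in the layer — skip
        continue
    fi
    DELTA="$DELTA$filename"$'\n'     # would be copied into the zip
    INCLUDED=$((INCLUDED + 1))
done
echo "included=$INCLUDED excluded=$EXCLUDED"
```

Only libmyservice-dto.so survives into the delta; the two layer-provided libraries are counted and skipped.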
The bootstrap script sets LD_LIBRARY_PATH to include both the layer's /opt/lib and the function's own $LAMBDA_TASK_ROOT/lib:
#!/bin/bash
set -euo pipefail
export AWS_EXECUTION_ENV=lambda-cpp
export LD_LIBRARY_PATH=$LAMBDA_TASK_ROOT/lib:/opt/lib:${LD_LIBRARY_PATH:-}
exec $LAMBDA_TASK_ROOT/bin/my-service ${_HANDLER}
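A cheap pre-deploy sanity check follows from the same rule: every library the binary needs must be resolvable from one of the two directories on that search path. A minimal sketch with synthetic basename lists (in practice these would come from ldd and the two zips):

```shell
#!/bin/bash
# Every needed library must be present either in the function package
# or in the layer, since LD_LIBRARY_PATH searches both at runtime.
needed='libmyservice-dto.so libz.so.1 libcurl.so.4'  # hypothetical ldd basenames
layer='libz.so.1 libcurl.so.4'                       # from layer-libs.txt
pkg='libmyservice-dto.so'                            # from the service zip's lib/

MISSING=0
for lib in $needed; do
    case " $layer $pkg " in
        *" $lib "*) ;;                               # resolvable at runtime
        *) echo "unresolved: $lib"; MISSING=$((MISSING + 1)) ;;
    esac
done
echo "missing=$MISSING"
```

If this reports anything missing, the function would crash at cold start with a dynamic-linker error rather than a useful log line, so it is worth failing the pipeline here instead.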
In a consumer project's CMakeLists.txt:
include(cmake/cloud-acute-packaging.cmake)
add_executable(my-service src/main.cpp)
target_link_libraries(my-service PRIVATE cloud-acute-service cloud-acute-utility-aws cloud-acute-logging)
cloud_acute_lambda_package(my-service)
Build with:
cmake --build build --target cloud-acute-package-my-service
# Output: my-service.zip (typically 200-500KB)
Deployment
The layer is built once from the service library project's CI pipeline, triggered when the main branch is updated. The workflow builds the layer zip, uploads it to S3, and deploys a CloudFormation template that publishes the layer version and exports its ARN:
Resources:
  RuntimeLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: cloud-acute-runtime
      Content:
        S3Bucket: !Ref S3Bucket
        S3Key: cloud-acute-lambda-layer.zip
      CompatibleRuntimes:
        - provided.al2

Outputs:
  LayerArn:
    Value: !Ref RuntimeLayer
    Export:
      Name: cloud-acute-runtime-layer-arn
Consumer services reference the exported ARN in their SAM templates:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: provided.al2
      CodeUri: my-service.zip
      Layers:
        - !ImportValue cloud-acute-runtime-layer-arn
For multi-account deployments, the same CloudFormation template is deployed to each account — either via additional CI pipeline stages or via CloudFormation StackSets across an AWS Organization.
When Does the Layer Change?
Rarely. The layer contains system libraries and the AWS SDK — these only change when the AWS SDK version is upgraded, or when the build Docker image is updated with a new compiler or new system packages.
Both are deliberate, reviewable changes. The layer-libs.txt manifest is committed to version control, so any change shows up as a diff in the pull request. If the manifest changes, the layer needs to be rebuilt and redeployed before services that depend on the new libraries can be deployed.
Service code changes — new endpoints, business logic, DTOs — never affect the layer. Only the per-service zip changes, and it's a few hundred KB.
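One way to enforce the rebuild-before-deploy ordering in CI is a manifest drift check: regenerate the manifest from the current build image and fail fast if it no longer matches the committed copy. A minimal sketch with hypothetical paths and filenames:

```shell
#!/bin/bash
# Compare the committed manifest against a freshly generated one;
# a non-empty diff means the layer must be rebuilt and redeployed
# before any service deployment proceeds.
mkdir -p /tmp/drift-demo
committed=/tmp/drift-demo/committed-layer-libs.txt
fresh=/tmp/drift-demo/fresh-layer-libs.txt
printf 'libcurl.so.4\nlibz.so.1\n' > "$committed"
printf 'libcurl.so.4\nlibssl.so.3\nlibz.so.1\n' > "$fresh"  # SDK update added libssl

if diff -u "$committed" "$fresh" > /tmp/drift-demo/manifest.diff; then
    echo "manifest unchanged: safe to deploy services"
    DRIFT=0
else
    echo "manifest drift detected: rebuild the layer first"
    DRIFT=1
fi
```

In a real pipeline the `fresh` file would come from rerunning the layer packager inside the current build image, and a nonzero `DRIFT` would fail the service deployment stage.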
The Numbers
For a typical microservice architecture with 20 C++ Lambda functions:
Metric                                 Without layer   With layer
Per-service zip                        ~50MB           ~300KB
Total deployment size (20 services)    ~1GB            ~6MB + 50MB layer
S3 storage per version                 ~1GB            ~56MB
Upload time per service                ~10s            <1s
Layer deployments                      —               Once per SDK update
The layer is deployed perhaps once a quarter when the build image is updated. The 20 service deployments happen daily and are now nearly instant.
Trade-offs
The approach is not free. Layer and service deployments become ordered: when the manifest changes, the layer must be published before any service that needs the new libraries can ship. And because every service resolves libraries from the same layer, library versions move in lockstep — you cannot upgrade the SDK for one service without rebuilding all of them from the updated image.
Summary
The core idea is simple: identify the libraries that are identical across all services, package them once as a layer, and deploy each service with only its unique binary. The manifest file is the contract that keeps the two in sync. The CMake functions and shell scripts automate the entire process — developers just call cloud_acute_lambda_package(my-service) and get a deployment-ready zip that's two orders of magnitude smaller than the monolithic alternative.
#AWS #Lambda #CD #CPP #CMAKE