Simple Event-Driven Architecture Using S3, SQS, Lambda and Python
You've probably heard the term "event driven" thrown around from time to time. But what exactly does it mean? Event-driven architecture (EDA) is a software pattern in which decoupled services communicate and interact through events. At its base, it is made up of producers, brokers, and consumers: producers create events, brokers facilitate the delivery of messages between producers and consumers, and consumers process the messages.
The advantages of this pattern are decoupling, scalability and flexibility. For our example, we will build a simple project that leverages S3 as the event producer, SQS as the broker, and a Python Lambda function as the consumer.
In more complex setups, we can leverage a fan-out strategy using SNS. For example, in an e-commerce website, an order service can publish a message to an SNS topic, which then delivers the message to multiple SQS queues, each handling email/SMS notifications, inventory updates, payment processing and so on. Each queue can be handled by a different service (Lambda function), ensuring independence and concurrency.
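If you're curious how that fan-out could look in code, here's a minimal boto3 sketch. The topic name, queue ARNs and region are made-up placeholders, and it assumes each queue's access policy already allows SNS to send messages to it:

import json
import boto3

REGION = "us-east-1"  # placeholder region
sns = boto3.client("sns", region_name=REGION)

# One topic for new orders, fanned out to several queues.
topic_arn = sns.create_topic(Name="orders")["TopicArn"]

queue_arns = [
    "arn:aws:sqs:us-east-1:123456789012:email-notifications",
    "arn:aws:sqs:us-east-1:123456789012:inventory-updates",
    "arn:aws:sqs:us-east-1:123456789012:payment-processing",
]

# Subscribe each queue to the topic (each queue's policy must allow SNS to SendMessage).
for arn in queue_arns:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=arn)

# The order service publishes once; SNS copies the message to every subscribed queue.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"orderId": "1234", "total": 59.99}))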
Step by Step Approach
Ensure you have an AWS account and have set up the AWS CLI.
Create an S3 Bucket
In your terminal, run this command, replacing <bucket-name-of-your-choice> with your bucket name and <your-region> with the actual region:
aws s3 mb s3://<bucket-name-of-your-choice> --region <your-region>
Confirm that the bucket was created from your console, or run:
aws s3api head-bucket --bucket <bucket-name-of-your-choice>
Create an SQS Queue
Double-check the region in which you are creating the queue, or simply pass a --region flag. This command creates a standard queue. Replace <queue-name-of-your-choice> with the queue name of your choice and <your-region> with your region.
aws sqs create-queue --queue-name <queue-name-of-your-choice> --region <your-region>
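The next steps will need the queue's URL and ARN. The create-queue command returns the QueueUrl; if you'd rather look both up from Python, a small boto3 sketch (queue name and region are placeholders) would be:

import boto3

sqs = boto3.client("sqs", region_name="<your-region>")

# Look up the queue URL from its name, then ask for its ARN.
queue_url = sqs.get_queue_url(QueueName="<queue-name-of-your-choice>")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]
print(queue_url, queue_arn)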
Update the SQS Queue Policy
Now that we have our S3 bucket and SQS queue configured, we need to allow S3 to publish messages to SQS. For that, we need to update our SQS queue policy. In your current working directory, create a JSON file, sqs-policy.json, with the following content, replacing <YOUR_SQS_QUEUE_ARN>, <YOUR_S3_BUCKET_NAME> and <YOUR_ACCOUNT_ID> with your correct values:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ToSendMessage",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "SQS:SendMessage",
      "Resource": "<YOUR_SQS_QUEUE_ARN>",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>"
        },
        "StringEquals": {
          "aws:SourceAccount": "<YOUR_ACCOUNT_ID>"
        }
      }
    }
  ]
}
Once this is done, run the command below, replacing <YOUR_SQS_QUEUE_URL> with the QueueUrl that was returned when you created your queue. Also ensure that the user/role running the command has permission to set queue attributes and that the regions match.
aws sqs set-queue-attributes --region <your-region> --queue-url <YOUR_SQS_QUEUE_URL> --attributes Policy="$(cat sqs-policy.json | jq -c . | jq -R .)"
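The shell quoting with jq can be fiddly, so as an alternative sketch you can attach the same policy from Python with boto3 (queue URL and region are placeholders):

import json
import boto3

sqs = boto3.client("sqs", region_name="<your-region>")

# Read the policy document and attach it to the queue as a JSON string.
with open("sqs-policy.json") as f:
    policy = json.load(f)

sqs.set_queue_attributes(
    QueueUrl="<YOUR_SQS_QUEUE_URL>",
    Attributes={"Policy": json.dumps(policy)},
)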
Configure S3 Event Notifications
We now configure the S3 bucket to send s3:ObjectCreated:* events to the SQS queue. Create an s3-notification.json file and add the following, replacing <YOUR_SQS_QUEUE_ARN> with the actual SQS queue ARN.
{
  "QueueConfigurations": [
    {
      "Id": "SendImagesToSQS",
      "QueueArn": "<YOUR_SQS_QUEUE_ARN>",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
Then run this command replacing <YOUR_S3_BUCKET_NAME> with your actual bucket name and <your-region> with your region.
aws s3api put-bucket-notification-configuration --region <your-region> --bucket <YOUR_S3_BUCKET_NAME> --notification-configuration file://s3-notification.json
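If you prefer doing this step from Python, a rough boto3 equivalent (bucket name, queue ARN and region are placeholders) looks like this:

import boto3

s3 = boto3.client("s3", region_name="<your-region>")

# Ask S3 to send all ObjectCreated events for this bucket to the queue.
s3.put_bucket_notification_configuration(
    Bucket="<YOUR_S3_BUCKET_NAME>",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "Id": "SendImagesToSQS",
                "QueueArn": "<YOUR_SQS_QUEUE_ARN>",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)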
Confirm the changes from S3 > Bucket properties > Event notifications; you should see SendImagesToSQS as the event name.
Testing
Now, go to your S3 bucket and upload an image, or use the following command, replacing <path-to-image> with the image path and <s3-bucket> with the bucket name:
aws s3 cp <path-to-image> s3://<s3-bucket>/<image-name-with-extension, e.g. image.png>
Then open your SQS queue and switch to the Monitoring tab; you should see that the message was delivered.
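You can also peek at the queue from Python to confirm the notification arrived. A small boto3 sketch (queue URL and region are placeholders; note that receiving a message hides it for the visibility timeout but does not delete it):

import boto3

sqs = boto3.client("sqs", region_name="<your-region>")

# Pull at most one message off the queue and print its body (the S3 event JSON).
resp = sqs.receive_message(
    QueueUrl="<YOUR_SQS_QUEUE_URL>",
    MaxNumberOfMessages=1,
    WaitTimeSeconds=5,
)
for message in resp.get("Messages", []):
    print(message["Body"])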
Creating the Lambda Execution Role
Before creating our Lambda function, we need to create a Lambda execution role that will allow the function to access both the S3 bucket and SQS messages.
In your terminal, create a trust-policy.json document with the following contents:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Next, run the following command to create the role, replacing <your-role-name> with your role name:
aws iam create-role --role-name <your-role-name> --assume-role-policy-document file://trust-policy.json
You should get a JSON response signaling that the role was created successfully. Go to the IAM console under Roles and you should see it listed there.
Now we need to attach policies to our role granting it access to SQS and S3. Run the following commands, replacing <your-role-name> with the role name you created above:
aws iam attach-role-policy --role-name <your-role-name> --policy-arn arn:aws:iam::aws:policy/AWSLambdaBasicExecutionRole
aws iam attach-role-policy --role-name <your-role-name> --policy-arn arn:aws:iam::aws:policy/AmazonSQSFullAccess
aws iam attach-role-policy --role-name <your-role-name> --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
To confirm, open your role and under Permissions policies you should see these three policies attached.
Creating the Lambda Function
This will be a straightforward approach: we just want to see our Lambda function get triggered and print out the details of the object added to our S3 bucket.
In your terminal, create an index.py file with the following contents:
import json

def handler(event, context):
    # Each SQS record's body is the S3 event notification, serialized as JSON.
    s3_notification = json.loads(event["Records"][0]["body"])
    # Pull out the "s3" section, which holds the bucket and object details.
    s3_object_data = s3_notification["Records"][0]["s3"]
    print(s3_object_data)
    return {"statusCode": 200}
This will print out the bucket and object details every time it is triggered.
Now let's create a new Lambda function and upload our code to it. Make sure to zip the index.py file first. Run the following in the terminal, replacing <my-lambda-function> with the function name, <lambda-execution-role-arn> with the role ARN, and <your-region> with your region:
aws lambda create-function --function-name <my-lambda-function> --zip-file fileb://index.py.zip --handler index.handler --runtime python3.14 --role <lambda-execution-role-arn> --region <your-region>
Go to the dashboard under Lambda functions and you should see this new function created.
Add Trigger To Lambda
To use this effectively, we need to add a trigger so that our Lambda function is invoked by SQS messages. Run the following, replacing <lambda-function-name> with your Lambda function name, <queue-arn> with the SQS queue ARN, and <region> with your region.
aws lambda create-event-source-mapping --function-name <lambda-function-name> --batch-size 1 --event-source-arn <queue-arn> --region <region>
You should get a JSON response signifying that this was successful. Refresh your dashboard and you should see an SQS trigger added.
Testing
Make sure to deploy your Lambda function from the Code tab on the dashboard.
Now upload an image to your S3 bucket and watch the Monitor tab of your Lambda dashboard. To see the logs, go to the CloudWatch log group where the function's logs are stored. You should see something like this:
{'s3SchemaVersion': '1.0', 'configurationId': 'SendImagesToSQS', 'bucket': {'name': 'kyole-event-driven-bucket-test', 'ownerIdentity': {'principalId': 'A1GSKM8KEHYOCJ'}, 'arn': 'arn:aws:s3:::kyole-event-driven-bucket-test'}, 'object': {'key': 'Screenshot+2025-12-02+at+22.37.32.png', 'size': 94240, 'eTag': 'd96a08a93e663b7391ec527c3971d796', 'sequencer': '00692FB53BA4DAB84B'}}
Cleanup and Conclusion
After you've tested and confirmed everything is working as expected, go ahead and delete all the resources and configuration we've set up so as not to incur unnecessary costs.
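Here is a rough boto3 cleanup sketch; the function name, queue URL, bucket name, role name and region are placeholders, so double-check each deletion against your own setup before running it:

import boto3

REGION = "<your-region>"
lambda_client = boto3.client("lambda", region_name=REGION)
sqs = boto3.client("sqs", region_name=REGION)
s3 = boto3.resource("s3", region_name=REGION)
iam = boto3.client("iam")

# Remove the SQS trigger(s) and the function itself.
mappings = lambda_client.list_event_source_mappings(FunctionName="<my-lambda-function>")
for mapping in mappings["EventSourceMappings"]:
    lambda_client.delete_event_source_mapping(UUID=mapping["UUID"])
lambda_client.delete_function(FunctionName="<my-lambda-function>")

# Delete the queue.
sqs.delete_queue(QueueUrl="<YOUR_SQS_QUEUE_URL>")

# Empty and delete the bucket.
bucket = s3.Bucket("<YOUR_S3_BUCKET_NAME>")
bucket.objects.all().delete()
bucket.delete()

# Detach the managed policies and delete the execution role.
for policy_arn in [
    "arn:aws:iam::aws:policy/AWSLambdaBasicExecutionRole",
    "arn:aws:iam::aws:policy/AmazonSQSFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
]:
    iam.detach_role_policy(RoleName="<your-role-name>", PolicyArn=policy_arn)
iam.delete_role(RoleName="<your-role-name>")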
SQS's default message retention period is 4 days and can be configured up to 14 days. This means that if the consumer is down, messages are not lost; when the consumer comes back online, the Lambda event source mapping polls the queue and invokes the function with the messages. When the function finishes successfully, Lambda deletes the messages from the queue.
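If you want the longer retention, the period can be raised to 14 days (1,209,600 seconds). A small boto3 sketch, with the queue URL and region as placeholders:

import boto3

sqs = boto3.client("sqs", region_name="<your-region>")

# MessageRetentionPeriod is given in seconds; 1209600 s = 14 days.
sqs.set_queue_attributes(
    QueueUrl="<YOUR_SQS_QUEUE_URL>",
    Attributes={"MessageRetentionPeriod": "1209600"},
)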
To simulate a failure, rename the file from index.py to handler.py and deploy the changes. Then go ahead and upload an image to S3. In CloudWatch, you'll keep getting new logs of Lambda failing to process the messages from SQS, because the handler setting index.handler still expects a module named index. Every time an invocation fails, the message becomes visible in the queue again so it can be retried.
NB: use least-privilege policies instead of FullAccess policies in production.
Best
Kyole