When an event triggers your function, Lambda does not just run your code. First, it creates a secure, isolated execution environment that serves as your function's temporary home. Lambda uses your configuration (memory, timeout, etc.) to optimise this setup, and the whole process is neatly split into three phases: Init, Invoke, and Shutdown.

🔹 Phase 1: Initialisation (Init)
This is where the journey begins (expect the dreaded "cold start" on the first request). Lambda creates or "unfreezes" a secure execution environment, then downloads your function's code and any layers it needs.
→ Extension Init: Companion extensions like monitoring or security agents start first. They run their setup tasks, ensuring the full ecosystem is ready.
→ Runtime Init: The runtime (e.g., Node.js, Python, Java) is bootstrapped. It is the engine that will eventually execute your code.
→ Function Init: Finally, your function's static code is run: anything outside the main handler method is executed. This is where the heavy lifting happens, like initializing SDK clients or creating database connections once, saving crucial milliseconds on subsequent calls.

This meticulous initialisation ensures everything is perfectly set up before your core logic starts.

🔹 Phase 2: Invocation (Invoke)
→ Function Call: Lambda invokes your function's handler method with the event data.
→ Execution: Your code runs its intended purpose, whether that is processing data, calling APIs, or whatever business logic it holds.
→ The Wait: Once your function completes, Lambda doesn't immediately scrap the environment. Instead, it freezes the environment, keeping it "warm" and ready. If another request for the same function arrives shortly after, Lambda skips the entire Init phase and jumps straight back into Invoke, providing near-instantaneous response times and better performance.
🔹 Phase 3: Shutdown
When the execution environment sits idle, receiving no further requests for a period (the exact duration is up to AWS), the Shutdown phase begins.
→ Runtime Shutdown: Lambda gracefully shuts down the function runtime.
→ Extension Shutdown: Lambda sends any running extensions a final shutdown event, giving them a brief window (typically up to two seconds) to stop cleanly, flush logs, or clean up any remaining data before the environment is permanently terminated.

📌 Understanding this three-phase execution lifecycle is key to writing high-performance, cost-efficient Lambda functions.

❓ Is it just me who is always curious about what happens behind the scenes of every cloud service I use, or is anyone else interested in these very little details?
How Lambda Functions Work: Init, Invoke, Shutdown
🚀 **JSON: The Developer's Swiss Army Knife** From REST APIs to config files and database exports, JSON is the universal language of data exchange. As devs, mastering JSON parsing, manipulation, and generation is non-negotiable. 💡 **Why it matters:** - Simplifies data handling across platforms - Powers modern web services and tools - Essential for seamless integrations Whether you're tweaking configs or building APIs, JSON fluency keeps you agile. Time to level up your JSON game! #JSON #WebDevelopment #APIs #DeveloperTools #DataExchange
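As a quick refresher on that parse → manipulate → generate loop, here is a minimal sketch using Python's standard json module (the config snippet is made-up example data, not from any real service):

```python
import json

# Parse: a raw config snippet (hypothetical example data)
raw = '{"service": "billing", "retries": 3, "endpoints": ["us-east", "eu-west"]}'
config = json.loads(raw)

# Manipulate: bump the retry count and register a new endpoint
config["retries"] += 1
config["endpoints"].append("ap-south")

# Generate: serialize back with stable, readable formatting
out = json.dumps(config, indent=2, sort_keys=True)
print(out)
```

The same three steps apply whether you are tweaking a config file, shaping an API payload, or exporting database rows; only the source and destination change.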
#!/usr/bin/env bash
# quick_workspace_setup.sh
set -euo pipefail

ROOT="$HOME/ai_aug_workspace"
mkdir -p "$ROOT"/{models,datasets,projects,scripts,notebooks,results,logs}

# 1. Install core dependencies (Ubuntu/Debian)
sudo apt update
sudo apt install -y git curl wget python3 python3-pip python3-venv build-essential docker.io docker-compose jq unzip nodejs npm

# 2. Clone essential open-source AI/ML model repos
declare -A REPOS=(
  [llama.cpp]="https://lnkd.in/eRFJcGwq"
  [transformers]="https://lnkd.in/eczTeVVC"
  [stable-diffusion-webui]="https://lnkd.in/eDjx6t23"
  [whisper.cpp]="https://lnkd.in/e3zynTAn"
  [starcoder]="https://lnkd.in/enPD_Hea"
  [yolov5]="https://lnkd.in/emPNjMSu"
)
for name in "${!REPOS[@]}"; do
  DIR="$ROOT/models/$name"
  if [ ! -d "$DIR/.git" ]; then
    git clone --depth=1 "${REPOS[$name]}" "$DIR"
  fi
done

# 3. Example: Create a data transformer utility (Python)
mkdir -p "$ROOT/scripts"
cat > "$ROOT/scripts/data_transformer.py" <<'PY'
#!/usr/bin/env python3
# Simple data transformer: CSV to JSON
import csv, json, sys

if len(sys.argv) != 3:
    print("Usage: data_transformer.py in.csv out.json")
    sys.exit(1)

with open(sys.argv[1]) as f, open(sys.argv[2], 'w') as o:
    reader = csv.DictReader(f)
    json.dump(list(reader), o, indent=2)
PY
chmod +x "$ROOT/scripts/data_transformer.py"

# 4. Prepare Python virtual environment for notebooks and transformers
python3 -m venv "$ROOT/.venv"
source "$ROOT/.venv/bin/activate"
pip install --upgrade pip
pip install torch torchvision transformers jupyter pandas matplotlib

# 5. Example notebook scaffold
# (code cells need "execution_count" and "outputs" to satisfy nbformat 4)
cat > "$ROOT/notebooks/data_transform_example.ipynb" <<'NB'
{
  "cells": [
    {"cell_type": "markdown", "metadata": {}, "source": [
      "# Data Transformer Example\n",
      "Convert CSV to JSON and visualize with pandas."
    ]},
    {"cell_type": "code", "metadata": {}, "execution_count": null, "outputs": [], "source": [
      "import pandas as pd\n",
      "df = pd.read_csv('../datasets/example.csv')\n",
      "df.head()"
    ]}
  ],
  "metadata": {},
  "nbformat": 4,
  "nbformat_minor": 2
}
NB

# 6. Fetch basic datasets (MNIST/CIFAR-10 as examples)
cd "$ROOT/datasets"
wget -nc https://lnkd.in/eWcBU2JG
wget -nc https://lnkd.in/eUw44KgC

# 7. Print summary
echo "Workspace ready at $ROOT"
echo "Models: $ROOT/models"
echo "Datasets: $ROOT/datasets"
echo "Scripts: $ROOT/scripts"
echo "Notebooks: $ROOT/notebooks"
echo "Activate Python env: source $ROOT/.venv/bin/activate"
echo "Run Jupyter: cd $ROOT/notebooks && jupyter notebook"
💻 #Day489 (DSA) – LeetCode 3217: Delete Nodes From Linked List Present in Array 🧩

🧠 Story Time: Imagine you have a long train 🚆 (the linked list), and some passengers' IDs match those on a "remove list" 🚫 (the array nums). Your task as the inspector 🚔? 👉 Detach every passenger whose ID appears in nums, and let the remaining ones continue their journey peacefully 😌

🧩 Example:
Input: nums = [1,2,3], head = [1,2,3,4,5]
Output: [4,5]
Explanation: Nodes with values 1, 2, and 3 are removed. Remaining nodes → [4,5] ✅

💡 Intuition: We want to remove nodes whose values exist in nums. To do this efficiently:
1️⃣ Store all nums values in a HashSet → O(1) lookup
2️⃣ Traverse the linked list and skip nodes found in the set
3️⃣ Return the updated head 🧠⚡

💻 Java Solution:

import java.util.*;

class Solution {
    public ListNode modifiedList(int[] nums, ListNode head) {
        HashSet<Integer> set = new HashSet<>();
        for (int num : nums) set.add(num);

        ListNode dummy = new ListNode(0);
        dummy.next = head;
        ListNode curr = dummy;
        while (curr.next != null) {
            if (set.contains(curr.next.val)) {
                curr.next = curr.next.next;
            } else {
                curr = curr.next;
            }
        }
        return dummy.next;
    }
}

🕒 Time Complexity: O(n)
💾 Space Complexity: O(m), where m = size of nums

✨ Key Learnings:
🔹 Smart use of HashSet for quick lookups
🔹 Clean linked list traversal & node deletion
🔹 Strong understanding of the dummy node technique

🏷️ Tags: #LeetCode #DSA #ProblemSolving #Java #HashSet #LinkedList #CodingChallenge #DeveloperJourney #TechCommunity #CodingLife #SoftwareEngineering #KeepLearning

💬 Question for you: If nums were extremely large (millions of entries), 👉 how would you optimize space while still maintaining speed? 🤔

✨ Inspired by the problem-solving culture at: Google | Amazon | Microsoft | Infosys | Tata Consultancy Services | Wipro | Accenture | Flipkart | Zomato | Paytm | Tech Mahindra | LTIMindtree | Capgemini | IBM | Netflix | Swiggy | Airbnb 💻
Outcome: the entire DB can be migrated in just a few lines of code!

Learning: "Your business logic should depend on abstractions (interfaces), not concrete implementations."

This was one of the mistakes I used to make while writing Service classes in backend development: I used to think a Service class has a "Has-A" relationship with the Repository class, so I composed the concrete repository directly. Recently, however, I came across Hexagonal Architecture ("Ports and Adapters"), which enables massive decoupling in backend development and great isolation in testing.

The art of writing modular, reusable, and maintainable code, following the SOLID principles as much as possible, is what makes a good engineer better.

This is a small blog on writing modular and reusable REST APIs, from the basics to Hexagonal Architecture, with examples in Python. https://lnkd.in/gnyQMXV6

#BackendDevelopment #SoftwareArchitecture #Python #FastAPI #CleanCode #HexagonalArchitecture #PortsAndAdapters #REST
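The Ports and Adapters idea can be sketched in a few lines of Python (the class names UserRepository, InMemoryUserRepository, and UserService are illustrative, not taken from the linked blog): the service composes an abstract port, so migrating the database means writing a new adapter, not rewriting business logic.

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional

# Port: the abstraction the business logic depends on
class UserRepository(ABC):
    @abstractmethod
    def save(self, user: Dict) -> None: ...

    @abstractmethod
    def find_by_id(self, user_id: int) -> Optional[Dict]: ...

# Adapter: one concrete implementation; a Postgres or Mongo
# adapter could be swapped in without touching the service
class InMemoryUserRepository(UserRepository):
    def __init__(self) -> None:
        self._store: Dict[int, Dict] = {}

    def save(self, user: Dict) -> None:
        self._store[user["id"]] = user

    def find_by_id(self, user_id: int) -> Optional[Dict]:
        return self._store.get(user_id)

# Service depends only on the port (the interface),
# never on a concrete repository class
class UserService:
    def __init__(self, repo: UserRepository) -> None:
        self._repo = repo

    def register(self, user_id: int, name: str) -> Dict:
        user = {"id": user_id, "name": name}
        self._repo.save(user)
        return user

service = UserService(InMemoryUserRepository())
service.register(1, "Ada")
```

A database-backed adapter would implement the same two methods, and UserService (and its tests, which can keep using the in-memory adapter) would not change at all.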
🧐 𝗝𝗣𝗔𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆 𝘃𝘀 𝗖𝗥𝗨𝗗𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆: 𝗪𝗵𝗮𝘁’𝘀 𝘁𝗵𝗲 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲?

If you’ve been working with Spring Data JPA for a while, you’ve probably noticed that sometimes we extend CrudRepository, and other times, JpaRepository. But what’s really the difference between them?

𝗟𝗲𝘁’𝘀 𝗴𝗼 𝘀𝘁𝗿𝗮𝗶𝗴𝗵𝘁 𝘁𝗼 𝘁𝗵𝗲 𝗽𝗼𝗶𝗻𝘁: Both are interfaces provided by Spring Data, and both help us perform basic operations like save(), findById(), delete(), etc. So, if CrudRepository already gives us all these basic CRUD operations, why would we need JpaRepository?

The answer: JpaRepository extends CrudRepository and adds more JPA-specific functionality. Here’s what you get extra with JpaRepository:
- Methods like findAll(Sort sort) and findAll(Pageable pageable), super useful when dealing with pagination and sorting (inherited from PagingAndSortingRepository).
- Batch operations such as saveAllAndFlush() or deleteAllInBatch().
- Integration with JPA features, like flushing the persistence context (flush()) or deleting entities in batches, which can significantly improve performance.

💡 𝗧𝗵𝗲 𝗺𝗮𝗶𝗻 𝗶𝗱𝗲𝗮:
- CrudRepository → Basic CRUD operations.
- JpaRepository → Everything from CrudRepository + JPA extras (pagination, sorting, batch operations, flush, etc).

If you’re using Spring Data JPA, the default and most common choice is JpaRepository, because it’s a superset of the other two main repository interfaces (CrudRepository and PagingAndSortingRepository). When we use JpaRepository, it’s not just about saving or finding data; it gives us extra control and performance with JPA.

Let me know how you’ve used these extra features in your projects!

#LearningJourney #CuriosityDriven #Java #developers #JavaDevelopers #Programming #SoftwareEngineering #CleanCode #TechTips #CodingJourney
Design Decision: String vs Map<String, Object> vs JsonNode in Java Data Models
———————————————————————
When designing an entity or DTO that includes flexible JSON data, such as API metadata, configuration, or custom attributes, choosing the right data type can simplify (or complicate) your processing logic. Let’s compare:

1. String
————————————
private String metadata;
• ✅ Best for raw storage (e.g., a DB column).
• ⚠️ Requires parsing when accessing JSON fields:
JsonNode node = objectMapper.readTree(metadata);
• 💡 Use this when the application doesn’t need to inspect the JSON frequently.

2. Map<String, Object>
————————————
private Map<String, Object> metadata;
• ✅ Directly deserializable from JSON.
• ✅ Easy to serialize back.
• ⚠️ Type erasure at runtime: nested values might still need casting.
• 💡 Good middle ground for APIs with a predictable JSON structure.

3. JsonNode
————————————
private JsonNode metadata;
• ✅ Tree model: allows partial traversal, dynamic keys, and on-demand access.
• ✅ Integrates seamlessly with Jackson.
• ⚠️ Slightly more verbose, but powerful for schema-less payloads.
String userType = metadata.path("user").path("type").asText();

Practical Scenarios:
———————————-
a. Raw storage or simple logging → String
b. Known structure but variable values → Map<String, Object>
c. Highly dynamic / unknown schema → JsonNode

#java #springboot #json #objectmapping #datamodel #jackson #backenddevelopment #softwaredesign #javadeveloper #api #restapi #microservices #codingtips #programming #softwareengineering #techinsights
Day 1 — What is Spring Data REST and Why Should You Care?

💬 The Question: I want to build a REST API, but I don’t want to write a lot of boilerplate code. Can Spring Data REST help me?

🧠 The Explanation: Normally, when you build a REST API with Spring Boot, you:
✅ Create Entity classes (to map to database tables).
✅ Create Repository interfaces (to handle database access).
✅ Write Controller classes (to expose endpoints like /api/products).

That’s a lot of code, especially when most endpoints are just simple CRUD (Create, Read, Update, Delete).

Spring Data REST solves this by:
✅ Reading your JPA repositories.
✅ Automatically creating REST endpoints for them.
✅ Supporting pagination, sorting, and links out of the box.

@Entity
public class Product {
    @Id @GeneratedValue
    private Long id;
    private String name;
    private Double price;
    // getters & setters
}

public interface ProductRepository extends JpaRepository<Product, Long> {}

URL: http://localhost:8080/products

{
  "_embedded": {
    "products": [
      {"name": "Laptop", "price": 999.0, "_links": {"self": {"href": "/products/1"}}}
    ]
  },
  "_links": {"self": {"href": "/products"}}
}

Learning never stops! Follow me for more Spring Boot, Java, and backend development content; let’s grow together 🙏

#SpringBoot #Java #BackendDevelopment #SpringData #RESTAPI #Developers #LearnToCode #TechLearning #CareerGrowth #Programming #SoftwareDevelopment #SpringFramework #APIDevelopment #BackendEngineer #JavaDeveloper #TechCommunity
Quick paper reading note: F3: The Open-Source Data File Format for the Future https://lnkd.in/gA98V5TR

I heard about this very interesting paper at a previous South Bay Systems meetup https://lnkd.in/gd5AmHU7

The core idea: a new columnar file format to replace Parquet, ORC, etc. by embedding WebAssembly (WASM) decoder code together with the data file.

The problem with Parquet / ORC and other open formats: legacy, inefficient encoding schemes. It is hard to adopt newer, better schemes while staying compatible with legacy systems.

WebAssembly https://lnkd.in/g-2hqcfc is commonly thought of as a web browser technology, but it can do a lot more. At its core, it is an open binary code format and corresponding runtime environment (like the JVM for Java). It allows C++, Rust, Golang, etc. source code to be compiled into WASM binary and executed in a client browser session. Why?
1. Much faster than JavaScript interpreter execution, so good for performance-sensitive browser apps.
2. Security by design: WASM code runs in isolated memory, with other sandbox isolation technologies.

This paper embeds data file decoders (as WASM binary code) into data files, so that a file reader can decode arbitrarily encoded data into Arrow format (a database-internal, in-memory data representation). This allows a data writer to implement newer, better data encoding schemes without worrying about backward compatibility.

Limitation: WASM has limited support for input/output data types and cannot directly read or write outside of WASM's sandboxed memory. So output Arrow data must be stored at a pre-defined WASM memory address, then copied over to host memory by the host process: this is slow and blocks the WASM instance from pipelining more inputs until the host consumes the output.

My opinions:
1. Fun and creative paper.
2. WASM has three main "utilities": performance (compared to JavaScript), security (sandboxed execution of untrusted code), and openness (multi-language support). In database systems, it is hard to surpass the performance of native C++ / Rust binaries, so secured code execution is the main benefit; openness gives more space for creativity.
3. Bigger security surface area and responsibility for developers (compared to traditional open file formats). Risk of unknown future WASM framework vulnerabilities.
4. WASM will be more powerful, but more complex and harder to secure, if it allows the caller to provide an output buffer for the WASM runtime to spill results into.
5. Put WASM into my toolbox as a secured code injection / isolation technology, next to virtual machines, containers, and other language-specific VMs (like the JVM).
🚀 Day 43 of #100DaysOfCode – LeetCode Problem #2460: Apply Operations to an Array

💬 Problem Summary: You’re given a non-negative integer array nums. You need to:
- Sequentially check each element nums[i].
- If nums[i] == nums[i + 1]: multiply nums[i] by 2 and set nums[i + 1] to 0.
- After all operations, move all zeros to the end of the array.
- Return the resulting array.

🧩 Example:
Input: [1,2,2,1,1,0]
Output: [1,4,2,0,0,0]
Explanation:
- i=1: 2 == 2 → [1,4,0,1,1,0]
- i=3: 1 == 1 → [1,4,0,2,0,0]
- Shift zeros → [1,4,2,0,0,0]

🧠 Logic:
✅ Traverse the array once, applying the doubling rule.
✅ Use a two-pointer approach or list building to shift zeros efficiently.

💻 Java Solution:

class Solution {
    public int[] applyOperations(int[] nums) {
        int n = nums.length;
        for (int i = 0; i < n - 1; i++) {
            if (nums[i] == nums[i + 1]) {
                nums[i] *= 2;
                nums[i + 1] = 0;
            }
        }
        int index = 0;
        for (int num : nums) {
            if (num != 0) nums[index++] = num;
        }
        while (index < n) {
            nums[index++] = 0;
        }
        return nums;
    }
}

⚙️ Complexity: Time: O(n), Space: O(1)
✅ Result: Accepted (Runtime: 0 ms)
💬 Takeaway: This problem reinforces the importance of in-place transformations and efficient data movement.
The most valuable knowledge here is understanding the Init Phase. To drastically reduce "cold start" latency, make sure to move all your heavy-duty setup like importing large libraries, creating database connections, and initializing AWS SDK clients outside your main handler function. This ensures that expensive work runs only once when the environment is created, not on every subsequent request.
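As a rough Python sketch of why this matters (the handler and the fake create_expensive_client are hypothetical; in real code the setup would be something like a boto3 client or a database connection pool): module-level code runs once, during the Init phase, while the handler runs on every invocation of a warm environment.

```python
import time

# Counts how often the expensive setup actually runs
INIT_CALLS = 0

def create_expensive_client():
    """Stand-in for slow setup: SDK client, DB connection, big import."""
    global INIT_CALLS
    INIT_CALLS += 1
    time.sleep(0.01)  # simulate initialisation latency
    return {"connected": True}

# Module scope: executed once per execution environment (Init phase)
client = create_expensive_client()

# Handler: executed on every Invoke, reusing the warm client
def handler(event, context=None):
    return {"client_ready": client["connected"], "inits": INIT_CALLS}

# Two "warm" invocations of the same environment:
# the setup cost was paid once, not once per request
first = handler({"n": 1})
second = handler({"n": 2})
```

Had create_expensive_client() been called inside the handler instead, every single request would pay the setup latency, which is exactly the cold-start cost the Init phase is designed to amortise.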