☁️ What is the 12-Factor App methodology? The Twelve-Factor App is a set of 12 best practices for building scalable, maintainable, cloud-ready applications. 🚀

The 12 Factors (simple + interview ready):

1. 📦 Codebase 👉 One codebase tracked in version control (Git); multiple deploys (dev, QA, prod)
2. 📚 Dependencies 👉 Explicitly declare dependencies; use Maven / Gradle in Java
3. ⚙️ Config 👉 Store config in environment variables, no hardcoding (e.g., DB URL, API keys)
4. 🧩 Backing Services 👉 Treat DB, cache, and messaging as attached resources, easily replaceable (e.g., MySQL → PostgreSQL)
5. 🔨 Build, Release, Run 👉 Keep the stages separate: build → compile, release → add config, run → execute
6. 🧱 Processes 👉 Run the app as stateless processes; no session state stored in memory
7. 🌐 Port Binding 👉 The app exposes its service via a port (e.g., Spring Boot on 8080)
8. ⚡ Concurrency 👉 Scale via multiple processes; horizontal scaling (more instances)
9. 🔄 Disposability 👉 Fast startup and graceful shutdown; important for containers (Docker)
10. 🧪 Dev/Prod Parity 👉 Keep dev, QA, and prod environments similar to avoid "works on my machine" issues
11. 📊 Logs 👉 Treat logs as event streams; don't store them locally, ship them to ELK / Splunk
12. 🛠️ Admin Processes 👉 Run admin tasks as one-off processes (e.g., DB migration scripts)

🎯 Interview short answer: The 12-Factor App is a methodology for building cloud-native applications. Its principles include using a single codebase, managing configuration via environment variables, keeping applications stateless, enabling horizontal scaling, and maintaining dev/prod parity to ensure scalability and maintainability.

#systemdesign #cloud #java
Twelve-Factor App Methodology for Cloud-Native Applications
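Factor 3 (Config) is the easiest one to demonstrate in code. Here is a minimal plain-Java sketch of reading config from the environment instead of hardcoding it; the variable name `DB_URL` and the JDBC fallback value are illustrative, not taken from any real project. In Spring Boot, the same effect usually comes from property placeholders rather than direct `System.getenv` calls.

```java
public class ConfigDemo {
    // Factor 3: config lives in the environment, never in the code.
    // Falls back to a local default so the app still boots in dev.
    static String getConfig(String name, String fallback) {
        String value = System.getenv(name);
        return (value != null && !value.isEmpty()) ? value : fallback;
    }

    public static void main(String[] args) {
        // Illustrative names: DB_URL / jdbc:postgresql://localhost/appdb
        String dbUrl = getConfig("DB_URL", "jdbc:postgresql://localhost/appdb");
        System.out.println("db.url=" + dbUrl);
    }
}
```

Deploy the same artifact to dev, QA, and prod, and only the environment changes — that is the whole point of the factor.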
A solid reminder of how building scalable, maintainable, and cloud-native applications requires strong foundational principles. From configuration management to stateless processes and efficient dependency handling — every factor plays a crucial role in modern application design. Really valuable insights for anyone working on distributed systems or cloud-based architectures. Kudos to the author for putting together such a practical and insightful post! 🚀 #SoftwareEngineering #CloudNative #12FactorApp #SystemDesign #Microservices #DevOps #Architecture
Ever wondered what actually happens when you hit an API endpoint?

You click a button… and data magically appears. But behind the scenes, a full backend pipeline runs in milliseconds ⚡ Let's break it down 👇

🌐 1. DNS Resolution
You hit api.example.com; DNS converts it into an IP address (e.g., 142.x.x.x). No DNS → no connection.

🔐 2. TCP + TLS Handshake
Before data flows, a TCP connection is established and a TLS handshake secures it (HTTPS 🔒), ensuring encrypted communication.

⚖️ 3. Load Balancer
The request doesn't go to just one server. It hits a load balancer, which distributes traffic, prevents overload, and improves availability.

🚪 4. API Gateway / Reverse Proxy
Acts as a gatekeeper: authentication (JWT, API keys), rate limiting, and routing to the correct service.

⚙️ 5. Application Server (Spring Boot)
Now your Java app kicks in: Controller → Service → Repository. Business logic runs, validations happen, and data is prepared.

🗄️ 6. Database / Cache
The app fetches data from a database (MySQL, PostgreSQL) or a cache (Redis for speed ⚡). 👉 Good systems always try the cache first.

🔄 7. Response Flow
Data travels back: Server → Gateway → Load Balancer → Client. All in a few milliseconds.

🧠 What most devs miss: an API call is NOT just a function call. It's networking 👉 security 👉 scalability 👉 system design.

🎯 The real shift:
Beginner: 👉 "I created an API"
Engineer: 👉 "I understand how requests flow through systems"

Next time you hit an API, remember: a lot more is happening than you think. Which part of this flow did you not know before? 👇

#Backend #Java #SystemDesign #APIs #SoftwareEngineering
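The very first hop of that pipeline, DNS resolution, can be observed directly from Java's standard library. A minimal sketch (resolving `localhost` so the lookup works even without network access; any real hostname like api.example.com would go through your configured resolver):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsStep {
    public static void main(String[] args) {
        try {
            // Step 1 of the pipeline: turn a hostname into an IP address.
            InetAddress addr = InetAddress.getByName("localhost");
            System.out.println(addr.getHostAddress()); // typically 127.0.0.1 (or ::1)
        } catch (UnknownHostException e) {
            // No DNS answer -> no connection, exactly as described above
            System.out.println("resolution failed: " + e.getMessage());
        }
    }
}
```

Everything after this — the TCP connect and TLS handshake — happens inside the HTTP client when you open the connection to that address.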
Your transaction works perfectly… ✅ Until you call another method inside it. Suddenly things behave… differently.

Why? Propagation.

If you're using `@Transactional` and not thinking about propagation, you're basically saying: "Spring… you decide."

So what is propagation? It defines how a transaction behaves when one method calls another. Simple idea, but huge impact. Let's break it down with real-world scenarios:

🔹 REQUIRED (default): "Join if one exists, else create"
💡 Example: Placing an order calls the payment service. Both run in the same transaction; if payment fails, everything rolls back.

🔹 REQUIRES_NEW: "Always start fresh"
💡 Example: The order fails ❌ but you still want to log the failure. Logging runs in a separate transaction, so it gets saved ✅

🔹 SUPPORTS: "Join if one exists, else run without a transaction"
💡 Example: Fetching optional audit data. Transaction or not… it doesn't care.

🔹 NOT_SUPPORTED: "Run without a transaction"
💡 Example: A heavy read operation where transaction overhead is unnecessary. Suspends any existing transaction ⏸️

🔹 MANDATORY: "A transaction must exist"
💡 Example: A critical financial operation. If there is no transaction, throw an error 🚨

🔹 NEVER: "A transaction must NOT exist"
💡 Example: A method that should never be part of a transaction. If one exists, an exception is thrown 💥

🔹 NESTED: "A transaction inside a transaction"
💡 Example: A partial-rollback scenario. The inner transaction fails, but the outer one can still continue.

Here's the real insight: most bugs in transaction handling don't come from syntax… they come from wrong propagation choices.

Using `@Transactional` is easy. Using it correctly is what makes you a solid backend developer. Next time you add `@Transactional`, don't stop there… Ask: "How should this transaction behave?"

#CoreJava #SpringBoot #JavaDeveloper #Transactional #BackendDevelopment #SoftwareEngineering #Developers #Programming #RDBMS #SQL #JPA #Hibernate #Database #Microservices #aswintech #SystemDesign
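The REQUIRED vs REQUIRES_NEW difference is easiest to see in a toy model. This is not Spring's implementation — just a minimal stand-in transaction manager (all class and method names here are invented for illustration) that shows why a REQUIRES_NEW failure log survives an outer rollback:

```java
import java.util.*;
import java.util.function.Supplier;

class TxManager {
    private final Deque<List<String>> stack = new ArrayDeque<>(); // open transactions
    final List<String> committed = new ArrayList<>();             // "the database"

    // REQUIRED: join the current transaction if one exists, else start a new one
    <T> T required(Supplier<T> work) {
        if (!stack.isEmpty()) return work.get(); // join existing tx
        return runNew(work);
    }

    // REQUIRES_NEW: always run in a fresh, independent transaction
    <T> T requiresNew(Supplier<T> work) {
        return runNew(work);
    }

    private <T> T runNew(Supplier<T> work) {
        stack.push(new ArrayList<>());
        try {
            T result = work.get();
            committed.addAll(stack.peek()); // commit this tx's writes on success
            return result;
        } finally {
            stack.pop(); // on exception the frame is discarded: rollback
        }
    }

    void write(String row) { stack.peek().add(row); }
}

public class PropagationDemo {
    public static void main(String[] args) {
        TxManager tx = new TxManager();
        try {
            tx.required(() -> {
                tx.write("order");                        // outer tx write
                tx.requiresNew(() -> {                    // independent inner tx
                    tx.write("failure-log");
                    return null;
                });                                       // inner tx commits here
                throw new RuntimeException("payment failed"); // outer rolls back
            });
        } catch (RuntimeException ignored) { }
        System.out.println(tx.committed); // [failure-log] — "order" was rolled back
    }
}
```

The outer transaction rolls back the order, but the failure log was committed by its own transaction first — exactly the audit-logging scenario above.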
Day 10/30: If you can't trace a failed request across services in under 2 minutes, your logging is broken.

Most teams realize this during an incident. At 2 AM. With leadership asking, "What happened?"

A user reports: "My order failed." You check:
Order Service → request looks fine
Payment Service → no record
API Gateway → thousands of requests, impossible to isolate one

45 minutes later, you're still grepping logs across 5 services. That's not a debugging problem. That's a logging architecture problem.

3 things every production log must have:

1️⃣ Structure: log JSON, not sentences
Human-readable logs don't scale. Machine-queryable logs do. Structured logs let you filter by orderId, userId, traceId, amount, latency — instantly. When you have millions of log lines, you don't read. You query.

2️⃣ Correlation: one traceId everywhere
Without a correlation ID, the gateway logs tell one story, the order logs another, and the payment logs a third. With a single traceId, they become one timeline. One query should tell you when the request entered, which service failed, why, and at which millisecond. If you need multiple terminal windows and manual grep… you've already lost.

3️⃣ Centralization: all logs, one place
Logs on individual servers are effectively invisible. Ship everything to a central system: ELK, Datadog, Loki, CloudWatch — pick your poison.

Key rules:
✅ Log to stdout
✅ Let your platform collect and forward
❌ Don't SSH into servers to read files
If logs aren't searchable centrally, they don't exist during incidents.

What to log (and what not to):
✅ Request entry and exit (with duration)
✅ Every external call
✅ Every exception with full context
✅ Every state transition (order created → payment started → failed)
❌ Tight loops
❌ Sensitive data (passwords, cards, tokens)
❌ DEBUG by default in production
INFO + structured fields + traceId beats verbose noise every time.

The rule that covers everything: a developer who's never seen your system should be able to take a traceId from a customer complaint, reconstruct exactly what happened, across all services, without touching a single server. If that's not true today, your logging isn't done yet.

#microservices #springboot #java #backend #softwareengineering
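Structure + correlation can be sketched in plain Java. In real Spring Boot services you would use SLF4J's MDC and a JSON log encoder; this stdlib-only sketch (class name, field names, and the traceId value are all illustrative) just shows the idea of a per-thread context stamped onto every log line:

```java
import java.util.*;

public class StructuredLog {
    // MDC-style context: values here get stamped onto every log line on this thread
    static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(LinkedHashMap::new);

    static String log(String level, String msg, Map<String, String> fields) {
        Map<String, String> all = new LinkedHashMap<>(CTX.get());
        all.put("level", level);
        all.put("msg", msg);
        all.putAll(fields);
        StringJoiner j = new StringJoiner(",", "{", "}");
        all.forEach((k, v) -> j.add("\"" + k + "\":\"" + v + "\""));
        return j.toString(); // machine-queryable JSON, not a sentence
    }

    public static void main(String[] args) {
        // Set once at request entry (in a real service: a servlet filter / gateway)
        CTX.get().put("traceId", "abc-123");
        System.out.println(log("INFO", "order created", Map.of("orderId", "42")));
    }
}
```

Every subsequent log call on this thread carries the same traceId, so one query over the central log store reconstructs the whole request timeline.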
In a Spring Boot application, code is structured into layers to keep things clean, maintainable, and scalable. The most common layers are Controller, Service, and Repository, each with a clear responsibility.

i) Controller
* Entry point of the application.
* Handles incoming HTTP requests (GET, POST, etc.).
* Accepts request data (usually via DTOs).
* Returns the response to the client.

ii) Service
* Contains business logic.
* Processes and validates data.
* Converts DTO ↔ Entity.

iii) Repository
* Connects to the database.
* Performs CRUD operations.
* Works directly with Entity objects.

Request flow (step by step). Let's understand what happens when a user sends a request:

1. Client sends a request, e.g. `POST /users` with JSON data.

2. Controller receives the request, maps it to a method, and accepts the data in a DTO.

```
@PostMapping("/users")
public UserDTO createUser(@RequestBody UserDTO userDTO) {
    return userService.createUser(userDTO);
}
```

3. Controller → Service: the DTO is passed to the service layer.

4. Service processes the data: applies business logic and converts DTO → Entity.

```
User user = new User();
user.setName(userDTO.getName());
```

5. Service → Repository: the service calls the repository to save the data.

```
userRepository.save(user);
```

6. Repository → Database: the data is stored in the DB.

7. Response flow back: Repository → Service → Controller. The entity is converted back to a DTO and the response is sent to the client.

Why DTOs are used:
* Prevent exposing the internal entity structure.
* Control input/output data.
* Improve security.
* Keep layers independent.

Why this architecture matters:
* Clear separation of concerns
* Easier debugging and testing
* Scalable, maintainable codebase

#Java #Spring #SpringBoot #BackendDevelopment #SoftwareEngineering #JavaDeveloper
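The whole round trip — DTO in, entity saved, DTO back out — can be shown without Spring at all. A toy, dependency-free sketch (class and field names are illustrative; a real app would use Spring's annotations, constructor injection, and JPA):

```java
import java.util.*;

class UserDTO { String name; UserDTO(String n) { name = n; } }   // what the client sees
class User { Long id; String name; }                              // entity: has DB-only fields

class UserRepository {                                            // talks to "the database"
    private final Map<Long, User> db = new HashMap<>();
    private long seq = 0;
    User save(User u) { u.id = ++seq; db.put(u.id, u); return u; }
}

class UserService {                                               // business logic + mapping
    private final UserRepository repo = new UserRepository();
    UserDTO createUser(UserDTO dto) {
        User entity = new User();                                 // DTO -> entity
        entity.name = dto.name;
        User saved = repo.save(entity);
        return new UserDTO(saved.name);                           // entity -> DTO (id stays internal)
    }
}

public class LayersDemo {
    public static void main(String[] args) {
        UserDTO response = new UserService().createUser(new UserDTO("Asha"));
        System.out.println(response.name); // Asha — the generated id never leaks out
    }
}
```

Note how the generated `id` exists only inside the service/repository boundary — that is the "don't expose internal entity structure" benefit in action.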
We didn't have a performance issue. We had an API identity crisis.

Everything looked fine on the surface:
- Requests were fast.
- Errors were low.
- Dashboards were green.

But users kept complaining. "Why is this so slow?" "Why does it load unnecessary data?" "Why does it break randomly?"

We scaled servers. Optimized queries. Added caching. Nothing worked. Until we realized: we weren't dealing with a performance problem. We were dealing with the wrong API strategy.

Here's what was actually happening:
→ The frontend needed flexibility → we forced rigid REST
→ Data relationships were complex → we avoided GraphQL
→ Multiple services were talking → no proper internal API contracts
→ External integrations were growing → zero partner structure

So every fix… was just a patch on a broken foundation. That's when it clicked: APIs are not just endpoints. They define how your entire system thinks. Choose wrong, and you'll spend months optimizing the wrong layer. Choose right, and half your problems never show up.

Most developers debug code. Few debug their architecture. That's the real difference.

Comment "API" and I'll send you a complete API reference guide. Follow Antony Johith Joles R for more.

#APIs #BackendDevelopment #Java #SystemDesign #WebDevelopment #SoftwareEngineering #Programming #TechLearning
🚀 Unlocking the Power of APIs in Spring Boot: REST vs. GraphQL vs. Reactive

When we talk about building APIs with #SpringBoot, there isn't a one-size-fits-all answer. Depending on your system's architecture, data needs, and performance requirements, you have powerful options. Here's a quick overview of the three major API paradigms we work with most often in the Spring ecosystem:

1️⃣ REST APIs (REpresentational State Transfer)
The standard for years: stateless, resource-oriented, and using HTTP verbs (GET, POST, etc.) for communication.
Key annotations: @RestController, @GetMapping, @PostMapping
Use case: when you need simplicity, caching, or standard protocol adherence (e.g., microservice communication).

2️⃣ GraphQL
A query language for APIs. It lets the client define exactly what data it needs, avoiding over-fetching and under-fetching, and typically operates through a single endpoint.
Key annotations: @SchemaMapping, @QueryMapping
Use case: ideal for front-end-heavy apps, complex data relationships, and mobile clients with bandwidth constraints.

3️⃣ Reactive APIs (Spring WebFlux)
Built for non-blocking, asynchronous communication. A small number of threads handles a massive number of concurrent requests.
Key types: Mono<T> (0–1 results), Flux<T> (0–N results)
Use case: high-concurrency systems, streaming applications, and IO-bound tasks where thread efficiency is crucial.

Which approach are you using for your current projects, and what made you choose it? Let's discuss in the comments! 👇

#java #springboot #api #restapi #graphql #webflux #microservices #backend #softwareengineering #learncoding #linkedinlearning
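The core idea behind Reactive APIs — the handler thread is released while the result is still being produced — can be illustrated without Reactor. As a rough stand-in, `Mono<T>` behaves much like the JDK's `CompletableFuture<T>`: the method returns a promise immediately and the value arrives later. A minimal stdlib-only sketch (the `findUser` method and its delay are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncDemo {
    // Returns immediately; the value is produced ~50ms later on another thread.
    // In WebFlux this would be a Mono<String> instead of a CompletableFuture.
    static CompletableFuture<String> findUser(long id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id,
                CompletableFuture.delayedExecutor(50, TimeUnit.MILLISECONDS));
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> pending = findUser(42); // non-blocking call
        System.out.println("handler thread is already free");
        System.out.println(pending.get()); // user-42 (blocking here only for the demo)
    }
}
```

A blocking REST handler would pin its thread for the full 50ms; here the calling thread prints and moves on while the work completes elsewhere — multiplied across thousands of requests, that is the WebFlux win.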
Latency is up, but nothing is failing. Where do you look first? 🔍

One of the worst production situations:
Latency is growing 📈
Users feel it 😐
Logs are clean 🧼
Nothing is obviously broken ❌

Most teams waste time here. They search for errors 🔎, restart pods 🔄, and jump between dashboards 📊. But when nothing is failing, the problem is rarely an exception. It is usually one of these:

1. Scope first 🎯
One endpoint or all? One instance or all? Reads, writes, or async? If you skip this, you debug the whole system instead of a slice.

2. Thread pools 🧵
Active threads, queue size, blocked threads. If all workers are busy, requests are not failing — they are waiting to run.

3. Thread dump 📸
Look for: repeated stack traces, WAITING / BLOCKED threads, DB connection waits, socket reads, lock contention. This shows where execution is actually stuck.

4. GC behavior ♻️
Pause time, frequency, heap pressure. If latency spikes in waves, GC is often involved.

5. Connection pools 🧩
DB, HTTP clients, Redis, broker. An exhausted pool means requests wait instead of fail. Classic "slow but no errors".

6. Queues & lag 📊
Queue depth, consumer lag, retries. The system may look fine while work silently accumulates.

7. Downstreams 🌐
DB, internal services, external APIs. Your service might be slow because it is efficiently waiting on something else.

The key shift: no errors does not mean no problem. ❗ It usually means the bottleneck is in waiting, saturation, contention, or backlog. Stop hunting for exceptions first. Start finding where time is spent.

How do you usually localize the bottleneck first in this situation? 🤔

#backend #java #springboot #observability #performance #distributedsystems #productionengineering
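Step 3 (the thread dump) doesn't require attaching `jstack` from outside — the JVM can inspect itself via the standard `java.lang.management` API. A minimal in-process probe along these lines (the class name is illustrative; a real service might expose this through an actuator-style endpoint):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpProbe {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // false, false: skip monitor/synchronizer details for a quick scan
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            // BLOCKED / WAITING threads are where latency hides
            // when nothing is actually failing
            System.out.printf("%-30s %s%n", info.getThreadName(), info.getThreadState());
        }
    }
}
```

If most worker threads show WAITING on a connection pool or BLOCKED on the same lock, you've localized the bottleneck without reading a single error log.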
🚀 Deep Internal Flow of a REST API Call in Spring Boot

🧭 1. Entry point: the gatekeeper
DispatcherServlet is the front controller. Every HTTP request must pass through this single door.
Flow: Client → Tomcat (embedded server) → DispatcherServlet

🗺️ 2. Handler mapping: finding the target
DispatcherServlet asks: "Who can handle this request?" It consults RequestMappingHandlerMapping, which scans for @RestController and @RequestMapping.
Flow: DispatcherServlet → HandlerMapping → controller method found

⚙️ 3. Handler adapter: executing the method
Once the method is found, Spring doesn't call it directly. It uses RequestMappingHandlerAdapter. Why? Because it handles parameter binding, validation, and conversion.
Flow: HandlerMapping → HandlerAdapter → controller method invocation

🧭 4. Request flow (forward):
Controller → Service layer (business logic) → Repository layer → Database

🔄 5. Response processing: the return journey
The response travels back upward: Repository → Service → Controller → DispatcherServlet → Tomcat → Client.

⚡ Hidden magic (senior-level insights):
🧵 Thread handling: each request runs on a separate thread from Tomcat's pool
🔒 Transaction management: via @Transactional, with proxy-based AOP behind the scenes
🎯 Dependency injection: beans wired by the Spring IoC container
🧠 AOP (cross-cutting): logging, security, and transactions wrapped around methods
⚡ Performance layers: caching (Spring Cache) and connection pooling (HikariCP)

🧠 The real insight
At junior level I thought: 👉 "The API call hits the controller."
At senior level I observe: 👉 "A chain of abstractions collaborates through well-defined contracts under the orchestration of DispatcherServlet."

#Java #SpringBoot #RestApi #FullStack #Developer #AI #ML #Foundations #Security
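The front-controller idea at the heart of DispatcherServlet — one entry point, a handler mapping, then delegated invocation — fits in a few lines of plain Java. This is a deliberately tiny model, not Spring's actual code (class and route names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class FrontController {
    // Handler mapping: path -> handler. In Spring this is the job of
    // RequestMappingHandlerMapping, built by scanning @RequestMapping methods.
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    void register(String path, Function<String, String> handler) {
        routes.put(path, handler);
    }

    String dispatch(String path, String body) {
        Function<String, String> handler = routes.get(path); // 1. find the handler
        if (handler == null) return "404";                   // no mapping found
        return handler.apply(body);                          // 2. invoke it
                                                             //    (the HandlerAdapter's job)
    }

    public static void main(String[] args) {
        FrontController dispatcher = new FrontController();
        dispatcher.register("/users", body -> "created: " + body);
        System.out.println(dispatcher.dispatch("/users", "Asha")); // created: Asha
        System.out.println(dispatcher.dispatch("/nope", ""));      // 404
    }
}
```

What the real DispatcherServlet adds on top of this skeleton is exactly the "hidden magic" listed above: argument binding, validation, message conversion, interceptors, and exception handling — all before and after that single `handler.apply` step.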