Setting Realistic Deadlines


  • The AI-RAN Taking Shape

    I'm thrilled to announce our latest research contribution, which fundamentally transforms how we design, deploy, and test key functionalities of cellular networks. Our new paper "ALLSTaR - Automated LLM-Driven Scheduler Generation and Testing for Intent-Based RAN" represents three major industry firsts:

    ⚡ First-Ever Automated Scheduler Generation: We've developed LLM agents that automatically convert research papers into functional code, generating 18 different scheduling algorithms directly from academic literature using OCR and AI. No more months of manual implementation in ns-3 or MATLAB! Automatically generated schedulers are deployed in a live network as dApps through a CI/CD pipeline, without changing a single line of code in the gNodeB implementation (CU or DU).

    ⚡ Intent-Based Scheduling: Network operators can now express high-level requirements in natural language ("prioritize users with bursty traffic"), and ALLSTaR automatically translates these into scheduling policies optimized for the operator's intent.

    ⚡ World's First O-RAN Compliant AI-RAN Testbed: All validation was conducted on X5G with AutoRAN, a production-grade, multi-vendor 5G infrastructure with GPU acceleration and AI-for-RAN and AI-and-RAN capabilities, demonstrating real-world viability at scale.

    This work also introduces a methodological paradigm shift: instead of implementing one algorithm at a time, we can now systematically evaluate a vast body of scheduling literature in production-like environments. We're moving from manual, months-long integration processes to automated, intent-driven networks that adapt in real time. This is the Open RAN and AI-RAN vision, and a pathway toward 6G that builds on our national strengths and open ecosystem.
Full paper: https://lnkd.in/eTNWPNRR Open6G www.open6g.us #ORAN #AIRan #OpenRAN #5G #WirelessResearch #AI #MachineLearning #Telecommunications #Research Our brilliant team: Maxime Elkael Michele Polese Reshma Prasad Stefano Maxenti Office of the Under Secretary of Defense for Research and Engineering NSF AI-EDGE Institute National Telecommunications and Information Administration (NTIA) Qualcomm

  • View profile for Akhil Sharma

    System Design · AI Architecture · Distributed Systems

    24,365 followers

    Designing an AI System That Doesn't Collapse Under Latency Spikes

    A single user query passes through multiple stages: tokenization → batching → GPU scheduling → model execution → post-processing → response assembly.

    Now picture this: a few heavy prompts take 5× longer than average. Your batching layer waits to fill the "perfect batch." Meanwhile, the queue grows. Requests start timing out. Retries stack up. That's when you realize: you're not running out of compute. You're running out of control.

    Here's how you design for resilience instead of collapse 👇

    1️⃣ Bounded Queues
    Never let latency scale linearly with load. Bound your input queues and shed load proactively, either by dropping excess requests or serving degraded responses. Unbounded queues are silent killers: they delay backpressure, causing cascading timeouts. Think of it like circuit breakers for inference: graceful denial is better than system-wide collapse.

    2️⃣ Adaptive Batching
    Static batch sizes look great in benchmarks and terrible in production. Instead, make batch sizes dynamic, continuously tuned based on GPU occupancy, queue length, and recent tail-latency percentiles (P95/P99). At low load, batch small for lower latency. At high load, batch large for throughput, but with strict timeouts. The goal is elasticity without unpredictability.

    3️⃣ Token-Aware Scheduling
    Batching by request count is naive. In LLM workloads, token length determines cost. A single 10,000-token prompt can stall 15 smaller ones if batched together. Token-aware schedulers track the total token budget per batch and allocate GPU time accordingly. This ensures fairness and consistent latency curves even under mixed workloads.

    4️⃣ Partial Caching
    Most engineers cache final model outputs. That helps little. What actually saves time is pre- and post-compute caching: tokenized inputs, embeddings, and prompt templates. These are deterministic and cheap to reuse, shaving milliseconds off critical paths. Combine that with vector cache lookups to skip redundant reasoning altogether.

    5️⃣ Deadline-First Scheduling
    In multi-tenant inference systems, not all requests are equal. Prioritize requests by expected completion deadline instead of FIFO order. This minimizes tail latency and improves QoS across traffic tiers. It's the same principle airlines use: business class boards first, but everyone still gets there.

    This is where systems engineering meets AI infrastructure. LLM inference at scale isn't just about throughput; it's about temporal predictability.

    Inside my Advanced System Design Cohort, we go deep into these challenges: how to design AI systems that don't just scale, but stay stable under load. If you've been leading distributed systems or AI infra and want to sharpen your architectural depth, there's a link to a form in the comments. Apply, and we'll check if you're a great fit.
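
The bounded-queue and token-aware ideas above can be sketched together. This is a minimal illustration; the parameter values (queue bound, token budget, timeout) are hypothetical placeholders, not numbers from the post:

```python
import time
from collections import deque

# Hypothetical parameters -- tune against your own GPU and traffic profile.
MAX_QUEUE = 256          # bounded queue: shed load beyond this depth
TOKEN_BUDGET = 8192      # max total tokens packed into one batch
BATCH_TIMEOUT_S = 0.02   # never wait longer than this to fill a batch

queue = deque()

def submit(request):
    """Admit a request, or shed load when the bounded queue is full."""
    if len(queue) >= MAX_QUEUE:
        return False          # graceful denial beats cascading timeouts
    queue.append(request)
    return True

def next_batch():
    """Token-aware batching: pack by total token budget, not request count."""
    batch, tokens = [], 0
    deadline = time.monotonic() + BATCH_TIMEOUT_S
    while time.monotonic() < deadline and queue:
        req = queue[0]
        if batch and tokens + req["tokens"] > TOKEN_BUDGET:
            break             # a huge prompt must not stall the smaller ones
        batch.append(queue.popleft())
        tokens += req["tokens"]
    return batch
```

A real scheduler would add per-tenant deadlines and degraded-response paths, but even this skeleton shows where backpressure enters: at `submit`, not after the GPU is already saturated.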

  • View profile for Mihir Jhaveri (PMP, F.IOD)

    Chief Commercial Officer | Industry 4.0 Platforms & Enterprise Performance Management (EPM) - OneStream | Building Scalable Revenue, Partner Ecosystems & Market Credibility | Rejig Digital | Solution Analysts

    37,668 followers

    Smart PPS (Production Planning and Scheduling): Redefining the Role of the Planner in Manufacturing - QeMFG

    Every manufacturing shopfloor has one silent warrior: the Planner. Balancing customer demands, production constraints, machine capacities, and supplier dependencies is no small feat. Yet, too often, planners find themselves stuck in Excel sheets, chasing updates, and firefighting issues rather than truly planning. This is exactly where Smart Production Planning & Scheduling (Smart PPS) transforms the game.

    👉 From Firefighting to Foresight
    Smart PPS shifts planners from reactive problem solvers to strategic decision-makers. By digitizing and automating the core planning process, it ensures that production is not just scheduled, but intelligently orchestrated.

    👉 What Planners Gain with Smart PPS
    Real-Time Visibility: A unified dashboard highlights machine status, material availability, and workforce allocation, giving planners complete control at a glance. No more running around the shopfloor to gather updates.
    Dynamic Rescheduling: Sudden changes (machine breakdowns, urgent customer orders, or material delays) are handled instantly with auto-rescheduling. Planners can adapt without disruption.
    Seamless ERP & IoT Integration: Sales orders flow directly from ERP, and IoT-enabled machines send live production data. This keeps planning aligned with reality, not assumptions.
    Scenario Simulations: "What if" analysis allows planners to evaluate multiple options before committing. Whether it's adding a shift, re-prioritizing an order, or balancing supplier delays, decisions are powered by data, not guesswork.
    Cross-Functional Collaboration: Procurement, Quality, and Shopfloor Supervisors all work from the same updated schedule, reducing miscommunication and rework.
    The Results Speak for Themselves
    👉 Improved on-time delivery
    👉 Better machine utilization
    👉 Reduced idle time and bottlenecks
    👉 Less stress for planners, more focus on strategy
    👉 A stronger link between planning and execution

    Why It Matters
    When planners succeed, the entire shopfloor succeeds. And when the shopfloor runs smoothly, businesses not only meet deadlines - they win customer trust and unlock new growth opportunities. At QeMFG, our vision with Smart PPS is simple: empower the planner, elevate the production ecosystem, and create a future-ready manufacturing floor.

    👉 Curious to see how Smart PPS can transform your planning process? Let's connect.

    #SmartPPS #Manufacturing #Engineering #ProductionPlanning #ShopfloorExcellence #ERP #Industry40 #SmartManufacturing #QeMFG

  • View profile for Nitin Gupta

    5G & O-RAN Architect | Guiding 46K+ Engineers to Master LTE, 5G NR, AI-ML in Telecom, DevOps for Telecom

    46,337 followers

    🔷 Day 14: Reinforcement Learning in 5G Resource Allocation
    Optimizing spectrum, power, and scheduling through AI that learns from the network itself.

    📌 Why Reinforcement Learning (RL) in 5G?
    Unlike supervised models that rely on labeled data, RL learns by trial and error, using feedback (rewards) from its environment. 5G resource allocation is dynamic and context-aware, so RL fits perfectly.

    📌 Key Resource Challenges in 5G NR
    - Scheduling PRBs under ultra-low latency constraints
    - Power control in dense small-cell environments
    - Mobility and handover management
    - Interference-aware resource reuse
    - Slice-specific QoS assurance

    📌 How RL Solves These
    Agent: the network function (e.g., scheduler, SMO, RIC)
    State: network KPIs such as CQI, buffer size, UE mobility, demand
    Action: allocate PRBs, select MCS, adjust transmit power
    Reward: higher throughput, lower latency, reduced packet drop
    Over time, the RL agent learns to take optimal actions that maximize overall network performance.

    📌 Practical Use Cases We Covered
    - Dynamic PRB scheduling in congested cells
    - Beam selection based on prior user movement patterns
    - RAN slicing with real-time policy enforcement
    - Intelligent power allocation to balance SINR across users

    📌 What Makes RL Ideal for 5G?
    - Operates in real-time environments
    - Learns from unpredictable user behavior
    - Scales across multi-agent setups (e.g., CU-DU split)
    - Adapts to dynamic interference and load patterns

    📘 Technical References
    ITU-T Y.3173 - Framework for ML in future networks
    O-RAN WG2 - Near-RT RIC AI Training & Inference
    3GPP TR 38.891 - Study on AI/ML for 5G NR

    #5G #AIin5G #ReinforcementLearning #RANOptimization #5GNR #O_RAN #TelecomAI #NitinGupta #Day14 #ResourceAllocation #RIC #SON #5GTraining #WhatsAppLearning
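
The agent/state/action/reward mapping above can be made concrete with a toy Q-learning loop. This is a minimal sketch: the two states, the PRB counts, the reward shape, and the random state transitions are all invented stand-ins, not a realistic 5G NR model:

```python
import random

random.seed(0)  # deterministic toy run

# States, actions, and rewards are illustrative stand-ins for the mapping
# described in the post, not a faithful model of an NR scheduler.
STATES = ["low_load", "high_load"]   # e.g. derived from buffer size / CQI
ACTIONS = [2, 4, 8]                  # PRBs allocated per TTI (toy values)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, prbs):
    # Toy reward: large allocations relieve congestion at high load,
    # while small allocations avoid wasting PRBs at low load.
    if state == "high_load":
        return prbs
    return 4 - abs(prbs - 2)

def step(state):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(STATES)   # toy environment transition
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    # Q-learning update: nudge Q toward the bootstrapped target
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    return next_state

state = "low_load"
for _ in range(5000):
    state = step(state)
```

After training, the greedy policy allocates many PRBs when the cell is congested and few when it is idle, which is exactly the trial-and-error adaptation the post describes.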

  • View profile for Sione Palu

    Machine Learning Applied Research

    37,877 followers

    The Flexible Job Shop Scheduling Problem (FJSP) represents a critical advancement in industrial optimization, extending the classical Job Shop Scheduling Problem (JSSP) by introducing a dual-decision layer. While JSSP requires determining the sequence of operations on pre-assigned machines, FJSP adds the complexity of 'machine assignment', where each operation can be processed by any machine from a compatible set. This flexibility is essential for modern smart manufacturing, as it allows production systems to adapt to machine breakdowns and varying workloads, directly impacting operational efficiency and resource utilization in high-stakes environments.

    Historically, FJSP has been tackled using traditional exact methods like Integer Programming and meta-heuristics such as Genetic Algorithms (GA) or Tabu Search. More recently, Deep Reinforcement Learning (DRL) has emerged as a dominant approach, utilizing GNNs and Transformers to learn scheduling policies that can generate solutions in real time. These neural-network-based methods treat the scheduling environment as a dynamic graph or sequence, attempting to map complex shop floor states to optimal dispatching rules.

    Despite their potential, current automated solvers face significant bottlenecks. The primary challenge lies in the 'curse of dimensionality' and sequence length. As the number of jobs and machines increases, the scheduling sequence grows quadratically, causing standard Transformers to suffer from extreme computational overhead due to their O(L^2) complexity. Furthermore, GNN-based methods often struggle to capture long-range dependencies between operations scheduled far apart in time, leading to sub-optimal machine assignments and increased makespan.

    To address the shortcomings highlighted above, the authors of [1] introduce M-CA (Mamba-CrossAttention), a novel architecture that replaces the standard self-attention mechanism with Selective State Space Modeling (Mamba).
Mamba offers linear scaling O(L) with respect to sequence length, allowing the model to process much larger scheduling horizons efficiently. The M-CA framework specifically utilizes a 'Mamba-based Encoder' to capture global temporal dependencies and a 'Cross-Attention Decoder' to focus on the immediate machine-operation compatibility. This hybrid approach is superior because it maintains the high-fidelity global context of the entire factory state while drastically reducing the memory footprint and inference time required by traditional Transformers. Experiments show M-CA consistently outperforms state-of-the-art DRL baselines, Transformer-based models, and traditional heuristics across problem scales, achieving lower makespans and up to 5× faster inference. Mamba’s superior 'forgetting and remembering' mechanism drives scalability and robust performance by filtering out irrelevant scheduling noise to focus on critical constraints. The link to the paper [1] is posted in the comments.
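
To make FJSP's dual decision concrete, here is a minimal sketch using a greedy earliest-finish dispatching baseline. The instance data and the heuristic are invented for illustration; this is not the paper's M-CA method:

```python
# Toy FJSP instance: each job is a list of operations; each operation maps
# its compatible machines to processing times (the "flexible" part of FJSP).
jobs = [
    [{"M1": 3, "M2": 5}, {"M2": 2, "M3": 4}],   # job 0: two operations
    [{"M1": 4, "M3": 3}, {"M1": 2, "M2": 2}],   # job 1: two operations
]

def greedy_schedule(jobs):
    """Dispatch operations in job order, picking the compatible machine that
    yields the earliest finish time -- a simple baseline heuristic."""
    machine_free = {}             # machine -> time it next becomes free
    job_ready = [0] * len(jobs)   # job -> time its next operation may start
    schedule = []
    for j, ops in enumerate(jobs):
        for k, op in enumerate(ops):
            # Dual decision: choose the machine (assignment) AND, implicitly,
            # the start time on that machine (sequencing).
            machine, dur = min(
                op.items(),
                key=lambda mt: max(machine_free.get(mt[0], 0), job_ready[j]) + mt[1],
            )
            start = max(machine_free.get(machine, 0), job_ready[j])
            machine_free[machine] = job_ready[j] = start + dur
            schedule.append((j, k, machine, start, start + dur))
    makespan = max(end for *_, end in schedule)
    return schedule, makespan
```

Learned approaches like DRL or M-CA replace exactly this `min(...)` dispatching rule with a trained policy; the surrounding state (machine availability, job readiness) is what their encoders must represent at scale.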

  • View profile for Dan Murray

    Co-Founder of Heights | Angel Investor in over 100 startups | Follow for daily posts on Health, Business & Personal growth.

    226,970 followers

    Time blocking fails when you underestimate duration, create rigid schedules, and never adjust the system. Here's how to make it work:

    Track real task durations for one week, then multiply estimates by 1.5. The planning fallacy means we underestimate by 40% on average. If writing takes 90 minutes, block 2 hours.

    Block categories, not individual tasks. "9am-11am: Deep Work" beats "Reply to email 10:15-10:30" because one delay won't collapse your entire day.

    Build in flex blocks. Add 30 minutes before lunch and mid-afternoon. If the day runs smoothly, use them for planning. If chaos hits, they absorb it.

    Calendar the invisible work first: commute time, email processing, meals, recovery after meetings. Then plug your to-do list into the actual remaining capacity.

    Weekly 15-minute review: which blocks worked, which tasks took longer, where did interruptions happen? Adjust your template accordingly.

    Aim for 70% adherence, not perfection. The system works when it evolves with your reality, not against it.

    -------------------------------------------------

    Follow me Dan Murray for more on habits and leadership.

    ♻️ Repost this if you think it can help someone in your network!

    🖐️ P.S. Join my newsletter The Science Of Success, where I break down stories and studies of success to teach you how to turn it from probability to predictability, here: https://lnkd.in/d9TnkzdH

  • View profile for Sawan S Laddha
    Sawan S Laddha is an Influencer

    Growth Specialist for Startups & MSMEs | Founder, Workie Office Spaces | 22,000+ Seats Delivered | Investor | Founding Member YPO MP | President TiE MP | Building businesses by unlocking scale, space & talent

    36,073 followers

    I am a solo founder scaling 2 companies, and here is how I maximise each day with 14+ hours of work.

    As entrepreneurs, we often juggle numerous tasks and meetings, making work feel overwhelming at times. The key to overcoming this? A well-balanced approach to managing time. Over the years, I've found that using the right time management techniques has not only boosted my productivity but also led to better ideation and planning ahead.

    Here are my best techniques to save you extra hours of work:

    1️⃣ Eisenhower matrix: The concept of this technique is to organise your list into four separate quadrants. Sort tasks by the parameters important vs. unimportant and urgent vs. not urgent. Urgent tasks are the ones that need immediate action, and important tasks are the ones that contribute to your long-term vision. The goal is to work on the tasks in the top two quadrants; the ones in the remaining quadrants can be deleted or delegated.

    2️⃣ Time blocking: Elon Musk is known to work 80 hours a week, and his secret to getting everything done is this technique. For every task that you take up, allocate a time block and stick to it no matter what. Scheduling tasks with time blocks and buffer breaks allows you to perform high-impact work in minimum hours, yielding maximum output.

    3️⃣ Eat that frog: Begin your day by working on the most challenging tasks. When you focus your mental energy on performing the tough tasks, it fills you up with more drive and motivation to seize the day.

    Using these techniques, I have been able to save X+ hours of work every week and devise growth strategies, and this has helped us retain more clients, achieve bigger targets, and crack better deals.

    What's your best technique that helps you manage your time?

  • View profile for Andre Heeg, MD

    MD | BCG Partner | Executive health that survives your actual week | The Upward ARC

    11,648 followers

    It's almost 2026. Time to block your calendar before everyone else does it for you. Here are the blocks I put in first:

    𝗗𝗲𝗲𝗽 𝗪𝗼𝗿𝗸 𝗕𝗹𝗼𝗰𝗸𝘀
    2-hour blocks. Same time each day works best. Phone off. Email closed. Notifications off. Give your hardest work your best hours, not whatever's left over.

    𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗰 𝗧𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗧𝗶𝗺𝗲
    90 minutes every week. Same day, same time. Work on your business, not just in it. Where are you going? What's broken? What needs to change?

    𝗥𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗧𝗶𝗺𝗲
    Friday afternoon or Sunday evening. Review the week. What actually mattered? What was noise? Track what moved you forward.

    𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗕𝗹𝗼𝗰𝗸𝘀
    30 minutes daily beats 3 hours once a month. Pick one skill. Schedule it or it won't happen.

    𝗛𝗲𝗮𝗹𝘁𝗵 𝗮𝗻𝗱 𝗪𝗲𝗹𝗹𝗻𝗲𝘀𝘀
    Exercise, walks, breaks. Your output depends on recovery. Treat these like client meetings.

    𝗥𝗲𝗹𝗮𝘁𝗶𝗼𝗻𝘀𝗵𝗶𝗽 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴
    Coffee meetings. Events. Follow-ups. Schedule time for the people who matter to your work.

    Block these first. Fit everything else around them. I wasted years letting my calendar fill up reactively. What you block this week sets the tone for your entire 2026. Who has done this already?

  • View profile for Daniella Genas MA. MBA

    Strategic Advisor to Founders & CEOs | Creator of the VISSA™ Framework | Curator of the Founders Dinner

    9,510 followers

    What do you do when you have lots of tasks to complete and are restricted by time? I time block. I thought everyone did until I was recently on a call with several people who were totally new to the concept.

    As a business owner who serves multiple clients, a mother, a trustee of First Class Foundation UK, a vice chair for Ethical Equity, and all the other hats I wear, I couldn't function without time blocking. I follow this exact process to ensure that I am able to remain productive, complete tasks, manage my time effectively, and reduce the chances of becoming overwhelmed.

    1. **List all of the tasks I have to complete in the time period.**
    2. **Break the tasks down into a checklist in Trello. Typically a big task will require a number of sub-tasks to complete. These form the checklist.**
    3. **Estimate how long each task will take, based on how long it normally takes me, then allocate a block of time to work on it. I normally overestimate to be safe: if I think it will take 30 minutes, I will allocate 50 minutes.**
    4. **Set start and end times for the entire day, including the documented tasks but also breaks.**
    5. **Once a time slot is complete, move on to the next.**

    If you often find yourself overwhelmed by all of your tasks, give the above a try and see how it goes. Do you already time block? Let me know in the comments.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    Hey 👋 I'm Daniella, a business growth strategist, passionate about empowering Entrepreneurs, Founders & Business Owners to grow & scale their businesses whilst reclaiming their time. I can help you get razor-sharp clarity on your vision, streamline your operations to save time & boost team performance, and transform your mindset so you can Think BIG, Take ACTION & Keep PUSHING to Success.
📈 I share posts mainly about business growth/scaling and business systemisation, with a sprinkling of motherhood and general musings. Drop me a DM to find out how I can help you grow or scale your service business and transform your life 😁

  • View profile for Mohammad Hasibul Haque

    Sr. Contract Manager || DOHWA Engineering Co., Ltd

    3,922 followers

    Practice Standard for Scheduling - Third Edition

    The PMI Practice Standard for Scheduling (3rd Edition) provides a comprehensive framework for creating and maintaining effective project schedules.

    Core Concepts:
    ~ Distinguishes between schedule models (the planning tool), schedule instances (snapshots at specific points), and schedule presentations (outputs for stakeholders)
    ~ Emphasizes that scheduling is more than just setting dates: it's about creating a dynamic model that responds predictably to changes

    What's New in the Third Edition:
    ~ Expanded coverage of Agile and adaptive approaches (Scrum, Kanban)
    ~ Introduction to emerging trends: location-based scheduling, BIM (Building Information Modeling), lean scheduling
    ~ Enhanced guidance on earned schedule as a monitoring technique
    ~ Forensic schedule analysis for understanding variances

    Key Methodologies Covered:
    ~ Critical Path Method (CPM): the foundation
    ~ Critical Chain: resource-focused approach with buffer management
    ~ Rolling Wave Planning: progressive elaboration
    ~ Monte Carlo simulation for risk analysis

    Essential Components:
    The standard identifies 111 scheduling components divided into:
    ~ 36 core-required components (mandatory for any schedule)
    ~ Conditional components for resource-loaded schedules, EVM integration, and risk management
    ~ Optional components for enhanced sophistication

    Practical Value:
    ~ Provides a conformance index to assess schedule quality
    ~ Emphasizes good practices: no open-ended activities, minimal constraints, proper logical relationships
    ~ Stresses that proper scheduling enables better communication, risk identification, and project control

    Bottom Line:
    Whether you're using predictive, adaptive, or hybrid approaches, this standard offers a blueprint for creating schedules that serve as true project management tools, not just compliance documents.
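
The Monte Carlo technique mentioned among the methodologies can be sketched in a few lines. This is a minimal example over a single chain of activities; the three-point (optimistic, most likely, pessimistic) duration estimates are invented, not figures from the standard:

```python
import random

# Invented three-activity chain with (optimistic, most likely, pessimistic)
# duration estimates in days.
activities = [(4, 5, 9), (2, 3, 6), (5, 7, 12)]

def simulate(n=10000, seed=42):
    """Monte Carlo risk analysis: sample each activity duration from a
    triangular distribution, sum along the chain, and take the P80."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, mode) for low, mode, high in activities)
        for _ in range(n)
    )
    p80 = totals[int(0.8 * n)]   # ~80% of simulated runs finish by this date
    return totals, p80

totals, p80 = simulate()
```

The point of reporting a P80 rather than the sum of "most likely" values is exactly the planning-fallacy correction the standard encourages: single-point estimates systematically understate how long chains of uncertain activities take.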
