I see finance teams spending days, even weeks, building Excel forecasts that break the moment business patterns shift. There's a better way.

I just published a walkthrough showing how to implement 𝗠𝗟-𝗯𝗮𝘀𝗲𝗱 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴 𝗶𝗻 𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁 𝗙𝗮𝗯𝗿𝗶𝗰 - achieving >95% accuracy with a setup that takes hours, not the days or weeks Excel requires.

Once configured in Fabric notebooks, forecasts refresh automatically - no more monthly Excel gymnastics. CFOs get conservative, baseline, and stretch scenarios from the same model, and it adapts to trend changes without manual recalibration.

The approach works beyond AR (Accounts Receivable) - I've used similar frameworks for sales forecasting, inventory planning, and capacity projections across Telco, Oil & Gas, and Pharma clients.

𝗪𝗵𝗮𝘁 𝘁𝗵𝗲 𝘁𝘂𝘁𝗼𝗿𝗶𝗮𝗹 𝗰𝗼𝘃𝗲𝗿𝘀:
• Prophet framework for automatic seasonality detection
• 12-month cash flow predictions with confidence intervals for scenario planning
• Lakehouse integration for automatic Power BI refresh
• Cross-validation workflow that tunes parameters automatically

𝗥𝗲𝗮𝗹 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆 𝗺𝗲𝘁𝗿𝗶𝗰𝘀: On the sample data I achieved 3% MAPE (Mean Absolute Percentage Error) - about $45K average variance on $1.5M monthly collections. The industry target is under 5%.

𝗧𝘂𝘁𝗼𝗿𝗶𝗮𝗹 𝗮𝗻𝗱 𝗻𝗼𝘁𝗲𝗯𝗼𝗼𝗸 𝗹𝗶𝗻𝗸 𝗶𝗻 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 🎥👇

#MicrosoftFabric #PowerBI #MachineLearning #DataAnalytics #Forecasting
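Since the post leans on MAPE as its headline metric, here is a minimal sketch of how that number is computed. The function and the sample figures are illustrative, not taken from the tutorial notebook:

```python
# Minimal sketch: computing MAPE (Mean Absolute Percentage Error),
# the accuracy metric quoted in the post. Values are made up.

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, expressed as a percentage.
    Zero actuals are skipped to avoid division by zero."""
    if len(actuals) != len(forecasts):
        raise ValueError("series must be the same length")
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts) if a != 0]
    return 100 * sum(errors) / len(errors)

# Example: monthly collections (actual) vs. model forecast, in $K
actual   = [1500, 1480, 1520, 1510]
forecast = [1455, 1515, 1495, 1540]
print(f"MAPE: {mape(actual, forecast):.1f}%")
```

A 3% MAPE on ~$1.5M monthly collections is what translates to roughly $45K of average variance.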
Adaptive Forecasting Systems
Explore top LinkedIn content from expert professionals.
Summary
Adaptive forecasting systems are advanced tools that automatically adjust predictions as new data and external factors emerge, helping businesses stay agile when conditions change unexpectedly. These systems combine machine learning, statistical models, and contextual signals to provide more accurate and flexible forecasts than traditional methods.
- Embrace automation: Set up forecasting systems that refresh predictions automatically, saving you time and minimizing manual adjustments.
- Incorporate external signals: Include relevant influences such as weather, promotions, or holidays in your forecast models to improve accuracy and make results more actionable.
- Use flexible frameworks: Adopt forecasting solutions that handle different types of data and adapt to changing trends, so your predictions remain reliable across various scenarios.
-
What if your 12-month forecast is 80% wrong? That’s not bad forecasting. It’s bad strategy.

Too many forecasts are treated like crystal balls:
🔮 Rigid.
🔮 Unrealistic.
🔮 Quickly irrelevant.

The truth? Forecasts don’t need to be perfect. They need to be adaptable.

Here’s how I build forecasts that empower instead of paralyze:
📊 Rolling forecasts that shift with reality - not once a year, but monthly or quarterly.
🧭 Scenario planning that prepares leaders for the unexpected - not just the most likely case.
📉 Sensitivity analysis that highlights which assumptions actually move the needle.
🛠️ Driver-based models so every number ties to real-world levers.
🧠 And above all: decision-useful insight > spreadsheet perfection.

In today’s volatile environment, static forecasts break. Agile finance wins.

💬 How are you building flexibility into your forecasts this year?

#CFOInsights #Forecasting #AgileStrategy #ScenarioPlanning #OperationalFinance #FinancialLeadership #StrategyExecution
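The sensitivity-analysis idea above - finding which assumptions actually move the needle - can be sketched with a toy driver-based model. All driver names and values here are invented for illustration:

```python
# Toy driver-based model with a one-at-a-time sensitivity check.
# Every number ties to a real-world lever; shocking each lever by
# +10% shows which assumption dominates the outcome.

def operating_profit(units, price, unit_cost, fixed_costs):
    """Profit from a simple driver tree: volume x margin minus fixed costs."""
    return units * (price - unit_cost) - fixed_costs

base = dict(units=10_000, price=120.0, unit_cost=70.0, fixed_costs=300_000.0)
base_profit = operating_profit(**base)

# Shock each driver by +10% on its own and record the profit impact.
impacts = {}
for driver, value in base.items():
    shocked = {**base, driver: value * 1.10}
    impacts[driver] = operating_profit(**shocked) - base_profit

for driver, delta in sorted(impacts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{driver:>11}: {delta:+,.0f}")
```

In this made-up example, a 10% price move swings profit far more than a 10% volume move, which is exactly the kind of ranking that tells leadership where to focus.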
-
Are we really delivering the best possible forecasts with state-of-the-art foundation models if our models stop at historical patterns and ignore the external signals shaping the future?

In the last few years, we've all seen how foundation models started transforming time series forecasting - unlocking strong zero-shot performance and making high-quality predictions possible without task-specific tuning. But most of these models are univariate: they treat time series as isolated signals, leaving out exogenous factors that are often critical for accurate prediction.

And that's not how forecasting works outside of a benchmark. Promotions, holidays, weather, pricing - these external influences often explain as much of the future as the past itself. Ignoring them leads to wider prediction intervals and forecasts that are harder to translate into real business decisions.

So the real challenge now is: how do we bring that missing context into foundation models? That's the problem Chronos-2 was designed to solve. We built Chronos-2 to handle covariates and multivariate data in a zero-shot manner, and on benchmarks focused on these tasks it achieves significant reductions in forecast error.

Building a foundation model that can handle such diverse, context-dependent signals is not straightforward. Each forecasting task is unique - the number of features, their semantic meaning, and their interactions differ. The solution is a model that can adapt with in-context learning (ICL). Chronos-2 tackles this with two key components:

1. Architecture. In addition to standard temporal attention, we introduce group attention layers that enable information mixing across dimensions, allowing the model to learn from exogenous signals.
2. Training data. Multivariate and covariate time series data are extremely scarce, so we use synthetic data augmentation, adding multivariate structure on top of the univariate series commonly used for pretraining.

The result is strong empirical performance across domains. In retail, Chronos-2 captures the impact of promotions on sales. In energy, it learns how weather influences energy consumption. In both cases, incorporating covariates significantly improves forecast accuracy and narrows prediction intervals - making forecasts more actionable.

Chronos-2 is available under the Apache 2.0 license and ready to use. Give it a try and let us know what you think!

📄 Technical report: https://lnkd.in/d4RZG8Rq
💻 GitHub: https://lnkd.in/d9mvFT5B
📓 Example notebook: https://lnkd.in/dz69pCyu

Abdul Fatir Ansari, Jaris Küken, Andreas Auer, Yuyang (Bernie) Wang, George Karypis, Huzefa Rangwala, Michael Bohlke-Schneider, Nick Erickson, Boran Han, Pedro Mercado, Syama Sundar Rangapuram, Huibin Shen, Lorenzo Stella, Amazon Science
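As rough intuition for the training-data component described above, the following toy sketch builds synthetic multivariate structure by mixing independent univariate series. It is a simplification for intuition only, not the actual Chronos-2 augmentation recipe:

```python
import numpy as np

# Toy illustration of synthetic multivariate augmentation: start from
# unrelated univariate series (as commonly used for pretraining) and mix
# them so the result has cross-dimension structure to learn from.

rng = np.random.default_rng(0)
univariate = rng.standard_normal((3, 200))   # three unrelated series

# A non-orthogonal mixing matrix induces correlation across dimensions.
mixing = np.array([
    [1.0, 0.0, 0.0],
    [0.8, 0.6, 0.0],
    [0.5, 0.5, 0.7],
])
multivariate = mixing @ univariate

corr = np.corrcoef(multivariate)
print(np.round(corr, 2))   # off-diagonal correlations are now clearly nonzero
```

The original rows are nearly uncorrelated; after mixing, the dimensions co-move, which is the kind of multivariate dependency a covariate-aware model has to pick up.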
-
Forecasting infectious diseases remains challenging because purely data-driven models often overfit, while traditional compartmental models struggle to adapt when transmission conditions change.

Our newly published study in Journal of the Royal Society Interface examines whether physics-informed neural networks (PINNs) can bridge this gap by embedding a full epidemiological ODE system directly into a deep learning model. Using COVID-19 data from California, we evaluate how well this hybrid approach predicts cases, hospitalizations, and deaths across 1–4 week horizons.

We find that PINNs produce more stable and more accurate forecasts than naïve baselines and common sequence models (RNNs, LSTMs, GRUs, Transformers), while remaining simpler to implement than large state-space models. The work suggests a viable path toward forecasting frameworks that learn from data while staying anchored to disease dynamics.

🔗 https://lnkd.in/eFkW_egm

#Forecasting #COVID19 #AI #MachineLearning #ScientificMachineLearning #PhysicsInformedNeuralNetworks #ComputationalEpidemiology #EpidemiologicalModeling #InfectiousDiseaseModeling #TimeSeriesForecasting #DynamicalSystems
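For readers new to PINNs, here is a conceptual sketch of the kind of loss such a model minimizes: a data-fit term plus a penalty on the residual of the governing ODE. A basic SIR system stands in for the paper's full epidemiological model, and all parameter values below are invented:

```python
import numpy as np

# Sketch of a physics-informed loss: fit the observations AND stay
# consistent with the disease dynamics. Here the "physics" is a simple
# SIR model checked by finite differences; beta, gamma, N are made up.

beta, gamma, N = 0.3, 0.1, 1_000_000.0
dt, steps = 1.0, 160

# Ground-truth trajectory from forward Euler integration of SIR.
S, I = [N - 10.0], [10.0]
for _ in range(steps):
    new_inf = beta * S[-1] * I[-1] / N
    S.append(S[-1] - dt * new_inf)
    I.append(I[-1] + dt * (new_inf - gamma * I[-1]))
S, I = np.array(S), np.array(I)

def ode_residual(S_traj, I_traj):
    """Finite-difference residual of the S and I equations along a trajectory."""
    dS = np.diff(S_traj) / dt
    dI = np.diff(I_traj) / dt
    inf = beta * S_traj[:-1] * I_traj[:-1] / N
    return np.mean((dS + inf) ** 2 + (dI - inf + gamma * I_traj[:-1]) ** 2)

observed = I + np.random.default_rng(1).normal(0.0, 50.0, I.shape)  # noisy cases

def pinn_loss(I_pred, lam=1.0):
    """Data misfit + physics penalty: the two terms a PINN trades off."""
    return np.mean((I_pred - observed) ** 2) + lam * ode_residual(S, I_pred)
```

A trajectory that follows the dynamics has near-zero physics residual, so the model is rewarded for staying anchored to the ODE even when observations are noisy; that anchoring is what the hybrid approach buys over purely data-driven sequence models.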
-
Great news, everyone! After years of development in R, the first release of the smooth forecasting package is now available for Python. Why is this great news? Let me explain.

Full post with details, code examples, and benchmark results is here: https://lnkd.in/e2xD6ZbY

Why does smooth exist?
There are plenty of ETS and ARIMA implementations out there (R/Python/Julia), but they typically cover standard models with basic functionality. smooth is built around the idea of flexibility. Its core function, ADAM (Augmented Dynamic Adaptive Model), unifies ETS, ARIMA, and regression in a single state-space framework. It supports multiple seasonalities, non-normal distributions, automatic model selection, intermittent demand modelling, advanced loss functions, and much more - features you simply won't find combined in any other Python forecasting package. On top of that, the R functions have been refined over years to handle almost any real-life situation robustly and at speed.

A bit of history
The smooth package has been available in R since 2016, growing from v1.4.3 to v4.4.0 over a decade. As R's share in business declined, the case for a Python port became clear. In early 2023, Leonidas Tsaprounis stepped up to help set up the C++ bindings using pybind11 and carma, laying the technical groundwork. Later that year, Filotas Theodosiou joined the team and took on the enormous task of translating thousands of lines of R code into Python, using his LLM friends to accelerate the process. After two years of collaborative effort, v1.0.0 was reached in January 2026, with ADAM/ES now producing identical model selections and parameter estimates in both R and Python.

How does it perform?
The package was benchmarked on 5,315 time series from the M1, M3, and Tourism datasets against statsforecast, skforecast, and sktime. Here is the notebook: https://lnkd.in/eaawPJpP

Key findings (see the second image in the post):
- In terms of median RMSSE, smooth's ES with backcasting beats all other implementations tested.
- smooth ADAM/ES with backcasting is the second most accurate overall in terms of mean RMSSE, outperforming skforecast and sktime on both RMSSE and SAME.
- ADAM with backcasting is faster than skforecast and sktime, and records the lowest maximum computation time across difficult series.
- Nixtla's statsforecast AutoETS is the fastest and most accurate on average (well done, guys!).

What's next?
Planned additions include explanatory variables (ETSX and ARIMAX), full ARIMA support, intermittent demand modelling, simulation functions, model diagnostics, and the porting of CES, MSARIMA, GUM, and SMA from R. Plenty to look forward to across 2026.

So why wait? Install it via "pip install smooth" and give it a try! :)

P.S. A big thank you to Gustavo Niemeyer for transferring the smooth name on PyPI to us!

#forecasting #datascience #machinelearning
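For readers unfamiliar with the model family, here is a from-scratch simple exponential smoothing forecast - the most basic member of the ETS family that ADAM generalizes. This is a plain illustration of the idea, not the smooth package's API; see the package documentation for actual usage:

```python
# Simple exponential smoothing (SES), the simplest ETS model:
# the level is an exponentially weighted average of past observations,
# and the h-step-ahead forecast is flat at the final level.

def ses_forecast(y, alpha=0.3, horizon=3):
    """level_t = alpha * y_t + (1 - alpha) * level_{t-1};
    forecast is the last level repeated over the horizon."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return [level] * horizon

print(ses_forecast([100, 102, 101, 105, 107], alpha=0.5))
```

ETS, and by extension ADAM, extends exactly this recursion with trend and seasonal states, error distributions, and automatic selection of which components to include.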
-
Generative AI and simulations are revolutionizing forecasting by transforming it from a static, historical-based approach into a dynamic, adaptive system. Traditional forecasting methods struggle to handle complex interactions, sudden market shifts, and limited data. In contrast, AI-driven simulations can integrate real-time data, generate multiple possible future scenarios, and adjust predictions continuously as new information becomes available.

By leveraging synthetic data generation, businesses can forecast even in situations where historical data is lacking, such as emerging markets or new product launches. AI also enhances risk management by enabling organizations to explore best-case, worst-case, and most likely scenarios, making forecasts more realistic and actionable.

In the example below, we see AI agents mimicking the behavior of traders in the futures market, each assuming different roles with distinct behaviors and interests. These agents must open or close positions as various factors influence their strategies. The feedback mechanism relies on real market data, using mathematical approximations based on daily Open Interest, trading volume, price direction, volatility, and other key indicators.
-
Annual budgets give a comforting illusion of control. You debate decimal points in December, then watch the plan fall apart by February.

I’ve sat through three-hour meetings where the C-suite argued over 0.2% revenue differences. By the time we hit Q1 close, none of it mattered.

Across nearly 20 companies as a CFO, board member, and advisor, I've learned: the best operators don’t abandon budgeting. They reinvent it. They turn it into a living system. Here’s how...

1️⃣ Rolling 4-Quarter Outlook
→ Update monthly with actual results
→ Always look 12 months ahead
→ Keeps leadership focused on what's next, not last year's plan

2️⃣ Three Scenarios
→ Base case: most likely
→ Upside: when execution outperforms
→ Downside: when key risks land harder than expected

3️⃣ Monthly Reality Check
→ Compare actuals vs. forecast
→ Adjust assumptions based on what you've learned
→ Make resource calls in real time

This isn't easy. Getting alignment on scenarios and assumptions takes work - especially when the CEO and board see the future differently. But it's far more valuable than clinging to a static plan everyone stopped believing in months ago.

Why it works:
⚡ Speed - plans evolve monthly instead of annually.
📊 Reality - assumptions tested continuously, not once a year.
🌟 Focus - energy shifts to execution, not process.

How to start:
✅ Identify your top five drivers of performance
✅ Build base / upside / downside assumptions for each
✅ Update monthly with actuals and review at the leadership table

The companies that run this way adapt faster and decide smarter. The ones chasing false precision? They’re still defending documents no one reads.

➡️ I help CFOs and leadership teams shift from static planning to adaptive finance - turning budgets from theater into a real decision system.
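The rolling 4-quarter mechanic can be sketched in a few lines: at each close, the finished quarter drops out and a newly projected quarter is appended, so the outlook always covers the next four quarters. The labels, values, and naive growth assumption below are all made up:

```python
from collections import deque

# Sketch of a rolling 4-quarter outlook: a fixed-length window that
# slides forward one quarter at every monthly/quarterly close.

outlook = deque([("Q1", 2.0), ("Q2", 2.2), ("Q3", 2.1), ("Q4", 2.4)])

def roll_forward(outlook, closed_quarter_actual, next_label, growth=1.02):
    """Drop the just-closed quarter; append a new quarter projected
    off the latest actual (here a naive assumed growth rate)."""
    outlook.popleft()
    outlook.append((next_label, round(closed_quarter_actual * growth, 2)))
    return outlook

# Q1 closes at 1.9 (below the 2.0 plan); the outlook re-anchors on that.
roll_forward(outlook, closed_quarter_actual=1.9, next_label="Q5")
print(list(outlook))   # still four quarters, now Q2 through Q5
```

The point of the structure is that assumptions get re-tested against actuals every cycle instead of once a year; in practice the projection step would come from the driver model, not a flat growth rate.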
-
The forecast was solid. The assumptions were fine. We still lost $6.7M.

1 year ago, I sat across from a telecom CFO who looked like he hadn’t slept in a week. “We’re already $4.2M over budget. And that’s before the fiber supplier pushed their rates. If this goes south, I’m the one holding the bag.”

I knew that feeling. When your forecast is outdated, vendors are behind, and everyone’s chasing you for answers, it’s not just stressful - it’s dangerous.

Here’s the truth: forecasts don’t fail because they’re wrong. They fail because they’re TOO slow. In telecom, delays pile up fast - fiber, permitting, contractors, energy access. By the time your spreadsheet catches the signal, it’s already too late.

That month almost broke us. We were rolling out a $68M fiber expansion across two metro markets. The model was solid. Phased CapEx. Built-in buffers. Then came the hits:
Week 4: vendor delay.
Week 6: trenching paused.
Week 9: OT surges.
Week 11: $6.7M projected overrun.

Too late. Commitments were locked. Our dynamic model wasn’t dynamic at all: it took 5 days and 2 analysts to update. And margin leaked out while we refreshed Excel tabs.

So we ran a test. We piloted an AI forecasting tool on one segment. No system overhaul. It pulled from our existing data - vendor logs, field reports, real-time updates - and learned fast.

After 60 days:
– AI forecast deviation: 2.9%
– Traditional model: 11.6%
Same inputs. 4x tighter accuracy.

6 months later:
– Forecast cycle time cut 85%
– Overrun risk down 28%
– Payment delays down 41%
– 1.8 FTEs freed from manual work

More importantly: we stopped reacting. We started steering. I wasn’t explaining misses. I was leading with insight.

Here’s what I learned: AI doesn’t replace finance. It gives it foresight. 90% of infrastructure CFOs sit on fragmented data - PM tools, vendor reports, accounting systems - but can’t act fast enough. That’s not a data problem. It’s a visibility problem. And it’s costing you.

Start small. Pick one wedge: CapEx, vendor risk, tower maintenance. Run a 6-week sprint. Compare the results. Then scale. You don’t need a massive AI team. You need faster insight.

If I could go back, I’d start earlier. In infrastructure, delay is the real killer - not bad forecasts, but slow ones.

Found this helpful?
→ Repost it. Someone in your network needs it.
→ DM me “Clarity” and I’ll send the sprint blueprint we use with telecom, energy, and logistics CFOs.
→ Follow me for weekly playbooks on AI, finance, and infrastructure.

Let’s stop playing defense - and make your margin predictable.
-
Google Just Changed Time-Series Forecasting Forever (And Hardly Anyone's Talking About It)

If you thought foundation models were only for text, code, or chatbots, think again. Google just dropped something called TimesFM-ICF (In-Context Fine-Tuning), and it might be the biggest leap in forecasting AI we've seen in years.

What's so special? It can learn and adapt at inference time, without retraining, and achieve accuracy better than fine-tuned models.

Let me explain in simple words. In traditional forecasting, companies face two bad choices:
1. Fine-tune a model for every new dataset - expensive, slow, and painful.
2. Use one big model for everything - easier, but often inaccurate.

TimesFM-ICF breaks this trade-off. It uses a few in-context examples (basically, real data snippets you give it during prediction) and instantly adapts on the fly. No retraining. No gradient updates. No massive pipelines.

Result:
- 6.8% more accurate than the base model
- Matches or beats fine-tuned performance
- Takes 4 mins vs 115 mins to adapt

Why This Matters for Builders
Think about forecasting in retail, finance, or energy:
- Predicting product demand
- Anticipating energy usage
- Estimating stock trends
- Modeling traffic flow

Right now, every company trains a new model for each problem. With TimesFM-ICF, one model can handle all those use cases, just by feeding it a few examples for context. This is like giving ChatGPT a custom instruction mid-conversation... but for numerical forecasting.

My Favorite Part: The "Separator Token"
Here's the nerdy but cool part: Google added a learnable separator token - a tiny signal between different time-series examples. This prevents the model from mixing them up and lets it understand patterns across multiple series without confusion. This one change lets the model act like an LLM for time series - learning from context, not just training data.

Why It's a Big Deal for the Future of AI
- One model for many forecasting tasks
- No more complex re-training pipelines
- Fast, accurate adaptation in minutes
- Better performance than fully fine-tuned systems

This is more than just a research paper - it's a new paradigm. We're moving from "train and deploy" to "adapt and predict." And that shift will completely change how companies build forecasting products.

My take: this is what the next era of AI looks like - models that learn from context, not just data. They're flexible, reusable, and incredibly powerful, and they'll power the next wave of predictive intelligence across industries.

Want to dive deeper? The TimesFM-ICF research paper is worth a weekend read: https://lnkd.in/gEp7ASqU

Now tell me: if you could predict anything in your business using a model like this, what would you forecast first?

Repost!! #AI #Forecasting #GoogleAI #BuildWithAI
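To make the in-context setup concrete, here is a toy sketch of how several support series and a target history might be packed into one context with separators. The real TimesFM-ICF separator is a learned embedding inside the model; a NaN sentinel here just marks the boundaries:

```python
import numpy as np

# Toy illustration of in-context examples for forecasting: related
# series are concatenated into one context, delimited by a separator
# so the model can tell where one series ends and the next begins.

SEP = np.nan   # stand-in for the learnable separator token

def build_icl_context(support_series, target_history):
    """Concatenate in-context example series + target history,
    inserting a separator after each support series."""
    parts = []
    for s in support_series:
        parts.append(np.asarray(s, dtype=float))
        parts.append(np.array([SEP]))
    parts.append(np.asarray(target_history, dtype=float))
    return np.concatenate(parts)

ctx = build_icl_context(
    support_series=[[1, 2, 3, 4], [10, 20, 30]],   # data snippets for context
    target_history=[5, 6, 7],                      # series we want to forecast
)
print(ctx)
```

Without the boundary markers, the model would see one long series with spurious jumps at the joins; the separator is what lets it treat each snippet as its own example.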
-
Because a wrong demand forecast ruins everything else...

This infographic compares statistical forecasting vs machine learning (ML) in demand forecasting:

✅ Approach
🧮 Statistical forecast: relies on historical data patterns, with limited capacity for external variables
🤖 Machine learning (ML): uses advanced algorithms to detect complex patterns, incorporating economic indicators and social trends

✅ Best to Use For
🧮 Statistical forecast: stable demand patterns with minimal external variables
🤖 Machine learning (ML): changing demand with diverse external influences (e.g., promotions, weather)

✅ Accuracy
🧮 Statistical forecast: works well for simple, well-defined time-series patterns (e.g., seasonality, trends)
🤖 Machine learning (ML): more accurate for complex, high-dimensional data; forecast accuracy rates are 10-20% higher

✅ Model Type Examples
🧮 Statistical forecast: exponential smoothing, moving averages
🤖 Machine learning (ML): neural networks, random forests, XGBoost

✅ Adaptability
🧮 Statistical forecast: requires manual intervention for changing trends or patterns
🤖 Machine learning (ML): highly adaptable to changing demand patterns with retraining

✅ Scalability
🧮 Statistical forecast: limited scalability; small datasets or simple SKU portfolios
🤖 Machine learning (ML): scales easily for large datasets and complex SKU portfolios

✅ Team
🧮 Statistical forecast: Supply Chain Teams can build most of these models themselves
🤖 Machine learning (ML): Supply Chain Teams with Data Scientists are required

Any others to add?
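A toy contrast of the two columns: a moving average (statistical, history-only) against a least-squares fit that also sees a promotion flag, standing in for ML's use of external variables. The data is synthetic and the numbers are illustrative:

```python
import numpy as np

# Synthetic demand with a weekly promotion lift. A history-only moving
# average cannot anticipate promo spikes; a model given the promo flag
# as an external variable learns the lift and fits much more tightly.

rng = np.random.default_rng(42)
n = 60
promo = (np.arange(n) % 7 == 0).astype(float)       # weekly promotion flag
demand = 100 + 30 * promo + rng.normal(0, 2, n)     # promos lift demand ~30 units

# Statistical: 4-period moving average, ignoring the promo signal.
ma_forecast = np.convolve(demand, np.ones(4) / 4, mode="valid")[:-1]
ma_err = np.mean(np.abs(ma_forecast - demand[4:]))  # one-step-ahead MAE

# Covariate-aware: regress demand on [1, promo] by least squares.
X = np.column_stack([np.ones(n), promo])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
reg_err = np.mean(np.abs(X @ coef - demand))

print(f"moving average MAE:        {ma_err:.1f}")
print(f"with promo covariate MAE:  {reg_err:.1f}")
```

The gap comes entirely from the external variable: the regression recovers the ~30-unit promo lift, while the moving average is blindsided by every spike - a small-scale version of the adaptability row in the infographic.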