Business Forecasting Methods


  • View profile for Olga Berezovsky

    Head of Data & Analytics

    22,077 followers

Forecasting is hard. Finding analysts who do it well is even harder. Too often, I see forecasting either: 1. Overcomplicated: Applying complex ML models just to predict a moving average (?!), or 2. Oversimplified: Running regressions without understanding what the coefficients even mean. I personally use 4 forecasting methods to model a range of outcomes, from conservative to aggressive: 1. ARIMA - Models the series from its own past values, w/o seasonality adjustment. 2. SARIMAX - Like ARIMA, but accounts for seasonality. Likely the safest, most conservative forecast. 3. Prophet - Captures non-linear trends and seasonality. Often the most accurate. My favorite model for growth forecasts. 4. Manual Projection – aka Olga's secret, overly complicated manual projection. I plot every available metric’s historical D/D, W/W, M/M, and Y/Y % change and analyze their (a) correlations and relationships and (b) seasonal thresholds. It takes ages to complete, but it delivers the most precise forecast. If done right. If I can account for everything the teams are doing. Which is rarely the case. 😬 When reporting, I typically present only Prophet alongside my Projection, keeping ARIMA and its variations for myself as checks. There are many time series models out there: MA, AR, ARMA, ARIMA, SARIMA, Exponential Smoothing, VAR, and more. Forecasts are fun.
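The manual projection above starts from period-over-period % changes. As a rough illustration (not Olga's actual workflow), pandas' `pct_change` computes the D/D, W/W, M/M, and Y/Y series on a synthetic daily metric:

```python
import numpy as np
import pandas as pd

# Synthetic daily metric: mild trend plus a weekly cycle (illustrative data only)
idx = pd.date_range("2023-01-01", periods=730, freq="D")
values = 100 + 0.1 * np.arange(730) + 10 * np.sin(2 * np.pi * np.arange(730) / 7)
metric = pd.Series(values, index=idx)

# Period-over-period % changes feeding the manual projection
changes = pd.DataFrame({
    "d_d": metric.pct_change(1),    # day over day
    "w_w": metric.pct_change(7),    # week over week
    "m_m": metric.pct_change(30),   # month over month (approximate)
    "y_y": metric.pct_change(365),  # year over year
})

# Correlations between the change series, one input to the analysis
corr = changes.corr()
```

From here, the analysis layers on the seasonal thresholds and cross-metric relationships described in the post.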

  • View profile for Kristen Kehrer

    AI & Data Strategy | Author 4x | [In]structor | Helping Leaders Understand AI Systems

    103,983 followers

Modeling something like time series goes beyond just throwing features into a model. In the world of time series data, each observation is associated with a specific time point, and part of our goal is to harness the power of temporal dependencies. Enter autoregression and lagging - concepts that tap into the correlation between current and past observations to make forecasts. At its core, autoregression involves modeling a time series as a function of its previous values. The current value relies on its historical counterparts. To dive a bit deeper, we use lagged values as features to predict the next data point. For instance, in a simple autoregressive model of order 1 (AR(1)), we predict the current value as the previous value multiplied by a coefficient. The coefficient determines how strongly the value one period back influences the present one. One popular approach that can be used in conjunction with autoregression is the ARIMA (AutoRegressive Integrated Moving Average) model. ARIMA is a powerful time series forecasting method that incorporates autoregression, differencing, and moving average components. It's particularly effective for data with trends; its seasonal extension, SARIMA, adds explicit seasonality handling. ARIMA can be fine-tuned with parameters like the order of autoregression, differencing, and moving average to achieve accurate predictions. When I was building ARIMAs for econometric time series forecasting, in addition to autoregression, where you're lagging the whole model, I was also taught to lag the individual economic variables. If I was building a model for energy consumption of residential homes, the number of housing permits each month would be a relevant variable. But if a ton of housing permits are issued in January, you won’t see the actual effect until later, when the houses are built and people are actually consuming energy! That variable needed to be lagged by several months.
Another innovative strategy to enhance time series forecasting is the use of neural networks, particularly Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks. RNNs and LSTMs are designed to handle sequential data like time series. They can learn complex patterns and long-term dependencies within the data, making them powerful tools for autoregressive forecasting. Neural networks are fed past time steps as inputs to predict future values. In addition to autoregression, I used lagging with neural networks too! When I built an hourly model to forecast electric energy consumption, I actually built 24 individual models, one for each hour, each lagged on the previous one. The energy consumption and weather of the previous hour were very important in predicting what would happen in the next forecasting period. (This model was actually used to determine where to shift electricity during peak load times.) Happy forecasting!
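The AR(1) idea (regress the current value on its lag) fits in a few lines of NumPy. This is a generic sketch, not the energy models described above; the series, seed, and true coefficient are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process: x_t = 0.8 * x_{t-1} + noise
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

# Estimate the coefficient by regressing x_t on its lag x_{t-1}
x_lag, x_cur = x[:-1], x[1:]
phi = (x_lag @ x_cur) / (x_lag @ x_lag)

# One-step-ahead forecast from the last observation
forecast = phi * x[-1]
```

The same shifting trick applies to exogenous drivers: a `housing_permits` column would simply be shifted forward by several months before being fed in as a feature.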

  • View profile for Manish Kumar, PMP

    Demand & Supply Planning Leader | 40 Under 40 | 3.9M+ Impressions | Functional Architect @ Blue Yonder | ex-ITC | Demand Forecasting | S&OP | Supply Chain Analytics | CSM® | PMP® | 6σ Black Belt® | Top 1% on Topmate

    14,412 followers

I was interviewing a bright candidate for a demand planner role a while back. To gauge his practical thinking, I posed a scenario. "Imagine your system generates a baseline forecast of 10,000 units for a key product next month. The statistical model is sound. What's your next action?" He gave a textbook-perfect answer about reviewing historical trends, model accuracy, and checking for outliers. All crucial steps. I paused. "That's an excellent start. But what if that 10,000, as precise as it looks, is missing the most critical piece of information?" He seemed curious, waiting for the answer. This is a scenario I see play out in many organizations. We invest heavily in sophisticated forecasting systems that are brilliant at analyzing the past. But they often lack forward-looking context. A forecast is just a number until we enrich it. Many industry studies highlight that forecasts relying purely on historical data can miss the mark significantly, sometimes by as much as 30-40% for more volatile items. The plan becomes a mathematical exercise, disconnected from the commercial realities on the ground. This is where the concept of Demand Enrichment becomes invaluable. It is the structured process of layering qualitative intelligence on top of the quantitative baseline. In a previous role, we transformed our forecast accuracy not by buying new software, but by changing our process. Our statistically generated forecast was our starting point, not our destination. We built a simple but disciplined enrichment framework: - Collaborative Input: We worked with the sales team to capture insights on key account promotions, new listings, or potential risks. This wasn't a casual chat; it was a structured input into the plan. - Marketing Integration: The marketing team’s activity calendar was overlaid onto the demand plan. We could now quantify the expected uplift from a specific campaign instead of just hoping for the best.
- Market Intelligence: We dedicated a small part of our demand review meeting to discussing competitor activity and market trends, translating these discussions into tangible assumptions in our plan. Suddenly, the number had a narrative. 10,000 units was no longer just a point on a graph. It became "10,000 units, which is composed of 8,500 baseline sales, an anticipated lift of 2,000 units from the 'Summer Sale' campaign, offset by a potential 500 unit loss due to a competitor launching a similar product." This enriched number is something the entire organization can understand, align on, and execute against. It transforms the forecast from a passive prediction into an active planning tool. ----- If you have any questions about Demand and Supply planning, feel free to ask using the links in the Bio. P.S. How does your organization go beyond the numbers to tell the full story of your demand?
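The enriched number is just a transparent sum of labeled assumptions. A minimal sketch, using the figures from the example in the post:

```python
# Layered demand enrichment: qualitative inputs on top of the statistical baseline
baseline = 8_500          # statistical forecast (historical data)
promo_uplift = 2_000      # 'Summer Sale' campaign (marketing input)
competitor_impact = -500  # competitor launch risk (market intelligence)

enriched_forecast = baseline + promo_uplift + competitor_impact
print(enriched_forecast)  # 10000
```

Keeping each layer as a named assumption is what lets the whole organization see, challenge, and align on the narrative behind the number.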

  • View profile for Bill Stathopoulos

    CEO, SalesCaptain | Clay London Club Lead 👑 | Top lemlist Partner 📬 | Investor | GTM Advisor for $10M+ B2B SaaS

    20,880 followers

🔥 The lead scoring blueprint you wish you had 3 quarters ago. Built on Clay’s internal prioritization model, it’s the same system we apply internally at SalesCaptain and with our clients. At SalesCaptain, we work with go-to-market teams across industries. And this prioritization matrix consistently drives impact. Why? Because it aligns sales, marketing, and growth around the ONLY two questions that matter: 1. Is this account the right fit? 2. Are they showing meaningful engagement right now? We walked through this in our recent webinar with Clay, where we shared a practical 2x2 matrix that drives everything from outbound plays to PLG routing to paid campaigns. 👉 If you only update one thing in your GTM motion for 2026, make it this. Here is how the "2026 GTM Prioritization Matrix" works ✅ Account Fit Score We look at indicators like: - B2B vs B2C - GTM motion (PLG + SLG) - Stack: Salesforce, HubSpot, Snowflake, Clay, etc. - ICP signals: size, vertical, hiring patterns - Similarity to past closed-won accounts ➡️ This tells us: is this account worth pursuing at all? ✅ Engagement Score We track behaviors like: - Pricing page visits - LinkedIn engagement - Webinar attendance - Product activation - Positive replies to outbound ➡️ This tells us: are they leaning in, right now? Then we tier every account accordingly: 🟥 Tier 4: De-prioritize → Low fit, low engagement → No sales effort. Light nurture via PLG motion 🟦 Tier 3: Opportunistic Sales → High engagement, low fit → Route to PLG.
Sales steps in only when signals are strong 🟨 Tier 2: Marketing Nurture → High fit, low engagement → Warm up with content, events, and thought leadership 🟩 Tier 1: Target Accounts → High fit, high engagement → AE multi-threading, dinners, BOFU ads, the full pipeline play This matrix now powers every core GTM workflow we run: * Clay-based scoring + tiering * CRM enrichment * Real-time Slack alerts * Tier-specific outbound messaging * Dynamic paid campaigns * Internal dashboards * Client workflows Whether you’re running outbound, PLG, or ABM (or all of the above), this system adapts and scales. We’ve deployed versions of it for category leaders, high-velocity startups, and bootstrapped teams. It works, it scales, and it gets your entire GTM speaking the same language. These strategies separate good GTM from elite GTM. Save this post and share it with your team.
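The 2x2 routing logic itself is simple to encode. A sketch with made-up 0-1 score scales and a single threshold (Clay's actual scoring model is more involved):

```python
def route_account(fit: float, engagement: float, threshold: float = 0.5) -> str:
    """Map 0-1 fit/engagement scores onto the four prioritization tiers."""
    high_fit = fit >= threshold
    high_engagement = engagement >= threshold
    if high_fit and high_engagement:
        return "Tier 1: Target Accounts"      # full pipeline play
    if high_fit:
        return "Tier 2: Marketing Nurture"    # warm up with content and events
    if high_engagement:
        return "Tier 3: Opportunistic Sales"  # route to PLG; sales on strong signals
    return "Tier 4: De-prioritize"            # light nurture only

# Example: strong fit but quiet account -> marketing's job to warm up
print(route_account(0.9, 0.2))  # Tier 2: Marketing Nurture
```

In practice the two scores would be composites of the fit indicators and engagement behaviors listed above, each weighted and refreshed as signals arrive.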

  • View profile for Christian Martinez

    Finance Transformation Senior Manager at Kraft Heinz | AI in Finance Professor | Conference Speaker | Published Author | LinkedIn Learning Instructor

    68,363 followers

Big news: Google released Gemini 3! This is what it can do and what it means for Finance and FP&A Teams 1. Instant Strategic Synthesis (Gemini Chat) Gemini 3’s deep reasoning can ingest and analyze large financial documents (reports, PDFs, CSVs) instantly, moving beyond simple summaries to complex, contextual insight. As always, check what you can upload to an AI tool depending on which subscription you have and your company policy. ✅ Variance Analysis: Upload your Budget vs. Actuals report and prompt: "Identify the three largest, most actionable drivers of the Opex variance, and suggest where to reallocate the budget." ✅ Cohort & Trend Insight: Feed in your customer data and prompt: "Plot the 12-month ARPU trend for all Q1 cohorts and flag the one showing the steepest decay, linking it to the relevant marketing spend." ✅ Executive Briefing: Paste a 50-page board deck and instruct: "Act as a CFO. Provide a concise, three-point strategic briefing for the CEO. Focus on the single largest financial risk and the biggest growth opportunity." 2. Apps Creation (Google AI Studio) Use Gemini 3’s agentic coding to build working, interactive finance applications from a single, natural language prompt. ✅ Dynamic Forecasting: Ask the model to "Vibe code a single-file, responsive Driver-Based Forecasting Simulator" complete with adjustable sliders (e.g., New Customer Acquisition %, Churn Rate) that automatically recalculates and charts the resulting ARR and CLV. ✅ Custom Dashboards: Develop an AI-powered "CFO Strategic Impact Dashboard" that ingests Excel data, calculates KPIs (Quick Ratio, Gross Margin), and uses Gemini to generate a live, strategic risk/opportunity briefing panel. 3. Advanced Modeling and Automation (Google Colab) For data-intensive tasks, use Gemini in Colab to automate complex financial analysis and modeling. ✅ DCF Modeling: Prompt Gemini 3 to "Create a 5-year DCF model."
The model generates the boilerplate code, including functions for Free Cash Flow (FCF), terminal value, and Net Present Value (NPV), which you can review and analyze. ✅ Variance Reporting: Automatically ingest raw monthly P&L data (Actuals vs. Budget), calculate key variances, generate professional charts, and draft the required executive summary narrative, all from one Python script generated by Gemini 3. For example, prompt: "Using the P&L DataFrame, calculate the YoY Variance and Budget-to-Actual Variance for all key line items (Revenue, Gross Profit, OpEx). Then, generate a Seaborn Bar Chart that visually highlights the three largest Budget-to-Actual variances, color-coded by expense category." Some images of my tests of this model are below! Let me know in the comments if you have any other task you would want me to test. Or let me know if you want the files with the results. Bonus: one last use case, using Gemini to generate Google Slides and/or PowerPoint decks! https://lnkd.in/e-ZE5gPA
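The kind of boilerplate such a DCF prompt produces looks roughly like this; the growth, discount, and terminal rates below are placeholders for illustration, not actual model output:

```python
def dcf_value(fcf0: float, growth: float, discount: float,
              terminal_growth: float, years: int = 5) -> float:
    """5-year DCF: discounted free cash flows plus a Gordon-growth terminal value."""
    value = 0.0
    fcf = fcf0
    for t in range(1, years + 1):
        fcf *= 1 + growth                    # project next year's FCF
        value += fcf / (1 + discount) ** t   # discount each year to present
    # Terminal value at the end of year `years`, then discounted back
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

# Placeholder inputs: $100M base FCF, 5% growth, 10% discount rate, 2% terminal growth
enterprise_value = dcf_value(100.0, 0.05, 0.10, 0.02)
```

Reviewing generated code like this line by line, as the post suggests, is what keeps the human in the loop on sign conventions and the terminal-value assumption.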

  • View profile for Carl Seidman, CSP, CPA

    Premier FP&A + Excel education you can use immediately | 300,000+ LinkedIn Learning | Adjunct Professor in Data Analytics @ Rice University | Microsoft MVP | Join my newsletter for Excel, FP&A + financial modeling tips👇

    91,331 followers

Most small businesses default to two forecasting methods: top-down or bottom-up. But both share the same problem: the "why" behind performance isn't explained. These approaches are easy to model and are used all the time. But they can easily fail as companies grow larger and more driver-based. (1) Top-down forecasting Many companies favor top-down because it's simple and aligned with strategic goals. But the biggest drawback is that it's often completely disconnected from operational reality. I use it for high-level financial forecasting and hardly ever for operational planning. • Leadership sets growth or margin targets • The P&L is segmented into business units • These targets cascade down the statements • Line items are forecast on high-level assumptions (2) Bottom-up forecasting Bottom-up forecasting is based upon detailed inputs such as sales to customers, sales by SKU, hiring plans by individual versus job category or department, expense budgets, etc. The benefit of bottom-up is that it's detailed and grounded in operations. But it's usually time-consuming, fragmented, and hard to roll up consistently. • Individual contributors come up with their numbers • They share them with an accountant or financial analyst • The accounting/finance person puts them into a model • The model is updated constantly with new details (3) Driver-based forecasting Rather than come up with high-level assumptions that don't tie into operations, or granular detail that doesn't separate signal from noise, driver-based forecasting combines the best of both. In this example for a professional staffing company, we can tie future revenue to placements per recruiter, contract duration, markup percentage, bill rates, and recruiter headcount. This gives FP&A the ability to flex operating assumptions, test them, and quickly see what can be done on the ground to influence results. The differences between the 3 methods matter: top-down may set revenue at $50 million based upon an 8% growth rate.
We can ask "how do we increase growth?" Bottom-up may set revenue at $50 million based upon a monthly forecast of 200 customers. We can ask "what do we expect from each customer?" Driver-based planning may arrive at the same $50 million but ask "what operational levers can we press to truly move revenue and margin?" The result is forecasts that are faster, more explainable, and easier to update. 💡 If you want to explore next-level modeling techniques, join live with 200+ people for Advanced FP&A: Financial Modeling with Dynamic Excel Session 2. https://lnkd.in/emi2xFdZ
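The staffing example can be sketched as a handful of flexible drivers. All numbers below are invented placeholders, not figures from the post:

```python
# Hypothetical drivers for a staffing-company revenue forecast
recruiters = 25
placements_per_recruiter = 4   # placements per recruiter per month
avg_contract_hours = 160       # billable hours per placement per month
bill_rate = 75.0               # dollars per billable hour
markup_pct = 0.30              # share of billings retained as gross margin

monthly_billings = (recruiters * placements_per_recruiter
                    * avg_contract_hours * bill_rate)
monthly_gross_margin = monthly_billings * markup_pct

# Flexing a driver: what does one extra placement per recruiter add?
uplift = 1 * recruiters * avg_contract_hours * bill_rate
```

Because every line item is an operational lever, FP&A can answer "what can we do on the ground to move revenue?" by changing one driver and watching the roll-up, instead of re-forecasting from scratch.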

  • View profile for Soledad Galli

    Data scientist | Python developer | Machine learning instructor & book author

    43,301 followers

Machine learning beats traditional forecasting methods in multi-series forecasting. In one of the latest M forecasting competitions, the aim was to advance what we know about time series forecasting methods and strategies. Competitors had to forecast 40k+ time series representing sales for the largest retail company in the world by revenue: Walmart. These are the main findings: ▶️ Performance of ML Methods: Machine learning (ML) models demonstrate superior accuracy compared to simple statistical methods. Hybrid approaches that combine ML techniques with statistical functionalities often yield effective results. Advanced ML methods, such as LightGBM and deep learning techniques, have shown significant forecasting potential. ▶️ Value of Combining Forecasts: Combining forecasts from various methods enhances accuracy. Even simple, equal-weighted combinations of models can outperform more complex approaches, reaffirming the effectiveness of ensemble strategies. ▶️ Cross-Learning Benefits: Utilizing cross-learning from correlated, hierarchical data improves forecasting accuracy. In short, one model to forecast thousands of time series. This approach allows for more efficient training and reduces computational costs, making it a valuable strategy. ▶️ Differences in Performance: Winning methods often outperform traditional benchmarks significantly. However, many teams did not surpass the performance of simpler methods, indicating that straightforward approaches can still be effective. ▶️ Impact of External Adjustments: Incorporating external adjustments (i.e., data-based insights) can enhance forecast accuracy. ▶️ Importance of Cross-Validation Strategies: Effective cross-validation (CV) strategies are crucial for accurately assessing forecasting methods. Many teams fail to select the best forecasts due to inadequate CV methods. Utilizing extensive validation techniques can ensure robustness.
▶️ Role of Exogenous Variables: Including exogenous/explanatory variables significantly improves forecasting accuracy. Additional data such as promotions and price changes can lead to substantial improvements over models that rely solely on historical data. Overall, these findings emphasize the effectiveness of ML methods, the value of combining forecasts, and the importance of incorporating external factors and robust validation strategies in forecasting. If you haven’t already, try using machine learning models in your next forecasting challenge 🙂 Read the article 👉 https://buff.ly/3O95gQp
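One of the cheapest wins among these findings, the equal-weighted forecast combination, is a one-liner. The two forecast arrays below are invented for illustration:

```python
import numpy as np

# Two hypothetical forecasts for the same 3-period horizon
f_statistical = np.array([100.0, 105.0, 110.0])  # e.g. an exponential smoothing model
f_ml = np.array([98.0, 108.0, 114.0])            # e.g. a LightGBM model

# Equal-weighted combination: often a strong ensemble baseline
f_ensemble = (f_statistical + f_ml) / 2
```

Despite its simplicity, this kind of unweighted average is exactly the ensemble strategy the competition found hard to beat with more complex weighting schemes.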

  • View profile for Andrey Gadashevich

    Operator of a $50M Shopify Portfolio | 48h to Lift Sales with Strategic Retention & Cross-sell | 3x Founder 🤘

    12,385 followers

    Ever wonder why some e-commerce brands always seem to have the right products in stock, while others struggle with overstock or empty shelves? It all comes down to demand forecasting—and in 2025, it’s getting an AI-powered upgrade. ● From guesswork to precision Traditional forecasting relies on historical sales data. AI-driven tools now go beyond that, integrating real-time factors like weather, local events, and even social media trends. The result? Forecasts with 90%+ accuracy instead of the usual 50%. ● GenAI: the next step Generative AI takes it further by analyzing unstructured data (customer reviews, trends, emerging demand signals) and answering questions in plain language. No more complex spreadsheets—just instant insights for better inventory planning. ● AI tools leading the way: ✔ Simporter – AI-powered forecasting that integrates multiple data sources to predict sales trends. ✔ Forts – uses AI for demand and supply planning, ensuring optimized inventory. ✔ ThirdEye Data – AI-driven forecasting that factors in seasonality and customer behavior. ✔ Swap – AI-based logistics platform that enhances inventory management. ✔ Nosto – AI-driven personalization that recommends the right products at the right time. ● Why this matters for #ecommerce? ✔️ Avoid stockouts that frustrate customers ✔️ Reduce excess inventory and free up cash ✔️ Adapt quickly to market shifts How are you managing demand forecasting in your store? #shopify

  • View profile for Sharat Chandra

    Blockchain & Emerging Tech Evangelist | Driving Impact at the Intersection of Technology, Policy & Regulation | Startup Enabler

    48,537 followers

Predicting #financialmarket stress has long proven to be a largely elusive goal. Advances in artificial intelligence and #machinelearning offer new possibilities to tackle this problem, given their ability to handle large datasets and unearth hidden nonlinear patterns. In a BIS paper, the authors develop a new approach based on a combination of a recurrent neural network (RNN) and a large language model. Focusing on deviations from triangular arbitrage parity (TAP) in the Euro-Yen currency pair, the RNN produces interpretable daily forecasts of market dysfunction 60 business days ahead. To address the “black box” limitations of RNNs, the model assigns data-driven, time-varying weights to the input variables, making its decision process transparent. These weights serve a dual purpose. First, their evolution in and of itself provides early signals of latent changes in market dynamics. Second, when the network forecasts a higher probability of market dysfunction, these variable-specific weights help identify relevant market variables, which are used to prompt an LLM to search for information about potential market stress drivers. Source: Bank for International Settlements (BIS)

  • View profile for Gambar Oruj

    B2B Marketing Strategist · Deep-Tech & Industrial GTM · Product Marketing · AI-Driven Marketing Operations · Eindhoven

    10,460 followers

    90% of marketing campaigns fail to reach their full potential, and $37B is wasted annually on ineffective digital ads. Why? Because they rely on outdated assumptions instead of real-time intelligence. What if you could predict campaign performance before launch? Enter Digital Twin Marketing Models—a game-changer in 2025. Originally used in manufacturing, this technology now allows marketers to simulate entire campaigns, customer behaviors, and market responses in real time. 🔹Test & optimize campaigns before spending a dollar 🔹Predict customer reactions & refine messaging instantly 🔹Eliminate guesswork with AI-driven scenario planning 🔹Map and improve customer journeys with live data 🔹Align marketing, sales, and product teams around a single source of truth Marketing is shifting from reactive to predictive. Brands leveraging digital twins are seeing higher ROI, better engagement, and reduced campaign risks. The question is—will your next campaign be built on assumptions or real-time intelligence? 🚀 #MarketingStrategy #DigitalTwins #AIinMarketing #MarketingInnovation
