You can’t just do “test minus control” and call it incrementality. That’s not a model. That’s a middle-school math problem. Or at best, an entry-level difference-in-differences analysis.

Yes, technically, comparing a test group to a control group can work, if you’ve perfectly matched them and nothing weird happens. No weather issues, no influencer shoutouts, no backend outages. No noise. But that’s rarely the case. Real life is messy. And models exist to clean that mess up.

BSTS (Bayesian Structural Time Series) doesn’t just subtract. It builds a counterfactual: a “what would have happened” scenario based on trends, seasonality, day-of-week patterns, and a bunch of invisible math gremlins doing some very serious forecasting. It says, “Based on the last 4 months of data and what’s happening in similar markets right now, here’s what revenue should’ve looked like in the test group.” Then it compares actuals to that forecast, complete with error bands so you’re not just guessing. It’s not perfect, but it’s a hell of a lot more reliable than eyeballing a revenue chart and calling it science.

Another approach we use is weighted synthetic control. Instead of choosing a single control market and hoping it behaves, you build a custom control made up of multiple regions, each weighted by how well it historically matches the test group. So instead of saying “Denver is our control,” you’re saying “It’s 20 percent Denver, 30 percent Austin, 10 percent Boise,” and so on.

Different methods, same goal: a cleaner, more accurate baseline. In other words, we’re not just subtracting numbers. We’re accounting for the external factors that could give us false positives. Because when millions in ad spend are on the line, you don’t just want an answer. You want an accurate one. And “test minus control” might look simple, but unless you’ve controlled for everything else, it’s usually wrong.
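To make the weighted-control idea concrete, here is a minimal sketch of fitting those market weights: nonnegative weights that sum to 1, chosen to minimize the pre-period gap between the blend and the test market. The market names, data, and "true" blend below are all made up for illustration; real geo-testing tools layer statistical inference on top of this fitting step.

```python
# Hypothetical example: recover synthetic-control weights for a test
# market from three candidate control markets. Market names, data,
# and the "true" blend are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n_days = 120  # pre-intervention window
denver = rng.normal(100, 5, n_days)
austin = rng.normal(80, 4, n_days)
boise = rng.normal(60, 3, n_days)
controls = np.column_stack([denver, austin, boise])

# Pretend the test market historically behaved like this blend.
true_w = np.array([0.2, 0.3, 0.5])
test_market = controls @ true_w

# Find nonnegative weights summing to 1 that best match the test
# market over the pre-intervention window (constrained least squares).
def loss(w):
    return np.sum((controls @ w - test_market) ** 2)

res = minimize(
    loss,
    x0=np.full(3, 1 / 3),
    method="SLSQP",
    bounds=[(0, 1)] * 3,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
)
weights = res.x
baseline = controls @ weights  # the synthetic-control baseline
```

In the post-intervention period you would keep computing `controls @ weights` and read incrementality off the gap between the test market's actuals and that baseline.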
Tools like Stella make this incredibly simple and surprisingly affordable for marketing teams. Message me if you want more accurate marketing measurement.
Bayesian Forecasting Models
Explore top LinkedIn content from expert professionals.
Summary
Bayesian forecasting models use probability to predict future trends by continuously updating their estimates as new information becomes available. These models help organizations understand uncertainty and make data-driven decisions by incorporating prior knowledge and modeling real-world complexities like seasonality, trends, and group differences.
- Embrace uncertainty: Use probability curves to better assess risk and resilience instead of relying on single-point forecasts.
- Model real-world factors: Incorporate trends, seasonality, and group-specific data to simulate scenarios and improve forecasting accuracy.
- Update with new data: Continuously refine your models by integrating fresh information to keep predictions relevant and reliable.
Forecasting Risk in Today’s Power System

Electricity prices follow human decisions, not formulas. They move with the weather, demand, fuel cost, and strategy. In 2012, we built a model that used Bayesian learning and stochastic games to forecast price distributions rather than single points. It worked then. It’s essential now.

The system has changed. North America’s grid is managed through six NERC regional entities. ISOs and RTOs run about two-thirds of U.S. demand. Market operators now rely on probabilistic and Monte Carlo analysis for planning, pricing, and reliability. The old deterministic view is gone.

The numbers show the shift. U.S. electricity demand set records in 2024 and again in 2025. Growth comes from data centers, electric vehicles, and manufacturing. The U.S. will add 63 gigawatts of new capacity this year, 81 percent of which will come from solar and batteries. Utility-scale storage will pass 65 GW by 2026. Renewables’ share of generation will climb from 23 percent in 2024 to 27 percent in 2026. Natural gas will decline toward 39 percent, and coal will fall below 14 percent.

The key lessons remain.
1. Learn continuously. Bayesian updating incorporates new data—weather, bids, outages—to keep forecasts up to date.
2. Model real behavior. Prices form from competing decisions under limits, not from ideal equations.
3. Show the full range. A probability curve gives investors, traders, and planners the truth about exposure and resilience.

The tools are better. GPU computing and scenario reduction now make real-time probabilistic forecasting routine. ISOs use stochastic unit commitment and risk-based adequacy methods. These drive real investment and operational choices, not academic models.

The outcome is clear. Forecasting means measuring uncertainty, not hiding it. The most resilient organizations are those that see risk early, price it correctly, and act before others react.
We forecast risk because risk drives every real decision—capital, reliability, and trust. The grid’s future will belong to those who treat uncertainty as information, not noise. — Sources: NERC State of Reliability 2025; EIA Today in Energy (May–Oct 2025); FERC Market Reports; ISO/RTO Council Data; Amin & Peck Probabilistic Price Model. #AI #Analytics #Bayesian #Data #Energy #Engineering #Foresight #Forecasting #Grid #Innovation #Leadership #Mathematics #Modeling #Optimization #Probability #Resilience #Risk #Simulation #Sustainability #Systems #Technology
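Lesson 1 above (continuous Bayesian updating) in its simplest form is a conjugate normal-normal update: a belief about the mean price is revised each time a new observation arrives, and uncertainty shrinks. The prices and variances below are illustrative, not market data; a production model would also learn the noise variance and model the price dynamics.

```python
# Minimal sketch of Bayesian updating for a price forecast:
# a normal prior on the mean price, updated observation by
# observation. All numbers are illustrative.
prior_mean, prior_var = 50.0, 25.0   # belief before new data arrives
obs_var = 4.0                        # assumed observation noise

def update(mean, var, price):
    """Conjugate normal-normal update with known observation variance."""
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mean = post_var * (mean / var + price / obs_var)
    return post_mean, post_var

mean, var = prior_mean, prior_var
for price in [62.0, 58.0, 61.0]:     # new settlement prices
    mean, var = update(mean, var, price)

# The posterior mean moves toward the data, and the variance
# shrinks with every observation: uncertainty is measured, not hidden.
```

The same loop scales to richer models: each day's weather, bids, and outage data tighten the forecast distribution rather than replacing a single-point guess.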
-
In today’s fast-paced tech landscape, understanding the true causal impact of business decisions is more critical than ever. Whether you're launching a new feature, running a marketing campaign, or testing operational changes, it’s essential to go beyond correlation and uncover what actually drives outcomes.

In a recent blog post, a data scientist from Walmart explains what Bayesian Structural Time Series (BSTS) models are and how they can be used to measure causal impact. BSTS is a flexible time series modeling approach that breaks down data into components like trend, seasonality, and regressors—enabling teams to simulate what would have happened without an intervention. The post does a great job of explaining the methodology with clear, real-world examples. It’s a valuable read for anyone working on experimentation, marketing measurement, or causal inference at scale.

#DataScience #MachineLearning #CausalInference #Analytics #BayesianModeling #SnacksWeeklyonDataScience

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gFYvfB8V
-- Youtube: https://lnkd.in/gcwPeBmR
https://lnkd.in/gzSZcSh8
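A stripped-down illustration of the decomposition idea behind BSTS: fit trend plus day-of-week seasonality on the pre-intervention window, project it forward as the counterfactual, and read the lift off the gap between actuals and projection. This sketch uses plain least squares rather than the Bayesian machinery (so no priors or posterior error bands), and the data is simulated with an injected lift of 10 units.

```python
# Simplified counterfactual in the spirit of BSTS: decompose the
# pre-period into a linear trend plus day-of-week seasonality,
# project it forward, and compare post-period actuals against it.
# Simulated data; a real BSTS fit would add priors and error bands.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 112, 28                      # 16 weeks pre, 4 weeks post
t = np.arange(n_pre + n_post)
dow = t % 7
season = np.array([0, 1, 3, 2, 5, 9, 7])[dow]  # weekly pattern
y = 100 + 0.5 * t + season + rng.normal(0, 1, t.size)
y[n_pre:] += 10.0                            # intervention lift (post only)

# Design matrix: intercept, linear trend, day-of-week dummies.
X = np.column_stack([np.ones(t.size), t] +
                    [(dow == d).astype(float) for d in range(1, 7)])
beta, *_ = np.linalg.lstsq(X[:n_pre], y[:n_pre], rcond=None)
counterfactual = X @ beta                    # "what would have happened"

lift = y[n_pre:] - counterfactual[n_pre:]    # estimated incremental effect
```

The model never sees the post-period when fitting, so `counterfactual[n_pre:]` is a genuine forecast, and `lift.mean()` recovers the injected effect up to noise.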
-
𝗪𝗵𝘆 𝗕𝗮𝘆𝗲𝘀𝗶𝗮𝗻 𝗛𝗶𝗲𝗿𝗮𝗿𝗰𝗵𝗶𝗰𝗮𝗹 𝗠𝗼𝗱𝗲𝗹𝘀 𝗔𝗿𝗲 𝗦𝗼 𝗣𝗼𝘄𝗲𝗿𝗳𝘂𝗹

A common challenge in data science is dealing with #heterogeneous data, because different regions, customer segments, or product categories may have vastly different amounts of data. Traditional approaches either 𝗺𝗼𝗱𝗲𝗹 𝗲𝗮𝗰𝗵 𝗴𝗿𝗼𝘂𝗽 𝘀𝗲𝗽𝗮𝗿𝗮𝘁𝗲𝗹𝘆, leading to noisy estimates when data is scarce, or force a 𝘀𝗶𝗻𝗴𝗹𝗲 𝗺𝗼𝗱𝗲𝗹 𝗮𝗰𝗿𝗼𝘀𝘀 𝗮𝗹𝗹 𝗴𝗿𝗼𝘂𝗽𝘀, ignoring real differences.

𝗕𝗮𝘆𝗲𝘀𝗶𝗮𝗻 𝗵𝗶𝗲𝗿𝗮𝗿𝗰𝗵𝗶𝗰𝗮𝗹 𝗺𝗼𝗱𝗲𝗹𝘀 offer a different solution. They allow parameters to vary at 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗹𝗲𝘃𝗲𝗹𝘀 𝗼𝗳 𝗱𝗮𝘁𝗮 𝗿𝗲𝗹𝗮𝘁𝗶𝗼𝗻𝘀𝗵𝗶𝗽𝘀, letting us incorporate not just the data itself but also its underlying structure, #metadata, and the way it was collected. They capture shared #patterns while accounting for group-specific differences. This flexibility makes them ideal for data that’s nested or structured across multiple dimensions.

In 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝗮𝗹 𝘀𝗰𝗶𝗲𝗻𝗰𝗲, Bayesian hierarchical models are widely used because they allow scientists to measure effects at different locations, over time, or at different latitudes, all while capturing broader trends. You can read about one such example here: https://lnkd.in/d6ERwa7q

In a business use case, such as 𝗿𝗲𝘁𝗮𝗶𝗹 𝗱𝗲𝗺𝗮𝗻𝗱 𝗳𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴, Bayesian hierarchical models provide:
• 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗱𝗮𝘁𝗮 𝗮𝗰𝗿𝗼𝘀𝘀 𝗿𝗲𝗴𝗶𝗼𝗻𝘀, 𝘀𝘁𝗼𝗿𝗲𝘀, 𝗮𝗻𝗱 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝗰𝗮𝘁𝗲𝗴𝗼𝗿𝗶𝗲𝘀, capturing both global trends and local variations.
• 𝗦𝗲𝗮𝘀𝗼𝗻𝗮𝗹𝗶𝘁𝘆 𝗺𝗼𝗱𝗲𝗹𝗶𝗻𝗴, assuming common patterns across regions but also allowing for regional differences.
• 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴 #sparse 𝗱𝗮𝘁𝗮, borrowing information from related datasets to improve #accuracy.
You can read more about this application: https://lnkd.in/dnkcKi4b

In both cases, I used #PyMC for Bayesian modeling. By allowing flexibility and borrowing strength from related data, Bayesian hierarchical models offer a robust approach to #forecasting, 𝗲𝘃𝗲𝗻 𝘄𝗶𝘁𝗵 𝗹𝗶𝗺𝗶𝘁𝗲𝗱 𝗼𝗿 𝘂𝗻𝗲𝘃𝗲𝗻 𝗱𝗮𝘁𝗮. Let me know if you've used Bayesian hierarchical models, I'd love to hear about other use cases.
#BayesianInference #HierarchicalModels #DataScience #MachineLearning #Forecasting #RetailAnalytics #PyMC #EnvironmentalScience #DemandForecasting #StatisticalModeling #BusinessAnalytics #GeospatialModeling #PredictiveModeling #DataAnalysis
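The "borrowing strength" idea in the post above can be seen even without a full PyMC model: the classic normal-normal shrinkage formula pulls each group's mean toward the grand mean, and the group with the least data is pulled hardest. The data, group sizes, and variance values below are illustrative assumptions; a real hierarchical model (e.g. in PyMC) would learn the between-group variance from the data instead of fixing it.

```python
# Partial pooling in plain numpy: shrink each group mean toward the
# grand mean, with sparse groups shrunk the most. Illustrative data;
# a full Bayesian model would infer the variances rather than fix them.
import numpy as np

rng = np.random.default_rng(7)
group_sizes = [200, 50, 5]            # data-rich regions and a sparse one
true_means = [10.0, 12.0, 14.0]
groups = [rng.normal(m, 4.0, n) for m, n in zip(true_means, group_sizes)]

obs_var = 16.0                        # within-group variance (assumed known)
tau2 = 4.0                            # assumed between-group variance
grand_mean = np.mean(np.concatenate(groups))

shrunk = []
for g in groups:
    n = g.size
    # Normal-normal shrinkage: the weight on the group's own mean
    # grows with its sample size n.
    w = (n / obs_var) / (n / obs_var + 1.0 / tau2)
    shrunk.append(w * g.mean() + (1 - w) * grand_mean)
```

With 200 observations the first group keeps essentially its own mean; with only 5 observations the last group is pulled substantially toward the grand mean, which is exactly the noisy-estimate problem the post describes hierarchical models solving.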
-
Probabilistic Time Series Analysis: Opportunities and Applications

This article by Dr. Juan Camilo Orduz from PyMC Labs discusses the role of probabilistic forecasting in supporting data-driven decision-making, with a focus on Bayesian methods.
✅ Describes how predictive distributions provide a better picture of future outcomes than point estimates.
✅ Shows how Bayesian models can incorporate prior knowledge and adapt to data constraints.
✅ Includes case studies from industries like logistics and energy to illustrate real-world applications.
https://lnkd.in/gAEFSUqJ
#timeseries #bayesian #datascience #python
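A tiny illustration of the first point above: two forecasts can share the same point estimate while their predictive distributions, and therefore the risk they imply, differ enormously. The numbers are simulated purely for illustration.

```python
# Same point forecast, very different risk: compare the 90% predictive
# intervals of a narrow and a wide forecast distribution.
# Simulated draws standing in for posterior predictive samples.
import numpy as np

rng = np.random.default_rng(1)
stable = rng.normal(1000, 20, 10_000)     # narrow predictive distribution
volatile = rng.normal(1000, 200, 10_000)  # wide predictive distribution

for name, draws in [("stable", stable), ("volatile", volatile)]:
    p5, p95 = np.percentile(draws, [5, 95])
    print(f"{name}: mean={draws.mean():.0f}, "
          f"90% interval=({p5:.0f}, {p95:.0f})")
```

A decision-maker who only sees the means would treat the two identically; the intervals show one plan is roughly ten times riskier than the other.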