People new to MMM: don't make the same mistakes you made with MTA.

• MMM: marketing mix modeling
• MTA: multi-touch attribution

I've been in this industry a long time, and have helped build out custom multi-touch attribution systems using fingerprinting, DMPs, clickstream parameters, IP matching, and all of the 1PD/3PD bells and whistles you can think of.

Most brands never got that advanced; instead they just:

• Bought a tool
• Used Google Analytics
• Trusted the tool and used it to make decisions

The biggest mistake -- which I'm begging you not to repeat -- is the third bullet above. You can't blindly trust ANY methodology. And I'm seeing it happen right now at fusepoint and beyond.

The measurement gurus (myself included) can be very convincing, and people who aren't as close to this are just eager to finally "get some answers." And while MMM is undoubtedly a much better method than MTA, it isn't without its flaws. In general, you should keep a healthy suspicion of any methodology.

I'll give you an example:

• Many MMMs use Bayesian time-series methods (we do too)
• Bayesian methods let you inject "opinions" as priors
• Bad priors mean bad model outputs

A highly simplified version of this might be something like:

• Run an MMM for a brand with Meta, Google, TikTok
• Put ROI "priors" (read: assumptions) on each at:
  --- Meta: 1-2x
  --- Google: 2-3x
  --- TikTok: 3-4x
• Fit the model (with these priors)

The results will be heavily influenced by the ROI ranges we put in as priors (a minimal code sketch of this follows the post). That's a "good" thing if those priors come from recent incrementality experiments or a body of historic models, but if you're putting them in cold, it can really invalidate the results.

In reality this brand might perform completely differently, even while the model is saying something like:

• Meta ROI of 1.5x
• Google ROI of 2.5x
• TikTok ROI of 3.5x

In this case the brand would likely conclude that they should divest from Meta and reallocate more heavily into TikTok. However, if the model had been allowed to run "cold" with few priors, it might have come back with totally different results.

For six months maybe they continue to allocate budgets this way, and they see that performance isn't getting any better (or is actually getting worse). So they decide to run some incrementality experiments and see:

• Meta ROI is actually 4x
• Google ROI is actually 1x
• TikTok ROI is really only 2x

Completely different from the assumptions that were input as priors to the model. Maybe those assumptions were based on a different brand, or some other kind of "opinion" from whoever was running the model. Happens all the time.

Takeaways:

• All measurement methodologies have flaws
• Humans have a tendency to over-trust them
• Develop a healthy skepticism and a scientific approach
• Keep validating that what you're doing shows up in the P&L

Stay scientific out there.
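To make the priors example above concrete, here is a minimal sketch of how those ROI ranges could be encoded in a Bayesian MMM. This is not the author's actual model: the PyMC setup, simulated data, and prior widths are all illustrative assumptions, with the simulated "true" ROIs set to the experiment results from the post (Meta 4x, Google 1x, TikTok 2x).

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_weeks = 52

# Simulated weekly spend per channel; the "true" ROIs are the ones the
# post's incrementality tests later reveal (Meta 4x, Google 1x, TikTok 2x).
spend = {ch: rng.gamma(5.0, 1_000.0, n_weeks)
         for ch in ["meta", "google", "tiktok"]}
true_roi = {"meta": 4.0, "google": 1.0, "tiktok": 2.0}
revenue = (50_000
           + sum(true_roi[ch] * spend[ch] for ch in spend)
           + rng.normal(0, 10_000, n_weeks))

# The "cold" priors from the post: Meta 1-2x, Google 2-3x, TikTok 3-4x.
# A Normal centered mid-range with sd 0.25 puts ~95% of the mass in range.
roi_priors = {"meta": (1.5, 0.25), "google": (2.5, 0.25), "tiktok": (3.5, 0.25)}

with pm.Model() as mmm:
    baseline = pm.Normal("baseline", mu=revenue.mean(), sigma=revenue.std())
    roi = {ch: pm.Normal(f"roi_{ch}", mu=mu, sigma=sd)
           for ch, (mu, sd) in roi_priors.items()}
    expected = baseline + sum(roi[ch] * spend[ch] for ch in spend)
    noise = pm.HalfNormal("noise", sigma=20_000)
    pm.Normal("revenue", mu=expected, sigma=noise, observed=revenue)
    idata = pm.sample(500, tune=500, chains=2, random_seed=42)

# With only a year of noisy weekly data, the posteriors stay much closer
# to the 1.5/2.5/3.5 priors than to the simulated 4/1/2 truth.
print(idata.posterior[["roi_meta", "roi_google", "roi_tiktok"]].mean())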
-
The Econometric Illusion: When Marketing Models Mislead

In today's data-driven marketing world, econometrics promises to unveil hidden connections between media spend and sales. But as we delve into regression analyses and p-values, a troubling question emerges: are we building strategies on statistical quicksand?

The shift from gut feelings to data-driven decisions in media planning has been revolutionary. Marketers now confidently declare that a 10% increase in TV spend will yield a 2.3% uplift in sales. These precise figures, presented as scientific fact, can make even seasoned professionals feel like soothsayers with supercomputers. But many of these pronouncements rest on shaky statistical foundations.

At the core is the misuse of statistical significance, particularly p-values. A p-value below 0.05 often transforms tenuous correlations into "significant findings" driving million-pound decisions (the sketch after this post shows how easily that happens). This fixation has led to a replication crisis in marketing research, mirroring issues in psychology and medicine.

Another pitfall is mistaking correlation for causation. When models show a strong relationship between social media engagement and sales, it's tempting to assume direct causation. But marketing is a web of interconnected factors, and apparent causal relationships may be spurious correlations.

Even with sound methods, data quality remains a challenge. Capturing clean, comprehensive data on consumer behaviour is Herculean. External factors like economic downturns can derail sophisticated models.

The final hurdle is interpreting and implementing insights. Overconfidence in model outputs can lead to rigid strategies ignoring inherent uncertainty. Focus on marginal gains can distract from transformative initiatives.

Consider these cautionary tales:

- An FMCG company increased TV ad spend based on models showing strong sales correlation. Failing to account for diminishing returns, they wasted millions on ineffective advertising.
- A bank shifted budget to digital channels, missing the crucial role of traditional media in building trust. This led to a decline in valuable long-term customer relationships.
- A supermarket optimized promotions using models that didn't account for cannibalization, eroding overall category profitability.

Should we abandon econometrics? No. We need a more nuanced approach:

- Embrace uncertainty. No model captures full market complexity.
- Foster statistical literacy among marketers.
- Prioritise data quality and comprehensive measurement.
- Reaffirm the value of human judgment alongside data.

The future of marketing econometrics lies in understanding the interplay between data, analysis, and strategic thinking. By combining rigorous analysis with strategic insight and creativity, we can develop strategies both data-informed and attuned to human complexity.
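The p-value trap described above is easy to demonstrate. The short sketch below is my illustration, not the author's: it regresses pairs of independent simulated random walks (stand-ins for trending spend and sales series) against each other with SciPy and counts how often the result looks "statistically significant."

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
significant = 0
for _ in range(1_000):
    media = np.cumsum(rng.normal(size=104))  # trending "spend" series
    sales = np.cumsum(rng.normal(size=104))  # independent trending "sales"
    if stats.linregress(media, sales).pvalue < 0.05:
        significant += 1

# Typically well over half of these unrelated pairs come back "significant":
# trending series correlate by construction, not by causation.
print(f"{significant / 1_000:.0%} of independent pairs had p < 0.05")
```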
-
10 common mistakes I've seen MMM consultants make:

1. Measuring only in-sample accuracy, which encourages over-fitting and bad modeling practices. Modelers should focus on predictive accuracy on data never seen before (a walk-forward validation sketch follows this post).

2. Assuming that marketing performance doesn't change over time. No one believes that marketing performance is constant over time, so why make that assumption?

3. Assuming seasonality is additive and doesn't interact with marketing performance. This will generate nonsensical results, like telling you to advertise sunscreen in the winter and not in the summer!

4. Using automated variable selection to account for multicollinearity. Automatic variable selection methods (including methods like ridge regression and LASSO) don't make sense for MMM, since they will "randomly" choose one of two correlated variables to get all of the credit.

5. Assuming that promotions and holidays are independent of marketing performance, rather than directly impacted by it.

6. Using hard-coded long-time-shift variables to account for "brand effects" that aren't actually based in reality. By "assuming" long time shifts for certain channels, they can force the model to assign way too much credit to those channels.

7. Allowing the analyst/modeler to make too many decisions that influence the final results. If the modeler is choosing adstock rates and which variables to include in the model, then your "final" model will not show you the true range of possibilities compatible with your data.

8. Assuming channels like branded search and affiliates are independent of other marketing activity, rather than driven by it.

9. Updating only infrequently to avoid accountability -- if your results are always out of date, then no one can hold the model accountable.

10. Forcing the model to show results that stakeholders want to hear instead of what they need to hear. With a sufficiently complex model, you can make the results say anything. Unfortunately, this doesn't help businesses actually improve their marketing spend.
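On mistake 1, here is one minimal way to measure predictive accuracy on data the model has never seen: rolling-origin (walk-forward) validation. The linear model, feature layout, and simulated data below are placeholder assumptions of mine, not a recommendation of any specific MMM form.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def walk_forward_mape(X, y, initial_train=52, horizon=4):
    """Refit on an expanding window; score each step on unseen future weeks."""
    errors = []
    for end in range(initial_train, len(y) - horizon, horizon):
        model = LinearRegression().fit(X[:end], y[:end])
        pred = model.predict(X[end:end + horizon])
        actual = y[end:end + horizon]
        errors.extend(np.abs((actual - pred) / actual))
    return float(np.mean(errors))

# Simulated stand-in data: X = weekly media features, y = weekly sales.
rng = np.random.default_rng(0)
X = rng.gamma(2.0, 1_000.0, size=(156, 3))
y = 20_000 + X @ np.array([2.0, 1.0, 3.0]) + rng.normal(0, 4_000, 156)
print(f"out-of-sample MAPE: {walk_forward_mape(X, y):.1%}")
```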
-
I made my media mix model lie, and then I made it lie again.

My PyMC-based MMM had beautiful R-squared scores and impressive MAPEs. It even nailed the train-test splits. But guess what? The results were still completely misleading.

How could I tell? Because the outputs failed the sniff test. Channels known from real-world experience to drive revenue weren't showing up as impactful, and some minor channels were inflated beyond reality. Good-looking statistical measures don't guarantee an accurate reflection of your marketing reality, especially if your data isn't telling the whole story.

Here's what actually went wrong: my model lacked enough meaningful variation -- or "signal" -- in key marketing channels. Without clear fluctuations in spend and impressions, even sophisticated Bayesian frameworks like PyMC can't accurately infer each channel's true incremental impact. They end up spreading credit randomly or based on spurious correlations.

Here's what I do differently now: I always start client engagements with a signal audit (a minimal sketch follows this post). Specifically, this means:

* Reviewing historical spend patterns and ensuring sufficient spend variation across weeks or regions.
* Checking for collinearity between channels (e.g., Google Search branded and non-branded), which can cause misleading attribution.
* Identifying channels stuck in "steady state" spending -- these need deliberate experimentation to create fluctuation.

Once the audit flags weak-signal channels, I run deliberate, controlled lift tests (such as holdout tests or incrementality experiments) to create the necessary data variation. Only after these signal issues are fixed and lift tests integrated do I trust the model:

* I feed the experimental data into the model.
* I validate the model against domain knowledge, sanity-checking contributions with known benchmarks and incrementality test results.
* Only then do I let the model drive budgeting and channel allocation decisions.

Bottom line: great statistical fit isn't enough. Your model must pass both statistical tests and practical, real-world "sniff tests."
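Here is a minimal sketch of what a signal audit like the one described above could look like, assuming weekly spend arrives as a pandas DataFrame with one column per channel. The coefficient-of-variation floor and correlation ceiling are illustrative thresholds of mine, not the author's.

```python
import numpy as np
import pandas as pd

def signal_audit(spend: pd.DataFrame, cv_floor=0.15, corr_ceiling=0.8):
    """Flag channels with too little spend variation or too much collinearity."""
    report = {}
    # 1. Variation: coefficient of variation of weekly spend per channel.
    cv = spend.std() / spend.mean()
    report["weak_signal"] = cv[cv < cv_floor].index.tolist()
    # 2. Collinearity: channel pairs that move together (e.g. brand vs non-brand).
    corr = spend.corr().abs()
    report["collinear_pairs"] = [
        (a, b, round(corr.loc[a, b], 2))
        for i, a in enumerate(corr.columns)
        for b in corr.columns[i + 1:]
        if corr.loc[a, b] > corr_ceiling
    ]
    return report

# Simulated example: one always-on channel and two correlated search lines.
rng = np.random.default_rng(1)
spend = pd.DataFrame({
    "search_brand": rng.gamma(8.0, 500.0, 52),
    "meta": 10_000.0 + rng.normal(0, 200, 52),  # stuck in "steady state"
})
spend["search_nonbrand"] = spend["search_brand"] * 1.4 + rng.normal(0, 300, 52)
print(signal_audit(spend))
```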
-
Ever watched a market mix model bend reality to fit a senior exec's hunch? That's a bad prior at work.

In Bayesian MMM we start with beliefs (priors) and let data update them. Done right, priors guide us toward plausible answers fast. Done wrong, they blindfold the model and force it into bad answers.

Where it goes off the rails, in a few examples:

First, self-serving priors: an external party bakes in a high TV elasticity so the post-analysis screams "double your GRPs."

Second, internal wish-casting: a BI team hard-codes "brand search drives 40% of sales" because it always has in last-click.

So how can you keep your MMM models honest?

Interrogate the priors. Ask exactly which distributions are pinned down and why. "Industry benchmarks" without proof is not an answer.

Stress-test them. Swap in weak priors and compare the ROI swings. More than ±20%? Your prior is steering the ship. (A minimal sketch of this check follows the post.)

Demand hold-out accuracy. A model that can't predict next month isn't worth your budget.

Bad priors are going to be the bane of the modern Bayesian MMM stack. Treat them like any other financial assumption -- challenge them until they break or prove themselves. Models should be stable, fast, and subject to scrutiny.

Anything less will turn MMM into MTA 2.0. And that's bad for everyone.
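One cheap way to run the stress test above without refitting an entire model: for a single channel, a conjugate normal-normal update shows how far the ROI estimate swings between a pinned-down "benchmark" prior and a deliberately weak one. All numbers below are invented for illustration.

```python
def posterior_roi(prior_mu, prior_sd, data_mu, data_se):
    """Normal prior x normal likelihood -> normal posterior (conjugate update)."""
    prior_prec, data_prec = 1 / prior_sd**2, 1 / data_se**2
    post_var = 1 / (prior_prec + data_prec)
    post_mu = post_var * (prior_mu * prior_prec + data_mu * data_prec)
    return post_mu, post_var**0.5

# What the data alone says about a channel's ROI (estimate and its noise)...
data_mu, data_se = 1.2, 0.5

# ...updated under a pinned-down "benchmark" prior vs a deliberately weak one.
strong_mu, _ = posterior_roi(prior_mu=3.5, prior_sd=0.25,
                             data_mu=data_mu, data_se=data_se)
weak_mu, _ = posterior_roi(prior_mu=2.0, prior_sd=2.0,
                           data_mu=data_mu, data_se=data_se)

swing = abs(strong_mu - weak_mu) / weak_mu
print(f"strong-prior ROI {strong_mu:.2f} vs weak-prior ROI {weak_mu:.2f} "
      f"-> {swing:.0%} swing, far past the ±20% red line")
```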
-
You're making 7-figure decisions off MMMs that weren't built, validated, or interpreted by real experts, and that should scare you.

Most teams don't realize this. They trust the slide. The chart. The ROAS curve. They forget that modeling is fragile, and without the right hands on it, dangerously easy to get wrong.

Would you get heart surgery from a guy who says, "I'm not a full-time surgeon, but I've read the playbook and watched some YouTube videos"? No? So why are you trusting your growth strategy to someone who built a media mix model as a side-gig?

Most in-house teams and agencies mean well. But MMM isn't something you just figure out. And when it's not your full-time focus, you miss things like:

• No check for multicollinearity
• No backtesting or posterior predictive checks
• No out-of-sample validation
• No support for impression-based tactics like podcasts or influencer posts
• No adjustments for seasonality, promotions, or macro shifts
• No process for updating the model as your strategy changes

These aren't just things to scare you. These are the risks of trusting false MMM outputs.

And here's the part no one wants to say out loud: your agency is probably running your data through Meta's free Robyn package or Google's Meridian (which, to be fair, is actually pretty good), then charging you $5,000 to $10,000 to tell you what to cut and what to scale.

That's fine, as long as you understand the results may be confidently wrong and you know the right questions to ask. If you don't know the right questions, start with these:

• How do you handle multicollinearity between similar channels? (A minimal do-it-yourself check follows this post.)
• What does your backtesting or validation process look like?
• Can the model measure tactics like podcasts or influencers?
• What non-media variables are included to control for outside effects?

Media mix modeling isn't about which open-source package is used. It's about judgment. About knowing which assumptions must be challenged and which blind spots quietly distort the answer. If your model is built by people who don't do this full time, don't be surprised when it quietly sends your business in the wrong direction.

Alright, I'm closing LinkedIn for the day. I have some freelance heart surgery gigs to prep for. 🩺
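On the first question in that list, here is a minimal do-it-yourself multicollinearity check using variance inflation factors (VIF) from statsmodels. Channel names and data are invented, and the VIF > 5 cutoff is a common rule of thumb rather than a hard rule.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated weekly spend: "youtube" deliberately tracks "meta".
rng = np.random.default_rng(3)
X = pd.DataFrame({
    "meta": rng.gamma(4.0, 2_000.0, 104),
    "tiktok": rng.gamma(4.0, 1_500.0, 104),
})
X["youtube"] = 0.8 * X["meta"] + rng.normal(0, 500, 104)

X_ = sm.add_constant(X)  # VIF should be computed with an intercept included
for i, col in enumerate(X_.columns):
    if col == "const":
        continue
    vif = variance_inflation_factor(X_.values, i)
    flag = "  <-- hard for any MMM to separate" if vif > 5 else ""
    print(f"{col:>14}: VIF = {vif:6.1f}{flag}")
```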