Incrementality in Retail Media: Key Insights

Following up on my previous post about key questions for incrementality models, here’s what strong answers look like:

1. Bias Handling
Approach: We use propensity score matching and covariate balancing to ensure test and control groups are comparable.
Why It Matters: These methods create fair comparisons between groups exposed and not exposed to marketing, ensuring accurate assessments.

2. Core Assumptions
Approach: Our model assumes SUTVA (the Stable Unit Treatment Value Assumption) and no hidden confounders, and we test both rigorously.
Why It Matters: Ensures one customer's behavior doesn't influence another's, enhancing result reliability.

3. Causal Inference Techniques
Approach: We apply difference-in-differences, synthetic control methods, and regression discontinuity designs as appropriate.
Why It Matters: These techniques isolate the true impact of marketing efforts from other variables.

4. Visual Models
Approach: We use Directed Acyclic Graphs (DAGs) to map causal relationships and identify confounders, refining them with domain experts.
Why It Matters: DAGs visualize complex factor interactions, clarifying causal pathways.

5. Data Granularity
Approach: We leverage transaction-level data with privacy-preserving techniques and apply ecological inference for aggregated data.
Why It Matters: Detailed data enables precise incrementality estimates; ecological inference aids insights from group-level data.

6. Handling Unusual Data
Approach: We employ multiple imputation for missing data, robust regression for outliers, and sensitivity analyses for anomalies.
Why It Matters: These methods address real-world data issues, ensuring data integrity.

7. Model Validation
Approach: We perform A/B tests, backtesting, out-of-sample validation, and comparisons with traditional marketing mix models.
Why It Matters: Validates the model’s accuracy and reliability across different scenarios.

8. Time-Based Adjustments
Approach: We incorporate Bayesian structural time series models to account for seasonality, trends, and external events.
Why It Matters: Captures temporal patterns like holiday spikes and market shifts.

9. Sample Size Requirements
Approach: We conduct power analyses and use adaptive sampling to balance statistical significance and cost-efficiency.
Why It Matters: Ensures sufficient data for reliable insights without wasting resources.

10. Model Flexibility
Approach: Our model uses transfer learning to adapt to various campaign types and objectives, from awareness to conversion.
Why It Matters: Enables consistent measurement across diverse marketing strategies.

#RetailMedia #Incrementality #MarketingAnalytics #DataScience
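The power analysis in point 9 can be made concrete. Here is a minimal sketch using the standard normal approximation for a two-sample test; the effect size, standard deviation, and thresholds are illustrative assumptions, not figures from the post:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect, sd, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sample z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# Hypothetical: detect a $2 lift in average basket size, sd = $15
n = sample_size_per_group(effect=2.0, sd=15.0)
```

With these assumed numbers, roughly 880 customers per group are needed; halving the detectable effect quadruples the requirement, which is why power analysis precedes any holdout design.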
Attribution Model Validation Techniques
Summary
Attribution model validation techniques help businesses confirm that their marketing measurement models accurately identify which channels and tactics drive real results. These techniques use experiments, surveys, and data modeling to reveal how marketing efforts impact sales, separating actual influence from misleading signals.
- Run incrementality tests: Compare groups exposed to marketing versus those not exposed to determine the true impact of your ads or campaigns on revenue.
- Draw insights from multiple models: Use a mix of attribution methods like media mix modeling, post-purchase surveys, and multi-touch attribution to cross-check results and avoid relying on a single data source.
- Build trust with randomized control groups: Set up holdout samples to challenge assumptions and reveal what genuinely drives business growth, rather than relying on potentially biased tracking data.
I’m done playing games with my marketing attribution… So I did something bold, and I’m never looking back.

With browsers and platforms cracking down on cookies, our trusty attribution models are becoming less reliable. Each platform is fighting for credit, and we’re left piecing together a fractured picture.

If we’re talking about bringing data analysis back to a science, we have to acknowledge that no scientist would use a complex multi-model mix to analyze the impact of a drug. To them, it sounds wrong and riddled with bias. They would use a randomized, double-blind, placebo-controlled trial to isolate the drug's impact. Our equivalent in marketing is a randomized control group.

We started using randomized control groups to measure incremental revenue lift and realized our assumptions were wrong. It was mind-blowing how often these insights differed from unreliable last-touch attribution models. We began to understand what truly drove lift and what was noise.

Last-touch models optimize for last-touch experiences. You start believing your abandoned-cart email or holiday sale offer was the true driver of revenue. Imagine a relay race between a marathon runner and a sprinter: if you only monitor the sprinter’s impact on crossing the finish line, you’ll obsess over shaving seconds off their time. The bigger impact comes from shaving minutes off the marathon runner's time.

The greatest competitive advantages come from first-principles thinking. When you trust your measurement system, your strategy is rooted in reality. If you’re using a last-touch-centric model, you may be living in a fantasy scripted by biased platforms that want your budget.

And to clarify: we still use other models to measure lift, but we view them purely as directional data. The only fully trusted source is our randomized-holdout-based attribution metrics. Get back on that juice cleanse and bring your numbers back down to reality.

From a strong foundation, opportunities to drive step-change impact on your business will become abundantly clear. How are you navigating the attribution puzzle?
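The randomized-control-group comparison described above boils down to a two-sample test on revenue per user. A minimal sketch with simulated data; the group sizes, means, and the true $0.80 lift are invented for illustration:

```python
import random
from statistics import NormalDist, mean, stdev

def incremental_lift(test, control, alpha=0.05):
    """Two-sample z-test on revenue per user; returns (lift, is_significant)."""
    lift = mean(test) - mean(control)
    se = (stdev(test) ** 2 / len(test) + stdev(control) ** 2 / len(control)) ** 0.5
    z = lift / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return lift, p < alpha

random.seed(7)
control = [random.gauss(10.0, 4.0) for _ in range(5000)]  # holdout: no ads
test = [random.gauss(10.8, 4.0) for _ in range(5000)]     # exposed: ads shown
lift, significant = incremental_lift(test, control)
```

The estimated lift lands near the simulated $0.80 because the groups were randomized; a last-touch model sees none of this, since the holdout group has no touchpoints at all.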
-
Meta, Google, TikTok, and other ad channels are misleading you. Third-party attribution tools like Triple Whale and Northbeam aren't better; they’re flawed too.

Tracking has always relied on estimated models, not hard numbers. After iOS 14, tracking became harder, leading to a surge in third-party solutions. But these also provide conflicting data, making it tough to find the truth.

So, what is the truth? The only reliable way to measure your marketing efforts is through incrementality tests. These tests answer the question, "What if this channel or ad never existed?" By showing ads to one group and withholding them from another, you can measure the true impact on revenue and profit.

For example, if you're running Facebook ads and selling on Shopify and Amazon, incrementality tests reveal how Facebook ads affect Amazon sales. Without the initial Facebook touchpoint, an Amazon purchase might never have happened, even though traditional attribution wouldn’t show it. This is why ROAS and third-party attribution aren’t accurate: they rely on models that can be thwarted by privacy settings and cross-channel purchases.

We ran a 14-day Meta holdout test and found that zip codes shown ads generated 50% more Amazon revenue than those not shown ads, despite the ads sending traffic to Shopify.

Now is the perfect time to run these tests. Q3 is calm, free from major holidays that skew results. This is your chance to optimize before Q4. If your brand generates seven figures annually, this should be a top priority for growing profits in Q4.
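A holdout test like the 14-day Meta example reduces to comparing average revenue across exposed versus held-out geographies. A minimal sketch; the zip-level revenue figures are hypothetical, chosen only to echo the ~50% lift mentioned:

```python
def geo_lift(exposed_revenue, holdout_revenue):
    """Relative lift of exposed geos over holdout geos (assumes matched groups)."""
    exposed_avg = sum(exposed_revenue) / len(exposed_revenue)
    holdout_avg = sum(holdout_revenue) / len(holdout_revenue)
    return (exposed_avg - holdout_avg) / holdout_avg

# Hypothetical 14-day Amazon revenue per zip code
exposed = [1500, 1480, 1620, 1550]  # zips shown Meta ads
holdout = [1000, 990, 1050, 1020]   # zips with ads withheld
lift = geo_lift(exposed, holdout)   # ~0.51, i.e. ~50% more revenue
```

In practice the zips would be matched on historical sales before the split, and many more geos are needed for the difference to be statistically meaningful.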
-
I've spent $100M+ on Meta in the DTC space, and I use three attribution models.

Ad platforms are notorious for taking credit for view-through conversions they didn't drive. They do it to bait you into spending more. The issue is that your top 1-2% of ads should drive ~50% of your spend and revenue; if you're relying on bad attribution, you won’t be able to find them. This is why 8-9 figure brands (that NEED their tracking to be faultless) use three attribution models:

1. Multi-touch attribution (MTA): for ad- and campaign-level optimization. This is your Triple Whale or Northbeam. Great for knowing which ads are performing best, which to scale, and which to cut. Not as good for comparing channel to channel, and it will overcount total revenue, which you need to be careful about. To make sure your account is well optimized, plot CPA vs. spend on a scatter plot; the top ads should sit in the low-CPA, high-spend zone.

2. Post-purchase survey: for channel-level allocation. Get a 35%+ response rate, extrapolate to all new customers, and calculate your cost per new customer per channel. This tells you which channel to push into. Click-based attribution overvalues lower-funnel performance by up to 250%. Post-purchase surveys catch what click attribution misses: top-of-funnel creative can drive 13X more incremental acquisitions than bottom-of-funnel.

3. Marketing mix model (MMM): for validating direction. You can't use this daily, but it confirms your post-purchase survey is sending you the right way; you then use the survey daily to optimize channel allocation. Some channels drive low-quality customers that look good on ROAS but don't stick around. MMM helps you optimize for 12-month profit rather than just immediate return.

One more thing: view-through attribution is a poor signal. Set your attribution windows to 7- or 14-day click, depending on your purchase funnel. One-day view will overcount.

Here's what this gives you: when performance drops, you know exactly where to pull budget with the smallest impact on revenue while keeping the company profitable. When things are going well, you know exactly where to push budget to scale effectively.

Bottom line:
-> Use MTA for ads and campaigns.
-> Use post-purchase surveys for channel allocation.
-> Use MMM to validate you're heading in the right direction.

This is how 8-9 figure brands figure out where every dollar should go.
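The survey extrapolation in step 2 can be sketched as follows; the channel names, spend, and response counts are hypothetical:

```python
def cost_per_new_customer(spend, survey_counts, responses, new_customers):
    """Extrapolate survey channel shares to all new customers, then divide spend."""
    result = {}
    for channel, mentions in survey_counts.items():
        share = mentions / responses             # channel's share of survey answers
        attributed = share * new_customers       # extrapolate to every new customer
        result[channel] = spend[channel] / attributed
    return result

spend = {"meta": 50_000, "tiktok": 20_000, "youtube": 10_000}
survey = {"meta": 180, "tiktok": 90, "youtube": 30}  # "Where did you hear about us?"
cpnc = cost_per_new_customer(spend, survey, responses=350, new_customers=4_000)
```

Here TikTok comes out cheapest per new customer, so it is the channel to push into, even if platform-reported ROAS says otherwise. The extrapolation is only as good as the response rate, which is why the post insists on 35%+.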
-
Triangulation. Why do we need three methods to measure the impact of media?

We use measurement to:
- Identify what worked in the past
- Optimize the present
- Forecast and plan the future

Unfortunately, no single tool can do everything. But you can use the following methods together:
1.) Media mix modeling (MMM)
2.) Experiments (geo tests, lift studies)
3.) Multi-touch attribution (MTA)

Let's break down what each is good for.

1.) Media mix modeling (MMM)
This takes your media (impressions, spend) and models it against your outcome (revenue, leads, profit). It answers which factors, channels, and tactics impact that outcome.
Pros:
- Holistic; can measure all channels
- Calculates incrementality
- Can give you a baseline
- Can measure lag (the ad-stock effect)
- Privacy-proof
- Incorporates factors beyond media
Cons:
- Not granular
- Can be technically challenging to run
We use MMM for...
✅ Measuring the past
❌ Optimizing the present
✅ Planning the future

2.) Experiments
Geo tests are the most popular. This method finds similar geographies (city, state, DMA), which lets you measure the impact of pulsing media up, down, or off.
Pros:
- Statistically accurate
- Calculates incrementality
- Privacy-proof
Cons:
- Time-intensive across many channels
- Challenging in smaller countries
- Lost revenue from holdouts
We use experiments for...
❌ Measuring the past
✅ Optimizing the present
✅ Planning the future

3.) Attribution (MTA)
This stitches journeys together at the user level and assigns credit to the channels and campaigns the user engaged with (click, view). Tools like Google Analytics, or even Meta's and Google's internal platforms, use attribution.
Pros:
- Data is real-time
- Easy to get the data
- Visitor/user-level data
Cons:
- Blind to offline and non-click channels
- Relies on cookies; not privacy-proof
- Does not measure incrementality
We use MTA for...
✅ Measuring the past
✅ Optimizing the present
❌ Planning the future

So, how do mature brands put this all together (triangulation)?
1.) Measure the past using MMM and MTA
- What worked?
- Which channels were incremental?
- What is our baseline?
2.) Use MTA and experiments to optimize the present
- MTA for campaign-level data within a single platform
- Experiments to validate the MMM
3.) Forecast and plan the future
- MMM to model and scenario-plan

What would you change or add about this approach? #triangulation #measurement #methods
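The lag (ad-stock) effect listed among MMM's strengths is commonly modeled as a geometric carryover: a fraction of each period's advertising effect persists into the next. A minimal sketch, with an illustrative decay rate and spend series:

```python
def adstock(spend, decay=0.5):
    """Geometric ad-stock: each period keeps `decay` of the prior carried effect."""
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried  # new spend plus decayed leftover
        out.append(carried)
    return out

weekly_spend = [100, 0, 0, 50]
effective = adstock(weekly_spend)  # [100.0, 50.0, 25.0, 62.5]
```

An MMM would then regress revenue on these adstocked series (plus baseline factors like seasonality) rather than on raw spend, which is how it credits weeks where ads ran earlier but sales arrived later.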
-
One of the most common questions we get is: how do you build the perfect attribution model? The answer? There’s no such thing (and that’s OK). But here’s the good news: you don’t need perfection to drive meaningful insights. One of our favorite ways to get as close as possible is a process we call triangulation testing. Here’s how it works:

👉 Choose a test month (we like February because it’s short and provides quicker results).

1️⃣ Create common-sense holdout tests. These are specific experiments where you turn off or adjust spending on certain channels or audiences. Start with the biggest spend areas to maximize impact. For example, you could turn off paid Facebook in California or pause CRM/email campaigns for a specific segment.

2️⃣ Set up control and test markets. This helps you measure the lift or decline compared to the baseline.

3️⃣ Analyze results with source-of-truth reporting. Use tools like Google Analytics and Meta Ads Manager to compare against your other data points. Measure how the holdout markets perform against your control to identify trends and create coefficients for your attribution model.

👉 Refine and repeat. The key is using all three approaches to develop a methodology, implement it, and revisit it annually. This part is important: don't revisit it until the next year, otherwise you'll waste too much time and too many brain cells chasing a perfect attribution model, and you'll drive yourself crazy.

Clarity comes from combining multiple perspectives rather than relying on a single data source. Attribution isn’t about perfection; it’s about getting actionable insights and continuously optimizing. If you’re ready to experiment, test, and iterate, you’re already ahead of the curve.
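The coefficient step above can be sketched as a simple lift calculation between control and holdout markets; the revenue numbers are hypothetical:

```python
def channel_coefficient(control_revenue, holdout_revenue):
    """Share of revenue the paused channel drove, per the holdout test.
    Control markets kept the channel on; holdout markets turned it off."""
    return (control_revenue - holdout_revenue) / control_revenue

# Hypothetical February test: paid Facebook paused in holdout markets
coef = channel_coefficient(control_revenue=200_000, holdout_revenue=170_000)
# coef = 0.15 -> the channel drove ~15% of revenue in control markets
```

That coefficient can then discount platform-reported conversions: if Meta Ads Manager claims 30% of revenue but the holdout shows 15%, credit is scaled down by half in the blended model.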