Founder: Our ads aren't converting!
Me: Show me your customer journey.
Founder: See ad • Buy product. Simple!
Me: That's not a journey. That's wishful thinking.

Here's the REAL customer journey:

Discovery: See ad • Ignore • See again • Ignore • Third time • Click • Slow page • Close
Research: Google brand • Wrong spelling • Exit
Consideration: Retargeting ad • Click • Expensive • Ask people • Sounds fishy • Check competitor • Mixed reviews • Exit
Decision: Back to you • Add cart • Shipping shock • Abandon • Discount email • Finally buy

And you thought it was "See ad • Buy product."

The customer journey isn't a sprint. It's a marathon with tea breaks.

P.S.: This is an extremely simplified journey!
Marketing Campaign Evaluation
Explore top LinkedIn content from expert professionals.
-
A company runs an A/B test. Version B wins: 12% lift, statistically significant. Champagne. 🎉

Six months later, revenue is flat. What happened? They averaged over their customers. Rookie move. (I've done it too.)

Version B: +20% for new users. But -8% for returning customers. New users outnumbered returners in the test, so B "won." Then the customer mix shifted. More returners. The "winning" variant was slowly bleeding its best users.

This is Simpson's Paradox: when aggregate trends reverse at the subgroup level. It's not exotic. It's everywhere. Data-driven teams walk into this constantly, because the first rule of being data-driven is "run the test and trust the average."

The fix isn't more data. It's asking: for whom did each version win? Averages describe populations. They don't describe people.

The most dangerous phrase in analytics isn't "we don't have data." It's "the data is clear."

For those wrestling with weird A/B results: I see you! Ask who's in your sample before you pop the champagne.
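For anyone who wants to poke at the mechanic, here's a minimal Python sketch of the failure mode above. All the conversion rates and mix shares are hypothetical, chosen to reproduce the "+20% new / -8% returning" pattern from the post; the point is how the aggregate lift flips sign as the customer mix shifts.

```python
# A minimal, self-contained sketch of the Simpson's Paradox failure mode
# described above. All numbers are hypothetical, chosen to reproduce the
# "+20% for new users, -8% for returners" pattern from the post.

# Per-segment conversion rates (hypothetical)
RATE_A = {"new": 0.050, "returning": 0.100}
RATE_B = {"new": 0.060, "returning": 0.092}  # +20% new, -8% returning

def aggregate_lift(new_share: float) -> float:
    """Blended lift of B over A when `new_share` of traffic is new users."""
    mix = {"new": new_share, "returning": 1.0 - new_share}
    blended_a = sum(mix[s] * RATE_A[s] for s in mix)
    blended_b = sum(mix[s] * RATE_B[s] for s in mix)
    return blended_b / blended_a - 1.0

for s in RATE_A:
    print(f"{s:>9}: segment lift = {RATE_B[s] / RATE_A[s] - 1.0:+.1%}")

print(f"test period (80% new users):   aggregate lift = {aggregate_lift(0.80):+.1%}")
print(f"six months later (30% new):    aggregate lift = {aggregate_lift(0.30):+.1%}")
```

With 80% new users the aggregate reads roughly +10.7%, so B "wins"; shift the mix to 30% new and the same variant reads about -3.1%. Same rates, different population.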
-
Your most effective channel is losing you sales. You can often make campaigns more effective by moving money to less effective channels. What?

Marketing Science maestro Simon Toms explains how: In the example image, the blue line represents a channel that's 2x more effective than the pink one at every spend level. $1M invested in Channel 1 returns $2M in incremental revenue (A). But split the $1M between Channels 1 and 2 (50:50) and you'd drive $2.5M total incremental revenue (B + C). That's 25% more revenue from investing in a "less effective" channel.

So what? Don't accept average metrics alone; always look to understand the marginal returns. Ideally you should know the curves for all your investments. MMM can obviously help with this, but incrementality testing typically provides more detailed curves based on actual sales rather than modelled ones.

Incrementality testing is not A/B testing. It's test and control: the test group sees the ad; the control group (who match the ad audience but are withheld from the ads) don't. The difference is the incremental impact. (In an A/B test you do not withhold a segment of your audience from seeing the ad, so it can't measure incremental impact.)

Here's where curves from incrementality testing can help:

1. Optimal full funnel. Different optimisations have very different curves. The curve for reach spend is very different to conversion spend, which can be very different to ASC activity, etc. Plotting curves helps you understand where you should pull back investment and where you should double down: critical insight for maximizing incremental returns.

2. Channel synergy. The curve for one channel changes depending on your investment in others. Charlie Oscar found that social reach improves paid search performance by 32%, that YouTube improves email by up to 25%, and, most striking of all, that 70% of the value from social and video channels is their impact on other channels, with only 30% direct.

3. Plan at the margins. Don't use average ROIs to determine where to shift your budget. It depends on the curve, not the average. Incremental returns show which channels to invest in; marginal returns show how much. Your most effective channel often isn't where you should put your next $.

Bottom line: To make your campaigns work harder, you need to understand how each investment works at the margins. That's the route to higher returns across the mix.
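To make the maths concrete, here's a toy sketch in Python. The square-root response curves are stand-ins I made up (not the curves from Simon's image, so the exact numbers differ from the post's), but they show the same mechanic: even when one channel is 2x as effective at every spend level, the revenue-maximizing plan still splits the budget.

```python
# A toy sketch of the marginal-returns argument above. The square-root
# response curves are hypothetical stand-ins, not the curves from the
# post's image, but they exhibit the same diminishing returns.

import numpy as np

def revenue_ch1(spend_m): return 2.0 * np.sqrt(spend_m)  # "2x more effective"
def revenue_ch2(spend_m): return 1.0 * np.sqrt(spend_m)

BUDGET = 1.0  # $1M total

splits = np.linspace(0.0, 1.0, 101)  # share of budget given to Channel 1
totals = revenue_ch1(splits * BUDGET) + revenue_ch2((1 - splits) * BUDGET)

best = splits[np.argmax(totals)]
print(f"All-in on Channel 1:       ${revenue_ch1(BUDGET):.2f}M incremental revenue")
print(f"Best split ({best:.0%} to Ch1): ${totals.max():.2f}M incremental revenue")
# The optimum sits where MARGINAL returns are equal across channels,
# not where average ROI is highest.
```

Under these made-up curves the optimal split is about 80:20 and beats going all-in by roughly 12%; with the flatter curves in Simon's example the gap is larger, but the logic is identical.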
-
I used to have this FOMO... Is my brand even visible?

When we started, our scooters were just black. Plain black. And to make them look like Zypp, we slapped a small green sticker on them. That was our brand identity... just a sticker.

I'd stand on balconies, rooftops, scanning the streets, trying to spot a Zypp scooter. But with only a thousand on the road, it wasn't easy. Every 10th or 20th scooter might be ours, but that tiny green sticker on black? Almost invisible.

And that's when it hit me. "If I can't spot my own brand, how will the world?"

So, I made a call. What if the entire scooter was green? Not just a sticker... yes, the whole thing. And we started manually wrapping them. Every single one. A small branding fix that turned into a game changer.

Then we went to OEMs and said, "We need green. Nothing else. Get it registered as green, or we won't buy." At first, they hesitated. Big brands. Small startup. Who listens to us? But when they saw our volume commitment, they aligned.

Today, our green scooters are everywhere. You don't look for Zypp anymore... you see Zypp. That's the power of one bold decision. A small tweak. A massive impact.

Today some people say, "Hara hai to Zypp hai" (if it's green, it's Zypp). We've definitely come a long way.

P.S. Branding isn't just about visibility. It's about owning space in people's minds. And sometimes, all it takes is a color.

#branding #marketing #color #logo #brand #startup #green #zypp
-
Machine learning for dynamic pricing optimization offers businesses a competitive edge by enabling them to adjust prices in real time, ensuring they remain responsive to market demand, customer behavior, and competition, ultimately maximizing revenue and profitability.

Machine learning, a subset of AI, allows systems to learn from data and improve without explicit programming, identifying patterns and making predictions from historical data. In pricing optimization, it helps set prices strategically by considering demand, competition, costs, and customer perception. Fundamental data types used include sales history, market trends, competitor pricing, customer behavior, demographics, seasonality, and search trends. Standard algorithms, such as regression, decision trees, neural networks, clustering, and reinforcement learning, are applied to predict demand shifts. Dynamic pricing then adjusts prices in real time, boosting revenue and competitiveness.

For business implementation, ML models can be integrated with existing systems such as sales, ERP, and CRM platforms, allowing real-time price adjustments. Challenges include maintaining high data quality, investing in technology and skills, and addressing ethical and regulatory concerns around dynamic pricing, customer perception, and compliance.

#ai #MachineLearning #Pricing #CRO #COO
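As a concrete illustration, here's a deliberately minimal sketch of the core loop: fit a demand model on price/sales history, then search prices for the revenue maximum. Everything here, from the synthetic data to the linear demand form, is an assumption for illustration; production systems use far richer features (seasonality, competitor prices, customer segments) and refit continuously.

```python
# A minimal sketch of ML-driven price optimization, assuming a simple
# (synthetic) dataset of (price, units_sold) history. This only shows the
# core loop: fit a demand model, then search prices for max revenue.

import numpy as np

# Hypothetical sales history: higher price -> fewer units (with noise).
rng = np.random.default_rng(0)
prices = rng.uniform(5.0, 15.0, 200)
units = np.maximum(0, 120 - 7.0 * prices + rng.normal(0, 5, 200))

# Fit a simple linear demand curve: units ~ b0 + b1 * price.
b1, b0 = np.polyfit(prices, units, deg=1)

# Search a price grid for the revenue-maximizing point.
grid = np.linspace(5.0, 15.0, 101)
revenue = grid * np.maximum(0, b0 + b1 * grid)
best = grid[np.argmax(revenue)]
print(f"Fitted demand: units ≈ {b0:.1f} {b1:+.2f} * price")
print(f"Revenue-maximizing price ≈ ${best:.2f}")
# In production the model is refit continuously, and the grid search is
# replaced by an optimizer or a reinforcement-learning pricing policy.
```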
-
👀 Lessons from the Most Surprising A/B Test Wins of 2024 📈

Reflecting on 2024, here are three surprising A/B test case studies that show how experimentation can challenge conventional wisdom and drive conversions:

1️⃣ Social proof gone wrong: an eCommerce story
🔬 The test: An eCommerce retailer added a prominent "1,200+ Customers Love This Product!" banner to their product pages, thinking that highlighting the popularity of items would drive more purchases.
✅ The result: The variant with the social proof banner underperformed by 7.5%!
💡 Why it didn't work: While social proof is often a conversion booster, the wording may have created skepticism, or users may have seen the banner as hype rather than valuable information.
🧠 Takeaway: Removing the banner made the page feel more authentic and less salesy.
⚡ Test idea: Test removing social proof; overuse can backfire, making users question the credibility of your claims.

2️⃣ "Ugly" design outperforms sleek
🔬 The test: An enterprise IT firm tested a sleek, modern landing page against a more "boring," text-heavy alternative.
✅ The result: The boring design won by 9.8% because it was more user-friendly.
💡 Why it worked: The plain design aligned better with users' needs and expectations.
🧠 Takeaway: Think function over flair. This test is a reminder that a "beautiful" design doesn't always win; it's about matching the design to your audience's needs.
⚡ Test idea: Test functional versions of your pages to see if clarity and focus drive better results.

3️⃣ Microcopy magic: a SaaS example
🔬 The test: A SaaS platform tested two versions of the primary call-to-action (CTA) button on their main product page: "Get Started" vs. "Watch a Demo".
✅ The result: "Watch a Demo" achieved a 74.73% lift in CTR.
💡 Why it worked: The more concrete, instructive CTA clarified the action and the benefit of taking it.
🧠 Takeaway: Align wording with user needs to clarify the process and make taking action feel less intimidating.
⚡ Test idea: Test your copy. Small changes can make a big difference by reducing friction or perceived risk.

🔑 Key takeaways
✅ Challenge assumptions: Just because a design is flashy doesn't mean it will work for your audience. Always test alternatives, even if they seem boring.
✅ Understand your audience: Dig deeper into your users' needs, fears, and motivations. Insights about their behavior can guide more targeted tests.
✅ Optimize incrementally: Sometimes small changes, like tweaking a CTA, can yield significant gains. Focus on areas with the least friction for quick wins.
✅ Choose data over ego: These tests show that the "prettiest" design or "best practice" isn't always the winner. Trust the data to guide your decision-making.

🤗 By embracing these lessons, 2025 could be your most successful #experimentation year yet.

❓ What surprising test wins have you experienced? Share your story and inspire others in the comments below ⬇️

#optimization #abtesting
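One practical footnote to results like these: before declaring a winner, check that the lift is statistically significant. Here's a minimal two-proportion z-test sketch in Python, using hypothetical counts (not the case-study data above):

```python
# Minimal two-sided z-test for a difference in conversion rates.
# The counts below are hypothetical, purely for illustration.

from math import sqrt, erf

def ab_ztest(conv_a, n_a, conv_b, n_b):
    """Return (relative lift, z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return p_b / p_a - 1, z, p_value

lift, z, p = ab_ztest(conv_a=400, n_a=10_000, conv_b=440, n_b=10_000)
print(f"lift={lift:+.1%}  z={z:.2f}  p={p:.3f}")
```

With these numbers a headline "+10% lift" comes out at p ≈ 0.16, i.e. not significant at that sample size. A big percentage is not the same as a real win.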
-
"If we take a public stance on a social issue, will it boost brand perception or backfire?"

This is one of the most hotly debated questions I hear from founders, CMOs, and brand leaders. While the answer is nuanced, it's worth studying brands that have built campaigns on taking a stand.

Consider e.l.f. Beauty. In 2024, the brand launched its audacious "So Many Dicks" campaign, highlighting gender and racial disparities in U.S. boardrooms by pointing out that there were more men named Richard, Rick, or Dick on public company boards than women or underrepresented groups. The campaign was data-backed, paired with tangible initiatives, and generated a huge cultural conversation. Marketing industry accolades poured in.

That said, I was curious to learn how it affected consumers' awareness, consideration, and perception of the brand. According to Tracksuit data (March '25 to Aug '25), in the US Skincare & Makeup category:

✅ 19% brand preference: nearly 2 in 10 shoppers say e.l.f. is their top choice, one of the hardest metrics to grow.
✅ 45% agree the brand is "for people like me."
✅ 55% agree its products are "worth the price," ahead of brands like L'Oreal, ColourPop, and NYX.

e.l.f. has built its reputation by moving fast, breaking conventions, and leaning into culture. By choosing an issue aligned with its values and executing a strong campaign, it converted attention into brand strength.

💡 Takeaways for #challengerbrands: Making a big campaign bet is high stakes, but with always-on brand data, you can:
- Get pre-launch data to understand which audience to target and which channels reach them.
- Prove the campaign shifted awareness, consideration, and perceptions, not just reached people.
- Demonstrate ROI on brand activity beyond the buzz.

That's why I'm glad tools like Tracksuit make this data accessible to small challenger brands as well.

💬 Your turn: Which brand do you want me to explore next in this challenger brand series? Drop your pick in the comments ⬇️

📎 Link to download the Tracksuit Brand Playbook is in the first comment.

#BrandStrategy #marketing #data #partnership
-
If you've heard "half the money I spend on advertising is wasted," this paper might be for you. A new study using Nielsen household-level data across 40 brands (mostly colas, cereals, and other low-differentiation goods) empirically surfaced several nuggets of wisdom. TL;DR for my fellow marketing science folks? It provides a microfoundation for positive and negative spillovers in advertising, especially among habitual buyers.

1️⃣ When consumers are exposed to multiple ads in short succession, their ability to remember any single ad suffers. This interference leads to misattribution (you remember seeing an ad, but forget for whom) or simple forgetting (you saw something, but it didn't stick).
Application: If you're running simultaneous campaigns across competing brands or categories, be careful. One campaign may cannibalize the effectiveness of another (yours or your competition's!). A toy simulation of this mechanic is sketched after this post.

2️⃣ Advertising has positive spillovers when it reinforces memory for similar products (e.g. Coke and Pepsi), but negative spillovers when ads are for very different goods (e.g. cereal and shampoo). The framework helps explain both over- and underperformance in cluttered advertising environments.
Application: This gives a theoretical grounding for media mix optimization. Context (category similarity, consumer habits, ad sequencing) matters just as much as spend level.

3️⃣ The strongest effects of advertising (good and bad) are seen among habitual consumers. Ads increase their probability of purchasing again, but also make them vulnerable to interference or confusion when exposed to rival ads.
Application: Targeting loyalists with retention ads? Great. But make sure your competitors aren't tagging along for the ride.

Behavior isn't linear (surprise, surprise), because memory, attention, and habit shape how messages land. That has implications for measurement, budget optimization, and even MMM specs.

🔍 I'm attaching the NBER version of the paper. Check it out and let me know what your experiences are. Have you seen these effects in the wild?
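Here's the promised toy simulation of the interference mechanic in point 1. The parameters (a 4-point purchase lift, ad recall falling from 90% to 50% under rival exposure) are invented for illustration; this is the intuition in code, not the paper's model.

```python
# Toy simulation: rival-ad exposure degrades recall of your ad, which
# shrinks your measured lift. All parameters are made up for illustration.

import numpy as np

rng = np.random.default_rng(7)
n = 100_000
own_ad = rng.random(n) < 0.5    # half of households see your ad
rival_ad = rng.random(n) < 0.5  # independently, half see a rival's ad

# Your ad lifts purchase probability by 4 pts, but only if remembered;
# rival exposure cuts the chance of remembering it from 90% to 50%.
remember = np.where(rival_ad, 0.50, 0.90)
p_buy = 0.10 + 0.04 * own_ad * (rng.random(n) < remember)
buy = rng.random(n) < p_buy

for label, mask in [("rival quiet ", ~rival_ad), ("rival active", rival_ad)]:
    lift = buy[mask & own_ad].mean() - buy[mask & ~own_ad].mean()
    print(f"{label}: measured lift = {lift:+.3f}")
# Expect ~ +0.036 when the rival is quiet, ~ +0.020 when it's active:
# same creative, same spend, roughly half the effect.
```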
-
Your campaign grew 30%. Your actual return? 2%.

I ask this one question in almost every interview: "How do you measure the return on a marketing campaign you ran?" And almost every time, I get some version of "we tracked impressions and engagement" or "sales went up during the campaign." A very superficial assessment of the campaign.

Here's why that answer worries me. If you don't know what actually worked, how do you decide what to scale? A typical campaign runs TV, digital, print, radio, and influencer collabs simultaneously. Can you really isolate what drove the result? Not really.

Large brands use Market Mix Modelling (MMM) to understand the impact of multiple inputs, such as trade promo vs consumer promo vs pricing, on sales revenue. If you are a small brand, you can build in layers: start with one medium, measure impact, then add another. The two together should deliver more than either alone; if they don't, something isn't working. Studies by Les Binet and Peter Field show that adding an incremental medium should improve performance.

Next, pick a control market and a sample market. Run your input in the sample, keep the control clean. No promos, no other activity running there. And this is where people slip up. Your control can't be a market where some new trade scheme or promo is launching; that's not a fair comparison. Ideally, both markets should be at a similar maturity. Comparing a city at 10% penetration with one at 5% is not a clean read.

The magic word when judging success is incrementality. If your sample market grows by 30% and your control market grows by 28%, your true return is 2%. Not 30%. That 28% would have happened anyway. This one reframe completely changes how you evaluate spending.

Of course, not every situation allows a clean geo-split or audience-split A/B test. In those cases, you have to resort to a simple pre vs post: conversion was 10% before, now it's 15%, delta is 5%. And for the more evolved version, you build a predicted baseline using historical trendlines for the sample market, then compare actual growth against what would have happened had you not invested. This is especially useful when you genuinely can't find a decent control market.

Knowing what worked, and scaling it, can significantly improve returns.

What's your go-to method for measuring whether a marketing input actually moved the needle, or do you find most teams still rely on gut?

#marketingroi #incrementality #experimentdesign #brandstrategy #marketingmeasurement

PS: Views expressed are personal
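For the quantitatively inclined, here's a minimal sketch of the two reads described above: the test-vs-control subtraction and the predicted-baseline comparison. All numbers are hypothetical.

```python
# A minimal sketch of the two incrementality reads above, with made-up numbers.

import numpy as np

# 1) Geo test read: incrementality = sample growth minus control growth.
sample_growth, control_growth = 0.30, 0.28
print(f"incremental lift = {sample_growth - control_growth:+.1%}")  # +2.0%

# 2) Predicted-baseline read: fit a trend on pre-period sales, project it
# over the campaign window, and compare actuals to that counterfactual.
pre_weeks = np.arange(12)
pre_sales = 100 + 2.0 * pre_weeks + np.random.default_rng(1).normal(0, 3, 12)
slope, intercept = np.polyfit(pre_weeks, pre_sales, deg=1)

campaign_weeks = np.arange(12, 18)
baseline = intercept + slope * campaign_weeks  # "had we not invested"
actual = baseline * 1.08                       # pretend actuals came in +8%
print(f"incremental sales = {(actual - baseline).sum():.0f} units")
```

The second read stands or falls on the baseline fit, which is exactly why a clean control market is preferable when you can get one.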
-
Here's what actually works: integration.

Run experiments to get ground truth on what's driving incremental sales. Use MMM to understand the macro picture across all channels. Use attribution to find relative performance within channels, across campaigns. Each method covers the gaps in the others.

Experimentation gives you causality but limited coverage. MMM gives you a comprehensive channel view but works on correlation. Attribution gives you real-time granularity but can't tell you what's incremental.

Use one in isolation and you'll get precise numbers that are very wrong, or fuzzy numbers that miss the future. The companies getting this right aren't picking one method and hoping it works. They're combining all three and validating them against each other.

Measurement is either done right or it's done easily.

#MarketingMeasurement #MMM #Incrementality #MarketingScience #PerformanceMarketing
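What does "validating them against each other" look like in practice? One common pattern is calibration: where a channel has a lift test, use its causal read to rescale the MMM's estimate, and fall back to the MMM where it doesn't. A hypothetical sketch (channel names and ROI numbers are made up):

```python
# A hypothetical calibration sketch: reconcile MMM channel ROIs with
# experiment results where experiments exist. Numbers are illustrative.

mmm_roi = {"search": 3.2, "social": 1.8, "video": 1.1}  # modelled ROIs
lift_tests = {"social": 1.35}  # a lift test says social ROI is really ~1.35

calibrated = {}
for channel, roi in mmm_roi.items():
    if channel in lift_tests:
        # Trust the experiment where we have one; record the scaling factor
        # as a flag for how far off the model is for similar channels.
        factor = lift_tests[channel] / roi
        print(f"{channel}: MMM says {roi}, test says {lift_tests[channel]} "
              f"(calibration factor {factor:.2f})")
        calibrated[channel] = lift_tests[channel]
    else:
        calibrated[channel] = roi  # fall back to MMM where untested

print("calibrated ROIs:", calibrated)
```

The calibrated numbers then become the benchmark that day-to-day attribution is sanity-checked against.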