Advertising Design Effectiveness
Explore top LinkedIn content from expert professionals.

-

I fell for it. And I'm a marketer.

Walked into Zudio. Walked past the fragrance section. Saw a bold, bright "FREE" written on a perfume box. My Indian brain short-circuited for a second. "FREE?! Really?" Instantly picked it up.

Spoiler: it wasn't free. It's literally the name of the perfume: "FREE by Zudio (For Men)".

But guess what? That one moment of curiosity did its job. I noticed the product. Touched it. Engaged with it. Maybe even considered buying it. That's the first win for any brand. Zudio didn't just sell a fragrance here; they played with visual hierarchy, cultural cues, and psychology.

Here's why it worked so well:

1. Bold typography + emotional trigger word: "Free" in India is like a magnet. It taps into a deep-rooted love for value (or perceived value).
2. Visual disruption: The design broke the pattern. It didn't blend in with the rest; it stood out. So even on a shelf full of options, your eye goes there first.
3. The power of touchpoint: The moment a customer physically interacts with a product, the chances of purchase spike dramatically. You've already imagined owning it.
4. Curiosity-driven engagement: Even if you don't buy it today, you remember it. It leaves a cognitive imprint.

Marketing takeaway? Sometimes you don't need a discount to draw attention. You just need to know how people think, feel, and react.

So here's to brands like Zudio for giving us a real-time case study in consumer behaviour - one bold font at a time.

(P.S. No, I didn't buy it. But I almost did. And that's still a marketing win.)

#ConsumerPsychology #RetailMarketing #Zudio #Branding #MarketingStrategy #ImpulseBuying #PerceptionMatters #IndianRetail #UXinRealLife #BrandExperience #QuirkyMarketing
👀 Lessons from the Most Surprising A/B Test Wins of 2024 📈

Reflecting on 2024, here are three surprising A/B test case studies that show how experimentation can challenge conventional wisdom and drive conversions:

1️⃣ Social proof gone wrong: an eCommerce story
🔬 The test: An eCommerce retailer added a prominent "1,200+ Customers Love This Product!" banner to their product pages, thinking that highlighting the popularity of items would drive more purchases.
✅ The result: The variant with the social-proof banner underperformed by 7.5%!
💡 Why it didn't work: While social proof is often a conversion booster, the wording may have created skepticism, or users may have read the banner as hype rather than useful information.
🧠 Takeaway: Without the banner, the page felt more authentic and less salesy.
⚡ Test idea: Try removing social proof; overuse can backfire, making users question the credibility of your claims.

2️⃣ "Ugly" design outperforms sleek
🔬 The test: An enterprise IT firm tested a sleek, modern landing page against a more "boring," text-heavy alternative.
✅ The result: The boring design won by 9.8% because it was more user-friendly.
💡 Why it worked: The plain design aligned better with users' needs and expectations.
🧠 Takeaway: Think function over flair. A "beautiful" design doesn't always win; it's about matching the design to your audience's needs.
⚡ Test idea: Test functional versions of your pages to see if clarity and focus drive better results.

3️⃣ Microcopy magic: a SaaS example
🔬 The test: A SaaS platform tested two versions of the primary call-to-action (CTA) button on their main product page: "Get Started" vs. "Watch a Demo".
✅ The result: "Watch a Demo" achieved a 74.73% lift in CTR.
💡 Why it worked: The more concrete, instructive CTA clarified both the action and the benefit of taking it.
🧠 Takeaway: Align wording with user needs to clarify the process and make taking action feel less intimidating.
⚡ Test idea: Test your copy. Small changes can make a big difference by reducing friction or perceived risk.

🔑 Key takeaways
✅ Challenge assumptions: Just because a design is flashy doesn't mean it will work for your audience. Always test alternatives, even if they seem boring.
✅ Understand your audience: Dig deeper into your users' needs, fears, and motivations. Insights about their behavior can guide more targeted tests.
✅ Optimize incrementally: Small changes, like tweaking a CTA, can yield significant gains. Focus on areas with the least friction for quick wins.
✅ Choose data over ego: These tests show that the "prettiest" design or "best practice" isn't always the winner. Trust the data to guide your decision-making.

🤗 By embracing these lessons, 2025 could be your most successful #experimentation year yet.

❓ What surprising test wins have you experienced? Share your story and inspire others in the comments below ⬇️

#optimization #abtesting
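Results like these only mean something once the lift clears statistical significance. Here's a minimal sketch of that check in Python; the counts are hypothetical, not the data from the case studies above:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts, for illustration only -- not the actual case-study data.
#                     [converted, did not convert]
control = np.array([480, 9_520])   # 4.80% conversion
variant = np.array([444, 9_556])   # 4.44% conversion, i.e. a -7.5% relative lift

# Chi-square test of independence on the 2x2 outcome table
chi2, p_value, dof, _ = chi2_contingency(np.vstack([control, variant]))

lift = (variant[0] / variant.sum()) / (control[0] / control.sum()) - 1
print(f"relative lift = {lift:+.1%}, p = {p_value:.3f}")
```

With these made-up traffic volumes, a 7.5% relative drop does not even reach significance (p ≈ 0.24); the headline percentage alone never tells you whether an effect is real.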
-
Ginfluence

Packaging isn't just a wrapper. It's a trigger. Before we've even read a word, we've made up our minds. This looks expensive. That feels cheap. This one's for me. That one isn't. Design bypasses logic. Goes straight to the gut.

That's where neuromarketing steps in. Not what people say they like. What their bodies show they do.

The Ginnasium project set out to prove it. Five Italian designers. Two gin bottles each. No logos. No words. Just shape, stock, colour, texture. In partnership with UPM Raflatac, VETROelite, Luxoro, Vinolok and more, they tested how every design choice triggers instinct. Closure clicks. Glass weight. Label textures. Foil detail. Curves.

Then came the science. Eye trackers. Bio trackers. Brain trackers. Not shelf appeal. Shelf reaction.

Texture built trust. Heaviness suggested premium. Curves changed expectation. The smallest shifts made the biggest impact.

Design didn't just influence consumers. It left fingerprints. Ginnasium turned instinct into evidence. And proved what we already know. Every choice matters. Every surface speaks. Every second counts.

Design influences. This time, it was measured.

Still trusting opinions over instinct? Thoughts on the project findings?

📷 UPM Raflatac
-
Publisher experiments fail when they start with tactics, not hypotheses.

A/B testing has become a staple in digital publishing, but for many publishers it's little more than tinkering with headlines, button colours, or send times. The problem is that these tests often start with what to change rather than why to change it. Without a clear, measurable hypothesis, most experiments end up producing inconclusive results or chasing vanity wins that don't move the business forward.

Top-performing publishers approach testing like scientists: they identify a friction point, build a hypothesis around audience behaviour, and run the experiment long enough to gather statistically valid results. They don't test for the sake of testing; they test to solve specific problems that impact retention, conversions, or revenue.

3 experiments that worked, and why

1. Content depth vs. breadth: Instead of spreading efforts across many topics, one publisher focused on fewer topics in greater depth. This depth-driven strategy boosted engagement and conversions because it directly supported the business goal of increasing loyal readership, and the test ran long enough to remove seasonal or one-off anomalies.
2. Paywall trigger psychology: Rather than limiting readers to a fixed number of free articles, one publisher triggered the paywall after 45 seconds of engaged reading. This targeted high-intent users, converting 38% compared to just 8% for a monthly article meter, resulting in 3x subscription revenue.
3. Newsletter timing by content type: A straight "send time" test (9 AM vs. 5 PM) produced negligible differences. The breakthrough came from matching content type to reader routines: morning briefings for early risers, deep-dive reads for the afternoon. Open rates increased by 22%, resulting in downstream gains in on-site engagement.

Why most tests fail
• No behavioural hypothesis, e.g., "testing headlines" without asking why a reader would care
• No segmentation - treating all users as if they behave the same
• Vanity metrics over meaningful metrics - clicks instead of conversions or LTV
• Short timelines - stopping before 95% statistical confidence or a full behaviour cycle

What top performers do differently
✅ Start with a measurable hypothesis tied to business outcomes
✅ Isolate one behavioural variable at a time
✅ Segment audiences by actions (new vs. returning, skimmers vs. engaged)
✅ Measure real results - retention, conversions, revenue
✅ Run tests for at least 14 days or until reaching statistical significance (see the sample-size sketch below)
✅ Document learnings to inform the next test

When experiments are designed with intention, they stop being random guesswork and start becoming a repeatable growth engine.

What's the most valuable experimental hypothesis you're testing this quarter? Share with me in the comment section.

#Digitalpublishing #Abtesting #Audienceengagement #Contentstrategy #Publishergrowth
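One way to make "long enough" concrete is a quick power calculation before launch: given a baseline conversion rate and the smallest lift worth detecting, how many readers does each arm need? A minimal sketch with statsmodels; the baseline, detectable lift, and traffic figures are all hypothetical:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical inputs, for illustration only
baseline_cvr = 0.08    # e.g., the current meter converts 8% of eligible readers
relative_mde = 0.10    # smallest relative lift worth detecting (10%)

# Cohen's h effect size between the two conversion rates
effect = abs(proportion_effectsize(baseline_cvr, baseline_cvr * (1 + relative_mde)))

# Readers needed per arm for a two-sided test at 95% confidence, 80% power
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

daily_readers_per_arm = 1_500  # hypothetical traffic after a 50/50 split
print(f"~{n_per_arm:,.0f} readers per arm, "
      f"~{n_per_arm / daily_readers_per_arm:.0f} days at current traffic")
```

At these made-up numbers it works out to roughly 19,000 readers per arm, or about two weeks of traffic, which is one reason a 14-day floor is a sensible default.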
-
At Swiggy, every product feature goes live via A/B testing. Experimentation is such a goldmine for decision-making, and as someone who didn't even know how to do it right a year back, below is my step-by-step approach to statistical analysis.

The problem statement: you're working to improve the conversion rate for a product signup flow. You've implemented several changes - now, how do you figure out which one really makes a difference?

1️⃣ Define the Hypothesis
Before diving into the test, clearly define what you're testing.
Example: "Will the new signup flow increase the conversion rate (CVR)?"

2️⃣ Define Clear Metrics
What does success look like? Are you aiming to increase the percentage of sign-ups, or reduce drop-offs at a specific funnel stage?
Success metric: Conversion rate or step completion rate.
Check metric: Have a secondary metric to ensure nothing else breaks (e.g., page load times or errors).

3️⃣ Test One Change at a Time
Testing multiple changes at once (e.g., a new form layout and an incentive like a discount) won't help you pinpoint which one worked.

4️⃣ Split Your Traffic
Determine the sample size you need and split users randomly into two groups:
- Group A (Control): Users see the current version
- Group B (Test): Users see the new version

5️⃣ Collect Data & Analyze
Monitor key metrics like CTR, conversion rates, or churn - choose metrics tied to the business goal.
Example: Track how the new signup form impacts the completion rate of the entire signup process.

6️⃣ Analyze Statistical Significance
You see a 15% increase in conversions with the new form - great! But is it statistically significant, or could it be due to random variation? Use p-values and z-scores to validate that the changes are meaningful and not just due to chance (see the sketch below this post).

7️⃣ Interpret Results & Take Action
Once you've confirmed statistical significance, interpret the results in a business context.
Example: If the new form significantly increases conversion without hurting overall user satisfaction, it's time to implement it at scale.

💡 What are some of your go-to strategies for an effective A/B test? Share your insights in the comments!
___________
🔔 Follow Sanya Swain
♻ Repost to help others find it
💾 Save this post for future reference

#businessanalysis #dataanalytics #dataanalyst #analytics #businessinsights #womenintech #product #sql #datascience #abtesting
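As a companion to step 6, here is a minimal sketch in Python using statsmodels. The counts are hypothetical, chosen to mirror the ~15% lift in the example above; this is not real Swiggy data:

```python
from statsmodels.stats.proportion import (
    confint_proportions_2indep,
    proportions_ztest,
)

# Hypothetical counts: [control, test], roughly a 15% relative lift
signups = [1_150, 1_322]    # completed signups per arm
users = [10_000, 10_000]    # users randomly assigned to each arm

# Two-proportion z-test: is the difference in CVR due to chance?
z_stat, p_value = proportions_ztest(count=signups, nobs=users)

# 95% confidence interval for the absolute difference (test - control)
lo, hi = confint_proportions_2indep(
    signups[1], users[1], signups[0], users[0], compare="diff"
)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}, diff CI = ({lo:.2%}, {hi:.2%})")
```

With these made-up numbers the lift clears significance comfortably; with smaller samples, the same 15% lift often would not.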
-
Three designs to compare: do you run a bunch of t-tests, or just one ANOVA?

Imagine you're comparing three designs in a UX study. Pretty common, right? Now here's where many teams slip: they run multiple t-tests - Design A vs B, A vs C, and B vs C. On the surface it feels simple, but statistically it's a trap.

The problem is that running multiple t-tests inflates the chance of false positives (Type I error). Each test carries a small risk of being wrong, and when you stack them up, the error rate compounds. You might think a design is "better" when in reality you've just rolled the dice too many times. This isn't just a technical detail; it can mislead teams, waste resources, and steer products in the wrong direction.

The alternative? Use ANOVA (Analysis of Variance). ANOVA tests whether there are meaningful differences across all designs in one go, keeping error rates under control. If there is a difference, you can then use post hoc tests (like Tukey or Bonferroni) to see which designs truly stand apart.

Another option, especially in modern UX research, is Bayesian modeling, which gives richer insight into the probability that one design outperforms another. These approaches are safer, more informative, and ultimately lead to better design decisions.
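For anyone who wants to try this, here is a minimal sketch in Python on synthetic data (scipy.stats.tukey_hsd requires SciPy 1.8+):

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(7)

# Synthetic task-completion times (seconds) for three designs, 30 users each
design_a = rng.normal(42, 8, 30)
design_b = rng.normal(38, 8, 30)
design_c = rng.normal(45, 8, 30)

# With three pairwise t-tests at alpha = .05, the family-wise error rate
# is roughly 1 - 0.95**3, about 14.3%. One ANOVA keeps it at 5%.
f_stat, p_value = f_oneway(design_a, design_b, design_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Only if the omnibus test fires: Tukey HSD shows which pairs actually differ
if p_value < 0.05:
    print(tukey_hsd(design_a, design_b, design_c))
```

The same pattern applies to real completion times, SUS scores, or task ratings: one omnibus test first, pairwise comparisons only afterward.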
-
Why Your Customer Personas Might Be Completely Wrong

The image below says it all. Two men. Both male. Both born in 1948. Both raised in the UK. Both married twice. Both live in castles. Both wealthy and famous.

On paper, Prince Charles and Ozzy Osbourne look exactly the same. In reality... they couldn't be more different.

And that's the problem with how many companies still build their customer personas. If your "persona" is just demographics, you don't have a persona. You have a census report.

Great personas go deeper. They reflect:
✔ Motivations
✔ Fears
✔ Jobs-to-be-done
✔ Values
✔ Frustrations
✔ Desired outcomes
✔ Context and behaviours

Because people don't make decisions based on age or gender. They make decisions based on goals, challenges, and emotions.

Demographics tell you who your customer is. Psychographics tell you why they buy.

If you want better targeting, better content, better products - build personas that capture the human story, not just the statistical snapshot.

#Marketing #CustomerExperience #Personas #CX #ProductMarketing #DesignThinking #Segmentation #AIinMarketing
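If your team keeps personas as structured data (for segmentation tooling or a CRM), the census-vs-persona gap shows up as missing fields. A hypothetical sketch in Python; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class DemographicProfile:
    """What a census can tell you -- matches Charles and Ozzy equally."""
    sex: str
    birth_year: int
    country: str
    marriages: int
    residence: str
    wealthy: bool

@dataclass
class Persona(DemographicProfile):
    """The fields that actually separate two buyers."""
    motivations: list[str] = field(default_factory=list)
    fears: list[str] = field(default_factory=list)
    jobs_to_be_done: list[str] = field(default_factory=list)
    values: list[str] = field(default_factory=list)
    frustrations: list[str] = field(default_factory=list)
    desired_outcomes: list[str] = field(default_factory=list)

# Identical DemographicProfile; only the Persona fields tell them apart.
rocker = Persona(
    sex="male", birth_year=1948, country="UK", marriages=2,
    residence="castle", wealthy=True,
    motivations=["perform", "provoke"], values=["authenticity"],
)
```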
-
Neuromarketing used to be a specialist tool. It isn't anymore.

What I told the students at IULM Milan this week:

I just delivered the Lectio Magistralis at IULM University in Milan, one of the oldest and most respected communication schools in Europe. My central argument was simple: neuromarketing is no longer a niche instrument for the few. It is becoming a standard tool in every marketer's toolkit. And AI is the reason why.

Here's a concrete example of what that looks like in practice. During my talk, I ran a live analysis of a recent Nike ad using a Neurons neuroscience AI. Based on the results, the Neurons system not only predicted consumer responses but also provided insights and recommendations, ultimately using generative AI to create assets that performed significantly better.

The image shows the results. The original scored 5.6 on the Neurons Impact Score, which sits below our recommended launch threshold of 7.0. Three brand placements in a single frame, yet brand attention was suboptimal. Emotional engagement was low. The ad had decent comprehension and memory, but it wasn't earning attention where it mattered.

The diagnosis took minutes. So did the fix. Using the Creative AI Loop, the system itself generated improved versions. The iterative improvement lifted performance by 9%. The creative suggestion, a more saturated and boosted variant, pushed that to 11%. Total time: under 10 minutes.

This demo represents something bigger than a single ad optimization. It represents what happens when neuroscience stops being exotic and starts being operational. The implications run deeper than faster creative testing:

• Attention, emotion, memory, and cognition stop being abstract concepts and become measurable design inputs
• Marketers develop genuine fluency in how consumers actually process communication
• Confidence goes up. In our data, 100% of our enterprise users report higher confidence in the campaigns they take to market after running them through AI-powered neuroscience tools

That last number is not a marketing claim. It reflects something real: when you can see how a brain responds to your ad before you spend the budget, you make better decisions.

This is what democratization actually means in practice. Not lower standards. Higher ones, available to more people.

The future of neuromarketing is not the lab. It is the marketing team.

#neuromarketing #consumerneuroAI #advertising #brandstrategy

Check out more: https://lnkd.in/ebEjQmSv
-
Day 5 - CRO series: Strategy development ➡ A/B Testing (Part 1)

What is A/B testing? A/B testing, also known as split testing, is a method used to compare two versions of a marketing asset, such as a webpage, email, or advertisement, to determine which one performs better in achieving a specific goal. Most marketing decisions are based on assumptions. A/B testing replaces assumptions with data.

Here's how to do it effectively:

1. Formulate a Hypothesis
Every test starts with a hypothesis.
◾ Will changing a call-to-action (CTA) button from green to red increase clicks?
◾ Will a new subject line improve email open rates?
A clear hypothesis guides the entire process.

2. Create Variations
Test one element at a time.
◾ Control (Version A): The original version
◾ Variation (Version B): The version with a change (e.g., a different CTA color)
Testing multiple elements at once leads to unclear results.

3. Randomly Assign Users
Split your audience randomly (see the bucketing sketch after this post):
◾ 50% see Version A
◾ 50% see Version B
Randomization removes bias and ensures accurate comparisons.

4. Collect Data
Define success metrics based on your goal:
◾ Click-through rates
◾ Conversion rates
◾ Bounce rates
The right data tells you which version is actually better.

5. Analyze the Results
Numbers don't lie.
◾ Is the difference in performance statistically significant?
◾ Or is it just random fluctuation?
Use analytics tools to confirm your findings.

6. Implement the Winning Version
If Version B performs better, make it the new standard. If no major difference? Test something else.

7. Iterate and Optimize
A/B testing isn't a one-time task - it's a process.
◾ Keep testing different headlines, images, layouts, and CTAs
◾ Every test improves your conversion rates and engagement

Why A/B testing matters
✔ Removes guesswork - decisions are based on data, not intuition
✔ Boosts conversions - small tweaks can lead to significant growth
✔ Optimizes user experience - find what resonates best with your audience
✔ Reduces risk - test before making big, irreversible changes

Part 2 tomorrow.
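As referenced in step 3, here is a minimal sketch of deterministic traffic splitting. Hash-based bucketing is a common approach because the same user always lands in the same arm across sessions; the function below is an illustration, not a production assignment service:

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministic 50/50 split: one user, one variant, every visit."""
    # Including the experiment name decorrelates assignments across tests.
    key = f"{experiment}:{user_id}".encode()
    # First 8 hex chars of the hash, scaled to a uniform value in [0, 1]
    bucket = int(hashlib.sha256(key).hexdigest()[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < 0.5 else "B"

# Example: stable assignment -- repeated calls return the same variant
print(assign_variant("user-123", "cta-color-test"))
```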
-
Using insights from tens of thousands of A/B tests, I break down Ramp's homepage hero ➡️ highlighting both the smart bets and areas for optimization.

1) Social proof at the top of homepages is typically a poor bet.
Ramp includes both G2 stars above the header and a logo bar beneath the hero. In our testing, when brands place third-party review stars above headers, these versions consistently lose (I cover this in detail in my LinkedIn article, The Problem with Social Proof). Why?
🔎 Distrust of third-party reviews, often perceived as pay-to-play.
🔎 Key content and CTAs get pushed down, especially on mobile.
🔎 Unintentional signaling - for example, "4.8 stars from 200+ reviews" can actually make the brand seem small.

2) Embedded email capture + clear CTA text is a winning combo.
About two years ago, Ramp A/B tested an embedded email capture form against a standard button. The embedded form won. Since then, across dozens of site iterations, they've kept it. Brands like Buffer and Rippling have similarly tested into and retained embedded capture forms. Their CTA text, "Get Started for Free," is also strong: it clearly communicates that it's a free trial. The only improvement I'd suggest is adding reassurance text below the CTA, clarifying that no credit card is required. We've seen this small detail improve conversions in multiple A/B tests (see Twilio's homepage for a good example).

3) Secondary CTAs are a good bet.
Ramp's secondary CTA, "Explore Product," beneath the main CTA is smart. We've seen extensive testing on one vs. two CTAs in homepage heroes for B2B SaaS and fintech brands. Two CTAs typically win. Why? Most of these companies have both self-service and enterprise buyers, with varied traffic sources (and intent levels). Offering two clear paths lets each group choose their preferred next step.

4) Product imagery works.
Across hundreds of tests, product imagery consistently outperforms stock photos, branded graphics, or stylized backgrounds. Prospects want a preview of the actual product.

5) Customer logo bars typically underperform.
I've written extensively on this, but here's a quick recap of why logos usually lose:
🚧 Logo blindness: If you're an industry leader, customers assume you serve top brands, so listing them adds little credibility.
🚧 Logo fit: Irrelevant logos create disconnect. Prospects want proof that companies like theirs trust your product.
🚧 Logos mislead: Many sites display big-brand logos when just a small team or individual used the product, or worse, when that company has already churned.

If you do use logos, make them interactive or segmented. Brands like Clay and Hex link logos to case studies, providing depth. Others, like 7shifts, segment logos by industry to improve relevance.

Hope this is helpful. Any other brands you would love to see analyzed based on DoWhatWorks's database of tracked tests?