Is your multi-touch attribution data lying to you?

Your multi-touch reporting is probably making everything look good. Here's why:

Most companies attribute pipeline/revenue to ALL touchpoints from ALL contacts under an account, then look at the total number and dollar value of opportunities influenced.

The result?
• High-volume channels look amazing (even when they're not) = volume bias
• Every marketing activity appears to influence deals = if everything is working, is anything *actually* working?

There's a better way to analyze multi-touch data (see image): look at win rates relative to channel/campaign touchpoints. This strips out volume bias and shows you what's moving deals forward vs. generating noise.

Example:

Paid Search:
• Influenced ~1,400 deals
• BUT the average win rate of those deals is 20%

C-suite dinner:
• Influenced 300 deals
• BUT the average win rate is 40%

If you just looked at total influence, you'd think the dinners are underperforming paid search. But when you look at influence conversion, it tells you the opposite.

LinkedIn influencers will tell you multi-touch attribution sucks. But it's more nuanced than that: it's the way most companies set up their reports that misleads them. We need to be smarter about how we leverage the data.

______________

P.S. Also worth saying: no attribution model, report, or dashboard will be perfect. Each version has pros/cons and tells a different story. The goal is to leverage multiple methods to triangulate what's working and make better decisions going forward.
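The win-rate analysis above can be sketched in a few lines. This is a minimal illustration using the example figures from the post (won-deal counts are backed out from the stated win rates), not real reporting data:

```python
# Channel figures (deals influenced, deals won) follow the post's example:
# Paid Search ~1,400 deals at a 20% win rate, C-suite dinners 300 at 40%.
channels = {
    "Paid Search":    {"influenced": 1400, "won": 280},
    "C-suite dinner": {"influenced": 300,  "won": 120},
}

def win_rate(stats):
    """Win rate among deals a channel touched, independent of volume."""
    return stats["won"] / stats["influenced"]

# Rank by win rate instead of raw influenced-deal count to strip out volume bias.
ranked = sorted(channels, key=lambda c: win_rate(channels[c]), reverse=True)
for name in ranked:
    s = channels[name]
    print(f"{name}: {s['influenced']} deals influenced, {win_rate(s):.0%} win rate")
```

Ranked by total influence, Paid Search wins; ranked by win rate, the dinners come out on top, which is exactly the reversal the post describes.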
Multi-Touch Attribution in Lead Scoring
Summary
Multi-touch attribution in lead scoring means tracking and assigning credit to all the marketing interactions a potential customer has before buying, rather than just the last one. This helps companies understand which channels and activities actually drive revenue, making lead scoring more accurate and useful for decision-making.
- Analyze win rates: Compare the win rates for deals influenced by each channel or campaign, instead of just counting the number of touches, to see which efforts actually move deals forward.
- Review the full journey: Track and score every meaningful interaction across multiple touchpoints—including ads, emails, events, and website visits—so you don’t overlook valuable channels that might not show up in traditional reports.
- Upgrade your models: Consider using advanced attribution tools or AI to identify true buying groups and capture the real impact of your marketing, especially when CRM data is incomplete or complex.
"If I only looked at last-touch attribution, I would have killed everything driving our growth."

Kacie Jenkins 🎁 uncovered a scary truth about B2B marketing metrics: Sendoso's best-performing channel is direct website traffic. But traditional attribution missed that those "direct" visitors had already:
+ Interacted with partners
+ Opened nurture emails
+ Seen organic content
+ Taken a product tour
+ Engaged at events
+ Received a gift

The pipeline was there. The attribution wasn't.

If you saw their multi-touch data, you'd see something fascinating about these "direct" visitors: most of them had interacted with the exact channels that looked like they were failing. The same channels a finance team would have flagged for cuts.

This pattern kept showing up: high-intent buyers were consuming 7-8 different marketing touches. None of those touches showed up in pipeline reports. Then the buyers would visit the website directly and convert.

Without multi-touch analytics, every investment driving those conversions looked worthless. That's when they made a radical change to their attribution model. The results transformed not just their pipeline reporting, but their entire relationship with finance.

Your "worst performing" marketing channels might actually be your best. Most CMOs get forced to cut them before they ever find out.

If you're looking to transition away from being a lead-gen machine, this is the way.
-
If you're a B2B CMO still using last-touch attribution to make budget decisions, read this:

We had an inbound lead from February 2024 close last week. Here's the full journey:
↳ POC sees 9 LinkedIn Ads in Feb 2024
↳ Responds to the offer at the end of the month
↳ Joins an initial strategy call with our team
↳ Then... ghosted
↳ CFO lets us know the original POC left
↳ Team sends a slick response to the CFO
↳ No response...
↳ Company visits the website in December 2024
↳ New POC comes in via direct traffic from a blog post in April 2025
↳ Submits a Marketing Plan request
↳ Closes one month later (same CFO signs the contract)

Last-touch attribution would give all the credit to direct traffic. As if the prospect just guessed the URL of a high-value blog post. (Spoiler: they didn't.)

But it gives zero credit to:
↳ Paid social, which did the groundwork
↳ The slick sales email
↳ The in-depth blog that showcased our expertise

If we based budget decisions on last-touch, we'd cut paid social, even though it did most of the work.

Here's the bottom line: CAC is rising. Ad costs are rising. You're expected to do more with less. Last-touch is no longer fit for purpose.

At a minimum, you should be using a data-driven attribution tool like Dreamdata or HockeyStack. These analyze your full user journey using incrementality and machine learning to show what's actually driving revenue, and what's not.

But even that doesn't give you the whole picture. You should also invest in Marketing Mix Modeling (MMM) or an equivalent, alongside lift analysis tools. When combined, these give you a clearer view of which marketing activity is actually growing your business.

That leads to smarter budget decisions, and healthier conversations with your CEO and CFO. Win-win. 🤘

—
P.S. What are your thoughts on the state of attribution right now? Let me know in the comments.
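To make the credit gap concrete, here is a toy comparison of last-touch credit versus a simple linear multi-touch split over the journey described in the post. Touch labels are paraphrased from the post, and the linear model is a generic illustration, not how Dreamdata or HockeyStack actually work:

```python
# Simplified journey from the post, in chronological order.
journey = [
    "LinkedIn Ads (Feb 2024)",
    "Strategy call (Feb 2024)",
    "Sales email to CFO",
    "Website visit (Dec 2024)",
    "Blog post via direct traffic (Apr 2025)",
]

def last_touch(touches):
    """All credit goes to the final touch before the close."""
    return {t: (1.0 if i == len(touches) - 1 else 0.0)
            for i, t in enumerate(touches)}

def linear(touches):
    """Equal credit to every touch in the journey."""
    share = 1.0 / len(touches)
    return {t: share for t in touches}

# Last-touch hands 100% of the credit to the blog post; the linear split
# gives paid social and the sales email a share of the win as well.
lt_credit = last_touch(journey)
linear_credit = linear(journey)
```

Under last-touch, paid social scores zero and gets cut; under any multi-touch split, it keeps a share of every closed deal it touched.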
-
Most attribution models are "directionally correct"… right? We decided to test that.

Attribution is messy in real life: multiple buyers, long cycles, patchy CRM data, and lots of unstructured activity across email, meetings, website, and events. So most teams (my old self at Branch included 🙋♀️) reach for something simple:
- Look at all touchpoints before an opp is created
- Apply a rules-based model (account-level multi-touch when contact roles aren't well maintained)

A customer recently asked us to put that assumption under a microscope. They were using a fairly advanced, account-level multi-touch model for one of their key campaigns. Because opportunity contact roles were spotty, their model spread influence across any engaged contact at the account and created touchpoints only for the last activity before key milestones.

We ran the same campaign through Upside's AI Deep Research Agents and compared results. What we found:

1️⃣ Over-attribution on New Business
On New Business deals, the rules-based model over-credited influenced opportunities by ~42% (by deal count). Why? Because it attributes to all engaged contacts at the account. It can't tell the difference between real buying group members and random people who happened to engage with the campaign but were never part of the deal. Our AI reconstructs the actual buying group (from meetings, email threads, engagement patterns, etc.) and only counts campaign touches that involved those people.

2️⃣ Massive under-reporting of overall influence
At the same time, the rules-based model dramatically under-reported campaign impact.
• Upside detected campaign influence on 763 opportunities
• Only 134 of those showed up in the rules-based attribution
That's a 5.7x gap in influence. The reason: their custom rules setup only created a touchpoint for the last activity before a milestone, which is pretty standard for rules-based systems. If the meaningful campaign engagement didn't happen to be that last activity, it simply vanished from the model. Timing quirks in the CRM became the difference between "this campaign influenced the deal" and "it didn't exist."

How did we look at it? Our Deep Research Agents:
• Reconstruct the true buying group for each opportunity
• Scan all structured + unstructured signals (meetings, emails, web, events)
• Identify when and how that campaign touched the real decision-makers

Once you have accurate influence detection, you can layer any credit model you want on top - zero-sum, weighted, position-based, whatever fits your philosophy - and we can help. But if the underlying detection is broken, no amount of clever rule-writing will save the model.

Here are a few screenshots from the analysis in the carousel 👇

Curious: if you're using a rules-based attribution system today, how confident are you that it's actually directionally correct?
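The over-attribution half of the detection gap described above can be shown with a toy example. All contact sets here are made up, and this is a generic illustration of buying-group filtering, not Upside's actual detection logic:

```python
# Two made-up opportunities at accounts the campaign touched.
opportunities = [
    {"id": 1,
     "buying_group": {"alice", "bob"},      # people actually on the deal
     "campaign_touches": {"alice"}},        # campaign reached a real buyer
    {"id": 2,
     "buying_group": {"carol"},
     "campaign_touches": {"dave"}},         # campaign only reached a bystander
]

def influenced_account_level(opp):
    """Rules-based: any campaign touch at the account counts as influence."""
    return bool(opp["campaign_touches"])

def influenced_buying_group(opp):
    """Detection-based: only touches on actual buying-group members count."""
    return bool(opp["campaign_touches"] & opp["buying_group"])

account_hits = [o["id"] for o in opportunities if influenced_account_level(o)]
group_hits = [o["id"] for o in opportunities if influenced_buying_group(o)]
# The account-level rule credits both opps; buying-group filtering keeps
# only opp 1, where the campaign actually reached a decision-maker.
```

The under-reporting side is the mirror image: if detection only records the last activity before a milestone, the set intersection above never gets a chance to run on the touches that mattered.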
-
One of the toughest jobs in marketing is figuring out what contributes to revenue.

I've chatted with over 15 marketing measurement experts in the past 10 months to help marketing ops leaders get some clarity. While searching for fresh perspectives, one name that kept coming up was Constantine Yurevich. He's the CEO and Co-Founder of SegmentStream. They've built a way to blend lead scoring with multi-touch attribution and incrementality.

Most measurement models are based on conversions. Seems logical. But in a lot of cases, conversions are tiny, especially in B2B. By analyzing user behavior patterns in each session, you gain 10X more optimization signals than conversion tracking alone. Even better, session behaviors predict future outcomes, regardless of whether a purchase happens in that visit.

They evaluate every visit with a score and immediately attribute it to the traffic source that initiated that visit, essentially evaluating things like time spent on site, high-intent page views, depth of research behavior, patterns that mirror successful buyers, contextual relevance to conversions, etc.

Upper-funnel campaigns suffer most from broken attribution. Visit scoring captures the value of those ads or that content even when users show high engagement but purchase later, or when users switch devices mid-journey and return through different channels before buying. They also send those signals of visit value back to ad platforms even when actual conversions don't occur, like a synthetic conversion, so you can start targeting genuine purchase intent.

In our conversation on Humans of Martech, Constantine breaks down what's broken with measurement methodologies today and what led them to build their startup. We covered:
• Why MTA still matters despite its flaws
• Why MMM is a utopia for 99% of companies
• Why most get the science wrong on geo tests
• Why marketing will never achieve true causation
• Why only paid traffic sources are worth measuring
• And a bunch more stuff!

YouTube, Spotify and Apple links in the comments 👇

–
Thank you to our sponsors for supporting the podcast:
📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.
🦸 RevenueHero: B2B scheduling and routing product to instantly connect prospects with the right sales reps to drive qualified meetings.
🦩 Census: Universal data layer that unifies & cleans data from all your sources and makes it available for any app and AI agent to use.
🎨 Knak: No-code email and landing page creator to build on-brand assets with an editor that anyone can use.
--
My top 3 takeaways:
💡 Stop making budget decisions based on average ROAS. Calculate your marginal ROAS instead.
💡 Geo holdout tests can be really expensive. Save them for when you suspect a channel delivers zero value.
💡 Stop obsessing over measuring channels you'll use anyway. Focus exclusively on paid channels with big budget decisions.
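The visit-scoring idea above can be sketched simply: score each session from behavioral signals, then credit that score to the traffic source that initiated the visit. The signal names and weights here are illustrative assumptions, not SegmentStream's actual model:

```python
# Illustrative signal weights; a real model would learn these from
# sessions that preceded successful purchases.
WEIGHTS = {
    "minutes_on_site": 0.05,    # dwell time
    "high_intent_pages": 0.30,  # pricing, demo, case-study views
    "research_depth": 0.15,     # distinct content pieces consumed
}

def visit_score(session):
    """Score one visit from its behavioral signals, capped at 1.0."""
    raw = sum(weight * session.get(signal, 0)
              for signal, weight in WEIGHTS.items())
    return min(raw, 1.0)

def attribute_visits(sessions):
    """Credit each visit's score to the traffic source that initiated it."""
    totals = {}
    for s in sessions:
        totals[s["source"]] = totals.get(s["source"], 0.0) + visit_score(s)
    return totals

# Made-up sessions: a deep research visit from paid search and a
# shallow organic visit, neither of which converted.
sessions = [
    {"source": "paid_search", "minutes_on_site": 4,
     "high_intent_pages": 2, "research_depth": 1},
    {"source": "organic", "minutes_on_site": 1},
]
totals = attribute_visits(sessions)
```

Even with zero conversions, the paid-search source accumulates a much higher value signal than organic, which is the kind of score that could be fed back to ad platforms as a synthetic conversion.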
-
HubSpot didn't break your attribution. It just exposed it.

Most B2B teams assume that because everything lives in HubSpot, ROI must live there too. It doesn't.

HubSpot is exceptional at tracking activity. Where things fall apart is when teams try to use it to explain profitability. Not what happened, but what actually drove revenue.

On its own, HubSpot was never designed to:
→ Ingest real advertising spend
→ Resolve anonymous-to-known journeys across channels
→ Roll influence up at the account level
→ Produce CAC, payback, and ROAS numbers finance will trust

So teams fill the gaps with:
→ Last-touch reports
→ Platform-reported conversions
→ Spreadsheets that don't agree

The result is familiar: marketing sees performance, finance sees risk, and boards see conflicting stories.

After working with hundreds of HubSpot-powered teams, the pattern is consistent: the CRM is doing its job, but ROI lives somewhere else. That's why multi-touch, account-level attribution matters.

When spend, identity, and revenue are connected inside HubSpot:
→ Buyer journeys reflect how B2B actually works
→ Every meaningful touch is counted, not just the last one
→ CAC, ROAS, payback, and LTV:CAC become auditable, not directional

Something important changes when those numbers live directly in the CRM. Marketing stops defending activity. Finance stops reconciling spreadsheets. Leadership finally operates off one version of the truth.

I've unpacked this framework recently with HubSpot for Startups, and the Complete HubSpot Multi-Touch Attribution Guide pulls together the strategy, mechanics, and real-world application. It covers:
→ Why HubSpot-native attribution breaks down in B2B
→ How multi-touch and account-level models complete the picture
→ What "true CAC" actually requires to be trusted
→ Why exportable, finance-grade data matters more than more dashboards
→ How HubSpot becomes ROI-complete with the right financial layer

The takeaway is simple: attribution doesn't replace HubSpot. It completes it.

👇 Full Complete HubSpot Multi-Touch Attribution Guide linked in the first comment.
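For reference, the finance metrics named above follow from standard definitions once spend and revenue are connected. The formulas are the conventional ones; all input figures here are made up for illustration:

```python
# Made-up period inputs: what connecting spend, identity, and revenue
# would have to produce before these ratios can be computed.
spend = 120_000.0                # total marketing + sales spend
new_customers = 40               # customers acquired in the period
revenue_attributed = 300_000.0   # revenue credited to that spend
monthly_gross_margin_per_customer = 500.0

cac = spend / new_customers                           # cost to acquire one customer
roas = revenue_attributed / spend                     # revenue per dollar of spend
payback_months = cac / monthly_gross_margin_per_customer  # months to recoup CAC

print(f"CAC: ${cac:,.0f}  ROAS: {roas:.1f}x  Payback: {payback_months:.0f} months")
```

The arithmetic is trivial; the hard part the post describes is making the three inputs (spend, customer count, attributed revenue) consistent enough that finance will accept the ratios.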
-