💡 In B2B marketing, the fundamental unit of purchase decision-making is the Buyer Group. In other words, closing a deal requires getting the different members of the Buyer Group to agree on a specific vendor. LinkedIn partnered with Bain & Company and NewtonX to research the main drivers of Buyer Group decision-making. We found that [1] being known across the Buyer Group of a target account and [2] being trusted by the marketplace were the most important dimensions in getting bought.

🎯 81% of purchases went to vendors that everyone or almost everyone in the Buyer Group knew on Day One.
🎯 Only 4% of final purchases went to vendors/products that only the expert recommenders knew.
🎯 Buyer Group members will pay more for products that their colleagues already know (by a ratio of 3:1), because a vendor your colleagues know about is a less risky choice.
🎯 Buyer Groups will not fight for products that their colleagues don't know, even if they think that product is better (3:1), because persuading colleagues to take a risk is more difficult than sacrificing potential functionality/quality.
🎯 All other things (like price and product quality) being equal, Buyer Groups will pick a well-known product/vendor over less well-known ones (3:1), because less well-known vendors were harder for the group to agree on.
🎯 The further your Buyer Group role is from technical knowledge of the product, the more you rely on brand factors to shape your decision-making. Legal, finance, HR and procurement, for example, have huge influence here and are much more influenced by brand than product experts and users.

We feel our findings open up a few new ways of thinking about how to be successful in B2B marketing and selling - and show the role played by brand in every live sales process. Brand is not just an investment in getting bought in the future.
It is a form of decision-insurance and risk-mitigation, which makes it one of the main reasons that products get bought right now. Jann Martin Schwarz Jamie Cleghorn Tom Stein Nick Primola Rob Gold Sascha E. Jackie Cutrone #B2BMarketing #BuyerGroupMarketing #B2Believe
B2B Marketing Trends
-
I don’t run TOFU, MOFU, BOFU campaigns anymore. I run two types of campaigns, because there are only two things that matter:
↳ Building mental availability and consideration with out-market prospects
↳ Convincing in-market prospects to choose you over competitors

Out-market campaigns hit our total ICP list. We run high-frequency ads built around strategic, data-backed messaging to drive reach and recall. The goal: “When they move in-market, we’re the name they remember.” We mix ad formats but optimize for audience penetration, frequency, engagement rate, and dwell time. To measure success, we track ICP company visits, share of search, and growth in engaged ICP accounts over time. This tells us we're not just hitting vanity metrics, but actually getting on the vendor list.

In-market campaigns are laser-targeted. I recently shrunk this audience from 40K to ~8K. Given our full ICP list is ~180K, that tracks, as only ~5% are in-market at any time. The goal: convert pipeline, drive revenue, shorten sales cycles, increase AOV and LTV. Here’s what that structure looks like: We ditched generic retargeting (website visits, video views, ad clicks). Instead, we focus on high-intent page visits—service, pricing, offer pages. Add in AI-driven intent signals from Dreamdata and G2. Then layer an ICP filter over the top to ensure we're not wasting spend on poor-fit prospects. Unlike most B2Bs, we don’t exclude existing pipeline from targeting. We keep reminding them why we're the best option. They’re not closed until they’re closed.

Funnel logic makes sense for a linear funnel. But B2B buying journeys aren't linear. Smart B2Bs market the way buyers actually buy. Not the way they wish they would. 🤘
—
P.S. Struggling to make LinkedIn Ads work? Have a KlientBoost Growth Strategist build your custom free marketing plan based on proven playbooks like this one. Hit the link to get yours: https://lnkd.in/eMpcnvQX
-
the odds of success in podcasting are so low right now. tons of supply. discoverability sucks. ad market is softer. so why am I the schmuck who’s relaunching his podcast next week & how do I plan to make it a win? here’s the breakdown:

1. innovate on format
longform interviews & cohosted shows are so crowded. i don’t want to play where there’s hyper competition. im going to own the bite-sized (15 min) solo show in my niche (entrepreneurship)

2. make it a win even if you don’t go mainstream
first, anchor your show in a valuable niche. B2B advertisers are willing to pay me high CPMs to get in front of founders even if I don’t hit 7-figure downloads per month. second, monetize beyond ads. I use my pod as a top of funnel for all of my businesses that have customer values worth way more than I could ever charge an advertiser. third, find wins beyond money. my pod allows me to memorialize the founder journey so that I can revisit it 50 years from now. plus, I hope to create a culture of founders documenting their business building process as a way of educating the next generation.

3. leverage YouTube & shorts for distribution
podcast discovery & sharing sucks. the only way to build a great top of funnel & grow downloads is by making it a vodcast from day 1. this has been huge for shows ranging from mfm to lex fridman to dwarkesh.

4. experiment constantly & lean into short-form
the cliche of “a tweet became an email became an essay became a book” has truth. I view every X post or IG video as a cheap experiment to test an idea before putting greater effort into it. i also remind myself constantly that im closed-minded about the mission of my podcast (to increase the odds of a founder’s success), but open-minded about the way in which that mission is fulfilled.

hope this helps & sub to my pod (link below) to watch me execute on this plan in real-time. new episode of founder’s journal comes out 3/3!
-
A few years back, when I was working for a travel-tech company, we chose IRCTC as one of the top placements for our brand campaign. The audience segment looked highly relevant for us to target. However, 97% of the audience from the campaign that landed on our site bounced. They didn't stay on the site for more than a few seconds or take any meaningful action when they landed. Eventually, we scaled down the campaign.

When planning the campaign, the placement site looked like the most obvious place to find all our relevant audiences. However, it delivered the worst audience quality we had received from any brand campaign till then. Whenever we run brand campaigns, we make many assumptions and base decisions on them. Those assumptions need to be validated early in the campaign. Otherwise, we end up wasting a lot of spend we could have avoided.

This brings me to today's post. When running a brand campaign, how do you check whether your target audience is correct? Assessing and analyzing brand campaigns is a big topic. However, doing it digitally gives you an advantage over any other medium. Essentially, in the initial stage you have to measure the brand campaign's performance at only two levels:
- Whether you are targeting the right person &
- Whether you are targeting those people with the proper communication

For the latter, we have built-in metrics like CTR and custom metrics like hook rate, video engagement rates, etc. However, to measure whether we are targeting the right person, we should start examining visitor quality soon after the campaign begins. We can define the quality of visitors in many ways:
- Scrolled x% of your landing page.
- Visited at least two pages.
- Spent more than 30 seconds on your landing page.
- Played a video on your landing page, etc.

Today, while many brands keep checking their communication efficiency, they mostly miss the targeting part. Adjusting your targeting by measuring these metrics from the start is very important for the success of a digital brand campaign.
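The quality checks listed above can be sketched as a simple scoring rule. This is a minimal sketch, assuming your analytics export exposes scroll depth, pages viewed, time on page, and video plays; the field names and thresholds here are illustrative, not from any specific analytics tool:

```python
def is_quality_visitor(visit, min_scroll_pct=50, min_pages=2, min_seconds=30):
    """Flag a visit as 'quality' if it meets any of the criteria
    from the post. Event field names are hypothetical."""
    return (
        visit.get("scroll_pct", 0) >= min_scroll_pct
        or visit.get("pages_viewed", 0) >= min_pages
        or visit.get("seconds_on_page", 0) > min_seconds
        or visit.get("played_video", False)
    )

def quality_rate(visits):
    """Share of campaign visits that pass the quality bar."""
    if not visits:
        return 0.0
    return sum(is_quality_visitor(v) for v in visits) / len(visits)
```

Tracking `quality_rate` per placement from day one would have surfaced the IRCTC problem early: a placement whose rate sits far below your other channels is delivering the wrong audience, however relevant it looks on paper.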
-
Using chatbots ≠ working with AI. What we actually want: One chat, 500+ apps connected. Tasks handled end to end. RUBE, developed by Composio, acts as an 𝐌𝐂𝐏 𝐬𝐞𝐫𝐯𝐞𝐫, enabling seamless integration between AI chat clients and over 𝟓𝟎𝟎 𝐛𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐚𝐧𝐝 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐯𝐢𝐭𝐲 𝐚𝐩𝐩𝐬. What stands out:
• It works across tools like Jira, Gmail, and Calendar simultaneously.
• It goes beyond completing tasks, surfacing insights like: “Sprint velocity dropped 23% because 40% of tickets lack acceptance criteria.”
• “Summarize yesterday’s Slack updates, prioritize my GitHub issues, and schedule my top 3 tasks” → done in 30 seconds.
• “Prep me for my 2pm with Sarah” → instantly pulls emails, notes, and Slack threads.
• “Why did churn spike last month?” → connects CRM, support data, and analytics for answers.
🔗 Explore here https://lnkd.in/gcUdSy7a
-
Last click measures are 𝗻𝗼𝘁 𝗲𝘃𝗲𝗻 𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻𝗮𝗹𝗹𝘆 𝗰𝗼𝗿𝗿𝗲𝗰𝘁. A large majority of brands know that last click (and/or MTA) measurement is wrong, but a majority continue to use it as the primary measure of marketing performance. There are typically two main reasons why:
• 𝗟𝗲𝗴𝗮𝗰𝘆 𝗼𝗳 𝗺𝗲𝘁𝗵𝗼𝗱𝘀 𝗮𝗻𝗱 𝗶𝗻𝘁𝗲𝗿𝗻𝗮𝗹 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗲𝘀 - This is a big challenge and tough to change quickly. I have shared a few methods we use to help with this, linked in the comments.
• 𝗔𝘀𝘀𝘂𝗺𝗽𝘁𝗶𝗼𝗻 𝘁𝗵𝗮𝘁 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝗰𝗹𝗶𝗰𝗸 𝗱𝗮𝘁𝗮 𝗶𝘀 "𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻𝗮𝗹𝗹𝘆 𝗰𝗼𝗿𝗿𝗲𝗰𝘁" - Many brands assume that while the data is wrong, it is correct enough to optimise towards success. This is unfortunately not true: many of the strongest-performing last click channels show the weakest incremental value, and vice versa.

On the chart below we map campaign types on a Last Click ROAS index (100 = best performing on last click ROAS) and an MMM ROAS index (100 = best performing on MMM ROAS). The first thing you should notice is that the correlation is weak. Virtually nonexistent. But there are some clusters of campaign types:
1. 𝗟𝗼𝘄 𝗜𝗻𝗰𝗿𝗲𝗺𝗲𝗻𝘁𝗮𝗹𝗶𝘁𝘆 𝗭𝗼𝗻𝗲 - Campaigns which look brilliant on last click ROAS but show poor incrementality. These look great on a marketing report, but drive little real value.
2. 𝗚𝗼𝗼𝗱 𝗼𝗻 𝗔𝗹𝗹 𝗠𝗲𝗮𝘀𝘂𝗿𝗲𝘀 𝗭𝗼𝗻𝗲 - These look good on Last Click ROAS and good on MMM ROAS: campaigns which drive clear measurable performance with strong incrementality.
3. 𝗗𝗼𝗲𝘀𝗻'𝘁 𝗺𝗮𝘁𝘁𝗲𝗿 𝗵𝗼𝘄 𝘆𝗼𝘂 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 𝗶𝘁 𝘇𝗼𝗻𝗲 - These are bad on Last Click ROAS and bad on MMM ROAS. These campaigns just don't work; not every test succeeds.
4. 𝗡𝗲𝘃𝗲𝗿 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 𝗼𝗻 𝗟𝗮𝘀𝘁 𝗖𝗹𝗶𝗰𝗸 𝗭𝗼𝗻𝗲 - These look terrible on Last Click ROAS, but actually drive strong modelled incremental performance. These campaigns drive really valuable indirect impact, but last click measurement can't see their value.

Normally on a quadrant chart, the bottom left is the troublesome corner. But here the real issues are in the top left and bottom right. Campaigns in the bottom left get turned off or changed, because they don't work on any measure. It is a failed test; we learn and move on. Campaigns in the top right get continued investment, and will continue to drive business value. The trouble lives in the top left and the bottom right. Campaigns in the top left get increased investment because the spreadsheet looks good, while they deliver little value. Campaigns in the bottom right get turned off, and then everyone wonders why overall performance got worse. While everyone's focus is on moving up the chart, the 𝗿𝗲𝗮𝗹 𝗳𝗼𝗰𝘂𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝗺𝗼𝘃𝗶𝗻𝗴 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝘁𝗼𝗽 𝗹𝗲𝗳𝘁 𝘁𝗼 𝘁𝗵𝗲 𝗯𝗼𝘁𝘁𝗼𝗺 𝗿𝗶𝗴𝗵𝘁. It will make your marketing reporting spreadsheet look worse, but make business performance better.
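The four zones amount to a two-by-two classification over the two indices. A minimal sketch, assuming indexed scores on a 0-100 scale; the 50-point cutoff and the function itself are my illustration, not part of the research:

```python
def classify_campaign(last_click_idx, mmm_idx, threshold=50):
    """Map a campaign onto one of the four zones described above.
    Both indices run 0-100 (100 = best); the cutoff is an assumption."""
    high_last_click = last_click_idx >= threshold
    high_mmm = mmm_idx >= threshold
    if high_last_click and high_mmm:
        return "good on all measures"            # keep investing
    if high_last_click and not high_mmm:
        return "low incrementality"              # flattering spreadsheet, little real value
    if not high_last_click and high_mmm:
        return "never measure on last click"     # hidden incremental value, don't cut
    return "doesn't matter how you measure it"   # failed test, learn and move on
```

The point of writing it down this way is that the two dangerous zones are exactly the ones where the two measures disagree, which is why a single last-click number can never flag them.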
-
"Our funnel is completely clogged, and our CEO and investors are starting to panic," shared a CMO from a $375MM SaaS firm. The other Huddlers sympathized, noting they were facing similar challenges. Sound familiar? The old playbook of flooding the funnel, scoring MQLs, and handing off to sales isn't just broken; it's toxic. Here's why your funnel is clogged and what actually works now:

1. Your data is a disaster. The average customer contact database health score? A pathetic 47%, according to research from BoomerangAI. More than half of B2B companies haven't updated their database in six months—or ever. Bad data isn't just an operational issue. It erodes every layer of your funnel. Fix this first. Assign database ownership cross-functionally. Tie enrichment to your GTM motions. And please activate alumni contact programs. Only 12% of companies have formal programs for contacts who left employers, yet they're gold mines.

2. You're still pitching tours when buyers want tools. Recent TrustRadius research shows that 52% of buyers say prior experience is their #1 decision input. Only 13% say a demo "blew them away."

3. Stop the demo obsession. Launch website-based product exploration tools. Add pricing guidance. Create modular content for AI summarization, since 90% of buyers who see AI-generated summaries click through to cited sources.

4. The MQL addiction is killing you. As one CMO put it: "MQLs are problematic... we’re trying to figure out how to get fewer, better leads." Track conversion quality at each funnel stage. Hold weekly demand gen and sales alignment meetings. Ditch vanity metrics for outcome-based KPIs.

5. You're pitching spend instead of displacement. Few CFOs are greenlighting net-new spending, but they will approve reallocation when the ROI is crystal clear. Reframe your pitch: "Invest in this → reduce spend on that." Connect to CFO logic, not just user pain.

6. You're making promises instead of proving value. Buyers want proof in 120 days or less. The "trust us, it'll pay off eventually" era is dead. If you have the data, create 120-day value realization case studies. Use prospect data to build "speed-to-value" narratives. Lead with time-to-value, not feature lists.

The companies unclogging their funnels aren't working harder—they're working smarter. They've ditched the old playbook for data-driven precision. Your move. PS - For a longer look at this issue, please check out my May 2025 #HuddleUp newsletter.
-
25% of B2B companies expect to use outcome-based pricing by 2028. That's a 5x increase from today's 5%, according to Kyle Poyar's latest research. This will be a painful, but ultimately healthy, transition. Buyers never wanted software in the first place. They wanted solutions. As AI handles more work end-to-end, pricing migrates from inputs (seats, tokens, usage) to outcomes (cases closed, revenue recovered, risk reduced). Less "how much did you use?" More "did it actually work?"

Thought experiment: if code becomes a commodity and features ship instantly, value shifts from building features to guaranteeing execution. You’re not selling software—you’re selling outcome insurance.

Objections are real—attribution is messy, procurement habits are sticky, and buyers hate surprises. But these are solvable with instrumentation, shared definitions of success, and clear guardrails (Manny Medina). Over time, buyers will demand outcome-based pricing because it reduces their risk.

Where outcome-based pricing already fits well: AI-enabled services. Services own end-to-end execution, so attribution is clean and incentives align. Mechanical Orchard is a great example—using AI to move mainframe workloads to the cloud, taking ownership of the entire journey. When you own the “last mile,” charging for success becomes straightforward. AI customer support vendors have also been pioneers of this model. More vendor types are on the horizon.

If you’re a founder, here’s a simple path to test outcomes pricing:
• Pick one mission-critical outcome your product directly influences.
• Define a verifiable metric, baseline, and observation window with the buyer.
• Cap downside (floor) and share upside (tiers/bonus) to build trust.
• Instrument attribution now—event logs, holdouts, and third-party validation beat hand-waving later.

Start with one outcome. One customer. One measurable result you can guarantee. We're still early in this shift, but the direction is clear.
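The floor-plus-upside structure described above can be sketched as a fee formula. This is a toy model under assumed parameters (a guaranteed floor, a per-unit share of improvement over an agreed baseline, and a cap so the buyer is never surprised), not a recommendation of specific numbers:

```python
def outcome_fee(baseline, observed, floor_fee, rate_per_unit, cap_fee):
    """Outcome-based fee sketch: guaranteed floor for the vendor,
    upside shared per unit of improvement over the agreed baseline,
    capped to protect the buyer. All parameters are illustrative."""
    improvement = max(0.0, observed - baseline)      # never penalize below baseline
    return min(floor_fee + improvement * rate_per_unit, cap_fee)
```

For example, with a baseline of 100 cases closed, 150 observed, a $1,000 floor, $20 per extra case, and a $3,000 cap, the fee is $2,000. The floor and cap are exactly the guardrails that make the model palatable to procurement: the vendor can't starve, and the buyer can't be surprised.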
For those already experimenting with outcome-based pricing, what's been your biggest surprise? And for those that haven't yet, what's holding you back?
-
About 18 months ago, my co-founder Alan and I launched a podcast called Revenue Rebels & it failed. I thought the name was great. We booked some guests. Talked to revenue leaders. Put episodes out there. And it just... didn't land. No real theme. No narrative arc. No reason someone should pick us over the 50,000 other MarTech and sales tech podcasts fighting for attention. We were throwing spaghetti at the wall and hoping for "traction." So we stopped. Classic founder move. If the numbers don't pop in month one, kill it and move on to the next thing. That was a mistake.

I sat down with Dave Gerhardt for Episode 6 of Founder Brand and he called this out directly. He said the single biggest mistake founders make with podcasting is treating it like a short-term campaign. He compared it to training for a marathon. You don't run 5 miles and decide running isn't for you. His own Exit Five podcast only hit its stride after he committed to shipping weekly. He told me the growth curve follows the consistency curve almost perfectly. Not the other way around.

But here's the part that really changed how I think about it. A podcast isn't a podcast. It's the engine for everything else. Dave calls it the Content Flywheel. You sit down for one hour. You have a real conversation about your industry. And that single hour becomes:
1/ 5 LinkedIn posts that actually get engagement.
2/ A 1,500-word newsletter built around the one nugget that made your guest laugh.
3/ A serialized series your customers want to follow.

You're not blocking time to "create content." You're mining the conversations you already have as a founder - with customers, investors, partners, your team - and turning raw signal into something that compounds. I think about this a lot at Warmly. It goes like this:
- Every week I'm deep in conversations about how B2B teams find and act on buying signals.
- Those conversations are full of insights that would take me hours to write from a blank page.
- But in a 30-minute conversation? They just come out.

The other thing Dave said that I can't stop thinking about: as a founder, you are the only person who can do this well. You're the crazy person who decided to start the company. You're the one having the hard conversations with buyers and churned customers and skeptical investors. People want to hear that perspective. Even when your voice is raspy and you're running on half a brain. (Which, for the record, was me during this recording.)

Consistency IS the strategy. Full conversation with Dave here: https://lnkd.in/gQA59vEB
-
I analyzed 67+ outbound clients across 12 industries. Every single one falls into 1 of 3 buckets:
1. Top performers.
2. Average performers.
3. And the bottom 10% that outbound probably can’t save.

Most teams think they’re one email away from their next deal. But after hundreds of campaigns, one thing’s clear: Your outbound performance mirrors your market position, not your sequence. Here’s what SalesCaptain’s data shows👇

1️⃣ Low Results (8-10 positives per 3K prospects)
Usually the teams stuck in commoditized markets. They sound like everyone else, sell like everyone else, and get ignored.
🔹 No product-market fit
🔹 Weak or no offer
🔹 Basic website, no social proof
🔹 Long deal cycles, tiny TAM
🔹 Outbound quality dragged down by the offer itself
What we do here: → Run a short test, confirm underperformance, then either help them reposition or pause entirely.

2️⃣ Average Results (15-25 positives per 3K prospects)
The healthiest segment of the market. These teams know their ICP, have a solid offer, and play the consistency game.
🔹 Decent PMF
🔹 Clear ICP definition
🔹 Message/market fit
🔹 Entry-point offer that converts
This is where most mid-market companies live. They get reliable meetings - not fireworks, but steady growth.

3️⃣ Superior Results (30-40 positives per 3K prospects)
This is where we see significantly larger ROI. We see this pattern across B2B SaaS, GTM consultancies, and fast-moving service orgs.
🔹 Strong PMF
🔹 Sharp differentiation
🔹 Medium deal sizes ($10–80K)
🔹 Localized campaigns
🔹 Multi-channel execution (email + LinkedIn + data enrichment)
🔹 High in-market demand

What separates these 3 isn’t the toolset; it’s that they know how to structure outbound like a system. The goal is to move up the curve, and that's what we've helped 60+ teams do. If your outbound is underperforming, check which bucket you’re really in, and ask whether the problem is your campaign… or your market. DM me if you need help. #outbound #gtm #performance
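The three bands above can be sketched as a normalize-then-bucket rule. A minimal sketch: the quoted ranges leave gaps (10-15 and 25-30), so the exact band edges used here are my assumptions, not SalesCaptain's:

```python
def outbound_bucket(positive_replies, prospects=3000):
    """Normalize positive replies to a per-3K-prospect rate and bucket
    using the bands quoted in the post. Band edges between the quoted
    ranges are assumptions."""
    per_3k = positive_replies * 3000 / prospects
    if per_3k >= 30:
        return "superior"
    if per_3k >= 15:
        return "average"
    return "low"
```

Normalizing first matters: 70 positives sounds strong until you learn it came from 20K prospects, which puts the campaign firmly in the low band.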