Utilizing Customer Data Analytics

Explore top LinkedIn content from expert professionals.

  • View profile for Cody Carnes

    $4M+ Closed w/ Cold Email | <3% Bounce Rate @ Findymail.com

    10,399 followers

    “211 SQLs → $7,500 x 9 deals → $67,500 for client Bobby in just 35 days with cold email.”
    “Cody, where have you been in the last month?” → Dialing in hyper-scale cold email systems.
    Everyone needs to become *the best* at One thing. My One thing = high-volume cold email.
    P.S. This does NOT work if you have a small TAM (Total Addressable Market) or run account-based marketing. 🐊 This is, however, *the* best strategy for any B2B with a product/offer whose economics make sense at large scale. You’ll know if you have a high-TAM product & a backend to deliver.
    Before the LinkedIn “up-bound/around-bound” failed-outbound agency owners say “outbound is dead, you shouldn’t make that much money from email!!” 🤡 These automated campaigns were a button push. There was little effort after the initial setup other than lead management.
    It’s absolutely not as easy as it was in 2021. Cold email is difficult. Mass-market advice is: “…buy inboxes from us, VAs to deliver lead lists, copy/paste templates to ICP, then hundreds of SQLs appear out of thin air.” The success rate of that is probably 10%, especially in the first month. If a “marketer” tells you otherwise, question them.
    Here’s what’s been working well:
    ✅ Weekly domain blacklist checks
    ✅ Weekly inbox placement tests + replace bad inboxes [see if you’re going to spam]
    ✅ Open tracking off (non-negotiable); don’t send links, videos, or images
    ✅ Apollo / Sales Nav to build initial (top-of-funnel) lists
    ✅ Clay to qualify accounts at scale, remove non-ICP fits, and add GPT-4o personalizations to qualified leads
    ✅ Findymail to enrich & double-validate email addresses [Findymail is the best email enrichment tool on the market; A/B test it vs. every other provider and it will win]
    ✅ After 3–4 months of list building, you’ve built a niched ICP database of your best ideal customers with enriched data/personalization columns. Recycle this list infinitely with new angles. Prospects forget your email the next day almost every time; after a few months (or sooner) they reach back out.
    ✅ Cold outbound messaging based on customer interviews/sales calls, though you can also spawn ideas from thin air.
    Here’s what has NOT been working well in the outbound space in the last 45 days:
    👉 Don’t use ANY “private infrastructure” sellers. You will waste your money 99.9% of the time. I know just about every high-level outbound person in this space; we’ve all tried MailScale, Mailforge, InfraForge – they’re not good. They are marketers selling a bad product with fantastic marketing angles. Ultimately, make your own choice + question everyone.
    👉 Google inboxes sending to other Google inboxes. Microsoft is starting to become a foundation to solve this.
    👉 Listening to mass-market outbound advice.
    Cody
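The weekly blacklist check at the top of that list can be automated. As a minimal sketch (the generic DNSBL technique, not any vendor's tooling): blocklists are queried by reversing an IP's octets under the blocklist zone, and an A-record answer to that name means the IP is listed. The zone below is one well-known example; the helper only builds the query name and leaves the actual DNS lookup to your resolver.

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the hostname a DNSBL lookup resolves for a sending IP.

    Resolving this name with a normal DNS resolver is the whole check:
    any A-record answer means the IP is on the blocklist; NXDOMAIN
    means it is clean. The default zone is just a common example.
    """
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not an IPv4 address: {ip!r}")
    # DNSBLs index by reversed octets, e.g. 203.0.113.7 -> 7.113.0.203.<zone>
    return ".".join(reversed(octets)) + "." + zone
```

Run something like this weekly across every sending IP; a fresh listing explains a deliverability cliff faster than any copy tweak.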

  • View profile for Arpit Singh
    Arpit Singh is an Influencer

    GTM, AI & Outbound | LinkedIn Content & Social Selling for high-growth agencies, AI/SaaS startups & consulting businesses | Open for collaborations

    36,500 followers

    Myth: “Your deliverability is a one-time setup.”
    I learned that late. It cost us pipeline.
    We had SPF, DKIM, and DMARC in place. The copy was solid. The leads were qualified. But replies were silent. We tweaked subject lines. Rewrote the CTA. Changed the offer. Still nothing.
    The real issue? The emails were landing in Promotions. Some went to Spam. And we had no visibility.
    That’s when we realized deliverability isn't a checkbox. It's something you monitor consistently. Inbox placement changes. Domain reputation shifts. Even one bad step can tank performance.
    That’s why we now use Inbox Radar by Saleshandy.
    → Recurring Tests: automatically track inbox placement over time.
    → Manual Tests: run quick checks before sending a sequence.
    → External Tests: check emails sent from Gmail or Outlook with a test ID.
    It shows where your emails land: Primary, Promotions, or Spam. And what needs fixing if they don’t land right.
    We don’t guess anymore. We check. We fix. Then we send.
    If you're running cold outreach, test before you launch. It’s one small habit that protects your entire pipeline. Using anything to monitor your deliverability yet?
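An inbox placement test of the kind described here boils down to sending to a panel of seed inboxes and tallying where each copy lands. A minimal sketch of that tally (a generic illustration, not Saleshandy's API; the 5% spam-alert threshold is an assumption you would tune):

```python
from collections import Counter

def placement_summary(results, spam_alert=0.05):
    """Summarise a seed-inbox placement test.

    `results` holds one label per seed inbox: "primary", "promotions",
    or "spam". Returns the share of each bucket plus an alert flag
    when the spam share crosses the (assumed) threshold.
    """
    if not results:
        raise ValueError("no placement results to summarise")
    counts = Counter(results)
    total = len(results)
    rates = {bucket: counts.get(bucket, 0) / total
             for bucket in ("primary", "promotions", "spam")}
    rates["alert"] = rates["spam"] > spam_alert
    return rates
```

Recurring tests are then just this summary on a schedule, with the alert wired to whatever notifies your team.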

  • View profile for Dan Wilson

    Data Behind Behaviour → Chief Data Officer & Co-Founder @ Charlie Oscar | Applying marketing science to modern marketing to understand what actually drives growth

    5,159 followers

    Last click measures are 𝗻𝗼𝘁 𝗲𝘃𝗲𝗻 𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻𝗮𝗹𝗹𝘆 𝗰𝗼𝗿𝗿𝗲𝗰𝘁.
    A large majority of brands know that last click (and/or MTA) measurement is wrong, but a majority continue to use it as the primary measure of marketing performance. There are typically two main reasons why:
    • 𝗟𝗲𝗴𝗮𝗰𝘆 𝗼𝗳 𝗺𝗲𝘁𝗵𝗼𝗱𝘀 𝗮𝗻𝗱 𝗶𝗻𝘁𝗲𝗿𝗻𝗮𝗹 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗲𝘀 - This is a big challenge and tough to change quickly. I have shared a few methods we use to help with this, linked in the comments.
    • 𝗔𝘀𝘀𝘂𝗺𝗽𝘁𝗶𝗼𝗻 𝘁𝗵𝗮𝘁 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝗰𝗹𝗶𝗰𝗸 𝗱𝗮𝘁𝗮 𝗶𝘀 "𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻𝗮𝗹𝗹𝘆 𝗰𝗼𝗿𝗿𝗲𝗰𝘁" - Many brands assume that while the data is wrong, it is correct enough to optimise towards. Unfortunately that is not true: many of the strongest-performing last click channels show the weakest incremental value, and vice versa.
    On the chart below we map campaign types on a Last Click ROAS index (100 = best performing on last click ROAS) and an MMM ROAS index (100 = best performing on MMM ROAS). The first thing you should notice is that the correlation is weak. Virtually non-existent. But there are some clusters of campaign types:
    1. 𝗟𝗼𝘄 𝗜𝗻𝗰𝗿𝗲𝗺𝗲𝗻𝘁𝗮𝗹𝗶𝘁𝘆 𝗭𝗼𝗻𝗲 - Campaigns which look brilliant on last click ROAS but show poor incrementality. These look great on a marketing report, but drive little real value.
    2. 𝗚𝗼𝗼𝗱 𝗼𝗻 𝗔𝗹𝗹 𝗠𝗲𝗮𝘀𝘂𝗿𝗲𝘀 𝗭𝗼𝗻𝗲 - Campaigns which look good on both Last Click ROAS and MMM ROAS: clear measurable performance with strong incrementality.
    3. 𝗗𝗼𝗲𝘀𝗻'𝘁 𝗺𝗮𝘁𝘁𝗲𝗿 𝗵𝗼𝘄 𝘆𝗼𝘂 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 𝗶𝘁 𝘇𝗼𝗻𝗲 - Bad on Last Click ROAS and bad on MMM ROAS. These campaigns just don't work; not every test succeeds.
    4. 𝗡𝗲𝘃𝗲𝗿 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 𝗼𝗻 𝗟𝗮𝘀𝘁 𝗖𝗹𝗶𝗰𝗸 𝗭𝗼𝗻𝗲 - These look terrible on Last Click ROAS, but actually drive strong modelled incremental performance. They create really valuable indirect impact that last click measurement can't see.
    Normally on a quadrant chart, the bottom left is the troublesome corner. But here the real issues are in the top left and bottom right. Campaigns in the bottom left get turned off or changed, because they don't work on any measure. That is a failed test; we learn and move on. Campaigns in the top right get continued investment, and will continue to drive business value. The trouble lives in the top left and the bottom right. Campaigns in the top left get increased investment because the spreadsheet looks good, while they deliver little value. Campaigns in the bottom right get turned off, and then everyone wonders why overall performance got worse. While everyone's focus is on moving up on the chart, the 𝗿𝗲𝗮𝗹 𝗳𝗼𝗰𝘂𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝗺𝗼𝘃𝗶𝗻𝗴 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝘁𝗼𝗽 𝗹𝗲𝗳𝘁 𝘁𝗼 𝘁𝗵𝗲 𝗯𝗼𝘁𝘁𝗼𝗺 𝗿𝗶𝗴𝗵𝘁. It will make your marketing reporting spreadsheet look worse, but make business performance better.
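The four zones can be written as a tiny classifier over the two indices. A sketch, with an illustrative 50-point cutoff that is my assumption, not from the post:

```python
def roas_quadrant(last_click_index, mmm_index, cutoff=50):
    """Place a campaign in one of the four zones described above.

    Both inputs are 0-100 indices (100 = best performer on that
    measure); the cutoff separating "high" from "low" is illustrative.
    """
    lc_high = last_click_index >= cutoff
    mmm_high = mmm_index >= cutoff
    if lc_high and mmm_high:
        return "good on all measures"          # keep investing
    if lc_high and not mmm_high:
        return "low incrementality"            # looks great, adds little
    if mmm_high:
        return "never measure on last click"   # hidden incremental value
    return "doesn't work on any measure"       # failed test, move on
```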

  • View profile for Jahanvee Narang

    5 years@Analytics | Linkedin Top Voice | Podcast Host | Featured at NYC billboard | AdTech | MarTech | RMN

    32,110 followers

    As an analyst, I was intrigued to read an article about Instacart's innovative "Ask Instacart" feature, which integrates ChatGPT-style chat, allowing customers to create and refine shopping lists by asking questions like, "What is a healthy lunch option for my kids?" Ask Instacart then suggests options based on the user's past buying habits and, once users have selected the option they want to try, provides recipes and a shopping list. This tool not only delivers a personalized shopping experience but also offers a gold mine of customer insights that can inform many aspects of business strategy. Here's what I inferred as an analyst:
    1️⃣ Customer Preferences Uncovered: By analyzing the questions asked and options selected, we can understand what products, recipes, and meal ideas resonate with different customer segments, enabling better product assortment and personalized marketing.
    2️⃣ Personalization Opportunities: The tool leverages past buying habits to make recommendations, presenting opportunities to tailor the shopping experience to individual preferences.
    3️⃣ Trend Identification: Tracking the types of questions and preferences expressed through the tool can help identify emerging trends in areas like healthy eating, dietary restrictions, or cuisine preferences, allowing businesses to stay ahead of the curve.
    4️⃣ Shopping List Insights: Analyzing the generated shopping lists can reveal common item combinations, complementary products, and opportunities for bundle deals or cross-selling recommendations.
    5️⃣ Recipe and Meal Planning: The tool's integration with recipes and meal planning provides valuable insights into customers' cooking habits, preferred ingredients, and meal types, informing content creation and potential partnerships.
    The "Ask Instacart" tool is a prime example of how innovative technologies can not only enhance the customer experience but also generate data-driven insights that drive strategic business decisions. A great way to extract meaningful insights from such data sources and translate them into actionable strategies that create value for customers and businesses alike. Article to refer to: https://lnkd.in/gAW4A2db #DataAnalytics #CustomerInsights #Innovation #ECommerce #GroceryRetail
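Point 4️⃣ (shopping-list insights) is essentially pair co-occurrence mining: count how often two items appear on the same list, and the top pairs are bundle and cross-sell candidates. A minimal sketch over hypothetical list data:

```python
from collections import Counter
from itertools import combinations

def top_item_pairs(shopping_lists, n=3):
    """Count co-occurring item pairs across generated shopping lists.

    The most common pairs suggest bundles and cross-sell offers.
    Items are de-duplicated and sorted so (a, b) and (b, a) count once.
    """
    pair_counts = Counter()
    for items in shopping_lists:
        for pair in combinations(sorted(set(items)), 2):
            pair_counts[pair] += 1
    return pair_counts.most_common(n)
```

On real data you would run this per customer segment, which is where the assortment and personalization angles in points 1️⃣ and 2️⃣ come from.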

  • View profile for Dana DiTomaso

    I help you level up your analytics and digital marketing skills linktr.ee/danaditomaso

    17,183 followers

    GA4's engagement rate metric is a useful diagnostic, but it may set the bar too low for what counts as quality traffic in ad campaigns. Our paid team has been using what we call a "quality traffic" metric (inspired by Jon Loomer), and it's become one of our go-to tools for campaign troubleshooting. The concept is straightforward: you create a composite event that requires two things to be true at the same time before it fires: time on page AND scroll depth (or visibility of a key element like your CTA). It requires both because time alone doesn't work (someone could leave a tab open while making coffee) and scroll depth alone doesn't work (someone could skim through your page in five seconds). But when both conditions are met, we have a much better read on how much that person actually looked at what we had to say. My newsletter this week walks you through the full GTM setup, how to deploy it across platforms, and how to tune your thresholds so the metric fits your specific circumstances. It's beginner friendly too: even if you don't feel super comfortable with GTM, I promise it will be accessible. #GoogleAds #GA4 #PPC #MetaAds #AnalyticsPlaybook
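The composite rule reduces to a two-condition AND. A sketch of the decision logic in Python rather than GTM; the 30-second and 50%-scroll thresholds are placeholder assumptions you would tune, as the newsletter describes:

```python
def quality_traffic_event(seconds_on_page, scroll_depth_pct,
                          min_seconds=30, min_scroll_pct=50):
    """Fire the composite "quality traffic" event only when BOTH
    conditions hold: enough time on page AND enough scroll depth.
    Either signal alone is gameable (idle tab / five-second skim).
    """
    return seconds_on_page >= min_seconds and scroll_depth_pct >= min_scroll_pct
```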

  • View profile for Pan Wu
    Pan Wu is an Influencer

    Senior Data Science Manager at Meta

    51,373 followers

    Incrementality testing is crucial for evaluating the effectiveness of marketing campaigns because it helps marketers determine the true impact of their efforts. Without this testing, it's difficult to know whether observed changes in user behavior or sales were actually caused by the marketing campaign or if they would have occurred naturally. By measuring incrementality, marketers can attribute changes in key metrics directly to their campaign actions and optimize future strategies based on concrete data. In this blog written by the data scientist team from Expedia Group, a detailed guide is shared on how to measure marketing campaign incrementality through geo-testing. Geo-testing allows marketers to split regions into control and treatment groups to observe the true impact of a campaign. The guide breaks the process down into three main stages: - The first stage is pre-testing, where the team determines the appropriate geographical granularity—whether to use states, Designated Market Areas (DMAs), or zip codes. They then strategically select a subset of available regions and assign them to control and treatment groups. It's crucial to validate these selections using statistical tests to ensure that the regions are comparable and the split is sound. - The second stage is the test itself, where the marketing intervention is applied to the treatment group. During this phase, the team must closely monitor business performance, collect data, and address any issues that may arise.  - The third stage is post-test analysis. Rather than immediately measuring the campaign's lift, the team recommends waiting for a "cooldown" period to capture any delayed effects. This waiting period also allows for control and treatment groups to converge again, confirming that the campaign's impact has ended and ensuring the model hasn’t decayed. 
This structure helps calculate incremental return on advertising spend, answering questions like “How do we measure the sales directly driven by our marketing efforts?” and “Where should we allocate future marketing spend?” The blog serves as a valuable reference for those looking for more technical insights, including the software tools used in this process. #datascience #marketing #measurement #incrementality #analysis #experimentation – – –  Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:    -- Spotify: https://lnkd.in/gKgaMvbh   -- Apple Podcast: https://lnkd.in/gj6aPBBY    -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/gWKzX8X2 
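The output of the post-test analysis is ultimately an incremental ROAS number. A minimal sketch of that final arithmetic, assuming the counterfactual sales figure comes from the control geos via whatever model the team used:

```python
def incremental_roas(treatment_sales, counterfactual_sales, ad_spend):
    """Incremental ROAS = sales lift attributable to the campaign / spend.

    `counterfactual_sales` is what the treatment geos would have sold
    without the campaign, estimated from the control group.
    """
    if ad_spend <= 0:
        raise ValueError("ad_spend must be positive")
    return (treatment_sales - counterfactual_sales) / ad_spend
```

For example, $120k of sales in the treatment geos against a $100k counterfactual on $10k of spend gives an incremental ROAS of 2.0, even if last-click attribution reports something very different.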

  • View profile for Sindhu Biswal
    Sindhu Biswal is an Influencer

    Founder@Buzzlab | Ex-FilterCopy, PayTm Insider | Helping brands with content marketing

    50,494 followers

    It's about time you knew when your Meta campaign is not working:
    1- Consistently Poor Results
    ↳ Metrics like CTR, engagement, or conversions show no improvement despite adjustments.
    Tip: Audit your campaign's performance. Review pixel setup, landing page experience, and ad frequency. Small tweaks might save the campaign if the issues are technical.
    2- Irrelevant Targeting
    ↳ Ads fail to connect with the right audience, leading to low relevance scores and wasted impressions.
    Tip: Refine your audience using Custom or Lookalike Audiences. Leverage insights from past campaigns to better align with your target demographic.
    3- Creative Fatigue
    ↳ Users ignore ads due to repetitive visuals and messaging. Even refreshed creatives fail to regain attention.
    Tip: Test entirely new ad formats, such as carousel or video ads, and focus on storytelling that resonates emotionally with your audience.
    4- Low ROI
    ↳ The campaign's cost outweighs its returns, even after budget adjustments.
    Tip: Analyze high-performing placements and allocate your budget strategically. If performance remains poor, shift to channels with better ROI potential.
    5- Changed Business Focus
    ↳ Campaign goals no longer align with the brand's evolving objectives or market conditions.
    Tip: Reassess your broader strategy. Ensure future campaigns align closely with your updated business goals and use A/B testing to validate new directions.
    Sometimes the best move is to pause, regroup, and rebuild your campaign strategy. Meta's dynamic platform rewards adaptability and a data-driven approach.
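Signs 1, 3, and 4 can be monitored automatically as threshold flags over campaign metrics. A sketch with illustrative floors and ceilings (these are assumptions, not Meta benchmarks):

```python
def campaign_flags(metrics, ctr_floor=0.005, freq_ceiling=4.0, roas_floor=1.0):
    """Flag the failure modes above from a campaign's metrics dict.

    Missing metrics are simply not flagged; all thresholds are
    illustrative and should be set per account.
    """
    flags = []
    if metrics.get("ctr", ctr_floor) < ctr_floor:
        flags.append("consistently poor results: CTR below floor")
    if metrics.get("frequency", 0.0) > freq_ceiling:
        flags.append("creative fatigue: ad frequency too high")
    if metrics.get("roas", roas_floor) < roas_floor:
        flags.append("low ROI: cost outweighs returns")
    return flags
```

Signs 2 and 5 (targeting fit and business focus) stay human judgment calls; this only automates the numeric symptoms.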

  • View profile for Nikhil Mirashi

    B2B SaaS Marketing | Field Marketing | Integrated Marketing | Regional Marketing | Demand Gen | Events | Marketing Advisor, Mentor, Consultant, Speaker & Content Creator

    8,029 followers

    Two important sources of great insight that most marketers tend to miss out on are: (1) win-loss analysis and (2) existing customers.
    (1) Win-loss analysis: a win-loss analysis with accurate and comprehensive data (qualitative as well) can surface insights that are rarely documented anywhere else. Even secondary research will not give you that data. Analyzing it can reveal trends about geographies, solution areas, industry, company type/size, the audience involved in decision making, sponsoring, influencing, objections, FAQs, challenges, your USPs, and probably your sweet spot.
    💡 Example: In one former role, we realized that we were winning more new logos in non-banking finance apps in a particular market, and primarily for a particular functionality that we weren't highlighting much globally.
    💡 Additionally, some cases highlighted that prospects actively liked a particular type of content we were regularly promoting/distributing; this came purely from qualitative win-loss analysis and would never show up in any numerical analysis.
    (2) Customer interviews: talking to existing customers (in a non-sales situation) can give you a real understanding of how they use your product, what exactly their aha moment is, what challenges they face, what they'd love to have, and so on. Apart from helping product teams, this provides useful fodder to address gaps in positioning, improve customer retention, or strengthen the buyer's journey.
    💡 Example: In one former role, one reason our enterprise customers went gaga over us was our white-glove onboarding and overall post-sales process (typically handled by customer success). This helped us craft a niche bottom-of-funnel campaign around this theme.
    ➡️ To conclude, the above can feed your marketing plans: capitalize on strengths, reduce weaknesses, determine areas to avoid, identify opportunities, etc.
    Eventually, you'll see a marked reduction in sales cycles and better retention, as marketing will have taken care of most obstacles hindering the sales process. #B2BMarketing
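The quantitative half of the win-loss analysis in (1) is a group-by over deal records. A sketch over hypothetical data (the field names are assumptions); the qualitative notes the post emphasises are exactly what an aggregate like this cannot capture:

```python
from collections import defaultdict

def win_rate_by_segment(deals, key="industry"):
    """Win rate per segment from win-loss records.

    Each deal is a dict with a segment field (e.g. "industry")
    and a boolean "won". Swap `key` for geography, company size,
    etc. to surface the trends described above.
    """
    tallies = defaultdict(lambda: [0, 0])  # segment -> [wins, total]
    for deal in deals:
        segment = deal[key]
        tallies[segment][1] += 1
        if deal["won"]:
            tallies[segment][0] += 1
    return {seg: wins / total for seg, (wins, total) in tallies.items()}
```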

  • View profile for Imad Saade
    Imad Saade is an Influencer

    Chief Operating Officer | Managing Director | Strategic Sales Growth & Customer Experience Innovator

    7,143 followers

    When Data Tells a Story You Did Not Expect! One of the most valuable lessons I learned while building Marahb is that data rarely confirms what you want to believe. It confirms what is real. We entered the marketplace with clear expectations about which categories would outperform and which brands would attract the first wave of interest. Instead, the numbers pointed in a completely different direction. Clients were not chasing what was most familiar. They were gravitating toward pieces with a clear sense of identity. Items with visible craftsmanship. Items with character. Items that carried a story about where they came from and the people who made them. McKinsey research reinforces this shift. Products with transparent provenance convert significantly higher because trust is now a core part of the purchase journey. People invest in authenticity, not just aesthetics. They want to understand what stands behind a price tag, especially in premium and luxury categories. What surprised me most was how quickly behaviors became obvious through analytics. Time on page. Repeat visits. Save for later. Click to zoom. These signals spoke louder than any internal assumption. They revealed that clients respond to emotional credibility as much as design. The story behind the product played a direct role in its performance. This is the real power of data. It removes ego from decision-making. It brings clarity to areas where intuition alone can mislead. It protects brands from repeating outdated beliefs and pushes them toward what the customer already feels. Data is not just measurement. It is direction. It is the narrative your customers are writing with their actions. The question is whether leaders are willing to read it and adjust with humility. #DataInsights #RetailAnalytics #MarketplaceStrategy #EcommerceGrowth #ConsumerBehavior #ProductStrategy #CustomerInsights #BusinessIntelligence #GCCMarket #DigitalCommerce

  • View profile for Tilak Pujari

    Fixing what’s breaking your email revenue | Building Mailora (Deliverability Intelligence, without the enterprise complexity) usemailora.com

    15,242 followers

    Your inbox performance can fall off a cliff in 48 hours, and your ESP dashboard has no clue what went wrong. That’s the part that messes with marketers the most. I’ve seen teams spend days rewriting subject lines, swapping templates, blaming creative… when the real issue was something boring and invisible: a DNS change, a broken DKIM record, an unsubscribe flow that quietly pushed people to hit “Report spam.” Deliverability drops don’t usually announce themselves. They whisper.
    The slides below are the exact 48-hour triage order I use before touching copy:
    1. Check authentication alignment (SPF, DKIM, DMARC)
    2. Look at mailbox breakdowns (Gmail vs Outlook clues)
    3. Audit unsubscribe friction (complaints rise fast)
    4. Run an inbox placement test before your next big send
    5. Freeze volume spikes until you know what changed
    This is the rule: fix trust signals first. Then fix content.
    Want the printable 1-page checklist? Comment TRIAGE and I’ll send it over.
    Quick question: when performance drops, what do you check first, copy… or infrastructure? #email #emailmarketing
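Step 1 of the triage means actually reading your authentication records. A minimal parser for a DMARC TXT record (a generic sketch, not Mailora's tooling), so a missing or unexpectedly lax policy jumps out:

```python
def parse_dmarc(txt_record):
    """Split a DMARC TXT record into its tag=value pairs.

    A missing "p" tag, or p=none where you expected quarantine,
    is exactly the kind of boring, invisible change that tanks
    placement without warning.
    """
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags
```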
