Balancing AI and Human Expertise
Explore top LinkedIn content from expert professionals.
-
My top takeaways from Jason M. Lemkin:

1. AI is enabling smaller, leaner teams to punch above their weight. Companies that used to need 10 salespeople to hit a revenue target are now achieving the same results with five or six, thanks to AI-powered automations and ever-smarter agents.

2. AI agents can match human salespeople’s performance when properly trained. SaaStr replaced their entire 10-person go-to-market team with 20 AI agents managed by 1.2 humans, while maintaining the same revenue levels. “It’s not better; it’s not worse. But it’s so much more efficient, and it scales like software scales.”

3. GTM efficiency is now a competitive moat, and AI is the primary lever for achieving it. In a market where growth at all costs is no longer rewarded, Jason sees AI as the way for B2B companies to maintain or accelerate revenue growth while keeping sales and marketing costs flat. Companies that master this balance will be able to outlast and outcompete.

4. The mediocre middle is most at risk. “AI is replacing the jobs people don’t want to do today. And it is displacing the mid-pack and the mediocre—their jobs are very much at risk.” The best salespeople will build their AI superpowers, but those who just show up and don’t truly understand their job are going to be quickly replaced by AI.

5. The traditional advice to hire your first sales rep at $1M ARR is becoming obsolete in an AI-first world. Jason is observing portfolio companies now reaching $5M, even $10M, ARR with founder-led sales supplemented by AI tooling, automation, and highly efficient inbound funnels. This doesn’t mean sales teams disappear, but it does mean the timing, structure, and skill set required are fundamentally different than they were 18 months ago.

6. The traditional playbook of hiring large SDR teams is becoming obsolete as AI tools take over prospecting and initial outreach. Jason has built AI-driven outbound engines that can personalize emails, qualify leads, and book meetings at a fraction of the cost of human SDRs. Founders who continue to scale traditional SDR orgs without testing AI alternatives risk overspending on a function that’s being rapidly commoditized.

7. Cold outbound is not dead, but the bar for what works has skyrocketed—generic spray-and-pray is now filtered out by AI gatekeepers on the buyer side. The companies still seeing success with outbound are using hyper-personalized, research-driven sequences that feel one-to-one, often powered by AI that scrapes LinkedIn, monitors job changes, and tailors messaging to the individual. If your outbound isn’t informed by real-time signals and contextual relevance, it’s already in the spam folder.
-
🤝 How Do We Build Trust Between Humans and Agents?

Everyone is talking about AI agents: autonomous systems that can decide, act, and deliver value at scale. Analysts estimate they could unlock $450B in economic impact by 2028. And yet… most organizations are still struggling to scale them. Why? Because the challenge isn’t technical. It’s trust.

📉 Trust in AI has plummeted from 43% to just 27%. The paradox: AI’s potential is skyrocketing, while our confidence in it is collapsing.

🔑 So how do we fix it? My research and practice point to clear strategies:

Transparency → Agents can’t be black boxes. Users must understand why a decision was made.
Human Oversight → Think co-pilot, not unsupervised driver. Strategic oversight keeps AI aligned with values and goals.
Gradual Adoption → Earn trust step by step: first verify everything, then verify selectively, and only at maturity allow full autonomy—with checkpoints and audits. (A minimal sketch of this staging follows below.)
Control → Configurable guardrails, real-time intervention, and human handoffs ensure accountability.
Monitoring → Dashboards, anomaly detection, and continuous audits keep systems predictable.
Culture & Skills → Upskilled teams who see agents as partners, not threats, drive adoption.

Done right, this creates what I call Human-Agent Chemistry — the engine of innovation and growth. According to research, the results are measurable:
📈 65% more engagement in high-value tasks
🎨 53% increase in creativity
💡 49% boost in employee satisfaction

👉 The future of agents isn’t about full autonomy. It’s about calibrated trust — a new model where humans provide judgment, empathy, and context, and agents bring speed, precision, and scale.

The question is: will leaders treat trust as an afterthought, or as the foundation for the next wave of growth? What do you think — are we moving too fast on autonomy, or too slow on trust?

#AI #AIagents #HumanAICollaboration #FutureOfWork #AIethics #ResponsibleAI
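A concrete way to read the Gradual Adoption and Human Oversight strategies above is as a routing policy: how much agent output gets a human check at each maturity stage. Below is a minimal Python sketch of that idea; the stage names, review rates, and confidence threshold are invented for illustration, not drawn from any published framework.

```python
# Minimal sketch of staged trust: review everything in the pilot stage,
# sample selectively while scaling, and spot-audit at maturity.
# All stages, rates, and thresholds here are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    confidence: float  # agent's self-reported confidence, 0..1

REVIEW_RATE = {"pilot": 1.0, "scaling": 0.25, "mature": 0.05}

def needs_human_review(action: AgentAction, stage: str) -> bool:
    """Route an action to a human based on adoption stage and confidence."""
    if action.confidence < 0.7:  # low confidence always escalates to a person
        return True
    return random.random() < REVIEW_RATE[stage]

action = AgentAction("Send refund of $40 to customer #123", confidence=0.82)
print(needs_human_review(action, stage="scaling"))  # True about 25% of the time
```

The shape matters more than the numbers: autonomy widens only as the review rate is earned down, which is one way to operationalize calibrated trust.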
-
Last week, a senior manager presented me with a strategic roadmap during an advisory session. It was polished, grammatically perfect, and filled with current buzzwords. It looked like fantastic work, but something about it left me uneasy, so I asked one simple question:

"𝘞𝘩𝘺 𝘥𝘪𝘥 𝘺𝘰𝘶 𝘱𝘳𝘪𝘰𝘳𝘪𝘵𝘪𝘻𝘦 𝘤𝘩𝘢𝘯𝘯𝘦𝘭 𝘟 𝘰𝘷𝘦𝘳 𝘤𝘩𝘢𝘯𝘯𝘦𝘭 𝘠 𝘪𝘯 𝘘3?"

I was not surprised by the reaction. He froze. He couldn't give a proper answer. Why? Because he hadn't made that decision. The algorithm had.

He had fallen for the "𝗢𝗿𝗮𝗰𝗹𝗲 𝗠𝘆𝘁𝗵". He treated the AI as a know-it-all guru rather than what it actually is: a high-powered probabilistic engine. This passive approach is dangerous. When we view AI as an oracle, we stop analyzing and start obeying. We confuse "𝘨𝘰𝘰𝘥 𝘸𝘳𝘪𝘵𝘪𝘯𝘨" with "𝘨𝘰𝘰𝘥 𝘪𝘥𝘦𝘢𝘴 𝘵𝘩𝘢𝘵 𝘐 𝘳𝘦𝘢𝘭𝘭𝘺 𝘶𝘯𝘥𝘦𝘳𝘴𝘵𝘢𝘯𝘥 𝘢𝘯𝘥 𝘤𝘢𝘯 𝘸𝘰𝘳𝘬 𝘸𝘪𝘵𝘩".

Here is the uncomfortable reality: LLMs do not "reason" in the human sense; they predict the next most likely word based on patterns. They are designed to sound convincing, not to be factually accurate. If you want to survive the Algorithm Era, you must shift from passive user to active driver. Here is how to break the toxic AI dependency:

• 𝗗𝗲𝗺𝗼𝘁𝗲 𝘁𝗵𝗲 𝗔𝗜: Stop treating ChatGPT as a Vice President of Strategy. Treat it as a brilliant but sometimes intoxicated summer intern. It generates volume, but YOU provide the judgment.
• 𝗧𝗵𝗲 "𝗝𝗮𝗴𝗴𝗲𝗱 𝗙𝗿𝗼𝗻𝘁𝗶𝗲𝗿" 𝗥𝘂𝗹𝗲: AI excels at creative brainstorming but often fails at simple logical tasks. Never delegate the final decision on high-stakes logic to a black box.
• 𝗜𝗻𝘁𝗲𝗿𝗿𝗼𝗴𝗮𝘁𝗲, 𝗗𝗼𝗻'𝘁 𝗝𝘂𝘀𝘁 𝗔𝘀𝗸: Don't just ask for an answer. Ask the AI to show its work. Force it to reveal its reasoning so you can verify the logic, not just the result. (A sketch of this habit follows below.)
• 𝗢𝘄𝗻 𝘁𝗵𝗲 𝗪𝗵𝘆: If you cannot explain the rationale behind an AI-generated strategy without looking at your notes, you do not have a strategy. You have a hallucination.

Let's be honest: what is the most plausible lie an AI has told you recently that almost slipped into a final report? I'll start: AI confidently claimed a competitor had discontinued a specific product line because it seemed "logical." It hadn't. Let me know in the comments.

#AIAugmentedProfessional #HybridIntelligence #AiforExecutives #OracleMyth
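One way to practice "Interrogate, Don't Just Ask" is to bake the interrogation into a reusable prompt template, so every high-stakes question forces the model to expose what you need to audit. This is a minimal sketch; the template wording and helper name are assumptions, and you would paste the result into whatever model you use.

```python
# Sketch of the "Interrogate, Don't Just Ask" habit: wrap a question in a
# template that demands assumptions and reasoning, so YOU can audit the
# logic, not just the answer. The template text is illustrative.
INTERROGATION_TEMPLATE = """\
{question}

Before giving your recommendation:
1. List every assumption you are making.
2. Walk through your reasoning step by step.
3. Flag any claim you cannot tie to a verifiable source.
4. State what evidence would change your conclusion.
"""

def interrogate(question: str) -> str:
    """Build a prompt that demands visible reasoning, not just an answer."""
    return INTERROGATION_TEMPLATE.format(question=question)

# The roadmap question from the story above:
print(interrogate("Why should we prioritize channel X over channel Y in Q3?"))
```

The payoff is in step 1 and step 3: once assumptions and unsupported claims are on the page, "Own the Why" becomes a checklist rather than a vibe.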
-
AI products like Cursor, Bolt and Replit are shattering growth records not because they're "AI agents". Or because they've got impossibly small teams (although that's cool to see 👀). It's because they've mastered the user experience around AI, somehow balancing pro-like capabilities with B2C-like UI. This is product-led growth on steroids.

Yaakov Carno tried the most viral AI products he could get his hands on. Here are the surprising patterns he found (don't miss the full breakdown in today's bonus Growth Unhinged: https://lnkd.in/ehk3rUTa):

1. Their AI doesn't feel like a black box. Pro-tips from the best:
- Show step-by-step visibility into AI processes.
- Let users ask, "Why did AI do that?"
- Use visual explanations to build trust.

2. Users don't need better AI—they need better ways to talk to it. Pro-tips from the best:
- Offer pre-built prompt templates to guide users.
- Provide multiple interaction modes (guided, manual, hybrid).
- Let AI suggest better inputs ("enhance prompt") before executing an action.

3. The AI works with you, not just for you. Pro-tips from the best:
- Design AI tools to be interactive, not just output-driven.
- Provide different modes for different types of collaboration.
- Let users refine and iterate on AI results easily.

4. Let users see (& edit) the outcome before it's irreversible. Pro-tips from the best:
- Allow users to test AI features before full commitment (many let you use it without even creating an account).
- Provide preview or undo options before executing AI changes.
- Offer exploratory onboarding experiences to build trust.

5. The AI weaves into your workflow, it doesn't interrupt it. Pro-tips from the best:
- Provide simple accept/reject mechanisms for AI suggestions.
- Design seamless transitions between AI interactions.
- Prioritize the user's context to avoid workflow disruptions.

(A minimal sketch of patterns #4 and #5 follows below.)

The TL;DR: Having "AI" isn't the differentiator anymore—great UX is.

Pardon the Sunday interruption & hope you enjoyed this post as much as I did 🙏

#ai #genai #ux #plg
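Patterns #4 and #5 (preview before it's irreversible, simple accept/reject) boil down to a tiny staging loop: show the proposed change as a diff, and apply it only on explicit acceptance. The Python sketch below is a generic illustration using the standard library's difflib; none of it is taken from Cursor, Bolt, or Replit's actual implementations.

```python
# Sketch of "preview, then accept/reject": stage the AI's proposed edit,
# render it as a diff the user can inspect, and mutate state only after
# an explicit accept. All names here are illustrative.
import difflib

def preview(original: str, proposed: str) -> str:
    """Render the AI's proposed edit as a unified diff the user can inspect."""
    return "\n".join(difflib.unified_diff(
        original.splitlines(), proposed.splitlines(),
        fromfile="current", tofile="ai_suggestion", lineterm=""))

def apply_if_accepted(original: str, proposed: str, accepted: bool) -> str:
    # The irreversible step happens only after an explicit accept (pattern #4).
    return proposed if accepted else original

doc = "Ship v2 in March."
suggestion = "Ship v2 in April, after the security review."
print(preview(doc, suggestion))                           # user sees the change first
doc = apply_if_accepted(doc, suggestion, accepted=True)   # one-click accept (pattern #5)
print(doc)
```

The design choice worth copying: the AI never mutates user state directly; it only proposes, which is what keeps the workflow uninterrupted.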
-
𝗖𝗮𝗻 𝗔𝗜 𝗿𝗲𝗽𝗹𝗮𝗰𝗲 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿𝘀 𝗮𝗻𝗱 𝗰𝗼𝗻𝘀𝘂𝗹𝘁𝗮𝗻𝘁𝘀? Not today. And not if you know how to actually use it.

As someone who works on complex projects with government clients and consulting teams, I've seen the promise and the pitfalls of AI in research. But let's be honest: 𝗔𝗜 𝗱𝗼𝗲𝘀𝗻'𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲 𝘆𝗼𝘂𝗿 𝗯𝗿𝗮𝗶𝗻. 𝗜𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲𝘀 𝘆𝗼𝘂𝗿 𝗹𝗮𝗰𝗸 𝗼𝗳 𝗽𝗿𝗼𝗰𝗲𝘀𝘀.

Take this recent case: 🗞️ Deloitte submitted a $440,000 report to the Australian government that turned into a national embarrassment. Why? It was partly drafted using ChatGPT, and the AI-generated sections included factual errors, lacked citations, and failed to reflect the policy context. Result? Public backlash. Trust erosion. Damaged credibility.

So what does this teach us? 𝗔𝗜 𝗶𝘀𝗻'𝘁 𝗱𝗮𝗻𝗴𝗲𝗿𝗼𝘂𝘀. 𝗕𝗹𝗶𝗻𝗱 𝗱𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝗰𝗲 𝗼𝗻 𝗔𝗜 𝗶𝘀.

How I use AI (without letting it think for me):
✅ Draft fast, think slow: I use AI to sketch early structures, not write the final message. Every insight is still mine.
✅ Source triangulation: I always cross-check data points across 3+ reliable sources, especially when AI gives me an answer that sounds "too smooth." (See the sketch after this post.)
✅ Summarize, don't strategize: I let AI summarize policies, news articles, or frameworks. But recommendations? That's human domain.
✅ Stakeholder sensitivity: No model understands your client like you do. Context isn't optional; it's what makes your research relevant.
✅ Build frameworks, not just documents: AI can help you spot patterns and cluster insights. I build mind maps and project scaffolding with it.

🔁 Use AI to amplify your thinking. Not replace it.

Research is becoming more agile. Clients want insights faster. And AI can help, but only if you're still the analyst in charge. Let's make AI a research co-pilot. Not the pilot.

#AIinResearch #Consulting #Deloitte #Australia #HumanInTheLoop #ResponsibleAI #ResearchFrameworks #PolicyAnalysis #GovTech #ConsultingLife #LinkedInForAnalysts #AItools #AnalystToolkit
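The source-triangulation rule is easy to operationalize as a checklist gate: no AI-supplied data point enters a deliverable until enough independent sources confirm it. A minimal sketch follows, with an illustrative threshold of three; the claim, sources, and function name are made up for the example.

```python
# Sketch of a triangulation gate: accept an AI-supplied data point only
# when enough independent sources confirm it. Threshold and sources below
# are illustrative, not a methodology standard.
def triangulated(confirming_sources: list[str], minimum: int = 3) -> bool:
    """Accept a data point only when enough independent sources confirm it."""
    return len(set(confirming_sources)) >= minimum  # de-duplicate syndicated copies

claim = "Program X reduced average wait times by 12%"
sources = ["national statistics release", "department annual report", "peer-reviewed study"]
print(f"Use '{claim}' in the report: {triangulated(sources)}")  # True: 3 independent sources
```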
-
It used to take me 8 hours toiling over a hot laptop to write my fortnightly newsletter. Now I do it in half that time. Generative AI, right? Nay-nay, friends... pretty much the opposite. It's about Process.

"Process" sounds as much fun as scrubbing dirt from beets. But stick with me. Here's the non-robot, non-A.I., counter-intuitive first step in mine:

1. START WITH PEN + PAPER. Write a list of your key points on paper with a pen. Flesh out those key points with a few bullet points. Don't worry about "writing." You're welding the scaffolding for ideas that will become writing.

Why this works:
1️⃣ You write slower than you type. Working with analog tools slows you down. Your high-speed locomotive brain isn't screaming ahead to get to Next Sentence Depot. It has to wait patiently for your hands to catch up, like a car driver at a railroad crossing waiting for the train's caboose. That slower pace ultimately delivers better insights.
2️⃣ You can't backspace or start over. You can only keep going.
3️⃣ It's the ultimate in distraction-free writing. Checking email. Scrolling LinkedIn. Clicking to another tab. You cannot. Because... well, paper.

***

Gen AI as a tool can help us write more efficiently (I share an idea or two about that in my Process, too). But the point of our writing is to deliver insights and craft that could come only from us. No amount of AI-fueled efficiency or optimization is going to replace that. Pens and paper and other analog tools can help exponentially. Sometimes, the slowest way is the fastest way.

Love, Ann

p.s. If you're not on my list, I would love to write to you every two weeks, too: https://lnkd.in/gsiMkmzZ
-
The gap I am focused on most these days when it comes to AI at work is the gap between employees and employers.

We know that 75% of knowledge workers are using generative AI on the job, saying it's not just helping them save time to focus on more important work but also letting them bring more human skills to their work, like creativity. But we also know that only 39% of those workers have been trained on AI at work, as companies still struggle to come up with a point of view on AI as well as a strategy for workforce development in the age of AI.

If your company is struggling on that front, one thing you can do is look to those who are leading the way. IBM and Siemens are great examples of companies that are two steps ahead of most, moving beyond the incremental early days of AI towards the real, transformative benefits. I was inspired by my conversation a few weeks ago with Nickle LaMoreaux and Brenda Discher, who are not only innovating with AI at scale but keeping people at the center of it all.

Across those conversations and many others I'm having, a few key foundational steps are emerging:
1️⃣ Have a pro-human AI point of view and strategy in place. AI has the potential to build a world of work where people can bring their full skills and abilities to bear — but we need to believe in the power of our people more than the power of our tech to realize it.
2️⃣ See jobs as tasks, not titles. Once you boil a job down into a set of tasks, it's much easier to see where AI is coming in to change or disrupt some of those tasks, and where there are uniquely human skills people will spend much more time on than before. In a world where 68% of skills are set to change by 2030, understanding where this change will hit is crucial to helping your teams stay resilient.
3️⃣ Build learning into the day-to-day of your company's culture. As skills for jobs change rapidly, learning is no longer a one-off moment at the start of a career. The ability to learn, unlearn, and relearn is what sets teams apart to stay agile and resilient.
-
The real risk with Artificial Intelligence today is not that it's being used as a tool—but that it's increasingly treated as a silver bullet. This shift, often described by experts as 𝗔𝗜 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗶𝘀𝗺, reflects a growing belief that AI can resolve complex social, ethical, and organizational problems simply by applying more data and better models.

AI has undeniably earned its place as a utility. It automates routine work, improves efficiency, and enables data-driven decisions across domains such as healthcare, finance, and agriculture. But problems emerge when this utility mindset mutates into blind faith. When AI is framed as a universal solution, four failure modes consistently surface:

𝗢𝘃𝗲𝗿-𝗿𝗲𝗹𝗶𝗮𝗻𝗰𝗲 𝗮𝗻𝗱 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 𝗯𝗶𝗮𝘀
People defer to AI recommendations over their own judgment—especially when outputs sound confident. The result is diminished critical thinking, weaker challenge, and reduced creativity.

𝗙𝗮𝗯𝗿𝗶𝗰𝗮𝘁𝗲𝗱 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 (𝗵𝗮𝗹𝗹𝘂𝗰𝗶𝗻𝗮𝘁𝗶𝗼𝗻𝘀)
AI systems can produce plausible but false outputs. High-profile legal cases, where generative AI tools fabricated court citations, illustrate how credibility can collapse when verification is skipped.

𝗕𝗶𝗮𝘀 𝗮𝗺𝗽𝗹𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻
Trained on historical data, AI systems often reproduce—and sometimes intensify—existing racial, gender, and socioeconomic biases, particularly in hiring, credit scoring, and criminal justice applications.

𝗘𝗿𝗼𝘀𝗶𝗼𝗻 𝗼𝗳 𝗵𝘂𝗺𝗮𝗻 𝗮𝗴𝗲𝗻𝗰𝘆
Decision-making responsibility quietly shifts from humans to machines, creating a "responsibility gap" where ethical, political, and accountability judgments are effectively outsourced.

𝗧𝗵𝗲 𝗪𝗮𝘆 𝗙𝗼𝗿𝘄𝗮𝗿𝗱: 𝗛𝘂𝗺𝗮𝗻-𝗶𝗻-𝘁𝗵𝗲-𝗟𝗼𝗼𝗽
Researchers such as 𝗦𝘁𝘂𝗮𝗿𝘁 𝗥𝘂𝘀𝘀𝗲𝗹𝗹 argue that the antidote to AI solutionism is not less AI—but better integration of human judgment.

𝗛𝘆𝗯𝗿𝗶𝗱 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝘄𝗼𝗿𝗸𝘀 𝗯𝗲𝘀𝘁
AI excels at pattern recognition and scale; humans excel at context, values, and judgment. The highest-quality outcomes emerge when the two are deliberately combined.

𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲 𝗔𝗜 𝗯𝘆 𝗱𝗲𝘀𝗶𝗴𝗻
Organizations are moving toward principles of transparency, accountability, and human verification—ensuring that consequential decisions are reviewed, challenged, and owned by people. (A minimal sketch of this pattern follows below.)

The trend is clear: the future is not about replacing human intelligence, but about building "𝘀𝗮𝗳𝗲-𝗯𝘆-𝗱𝗲𝘀𝗶𝗴𝗻" 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀—assistants that augment human capability rather than substitute for it. AI is powerful. But without humans firmly in the loop, it is not wise.
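As a thought experiment, the "reviewed, challenged, and owned by people" principle can be expressed as a guard in code: consequential decisions simply cannot execute without a named human approver. This is a purely illustrative Python sketch; the threshold, field names, and example values are assumptions, not any organization's actual policy.

```python
# Sketch of human verification by design: actions above a consequence
# threshold are blocked until a named reviewer signs off, keeping
# accountability with a person. All names and values are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    summary: str
    consequence_level: int                # 1 = trivial ... 5 = high-stakes
    ai_recommendation: str
    approved_by: Optional[str] = None     # filled in by a human reviewer

CONSEQUENTIAL = 3  # at or above this level, a human must own the decision

def execute(decision: Decision) -> str:
    if decision.consequence_level >= CONSEQUENTIAL and not decision.approved_by:
        return f"BLOCKED: '{decision.summary}' needs human sign-off"
    owner = decision.approved_by or "automation"
    return f"EXECUTED: '{decision.summary}' (owned by {owner})"

loan = Decision("Deny loan application #4412", 4, "deny")
print(execute(loan))                       # blocked until a person reviews it
loan.approved_by = "credit.officer@bank"   # reviewer challenges, then owns it
print(execute(loan))
```

Note what the guard changes: the AI still recommends, but the "responsibility gap" closes because an identifiable human owns every consequential outcome.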
-
Reliability, evaluation, and "hallucination anxiety" are where most AI programmes quietly stall. Not because the model is weak, but because the system around it is not built to scale trust.

When companies move beyond demos, three hard questions appear:
→ Can we rely on this output?
→ Do we know what "good" actually looks like?
→ How much human oversight is enough?

The fix is not better prompting. It is strategy and operating discipline.

𝐅𝐢𝐫𝐬𝐭: Define reliability like a product, not a vibe. Every serious AI use case should have a one-page SLO sheet with measurable targets across:
→ Task success ↳ Right-first-time rate and rubric-based acceptance
→ Factual grounding ↳ Evidence coverage and unsupported-claim tracking
→ Safety and compliance ↳ Policy violations and PII leakage
→ Operational quality ↳ Latency, cost per task, escalation to humans
Now "good" is no longer opinion. It is observable. (A sketch of an SLO sheet as code follows below.)

𝐒𝐞𝐜𝐨𝐧𝐝: Evaluation must be continuous, not a one-off demo test. Use a simple loop:
𝐏lan: Define rubrics, datasets, and risk tiers
𝐃o: Run offline evaluations and limited pilots
𝐂heck: Monitor drift and regressions weekly
𝐀ct: Update prompts, data, guardrails, and workflows
Support this with an AI test pyramid:
→ Unit checks for prompts and tool behaviour
→ Scenario tests for real edge failures
→ Regression benchmarks to prevent backsliding
→ Live monitoring in production
Add statistical control charts, and you can detect silent degradation before users do.

𝐓𝐡𝐢𝐫𝐝: Reduce hallucinations by design. Run a short failure-mode workshop and engineer controls:
→ Require retrieval or evidence before answering
→ Allow safe abstention instead of confident guessing
→ Add claim checking and tool validation
→ Use structured intake and clarifying flows
You are not asking the model to behave. You are designing a system that expects failure and contains it.

𝐅𝐨𝐮𝐫𝐭𝐡: Make human-in-the-loop affordable. Tier risk:
→ Low risk: Light sampling
→ Medium risk: Triggered review
→ High risk: Mandatory approval
Escalate only when signals demand it: low confidence, missing evidence, policy flags, or novelty spikes. Review becomes targeted, fast, and a source of improvement data.

𝐅𝐢𝐧𝐚𝐥𝐥𝐲: Operate it like a capability. Track outcomes, risk, delivery speed, and cost on a single dashboard. Hold a short weekly reliability stand-up focused on regressions, failure modes, and ownership.

What you end up with is simple:
↳ Use case catalogue with risk tiers
↳ Clear SLOs and error budgets
↳ Continuous evaluation harness
↳ Built-in controls
↳ Targeted human review
↳ Reliability cadence

AI does not scale on intelligence alone. It scales on measurable trust.

♻️ Share if you found this useful.
➕ Follow Jyothish Nair for reflections on AI, change, and human-centred AI

#AI #AIReliability #TrustAtScale #OperationalExcellence
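Here is one way the "one-page SLO sheet" could look as code: measurable targets per metric, plus a weekly check that returns breaches for the reliability stand-up. A minimal sketch, assuming a simple metrics dict; every target value below is an illustrative placeholder, not a recommended threshold.

```python
# Sketch of an SLO sheet as code: named targets across the four dimensions
# from the post, checked against the week's observed metrics so silent
# regressions surface before users notice. Targets are illustrative only.
SLOS = {
    "right_first_time_rate": 0.90,   # task success (lower bound)
    "evidence_coverage":     0.95,   # factual grounding (lower bound)
    "policy_violation_rate": 0.001,  # safety and compliance (upper bound)
    "p95_latency_seconds":   4.0,    # operational quality (upper bound)
}
HIGHER_IS_BETTER = {"right_first_time_rate", "evidence_coverage"}

def slo_breaches(observed: dict[str, float]) -> list[str]:
    """Return the SLOs this week's metrics violate, for the stand-up agenda."""
    breaches = []
    for name, target in SLOS.items():
        value = observed[name]
        ok = value >= target if name in HIGHER_IS_BETTER else value <= target
        if not ok:
            breaches.append(f"{name}: observed {value}, target {target}")
    return breaches

week = {"right_first_time_rate": 0.86, "evidence_coverage": 0.97,
        "policy_violation_rate": 0.0005, "p95_latency_seconds": 5.2}
print(slo_breaches(week))  # flags the task-success and latency regressions
```

Feed each week's output into the Check step of the Plan-Do-Check-Act loop above, and "good" stays observable instead of drifting back into opinion.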
-
A software engineer can ask AI to solve a complex programming problem in a few precise words, drawing on years of technical knowledge. But someone without programming experience? They might need paragraphs just to explain what they want, often missing crucial technical context and requirements that would be second nature to a developer. This highlights a key truth: AI is an incredible force multiplier for those who understand the fundamentals. It's not just about giving commands – it's about knowing which problems need solving and how to frame them effectively. Yes, AI democratizes access to programming capabilities. But it simultaneously increases the value of deep technical expertise. The most powerful combination? Domain knowledge + AI fluency.