Automation Implementation Tips


  • View profile for Henry Schuck

    CEO & Founder at ZoomInfo | Nasdaq Listed: GTM

    95,330 followers

    Last quarter, we spent $1,404,619 on AI tokens - an all-time high - and the ROI wasn't what we expected. Most of the ROI didn't come from "flashy AI"; it came from boring AI doing boring work at scale. Here's where our spend went and what actually moved the needle:

    1. Telling reps who to call today (and why). We're using AI to sift through millions of signals and tell reps who to talk to today and why. The signals we've found matter: job changes (new decision makers = new opportunities), buying committee changes, and intent signals (active web research and pricing page visits). The big ROI driver is helping our customers with daily prioritization so they don't have to go fishing for actionable info. At ZoomInfo, we've seen a 25-33% increase in meeting quality and opp creation when AEs source using our AI tools. Win rates also jump from 16-20% to 30%.

    2. Writing outreach that doesn't sound automated. We're moving from "20 segments of 1,000" to 20,000 segments of 1. Not "VP of IT at an enterprise insurer" messaging, but John at State Farm, who we talked to last year, who competes with three of our customers, with context pulled in automatically. Customer ROI here ultimately comes from better response rates and higher close rates by being more relevant. Buyers care when you show you care.

    3. Turning sales calls into usable data. Every sales call (ours and our customers') is recorded using Chorus and becomes structured data: objection patterns, competitor mentions, deal risk, coaching moments. The benefits here are huge: 25-30% faster ramp time for new reps, and 10-15% larger deal sizes through better discovery and value articulation. The average rep sells more like the best rep.

    4. Speeding up low-value engineering work. Every engineer at ZoomInfo has IntelliJ and VS Code with Cline. AI handles the unglamorous stuff: boilerplate code, refactors, test coverage. We've seen ~25-30% faster execution on these routine tasks, which frees senior engineers to focus on system design and real product innovation.

    Our biggest lesson so far: if your data foundation is garbage, AI just helps you move faster in the wrong direction. You won't get AI "working" until you have contextual customer/prospect data centralized and can actually build on top of it. We're still early and trying a lot of things, but these have been the highest-ROI drivers by a mile. If you're testing AI in your GTM stack, drop a comment with what's actually working for you - I'm all ears.
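The daily-prioritization idea in point 1 can be sketched as a simple weighted-signal score. This is a hypothetical illustration, not ZoomInfo's actual model; the signal names and weights are invented for the example:

```python
# Hypothetical sketch of daily rep prioritization via weighted buying signals.
# Signal names and weights are illustrative only.
SIGNAL_WEIGHTS = {
    "job_change": 3.0,         # new decision maker at the account
    "committee_change": 2.0,   # buying committee shifted
    "web_research": 1.5,       # active intent research detected
    "pricing_page_visit": 2.5,
}

def score_account(signals):
    """Sum the weights of the signals observed for one account."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def daily_call_list(accounts, top_n=3):
    """Rank accounts by signal score; return (name, score, signals) tuples."""
    ranked = sorted(accounts.items(), key=lambda kv: score_account(kv[1]),
                    reverse=True)
    return [(name, score_account(sigs), sigs) for name, sigs in ranked[:top_n]]

accounts = {
    "Acme": ["job_change", "pricing_page_visit"],
    "Globex": ["web_research"],
    "Initech": ["committee_change", "web_research", "pricing_page_visit"],
}
print(daily_call_list(accounts, top_n=2))
```

The point of returning the signals alongside the score is the "and why" part: the rep sees the reason to call, not just a ranking.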

  • View profile for Shobha Moni

    25+ years transforming industries with ERP systems | Partner founder Triad Software Solutions

    23,143 followers

    I've killed 50+ ERP rollouts before kickoff. Always for the same 6 reasons. And your vendor will never tell you these. If you're about to start an ERP project, pause. Run this 6-question checklist first.

    (1) Is your CFO actively leading this project, or is IT running the show? If Finance isn't in charge, you're building the wrong thing for the right price.

    (2) When was the last time your Chart of Accounts was redesigned? If it's older than your finance manager, you're about to migrate legacy chaos.

    (3) Are you asking for a "like-for-like" system or rethinking broken workflows? If the goal is to copy-paste the past, why even switch?

    (4) Is Procurement part of your ERP planning team? No? Then who's mapping landed cost, freight margins, and supplier controls?

    (5) Have you audited your master data before selecting the ERP? Or are you planning a $1M migration with duplicate SKUs and ghost vendors?

    (6) Did the vendor say, "You can customize that later"? That means they don't understand your business. At all.

    If you answered "No" or "Not sure" to even 2 of these, stop the rollout. You're not ready.

  • View profile for Aishwarya Srinivasan
    Aishwarya Srinivasan is an Influencer
    627,879 followers

    If you are building AI agents or learning about them, keep these best practices in mind 👇 Building agentic systems isn't just about chaining prompts anymore; it's about designing robust, interpretable, production-grade systems that interact with tools, humans, and other agents in complex environments. Here are 10 essential design principles you need to know:

    ➡️ Modular Architectures: Separate planning, reasoning, perception, and actuation. This makes your agents more interpretable and easier to debug. Think planner-executor separation in LangGraph or CogAgent-style designs.

    ➡️ Tool-Use APIs via MCP or Open Function Calling: Adopt the Model Context Protocol (MCP) or OpenAI's Function Calling to interface safely with external tools. These standard interfaces provide strong typing, parameter validation, and consistent execution behavior.

    ➡️ Long-Term & Working Memory: Memory is non-optional for non-trivial agents. Use hybrid memory stacks: vector search tools like MemGPT or Marqo for retrieval, combined with structured memory systems like LlamaIndex agents for factual consistency.

    ➡️ Reflection & Self-Critique Loops: Implement agent self-evaluation using ReAct, Reflexion, or emerging techniques like Voyager-style curriculum refinement. Reflection improves reasoning and helps correct hallucinated chains of thought.

    ➡️ Planning with Hierarchies: Use hierarchical planning: a high-level planner for task decomposition and a low-level executor to interact with tools. This improves reusability and modularity, especially in multi-step or multi-modal workflows.

    ➡️ Multi-Agent Collaboration: Use protocols like AutoGen, A2A, or ChatDev to support agent-to-agent negotiation, subtask allocation, and cooperative planning. This is foundational for open-ended workflows and enterprise-scale orchestration.

    ➡️ Simulation + Eval Harnesses: Always test in simulation. Use benchmarks like ToolBench, SWE-agent, or AgentBoard to validate agent performance before production. This minimizes surprises and surfaces regressions early.

    ➡️ Safety & Alignment Layers: Don't ship agents without guardrails. Use tools like Llama Guard, Prompt Shield, and role-based access controls. Add structured rate-limiting to prevent overuse or sensitive tool invocation.

    ➡️ Cost-Aware Agent Execution: Implement token budgeting, step-count tracking, and execution metrics. Especially in multi-agent settings, costs can grow exponentially if unbounded.

    ➡️ Human-in-the-Loop Orchestration: Always have an escalation path. Add override triggers, fallback LLMs, or route to a human for edge cases and critical decision points. This protects quality and trust.

    PS: If you are interested in learning more about AI Agents and MCP, join the hands-on workshop I am hosting on 31st May: https://lnkd.in/dWyiN89z If you found this insightful, share it with your network ♻️ Follow me (Aishwarya Srinivasan) for more AI insights and educational content.
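The planner-executor separation from the first and fifth principles can be sketched in a few lines. The `plan` function and the tool registry below are hypothetical stand-ins for a real LLM planner and real tool integrations, not any specific framework's API:

```python
# Sketch of planner-executor separation: the planner decides *what* steps to
# take, the executor only runs them through a tool registry. The goal string,
# tools, and stubbed values are invented for illustration.
def plan(goal):
    """High-level planner: decompose a goal into (tool, args) steps."""
    if goal == "report weather":
        return [("fetch", {"city": "Paris"}), ("summarize", {})]
    return []

TOOLS = {
    # Each tool maps a context dict to an updated context dict.
    "fetch": lambda ctx, city: {**ctx, "temp_c": 18},  # stubbed API call
    "summarize": lambda ctx: {**ctx, "summary": f"{ctx['temp_c']}C"},
}

def execute(steps):
    """Low-level executor: run each planned step, threading context through."""
    ctx = {}
    for name, args in steps:
        ctx = TOOLS[name](ctx, **args)
    return ctx

result = execute(plan("report weather"))
print(result["summary"])
```

Because the executor never chooses steps itself, you can unit-test it with hand-written plans and debug the planner in isolation, which is the interpretability benefit the post describes.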

  • View profile for Daniel Croft Bednarski

    I Share Daily Lean & Continuous Improvement Content | Efficiency, Innovation, & Growth

    10,527 followers

    Don't Automate Complexity... Simplify and Error-Proof Instead

    When problems arise, it's tempting to think automation is the magic fix. But automating a broken or complex process just means you're speeding up the production of errors. The smarter approach? Simplify the process and error-proof it (Poka Yoke) before thinking about automation. Here's why simplification often beats automation and how you can apply it.

    Why You Should Simplify Before Automating:

    1️⃣ Faster, Cheaper Improvements: Simplifying a process through standardization and removing unnecessary steps often solves problems more quickly and at a lower cost than automation.

    2️⃣ Avoid Automating Waste: If your process is full of waste (like waiting, overprocessing, or rework), automating it only speeds up inefficiency. Fix the process first, then think about automation.

    3️⃣ Built-In Error Proofing: With Poka Yoke solutions (like jigs, fixtures, or guides), you can design processes to prevent errors from happening in the first place, without needing expensive sensors or software.

    4️⃣ Flexibility and Adaptability: Simplified processes are easier to adjust and improve, while automated systems can be rigid and costly to change once implemented.

    How to Simplify and Error-Proof a Process:

    🔍 Map the Current Workflow: Identify unnecessary steps, bottlenecks, and areas prone to errors.

    ✂️ Eliminate Waste: Remove any steps that don't add value to the product or service.

    📋 Standardize Work: Create clear, repeatable instructions that everyone can follow.

    🔧 Introduce Poka Yoke: Physical error-proofing - use jigs, fixtures, or alignment guides to prevent incorrect assembly. Visual cues - use color-coded labels or visual templates to guide operators. Sensors or alarms - only when needed, use low-cost technology to detect errors in real time.

    Example of Simplification and Poka Yoke in Action: A warehouse team was dealing with frequent errors when picking products for orders. Instead of implementing a costly automated picking system, they:

    1. Introduced a color-coded bin system (Poka Yoke) to help operators select the correct items.
    2. Simplified the picking route to reduce unnecessary walking and waiting time.

    Result: Picking errors dropped by 80%, and productivity increased by 15% - all without expensive automation.

    When to Consider Automation: Once the process is simplified and stabilized with minimal variation, automation can enhance speed and efficiency. But it should support an optimized process, not mask its problems.
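The color-coded bin idea has a direct software analogue: reject a mismatched input at the point of entry so the error cannot propagate downstream. A minimal sketch, with an invented SKU-to-bin mapping:

```python
# Software poka yoke, by analogy with the color-coded bins in the example:
# a mismatched pick is refused immediately rather than becoming a shipping
# error later. The SKU-to-bin mapping is invented for illustration.
BIN_FOR_SKU = {"SKU-100": "red", "SKU-200": "blue"}

def pick(sku, scanned_bin):
    """Allow the pick only if the scanned bin matches the SKU's bin."""
    expected = BIN_FOR_SKU.get(sku)
    if expected is None:
        raise ValueError(f"unknown SKU: {sku}")
    if scanned_bin != expected:
        raise ValueError(f"wrong bin for {sku}: expected {expected}, "
                         f"got {scanned_bin}")
    return f"picked {sku} from {scanned_bin}"
```

The guard is a few lines of validation, not an automated picking system, which mirrors the post's point that cheap error-proofing often beats expensive automation.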

  • View profile for Charlotte Johnson

    What actually happens when attackers compromise your identity layer? I help teams answer that question @ Rubrik

    56,884 followers

    ~30% of my pipeline comes from Closed Lost opportunities. So when an opportunity is Closed Lost, don't let it go cold. If you have a sales engagement tool, set up an automation rule to auto-add the primary contact to a Closed Lost cadence; if not, just do this manually. Here's an example cadence:

    🔹 Step 1 (30 days post-CL) → Manual email (personalised). Summarise their focus, why the deal was lost, and let them know you'll stay in touch. 📩 Example: "Hey Billybob, really enjoyed working with you and learning more about [initiative], like increasing conversion rates from 12% → 15% and driving $100K pipeline per AE. Appreciate other priorities took precedence, but I'll stay in touch until timing makes sense."

    🔹 Step 2 (55 days post-CL) → Automated email (deposit). Share a relevant resource. 📩 Example: "Pipeline is a challenge for most teams - thought this 30MPC webinar on account segmentation might be useful."

    🔹 Step 3 (80 days post-CL) → Evaluate next steps. Any team growth? Leadership changes? Priority shifts? No change → stay in the Closed Lost cadence. Key changes → move to a prospecting cadence and re-engage.

    🔹 Step 4 (105 days post-CL) → Phone call + LinkedIn touch (check-in).

    🔹 Step 5 (130 days post-CL) → Automated email (new product update). 📩 Example: "See how Salesloft Rhythm incorporates AI into workflows to prioritise prospects most likely to convert into meetings [link]."

    🔹 Step 6 (155 days post-CL) → Call (check-in).

    🔹 Step 7 (180+ days post-CL) → Final review & decision. No movement or changes? Pause outreach or move to a light nurture cadence. New priorities? Add to an outbound cadence with a tailored approach.

    The goal? Stay relevant without being intrusive - so when timing aligns, you're already on their radar. Are you keeping tabs on your Closed Lost opps, or letting them slip? #sales #cadences #closedlost
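The step offsets above translate directly into an automation rule. A minimal sketch that computes each step's due date from the Closed Lost date, using the cadence from this post (action labels abbreviated):

```python
# Sketch of the Closed Lost cadence: day offsets come from the post above,
# the action labels are abbreviated summaries.
from datetime import date, timedelta

CADENCE = [
    (30, "manual email (personalised recap)"),
    (55, "automated email (resource share)"),
    (80, "evaluate next steps"),
    (105, "phone call + LinkedIn touch"),
    (130, "automated email (product update)"),
    (155, "call (check-in)"),
    (180, "final review & decision"),
]

def schedule(closed_lost_date):
    """Return (due_date, action) pairs for every cadence step."""
    return [(closed_lost_date + timedelta(days=d), action)
            for d, action in CADENCE]

for due, action in schedule(date(2024, 1, 1)):
    print(due.isoformat(), "-", action)
```

In practice a sales engagement tool runs this for you; the sketch just shows that the cadence is nothing more than a date offset table.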

  • View profile for Greg Coquillo
    Greg Coquillo is an Influencer

    AI Infrastructure Product Leader | Scaling GPU Clusters for Frontier Models | Microsoft Azure AI & HPC | Former AWS, Amazon | Startup Investor | Linkedin Top Voice | I build the infrastructure that allows AI to scale

    228,959 followers

    If you're learning AI automation without a roadmap, you're guaranteed to get overwhelmed. People usually "learn AI automation" by jumping straight into tools... and then wonder why nothing works consistently. Real automation requires structure - thinking, logic, testing, and a gradual build-up of skills. This 18-day roadmap breaks down the exact sequence to go from zero → confidently building automations with AI, APIs, tools, and no-code platforms. Here's the full breakdown, day by day:

    Day 1 - AI Automation Fundamentals: Learn what automation really means, how it differs from AI and agents, and see real examples.
    Day 2 - Automation Thinking: Break work into steps, triggers, and outcomes - the mindset behind every good automation.
    Day 3 - APIs & Webhooks Basics: Understand how apps communicate and how events trigger workflows.
    Day 4 - No-Code Automation Platforms: Explore Zapier, Make, and n8n - and how no-code tools actually run workflows.
    Day 5 - Build Your First Automation: Create a simple trigger-action workflow and connect two apps.
    Day 6 - Data Handling: Pass data between steps, map fields, and work with text, numbers, and dates.
    Day 7 - Logic & Error Handling: Add filters, conditional logic, retries, and fallbacks to keep automations reliable.
    Day 8 - AI Model Basics: Learn prompts vs. system instructions, tokens, limits, and LLM behavior.
    Day 9 - Using AI Inside Automations: Insert AI steps into workflows and parse structured AI outputs.
    Day 10 - Prompt Design for Automation: Write consistent prompts and reduce hallucinations with JSON outputs.
    Day 11 - Text-Based Task Automation: Automate email replies, summaries, CRM updates, and document tasks.
    Day 12 - Knowledge Automation (RAG Basics): Connect AI to internal documents and fetch accurate answers from real data.
    Day 13 - AI Agents Basics: Understand agent planning and tools, and identify use cases for agents.
    Day 14 - Business Use Case Automation: Automate lead qualification, ticket routing, and internal processes.
    Day 15 - Sales & Marketing Automation: Personalize outreach, repurpose content, and automate follow-ups.
    Day 16 - Operations Automation: Manage approvals, notifications, and repetitive operational tasks.
    Day 17 - Monitoring & Optimization: Track workflow success, cut costs, and improve performance.
    Day 18 - Build & Ship Your System: Design, test, document, and finalize a complete end-to-end automation.

    You don't master AI automation by learning tools; you master it by learning systems thinking, data flow, and structured execution. Follow this roadmap, and you'll build automations that are reliable, scalable, and business-ready.
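Day 5's trigger-action workflow, combined with Day 7's filters and fallbacks, can be sketched in plain code. The event fields and actions below are invented for illustration; platforms like Zapier, Make, and n8n wire the same pattern visually:

```python
# Sketch of a trigger-action workflow: an incoming event passes a filter,
# then flows through a chain of actions with per-action error handling.
# Event fields, the score threshold, and both actions are hypothetical.
def on_new_lead(event, actions, min_score=50):
    """Run each action for events that pass the filter; never crash the run."""
    if event.get("score", 0) < min_score:   # Day 7: conditional logic / filter
        return []
    results = []
    for act in actions:
        try:
            results.append(act(event))      # each action maps event -> output
        except Exception as exc:            # Day 7: fallback instead of crash
            results.append(f"failed: {exc}")
    return results

notify = lambda e: f"notify sales: {e['name']}"
log = lambda e: f"logged {e['name']} (score {e['score']})"

print(on_new_lead({"name": "Ada", "score": 72}, [notify, log]))
```

The same shape generalizes: the trigger is whatever fires `on_new_lead`, the filter gates it, and each action is one connected app step.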

  • View profile for Gajen Kandiah

    Chief Executive Officer, Rackspace Technology

    23,621 followers

    I've reviewed Anthropic's Risk Report for Claude Opus 4.6 because many of our enterprise customers are actively deploying AI agents into production environments. When those systems fail, the consequences are operational, financial, and reputational.

    Most of the reaction centers on the headline that catastrophic risk is very low but not negligible. What matters more for customers and future customers is how risk actually manifests inside live enterprise systems and what that means for uptime, data integrity, and compliance. It does not look like a breach. It looks like business as usual: an agent subtly influencing procurement decisions, a finance workflow that starts omitting inconvenient data, permissions that expand over time without clear oversight.

    Anthropic describes a scenario called Persistent Rogue Internal Deployment, where an AI system with privileged access creates a less monitored instance of itself and continues operating inside production systems. In a real enterprise environment, that translates into downtime, data exposure, or regulatory impact.

    The organizations at greatest risk are not the ones moving cautiously. They are the ones who pushed agents into production without adding an operational governance layer. We have seen this pattern before in cloud adoption: technology advances quickly, and controls often lag behind. That gap is where exposure grows.

    So what should enterprise IT and security teams do now?

    1. Constrain actions, not just access. Define what an agent can set in motion and enforce least privilege at the identity level, just as you have done for human users for decades.

    2. Log actions, not just outcomes. Maintain an auditable trail of what the agent did, where, and what triggered it. The same standard applies to human operators in regulated environments.

    3. Automate your tripwires. Do not rely on people to catch machine-speed behavior. Build policy enforcement and anomaly response into the loop.

    4. Audit your agent footprint. Inventory every agent, its owner, permissions, and kill path. Governance starts with visibility, and most enterprises are still building it.

    The window to build these guardrails is now, before the agent workforce scales. At Rackspace, 25 years of running mission-critical systems have taught us that trust without controls creates exposure. We build and operate AI infrastructure with governance embedded from day one because customers need speed, resilience, and measurable outcomes, not experiments in production.

    What this means for you is simple: move forward on AI with confidence, but make operational governance part of the foundation so scale strengthens your business instead of introducing risk.
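Points 1 and 2 can be sketched as an action allow-list plus an audit trail: the gate enforces least privilege, and every attempt is logged whether or not it was allowed. Agent IDs, action names, and permissions below are hypothetical:

```python
# Sketch of "constrain actions, not just access" plus "log actions, not just
# outcomes". All identifiers are invented; a real deployment would back this
# with an identity provider and tamper-evident log storage.
AGENT_PERMISSIONS = {
    # Least privilege: this agent may read and draft, but approval
    # deliberately stays with a human.
    "procurement-agent": {"read_catalog", "draft_po"},
}
AUDIT_LOG = []

def invoke(agent_id, action, payload):
    """Gate the action against the allow-list; log every attempt either way."""
    allowed = action in AGENT_PERMISSIONS.get(agent_id, set())
    AUDIT_LOG.append({"agent": agent_id, "action": action,
                      "payload": payload, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent_id} may not {action}")
    return f"{agent_id} executed {action}"
```

Logging before the permission check matters: denied attempts are exactly the anomalies a tripwire (point 3) should alert on.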

  • View profile for Adam Barbera

    Co-founder and CEO at Dost AI. Give your finance team their time back with AP AI agent.

    13,626 followers

    McKinsey's ERP warning for CFOs:

    1. 70% of ERP transformations fail. Most ERP projects run over budget and underdeliver. Why? Because companies underestimate complexity. Finance expects a big-bang switch. Instead, they get endless data cleanups, mismatched charts of accounts, and broken workflows. In finance, a 90% rollout isn't a win. If one close process breaks, the whole system stalls.

    2. It's your design, not your tech. CFOs blame vendors. But the real issue is design. Too many teams lift-and-shift old processes into new systems. That hardcodes inefficiency. The 30% who succeed don't copy the past. They redesign approvals, reconciliations, and controls before go-live. ERP isn't a tool migration. It's an operating model redesign.

    3. Finance feels the pain first. In sales, if the CRM misses a field, people work around it. In finance, if the ERP misses a journal entry, you misstate results. Month-end closes, audits, and compliance magnify every flaw. That's why ERP failures show up in finance before anywhere else. Unless you engineer accuracy and reliability from day one, the CFO's credibility is at risk.

    4. The gap turns critical. McKinsey calls it out: 70% stuck, 30% pulling ahead. The stuck companies run digital systems that replicate legacy pain. The winners embed automation, shared data models, and continuous improvement. Over time, that gap compounds into faster closes, lower costs, and better decision-making.

    TAKEAWAY: ERP failures don't just cost money at go-live. They lock in inefficiencies for years. Every close takes longer. Every audit is harder. Every board deck gets delayed. The reverse is also true. When ERP is designed right, benefits compound:
    - Faster closes free capacity
    - Automation creates leverage
    - Cleaner data sharpens insight
    The real gap isn't visible at launch. It shows up quarter after quarter, year after year.

  • View profile for Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,715 followers

    GenAI adoption is all about people, not about tools. Pharma giant Novo Nordisk offers a great case study of working out what supports useful uptake of AI across a large organization. A case study in MIT Sloan Management Review uncovers a range of useful lessons. Here are some of the most interesting.

    🚀 Recognize a mid-cycle drop as normal. Novo Nordisk grew Copilot use from a few hundred to 20,000 users in just over a year, with 23% becoming frequent users within one month. However, by month three or four, 15% of early adopters dropped off and average time saved per week declined. Recognizing this dip as natural helped avoid panic and kept the focus on re-engagement strategies rather than getting staff to try tools for the first time.

    🛠 Deliver function-specific training through champion networks. Generic AI onboarding failed to meet the needs of specialized roles. Novo Nordisk succeeded by creating domain-specific training, leveraging internal champions to contextualize AI use, and allowing teams to shape guidance based on their actual work. This addressed "AI shaming" and bridged confidence gaps across functions.

    🤝 Use internal champions to overcome cultural resistance. Skepticism wasn't solved by policy; it was shifted by influence. Novo Nordisk identified trusted, high-status employees to openly adopt and advocate for AI tools. Their visible endorsement encouraged hesitant peers to try AI without fear of judgment or failure.

    📈 Treat adoption as a change process, not a tech rollout. Rather than pushing a one-time launch, Novo Nordisk framed GenAI as a long-term transformation. This meant investing in ongoing communication, support structures, and iterative learning. The approach acknowledged that adoption would ebb and flow, and prepared the organization to adapt accordingly.

    🎯 Emphasize strategic value over time saved. Though average users saved about 2 hours per week, the most meaningful wins came from higher-quality work: more strategic thinking, clearer writing, and better planning. By highlighting these human-centric gains, Novo Nordisk built a stronger case for AI's workplace relevance beyond mere productivity.

    📊 Use employee data to shape the deployment strategy. Over 3,000 employee surveys and interviews helped Novo Nordisk spot where and why adoption lagged. This feedback guided real-time adjustments, like where to invest in new use cases, where to scale back, and how to tailor messaging. It also surfaced which functions became tool-reliant versus those needing more support.

  • View profile for Raj Goodman Anand
    Raj Goodman Anand is an Influencer

    Helping organizations build AI operating systems | Founder, AI-First Mindset®

    23,722 followers

    Too many AI strategies are being built around the technology instead of the business challenges they should solve. The real value of AI comes when it is directly tied to your goals. I have arrived at seven lessons on how to align your AI strategy directly with your business goals:

    1. Start with the "why," not the "what." Before discussing models or tools, ask what business problem you need to solve. It could be speeding up product development or cutting operational costs. Let that answer be your guide.

    2. Think in terms of business outcomes. Measure AI success by its impact on metrics like revenue growth or employee productivity, not by technical accuracy.

    3. Build a cross-functional team. AI can't live solely in the IT department. Include leaders from all relevant departments from day one to ensure the strategy serves the entire business.

    4. Prioritize quick wins to build momentum. Identify a few small, high-impact projects that can deliver results quickly. This builds organizational confidence and makes people ready to take on larger initiatives.

    5. Invest in data foundations. The best AI strategy will fail without clean and well-governed data. A disciplined approach to data quality is non-negotiable.

    6. Focus on change management. Technology is the easy part. Prepare your people for new workflows and equip them with the skills to work alongside AI effectively.

    7. Create a feedback loop. An AI strategy is not a one-time plan. Continuously gather feedback from users and analyze performance data to adapt and refine your approach.

    The goal is to make AI a part of how you achieve your objectives, not a separate project. #AIStrategy #BusinessGoals #DigitalTransformation #Leadership #ArtificialIntelligence
