GenAI Implementation and Impact


  • View profile for Saanya Ojha (Influencer)

    Partner at Bain Capital Ventures

    80,201 followers

    This week MIT dropped a stat engineered to go viral: 95% of enterprise GenAI pilots are failing. Markets, predictably, had a minor existential crisis. Pundits whispered the B-word (“bubble”), traders rotated into defensive stocks, and your colleague forwarded you a link with “is AI overhyped???” in the subject line.

    Let’s be clear: the 95% failure rate isn’t a caution against AI. It’s a mirror held up to how deeply ossified enterprises are. Two truths can coexist: (1) The tech is very real. (2) Most companies are hilariously bad at deploying it.

    If you’re a startup, AI feels like a superpower. No legacy systems. No 17-step approval chains. No legal team asking whether ChatGPT has been “SOC2-audited.” You ship. You iterate. You win. If you’re an enterprise, your org chart looks like a game of Twister and your workflows were last updated when Friends was still airing. You don’t need a better model - you need a cultural lobotomy.

    This isn’t an “AI bubble” popping. It’s the adoption lag every platform shift goes through.
    - Cloud in the 2010s: Endless proofs of concept before actual transformation.
    - Mobile in the 2000s: Enterprises thought an iPhone app was strategy. Spoiler: it wasn’t.
    - Internet in the 90s: Half of Fortune 500 CEOs declared “this is just a fad.” Some of those companies no longer exist.
    History rhymes. The lag isn’t a bug; it’s the default setting.

    Buried beneath the viral 95% headline are 3 lessons enterprises can actually use:
    ▪️ Back-office > front-office. The biggest ROI comes from back-office automation - finance ops, procurement, claims processing - yet over half of AI dollars go into sales and marketing. The treasure’s just buried in a different part of the org chart.
    ▪️ Buy > build. Success rates hit ~67% when companies buy or partner with vendors. DIY attempts succeed a third as often. Unless it’s literally your full-time job to stay current on model architecture, you’ll fall behind. Your engineers don’t need to reinvent an LLM-powered wheel; they need to build where you’re actually differentiated.
    ▪️ Integration > innovation. Pilots flop not because AI “doesn’t work,” but because enterprises don’t know how to weave it into workflows. The “learning gap” is the real killer. Spend as much energy on change management, process design, and user training as you do on the tool itself. Without redesigning processes, “AI adoption” is just a Peloton bought in January and used as a coat rack by March. You didn’t fail at fitness; you failed at follow-through.

    In five years, GenAI will be as invisible - and indispensable - as cloud is today. The difference between the winners and the laggards won’t be access to models, but the courage to rip up processes and rebuild them. The “95% failure” stat doesn’t mean AI is snake oil. It means enterprises are in Year 1 of a 10-year adoption curve. The market just confused growing pains for terminal illness.

  • View profile for Armand Ruiz (Influencer)

    building AI systems @meta

    206,812 followers

    🚨 MIT Study: 95% of GenAI pilots are failing. MIT just confirmed what’s been building under the surface: most GenAI projects inside companies are stalling. Only 5% are driving revenue. The reason? It’s not the models. It’s not the tech. It’s leadership.

    Too many executives push GenAI to “keep up.” They delegate it to innovation labs, pilot teams, or external vendors without understanding what it takes to deliver real value. Let’s be clear: GenAI can transform your business. But only if leaders stop treating it like a feature and start leading like operators. Here are my recommendations:
    𝟭. 𝗚𝗲𝘁 𝗰𝗹𝗼𝘀𝗲𝗿 𝘁𝗼 𝘁𝗵𝗲 𝘁𝗲𝗰𝗵. You don’t need to code, but you do need to understand the basics. Learn enough to ask the right questions and build the strategy.
    𝟮. 𝗧𝗶𝗲 𝗚𝗲𝗻𝗔𝗜 𝘁𝗼 𝗣&𝗟. If your AI pilot isn’t aligned to a core metric like cost reduction, revenue growth, time-to-value... then it’s a science project. Kill it or redirect it.
    𝟯. 𝗦𝘁𝗮𝗿𝘁 𝘀𝗺𝗮𝗹𝗹, 𝗯𝘂𝘁 𝗯𝘂𝗶𝗹𝗱 𝗲𝗻𝗱-𝘁𝗼-𝗲𝗻𝗱. A chatbot demo is not a deployment. Pick one real workflow, build it fully, measure impact, then scale.
    𝟰. 𝗗𝗲𝘀𝗶𝗴𝗻 𝗳𝗼𝗿 𝗵𝘂𝗺𝗮𝗻𝘀. Most failed projects ignore how people actually work. Don’t just build for the workflow; build for user adoption. Change management is half the game.
    Not every problem needs AI. But the ones that do need tooling, observability, governance, and iteration cycles, just like any platform. We’re past the “try it and see” phase. Business leaders need to lead AI like they lead any critical transformation: with accountability, literacy, and focus.
    Link to news: https://lnkd.in/gJ-Yk5sv
    ♻️ Repost to share these insights! ➕ Follow Armand Ruiz for more

  • View profile for Brij kishore Pandey (Influencer)

    AI Architect & Engineer | AI Strategist

    720,827 followers

    Agentic AI is 𝗻𝗼𝘁 about wrapping prompts around a large language model. It’s about designing systems that can:
    → 𝗣𝗲𝗿𝗰𝗲𝗶𝘃𝗲 their environment
    → 𝗣𝗹𝗮𝗻 actionable steps
    → 𝗔𝗰𝘁 on those plans
    → 𝗟𝗲𝗮𝗿𝗻 and improve over time
    And yet, many teams hit a wall—not because the models fail, but because the 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 behind them isn’t built for agent behavior. If you’re building agents, you need to think in 𝗳𝗼𝘂𝗿 𝗱𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻𝘀:
    1. 𝗔𝘂𝘁𝗼𝗻𝗼𝗺𝘆 & 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 → Agents must decompose goals into steps and execute them independently.
    2. 𝗠𝗲𝗺𝗼𝗿𝘆 & 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 → Without memory, agents forget past context. Vector DBs like FAISS, Redis, or pgvector aren’t optional—they’re foundational.
    3. 𝗧𝗼𝗼𝗹 𝗨𝘀𝗮𝗴𝗲 & 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 → Agents must go beyond text generation—calling APIs, browsing, writing code, and executing it.
    4. 𝗖𝗼𝗼𝗿𝗱𝗶𝗻𝗮𝘁𝗶𝗼𝗻 & 𝗖𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝗼𝗻 → The future isn’t just one agent. It’s many, working together—planner-executor setups, sub-agents, role-based dynamics.
    Frameworks like 𝗟𝗮𝗻𝗴𝗚𝗿𝗮𝗽𝗵, 𝗔𝘂𝘁𝗼𝗚𝗲𝗻, 𝗟𝗮𝗻𝗴𝗖𝗵𝗮𝗶𝗻, 𝗚𝗼𝗼𝗴𝗹𝗲'𝘀 𝗔𝗗𝗞, and 𝗖𝗿𝗲𝘄𝗔𝗜 make these architectures more accessible. But frameworks alone aren’t enough. If you’re not thinking about:
    • 𝗧𝗮𝘀𝗸 𝗱𝗲𝗰𝗼𝗺𝗽𝗼𝘀𝗶𝘁𝗶𝗼𝗻
    • 𝗦𝘁𝗮𝘁𝗲𝗳𝘂𝗹𝗻𝗲𝘀𝘀
    • 𝗥𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻
    • 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗹𝗼𝗼𝗽𝘀
    …your agents will likely remain shallow, brittle, and fail to scale. The future of GenAI lies in 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝗶𝗻𝗴 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝘁 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿, not just fine-tuning prompts. 2025 is the year we go from 𝗽𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀 to 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘀. Let’s build agents that don’t just respond—but 𝗿𝗲𝗮𝘀𝗼𝗻, 𝗮𝗱𝗮𝗽𝘁, 𝗮𝗻𝗱 𝗲𝘃𝗼𝗹𝘃𝗲.
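    The perceive/plan/act/learn loop described above can be sketched in a few lines. This is an illustrative toy, not any framework's API: the `Agent` class, its tool table, and the plain list standing in for a vector DB are all assumptions made for demonstration.

    ```python
    # Toy sketch of the perceive -> plan -> act -> learn loop.
    # A production agent would use an LLM for planning and a real
    # vector store (FAISS, Redis, pgvector) instead of a list.

    class Agent:
        def __init__(self, tools):
            self.tools = tools   # name -> callable (tool usage & integration)
            self.memory = []     # stand-in for persistent memory/context

        def perceive(self, observation):
            self.memory.append(("obs", observation))

        def plan(self, goal):
            # Decompose the goal into executable steps (trivially, here:
            # keep only the steps we actually have tools for).
            return [step for step in goal if step in self.tools]

        def act(self, steps):
            return [self.tools[step]() for step in steps]

        def learn(self, results):
            # Retain outcomes so future runs have context.
            self.memory.extend(("result", r) for r in results)

        def run(self, observation, goal):
            self.perceive(observation)
            steps = self.plan(goal)
            results = self.act(steps)
            self.learn(results)
            return results

    agent = Agent(tools={"search": lambda: "found 3 docs",
                         "summarize": lambda: "2-line summary"})
    out = agent.run("user asked for a brief", ["search", "summarize"])
    print(out)  # ['found 3 docs', '2-line summary']
    ```

    The point of the sketch is the separation of concerns: planning, tool execution, and memory are distinct components, which is what lets multi-agent setups (planner-executor, sub-agents) recombine them later.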

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,451 followers

    "Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.

    Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.

    From our interviews, surveys, and analysis of 300 public implementations, four patterns emerged that define the GenAI Divide:
    • Limited disruption: Only 2 of 8 major sectors show meaningful structural change
    • Enterprise paradox: Big firms lead in pilot volume but lag in scale-up
    • Investment bias: Budgets favor visible, top-line functions over high-ROI back office
    • Implementation advantage: External partnerships see twice the success rate of internal builds
    The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time."

  • View profile for Panagiotis Kriaris (Influencer)

    FinTech | Payments | Banking | Innovation | Leadership

    158,924 followers

    GenAI is easy to start but hard to scale. Too many companies are stuck in endless pilots. Here’s what it takes to build GenAI capability. McKinsey has recently published their findings from working with 150+ companies on their GenAI programs over two years. Two hurdles stand out:
    𝟭. 𝗙𝗮𝗶𝗹𝘂𝗿𝗲 𝘁𝗼 𝗶𝗻𝗻𝗼𝘃𝗮𝘁𝗲: Teams waste time on duplicate experiments, wait on compliance processes, and solve problems that don’t matter. 30-50% of innovation time is spent trying to meet compliance - not building.
    𝟮. 𝗙𝗮𝗶𝗹𝘂𝗿𝗲 𝘁𝗼 𝘀𝗰𝗮𝗹𝗲: Even when a prototype works, most companies can’t get it into production. Risk, security, and cost barriers overwhelm teams, leading to stalled or cancelled deployments.
    According to McKinsey, the most successful GenAI platforms contain three core components:
    𝟭. 𝗔 𝘀𝗲𝗹𝗳-𝘀𝗲𝗿𝘃𝗶𝗰𝗲 𝗽𝗼𝗿𝘁𝗮𝗹: To support both innovation and scale, companies need a secure, centralized portal that gives teams easy access to pre-approved GenAI tools, services, and documentation. It should enable developers to quickly build with reusable patterns, while also offering governance features like observability, cost controls, and access management. The best portals promote contribution and reuse across the organization, reducing friction and accelerating development at scale.
    𝟮. 𝗔𝗻 𝗼𝗽𝗲𝗻 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝘁𝗼 𝗿𝗲𝘂𝘀𝗲 𝗚𝗲𝗻𝗔𝗜 𝘀𝗲𝗿𝘃𝗶𝗰𝗲𝘀: Scaling GenAI requires a modular, open architecture that enables teams to reuse services, application patterns, and data products across use cases. Leading companies build libraries of common components (like RAG, embeddings, or chat workflows) and focus on integration via APIs - not vendor lock-in. Infrastructure and policy as code ensure changes can propagate quickly and securely across the platform, reducing cost and accelerating deployment.
    𝟯. 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱, 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲 𝗔𝗜 𝗴𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀: To scale safely, GenAI platforms must embed automated governance that enforces compliance, manages risk, and tracks costs. This includes microservices that audit prompts, detect policy violations (like sharing sensitive personal data or generating inaccurate responses), and attribute usage to specific teams. A centralized AI gateway enforces access controls, logs interactions, and routes traffic through security filters - allowing flexibility where needed. These guardrails accelerate approval processes, reduce setup time, and let teams focus on building value - not managing risk manually.
    𝗪𝗵𝗮𝘁’𝘀 𝘆𝗼𝘂𝗿 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲? Source: McKinsey & Company
    𝐒𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞 𝐭𝐨 𝐦𝐲 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫: https://lnkd.in/dkqhnxdg
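    The AI-gateway pattern described in component 3 can be sketched minimally. Everything here is an illustrative assumption (the class name, the SSN regex as the sole policy check, the lambda standing in for a model call); a real gateway would add authentication, cost attribution, model routing, and far richer policy filters.

    ```python
    import re
    from collections import defaultdict

    # Hypothetical sketch of an AI gateway: audit prompts, block obvious
    # policy violations, attribute usage per team, and log every call.

    SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g. a US SSN pattern

    class AIGateway:
        def __init__(self, model_call):
            self.model_call = model_call      # the upstream LLM endpoint
            self.log = []                     # (team, prompt, status) records
            self.usage = defaultdict(int)     # per-team call attribution

        def complete(self, team, prompt):
            # Guardrail: refuse prompts containing sensitive data.
            if SENSITIVE.search(prompt):
                self.log.append((team, prompt, "BLOCKED"))
                raise ValueError("policy violation: sensitive data in prompt")
            self.usage[team] += 1
            response = self.model_call(prompt)
            self.log.append((team, prompt, "OK"))
            return response

    gateway = AIGateway(model_call=lambda p: f"echo: {p}")
    print(gateway.complete("finance", "summarize Q3 claims backlog"))
    ```

    The design point: because every call flows through one choke point, logging, policy enforcement, and cost attribution come for free, which is exactly what lets approval processes speed up.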

  • View profile for Jonathan Holt (Influencer)

    Chief Executive KPMG UK and Switzerland, Senior Partner, KPMG UK

    43,348 followers

    It seems to have become an accepted fact that Generative AI will replace many office jobs. And today I've written in the Times to ask: what if this isn't the case? We’ve all seen the reports and articles detailing the negative impact of #GenAI on our workforces. As a chief executive, I read these and think about the atmosphere of uncertainty they might be creating, damaging morale and demotivating people. I am not alone in questioning that negative narrative.

    KPMG’s latest global CEO Outlook survey of more than 1,300 CEOs shows the leaders of the world’s biggest businesses see GenAI as positive for workers. We found that 71% of UK CEOs see GenAI as an opportunity to try new ways of working, creating a highly skilled and productive workforce without significant job losses. A third even think it will create more jobs. We found similar views across the broader population of CEOs globally.

    And when I look at how we have been introducing AI and GenAI at KPMG UK, I can see that it is making the work we do even better. All our audits are digital. We use AI today to support our teams, and the ambition is for all our audits to always be delivered using the latest technology, including GenAI, as this will lead to even better quality audits. As a graduate auditor I spent many, many - often frustrating - hours transferring data from a ledger. Today our auditors are saved from this grind by our AI-enabled tools, freeing them up to spend more time talking to clients and focusing on the more judgmental areas of the audit.

    Of course, implementing GenAI doesn’t come without challenges. When I talk to other CEOs the same thorny issues come up time and again: trust, regulation, and concerns about a lack of skills. This is why it needs to be implemented with care. For me, human intelligence combined with artificial intelligence is greater than the sum of its parts. With GenAI we have a genuine opportunity to help solve the UK’s productivity puzzle.
There’s a role for the new Government to make sure young people are starting out with the right skills. And there’s a role for businesses of all sizes in partnership with government, both national and local, in helping to achieve this.   The GenAI story is changing every day and this is only the beginning. But the leaders who do get it right can look to a motivated workforce, empowered to do more interesting and productive work. That’s what I want for my people. So it’s time to change the story on GenAI and to see it as the great enabler of our time. #AI #Technology #Skills #CEOoutlook

  • View profile for Ross Dawson (Influencer)

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,732 followers

    One of the most important applications of GenAI is in foresight. A new report from Paulo Carvalho at IF Insight & Foresight on "How Generative AI Will Transform Strategic Foresight" provides wide-ranging perspectives on the possibilities. Here are some of the most interesting action-oriented frames I found in the report.
    🔍 Real-Time Environmental Scanning: Use GenAI to conduct continuous scanning of emerging trends, weak signals, and disruptions across diverse sources. This real-time, dynamic approach allows organizations to stay agile, proactively adjusting strategies as new insights unfold.
    🌐 Immersive Scenario Simulations: Utilize GenAI to create interactive VR/AR scenarios that bring potential futures to life. These simulations engage stakeholders deeply, helping them visualize and emotionally connect with complex strategic choices, fostering stronger alignment with future goals.
    🔄 Adaptive Scenario Planning: Move from static to adaptive planning by integrating live data into foresight models. Continuous updates based on geopolitical, economic, and technological shifts ensure that scenarios remain relevant and actionable over time.
    💬 Enhanced Strategic Conversations: Use GenAI-powered virtual agents to facilitate dynamic "what-if" conversations, helping stakeholders explore a range of possible outcomes. This deepens strategic insights and encourages a proactive approach to complex decision-making.
    ⚙️ Modeling Complexity and Emergent Behaviors: Use GenAI to simulate complex systems and emergent behaviors, enabling organizations to anticipate interconnected, cascading effects. This prepares them for resilience in the face of unpredictable challenges and non-linear changes.
    📊 Multimodal Data Integration for Richer Insights: Leverage GenAI’s capacity to analyze diverse data types (e.g., text, images, audio, video) to gain a nuanced, comprehensive view of trends and risks. This multimodal approach captures intricate patterns that single-source analysis might miss.
    🌍 Embrace Multiple Perspectives and Plurality: Design foresight processes that incorporate a wide array of perspectives, blending cross-disciplinary and cultural insights. This inclusive approach creates more robust, innovative scenarios that account for diverse worldviews and challenges assumptions.
    🤝 Facilitate Participatory and Co-Creative Approaches: Use GenAI to build interactive platforms that invite diverse stakeholders to co-create and refine scenarios. Real-time collaboration enhances the relevance and inclusivity of strategic models, making them more reflective of shared goals and values.
    I'll be sharing some of my thoughts on this very important topic in the next little while.

  • View profile for Raj Goodman Anand (Influencer)

    Helping organizations build AI operating systems | Founder, AI-First Mindset®

    23,724 followers

    Most enterprise generative AI projects still struggle to show measurable financial returns within their first six months. That tolerance is fading because boards and investors now want AI to add to earnings instead of just serving as a test. The focus has shifted from pilots to impact on profits and losses. Spending on AI is increasing, while control over capital is getting stricter. Leaders who cannot link AI to better margins or increased revenue risk losing their budgets and credibility.

    What’s changing is how deployment is viewed. Early efforts were exploratory because the technology was new. Now, management teams are focusing on use cases that directly relate to reducing costs or improving measurable efficiency, as vague claims of productivity gains are no longer accepted. This means AI initiatives must connect to financial statements, not just innovation presentations.

    Another change is the emphasis on readiness. Only a small number of organizations consider their infrastructure or data environment to be ready for AI because outdated systems create obstacles. Companies that are using AI to upgrade their IT are saving money that they can use for further deployment, as improved efficiency builds on itself. This means modernisation and return on investment must progress together to maintain funding.

    Random or broad AI projects fail because they overlook workflow realities and data limitations. Targeted deployment focused on clear outcomes leads to measurable results. Measuring sentiment or perceived productivity does not work because boards care about contributions to earnings. Tracking costs and cycle times in workflows provides a solid basis for ROI. A practical starting point is a workflow with frequent decisions. Measure its cycle time and transaction costs first. Then introduce AI support.
Avoid using AI in areas where data is scattered or governance is unclear because scaling up will be difficult. #AIROI #EnterpriseAI #AILeadership #DigitalTransformation #DataStrategy #CIO #CEOAgenda #BusinessValue #AIAdoption #TechStrategy #BoardGovernance #AITalent
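    The measure-first advice above reduces to simple arithmetic: establish a baseline before the AI rollout, then compare. A toy sketch (all numbers are invented for illustration):

    ```python
    # Baseline-first ROI measurement: capture cycle times before and
    # after introducing AI support, then report the relative reduction.
    from statistics import mean

    baseline_cycle_hours = [4.2, 3.8, 5.1, 4.0]   # measured BEFORE AI support
    assisted_cycle_hours = [2.9, 3.1, 2.7, 3.0]   # measured AFTER rollout

    baseline = mean(baseline_cycle_hours)
    assisted = mean(assisted_cycle_hours)
    saving = (baseline - assisted) / baseline      # relative reduction

    print(f"cycle-time reduction: {saving:.0%}")   # prints "cycle-time reduction: 32%"
    ```

    Without the baseline row, the "after" numbers are uninterpretable, which is exactly why sentiment surveys fail as ROI evidence.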

  • View profile for Graham Walker, MD (Influencer)

    Healthcare AI — MDCalc & Offcall Founder — ER Doctor @ TPMG (views are my own, not employers’)

    67,814 followers

    GenAI in the hospital doesn’t need tinfoil hats — but it does need cognitive PPE. Boundaries. Supervision. And training wheels. Yesterday, I wrote about how the real risk of GenAI in medicine isn’t just hallucinations; it’s more insidious. Confidence bias, judgment drift, a subtle nod that says, 𝘠𝘦𝘢𝘩, 𝘺𝘰𝘶’𝘳𝘦 𝘱𝘳𝘰𝘣𝘢𝘣𝘭𝘺 𝘳𝘪𝘨𝘩𝘵, 𝘋𝘳. 𝘞𝘢𝘭𝘬𝘦𝘳.

    We keep comparing GenAI to Google, but that misses the point. Google makes you work — sift sources, weigh trust and validity, choose your own link to click. GenAI hands you a final answer and says 𝘛𝘳𝘶𝘴𝘵 𝘮𝘦. GenAI takes over an enormous amount of cognitive friction and work. It’s ultra-processed information: tasty, convenient, easy to overconsume.

    So is GenAI too risky for medicine? Not at all. We already deal with high-risk, high-benefit tools every day. Scalpels. Narcotics. Paralytics. The issue isn’t the tool. It’s the system around it. We don’t hand a PGY1 a needle and a syringe without training. Why would we hand them a language model without the same care? Here are some cognitive countermeasures I've been thinking about.
    1️⃣ Educate clinicians — not just on how to use the tool, but how it fails. Make GenAI part of medical education, not just IT deployment. Create spaces for experimentation before clinical exposure.
    2️⃣ Set boundaries — GenAI should assist, not replace. Use it for note drafting or patient education. Not as a shortcut for complex clinical reasoning. Think "hypothesis generator for me to accept or reject," not "diagnosis decider."
    3️⃣ Structure your prompts — Avoid vague asks like "what could this be?" System-level prompting should encourage critical thinking: "What would argue against this diagnosis?" "What else could explain this?"
    4️⃣ Cite sources — If the model can’t show its receipts, assume it hallucinated. Embedded links help, but they need verification. No source, no trust.
    5️⃣ Monitor and audit — Models drift. Behavior changes. Logging, usage reviews, maybe even GenAI M&M rounds should be standard.
And again — we need safe sandboxes to test and learn before real-world rollout. When something sounds smart, but is confident and occasionally wrong—that’s not a reason to panic. That's just an intern. And what do we do? We train and manage and supervise. We build structures and processes. It's the same as any drug that can alleviate pain but stop you breathing, or any procedure that can save a life or end one.   In medicine we don’t just trust a tool; we build systems around it. (If you still want the tinfoil hat? Make sure it’s sterile.)
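    Countermeasure 3 (structured prompts) can be sketched as a reusable template that forces the model to argue against the working diagnosis instead of confirming it. The wording and function name are illustrative assumptions, not a validated clinical protocol:

    ```python
    # Illustrative prompt scaffold: ask the model to challenge, not confirm,
    # the clinician's working diagnosis.

    def counterargument_prompt(case_summary, working_diagnosis):
        return (
            "You are a clinical reasoning assistant. Do NOT simply confirm "
            "the working diagnosis.\n"
            f"Case: {case_summary}\n"
            f"Working diagnosis: {working_diagnosis}\n"
            "1. List findings that argue AGAINST this diagnosis.\n"
            "2. List alternative diagnoses that could explain the findings.\n"
            "3. State what additional data would discriminate between them."
        )

    prompt = counterargument_prompt(
        "54M, chest pain radiating to back, BP asymmetric between arms",
        "acute coronary syndrome",
    )
    print(prompt)
    ```

    Baking the counterargument structure into the system prompt, rather than trusting each user to ask for it, is the "cognitive PPE" idea made concrete.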

  • View profile for Sumeet Mathur

    Senior Vice President & Managing Director, ServiceNow India Technology & Business Center

    12,405 followers

    Recently there has been growing skepticism about whether Generative AI's ROI can ever justify current and projected levels of investment, with Goldman Sachs posing the question "too much spend, too little benefit?" My thoughts: Even in its early stages of development, we are seeing #GenAI deliver significant boosts to employee productivity and customer experience. Amongst ServiceNow early adopters of Now Assist, our GenAI solutions, we have seen:
    👨🏽💻 IT agents spend up to 30% less time getting to successful resolutions
    ⚙️ Developers increasing velocity by as much as 25%
    ✅ Self-service deflection improving by >80% for IT and HR requests
    Across ServiceNow itself we have achieved $5M+ in annualized cost takeout and an additional $4M+ in increased productivity, as a direct result of our investments in GenAI for our internal usage. That’s $10M in tangible benefit.

    We have been disciplined in developing GenAI solutions that focus exclusively on practical use cases to help key personas (e.g. IT or HR service agents, developers) in the context of their work. We often engage customer teams and our own employees who fit these personas to test solutions like Now Assist in their everyday work – leading to the results mentioned above. This differs substantially from more generalist approaches to GenAI model training and tuning.

    We are also seeing that the most common inhibitors to achieving ROI from GenAI are not related to the technology itself, but to the quality of data within the organization. Where data is well-structured, accessible, and consolidated on a single platform environment - such as what ServiceNow offers - the use of GenAI and broader AI tends to yield higher, more sustainable returns. Excessive exuberance about any new technology warrants a little healthy skepticism, but when it meets practicality it generates tangible, measurable value.
To avoid "too much spend, too little benefit" with GenAI, our philosophy has been to apply it only where it is the most appropriate technology to solve real problems experienced by real people. And it's working!
