Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,471,121 followers

    How can businesses go beyond using AI for incremental efficiency gains to create transformative impact? I write from the World Economic Forum (WEF) in Davos, Switzerland, where I’ve been speaking with many CEOs about how to use AI for growth. A recurring theme is that running many experimental, bottom-up AI projects — letting a thousand flowers bloom — has failed to lead to significant payoffs. Instead, bigger gains require workflow redesign: taking a broader, perhaps top-down view of the multiple steps in a process and changing how they work together from end to end.

    Consider a bank issuing loans. The workflow consists of several discrete stages:

    Marketing -> Application -> Preliminary Approval -> Final Review -> Execution

    Suppose each step used to be manual. Preliminary Approval used to require an hour-long human review, but a new agentic system can do this automatically in 10 minutes. Swapping human review for AI review — but keeping everything else the same — gives a minor efficiency gain but isn’t transformative.

    Here’s what would be transformative: Instead of applicants waiting a week for a human to review their application, they can get a decision in 10 minutes. When that happens, the loan becomes a more compelling product, and that better customer experience allows lenders to attract more applications and ultimately issue more loans.

    However, making this change requires taking a broader business or product perspective, not just a technology perspective. Further, it changes the workflow of loan processing. Switching to offering a “10-minute loan” product would require changing how it is marketed. Applications would need to be digitized and routed more efficiently, and final review and execution would need to be redesigned to handle a larger volume. Even though AI is applied only to one step, Preliminary Approval, we end up implementing not just a point solution but a broader workflow redesign that transforms the product offering.
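The contrast between a point solution and a full redesign can be made concrete with a toy turnaround-time model. All stage durations below are illustrative assumptions; the post only gives "about a week" for the manual wait and 10 minutes for the AI review.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    hours: float  # assumed turnaround time for this stage

def total_turnaround(stages):
    # Sequential workflow: end-to-end time is the sum of stage times.
    return sum(s.hours for s in stages)

# Hypothetical week-long manual pipeline.
manual = [
    Stage("Application intake (paper)", 24.0),
    Stage("Preliminary approval (human review queue)", 96.0),
    Stage("Final review", 24.0),
    Stage("Execution", 24.0),
]

# Point solution: swap only the review step for the agentic system.
point_solution = [
    s if "Preliminary" not in s.name
    else Stage("Preliminary approval (agentic AI)", 10 / 60)
    for s in manual
]

# Redesign: digitize intake and streamline downstream steps too, so the
# 10-minute review actually yields a decision on a 10-minute timescale.
redesigned = [
    Stage("Digital application intake", 0.05),
    Stage("Preliminary approval (agentic AI)", 10 / 60),
    Stage("Automated routing, final review, execution", 0.25),
]

for label, wf in [("manual", manual), ("point solution", point_solution),
                  ("redesign", redesigned)]:
    print(f"{label}: {total_turnaround(wf):.2f} hours end to end")
```

Under these made-up numbers, the point solution still leaves a multi-day wait dominated by the untouched stages, which is the post's argument in miniature.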
    At AI Aspire (an advisory firm I co-lead), here’s what we see: Bottom-up innovation matters because the people closest to problems often see solutions first. But scaling such ideas to create transformative impact often requires seeing how AI can transform entire workflows end to end, not just individual steps, and this is where top-down strategic direction and innovation can help.

    This year's WEF meeting, as in previous years, has been an energizing event. Among technologists, frequent topics of discussion include Agentic AI (when I coined this term, I was not expecting to see it plastered on billboards and buildings!), Sovereign AI (how nations can control their own access to AI), Talent (the challenging job market for recent graduates, and how to upskill nations), and data-center infrastructure (how to address bottlenecks in energy, talent, GPU chips, and memory). I will address some of these topics in future posts.

    [Original text: https://lnkd.in/gbiRs2mi ]

  • Demis Hassabis

    Co-Founder & CEO, Google DeepMind

    267,645 followers

    I’ve worked on AI my whole life because I’ve always believed it could unlock the ability to answer some of the biggest and most intractable problems in science. Our first big science breakthrough happened five years ago when we announced our solution to the protein structure prediction problem: AlphaFold 2. It has been incredible to see its impact since then. More than 3 million researchers across 190 countries have used this tool for disease understanding, drug discovery and more. And it was an honour of a lifetime for our work to be recognised last year with a Nobel Prize.

    One of our greatest ambitions is for AI to aid in accelerating drug design and help cure all diseases. This is what led me to found Isomorphic Labs, which is already making amazing progress. We’ve also expanded AlphaFold to predict the interactions of all of life’s molecules.

    But AlphaFold represents more than a solution to a biological puzzle. It demonstrated how AI can crack ‘root node’ problems - where a single breakthrough unlocks entire new avenues of research. It is a critical step towards a long-held dream of mine: building a virtual cell. Imagine running ‘in silico’ experiments orders of magnitude faster than in a wet lab. Scientists could rapidly test hypotheses, model complex pathways and see how a drug affects a cell. It would be an incredible boon not only for fundamental biology but also for medicine.

    Although for me, AlphaFold was never just about biology. It was the first major proof point for a much larger thesis: that AI could be the ultimate tool for advancing science. By processing data or helping us come up with new hypotheses, I think AI will help us tackle some of humanity’s greatest challenges and answer fundamental questions about the universe. From materials design to fusion energy to mathematics, I believe we’re on the cusp of a new golden age of discovery. We’re just getting started.

    Read more about AlphaFold’s impact: https://lnkd.in/eNeqxqQp

  • Arianna Huffington

    Founder and CEO at Thrive Global | Passionate about Health and AI

    9,600,770 followers

    A study by researchers from Stanford and Carnegie Mellon has found that AI models are 50% more sycophantic than humans. Not only that, participants rated flattering responses as higher quality and wanted to use them more. It gets even worse: the flattery made participants less likely to admit they were wrong — even when confronted with evidence.

    “This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior,” the researchers wrote. “These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy.” https://lnkd.in/dBTJwE-F

    In other words, humans are hard-wired for approval, and so is AI. It’s a win-win for both sides — a flattery perpetual motion machine. AI models have turned into high-tech versions of courtesans — mistresses or prostitutes found in royal and aristocratic courts in Europe and Asia over the centuries. Among other talents, they often used flattery to seduce and gain status. Now we’re all royals, being sweet-talked by courtesans at the touch of a button. As Disraeli said, “Everyone likes flattery; and when you come to Royalty you should lay it on with a trowel.”

    There are some obvious pitfalls to having flattery applied to our queries with a digital trowel. OpenAI now has 800 million weekly users. And people increasingly trust AI to give them advice on more and more aspects of their lives. Surveys have found that 66% of Americans have used AI for financial advice, that nearly 40% trust AI on medical advice, and that 72% of teens have used AI companions.

    Sure, AI will answer any question, but how useful is that when it’s just telling us what we want to hear? What it’s not doing is what a human advisor or a trusted friend would do: tell us when we’re wrong. Nor, in its eagerness to please, will it tell us when we’re asking the wrong question altogether. As Plutarch put it, “I do not need a friend who changes when I change and nods when I nod; my shadow does that much better.” The risk is that instead of being a defense against the echo chambers of social media, AI just becomes a more powerful version.

  • Andy Jassy
    1,036,105 followers

    Every cloud provider faces the same AI infrastructure challenge: chips need to be positioned close together to exchange data quickly, but they generate intense heat, creating unprecedented cooling demands. We needed a strategic solution that allowed us to use our existing air-cooled data centers to do liquid cooling without waiting for new construction. And it needed to be rapidly deployed so we could bring customers these powerful AI capabilities while we transition towards facility-level liquid cooling.

    Think of a home where only one sunny room needs AC, while the rest stays naturally cool – that’s what we wanted to achieve, allowing us to efficiently land both liquid and air-cooled racks in the same facilities with complete flexibility.

    The available options weren't great. Either we could wait to build specialized liquid-cooled facilities or adopt off-the-shelf solutions that didn't scale or meet our unique needs. Neither worked for our customers, so we did what we often do at Amazon… we invented our own solution.

    Our teams designed and delivered our In-Row Heat Exchanger (IRHX), which uses a direct-to-chip approach with a "cold plate" on the chips. The liquid runs through this sealed plate in a closed loop, continuously removing heat without increasing water use. This enables us to support traditional workloads and demanding AI applications in the same facilities. By 2026, our liquid-cooled capacity will grow to over 20% of our ML capacity, which is at multi-gigawatt scale today.

    While liquid cooling technology itself isn't unique, our approach was. Creating something this effective that could be deployed across our 120 Availability Zones in 38 Regions was significant. Because this solution didn't exist in the market, we developed a system that enables greater liquid cooling capacity with a smaller physical footprint, while maintaining flexibility and efficiency.

    Our IRHX can support a wide range of racks requiring liquid cooling, uses 9% less water than fully-air cooled sites, and offers a 20% improvement in power efficiency compared to off-the-shelf solutions. And because we invented it in-house, we can deploy it within months in any of our data centers, creating a flexible foundation to serve our customers for decades to come.

    Reimagining and innovating at scale has been something Amazon has done for a long time and one of the reasons we’ve been the leader in technology infrastructure and data center invention, sustainability, and resilience. We're not done… there's still so much more to invent for customers.

  • Saanya Ojha

    Partner at Bain Capital Ventures

    80,177 followers

    This week MIT dropped a stat engineered to go viral: 95% of enterprise GenAI pilots are failing. Markets, predictably, had a minor existential crisis. Pundits whispered the B-word (“bubble”), traders rotated into defensive stocks, and your colleague forwarded you a link with “is AI overhyped???” in the subject line.

    Let’s be clear: the 95% failure rate isn’t a caution against AI. It’s a mirror held up to how deeply ossified enterprises are. Two truths can coexist: (1) The tech is very real. (2) Most companies are hilariously bad at deploying it.

    If you’re a startup, AI feels like a superpower. No legacy systems. No 17-step approval chains. No legal team asking whether ChatGPT has been “SOC2-audited.” You ship. You iterate. You win.

    If you’re an enterprise, your org chart looks like a game of Twister and your workflows were last updated when Friends was still airing. You don’t need a better model - you need a cultural lobotomy.

    This isn’t an “AI bubble” popping. It’s the adoption lag every platform shift goes through.
    - Cloud in the 2010s: Endless proofs of concept before actual transformation.
    - Mobile in the 2000s: Enterprises thought an iPhone app was strategy. Spoiler: it wasn’t.
    - Internet in the 90s: Half of Fortune 500 CEOs declared “this is just a fad.” Some of those companies no longer exist.

    History rhymes. The lag isn’t a bug; it’s the default setting. Buried beneath the viral 95% headline are 3 lessons enterprises can actually use:

    ▪️ Back-office > front-office. The biggest ROI comes from back-office automation - finance ops, procurement, claims processing - yet over half of AI dollars go into sales and marketing. The treasure’s just buried in a different part of the org chart.

    ▪️ Buy > build. Success rates hit ~67% when companies buy or partner with vendors. DIY attempts succeed a third as often. Unless it’s literally your full-time job to stay current on model architecture, you’ll fall behind. Your engineers don’t need to reinvent an LLM-powered wheel; they need to build where you’re actually differentiated.

    ▪️ Integration > innovation. Pilots flop not because AI “doesn’t work,” but because enterprises don’t know how to weave it into workflows. The “learning gap” is the real killer. Spend as much energy on change management, process design, and user training as you do on the tool itself. Without redesigning processes, “AI adoption” is just a Peloton bought in January and used as a coat rack by March. You didn’t fail at fitness; you failed at follow-through.

    In five years, GenAI will be as invisible - and indispensable - as cloud is today. The difference between the winners and the laggards won’t be access to models, but the courage to rip up processes and rebuild them. The “95% failure” stat doesn’t mean AI is snake oil. It means enterprises are in Year 1 of a 10-year adoption curve. The market just confused growing pains for terminal illness.

  • Felix Haas

    Design at Lovable, Angel Investor

    97,589 followers

    Invisible UX is coming 🔥 And it’s going to change how we design products, forever.

    For decades, UX design has been about guiding users through an experience. We’ve done that with visible interfaces: Menus. Buttons. Cards. Sliders. We’ve obsessed over layouts, states, and transitions. But with AI, a new kind of interface is emerging: One that’s invisible. One that’s driven by intent, not interaction.

    Think about it. You used to:
    → Open Spotify
    → Scroll through genres
    → Click into “Focus”
    → Pick a playlist
    Now you just say: “Play deep focus music.” No menus. No tapping. No UI. Just intent → output.

    You used to:
    → Search on Airbnb
    → Pick dates, guests, filters
    → Scroll through 50+ listings
    Now we’re entering a world where you guide with words: “Find me a cabin near Oslo with a sauna, available next weekend.”

    So the best UX becomes barely visible. Why does this matter? Because traditional UX gives users options. AI-native UX gives users outcomes. Old UX: “Here are 12 ways to get what you want.” New UX: “Just tell me what you want & we’ll handle the rest.”

    And this goes way beyond voice or chat. It’s about reducing friction. Designing systems that understand intent. Respond instantly. And get out of the way. The UI isn’t disappearing. It’s mainly dissolving into the background.

    So what should designers do? Rethink your role. Going forward you’ll not just lay out screens. You’ll design interactions without interfaces. That means:
    → Understanding how people express goals
    → Guiding model behavior through prompt architecture
    → Creating invisible guardrails for trust, speed, and clarity

    You are basically designing for understanding. The future of UX won’t be seen. It will be felt. Welcome to the age of invisible UX. Ready for it?
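The intent → output idea can be illustrated with a deliberately tiny sketch. A real product would use an LLM or a trained NLU model to map utterances to actions; the rule-based parser and the action schema below are made-up stand-ins for illustration only.

```python
import re

def parse_intent(utterance: str) -> dict:
    """Toy mapper from a natural-language request to a structured action.

    The action names ("play_music", "search_stays", "clarify") are
    hypothetical; the point is that the user expresses a goal and the
    system returns an outcome, with no menus in between.
    """
    text = utterance.lower()
    if "play" in text and "music" in text:
        # Pull out whatever sits between "play" and "music" as the mood/genre.
        match = re.search(r"play (.+) music", text)
        return {"action": "play_music",
                "query": match.group(1) if match else ""}
    if "find" in text and ("cabin" in text or "stay" in text):
        return {"action": "search_stays", "query": text}
    # Fallback: ask the user to clarify rather than guess.
    return {"action": "clarify", "query": utterance}

print(parse_intent("Play deep focus music"))
```

Even in this toy form, the design questions Haas raises show up: how people phrase goals, what the fallback is when intent is ambiguous, and where the guardrails sit.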

  • Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    242,183 followers

    McKinsey & Company 𝗮𝗻𝗮𝗹𝘆𝘇𝗲𝗱 𝟭𝟱𝟬+ 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗚𝗲𝗻𝗔𝗜 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝗳𝗼𝘂𝗻𝗱 𝗼𝗻𝗲 𝗰𝗼𝗺𝗺𝗼𝗻 𝘁𝗵𝗿𝗲𝗮𝗱: ⬇️

    One-off solutions don’t scale. The most successful projects take a different path: They use open, modular architectures that enable speed, reuse, and control.
    → Designed for reuse
    → Able to plug in best-in-class capabilities
    → Free from vendor lock-in

    This is the reference architecture McKinsey now recommends — optimized to scale what works while staying compliant. It consists of five core components: ⬇️

    𝟭. 𝗦𝗲𝗹𝗳-𝘀𝗲𝗿𝘃𝗶𝗰𝗲 𝗽𝗼𝗿𝘁𝗮𝗹
    → A secure, compliant “pane of glass” where teams can launch, monitor, and manage GenAI apps.
    → Preapproved patterns, validated capabilities, shared libraries.
    → Observability and cost controls built-in.

    𝟮. 𝗢𝗽𝗲𝗻 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲
    → Services are modular, reusable, and provider-agnostic.
    → Core functions like RAG, chunking, or prompt routing are shared across apps.
    → Infra and policy as code, built to evolve fast.

    𝟯. 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗴𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀
    → Every prompt and response is logged, audited, and cost-attributed.
    → Hallucination detection, PII filters, bias audits — enforced by default.
    → LLMs accessed only through a centralized AI gateway.

    𝟰. 𝗙𝘂𝗹𝗹-𝘀𝘁𝗮𝗰𝗸 𝗼𝗯𝘀𝗲𝗿𝘃𝗮𝗯𝗶𝗹𝗶𝘁𝘆
    → Centralized logging, analytics, and monitoring across all solutions
    → Built-in lifecycle governance, FinOps, and Responsible AI enforcement
    → Secure onboarding of use cases and private data controls
    → Enables policy adherence across infrastructure, models, and apps

    𝟱. 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻-𝗴𝗿𝗮𝗱𝗲 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀
    → Modular setup for user interface, business logic, and orchestration
    → Integrated agents, prompt engineering, and model APIs
    → Guardrails, feedback systems, and observability built into the solution
    → Delivered through the AI Gateway for consistent compliance and scale

    The message is clear: If your GenAI program is stuck, don’t look at the LLM. Look at your platform.

    𝗜 𝗲𝘅𝗽𝗹𝗼𝗿𝗲 𝘁𝗵𝗲𝘀𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗺𝗲𝗮𝗻 𝗳𝗼𝗿 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 — 𝗶𝗻 𝗺𝘆 𝘄𝗲𝗲𝗸𝗹𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿.
𝗬𝗼𝘂 𝗰𝗮𝗻 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝗵𝗲𝗿𝗲 𝗳𝗼𝗿 𝗳𝗿𝗲𝗲: https://lnkd.in/dbf74Y9E
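The centralized-gateway pattern in component 3 (every prompt logged, PII-filtered, and cost-attributed before any model sees it) can be sketched in a few lines. This is a minimal illustration, not McKinsey's reference implementation; the class name, log fields, and redaction patterns are all assumptions.

```python
import re
import time

# Two crude example patterns -- real deployments use dedicated PII tooling.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US-SSN-shaped numbers
    re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),     # email addresses
]

def redact_pii(text: str) -> str:
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

class AIGateway:
    """Hypothetical choke point between apps and LLM providers.

    Every call passes through redaction and lands in an audit log with
    a team attribution, so governance is enforced by default rather
    than left to each application.
    """

    def __init__(self, model_fn):
        self.model_fn = model_fn   # pluggable, provider-agnostic backend
        self.audit_log = []

    def complete(self, team: str, prompt: str) -> str:
        clean = redact_pii(prompt)
        response = self.model_fn(clean)
        self.audit_log.append({
            "team": team,            # for cost attribution
            "prompt": clean,         # PII never reaches the log or model
            "response": response,
            "ts": time.time(),
        })
        return response

# Usage with a stub model backend:
gateway = AIGateway(lambda prompt: "stub response")
gateway.complete("finance-ops", "Summarize the account for jane.doe@example.com")
```

Swapping `model_fn` is where the "free from vendor lock-in" claim cashes out: applications talk only to the gateway, never to a specific provider SDK.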

  • Yamini Rangan
    171,076 followers

    For decades, career growth followed a familiar formula: More headcount. More budget. More scope.  That model is changing. In the AI era, careers won’t be built on span of control, they’ll be built on innovation density. Today, anyone - from ICs to execs - can scale their impact without more headcount, more budget, or more time. The playing field is flatter. The differentiator? How fast you can learn, apply, and compound innovation with AI. If you’re thinking about career growth, stop asking: “How can I get more?” Start asking: “How can I innovate more with AI?” The people who rise fast will: See problems through an AI-first lens. Move from manual to scalable. Iterate faster than the rest. Your team size won’t define your trajectory. Your creativity will. Your budget won’t signal your value. Your innovation density will.

  • Ruben Hassid

    Master AI before it masters you.

    834,974 followers

    The One Prompt To Make ChatGPT Write Naturally: (save it for later, to copy & paste)

    Prompt:

    "Act like a professional content writer and communication strategist. Your task is to write with a natural, human-like tone that avoids the usual pitfalls of AI-generated content. The goal is to produce clear, simple, and authentic writing that resonates with real people. Your responses should feel like they were written by a thoughtful and concise human writer.

    You are writing the following: [INSERT YOUR TOPIC OR REQUEST HERE]

    Follow these detailed step-by-step guidelines:

    Step 1: Use plain and simple language. Avoid long or complex sentences. Opt for short, clear statements.
    - Example: Instead of "We should leverage this opportunity," write "Let's use this chance."

    Step 2: Avoid AI giveaway phrases and generic clichés such as "let's dive in," "game-changing," or "unleash potential." Replace them with straightforward language.
    - Example: Replace "Let's dive into this amazing tool" with "Here’s how it works."

    Step 3: Be direct and concise. Eliminate filler words and unnecessary phrases. Focus on getting to the point.
    - Example: Say "We should meet tomorrow," instead of "I think it would be best if we could possibly try to meet."

    Step 4: Maintain a natural tone. Write like you speak. It’s okay to start sentences with “and” or “but.” Make it feel conversational, not robotic.
    - Example: “And that’s why it matters.”

    Step 5: Avoid marketing buzzwords, hype, and overpromises. Use neutral, honest descriptions.
    - Avoid: "This revolutionary app will change your life."
    - Use instead: "This app can help you stay organized."

    Step 6: Keep it real. Be honest. Don’t try to fake friendliness or exaggerate.
    - Example: “I don’t think that’s the best idea.”

    Step 7: Simplify grammar. Don’t worry about perfect grammar if it disrupts natural flow. Casual expressions are okay.
    - Example: “i guess we can try that.”

    Step 8: Remove fluff. Avoid using unnecessary adjectives or adverbs. Stick to the facts or your core message.
    - Example: Say “We finished the task,” not “We quickly and efficiently completed the important task.”

    Step 9: Focus on clarity. Your message should be easy to read and understand without ambiguity.
    - Example: “Please send the file by Monday.”

    Follow this structure rigorously. Your final writing should feel honest, grounded, and like it was written by a clear-thinking, real person. Take a deep breath and work on this step-by-step."

    ___

    PS: For better results, always use ChatGPT-o3.
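If you want to reuse this prompt programmatically rather than pasting it into ChatGPT each time, a small helper can fill the placeholder and wrap the result as a chat message list. The abridged template, function name, and message format below are assumptions for illustration; any chat-completion-style API accepts a message list shaped like this.

```python
PLACEHOLDER = "[INSERT YOUR TOPIC OR REQUEST HERE]"

# Abridged stand-in for the full prompt above; paste the complete
# text here in practice.
TEMPLATE = (
    "Act like a professional content writer and communication strategist. "
    "Write with a natural, human-like tone that avoids the usual pitfalls "
    "of AI-generated content. You are writing the following: " + PLACEHOLDER
)

def build_messages(template: str, topic: str) -> list:
    """Fill the topic placeholder and wrap the prompt as a user message."""
    if PLACEHOLDER not in template:
        raise ValueError("template is missing the topic placeholder")
    return [{"role": "user", "content": template.replace(PLACEHOLDER, topic)}]

messages = build_messages(TEMPLATE, "a LinkedIn post about our product launch")
print(messages[0]["content"])
```

The explicit check guards against the most common copy-paste failure: a template where the placeholder was accidentally edited away, which would silently send the literal bracket text to the model.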

  • Endrit Restelica

    AI | Tech | Marketing | +8 Million Followers and +1 Billion Views 👉 I will help you scale your brand and community 🏆📈

    416,167 followers

    Are you ready for this future?! A future where a quiet, rubber-wheeled humanoid slides next to an elderly loved one (or even you), extends two telescopic arms, and lifts its passenger into a wheelchair with the grace of a seasoned nurse, minus the back strain and risk of mishaps.

    Engineers at Hebei University of Technology have spent years perfecting this caregiver bot. It now hoists up to 90 kg, pivots every joint with two degrees of freedom, and carries its load in one smooth, AI-balanced arc. Sensors guide each grip, lithium batteries recharge while the ward sleeps, and human staff are freed for conversation, reassurance, and real-time medical decisions instead of heavy lifting.

    The metallic chill we see today may fade as designers wrap these helpers in soft skins and friendly faces. Yet beneath the silicone smiles they will remain machines, lines of code driving steel and servos...only harder to notice as AI grows subtler...but maybe the extra minutes they hand back to nurses and families could make care more human, not less? How does that make you feel?
