How Liquid Cooling Transforms Data Centers

Explore top LinkedIn content from expert professionals.

Summary

Liquid cooling uses fluids instead of air to remove heat from high-power processors in data centers, keeping them within thermal limits as AI and high-density computing demands grow. This shift allows data centers to operate more efficiently, handle larger workloads, and reduce energy use for a more sustainable future.

  • Embrace targeted cooling: Consider installing direct-to-chip or immersion cooling systems to efficiently manage heat from high-powered servers and prevent slowdowns.
  • Improve operational sustainability: Adopt closed-loop liquid cooling to cut water consumption and lower electricity use, reducing both costs and environmental impact.
  • Upgrade for higher density: Plan infrastructure changes that support liquid cooling to enable more servers per rack, making room for expanding AI and cloud computing needs.
Summarized by AI based on LinkedIn member posts
  • View profile for Andy Jassy
    Andy Jassy · Influencer
    1,036,527 followers

    Every cloud provider faces the same AI infrastructure challenge: chips need to be positioned close together to exchange data quickly, but they generate intense heat, creating unprecedented cooling demands.

    We needed a strategic solution that allowed us to use our existing air-cooled data centers for liquid cooling without waiting for new construction. And it needed to be rapidly deployed so we could bring customers these powerful AI capabilities while we transition toward facility-level liquid cooling. Think of a home where only one sunny room needs AC while the rest stays naturally cool – that’s what we wanted to achieve, allowing us to efficiently land both liquid-cooled and air-cooled racks in the same facilities with complete flexibility.

    The available options weren't great. Either we could wait to build specialized liquid-cooled facilities or adopt off-the-shelf solutions that didn't scale or meet our unique needs. Neither worked for our customers, so we did what we often do at Amazon… we invented our own solution.

    Our teams designed and delivered our In-Row Heat Exchanger (IRHX), which uses a direct-to-chip approach with a "cold plate" on the chips. The liquid runs through this sealed plate in a closed loop, continuously removing heat without increasing water use. This enables us to support traditional workloads and demanding AI applications in the same facilities. By 2026, our liquid-cooled capacity will grow to over 20% of our ML capacity, which is at multi-gigawatt scale today.

    While liquid cooling technology itself isn't unique, our approach was. Creating something this effective that could be deployed across our 120 Availability Zones in 38 Regions was significant. Because this solution didn't exist in the market, we developed a system that enables greater liquid cooling capacity with a smaller physical footprint, while maintaining flexibility and efficiency. Our IRHX can support a wide range of racks requiring liquid cooling, uses 9% less water than fully air-cooled sites, and offers a 20% improvement in power efficiency compared to off-the-shelf solutions. And because we invented it in-house, we can deploy it within months in any of our data centers, creating a flexible foundation to serve our customers for decades to come.

    Reimagining and innovating at scale is something Amazon has done for a long time, and it's one of the reasons we’ve been the leader in technology infrastructure and data center invention, sustainability, and resilience. We're not done… there's still so much more to invent for customers.
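    To make the closed-loop, cold-plate idea above concrete, here is a minimal sizing sketch: given a rack's heat load and an allowed coolant temperature rise, it estimates how much flow the loop must circulate (Q = ṁ·cp·ΔT). The function name and all numbers are illustrative assumptions, not AWS IRHX specifications.

```python
# Sketch: sizing the coolant flow for a direct-to-chip cold-plate loop.
# All values are illustrative assumptions; they are NOT AWS IRHX figures.

def required_flow_lpm(heat_load_kw: float, delta_t_c: float = 10.0,
                      cp_j_per_kg_k: float = 4100.0,
                      density_kg_per_l: float = 1.0) -> float:
    """Coolant flow in liters/minute needed to carry `heat_load_kw`
    with a `delta_t_c` temperature rise across the cold plates."""
    mass_flow_kg_s = (heat_load_kw * 1000.0) / (cp_j_per_kg_k * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# Example: a hypothetical 100 kW rack with a 10 °C coolant temperature rise
print(f"{required_flow_lpm(100.0):.1f} L/min")   # ~146 L/min
```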

  • View profile for Abdullah Mahrous

    Senior Data Center Operations & Maintenance Engineer | Critical Facilities | Tier III Data Centers

    9,841 followers

    How Full Liquid Cooling Is Powering the Next Generation of AI Data Centers

    As AI workloads grow, traditional cooling methods are no longer enough. Modern high-performance data centers are now built around full liquid cooling architectures designed to manage the extreme heat generated by advanced AI processors.

    At the facility level, water from the building cooling system flows into in-row Coolant Distribution Units (CDUs). Inside, a liquid-to-liquid heat exchanger transfers cooling capacity to a secondary fluid that circulates directly to each rack, creating an efficient bridge between facility cooling and IT equipment.

    Inside every server, a dedicated liquid loop is engineered to match the processor layout and power density of AI hardware. Instead of relying on air, this loop absorbs heat directly from CPUs, GPUs, and memory modules, removing thermal energy at the source. The heated liquid then returns to the CDU, where high-performance heat exchangers move the heat away from the IT space toward the facility cooling system. From there, rooftop chillers or dry coolers reject the heat into the ambient environment.

    Even in fully liquid-cooled data centers, air still plays a supporting role. Air handlers remove residual heat from components not connected to the liquid loop, creating a balanced ecosystem where liquid handles high-density loads and air maintains room stability.

    Full liquid cooling is becoming a foundation for AI-ready infrastructure, enabling higher rack densities, better efficiency, and stable performance under extreme compute demand. As a Data Center Operations & Maintenance Engineer, I closely follow how these cooling architectures are transforming operations and facility design. Always happy to connect with professionals working on next-generation, AI-ready data centers.

    Video copyright: BOYD © Abdullah Mahrous – CC BY 4.0
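    A minimal sketch of the two-loop arrangement described above: the facility water loop and the secondary (rack) loop meet at the CDU's liquid-to-liquid heat exchanger, so the secondary supply temperature is bounded by the facility supply plus an approach temperature. The function names, the 2 °C approach, and the example temperatures and flows are illustrative assumptions.

```python
# Sketch of the two-loop heat path: facility water -> CDU liquid-to-liquid
# heat exchanger -> secondary (technology) coolant loop. Values are assumed.

def secondary_supply_temp(facility_supply_c: float, hx_approach_c: float = 2.0) -> float:
    """Coolest temperature the secondary loop can reach, limited by the
    heat exchanger approach (secondary supply >= facility supply + approach)."""
    return facility_supply_c + hx_approach_c

def secondary_return_temp(supply_c: float, heat_load_kw: float,
                          flow_lpm: float, cp_j_per_kg_k: float = 4100.0) -> float:
    """Temperature of the coolant returning to the CDU after absorbing
    `heat_load_kw` at the racks (assumes ~1 kg per liter of coolant)."""
    mass_flow_kg_s = flow_lpm / 60.0
    return supply_c + (heat_load_kw * 1000.0) / (mass_flow_kg_s * cp_j_per_kg_k)

supply = secondary_supply_temp(facility_supply_c=30.0)    # e.g. 32 °C delivered to the racks
print(secondary_return_temp(supply, heat_load_kw=80.0, flow_lpm=120.0))  # ~41.8 °C back to the CDU
```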

  • View profile for PS Lee

    Head of NUS Mechanical Engineering & Executive Director of ESI | Expert in Sustainable AI Data Center Cooling | Keynote Speaker and Board Member

    51,465 followers

    🚀 Pumped Two-Phase Direct-to-Chip Cooling: Powering the Future of AI Data Centers

    Summary: As AI workloads surge, we are entering a new era of compute intensity. Chips like the NVIDIA Blackwell (2000W TDP), AMD MI300X (750W), and Gaudi HL-2080 (600W) are pushing thermal design limits far beyond traditional cooling capabilities. With cooling systems already accounting for up to 40% of an AI data center’s total energy use, the industry must innovate—fast.

    🔍 Pumped Two-Phase (P2P) Direct-to-Chip Cooling is emerging as a transformative solution. By leveraging the latent heat of vaporization, P2P cooling removes heat more efficiently than single-phase methods. Cold plates are placed directly on high-power components, and a refrigerant circulates in a closed loop—absorbing heat through flow boiling and returning to the CDU for condensation and recirculation.

    💡 Recent research from Vertiv, Intel, NVIDIA, and Binghamton University—presented at ASME InterPACK 2024—has validated P2P D2C cooling as commercially viable (TRL 7, CRL 2). Notable performance metrics include:
    - Heat load handling up to 170kW per rack
    - Case temperatures below 56.4°C
    - Thermal resistance of cold plates as low as 0.012°C/W
    - Efficient operation across dynamic loads, including hot-swapping scenarios
    - Stable control via flow regulators (2–32 PSID) to manage vapor quality and avoid dry-out

    🔧 Two main system architectures are being optimized:
    Refrigerant-to-Air (R2A): For integration into existing air-cooled environments. R2A CDUs with microchannel condensers and variable-speed fans deliver up to 40kW in 600mm racks, making them ideal for gradual liquid cooling adoption.
    Refrigerant-to-Liquid (R2L): Using brazed plate heat exchangers and chilled water loops, R2L systems are ideal for high-power-density clusters, leveraging liquid’s superior heat transport.

    🧪 In real-world tests, the Vertiv R2L system maintained a constant pump flow of 39 GPM while supporting transient and asymmetric IT loads. Even under high refrigerant saturation temperatures and pressure drops (up to 7.6 psi across cold plates), the system remained within design parameters. Importantly, system resilience was demonstrated under failure simulations (e.g., pump switch-over, loss of heat rejection) without triggering pressure relief valves—ensuring safe shutdown protocols and zero refrigerant release.

    🌍 Why it matters: As we push toward 600kW+ rack densities and AI training workloads scale exponentially, efficient and safe heat removal will be the linchpin of sustainable digital infrastructure. P2P D2C cooling isn’t just a stopgap—it may be the definitive pathway for next-gen AI data centers.

    #AIDataCenters #LiquidCooling #DirectToChip #TwoPhaseCooling #Vertiv #NVIDIA #ThermalManagement #SustainableComputing #HighDensityCooling #DataCenterInnovation #CoolingEfficiency #BlackwellGPU #HPC #GreenDigitalInfrastructure #EnergyEfficiency #PUE #NetZeroTech #FutureOfCooling #R2L #R2A #FlowBoiling #ColdPlate
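    The latent-heat advantage described above can be illustrated with a small back-of-the-envelope comparison: moving the same heat as a sensible temperature rise (single-phase) versus as flow boiling (two-phase). The fluid properties and vapor-quality limit are rough illustrative assumptions, not figures from the InterPACK study.

```python
# Sketch contrasting single-phase and pumped two-phase (P2P) coolant flow for
# the same heat load. Fluid properties are generic illustrative values.

def single_phase_flow_kg_s(heat_kw: float, cp_j_per_kg_k: float = 4100.0,
                           delta_t_c: float = 10.0) -> float:
    """Sensible-heat-only loop: all heat carried as a temperature rise."""
    return heat_kw * 1000.0 / (cp_j_per_kg_k * delta_t_c)

def two_phase_flow_kg_s(heat_kw: float, h_fg_j_per_kg: float = 180_000.0,
                        exit_vapor_quality: float = 0.7) -> float:
    """Flow-boiling loop: heat absorbed as latent heat of vaporization,
    keeping vapor quality below dry-out limits at the cold-plate exit."""
    return heat_kw * 1000.0 / (h_fg_j_per_kg * exit_vapor_quality)

rack_kw = 170.0   # the per-rack heat load cited in the post
print(f"single-phase: {single_phase_flow_kg_s(rack_kw):.2f} kg/s")  # ~4.1 kg/s
print(f"two-phase:    {two_phase_flow_kg_s(rack_kw):.2f} kg/s")     # ~1.3 kg/s
```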

  • View profile for Obinna Isiadinso

    Global Sector Lead, Data Centers and Cloud Services Investments – Follow me for weekly insights on global data center and AI infrastructure investing

    22,581 followers

    Liquid cooling is redefining data center efficiency... delivering a powerful combination of sustainability and cost savings.

    As computing demands increase, traditional air cooling is falling behind. Data centers are turning to liquid cooling to reduce energy use, cut costs, and support high-performance workloads. Operators are considering direct-to-chip cooling, which circulates liquid over heat-generating components, and immersion cooling, where servers are fully submerged in a dielectric fluid for maximum efficiency.

    Developed markets, like the U.S. and Europe, are adopting liquid cooling to support AI-driven workloads and reduce carbon footprints in large-scale facilities. Meanwhile, emerging markets in Southeast Asia and Latin America are leveraging liquid cooling to manage high-density computing in regions with hotter climates and less reliable power grids, ensuring operational stability and efficiency.

    Greater Energy Efficiency: Liquid cooling reduces total data center power consumption by 10.2%, with facility-wide savings up to 18.1%. It also uses 90% less energy than air conditioning, improving heat transfer and maintaining stable operating temperatures.

    Sustainability Gains: Lower PUE (Power Usage Effectiveness) means less wasted energy, while reduced electricity use cuts carbon emissions. Closed-loop systems also minimize water consumption, making liquid cooling a more sustainable option.

    Cost and Performance Advantages: Efficient temperature management prevents thermal throttling, optimizing CPU and GPU performance. Higher-density computing lowers construction costs by 15-30%, while cooling energy savings of up to 50% reduce long-term operational expenses.

    The Future of Cooling: As #AI and cloud workloads grow, liquid cooling is becoming a competitive advantage. Early adopters will benefit from lower costs, improved efficiency, and a more sustainable infrastructure. #datacenters
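    A short sketch of the arithmetic behind the efficiency figures quoted above, treating the post's percentages as inputs. The PUE value, the monthly energy total, and the function names are hypothetical.

```python
# Sketch of facility-level efficiency arithmetic. Inputs are illustrative;
# the 10.2% figure is taken from the post, the rest are assumptions.

def it_energy_share(pue: float) -> float:
    """Fraction of total facility energy that reaches IT equipment."""
    return 1.0 / pue

def facility_energy_after_savings(total_kwh: float, savings_fraction: float) -> float:
    """Total facility energy after a cooling-driven percentage reduction."""
    return total_kwh * (1.0 - savings_fraction)

baseline_total_kwh = 1_000_000.0                  # hypothetical monthly facility energy
print(f"{it_energy_share(1.5):.0%} of energy reaches IT at PUE 1.5")          # ~67%
print(facility_energy_after_savings(baseline_total_kwh, 0.102))               # 898,000 kWh after a 10.2% cut
```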

  • View profile for MANDEEP SINGH

    Lead Commissioning Engineer | Data Center & MEP Specialist | BMS Certified | PMP Certified | HVAC & Sustainable Construction (LCA) | AWS Certified | BIM Certified

    8,078 followers

    Liquid Cooling: The $8 Billion Architecture Powering AI & Hyperscale Density

    Air cooling is officially struggling to keep up. As AI acceleration and HPC (High-Performance Computing) drive server power density past 30kW per rack, operators are rapidly shifting to liquid cooling—the only viable solution that is both efficient and future-ready. According to the latest forecast, the Data Center Liquid Cooling Market is set to surge from $2.2 billion to nearly $8 billion by 2031 🚀. This massive trajectory is fueled by sustainability demands and the insatiable appetite for compute power.

    💡 So, What Makes Liquid Cooling Unstoppable?
    Liquid cooling replaces roaring fans with a targeted, high-precision pipeline, leveraging the superior heat transfer capacity of fluid over air. The primary architectures include:
    1. Direct-to-Chip (Cold Plate) Cooling: Heat is transferred directly from the hot chip surface (CPU/GPU) to a cold plate. This is highly efficient for high-power chips.
    2. Rear-Door Heat Exchanger (RDHx): Liquid-cooled coils in the rear door remove heat from the exhaust air before it enters the data hall.
    3. Immersion Cooling: Servers are fully submerged in a non-conductive dielectric fluid, offering the highest possible density.

    🧠 The Core Component: Coolant Distribution Units (CDUs)
    All these systems rely on the Coolant Distribution Unit (CDU). The CDU acts as the intelligent bridge, managing the precise flow, pressure, and temperature of the coolant between the facility's heat rejection system and the IT gear.

    ✨ Quantifiable Benefits for Operators
    Liquid cooling is not an upgrade—it's an essential architectural shift delivering powerful ROI:
    Higher Density: Enables compute density previously impossible with air.
    Energy Efficiency: Drastically reduced cooling power (and lower PUE), leading to lower operating costs.
    Sustainability: Supports greener data centers by facilitating heat reuse and lowering the carbon footprint.
    Reliability: Eliminates thermal strain and hot spots, improving system stability for critical AI + HPC workloads.

    If you are shaping data center cooling strategies for 2025–2030, understanding the dynamics of D2C, immersion, and CDU integration is now non-negotiable.

    #LiquidCooling #DataCenterCooling #AIWorkloads #HPC #CDU #ImmersionCooling #DirectToChip #ThermalManagement #PUE #GreenDataCenters #Hyperscale #DataCenterDesign #Infrastructure #CoolingArchitecture #Engineering
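    As a rough companion to the three architectures listed above, here is a toy rule-of-thumb selector keyed to rack power density. The kW/rack thresholds and the function name are illustrative assumptions, not industry standards or figures from the post.

```python
# Sketch: rule-of-thumb mapping from rack power density to cooling approach.
# Thresholds are illustrative only; real selection depends on many factors.

def suggest_cooling(rack_kw: float) -> str:
    if rack_kw <= 20:
        return "air cooling (hot/cold aisle containment)"
    if rack_kw <= 40:
        return "rear-door heat exchanger (RDHx)"
    if rack_kw <= 120:
        return "direct-to-chip (cold plate) with in-row CDU"
    return "immersion cooling or hybrid D2C + immersion"

for kw in (8, 30, 80, 150):
    print(f"{kw:>4} kW/rack -> {suggest_cooling(kw)}")
```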

  • View profile for Amir Olajuwon

    Commissioning Leader & Construction Executive | Mission Critical MEP & QA/QC | AI/Hyperscale Data Centers | Owner’s Rep & GC | Multi-Sector: Semiconductor, Oil & Gas, Aerospace

    11,516 followers

    A Technical Look at Liquid Cooling in AI Data Centers (Part 2)

    Last week, I broke down how liquid cooling works at the chip level. This post finishes the picture by looking at how the rest of the system operates.

    At the center of most liquid-cooled AI data centers is the Coolant Distribution Unit (CDU). The CDU is the interface between the servers and the facility cooling plant. It controls coolant temperature, pressure, and flow, while isolating sensitive IT hardware from fluctuations or contaminants in the building water loop. Inside a CDU are pumps, filters, sensors, expansion volume, and a liquid-to-liquid heat exchanger that safely transfers heat out of the server loop.

    Once heat leaves the servers, it’s passed through plate heat exchangers into the facility’s external water loop. That heat is then rejected through cooling towers or dry coolers, depending on site design and climate. At scale, this architecture delivers meaningful efficiency gains and helps reduce overall PUE in AI facilities.

    Coolant chemistry matters just as much as hardware. Most direct-to-chip systems use deionized water or water-glycol mixtures for high thermal performance and low electrical conductivity. Chemistry is tightly monitored—small shifts in conductivity or particulates can signal corrosion, leaks, or biological growth.

    End to end, liquid cooling operates as a closed loop: heat is generated at the chip, absorbed by coolant, transferred through the CDU, rejected to the environment, and returned to the servers. Every component in that loop is designed for reliability, efficiency, and scalability.

    This post focused on direct-to-chip cooling, the most common liquid approach in today’s data centers. A future post will cover immersion cooling and where it fits.

    #AI #HPC #DataCenters #AIInfrastructure #LiquidCooling #HighDensityComputing #ThermalManagement #CoolingTechnology #InfrastructureEngineering #NextGenDataCenters #GPUs
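    The coolant-chemistry monitoring described above can be sketched as a simple threshold check on loop telemetry. The limits, field names, and alert wording are illustrative assumptions, not recommended operating values.

```python
# Sketch: flag drift in secondary-loop coolant chemistry. Thresholds are
# illustrative assumptions, not standards or vendor specifications.

from dataclasses import dataclass

@dataclass
class CoolantSample:
    conductivity_us_cm: float   # microsiemens per centimeter
    particulate_ppm: float

def check_coolant(sample: CoolantSample,
                  max_conductivity_us_cm: float = 10.0,
                  max_particulate_ppm: float = 5.0) -> list[str]:
    """Return a list of alerts; an empty list means the loop looks healthy."""
    alerts = []
    if sample.conductivity_us_cm > max_conductivity_us_cm:
        alerts.append("conductivity drift: possible corrosion or additive depletion")
    if sample.particulate_ppm > max_particulate_ppm:
        alerts.append("particulate count high: check filters for debris or biological growth")
    return alerts

print(check_coolant(CoolantSample(conductivity_us_cm=14.2, particulate_ppm=2.1)))
```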

  • View profile for Rich Miller

    Authority on Data Centers, AI and Cloud

    48,442 followers

    AWS Builds Custom Liquid Cooling System for Data Centers

    Amazon Web Services (AWS) is sharing details of a new liquid cooling system to support high-density AI infrastructure in its data centers, including custom designs for a coolant distribution unit and an engineered fluid. “We've crossed a threshold where it becomes more economical to use liquid cooling to extract the heat,” said Dave Klusas, AWS’s senior manager of data center cooling systems, in a blog post.

    The AWS team considered multiple vendor liquid cooling solutions, but found none met its needs and began designing a completely custom system, which was delivered in 11 months, the company said. The direct-to-chip solution uses a cold plate placed directly on top of the chip. The coolant, a fluid specifically engineered by AWS, runs in tubes through the sealed cold plate, absorbing the heat and carrying it out of the server rack to a heat rejection system, and then back to the cold plates. It’s a closed-loop system, meaning the liquid continuously recirculates without increasing the data center’s water consumption.

    AWS also developed a custom coolant distribution unit, which it said is more powerful and more efficient than its off-the-shelf competitors. “We invented that specifically for our needs,” Klusas said. “By focusing specifically on our problem, we were able to optimize for lower cost, greater efficiency, and higher capacity.” Klusas said the liquid is typically at “hot tub” temperatures for improved efficiency.

    AWS has shared details of its process, including photos: https://lnkd.in/e-D4HvcK

  • View profile for Addison Stark

    Clean Industrialist | Chief Boilermaker @ AtmosZero

    3,005 followers

    Last week's remarks about next‑gen systems running on 45 °C warm‑water cooling rattled leading HVAC stocks. To the uninitiated, that market reaction revealed an over‑reliance on a single storyline: chiller‑heavy data centers as the primary growth engine. But by my estimation, the reality is simpler and more interesting: liquid cooling will take share inside the server hall, and smart strategics will keep growing by balancing their thermal product portfolios.

    Here’s my strategic framing:

    1) Liquid is inevitable at high compute density. Direct‑to‑chip and immersion designs move heat more efficiently than air at scale. They also raise outlet temperatures enough to enable economization. That shifts value toward next-gen fluids, manifold designs, and integrated controls.

    2) Electrification is gaining momentum in other thermal sectors, including process heat. Industrial steam and hot‑water applications are a massive, under‑served electrification arena. High‑temperature heat pumps remove emissions and fit the same efficiency‑first narrative. Different use case, same trend: more useful heat per unit of input energy.

    3) Platform products beat mega-projects. The winners will build platforms that travel across segments: common components, shared controls, interoperable modules, so engineering advances and supply chains compound.

    4) Balanced portfolios damp volatility. Over‑concentration on one TAM naturally invites beta to the narrative. A multi‑thermal portfolio (liquid cooling + electrified process heat + residential HVAC + air chillers) lets strategic players participate in both AI infrastructure and industrial electrification, with optionality to move where policy, power prices, and customer demand are strongest.

    For the major players to succeed in driving continued growth, it’s about building across the thermal stack: precision liquid cooling where compute needs it, electrified industrial HVAC where process heat dominates. The curve will favor those investing in efficiency, integration, and modularity across temperature bands.

    #IndustrialHVAC #ElectrifiedHeat #ProcessSteam #LiquidCooling #ThermalPlatforms #WasteHeat #DataCenters

  • View profile for Youssef El Manssouri

    Co-Founder & CEO at Sesterce - first principles, small teams, simple systems.

    6,191 followers

    The air-cooling era of the data center is officially over.

    A traditional enterprise server rack runs at about 8kW. We are now entering the era of the 1MW rack: a concentration of power and heat that fundamentally breaks the laws of traditional infrastructure. Average power density per rack doubled from 8kW to 17kW in twenty-four months and is projected to hit 30kW by 2027. But that is just the beginning. Individual GPU power levels are scaling from 150W to 1,500W, with some projections exceeding 10,000W per unit. This requires a transition from 100kW IT racks to 1MW configurations.

    How do you deliver and dissipate 1MW in a single IT rack without catastrophic failure? To reach this density, the industry is moving toward disaggregated systems and high-voltage direct current (HVDC) architectures. By bringing 480V AC into the rack and converting it to +/- 400V or even 800V DC, you can distribute power directly to the IT gear through HVDC bus bars. This configuration, based on the OCP Diablo spec, allows for 110kW to 180kW power shelves interspersed with battery or capacitive backup units.

    The thermodynamic reality is even steeper. As rack power increases 20x, we see a parallel 20x increase in heat generation. This requires in-row Coolant Distribution Units (CDUs) with 1.8MW capacity, operating at flow rates of 1.5 liters per minute per kilowatt. Liquid cooling is a mandatory physical constraint to prevent clock-speed throttling across clusters of 72 GPUs acting as a single logical chip.

    But where does a 1GW site find this level of stable, always-on power? Traditional grids cannot handle the 1GW "flat load" profile of a massive AI cluster. In France, our nuclear baseload provides the sovereign, carbon-free energy needed for 99.9% frequency stability. If the grid frequency deviates, you risk losing an entire training epoch during checkpointing. Most companies simply cannot build this vertical integration.
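    Applying the flow-rate rule of thumb quoted above (about 1.5 liters per minute of coolant per kilowatt of IT load) gives a quick sense of rack and CDU sizing. Only the 1.5 L/min/kW and 1.8 MW CDU figures come from the post; the rack sizes and helper names are illustrative.

```python
# Sketch: rack coolant flow and racks-per-CDU using the 1.5 L/min/kW rule of
# thumb and a 1.8 MW in-row CDU, both cited above. Rack sizes are examples.

COOLANT_LPM_PER_KW = 1.5   # figure cited in the post

def rack_flow_lpm(rack_kw: float) -> float:
    """Coolant flow a single rack needs at the quoted rule of thumb."""
    return rack_kw * COOLANT_LPM_PER_KW

def racks_per_cdu(cdu_capacity_kw: float, rack_kw: float) -> int:
    """How many such racks a single CDU could serve, ignoring margins."""
    return int(cdu_capacity_kw // rack_kw)

for rack_kw in (100, 250, 1000):
    print(f"{rack_kw:>5} kW rack -> {rack_flow_lpm(rack_kw):>6.0f} L/min, "
          f"{racks_per_cdu(1800, rack_kw)} racks per 1.8 MW CDU")
```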

  • View profile for Imran L.

    Global Data Center Executive | AI Factories | Sovereign AI and GCC markets | Digital Twin | HPC/Quantum Infrastructure | Liquid Cooling Pioneer | M&A Strategic Advisor | Chip to Grid optimization | Industry Analyst

    5,928 followers

    🚀 Pioneering Research: How Liquid Cooling is Reshaping the Future of AI Factories

    I'm thrilled to announce that our multi-institutional research team has just published groundbreaking findings on NVIDIA's Sustainable Computing platform, revealing how direct-to-chip liquid cooling is transforming AI infrastructure performance and sustainability.

    It was an absolute privilege to lead this exceptional collaboration bringing together world-class researchers and industry leaders from:
    - Berkeley Lab
    - Brookhaven National Laboratory
    - Florida Atlantic University
    - Kansas State University
    - NVIDIA (James Hooks) and Supermicro (Jim Hetherington)

    Special thanks to co-authors Alex Newkirk, Arslan Munir, and Hayat Ullah for their outstanding contributions to this work.

    Why This Matters for AI Factories: As enterprises build the next generation of AI factories—purpose-built facilities designed to train and deploy large language models and advanced AI systems at scale—thermal management has emerged as a critical enabler, not just an operational concern. Our research demonstrates that liquid cooling fundamentally unlocks higher sustained performance, superior energy efficiency, and the thermal headroom needed to push AI workloads to their full potential. These findings provide the technical foundation for designing AI factories that can handle extreme rack densities while maintaining optimal performance and cost efficiency. The implications extend beyond individual nodes to facility-scale operations, where thermal efficiency directly impacts both computational capability and sustainability goals.

    I'm excited to announce that the next phase of our research will explore how Johnson Controls' cutting-edge thermal management solutions can further accelerate AI factory performance and efficiency. As the industry pushes toward 100kW+ rack densities and multi-megawatt AI training clusters, JCI's innovative approach to integrated thermal infrastructure will be instrumental in building the sustainable, high-performance AI factories of tomorrow. The convergence of advanced liquid cooling, intelligent thermal management, and purpose-built AI infrastructure represents a pivotal moment for our industry. This research helps chart the path forward for organizations investing in AI factory deployments.

    Read the full white paper: https://lnkd.in/eJUPCHfR
    https://lnkd.in/e266n3kd

    Johnson Controls #AI #DataCenters #LiquidCooling #AIFactory #Sustainability #Innovation #JohnsonControls #NVIDIA #GreenComputing #HighPerformanceComputing #FutureOfAI
