Comparing Power Conversion Architectures


Summary

Comparing power conversion architectures involves analyzing how electricity moves from the grid to servers in data centers, focusing on the methods and stages used to convert and distribute power efficiently for high-performance computing. With rising demand and AI-driven workloads, new approaches are emerging—like 800 VDC and +/-400 VDC systems—that streamline power delivery and minimize energy loss compared to traditional AC configurations.

  • Simplify power paths: Redesigning the power chain to use fewer conversion steps and higher voltages can help save energy and reduce heat, especially in dense AI environments.
  • Embrace new technology: Consider deploying solid state transformers, which convert AC to DC more precisely and enable direct integration with battery storage for improved reliability and flexibility.
  • Assess operational challenges: Prepare for new complexities with DC systems, such as different fault protection requirements and evolving industry standards, to ensure safe and stable operations.
Summarized by AI based on LinkedIn member posts
  • Igor Morozov

    VP Data Center Power Solutions, SolarEdge | 800VDC & SST-Based Power Architecture for AI Data Centers | 15 Years Building Hardware at Scale | Kellogg EMBA


    The Hidden Megawatt: Why We Are Engineering Our Own Gridlock ⚡🏗️

    Everyone is racing to secure the next grid connection 🔌 Almost nobody is asking how much compute is being left on the table with the connection they already have. I call it the Hidden Megawatt.

    In a traditional data center power chain, electricity is converted about five times between the grid and the GPU. Each stage adds loss, heat, cost, and failure risk. By the time power reaches the chip, roughly 5 to 7 percent of total facility capacity is already gone, burned as heat before a single token is produced ♨️ At 100 MW, that equals an entire row of GPU racks you paid for but never use 🖥️ At gigawatt scale, it becomes a full building of stranded compute capacity 🏢

    This would matter less if new grid capacity were easy to obtain. It is not. In major hubs, large connections can take many years to secure ⏳ While the industry waits for new megawatts, existing megawatts quietly disappear inside legacy conversion chains. This is an architecture problem 🧠

    An 800 VDC architecture cuts the conversion chain down to a minimal path ⚡ Converting medium voltage AC directly to high voltage DC and distributing DC natively turns more incoming watts into usable compute instead of heat. The ecosystem is already shifting 🚀 Next generation AI racks, high density power shelves, and 800 VDC reference designs are entering deployment now.

    The most valuable megawatt is often not the next one you are trying to connect. It is the one you can recover inside your existing facility 💡 No permits 📄 No queue 🚦 No multi year wait ⏱️ Just better power architecture ⚡

    #DataCenterDesign #800VDC #AIInfrastructure #EnergyEfficiency #DataCenterDC #DataCenterSST #SolarEdgeSST #DataCenter800VDC
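The 5-to-7-percent arithmetic above can be sketched in a few lines of Python. The per-stage efficiencies below are illustrative assumptions chosen so the total loss lands in the range the post cites; they are not measured figures for any specific product.

```python
# Sketch of the "Hidden Megawatt" arithmetic: multiply per-stage
# efficiencies to see how much facility power actually reaches the chip.

def delivered_power(facility_mw, stage_efficiencies):
    """Power (MW) remaining after each conversion stage in the chain."""
    p = facility_mw
    for eta in stage_efficiencies:
        p *= eta
    return p

# Legacy chain: ~5 conversions between grid and GPU.
# 98.7% per stage is an assumed value, not a vendor datasheet number.
legacy = delivered_power(100.0, [0.987] * 5)

# Minimal DC path: one MV-AC to HV-DC stage plus point-of-load conversion,
# each assumed at 99% for illustration.
hvdc = delivered_power(100.0, [0.99, 0.99])

print(f"legacy chain delivers   {legacy:.1f} MW")
print(f"minimal DC chain gives  {hvdc:.1f} MW")
print(f"recovered capacity      {hvdc - legacy:.1f} MW")
```

With these assumptions the legacy chain loses about 6 MW of a 100 MW facility, which is the "row of GPU racks you paid for but never use" in the post.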

  • Piet Vanassche

    Power System Architect & Co-founder @ Triphase | Advancing Model-Based Control & System Design | Entrepreneur Driving Innovation in Scalable Power Conversion


    Data centers are rapidly becoming a major driver for DC power distribution and DC microgrids. Hyperscale facilities consume 20 to 100 MW each, with most of that power ultimately delivered at ~1 V at the point-of-load for xPUs and memory. To improve efficiency and to reduce distribution cost, designers push voltage levels upward, with the conversion to 1 V as close to the silicon as possible. Hereby, the power conversion system architecture is of crucial importance!

    Across the industry, facility-scale DC distribution is converging on either +/-400VDC (Google, Meta, Microsoft) or 800VDC (NVIDIA). In future architectures, these DC buses will likely be fed from the medium voltage AC grid via solid-state transformers (SSTs). Today, the power delivery from 800 V to 1 V is envisioned as 800 V → 48 V → 12 V → 1 V. A rack-level conversion from 800 V to 48 V is followed by a tray- or GPU card-level conversion from 48 V to 12 V. The final conversion from 12 V to 1 V happens on the GPU card, as close to the silicon as possible. Exact voltages may vary a bit.

    This structure evolved from traditional AC-fed architectures. However, it has two big drawbacks: it still requires substantial copper at rack- and tray-level, and it has multiple conversion stages. Both add loss and cost. Skipping a stage, for example jumping from 800 V directly to ~12 V, sounds attractive but creates challenges for converter semiconductors and magnetics.

    A multi-module series architecture may be more promising! On the high-voltage side, modules connect in series, naturally dividing the input bus (e.g., 800 V into ~100 V segments). Each module converts directly to 12 V, a much more favorable design point for both semiconductors and magnetics. These modules can be integrated directly on the GPU board, minimizing the amount of copper needed to transport power within a rack. A series architecture taps into low-voltage power devices, which are more efficient and more reliable than high-voltage ones. Moreover, power converter transformer ratios are less extreme, which simplifies magnetics. On the flip side, a series architecture requires more complex communication and control. But embedded digital control and high-speed communication are becoming inexpensive, making the control challenge solvable.

    Power system design is ultimately about managing the “conservation of misery”. Design challenges remain, but you can choose where the burden sits. The arrival of smart, all-digital power modules unlocks new possibilities to redistribute that burden more intelligently.

    #DC, #800V, #microgrids, #datacenters, #nvidia
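The stage-count argument above is easy to make concrete. The sketch below compares a three-stage 800 V → 48 V → 12 V → 1 V cascade against the series-module idea (input bus divided into ~100 V segments, each converting directly to 12 V). All per-stage efficiency numbers are illustrative assumptions for the arithmetic, not figures from the post or any vendor.

```python
# Compare cascaded efficiency of the conventional chain vs. a
# series-module architecture that eliminates one conversion stage.
import math

def chain_efficiency(stage_etas):
    """Overall efficiency of a chain of converters in cascade."""
    return math.prod(stage_etas)

# Conventional cascade: rack (800->48), tray (48->12), on-card (12->1).
# Efficiencies are assumed values; the final 12->1 V stage dominates.
cascade = chain_efficiency([0.975, 0.97, 0.92])

# Series architecture: modules in series each see ~100 V of the 800 V bus
# and convert directly to 12 V, then the same on-card 12->1 V stage.
modules_in_series = 800 // 100   # eight modules, ~100 V per module
series = chain_efficiency([0.98, 0.92])

print(f"input per series module: ~{800 / modules_in_series:.0f} V")
print(f"cascade efficiency: {cascade:.1%}")
print(f"series efficiency:  {series:.1%}")
```

Even with favorable assumptions for every stage, removing one conversion step buys a few percentage points end to end, which is why the copper-plus-stages trade-off in the post matters at megawatt rack scale.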

  • Eric Meier

    Supervisor - Planning Modeling at ERCOT | Power Systems Engineer and Modeler | PE


    As our need for computing resources continues to increase, servers are consuming more and more power. Racks are expected to eventually consume a megawatt of power, which requires rethinking the data center power architecture. This movement envisions DC distribution at 400 or 800 V DC that is converted from AC at the point of connection with the grid and extends to the server. Two architectures are emerging: NVIDIA’s monopolar 800VDC end-to-end architecture and the OCP’s Mt. Diablo ±400VDC specification. This design change allows several levels of power conversion to be eliminated, improving energy efficiency and reducing the footprint needed for electrical infrastructure.

    Achieving this vision requires facility-level AC to DC conversion devices. Rising to meet this challenge are solid state transformers. Solid state transformers, or SSTs, are power electronic devices that convert AC to AC or AC to DC. The transformer has been a fundamental component of the grid since its earliest days: a simple magnetic device that transforms voltage. An SST replaces that with a rectifier, converter, and inverter. For data center applications, SSTs can convert AC to 400 V or 800 V DC to supply the DC power distribution system. They can also be paired with battery backup units, or BBUs, to enable the facility to meet ride-through requirements and manage demand fluctuations that can induce forced oscillations.

    Numerous players are emerging to manufacture SSTs. There are new entrants like Heron Power and DG Matrix along with existing companies like SolarEdge. With SSTs we see a new implementation of one of the most fundamental components on the grid. Conventional transformers are simple magnetic devices, but since SSTs are power electronic devices, it is unknown whether they will last as long. They also have implications for the power grid that need to be studied.

    We have had numerous challenges with power electronic generation, such as IBRs, that we don’t want to repeat with power electronic transformers. We learned that we need proper dynamic and EMT models for these generators, and we will need the same for these SSTs. We also need to characterize their protection and fault behavior and study how it interacts with the grid. Finally, we need to ensure that grid codes are updated as the transformer moves from a passive device to an actively controlled power electronic device.
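The SST-plus-BBU pairing mentioned above comes down to simple sizing arithmetic: how much energy the battery must hold for a given ride-through window, and how much DC bus current its interface must source. The facility load, window, and bus voltage below are assumed illustrative values, not numbers from the post.

```python
# Back-of-envelope sizing for a battery backup unit (BBU) hanging off
# an SST's DC bus. All inputs are illustrative assumptions.

def bbu_energy_kwh(load_kw, ride_through_s):
    """Energy the battery must supply to carry the load for the window."""
    return load_kw * ride_through_s / 3600.0

def bbu_current_a(load_kw, bus_vdc):
    """DC bus current the battery interface must source at full load."""
    return load_kw * 1000.0 / bus_vdc

load_kw = 20_000      # assumed 20 MW data hall
window_s = 120        # assumed two-minute ride-through until generators start

print(f"energy required: {bbu_energy_kwh(load_kw, window_s):.0f} kWh")
print(f"current at 800 VDC: {bbu_current_a(load_kw, 800):.0f} A")
print(f"current at 400 VDC: {bbu_current_a(load_kw, 400):.0f} A")
```

The current figures also hint at why the bus voltage choice (±400 V vs. 800 V) matters: halving the voltage doubles the current the battery interface and busbars must carry for the same load.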

  • Amir Olajuwon | Mission Critical Infrastructure Data Center Systems • Commissioning • SME

    Commissioning Leader & Construction Executive | Mission Critical MEP & QA/QC | AI/Hyperscale Data Centers | Owner’s Rep & GC | Multi-Sector: Semiconductor, Oil & Gas, Aerospace


    800 VDC Isn’t the Future of Data Centers. It’s the Pressure Response to What AI Just Broke.

    For years, we optimized around a stable model: Utility AC → UPS → PDU → Rack → Server

    That model worked… Until rack densities stopped behaving. Now we’re seeing:
    • 80kW racks becoming normal
    • 150kW+ racks entering production
    • AI clusters pushing infrastructure beyond design assumptions

    At that point, the problem isn’t just cooling. It’s the power architecture itself.

    ⸻

    Here’s where high-voltage DC enters the conversation: Utility MV → MV switchgear → step-down transformer → centralized rectification / DC UPS → 380–800 VDC bus → DC distribution → rack → point-of-load conversion → GPU

    This isn’t theoretical anymore. It’s being evaluated because the traditional AC path is hitting limits in:
    • Conductor sizing
    • Conversion losses
    • Space constraints
    • System complexity at scale

    ⸻

    What HVDC actually solves:
    • Higher voltage → lower current → reduced I²R losses
    • Fewer conversion stages across the power chain
    • Native alignment with battery storage and on-site generation
    • Cleaner distribution deeper into high-density environments

    And most importantly: It moves efficiency closer to the silicon, where the real load lives.

    ⸻

    But let’s be clear: this isn’t a free win. DC introduces real challenges:
    • Fault protection is significantly harder (no zero crossing)
    • Arc flash behavior changes the entire protection philosophy
    • Limited standards and fragmented vendor ecosystem
    • Requires a different level of engineering discipline to execute correctly

    This is why AC still dominates.

    ⸻

    So where does 800 VDC actually make sense? Not everywhere. It makes sense where:
    • Rack densities exceed 100kW
    • AI/HPC clusters dominate the load profile
    • Battery and generation are co-located
    • Efficiency gains justify architectural complexity

    ⸻

    380 VDC is proven. 800 VDC is emerging. This isn’t about replacing AC. It’s about recognizing that AI has changed the load profile faster than the industry has adapted the power path.

    ⸻

    The real question isn’t: “Is DC better than AC?” It’s: At what scale does AC stop being efficient enough?

    ⸻

    #DataCenters #AIInfrastructure #CriticalPower #800VDC #EnergyEngineering #MissionCritical #ElectricalEngineering
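The "higher voltage → lower current → reduced I²R losses" point above is just Ohm's law applied to a rack feeder. The sketch below compares feeder current and resistive loss for a 150 kW rack fed at 415 VAC three-phase versus 800 VDC; the rack power, feeder resistance, and AC voltage are assumed illustrative values.

```python
# Feeder current and I^2*R loss for one rack at two distribution voltages.
# Resistance per conductor run is an assumed round number, not a cable spec.

def feeder_current_a(power_kw, volts, ac_three_phase=False):
    """Line current drawn by the rack at the given distribution voltage."""
    if ac_three_phase:
        # P = sqrt(3) * V_line * I_line (unity power factor assumed)
        return power_kw * 1000.0 / (3 ** 0.5 * volts)
    return power_kw * 1000.0 / volts

def i2r_loss_w(current_a, resistance_ohm, conductors):
    """Resistive loss summed over the current-carrying conductors."""
    return conductors * current_a ** 2 * resistance_ohm

rack_kw = 150.0
r_run = 0.001  # assumed 1 milliohm per conductor run

i_ac = feeder_current_a(rack_kw, 415, ac_three_phase=True)
i_dc = feeder_current_a(rack_kw, 800)

print(f"415 VAC 3-ph: {i_ac:.0f} A/line, loss {i2r_loss_w(i_ac, r_run, 3):.0f} W")
print(f"800 VDC:      {i_dc:.0f} A,      loss {i2r_loss_w(i_dc, r_run, 2):.0f} W")
```

The DC case carries less current per conductor and uses fewer conductors, which is where the conductor-sizing and I²R arguments in the post come from; the exact savings depend entirely on the real cable runs and power factor.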
