𝗧𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗔𝗜 𝗶𝘀... 𝗥𝗔𝗗𝗜𝗢𝗔𝗖𝗧𝗜𝗩𝗘? ☢️

Over the last few weeks, Microsoft, AWS, Google and Oracle have all announced heavy investments in nuclear energy. These investments are driven by the need for reliable, carbon-free energy to power AI and data center operations (in fact, the combined energy consumption of all data centers worldwide would rank 17th among countries). Power demand from AI is expected to keep rising, and renewables seem unable to keep up.

𝗛𝗲𝗿𝗲 𝗶𝘀 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗽𝗶𝗰𝘁𝘂𝗿𝗲:

𝗔𝗺𝗮𝘇𝗼𝗻:
• Anchoring a $500 million investment in X-energy for small modular reactor (SMR) development.
• Partnering with Energy Northwest to develop four SMRs in Washington state.
• Collaborating with Dominion Energy to explore SMR development near the North Anna nuclear power station in Virginia.
• Acquired a $650 million nuclear-powered data center in Pennsylvania.

𝗚𝗼𝗼𝗴𝗹𝗲:
• Signed a power purchase agreement last week with Kairos Power for multiple SMRs expected to be operational by 2030.
• Aiming to bring 500 megawatts of nuclear power to the grid through this partnership.

𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁:
• Financing the revival of the decommissioned Three Mile Island nuclear power plant in Pennsylvania.
• Committed to purchasing energy from Helion Energy, a startup aiming to establish the world's first nuclear fusion plant by 2028.

𝗢𝗿𝗮𝗰𝗹𝗲:
• Secured building permits for three small modular reactors (SMRs) to power a new data center (2030), which is expected to require over 1 gigawatt of electrical power.
• The initiative reflects Oracle's strategy to use advanced nuclear technology as a reliable, carbon-neutral energy source, aiming to improve the efficiency and sustainability of its extensive data center operations globally.

𝗧𝗵𝗲𝗿𝗲 𝗶𝘀 𝗼𝗻𝗲 𝗰𝗼𝗺𝗺𝗼𝗻 𝘁𝗿𝗲𝗻𝗱 𝗮𝗺𝗼𝗻𝗴 𝗮𝗹𝗹 𝗳𝗼𝘂𝗿 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀: A strong focus on 𝗦𝗺𝗮𝗹𝗹 𝗠𝗼𝗱𝘂𝗹𝗮𝗿 𝗥𝗲𝗮𝗰𝘁𝗼𝗿 (𝗦𝗠𝗥) 𝘁𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆 as a central part of their nuclear energy strategy. SMRs are small compared to conventional nuclear power stations: they are expected to be built on a production line and delivered in a few truckloads per reactor. This could help companies meet climate commitments and growing energy needs at the same time. It also marks a significant shift from previous investments in wind and solar, driven by the energy appetite of AI.

𝗜𝘀 𝗮 𝗻𝘂𝗰𝗹𝗲𝗮𝗿 𝗿𝗲𝗻𝗮𝗶𝘀𝘀𝗮𝗻𝗰𝗲 𝘂𝗻𝗱𝗲𝗿𝘄𝗮𝘆? #climatechange #AI #datacenter
Data Center Equipment
-
Everyone's chasing data center land. Almost everyone is missing the real constraint. It's not fiber. It's not even land. It's power.

U.S. Interior Secretary Doug Burgum said at the Prologis conference: "To win the AI arms race against China, we've got to figure out how to build these artificial intelligence factories close to where the power is produced, and just skip the years of trying to get permitting for pipelines and transmission lines."

Translation: The next generation of data centers won't be built where the land is cheap. They'll be built where the power is available.

Three implications for dirt investors:

1. Nuclear Proximity = New Premium: Amazon already signed deals with Dominion Energy near the North Anna nuclear power station in Virginia and expanded partnerships with Talen Energy at the Susquehanna nuclear plant. Sites within transmission distance of existing nuclear facilities just became exponentially more valuable.

2. Warehouse Conversions Accelerate: If Prologis is eyeing its 6,000 buildings for data center conversion, every industrial site with surplus power capacity needs re-evaluation. What looks like a struggling warehouse today might be a data center tomorrow.

3. Grid Capacity > Geographic Desirability: Constellation Energy CEO Joseph Dominguez noted that data economy customers "want to run their systems 24-7" with "firm pricing so that they know the price for energy for 20 years." Long-term power contracts are becoming the new land entitlements.

But here's what nobody's talking about: the same power constraints driving this opportunity are also creating massive project risks. According to a recent CoStar analysis, data centers will account for up to 60% of total power load growth through 2030. But there's a timing mismatch: data centers take 2-3 years to build, while power system upgrades take 8 years. That gap is forcing developers to either wait or find sites with existing capacity.

The Community Resistance Factor

Data Center Watch estimates $64 billion in data center projects were blocked or delayed over a recent two-year period. There are now 142 activist groups across 24 states organizing against data center development. Northern Virginia alone, the nation's largest data center market, has 42 activist groups fighting projects. Reasons cited: water consumption, higher utility bills, noise, decreased property values, loss of open space.

Translation for land investors: sites with existing power capacity + community support just became exponentially more valuable than sites with just land and zoning. The power infrastructure thesis isn't just about finding available capacity. It's about finding that capacity in counties that actually want data centers. Not every market will roll out the welcome mat.

Are you evaluating community sentiment alongside power infrastructure access?
-
Global data centre power demand is expected to reach 130 GW by 2028, growing at 16% per year. But infrastructure isn't built in CAGR charts. It's built with copper, transformers, and time. You can't scale compute if your logistics don't scale first.

- Transformer lead times now run 18–30 months
- High-purity copper prices fluctuate 20–40% annually
- PDUs and switchgear bottleneck at tier-2 fabs in Malaysia or Taiwan

In the GCC, grid access and hardware supply are the real blockers. You can't fix this reactively. Therefore, do this:
➝ Engage EWEC and ADPower at design freeze (+12 months)
➝ Tie permits to confirmed upstream generation capacity
➝ Source servers from Dell Technologies, Inspur Group, or others
➝ Use dual-rated transformers with voltage-class fallback mapped
➝ Lock PDU/UPS supply with Vertiv or Schneider Electric
➝ Buffer SKUs at Jebel Ali, King Abdullah Port, and Chennai
➝ Hold 90+ days of bonded spares under warehouse agreements
➝ Use Foxconn India, Tata Electronics, or Modon for sub-assembly
➝ Pre-contract backup generation with Cummins Inc. or others
➝ Track LME copper pricing and feed it into your BOM risk models
➝ Enforce fallback mapping in ServiceNow across critical components

Model power constraints:
⤦ Simulate dispatch curves for NEOM's wind and solar resources.
⤦ Map load by rack, season, and cooling profile.
⤦ Validate UPS and chiller curves against real site-level energy windows.

Hardware gets built in Asia. Time gets lost in transit. Power gets delayed at permits. Your design isn't complete until every component has a fallback... and every kilowatt has a Plan B.

Save this if you're planning anything hyperscale.
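One way to operationalise the lead-time warning above is to work backwards from the energization date: for each long-lead item, the latest viable order date is the energization date minus the lead time. A minimal Python sketch; the component names, lead times, and dates are purely illustrative assumptions, not vendor quotes:

```python
# Sketch: flag long-lead items against a target energization date.
# All component names, lead times, and dates below are illustrative
# assumptions for demonstration, not real procurement data.
from datetime import date, timedelta

TARGET_ENERGIZATION = date(2028, 1, 1)

# (component -> assumed lead time in months)
LEAD_TIMES = {
    "MV transformer": 30,   # post cites 18-30 month transformer lead times
    "switchgear": 18,
    "PDU": 12,
    "UPS": 10,
}

def latest_order_date(component: str) -> date:
    """Latest date an order can be placed and still meet energization."""
    months = LEAD_TIMES[component]
    return TARGET_ENERGIZATION - timedelta(days=30 * months)

def at_risk(today: date = date(2026, 1, 1)) -> list[str]:
    """Components whose order deadline has already passed."""
    return [c for c in LEAD_TIMES if latest_order_date(c) < today]
```

With these assumed numbers, a transformer for a 2028 energization had to be ordered back in mid-2025, which is exactly why reactive procurement fails.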
-
As grid operators and planners deal with a wave of new large loads on a resource-constrained grid, we need fresh approaches beyond just expecting reduced electricity use under stress (e.g. via the recent PJM flexible load forecast or Texas SB 6).

While strategic curtailment has become a popular talking point for connecting large loads more quickly and at lower cost, it overlooks a more flexible, grid-supportive strategy for large load operators. Especially for loads that cannot tolerate any curtailment risk (like certain #datacenters), co-locating #battery #energy storage systems (BESS) in front of the load merits serious consideration. This shifts the paradigm from "reduce load at the utility's command" to "self-manage flexibility." It's BYOB: Bring Your Own Battery, and put it in front of the load.

Studies have shown that if a large load agrees to occasional grid-triggered curtailment, this unlocks more interconnection capacity within our current grid infrastructure. But a BYOB approach can unlock value without the compromise of curtailment, essentially allowing a load to meet grid flexibility obligations while staying online.

Why do this? For data centers (DCs), it's about speed to market and enhanced reliability. The avoided network upgrade delays and costs, along with the value of reliability, will in many cases justify the BESS expense. The BYOB approach decouples flexibility from curtailment risk with #energystorage.

Other benefits of BYOB include:
- Increasing the feasible number of interconnection locations.
- Controlling coincident peak costs, demand charges, and real-time price spikes.
- Turning new large loads into #grid assets by improving load shape and adding the ability to provide ancillary services.

No solution is perfect. Some of the challenges with the BYOB approach include:
- The load developer bears the additional capital and operational cost of the BESS.
- Added complexity: integrating a BESS with the grid on one side and a microgrid on the other is more complex than simply operating a front-of-the-meter (FTM) or behind-the-meter (BTM) BESS.
- Increased need for load coordination with grid operators to maintain grid reliability.

The last point, large loads needing to coordinate with grid operators, is coming regardless. A recent NERC white paper shows how fast-growing, high-intensity loads (like #AI, crypto, etc.) bring new #electricity reliability risks when there is no coordination. The changing load of a real DC shown in the figure below is a good example. With more DC loads coming online, operators would be severely challenged by multiple >400 MW loads ramping up or down with no advance notice. BYOBs can manage this issue while also handling the high-frequency load variations seen in the second figure.

References in comments.
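The BYOB sizing logic can be sketched in a few lines: the battery must discharge the curtailed share of the load for the duration of the grid event, so the grid sees a reduced draw while the IT load runs flat. A hedged example; the load size, curtailment depth, duration, and efficiency figure are assumed for illustration:

```python
# Sketch of the BYOB idea: size a front-of-load BESS so the facility
# can honor a grid curtailment request while the IT load stays flat.
# All numbers (load, curtailment depth/duration, efficiency) are
# illustrative assumptions.

def required_bess_energy_mwh(load_mw: float,
                             curtail_fraction: float,
                             duration_h: float,
                             round_trip_eff: float = 0.9) -> float:
    """Energy the BESS must hold so the grid sees a curtailed draw
    while the load keeps running at full power."""
    shortfall_mw = load_mw * curtail_fraction   # power the grid withholds
    # One-way discharge losses approximated as sqrt of round-trip efficiency.
    one_way_eff = round_trip_eff ** 0.5
    return shortfall_mw * duration_h / one_way_eff

# Example: a 400 MW data center asked to shed 25% for 2 hours
energy_mwh = required_bess_energy_mwh(400, 0.25, 2.0)   # ~211 MWh
```

The same function makes the trade-off in the challenges list concrete: every additional hour of curtailment exposure the operator wants to ride through adds directly to BESS capital cost.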
-
⚡ What really keeps a Tier III data center running 24/7, even during failures?

It's not just backup power. It's how power flows through a fully redundant, concurrently maintainable design. Let's break down Tier III data center power flow in a simple way 👇

🔁 Tier III isn't about zero failures; it's about zero downtime during maintenance or single faults. Here's how the power path makes that possible:

🔌 1. Dual Utility / Source Paths
➡️ Independent A & B power paths
➡️ Either path can carry the full IT load
➡️ No single point of failure

🔋 2. UPS with N+1 Redundancy
➡️ Continuous, clean power to the IT load
➡️ Batteries bridge the gap during utility loss
➡️ Maintenance possible without shutdown

⚙️ 3. Generator Backup
➡️ Starts automatically during prolonged outages
➡️ Supports the full load via either path
➡️ Fuel redundancy ensures extended runtime

🧱 4. Switchgear & PDUs
➡️ Power routed through redundant switchboards
➡️ PDUs distribute power to racks independently
➡️ Faults isolated without affecting IT equipment

💻 5. IT Load (Dual-Corded Equipment)
➡️ Servers powered by both A & B paths
➡️ Loss of one path = no service interruption

💡 Tier III data centers are designed for concurrent maintainability:
👉 Any single component can be taken out of service without impacting operations. This is why Tier III remains the industry standard for enterprise and mission-critical facilities.

🔎 If you work with data centers, power systems, or critical infrastructure, understanding this power flow is essential.

♻️ Repost to share with your network if you find this useful.
🔗 Follow Ashish Shorma Dipta for more posts like this.

#DataCenter #PowerDistribution #ElectricalEngineering #DataCenterDesign #PowerSystems #DataCenterOperations
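The payoff of the dual-path and N+1 design above can be shown with back-of-envelope availability math: two independent paths fail together only with the product of their individual failure probabilities, and an N+1 UPS bank fails only when two or more modules are down at once. A sketch with purely illustrative failure probabilities (not Tier-certified figures):

```python
# Back-of-envelope redundancy math for the Tier III power path.
# Failure probabilities are illustrative assumptions, not measured data.
from math import comb

def dual_path_unavailability(p_path_fail: float) -> float:
    """Probability both independent A & B paths are down simultaneously."""
    return p_path_fail ** 2

def nplus1_ups_failure(n_required: int, p_module_fail: float) -> float:
    """N+1 redundancy: the bank fails only if 2+ of the n+1 modules fail.

    Assumes independent module failures (binomial model)."""
    n_total = n_required + 1
    p_ok = sum(comb(n_total, k)
               * p_module_fail ** k
               * (1 - p_module_fail) ** (n_total - k)
               for k in range(2))   # 0 or 1 simultaneous failures tolerated
    return 1 - p_ok

# A single path down 1% of the time -> dual-path outage only ~0.01% of the time
dual = dual_path_unavailability(0.01)
```

The independence assumption is exactly what the design rules above protect: shared switchgear or a common PDU would correlate the paths and erase the squared-probability benefit.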
-
Your 2026 Data Centre Infrastructure Roadmap
Signal #3 (of 12): "The Power Wars" ⚡

Who's alive and who's dead in 2026? AI isn't running out of ideas; it's running out of electrons. With 7-year grid queues → Bring Your Own Grid. That's not just efficiency → that's survival.

Three decades of building hyperscale data centres have taught me: bad power assumptions bring billion-dollar headaches. It's time to understand these three positions:
1. Grid-Dependent = Growth-Capped
2. Grid-Independent = Untouchable
3. Grid-Optional = Unstoppable

𝗧𝗵𝗲 𝗚𝗿𝗶𝗱 𝗝𝘂𝘀𝘁 𝗕𝗲𝗰𝗮𝗺𝗲 𝗬𝗼𝘂𝗿 𝗕𝗶𝗴𝗴𝗲𝘀𝘁 𝗖𝗼𝗺𝗽𝗲𝘁𝗶𝘁𝗼𝗿
→ 7-year interconnection queues (Princeton ZeroLab: bypass for power within 2 years)
→ 270 kW racks shipping now, ~480 kW next year (air cooling is dead; it's liquid or lose)
→ AI campuses = 6x what grids were designed for (if one cluster trips, entire cities will flicker)

𝗕𝗿𝗶𝗻𝗴 𝗬𝗼𝘂𝗿 𝗢𝘄𝗻 𝗚𝗿𝗶𝗱 𝗜𝘀 𝗧𝗵𝗲 𝗡𝗲𝘄 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆
→ 1% are fully self-powered today (27% by 2030 = the winners' circle)
→ 38% will use onsite generation as their primary source (the grid becomes backup, not a lifeline)
→ 85% going behind-the-meter (solar, batteries, turbines; anything but delay)

𝗪𝗵𝗮𝘁 𝟮𝟬𝟮𝟲 𝗟𝗼𝗼𝗸𝘀 𝗟𝗶𝗸𝗲
→ Power becomes your #1 constraint (not chips, talent, or capital; it's all about electrons)
→ "Power date" replaces "go-live date" (control it or it controls you)
→ Communities revolt against mega-campuses (unless you bring jobs, heat reuse, tax revenue)

𝗧𝗵𝗲 𝗛𝘆𝗽𝗲𝗿𝘀𝗰𝗮𝗹𝗲𝗿𝘀 𝗔𝗿𝗲 𝗔𝗹𝗿𝗲𝗮𝗱𝘆 𝗠𝗼𝘃𝗶𝗻𝗴
→ AWS: Fuel cells + behind-the-meter generation
→ Google: Geothermal + 24/7 carbon-free power
→ Meta: Massive solar farms + battery storage
→ Microsoft: Securing dedicated power plants

They're not building data centres anymore. They're building power companies.

𝗬𝗼𝘂𝗿 𝟮𝟬𝟮𝟲 𝗣𝗼𝘄𝗲𝗿 𝗣𝗹𝗮𝘆𝗯𝗼𝗼𝗸
→ Map every site by time-to-power (treat megawatts like revenue)
→ Design for hybrid from day one (leave space for turbines, batteries, future SMRs)
→ Use flexibility to jump the queue (batteries + solar = 5 years saved)
→ Hire energy people NOW (they're worth more than your architects)

The winners in 2026 won't have the best GPUs. They'll have their own electrons.

𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲
If your company sees power as someone else's problem, in 2026 you'll discover it is THE problem.

𝗬𝗼𝘂𝗿 𝗧𝘂𝗿𝗻: Who's controlling your power dates: you, or your utility provider?

♻️ Repost if you agree the grid has to become optional
✅ Follow me, Guy Massey, and get your Roadmap Signals for success in 2026
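"Map every site by time-to-power" can be made concrete with a trivial ranking: sort candidates by months until energization rather than by land cost. A sketch with hypothetical site names and numbers:

```python
# Sketch of the "map every site by time-to-power" playbook item.
# Site names, land costs, and timelines are hypothetical examples.

sites = [
    {"name": "Site A", "land_cost_musd": 5,  "months_to_power": 84},  # grid queue
    {"name": "Site B", "land_cost_musd": 12, "months_to_power": 24},  # behind-the-meter
    {"name": "Site C", "land_cost_musd": 8,  "months_to_power": 48},
]

def rank_by_time_to_power(candidates: list[dict]) -> list[dict]:
    """Cheapest land loses if its electrons arrive years later."""
    return sorted(candidates, key=lambda s: s["months_to_power"])

best_site = rank_by_time_to_power(sites)[0]["name"]
```

In this toy ranking the most expensive parcel wins because it powers up five years sooner, which is the whole point of treating megawatts like revenue.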
-
🔌 The state-of-the-art power system in the data centre uses 400 V AC connected to the MV grid via a low-frequency transformer (LFT) and distributed power factor correction (PFC) rectifiers at the rack level, achieving an overall efficiency of approximately 97.1% from the MVAC input to the rack-level 400 V / 48 V DC-DC conversion. Increasing the AC distribution voltage to 690 V may enhance the overall efficiency to about 97.8% due to reduced distribution losses, as losses in identical busbars decrease with the square of the voltage. PFC rectifiers suitable for 690 V AC can be employed with three-level topologies, maintaining high conversion efficiency. Alternatively, an 800 V DC (±400 V DC) distribution system can result in slightly lower distribution losses than the 690 V AC system. DC also brings other advantages, such as the straightforward and efficient integration of battery energy systems.

💡 In principle, three conceptual approaches to MVAC-LVDC conversion can be considered.

The first retains the LFT and centralises the PFC rectifier functionality in a high-power SiC unit. This approach achieves an MVAC-LVDC conversion efficiency of approximately 98.2% and an overall efficiency of around 97.9%, with an estimated power density of about 0.25 kW/dm³.

The second option employs robust 12-pulse rectifier systems complemented by active filters (AFs) to achieve power factor correction, forming a hybrid transformer. This partial-power-processing technique enables a high MVAC-LVDC conversion efficiency of approximately 98.5% and an overall efficiency of about 98.2%, with a power density estimated at 0.22 kW/dm³.

Finally, solid-state transformers (SSTs) with medium-frequency transformers (MFTs) represent a fully controllable option. Current MVAC-LVDC SST prototypes have demonstrated full-load efficiencies of around 98%, possibly reaching 98.5%, resulting in an overall efficiency of approximately 97.7% or 98.2%. However, the power density of the overall SST system based on modular topologies tends to be lower than that of the hybrid transformer solution, despite the very high power density of the individual modules.

#solidstate #powerelectronics #datacenters #lowvoltage #directcurrent #efficiency #powerdensity
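The claim that busbar losses fall with the square of the distribution voltage follows directly from P_loss = I²R with I = P/V at constant delivered power. A small numeric sketch; the 1 MW feed and 1 mΩ busbar resistance are illustrative values, not figures from the post:

```python
# Sketch of why distribution losses scale as 1/V^2: for the same
# delivered power P over the same busbar resistance R, the current is
# I = P/V, so losses I^2 * R shrink quadratically as voltage rises.
# The 1 MW feed and 1 mOhm busbar below are illustrative assumptions.

def busbar_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current_a = power_w / voltage_v          # same delivered power
    return current_a ** 2 * resistance_ohm   # I^2 R conduction losses

P, R = 1_000_000, 0.001                      # 1 MW feed, 1 mOhm busbar
loss_400 = busbar_loss_w(P, 400, R)          # 6250 W at 400 V
loss_690 = busbar_loss_w(P, 690, R)          # ~2100 W at 690 V
ratio = loss_400 / loss_690                  # (690/400)^2, about 3x lower
```

The roughly 3x reduction in conduction losses on identical copper is the mechanism behind the quoted ~97.1% to ~97.8% overall efficiency improvement when moving from 400 V to 690 V distribution.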
-
From silent risk to silent victory.

Not long ago, our data center environments were still carrying a hidden vulnerability: several critical devices were running on single power sources. It was a legacy design choice, one that no longer aligned with the standards of resilience and high availability we strive for. We knew it was time to raise the bar.

Today, I'm proud to share that every single device in our data centers now runs on dual power, completely eliminating single points of failure. And we didn't stop there: we also completed a full PDU tech refresh with zero impact on live systems.

Here's why this matters: doing this in a highly critical, always-on environment, such as banking and finance, is not just an upgrade. It's a milestone. It means every change had to be precise, risk-aware, and flawlessly executed.

The impact?
• Rock-solid infrastructure
• Reduced operational risk
• Stronger service continuity for mission-critical operations

Sometimes the biggest wins don't make noise. But they change everything.

#DataCenter #Infrastructure #HighAvailability #BankingIT #Resilience #TechLeadership #ZeroDowntime #OperationalExcellence #ITTransformation
-
Last month, I broke down Alberta's new rules for large loads. One clause deserves its own spotlight:

Large loads must ride through positive-sequence phase-angle jumps of up to 25° in a single cycle.

This has always been a generator requirement. Never a load requirement. Until now.

Why it matters:
➤ A 25° phase jump in a 60 Hz system means the waveform shifts in ~1.16 ms. Enough to confuse relays, desynchronize PLLs in converters, or crash UPS systems.
➤ During the Jan 23, 2023 Odessa fault, several solar and wind parks tripped despite voltages being within LVRT limits, caused by inverter synchronism failure under phase-angle jumps.
➤ Alberta recognised that data-centre rectifiers and UPS inverters use the same PLL chips. If generators couldn't ride through the angle step, neither will a 300 MW load once it is >60% inverter-fed.
➤ For decades, loads were passengers: when the grid shook, they could vanish.
➤ Now Alberta says: if you want to connect at hundreds of MW, you stay on, you hold steady, you behave like generation.

What this means in practice:
• A hyperscale data centre must keep operating through a disturbance strong enough to throw inverters out of lock-step.
• By imposing generator-grade ride-through on loads, the AESO is collapsing the old divide: stability is everyone's job.
• Paper specs won't cut it. Centres must supply validated EMT and phasor-domain models, tested against real faults.
• Fail to prove compliance? You don't connect. The bar has shifted from permission to connect → proof you can stabilise.

The bigger picture: Alberta is a 12.4 GW system. A 500 MW data centre isn't background noise; it's a system shock.
• Lose one invisibly, and planners are chasing ghost contingencies that were never logged.
• Spain has already seen what hidden dynamics can do.
• Ireland is straining under hyperscale clustering, forcing curtailments and planning freezes. The UK isn't far behind as AI demand ramps.
• ERCOT and PJM are next. The precedent won't stay in Alberta for long.

AESO's 2025 Roadmap is explicit: instead of curtailing new load or blanketing the grid with synchronous condensers, "we will require the load itself to stay online for the same disturbances we expect generators to survive." Short term, that's the cheaper path. Long term, synchronous condensers are still on the table. But for now, hyperscale demand must carry its share of stability.

My view: this is more than a technical clause. It's the clearest sign yet that programmable demand has crossed the line from consumer to grid actor. AESO's message is blunt:
👉 Hyperscale demand is no longer a disturbance to manage. It's a stability resource to command.

The question isn't whether other grids will follow. It's this:
👉 Should hyperscalers everywhere be forced to ride through like generation, or is Alberta setting an impossible bar?

#AI #DataCenters #GridStability #PowerSystems #Policy #SystemStrength #TransmissionAndDistribution
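The ~1.16 ms figure for a 25° jump at 60 Hz is easy to verify: one 60 Hz cycle lasts ~16.67 ms, and 25/360 of a cycle is ~1.157 ms. A one-function sketch of that arithmetic:

```python
# Sanity check of the post's figure: a 25 degree positive-sequence
# phase-angle jump in a 60 Hz system corresponds to roughly 1.16 ms
# of waveform shift (25/360 of a ~16.67 ms cycle).

def phase_jump_time_ms(degrees: float, freq_hz: float = 60.0) -> float:
    """Time-domain shift equivalent of a phase-angle jump."""
    cycle_ms = 1000.0 / freq_hz            # one full cycle in milliseconds
    return cycle_ms * degrees / 360.0

shift_ms = phase_jump_time_ms(25)          # ~1.157 ms, i.e. ~1.16 ms
```

A sub-1.2 ms shift is far faster than typical PLL bandwidths can track cleanly, which is why the clause targets converter synchronism rather than voltage magnitude.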