Study: Generators May Provide a Faster Path to Power

A new study by energy researchers suggests that data centers could get faster access to power by adopting load flexibility: agreeing to briefly curtail utility usage and shift to generator power. In an in-depth analysis of the U.S. power grid, researchers at Duke University estimate that this approach could tap existing headroom in the system to more quickly integrate at least 76 gigawatts of new loads. They argue that even a small reduction in peak demand could reduce the need for new investments in transmission and generation capacity - as well as the need to pass those investments on to ratepayers.

Data centers are all about uptime, and thus have been resistant to innovations that create additional risk around reliability. But current power constraints in key markets, along with growing demand for AI training workloads (which may be more interruptible than cloud or colocation), have prompted the industry to explore load flexibility options. Last year the Electric Power Research Institute (EPRI) launched the DCFlex project to work with utilities and a number of data center operators - including Compass Datacenters, QTS Data Centers, Google and Meta - on pilot projects for load flexibility.

The Duke study, titled "Rethinking Load Growth," puts some interesting numbers on the upside potential. Their findings:

- 76 gigawatts of new load could be enabled by an annual load curtailment rate of 0.25% of maximum uptime, equivalent to 1.7 hours per year operating on backup generators.
- An annual curtailment rate of 0.5% (2.1 hours annually) could enable 98 GW of new load, while a rate of 1.0% (2.5 hours) could boost that to 126 GW.
- A 0.5% curtailment could enable 18 GW in PJM and 10 GW in ERCOT, the research finds.

At least one hyperscaler seems open to the idea.
"This is a promising tool for managing large new energy loads without adding new generating capacity and should be part of every conversation about load growth," said Michael Terrell, Senior Director of Clean Energy and Carbon Reduction at Google, in a LinkedIn post.

With the acceleration of the AI arms race, speed-to-market is now a top priority, and there is a steep opportunity cost for companies that are unable to deploy new capacity. There are tradeoffs to consider (including more emissions), but the Duke paper will likely advance the conversation.

Duke study: https://lnkd.in/eS3s_pvk
Background on DCFlex: https://lnkd.in/euK746Zy
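The Duke findings rest on the idea that a new load which tolerates a few curtailment hours per year can fit under existing grid headroom. A toy model can make that intuition concrete. This is a hypothetical sketch, not the Duke methodology; all names and numbers below are illustrative assumptions.

```python
# Toy model: given an hourly demand series and a fixed delivery capacity,
# find the largest flat new load that fits if the new load may curtail
# (e.g. run on backup generators) for at most H hours per year.

def max_flexible_load(hourly_demand, capacity_mw, max_curtail_hours):
    """Largest constant new load (MW) addable when it may exceed headroom
    in at most `max_curtail_hours` hours (those hours are curtailed)."""
    headroom = sorted(capacity_mw - d for d in hourly_demand)  # ascending
    # The load may violate headroom in the `max_curtail_hours` tightest
    # hours, so it is bounded by the next-smallest headroom value.
    idx = min(max_curtail_hours, len(headroom) - 1)
    return max(0.0, headroom[idx])

# Synthetic year: flat 80 MW base demand with a 95 MW peak in 20 hours.
demand = [80.0] * 8760
for h in range(20):
    demand[h] = 95.0

print(max_flexible_load(demand, capacity_mw=100.0, max_curtail_hours=0))   # 5.0
print(max_flexible_load(demand, capacity_mw=100.0, max_curtail_hours=20))  # 20.0
```

With zero flexibility the new load is limited by the tightest hour; accepting 20 curtailment hours quadruples the admissible load in this synthetic example, which is the qualitative effect the study quantifies at grid scale.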
Load Flexibility Enhancement Techniques
Summary
Load flexibility enhancement techniques involve managing and adjusting when and how large users like data centers consume electricity, so the power grid can handle more demand without expensive upgrades. These methods let facilities shift or reduce their power use temporarily, install onsite batteries, or use smart software to keep operations reliable while supporting grid stability and growth.
- Adopt onsite batteries: Consider installing battery storage systems to manage your power needs independently, keeping operations running while meeting grid support requirements.
- Implement smart orchestration: Use advanced software tools to coordinate and schedule your energy use, allowing you to respond to grid stress and save on utility costs.
- Explore demand shifting: Shift or briefly lower electricity use during peak times or whenever the grid is strained, which can speed up the process of connecting new facilities and help avoid costly infrastructure expansion.
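The demand-shifting technique above can be sketched in a few lines. This is a hypothetical illustration, not any operator's scheduler; the job names, stress window, and the simple "defer to the next unstressed hour" policy are all invented assumptions.

```python
# Sketch of demand shifting: deferrable jobs are pushed out of a
# grid-stress window; firm jobs stay where they are.

def shift_deferrable(jobs, stress_hours):
    """jobs: list of (name, preferred_hour, deferrable).
    Returns {name: scheduled_hour}, moving deferrable jobs off stress hours."""
    schedule = {}
    for name, hour, deferrable in jobs:
        if hour in stress_hours and deferrable:
            h = hour
            while h in stress_hours:        # defer to the next calm hour
                h = (h + 1) % 24
            schedule[name] = h
        else:
            schedule[name] = hour           # firm or already unconstrained
    return schedule

jobs = [("training-batch", 18, True), ("inference", 18, False), ("backup", 3, True)]
print(shift_deferrable(jobs, stress_hours={17, 18, 19}))
# {'training-batch': 20, 'inference': 18, 'backup': 3}
```

The point of the sketch: only the interruptible work moves, so the facility's firm commitment to the grid shrinks without touching latency-sensitive load.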
As grid operators and planners deal with a wave of new large loads on a resource-constrained grid, we need fresh approaches beyond just expecting reduced electricity use under stress (e.g. via the recent PJM flexible load forecast or Texas SB 6). While strategic curtailment has become a popular talking point for connecting large loads more quickly and at lower cost, it overlooks a more flexible, grid-supportive strategy for large load operators.

Especially for loads that cannot tolerate any curtailment risk (like certain #datacenters), co-locating #battery #energy storage systems (BESS) in front of the load merits serious consideration. This shifts the paradigm from "reduce load at the utility's command" to "self-manage flexibility." It's BYOB - Bring Your Own Battery - and put it in front of the load.

Studies have shown that if a large load agrees to occasional grid-triggered curtailment, this unlocks more interconnection capacity within our current grid infrastructure. But a BYOB approach can unlock value without the compromise of curtailment, essentially allowing a load to meet grid flexibility obligations while staying online.

Why do this? For data centers (DCs), it's about speed to market and enhanced reliability. The avoidance of network upgrade delays and costs, along with the value of reliability, will in many cases justify the BESS expense. The BYOB approach decouples flexibility from curtailment risk with #energystorage.

Other benefits of BYOB include:
- Increasing the feasible number of interconnection locations.
- Controlling coincident peak costs, demand charges, and real-time price spikes.
- Turning new large loads into #grid assets by improving load shape and adding the ability to provide ancillary services.

No solution is perfect. Some of the challenges with the BYOB approach include:
- The load developer bears the additional capital and operational cost of the BESS.
- Added complexity: integrating a BESS with the grid on one side and a microgrid on the other is more complex than simply operating a front-of-the-meter (FTM) or behind-the-meter (BTM) BESS.
- An increased need for load coordination with grid operators to maintain grid reliability.

The last point - large loads needing to coordinate with grid operators - is coming regardless. A recent NERC white paper shows how fast-growing, high-intensity loads (like #AI, crypto, etc.) bring new #electricity reliability risks when there is no coordination. The changing load of a real DC shown in the figure below is a good example. With more DC loads coming online, operators would be severely challenged by multiple >400 MW loads ramping up or down with no advance notice. BYOB can manage this issue while also dealing with the high-frequency load variations seen in the second figure. References in comments.
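The core BYOB claim - meet a grid flexibility event from the battery while the load stays online - can be checked with a minimal energy-balance simulation. This is an illustrative sketch under invented assumptions (sizes, event pattern, free recharge headroom outside events), not a model of any real BESS controller.

```python
# Minimal sketch of BYOB: a front-of-load battery serves the data center
# during grid-triggered event hours, so grid import drops to zero while
# the load itself never curtails.

def simulate_byob(load_mw, event_hours, batt_mwh, batt_mw):
    """Return (grid_draw, feasible): per-hour grid import in MW, and
    whether the battery fully covered the load in every event hour."""
    soc = batt_mwh                              # start full (MWh)
    grid_draw, feasible = [], True
    for hour, load in enumerate(load_mw):
        if hour in event_hours:
            covered = min(load, batt_mw, soc)   # battery serves the load
            soc -= covered
            grid_draw.append(load - covered)    # ideally 0.0 during events
            if covered < load:
                feasible = False                # battery too small/slow
        else:
            # recharge when unconstrained (simplified: headroom is free)
            charge = min(batt_mw, batt_mwh - soc)
            soc += charge
            grid_draw.append(load + charge)
    return grid_draw, feasible

draw, ok = simulate_byob([100.0] * 6, event_hours={2, 3},
                         batt_mwh=250.0, batt_mw=120.0)
print(ok)                 # True: the load rode through both event hours
print(draw[2], draw[3])   # 0.0 0.0 - zero grid import during the events
```

Sizing is the crux: the same function reports infeasibility when energy capacity (MWh) runs out mid-event, which is exactly the tradeoff the "developer bears the BESS cost" bullet points at.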
As load grows each year, utilities could deploy software to orchestrate loads and reduce utility rates by 20% by 2030. But that would require leadership.

"Distribution-level orchestration brings better solutions to address the reality that utilities operate in. First, it targets the constraint. It prioritizes action where it matters most, from the bottom up, starting at transformers, then feeders, then substations, while still respecting system needs. Second, it shapes the load continuously. Instead of one-off events, orchestration adjusts to evolving local limits and real-world behavior. Third, it produces robust results that allow distributed load flexibility to be counted on as a capacity planning resource.

Serving growing load on existing equipment is central to affordability. This is particularly true for the local distribution system, which was not built for fast, clustered load growth and presents some of the most significant cost drivers for utilities."

https://lnkd.in/ehaKvMA5
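The "target the constraint, bottom up" idea can be sketched as a check that ranks distribution assets by how badly they are overloaded and computes the shed needed at each. This is a hypothetical toy, not the quoted platform's algorithm; the asset names, ratings, and loads are invented.

```python
# Toy bottom-up constraint check: rank assets (transformer, feeder,
# substation) by overload and report the MW of flexibility needed at each.

def required_curtailment(loads, limits):
    """loads: {asset: MW flowing}; limits: {asset: MW rating}.
    Returns {asset: MW to shed}, worst-overloaded asset first."""
    shed = {}
    for asset, load in sorted(loads.items(),
                              key=lambda kv: kv[1] - limits[kv[0]],
                              reverse=True):          # tightest constraint first
        shed[asset] = max(0.0, load - limits[asset])  # 0 if within rating
    return shed

# One transformer, its feeder, and the substation above it.
loads  = {"xfmr-12": 5.4, "feeder-3": 18.0, "substn-A": 55.0}
limits = {"xfmr-12": 5.0, "feeder-3": 20.0, "substn-A": 60.0}
print(required_curtailment(loads, limits))
# {'xfmr-12': 0.4, 'feeder-3': 0.0, 'substn-A': 0.0}
```

Here only the transformer binds, so flexibility is dispatched against 0.4 MW at that one asset rather than system-wide - the "action where it matters most" in the quote.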
There's one spot for a 100 MW data center in Belgium. Add 5% demand flexibility, and suddenly there are 16.

That's the power of flexibility, as visualized by Elia Group, Belgium's transmission system operator. Looking at Elia's grid capacity map for 2027, there's literally one (new) spot in the country where you can connect a 100 MW data center. And 100 MW isn't even that big.

But if you allow for 5% of demand flexibility - which could mean shifting or reducing the load, or replacing it with on-site generation - there are suddenly 16 free spots for new industrial loads, many up to 300 MW. If you go to 20% (which is a bit extreme), the grid pretty much stops being a bottleneck.

Of course you wouldn't want to close your data center for two months a year. But the yearly average load of data centers is about 50% of peak, so there's clearly room for optimization. In fact, a recent trial by Nebius, Emerald AI, EPRI and National Grid showed that a test AI cluster in London could slash load by 30% in 40 seconds in response to sudden grid stress, while keeping critical jobs running. Even better, it could sustain multi-hour load reductions of 10-40% and still deliver 99% performance on the highest-priority jobs.

And you can optimize further: with onsite battery storage, you can shape the daily load profile to match the grid's needs, potentially getting an accelerated connection agreement and saving money in the process. More in tomorrow's newsletter on data center flexibility.

EDIT: a good point raised in the comments - I should clarify that the grid capacity map shows the connection capabilities for new projects, i.e. on top of projects that have already secured a grid connection.
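The 1-spot-to-16-spots effect is easy to reproduce in miniature: a node can host a new load if its headroom covers the firm (non-flexible) part of that load. The node headrooms below are invented for illustration - they are not Elia's data.

```python
# Toy version of the Elia-style map exercise: count grid nodes that can
# host a new load, given that some share of the load is flexible
# (shiftable, curtailable, or served by on-site generation).

def feasible_sites(headroom_mw, load_mw, flex_share):
    """Count nodes whose headroom covers the firm part of the load."""
    firm = load_mw * (1.0 - flex_share)
    return sum(1 for h in headroom_mw if h >= firm)

# 20 hypothetical nodes with 5-100 MW of connection headroom.
headroom = [5, 12, 30, 45, 60, 75, 80, 90, 95, 100,
            8, 20, 35, 50, 65, 70, 85, 96, 98, 99]

print(feasible_sites(headroom, 100, 0.00))  # 1 - only the 100 MW node fits
print(feasible_sites(headroom, 100, 0.05))  # 5 - firm load drops to 95 MW
```

A small flexibility share unlocks every node sitting just below the threshold, which is why the map fills in so quickly between 0% and 5%.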
✋ Read the new paper from The American Council for an Energy-Efficient Economy (ACEEE), "Opportunities to Use Energy Efficiency and Demand Flexibility to Reduce Data Center Energy Use and Peak Demand". Author: Steven Nadel

Highlights:
💡 Reducing Data Center Energy Use: Efficiency Meets Flexibility
🏭 Rising Power Demand: Data centers already use 4% of U.S. electricity and could reach 12% by 2028, largely driven by AI and hyperscale computing
⚙️ Efficiency = Resilience: Smarter chips, optimized algorithms, AI-assisted cooling, and heat recovery could cut energy use 40% or more, helping utilities avoid costly infrastructure expansion
🌡️ Smart Cooling: Techniques like liquid and immersion cooling and AI-driven climate control can reduce cooling loads by up to 40%, improving uptime and lowering costs
🔋 Demand Flexibility: Curtailing data center load by just 0.25% of uptime could free 76 GW of capacity - equal to 76 nuclear reactors! Oracle and Google are already testing this via grid-responsive software
📊 New Metrics Needed: Traditional PUE misses compute efficiency; next-gen metrics like Google's Compute Carbon Intensity (CCI) offer a more complete picture

💬 Do you think AI's efficiency gains can outpace its soaring energy demands, or will we just compute more?

#DataCenters #EnergyEfficiency #AI #DemandFlexibility #Sustainability #CleanEnergy #ClimateTech #GridInnovation
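The "PUE misses compute efficiency" point is worth a two-line worked example. PUE is the standard metric (total facility energy / IT energy); the "useful work" numbers below are invented to show why two sites with identical PUE can differ hugely in efficiency. This is not Google's CCI formula, just a generic work-per-energy ratio.

```python
# PUE vs. compute efficiency: same PUE, very different work per kWh.

def pue(total_kwh, it_kwh):
    """Power Usage Effectiveness: facility overhead relative to IT load."""
    return total_kwh / it_kwh

def work_per_kwh(useful_work_units, total_kwh):
    """Generic compute-efficiency ratio (units of work are illustrative)."""
    return useful_work_units / total_kwh

site_a = {"total": 1300.0, "it": 1000.0, "work": 50_000}  # newer accelerators
site_b = {"total": 1300.0, "it": 1000.0, "work": 20_000}  # older hardware

print(pue(site_a["total"], site_a["it"]),
      pue(site_b["total"], site_b["it"]))   # 1.3 1.3 - indistinguishable
print(work_per_kwh(site_a["work"], site_a["total"]) >
      work_per_kwh(site_b["work"], site_b["total"]))  # True - A does 2.5x the work
```

Both sites look equally "efficient" by PUE, yet site A delivers 2.5x the work per kWh, which is exactly the gap next-generation metrics aim to capture.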
Experts say data centre operators should treat their huge loads as flexible resources rather than fixed demands. That means modulating workloads, shifting compute tasks, and using onsite generation or batteries when the grid is strained.

On the Grid Forward Forum, Iron Mountain Data Centers' energy director said data centres can shed load during peak times by running on diesel or natural gas units and by adding batteries between servers and the grid. Alberta's behind-the-meter gas plants and proposed battery projects make this flexibility feasible. Combined heat and power units can also capture waste heat for cooling or adjacent industries, boosting efficiency.

As AESO caps new grid connections and high-voltage transformer lead times stretch to 128-144 weeks, the ability to flex load could be a competitive edge.

#EnergyFlexibility #DataCentres #AlbertaGrid #CHP