Grid Management Systems

Explore top LinkedIn content from expert professionals.

  • View profile for Rich Miller

    Authority on Data Centers, AI and Cloud

    48,490 followers

    Study: Generators May Provide a Faster Path to Power

    A new study by energy researchers suggests that data centers could get faster access to power by adopting load flexibility, agreeing to briefly curtail utility usage and shift to generator power. In an in-depth analysis of the U.S. power grid, researchers at Duke University estimate that this approach could tap existing headroom in the system to more quickly integrate at least 76 gigawatts of new load. They argue that even a small reduction in peak demand could reduce the need for new investments in transmission and generation capacity, as well as the need to pass those investments on to ratepayers.

    Data centers are all about uptime, and thus have been resistant to innovations that create additional risk around reliability. But current power constraints in key markets, along with growing demand for AI training workloads (which may be more interruptible than cloud or colocation), have prompted the industry to explore load flexibility options. Last year the Electric Power Research Institute (EPRI) launched the DCFlex project to work with utilities and a number of data center operators, including Compass Datacenters, QTS Data Centers, Google and Meta, on pilot projects for load flexibility.

    The Duke study, titled "Rethinking Load Growth," puts some interesting numbers on the upside potential. Their findings:
    - 76 gigawatts of new load could be enabled by an annual load curtailment rate of 0.25% of maximum uptime, equivalent to 1.7 hours per year operating on backup generators.
    - An annual curtailment rate of 0.5% (2.1 hours annually) could enable 98 GW of new load, while a rate of 1.0% (2.5 hours) could boost that to 126 GW.
    - A 0.5% curtailment rate could enable 18 GW in PJM and 10 GW in ERCOT, the research finds.

    At least one hyperscaler seems open to the idea. “This is a promising tool for managing large new energy loads without adding new generating capacity and should be part of every conversation about load growth,” said Michael Terrell, Senior Director of Clean Energy and Carbon Reduction at Google, in a LinkedIn post.

    With the acceleration of the AI arms race, speed-to-market is now a top priority, along with a competitive opportunity cost for companies that are unable to deploy new capacity. There are tradeoffs to consider (including more emissions), but the Duke paper will likely advance the conversation.

    Duke study: https://lnkd.in/eS3s_pvk
    Background on DCFlex: https://lnkd.in/euK746Zy

  • View profile for Ron DiFelice, Ph.D.

    CEO at EIP Storage & Energy Transition Voice

    19,413 followers

    As grid operators and planners deal with a wave of new large loads on a resource-constrained grid, we need fresh approaches beyond simply expecting reduced electricity use under stress (e.g., via the recent PJM flexible load forecast or Texas SB 6). While strategic curtailment has become a popular talking point for connecting large loads more quickly and at lower cost, it overlooks a more flexible, grid-supportive strategy for large load operators.

    Especially for loads that cannot tolerate any curtailment risk (like certain #datacenters), co-locating #battery #energy storage systems (BESS) in front of the load merits serious consideration. This shifts the paradigm from “reduce load at the utility’s command” to “self-manage flexibility.” It’s BYOB: Bring Your Own Battery and put it in front of the load.

    Studies have shown that if a large load agrees to occasional grid-triggered curtailment, this unlocks more interconnection capacity within our current grid infrastructure. But a BYOB approach can unlock value without the compromise of curtailment, essentially allowing a load to meet grid flexibility obligations while staying online.

    Why do this? For data centers (DCs), it’s about speed to market and enhanced reliability. The avoidance of network upgrade delays and costs, along with the value of reliability, will in many cases justify the BESS expense. The BYOB approach decouples flexibility from curtailment risk with #energystorage.

    Other benefits of BYOB include:
    - Increasing the feasible number of interconnection locations.
    - Controlling coincident peak costs, demand charges, and real-time price spikes.
    - Turning new large loads into #grid assets by improving load shape and adding the ability to provide ancillary services.

    No solution is perfect. Some of the challenges with the BYOB approach include:
    - The load developer bears the additional capital and operational cost of the BESS.
    - Added complexity: integrating a BESS with the grid on one side and a microgrid on the other is more complex than simply operating a front-of-the-meter (FTM) or behind-the-meter (BTM) BESS.
    - Increased need for load coordination with grid operators to maintain grid reliability.

    The last point, large loads needing to coordinate with grid operators, is coming regardless. A recent NERC white paper shows how fast-growing, high-intensity loads (like #AI, crypto, etc.) bring new #electricity reliability risks when there is no coordination. The changing load of a real DC shown in the figure below is a good example. With more DC loads coming online, operators would be severely challenged by multiple >400 MW loads ramping up or down with no advance notice. BYOB systems can manage this issue while also dealing with the high-frequency load variations seen in the second figure.

    References in comments.
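    A minimal sizing sketch of the BYOB idea (not from the post; the facility size, grid-draw limit, event durations, and efficiency are illustrative assumptions): given the flexibility events a grid operator might request, estimate the battery power and energy that let the facility stay online while reducing its grid draw.

    ```python
    # Rough BYOB sizing sketch: how much battery power/energy lets a facility
    # reduce its grid draw without interrupting the IT load?
    # All inputs below are illustrative assumptions, not figures from the post.

    def byob_sizing(facility_mw, grid_draw_limit_mw, event_hours, round_trip_eff=0.88):
        """Size a front-of-load BESS so the facility can drop its grid draw to
        grid_draw_limit_mw for the longest expected flexibility event while the
        IT load keeps running at facility_mw."""
        power_mw = facility_mw - grid_draw_limit_mw              # discharge power needed
        longest_event_h = max(event_hours)
        # Conservative allowance for conversion losses via round-trip efficiency.
        energy_mwh = power_mw * longest_event_h / round_trip_eff
        annual_discharge_mwh = power_mw * sum(event_hours) / round_trip_eff
        return power_mw, energy_mwh, annual_discharge_mwh

    # Example: a 300 MW data center asked to drop to 100 MW of grid draw for a
    # handful of events per year (event durations in hours are assumed).
    power, energy, annual = byob_sizing(300, 100, event_hours=[2, 3, 4, 2, 1])
    print(f"BESS power:       {power:.0f} MW")
    print(f"BESS energy:      {energy:.0f} MWh (longest single event)")
    print(f"Annual discharge: {annual:.0f} MWh across all events")
    ```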

  • View profile for Craig Scroggie

    CEO & MD, NEXTDC | AI infrastructure, energy systems, sovereignty

    45,128 followers

    For most of the last century, generators stabilised the grid as a by-product of producing energy. Today, we are building assets that stabilise the grid without producing energy at all. That shift identifies the binding constraint.

    Electricity system transition is no longer constrained by renewable resource availability. It is constrained by deliverability and operability. In inverter-dominated systems under rapid load growth, the binding constraints are:
    - transmission and major substation capacity
    - system strength, fault levels, frequency and voltage control
    - connection and commissioning throughput
    - secure operation under worst-day conditions
    - execution pace across networks and system services

    Generation capacity remains necessary. On its own, it no longer delivers firm supply or supports large new loads.

    Historically, synchronous generators supplied energy and stability together. Inertia, fault current, voltage support, and controllability were implicit. As synchronous plant retires, these services must be provided explicitly. Stability shifts from physics-led to control-led. System behaviour becomes more sensitive to modelling accuracy, protection coordination, control settings, and real-time visibility.

    Curtailment is not excess energy. It is a deliverability or security constraint. When transmission and substations lag generation, congestion and curtailment rise. Independent analysis shows that delay increases prices and emissions by extending reliance on higher-cost thermal generation.

    Distribution networks are no longer passive. They now host distributed generation, storage, EV charging, and large loads at the edge of transmission. Voltage control, protection coordination, hosting capacity, and connection throughput now constrain both decarbonisation and industrial growth.

    Firming is a hard requirement. Batteries provide fast frequency response and contingency arrest. They do not provide multi-day energy and do not replace networks or system strength in weak grids. Demand response reduces peaks. It cannot be relied upon for system-wide security under stress.

    Execution speed is critical. Slow delivery increases congestion duration, curtailment exposure, reserve requirements, and reliance on ageing plant. These effects flow directly into costs, emissions, and reliability. This is why electricity bills can rise even when average wholesale prices fall. Costs are driven by peak demand, contingencies, and security, not average energy.

    Large digital and industrial loads are transmission-scale, continuous, and failure-intolerant. They increase contingency size and correlation risk. At that scale, loads do not connect to the grid, they shape it. Supporting growth requires time-to-power, transmission and substation capacity in load corridors, explicit system strength and fault levels, operable firming under worst-day conditions, scalable connection and commissioning, and early procurement of long lead time HV equipment.

    #energy

  • View profile for Jorge E. Medina, PE

    Energy Consulting Expert | Eliminating Energy Project Delays for Banks, Investors & Developers | End-to-End Due Diligence Without the Big-Firm Bureaucracy

    8,726 followers

    I'm seeing industrial operators and data centers commission feasibility studies that don't answer the right questions. And with NERC's 2025 Long-Term Reliability Assessment flagging 13 of 23 regions at resource adequacy risk through 2030, the stakes just got higher.

    MISO, PJM, Texas ERCOT, WECC-Northwest, WECC-Basin, SERC-Central. High-risk regions. The same regions where data center and industrial load growth is heaviest. That's not a coincidence.

    The grid reliability problem isn't just about capacity. It's about the type of capacity. Coal retirements are accelerating. Solar and batteries are coming online fast. But when you model dispatch during tight hours (winter peaks, extreme weather), the reliability attributes aren't the same as the baseload capacity they're replacing. Layer surging peak demand from data centers and electrification on top of that, and the gap widens between what the grid can reliably deliver and what industrial operators need to run 24/7.

    Which brings us to behind-the-meter generation and microgrids. Legal since the 1970s. What's changed: the economics now justify it as a competitiveness strategy, not just a resiliency backup.

    Most industrial teams commission a feasibility study. It comes back with a topline number: "Yes, on-site generation is possible. Here's the estimated cost." That's not enough. You need to know:
    • What's the optimal configuration for the best price per megawatt?
    • How does on-site generation compare to utility rates over 10+ years, including rate escalation?
    • Which combination of assets (gas, solar, battery, hybrid) delivers the best economics under high-growth, low-growth, and base-case scenarios?
    • How does this hold up if fuel costs spike or equipment costs come in higher?

    Most feasibility studies don't model that. They give you a snapshot, not a stress test.

    In the microgrid space, we do feasibility analysis, but it's a techno-economic study. We model your load. Simulate multiple generation configurations. Run sensitivity analysis across different futures. Compare on-site vs. utility economics even if you already have grid access. The result: you know the optimal price-per-megawatt configuration and whether the economics hold up when the assumptions change.

    That's the difference between making an informed decision and hoping the utility can keep up.

    ——

    Evaluating behind-the-meter generation or microgrid solutions for your data center or industrial facility? Let's talk. I'll walk you through what a proper techno-economic study covers and what the numbers look like for your site. Grab time on my calendar or give me a call. 🗓️ https://t2m.io/mMoKxRy | 📱 1-888-218-6001

    Image Source: NERC LTRA 2025
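    A minimal sketch of the kind of stress test described above (all prices, escalation rates, heat rates, and O&M figures are illustrative assumptions, not data from the post): compare a rough levelised cost of on-site gas generation against an escalating utility rate across a few fuel-price scenarios.

    ```python
    # Illustrative stress test: on-site gas generation vs. utility supply over 10 years.
    # Every number here is an assumption for demonstration, not a benchmark.

    HOURS_PER_YEAR = 8760

    def onsite_cost_per_mwh(fuel_usd_per_mmbtu, heat_rate_btu_per_kwh=7500,
                            fixed_usd_per_kw_yr=120, capacity_factor=0.90):
        """Rough $/MWh for an on-site gas unit: fuel plus levelised fixed costs."""
        fuel = fuel_usd_per_mmbtu * heat_rate_btu_per_kwh / 1000.0            # $/MWh of fuel
        fixed = fixed_usd_per_kw_yr * 1000.0 / (HOURS_PER_YEAR * capacity_factor)  # $/MWh
        return fuel + fixed

    def utility_cost_per_mwh(year, base_rate=85.0, escalation=0.04):
        """Utility all-in rate ($/MWh) escalating at a fixed annual rate."""
        return base_rate * (1 + escalation) ** year

    scenarios = {"low gas": 2.5, "base gas": 4.0, "fuel spike": 8.0}  # $/MMBtu

    for name, gas_price in scenarios.items():
        onsite = onsite_cost_per_mwh(gas_price)
        # Count the years in which on-site beats the escalating utility rate.
        cheaper_years = sum(onsite < utility_cost_per_mwh(y) for y in range(10))
        print(f"{name:10s}: on-site ≈ ${onsite:6.1f}/MWh, "
              f"beats utility in {cheaper_years}/10 years")
    ```

    A real techno-economic study would replace the flat assumptions with an hourly load model, dispatch simulation, and capital financing, but the scenario loop is the part most topline feasibility numbers skip.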

  • View profile for Abe Yokell

    Co-Founder and Managing Partner at Congruent Ventures

    12,631 followers

    A challenge of our time: how will the United States compete in the global AI race?

    🇺🇸 The simple constraint in the US: access to power ⚡
    🇨🇳 The simple constraint in China: access to compute 💻

    The reality: the compute challenge is a technology challenge, and the power constraint is a human and policy challenge. Technology challenges are *always* easier to solve than people and policy challenges. The US will lose unless we find a way to access more power.

    💡 Enter… load flexibility.

    In a Duke University study published by Tyler Norris (and others) in February, the grid was shown to have massive excess capacity if large loads (aka data centers) could self-generate for short intervals. But that work was system-level and therefore not easily actionable.

    For the first time, Congruent Ventures portfolio company Camus Energy, led by Astrid Atkinson, together with Google (where Tyler is now an energy lead), Princeton University's ZERO Lab led by Jesse Jenkins, and encoord, ran an in-depth study on a 500 MW data center load, analyzing impacts on transmission, generation, and ratepayer (consumer) cost allocations.

    The results?
    ✅ Data centers get access to power in 1–2 years vs. ~7 years
    ✅ Electricity rates may *decrease* due to higher utilization in the region
    ✅ Grid costs are borne by the data center

    ⚡ Demand flexibility is the only way out of our near-term power constraints without massive inflationary pressure. 📉 I love me some nuclear, but the only way out of this jam in the short term is demand flexibility, solar + batteries, geothermal, a bit of gas, and a lot of innovation!

    #AI #Energy #DataCenters #GridFlexibility #Policy

    Get the exec summary here: https://lnkd.in/gPPqhCqW
    Duke study here: https://lnkd.in/gHs39jbt
    https://lnkd.in/gHMpSjPn

    Shout-out to Marianne Wu, Nicholas Adeyi, Steven Brisley, and Nathan Case.

  • View profile for Dlzar Al Kez

    PhD, CEng, MIET, FHEA | Power System Stability & Security Advisor | Helping Operators & Developers De-risk IBR & AI Data Centre Connections | RMS+EMT • Grid-Forming • Grid Code Compliance

    13,179 followers

    56 Hz. That’s not a disturbance. That’s a system running out of arresting power.

    Dominican Republic’s SENI collapsed after reported synchronisation issues at Punta Catalina, a 138 kV transmission trip, and the rapid loss of multiple large thermal units. ~1,600 MW offline in seconds.

    One reason large synchronous trips hit so hard: you don’t just lose MW, you lose inertia, system strength (fault level), and dynamic voltage support at the same instant.

    Frequency fell to 56 Hz. In a 60 Hz islanded system, that’s almost a 7% deviation. No interconnection. No external inertia. No neighbour to absorb imbalance.

    When large blocks trip in a concentrated fleet, the issue isn’t capacity. It’s how fast the system can arrest the fall. In small isolated grids, the binding constraint often isn’t reserve margin. It’s the first few seconds. If RoCoF outruns governor response and under-frequency load shedding, protection can amplify collapse instead of containing it.

    The real question isn’t “do we have enough megawatts?” It’s “how much sub-second arresting capability do we actually have?”

    For islanded systems like SENI: should grid codes require fast frequency response/grid-forming capability for new large units? Or do we accept that concentrated thermal fleets will continue to define system risk?

    #PowerSystemStability #FrequencyStability #SystemStrength #GridOperations #IslandedGrids #EnergySecurity #GridResilience #GridCodes
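    A minimal swing-equation sketch of why the first seconds dominate in a small islanded system (the system base, inertia constants, reserve, and governor time constant are illustrative assumptions, not SENI data; only the ~1,600 MW loss comes from the post): the initial RoCoF is set by the power deficit and the surviving inertia, and the lower the inertia, the less time governors and under-frequency load shedding have to act.

    ```python
    # Toy frequency-arrest model: single-bus swing equation with a first-order
    # governor response. All parameters are illustrative assumptions, not SENI data.

    F0 = 60.0            # nominal frequency (Hz)
    S_BASE_MW = 3000.0   # online generation used as the system base (assumed)
    LOSS_MW = 1600.0     # generation suddenly tripped (figure from the post)
    RESERVE_MW = 400.0   # primary reserve the governors can deliver (assumed)
    T_GOV = 5.0          # governor response time constant in seconds (assumed)

    def time_to_reach(f_limit, h_sec, dt=0.001, t_max=10.0):
        """Seconds until frequency hits f_limit after the trip, for inertia H."""
        f, gov, t = F0, 0.0, 0.0
        while f > f_limit and t < t_max:
            deficit = LOSS_MW - gov                              # uncovered imbalance (MW)
            rocof = -F0 * deficit / (2.0 * h_sec * S_BASE_MW)    # swing equation (Hz/s)
            f += rocof * dt
            gov += (RESERVE_MW - gov) / T_GOV * dt               # reserves ramp in over seconds
            t += dt
        return t

    for H in (5.0, 3.0, 1.5):   # aggregate inertia constant of the surviving fleet (s)
        print(f"H = {H:3.1f} s -> 56 Hz reached in ~{time_to_reach(56.0, H):.2f} s")
    ```

    Halving the surviving inertia roughly halves the time available before 56 Hz, which is the sub-second arresting capability question in miniature.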

  • View profile for Sam Maleki, Ph.D. , P.Eng.

    CGO, Hyper Scale Data Centers and IBRs| MicroGrid, Controller, DigitalTwin | ERCOT PJM MISO SPP AESO IESO PSCAD PSSE SCADA HMI PPC

    22,399 followers

    Based on the latest #ERCOT #Large #Load Working Group discussions on February 19, a proposed approach was introduced to evaluate the impact of #AI #data_center loads on the grid. At this stage, it has been suggested that AI loads limit their power variations within a defined time window. The current proposal considers a 5-second window with a maximum allowable load swing of 10 MW. The concept of repetitive load variations was also discussed, indicating that sustained or repeated load swings, not just a single power jump, may be the main reason for the concern.

    Based on our recent observations and discussions with developers, many are leaning toward addressing these requirements through corrective actions at the facility level, particularly by #colocating #battery #energy #storage systems with the data center to smooth load variations.

    The key observations at this stage are the following:
    - Energy storage can be an effective solution for mitigating load swings, but there is always a response #delay between the #detection of the load variation and the corrective action from the storage system. We are talking about a delay as low as 10-20 ms.
    - Because of this delay, fast power jumps during the first few cycles of the load change may still appear at the grid interface. Regardless of the size of the battery system, this very first jump cannot be completely eliminated because it is driven by control and measurement delays (i.e., even oversizing the BESS unit may not resolve the issue).
    - Our studies have indicated that #full-conversion solutions, where the load is fully #decoupled from the grid through power electronic interfaces, can address these variations more effectively. However, these solutions come with additional cost (but are a great tool to significantly reduce project operation #risks).

    As the industry evolves and the first wave of large AI load facilities begins to interconnect to the grid, the industry will gain better visibility into actual system behavior. At that point, ERCOT and other stakeholders will be in a stronger position to determine appropriate requirements, including acceptable #damping #ratios, maximum load variation limits, and the most effective #mitigation methods.

    Every millisecond of latency should be accounted for when selecting the size and technology for AI load smoothing, even at the very early stages. That is why we are moving towards Real Time Simulation when clients ask us about the amount of storage they need. Even a small delay can lead to huge financial risks.
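    A minimal sketch of the latency effect described above (the 50 MW step size, the controller time constant, and the 20 ms delay value chosen here are assumptions for illustration): a BESS that only reacts some milliseconds after a load step still lets the first part of the jump through to the grid interface, no matter how large the battery is.

    ```python
    # Toy illustration: a 50 MW load step smoothed by a BESS with measurement +
    # control latency. Step size, delay, and response time are assumptions.
    import numpy as np

    dt = 0.001                         # 1 ms time step
    t = np.arange(0.0, 0.5, dt)        # half a second of simulation

    load = np.where(t >= 0.1, 50.0, 0.0)   # AI load steps up 50 MW at t = 0.1 s

    DELAY_S = 0.020                    # 20 ms detection + control delay (assumed)
    TAU_S = 0.050                      # BESS power ramp time constant (assumed)

    bess = np.zeros_like(t)            # BESS discharge power (MW)
    delay_steps = int(round(DELAY_S / dt))
    for k in range(1, len(t)):
        # The BESS only "sees" the load as it was DELAY_S ago.
        k_delayed = k - delay_steps
        target = load[k_delayed] if k_delayed >= 0 else 0.0
        bess[k] = bess[k-1] + (target - bess[k-1]) * dt / TAU_S

    grid = load - bess                 # net power drawn at the grid interface

    print(f"Peak grid-side jump:         {grid.max():.1f} MW")
    print(f"Grid draw 100 ms after step: {grid[np.searchsorted(t, 0.2)]:.1f} MW")
    # The peak equals the full 50 MW step: during the latency window the BESS has
    # not yet moved, so the first cycles of the jump always reach the grid.
    ```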

  • Can site-level flexibility really allow us to connect huge amounts of new datacenter demand to the grid? That question is at the heart of one of the biggest issues in energy as we adapt our existing energy infrastructure to serve unprecedented load growth.

    Like many in the industry, we at Camus paid close attention to the Duke report, which suggested that we could fit up to 100 GW of new load into the existing US grid with a small amount of load flexibility. Our team delivers flexible interconnection solutions for our customers, and we’ve seen firsthand that a little curtailment can massively increase utilization of the grid’s resources.

    That kind of flexible “fast path” option is not available to large loads today, and up until now, most datacenter power solutions have been either/or. Either they connect to the grid, which serves 100% of their demand through the existing energy system (but means waiting several years for new capacity and upgrades), or they design sites to be entirely self-sufficient by colocating with new or existing generation. As timelines for new grid connections stretch out, and available capacity (and turbines!) gets spoken for, the existing options aren’t going to get datacenters connected fast enough.

    As the Camus team started working with datacenters, we needed to develop a real-world, data-driven blueprint for interconnecting datacenters with site-level flexibility. Whether it’s reducing load, adding onsite generation, or switching to batteries, there is a range of options that enable datacenters to reduce their usage of the grid during key times, depending on how often, how long, and how much curtailment is required. Given the real concerns about datacenters driving up system costs for everyone else, we also wanted to know: can adding site flexibility help manage the costs for new datacenters?

    With the support of our partners encoord (Carlo Brancucci), the Princeton ZERO Lab (Jesse Jenkins), and financial backing from Google, we were able to dig into these questions. We used real transmission system data to model six sites in PJM, first to understand whether site-level flexibility could speed up their path to power, and then to understand what kinds of flexibility options would best meet their power needs.

    The answer to whether flexibility can help is a resounding “yes.” Our results showed that a small amount of local flexibility could enable loads to connect 3-5 years sooner. Further, onsite power solutions, when paired with procured capacity from renewable, battery, or VPP resources, can enable sites to entirely offset the additional system costs of adding new loads. The result is a better grid: one that can support a lot more load, keeps energy and capacity prices stable, and in which new flexibility provides increased system reliability and resilience.

    Download our whitepaper here: https://bit.ly/4pE5MGM
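    A minimal sketch of the headroom idea that both the Duke report and this study build on (the synthetic hourly load shape and the curtailment budgets are assumptions for illustration, not the studies' data): if a new flat load can curtail during the system's few dozen highest-load hours, more of it fits under the grid's existing peak.

    ```python
    # Toy headroom calculation: how much flat new load fits under the existing peak
    # if the new load can curtail during the top-N system hours? The synthetic load
    # shape and the curtailment budgets are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    hours = np.arange(8760)
    # Synthetic hourly system load (MW): seasonal swing + daily swing + noise.
    system_load = (80_000
                   + 10_000 * np.sin(2 * np.pi * hours / 8760)
                   + 8_000 * np.sin(2 * np.pi * hours / 24)
                   + rng.normal(0, 1_500, hours.size))

    peak = system_load.max()

    for curtailable_hours in (0, 22, 44, 88):   # hours/year the new load may curtail
        # If the new load is off (or on site resources) during the top-N hours, the
        # binding constraint becomes the (N+1)-th highest system-load hour.
        binding = np.sort(system_load)[::-1][curtailable_hours]
        headroom_mw = peak - binding
        pct = 100 * curtailable_hours / 8760
        print(f"curtail {curtailable_hours:3d} h/yr ({pct:4.2f}%): "
              f"~{headroom_mw:,.0f} MW of new flat load fits under today's peak")
    ```

    The real studies do this against actual transmission limits and unit commitment rather than a single load duration curve, but the shape of the result is the same: a small curtailment budget buys a disproportionate amount of connectable load.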

  • View profile for Michael Smith

    Chief Executive Officer at CPower Energy

    2,409 followers

    I recently chatted with RTO Insider LLC to share some thoughts on PJM Interconnection’s work to update its large-load rules. After a full day of discussion, no proposal moved forward, and that’s okay if it means PJM takes the time to get this right.

    With data center growth accelerating and PJM’s most recent capacity auction already hitting its price cap, the reality is the system is tight. We simply can’t afford policies that unintentionally push 8 GW of existing demand response (DR) out of the market, especially when DR remains one of the most reliable tools PJM has during peak grid stress.

    A recent study by Camus Energy, in partnership with Princeton University and encoord, modeled PJM’s system and shows that flexible load and DR can keep large customers powered more than 99% of the year while requiring only 40–70 hours of dispatch annually. That’s a far cry from the hundreds of DR calls PJM could face if new large loads don’t bring their own capacity. And when dispatch becomes constant, customers eventually pull back, which harms reliability for everyone.

    The same research also shows how much DR reduces system strain, avoiding up to 273 MW of new capacity build for every GW of new load. So as the PJM Board continues its work, I hope they keep a few simple principles in mind:

    • Non-Firm Goes First: Large loads that haven’t purchased firm service shouldn’t be dispatched ahead of pre-emergency DR.
    • DR is DR: New load management programs shouldn’t be sequenced after existing DR; they should be treated the same.
    • DR Loads are DR Loads: Any new DR programs must be open to all customers, not only new large loads.
    • Capacity is Capacity: If large loads can buy generation-backed capacity, they should be allowed to buy DR-backed capacity, too.

    PJM’s decision will set the tone for markets nationwide, which makes getting DR right even more critical. It’s the most immediately available, affordable and scalable tool we have to support rapid load growth and maintain reliability as the energy economy evolves.

    Read the full article here: https://lnkd.in/eXj-4Pnn

    #DemandResponse #PJM #VirtualPowerPlants #VPPs #energy
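    A quick back-of-the-envelope check of the figures cited above (only the 40–70 dispatch hours and the 273 MW-per-GW value come from the post; the arithmetic is mine):

    ```python
    # Back-of-the-envelope check of the cited figures. Only the 40-70 dispatch hours
    # and the 273 MW-per-GW value come from the post; the arithmetic is illustrative.
    HOURS_PER_YEAR = 8760

    for dispatch_hours in (40, 70):
        availability = 100 * (1 - dispatch_hours / HOURS_PER_YEAR)
        print(f"{dispatch_hours} h/yr of dispatch -> powered {availability:.1f}% of the year")

    avoided_mw_per_gw = 273
    print(f"Avoided build: {avoided_mw_per_gw} MW per 1,000 MW of new load "
          f"≈ {avoided_mw_per_gw / 10:.1f}% of the added load")
    ```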

  • View profile for Hanane Oudli

    Senior Electrical Engineer | Power Systems & EPC | HV/MV | Data Center & BESS | ETAP | Founder, Hanane Global Advisory | Ex-ONEE | Global Engineering Voice

    25,479 followers

    A single transmission fault, and 387 MW just… disappeared. Not generation. Demand.

    In Ireland, one fault caused 52% of data center load to vanish in milliseconds. UPS systems did exactly what they were designed to do: protect uptime. So they switched to backup instead of riding through the disturbance. And the grid felt it immediately. EirGrid estimated the imbalance could exceed 1,150 MW. More than double what the system was designed to handle.

    This is the part I keep thinking about: the more I try to understand power systems from a utility perspective, not just a classroom one, the more I see this gap.

    We’ve built a system where:
    • Data centers are engineered for zero interruption
    • Grids are engineered for controlled behavior during disturbances

    And those two philosophies don’t always align. And now we’re scaling it. This isn’t just Ireland. It’s happening across the US. Across Europe. NERC has already raised alerts on large load risks. The EU’s Demand Connection Code is being revisited. Because the grid was never designed for hundreds of MW of power-electronic loads that can disappear in milliseconds.

    What’s coming next is not small:
    • Fault ride-through for demand
    • RoCoF withstand requirements
    • Controlled post-fault recovery
    • Reactive power obligations
    • Remote curtailment by TSOs

    We’re not just connecting loads anymore. We’re asking them to behave like grid participants.

    But here’s the tension I can’t ignore: data centers were never built for this. UPS systems. Rectifiers. Control logic. They were designed for isolation, not coordination. And now we’re asking them to support the grid during the exact moments they were built to disconnect.

    So the question isn’t just technical. It’s economic. If compliance becomes too complex, too expensive, too uncertain, do hyperscalers stay connected? Or do they quietly step away, build behind-the-meter, and operate on their own terms? Gas. BESS. Maybe even SMRs.

    Because when 387 MW disappears, the grid doesn’t just lose load. It loses stability leverage. And that makes everything harder.

    So I keep coming back to this: are data centers going to evolve into true grid allies? Or are we watching the early signs of separation? Because whatever direction this takes, it’s going to reshape how we design power systems over the next decade.

    Curious how others are seeing this shift.

    Hanane Oudli 🌍 Hanane Global Advisory Inc.

    #ElectricalEngineering #PowerSystems #EnergyTransition #GridModernization #EngineeringLeadership
