Inaccurate inverter-based resource (IBR) models are widely recognized as a key contributor to recent instability and outage events. One major issue: models are often incorrectly parameterized. Default or placeholder values, undocumented control updates, and the lack of validation against field data all produce models that fail to capture true IBR behavior and can mislead grid planning and operations.

👉 Our latest research offers a solution. In a paper accepted to IEEE Transactions on Power Systems, we present a parameter error identification method that efficiently validates and calibrates IBR models using grid disturbance data. Instead of blindly estimating all parameters, which is often intractable, our approach provably pinpoints which parameter(s) are wrong and effectively corrects them.

✨ Key strengths of our method:
- Exactly identifies and corrects the specific culprit parameters behind model–measurement discrepancies, out of the large set of IBR model parameters that is formidable to tune.
- Provably distinguishes whether a discrepancy comes from model error (a wrong parameter) or measurement error (corrupted sensor data). This is crucial for the method's practicality, since field data are rarely “clean.”

Learn more here: https://lnkd.in/eu3F-yc8

#InverterBasedResources #Stability #Oscillation #ModelValidation #ParameterEstimation #RenewableEnergy #GridInterconnection #PowerSystems
Methods to Validate Energy Project Models
Explore top LinkedIn content from expert professionals.
Summary
Methods to validate energy project models involve checking if computer simulations or predictions match real-world performance and behavior, ensuring that designs and systems operate as intended in practice. This process helps bridge the gap between theoretical models and actual outcomes in energy projects like buildings or grid systems.
- Use field data: Gather real-world measurements and operational information to adjust assumptions and parameters in your energy model.
- Prioritize calibration: Regularly recalibrate models using updated sensor or smart meter data to maintain accuracy, especially after changes or events.
- Verify against reality: Compare predicted results with actual performance, and involve experts early to confirm that designs and specifications are truly reflected in the project.
Energy models are supposed to predict building performance. But here’s the uncomfortable truth: most of them don’t.

In theory, LEED energy modeling is a powerful tool. It helps teams simulate design decisions, compare systems, and optimize efficiency before construction begins. Yet once the building is occupied, the numbers often tell a different story.

Why? Because models are only as real as the assumptions behind them:
- Schedules that don’t match how people actually use the space.
- Equipment efficiencies that fade over time (and sometimes the specified systems are never procured, leading to inefficient performance).
- Climate data made stale by rapid urban heat shifts.

The result? A performance gap: predicted energy savings look impressive on paper, but not on the utility bill during operations.

So what’s the solution? It starts with making our models smarter before we ever break ground:
- Use refined inputs based on real operational data from similar buildings and datasheet performance, not textbook assumptions.
- Apply practical operating schedules that reflect how the building will truly function.
- Engage the commissioning team early to validate design intent, specs, and system selection.
- Revisit the energy model throughout design development, not just at the end for the LEED submission.
- Document key performance assumptions clearly so they can guide procurement and installation.

#LEED #EnergyModeling #BuildingPerformance #Sustainability #Environment #ESG #GreenBuilding #SustainableDesign #Commissioning #PerformanceGap
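For teams closing the performance gap described above, the standard way to quantify how far a model sits from utility-bill reality is the pair of calibration metrics from ASHRAE Guideline 14, NMBE and CV(RMSE). A minimal sketch; the bill values are invented for illustration:

```python
import numpy as np

def calibration_metrics(measured, predicted):
    """NMBE and CV(RMSE) per ASHRAE Guideline 14, for paired
    measured vs. modeled energy-use series."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = measured.size
    mean = measured.mean()
    nmbe = (measured - predicted).sum() / ((n - 1) * mean) * 100.0
    cvrmse = np.sqrt(((measured - predicted) ** 2).sum() / (n - 1)) / mean * 100.0
    return nmbe, cvrmse

# Twelve monthly utility bills (kWh) vs. model predictions (illustrative)
measured  = [310, 295, 280, 250, 230, 260, 300, 320, 290, 270, 285, 305]
predicted = [300, 290, 285, 245, 225, 255, 310, 315, 280, 265, 280, 300]
nmbe, cvrmse = calibration_metrics(measured, predicted)
# Guideline 14 thresholds for monthly data: |NMBE| <= 5%, CV(RMSE) <= 15%
print(f"NMBE = {nmbe:.2f}%, CV(RMSE) = {cvrmse:.2f}%")
```

If either metric exceeds its threshold, the inputs (schedules, efficiencies, climate file) get revisited before the model is trusted for procurement decisions.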
-
Calibrating urban building energy models (UBEM) with limited measured energy data of diverse fidelity has long been a challenge. In our study just published in Applied Energy, we developed a new framework for calibrating UBEM using smart meter data, targeting accurate prediction of summer peak electricity loads to support robust grid planning. The framework first integrates various data sources to improve the baseline input assumptions for the building models, and then calibrates the baseline models through a pattern-matching approach. A case study using CityBES and two years of AMI data from over 9,000 residential customers in Portland, Oregon, demonstrated the workflow and its effectiveness. The calibrated models achieved a daily peak load mean absolute percentage error of 2.6% during the heatwave in the calibration year, and 2.0% in the validation year using another year of AMI data. Read the details in the open-access article: https://lnkd.in/gCAZ5_jH Authors: Wanni Zhang, Kaiyu Sun, Han Li, Luis Rodríguez-García, Miguel Heleno, Tianzhen Hong. This work is funded by the Office of Electricity, U.S. Department of Energy. We appreciate the collaboration and support from Portland General Electric.
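The headline accuracy figure in the study is the mean absolute percentage error of the daily peak load. As a sketch of how that metric works (the load values here are invented, not from the Portland dataset):

```python
import numpy as np

def daily_peak_mape(measured_hourly, predicted_hourly, hours_per_day=24):
    """MAPE of daily peak load: take the max of each day's profile on
    both sides, then average the absolute percentage errors of the peaks."""
    m = np.asarray(measured_hourly, dtype=float).reshape(-1, hours_per_day)
    p = np.asarray(predicted_hourly, dtype=float).reshape(-1, hours_per_day)
    peaks_m = m.max(axis=1)
    peaks_p = p.max(axis=1)
    return float(np.mean(np.abs(peaks_p - peaks_m) / peaks_m) * 100.0)

# Toy example: two "days" of four load readings each (MW)
measured  = [40, 60, 80, 70,   45, 65, 90, 75]
predicted = [42, 62, 84, 68,   44, 63, 87, 74]
mape = daily_peak_mape(measured, predicted, hours_per_day=4)
print(f"daily peak MAPE = {mape:.2f}%")  # |84-80|/80 and |87-90|/90, averaged
```

Note that this metric only scores the peaks, which is what matters for grid planning; a model can do well here while still missing off-peak shape.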
-
Modelling Detail = Real-World Impact

One lesson we reinforce again and again: modelling accuracy directly affects project outcomes. For example, on a BESS harmonic study, two factors mattered most:
- Modelling the BESS inverter self-impedance accurately, especially its behaviour at higher harmonic orders.
- Getting detailed cable data to capture resonance conditions.

If you shortcut these, you miss the harmonic amplification zones entirely.

For this project, we modelled the BESS inverters as Norton equivalents with tuned current injections and used measured site data across 14 operating states, applying the CIGRE TB 766 methodology to tune the harmonic injections and validating the background grid impedance against envelope sweeps from the DNO.

The takeaway? Site-specific design only works if it starts with site-specific modelling. Generic filter templates don’t solve harmonic issues; bespoke solutions do.
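To see why the inverter self-impedance and cable data matter, here is a heavily simplified single-node sketch of the Norton-equivalent calculation: an injected harmonic current sees the grid branch, the cable capacitance, and the inverter self-impedance in parallel, and the voltage amplification peaks near the L-C resonance. All numbers are illustrative, not from the project, and the inverter self-impedance is modelled as constant-resistive, which is exactly the kind of shortcut the post warns against in a real study:

```python
import numpy as np

# Illustrative parameters only (not from the project described above)
f1 = 50.0                      # fundamental frequency, Hz
R_grid, L_grid = 0.1, 2.0e-3   # grid Thevenin branch, ohm / H
C_cable = 20e-6                # lumped cable capacitance, F
R_inv = 20.0                   # inverter Norton self-impedance, ohm
                               # (constant-resistive simplification)

def pcc_voltage(h, i_h=1.0):
    """|V| at the PCC when the inverter injects i_h amps at harmonic
    order h: Norton source into the parallel grid/cable/inverter network."""
    w = 2 * np.pi * f1 * h
    y_total = (1 / (R_grid + 1j * w * L_grid)   # series grid branch
               + 1j * w * C_cable               # cable shunt capacitance
               + 1 / R_inv)                     # inverter self-impedance
    return abs(i_h / y_total)

v = {h: pcc_voltage(h) for h in range(2, 26)}
worst = max(v, key=v.get)
# L_grid and C_cable resonate near 796 Hz, i.e. around order 16
print(f"worst amplification at h = {worst}: {v[worst]:.1f} V per A injected")
```

Swap in a frequency-dependent measured self-impedance and real cable data and the amplification zone shifts, which is the whole point: generic values put the resonance in the wrong place.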
-
"𝗛𝗼𝘄 𝗮𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗶𝘀 About:Energy'𝘀 𝗯𝗮𝘁𝘁𝗲𝗿𝘆 𝗺𝗼𝗱𝗲𝗹 𝗳𝗼𝗿 𝗺𝘆 𝗲𝘅𝗮𝗰𝘁 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲?” 🔍

This is one of the most common questions we get from customers, and it is a fair one. Our model validation matrices cover 40+ controlled test conditions: different drive cycles, C-rates, temperatures, and SoC windows. But on the surface, none of them may look like a customer's unique drive cycle.

So how do you estimate accuracy without running a brand-new validation test? (Which we can always do!) We map the drive cycle back to what has already been validated. Each segment of the cycle is decomposed into its underlying characteristics: current magnitude, temperature, SoC. We then infer accuracy from our lab-tested validation metrics, allowing us to construct a relative uncertainty band across the full prediction.

In the figure:
• Top plot: predicted voltage trace with the calculated uncertainty band applied
• Bottom plot, green: estimated uncertainty generated by mapping the drive cycle to the validated test matrix
• Bottom plot, black: actual voltage error measured against unseen lab validation data for the same drive cycle

What makes this compelling is that engineers can move quickly without having specific validation data for their exact use case, while still quantifying confidence. By correctly flagging higher uncertainty during high-power events and low error during steady discharge, teams can infer downstream impacts on thermal limits, power capability, and BMS strategy without waiting for a bespoke test campaign.
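The mapping step described above (decompose the cycle, look each segment up in the validated matrix, emit an uncertainty band) can be sketched in a few lines. The bins and error values below are hypothetical placeholders, not About:Energy's actual validation matrix, and real mapping would also bin on temperature and SoC:

```python
import numpy as np

# Hypothetical validated-error table: worst-case voltage error (mV)
# seen in lab validation, binned by |current| in C-rate
validated_error_mv = {
    (0.0, 0.5): 5.0,    # near-rest / trickle
    (0.5, 1.0): 8.0,    # steady discharge
    (1.0, 2.0): 15.0,   # moderate power
    (2.0, 99.0): 35.0,  # high-power events -> higher uncertainty
}

def uncertainty_band(c_rate_trace):
    """Map each drive-cycle sample to the validated bin covering its
    C-rate and return the inferred per-sample uncertainty (mV)."""
    band = np.empty(len(c_rate_trace))
    for i, c in enumerate(np.abs(c_rate_trace)):
        for (lo, hi), err in validated_error_mv.items():
            if lo <= c < hi:
                band[i] = err
                break
    return band

# Drive cycle: cruise, hard acceleration, then light regen (signed C-rate)
cycle = np.array([0.3, 0.7, 0.8, 2.5, 3.0, 1.2, -0.4, -0.2])
band = uncertainty_band(cycle)
print(band)   # band widens over the high-power samples
```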
-
Model system performance and troubleshoot issues. If you run operations in energy, this is your daily reality.

Executives in service and maintenance don’t need more dashboards. We need systems that see the physics, predict the failure, and help us fix the right thing at the right time. The cost of getting this wrong is real: delays in licensing, spiraling maintenance expense, and plants that miss decarbonization targets.

Here’s the shift I see working: move from static models to a living, high-fidelity digital twin. Not a diagram; a virtual reactor that evolves with the physical asset, predicts thermal behavior, and flags risks before they show up on the floor. In the material, you’ll see how reactor teams replaced one-dimensional tools with 3D twins to resolve pebble-level temperatures, validate turbulence models for liquid-metal coolants, and cut validation time without building expensive demonstration units. That’s practical performance insight, not theory.

Why this matters for operations: a twin that captures real physics turns troubleshooting into a targeted plan. When hot spots and thermal striping are modeled accurately, you don’t over-maintain or guess. You condition the system where it needs it, prove it to regulators with validated data, and keep output steady.

Try this today: pick one chronic issue your team sees repeatedly (temperature excursions, mixing anomalies, or vibration in assemblies) and build a minimal digital twin around that single behavior. Validate it with the highest-fidelity data you have, then downgrade to faster models once accuracy is proven. Use it to guide maintenance windows and set thresholds you can defend.

If you’re accountable for uptime and safety, a living twin is the simplest way to see problems before they become events.
-
🔍 1. Volumetric Method
Principle: Estimates hydrocarbons in place (STOIIP/GIIP) from the reservoir’s geometry, porosity, saturation, and formation volume factor. Applies before production begins (static method).
Strengths: Useful early in field life (before production data). Straightforward and quick. Requires only geological and petrophysical data.
Weaknesses: Accuracy depends on data quality (porosity, thickness, area). Assumes uniformity, so it doesn't capture heterogeneity or compartmentalization. Does not account for reservoir connectivity.

🔍 2. Material Balance Method (MBE)
Principle: Uses the law of conservation of mass to estimate Original Hydrocarbon in Place (OHIP) by relating cumulative production to pressure depletion.
Strengths: Applicable once some production data are available. Good for identifying drive mechanisms. Integrates PVT and production data.
Weaknesses: Assumes average reservoir pressure is known accurately. Requires reliable PVT data. Sensitive to aquifer behavior assumptions.

🔍 3. Decline Curve Analysis (DCA)
Principle: Projects future production from historical rate-time trends, assuming reservoir behavior remains consistent. Types: exponential, harmonic, hyperbolic.
Strengths: Simple and fast. Requires only production data. Effective in mature reservoirs.
Weaknesses: Poor predictions early in field life or with unstable production. Doesn’t directly estimate hydrocarbons in place. Assumes constant operating conditions and no interventions.

🔍 4. Reservoir Simulation (Numerical Modeling)
Principle: Uses mathematical models and computer simulation to predict reservoir performance under different scenarios, integrating geology, petrophysics, PVT, SCAL, and production history.
Strengths: Handles complex reservoir geometries. Simulates different development strategies. Powerful for optimization and forecasting.
Weaknesses: Data- and labor-intensive. Requires skilled personnel and calibration. Can produce misleading results if poorly constrained.

🔍 5. Analog/Analytical Models
Principle: Estimates reserves by comparison with similar, previously developed fields (analogs).
Strengths: Quick and low cost. Useful for frontier areas with little data.
Weaknesses: Assumes similarity, which can be misleading. Not suitable for unique or heterogeneous reservoirs.

🔍 6. Probabilistic Methods (Monte Carlo Simulation)
Principle: Applies probability distributions to input variables (porosity, saturation, area, etc.) to generate a range of reserves (P90, P50, P10).
Strengths: Accounts for uncertainty. Provides risk-based estimates. Useful for decision-making and portfolio management.
Weaknesses: Requires proper input distributions. Needs computational resources. Can give false confidence if assumptions are wrong.
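The volumetric and probabilistic methods above combine naturally: run the volumetric equation STOIIP = 7758 · A · h · φ · (1 − Sw) / Bo (A in acres, h in ft, giving stock-tank barrels) under Monte Carlo sampling of the inputs to get P90/P50/P10. A sketch with illustrative input distributions, not data from any real field:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Input distributions (illustrative ranges only)
area = rng.triangular(800, 1000, 1300, N)          # acres
thickness = rng.triangular(40, 55, 75, N)          # net pay, ft
porosity = rng.normal(0.22, 0.02, N).clip(0.05, 0.35)
sw = rng.normal(0.30, 0.04, N).clip(0.05, 0.80)    # water saturation
bo = rng.uniform(1.15, 1.30, N)                    # formation volume factor, rb/stb

# Volumetric equation, evaluated per realization (result in stb)
stoiip = 7758 * area * thickness * porosity * (1 - sw) / bo

# Industry convention: P90 is the low estimate (90% chance of exceeding),
# so it corresponds to the 10th percentile of the distribution
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"P90 = {p90/1e6:.0f}  P50 = {p50/1e6:.0f}  P10 = {p10/1e6:.0f} MMstb")
```

The P90/P10 spread directly exposes the uniformity assumption the volumetric method hides: wide input distributions give a wide reserves range, which is the honest answer early in field life.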
-
The NESO GC0141 requirement for RMS and EMT modelling has been with us for a number of months now and continues to be a substantial cost and source of delay in getting an ION. Like it or not, GC0141 is here to stay, and navigating it correctly is the difference between a project achieving a timely ION and not. If you are not familiar with GC0141, there is a link to the requirements below: https://lnkd.in/eJrq2W6j

In simple terms:
1) The developer must provide an unencrypted open-source RMS model in DIgSILENT PowerFactory. NESO have indicated that a generic open-source model is now preferred, but it is still acceptable to provide a full model provided it is unencrypted.
2) The developer must provide an EMT model in PSCAD format: version 5.0.2 using Intel Fortran oneAPI.
3) The RMS and EMT models submitted now have to be accompanied by additional simulations in the form of a verification report.
4) A detailed user guidance manual must also be provided, detailing all settings, transfer functions, model equivalents, and usage.

The RMS scope is in theory not too difficult, but the logistics of providing the unencrypted files are a problem, and a number of OEMs are having difficulties. Four main approaches are used, and it is essential that the inverter and PPC vendors agree the strategy for this submission at an early stage of the project. At present, Method 2 or 3 is the most common approach.
1. The inverter and PPC OEMs agree to provide unencrypted models at the start of the project, and these are used all the way through.
2. The inverter and PPC OEMs provide encrypted models as normal for the main studies. At the model handover stage, the encrypted model is sent to a trusted third party (such as DIgSILENT) and the OEMs give their unencrypted models to that third party, which swaps out the encrypted parts and submits to NESO. The validation report only works if it is a one-to-one swap of encrypted elements for unencrypted elements.
3. Either the inverter OEM or the PPC OEM provides an equivalent generic model (such as WECC) while the other OEM maintains an encrypted model. A new model is created based on the open model and is validated. The model is then provided to the OEM who still has an encrypted system; that OEM replaces the encrypted system with an unencrypted version and submits to NESO.
4. Both the inverter OEM and the PPC OEM provide equivalent generic models (such as WECC). A validation report and user guide are prepared and all are submitted to NESO. This approach facilitates the use of aggregated modelling techniques.

The EMT scope is technically more challenging, as PSCAD is harder to use and not all OEMs have good PSCAD models. For this part to work smoothly, the inverter OEM and PPC OEM must supply satisfactory models with good guidance documents.

GC0141 is challenging, but achievable if it is planned ahead and the OEMs are ready for it! If you want to know more, please get in touch.
-
𝐈𝐧𝐯𝐞𝐫𝐭𝐞𝐫/𝐏𝐏𝐂/𝐅𝐀𝐂𝐓𝐒 𝐜𝐨𝐧𝐭𝐫𝐨𝐥𝐥𝐞𝐫 𝐦𝐨𝐝𝐞𝐥 𝐯𝐞𝐫𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧

Verification of inverter, Power Plant Controller (PPC), or FACTS controller models using a Real-Time Digital Simulator (RTDS) involves evaluating the model's performance against its corresponding physical hardware in a closed-loop, real-time simulation environment. This process ensures that the model accurately replicates the behavior of the actual device under a range of grid conditions, including both normal operation and fault scenarios.

The model validation process includes the following key steps:
𝟏. 𝐌𝐨𝐝𝐞𝐥 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭: A high-fidelity model of the inverter, PPC, or FACTS controller is developed within a suitable simulation environment. Typically, these models are provided by the Original Equipment Manufacturer (OEM).
𝟐. 𝐇𝐈𝐋 𝐒𝐞𝐭𝐮𝐩: The physical inverter, PPC, or STATCOM controller is interfaced with the RTDS, forming a closed-loop system that enables real-time interaction between the hardware and the simulated environment.
𝟑. 𝐓𝐞𝐬𝐭𝐢𝐧𝐠: The controller is subjected to a range of operating conditions and fault scenarios, such as short circuits and line outages, within the simulated environment.
𝟒. 𝐂𝐨𝐦𝐩𝐚𝐫𝐢𝐬𝐨𝐧: The dynamic response of the physical controller is compared against the simulated model's behavior under identical conditions to assess consistency and accuracy.
𝟓. 𝐕𝐚𝐥𝐢𝐝𝐚𝐭𝐢𝐨𝐧: The model is considered validated if the responses of the physical and simulated systems align within predefined tolerance thresholds.

#inverter #PPC #FACTS #RTDS #HIL #ModelValidation
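Step 5 above reduces to a band check: the model passes a test case if its response stays within a predefined tolerance of the hardware response. A minimal sketch, assuming a simple peak-normalised percentage tolerance (real grid-code envelope procedures are considerably more elaborate, with separate windows and tolerances for pre-fault, fault, and recovery phases):

```python
import numpy as np

def validate_response(hil_trace, model_trace, t, window, tol_pct=10.0):
    """Declare the model validated for one test case if the simulated
    response stays within +/- tol_pct (of the HIL trace's peak) of the
    hardware response over the assessment window [t0, t1]."""
    mask = (t >= window[0]) & (t <= window[1])
    ref = np.abs(hil_trace[mask]).max()           # normalise by HIL peak
    dev = np.abs(model_trace[mask] - hil_trace[mask]) / ref * 100.0
    return bool(dev.max() <= tol_pct), float(dev.max())

# Synthetic active-power recovery after a fault (per-unit)
t = np.linspace(0.0, 1.0, 1001)
hil = 1.0 - 0.5 * np.exp(-8.0 * t)      # hardware-in-the-loop response
model = 1.0 - 0.48 * np.exp(-7.5 * t)   # model recovers slightly faster
ok, worst = validate_response(hil, model, t, window=(0.0, 1.0), tol_pct=5.0)
print(f"validated: {ok} (worst deviation {worst:.2f}% of peak)")
```

In practice this check is run per test case across the whole matrix of faults and operating points, and a single failed window sends the model back to step 1.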