Algebraic Geometry Offers Fresh Solution to Data Center Energy Inefficiency

Key Insights:
• Mathematicians from Virginia Tech, led by Professor Gretchen Matthews and Assistant Professor Hiram Lopez, are applying algebraic geometry to address energy inefficiency in data centers.
• Data replication, a common method for ensuring reliability and backup, significantly increases energy consumption by duplicating information across servers.
• The researchers propose using algebraic structures to distribute data efficiently across servers, reducing redundancy while maintaining reliability.

The Problem with Traditional Data Replication:
• Data centers currently rely on redundant data replication to safeguard against data loss, often replicating data two or three times across servers.
• This approach consumes substantial energy and storage resources, creating environmental and economic inefficiencies.
• With the exponential growth of global data generation, smarter storage and recovery methods are urgently needed.

The Algebraic Geometry Approach:
• The researchers utilize polynomial-based mathematical structures to break data into smaller pieces and distribute them across neighboring servers.
• In case of server failure, instead of relying on multiple full copies of data, neighboring servers can collectively recover the missing information using these algebraic structures.
• While the use of polynomials for data storage dates back to the 1960s, recent advancements allow researchers to build specialized polynomial systems optimized for localized data recovery.

Benefits of the Algebraic Geometry Method:
• Reduced Energy Consumption: Less reliance on redundant copies means lower energy demands for storage and replication.
• Efficient Data Recovery: Localized recovery algorithms minimize the need for long-range data transfers, saving both time and power.
• Scalability: The approach is well-suited for growing data infrastructures, where efficient distribution is increasingly critical.

Implications for the Future of Data Centers:
• Greener Data Centers: Algebraic geometry could help reduce the carbon footprint of large-scale data storage operations.
• Cost Efficiency: Lower energy requirements translate to significant cost savings for data center operators.
• Resilience and Reliability: Localized recovery ensures data remains accessible and secure even during server failures.

Future Outlook:
• Further research will focus on refining the polynomial algorithms to handle larger datasets and integrate seamlessly with existing data center architectures.
• As global data demands continue to grow, these algebraic approaches could play a key role in making data centers more sustainable and cost-efficient.

This innovative application of algebraic geometry to real-world data infrastructure challenges highlights the power of mathematical research in driving technological and environmental advancements.
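The polynomial idea above can be illustrated with a minimal sketch (toy parameters of my own choosing, not the researchers' actual codes): store a data block as evaluations of a polynomial on several "servers", then rebuild any lost evaluation by Lagrange interpolation over surviving servers, so no full copy of the data is ever needed.

```python
# Minimal erasure-coding sketch: store evaluations of a degree-(k-1)
# polynomial on n "servers"; any k survivors can rebuild a lost share.
# Hypothetical toy parameters, not the actual codes from the research.

P = 257  # small prime field GF(257)

def encode(data_coeffs, n):
    """Evaluate the polynomial with the given coefficients at x = 1..n."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data_coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares, x_lost):
    """Lagrange-interpolate surviving shares (mod P) to rebuild a lost one."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (x_lost - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

shares = encode([42, 7, 13], n=5)      # degree-2 polynomial, 5 servers
lost = shares.pop(2)                   # server 3 fails
assert recover(shares[:3], lost[0]) == lost[1]  # any 3 survivors suffice
```

With three data coefficients replicated twice, plain replication would store six values; here five shares tolerate two failures while storing less, which is the energy argument in miniature.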
Mathematical Approaches to Energy Planning
Explore top LinkedIn content from expert professionals.
Summary
Mathematical approaches to energy planning use advanced math, modeling, and simulation tools to make smart decisions about how energy is produced, stored, and used, especially as we add renewables, decentralize grids, and aim for sustainability. These techniques help predict demand, balance resources, and plan for uncertainties like market changes or equipment failures, making our energy systems more reliable and eco-friendly.
- Use smart modeling: Adopt mathematical models and optimization tools to simulate different energy scenarios, which can uncover cost-saving and sustainability opportunities for complex systems like grids or industrial facilities.
- Prepare for uncertainty: Incorporate approaches such as probabilistic forecasting or Monte Carlo simulations to address unpredictable factors like energy price fluctuations, demand shifts, or renewable resource variability.
- Support decision making: Combine data analysis with mathematical strategies to guide choices about technology investments, resource allocation, and policy planning in both current and future energy landscapes.
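The "smart modeling" bullet can be made concrete with a minimal merit-order dispatch sketch (hypothetical plant data, not drawn from any post below): generators are stacked from cheapest to most expensive until demand is met, a basic building block of many grid-planning models.

```python
# Minimal merit-order dispatch sketch (hypothetical plant data):
# stack generators cheapest-first until demand is covered.

def dispatch(plants, demand):
    """plants: list of (name, capacity_MW, cost_per_MWh). Returns MW schedule."""
    schedule = {}
    remaining = demand
    for name, cap, _cost in sorted(plants, key=lambda p: p[2]):
        take = min(cap, remaining)
        schedule[name] = take
        remaining -= take
        if remaining == 0:
            break
    return schedule

plants = [("coal", 400, 30.0), ("wind", 300, 5.0), ("gas", 500, 60.0)]
print(dispatch(plants, 600))  # wind runs first, coal fills the rest, gas idles
```

Real planning tools layer network constraints, storage, and uncertainty on top of this greedy core, which is where the optimization and probabilistic techniques below come in.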
-
Ever wondered how to co-optimise electricity and gas networks in PLEXOS? Or how to quantify hydrogen demand for urban transportation purposes? Or how to use stochastic modelling to size renewable generation, battery energy storage, electrolyser capacity and hydrogen storage optimally to meet yearly H₂ demand under both supply and demand side uncertainty?

Then my latest paper, titled "Design and Stochastic Analysis of an Off-Grid Green Hydrogen Refuelling Station for Urban Transportation", is for you, and it is now available as a preprint at SSRN! 🔗 https://lnkd.in/dNRbE6Ks

This work took me slightly off-track from my core PhD research, but I've always been interested in electricity–gas co-optimisation.

What are you going to read about? I developed a full optimisation + simulation framework in PLEXOS, including:
• A mathematical formulation to model hourly hydrogen demand for an urban bus fleet
• A mixed-integer capacity expansion model to size wind, BESS, PEM electrolysis and H₂ storage over a 25-year horizon
• Dual Monte Carlo sampling applied to both the wind resource and hydrogen demand
• A full 8,760-hour chronological simulation resolving renewable intermittency, battery cycling, electrolyser loading and hydrogen storage dynamics

Key insights
• Zero hydrogen shortages across all stochastic scenarios
• The wind farm can reach a ~52% capacity factor
• Levelised cost of hydrogen (LCOH): 3.41 USD/kg
• Policy mechanisms like strike-price contracts can reduce payback to under 4 years

If this topic interests you (energy system optimisation, urban H₂ demand modelling, off-grid design, or stochastic planning), I'd love to hear your thoughts.
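An LCOH figure like the one quoted above is, in general, a ratio of discounted lifetime cost to discounted lifetime hydrogen output. A minimal sketch with hypothetical cash flows (not the paper's inputs, so it will not reproduce the 3.41 USD/kg result):

```python
# Levelised cost of hydrogen (LCOH) sketch: discounted lifetime cost
# divided by discounted lifetime H2 production. Hypothetical numbers only.

def lcoh(capex, opex_per_year, kg_per_year, years, rate):
    """LCOH in $/kg for constant annual opex and output, discounted at `rate`."""
    disc_cost = capex + sum(opex_per_year / (1 + rate) ** t
                            for t in range(1, years + 1))
    disc_kg = sum(kg_per_year / (1 + rate) ** t for t in range(1, years + 1))
    return disc_cost / disc_kg

# e.g. 10 M$ capex, 0.4 M$/yr opex, 250 t/yr of H2, 25 years at 7%
print(round(lcoh(10e6, 0.4e6, 250e3, 25, 0.07), 2))
```

Because output is discounted with the same factor as cost, LCOH can be read as the constant hydrogen price at which the project breaks even over its lifetime.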
-
𝗧𝗼𝘄𝗮𝗿𝗱𝘀 𝗰𝗮𝗿𝗯𝗼𝗻-𝗻𝗲𝗴𝗮𝘁𝗶𝘃𝗲 𝗽𝗿𝗶𝗺𝗮𝗿𝘆 𝗮𝗹𝘂𝗺𝗶𝗻𝗶𝘂𝗺 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻: 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗻𝗴 𝗯𝗶𝗼𝗺𝗮𝘀𝘀 𝗿𝗲𝘀𝗼𝘂𝗿𝗰𝗲𝘀 𝗮𝗻𝗱 𝗿𝗲𝗻𝗲𝘄𝗮𝗯𝗹𝗲 𝗲𝗹𝗲𝗰𝘁𝗿𝗶𝗰𝗶𝘁𝘆
by Dareen Dardor, Daniel Flórez-Orrego, Reginald Germanier, Manuele Margni, Francois Marechal

#Highlights
🔴 Decarbonization solutions outperform the current baseline under 50% of price scenarios.
🔴 Bio-electric configurations are 58% likely to be the best decarbonization option.
🔴 Solutions relying heavily on electricity are less resilient during energy crises.
🔴 A CO2 tax paired with reduced fossil subsidies can encourage decarbonization.

Secondary aluminium #production facilities typically consume 700–1,000 kWh of #naturalgas and 200–400 kWh of #electricity per tonne of #rolledsheets. To achieve #environmental #targets, the #aluminiumindustry is exploring #decarbonization #strategies, including #biomassgasification, #carbon #abatement and #utilization, #powertogas, #directelectrification, and #wasteheatrecovery, among others. While most of these #technologies have lifetimes of a couple of decades, decisions on their installation must be made today. Biomass, electricity, and natural gas #costs can be subject to unpredictable #market variations, whereas #carbonprices are related to environmental #regulations and future #marketsituations. Therefore, current decarbonization #decisions must account for uncertainty in future energy #prices.

This study presents a systemic approach to incorporate #energyprice #fluctuations into decarbonization planning for #secondaryaluminium production. A mixed integer linear programming (#MILP) approach is used to generate a list of #feasiblesystem #configurations under 4,000 combinations of energy prices and carbon taxes. Next, #MonteCarlo #simulations are applied to predict energy price trends and assess the resilience of the favourable scenarios from the MILP approach under "#stochastic" or "#crisis" circumstances.

Results show that decarbonization #pathways are less costly than fossil #CO2-emitting configurations in 50% of the price combinations. Among these decarbonization configurations, the #pathway combining #electricity and #biomass is the most economical. However, its likelihood of outperforming the #naturalgas-driven baseline over a 25-year lifetime is estimated at 22%–37% under stochastic energy price profiles. Finally, resource #diversification, such as #biomassutilization, reduces risk during #economiccrises by 6% compared to complete electrification.

Industrial Process and Energy Systems Engineering at EPFL, École polytechnique fédérale de Lausanne, EPFL, Institute of Energy and Environment, School of Engineering, HES-SO Valais-Wallis, Novelis Switzerland S.A., Faculty of Minas, Universidad Nacional de Colombia

https://lnkd.in/djP3vxzS (Elsevier #Sciencedirect)
-
As energy systems continue to decentralise, the optimal deployment of distributed generators (DGs) has become an increasingly complex engineering problem. Placing these units at the right location and with the right capacity is essential not only for minimising power losses, but also for maintaining voltage stability and improving overall grid performance.

In our 2022 paper, A Novel Hybrid Fuzzy-Metaheuristic Strategy for Estimation of Optimal Size and Location of the Distributed Generators, published in Energy Reports, we presented an advanced computational approach to this challenge. This work has already been cited 12 times and has proven particularly relevant for researchers and planners working in smart grid design and renewable integration.

Our proposed method combines:
- Fuzzy logic for handling uncertainty and imprecise inputs
- A metaheuristic optimisation algorithm (inspired by swarm or evolutionary intelligence) for global search
- A modular framework suitable for both radial and meshed distribution systems

This hybrid strategy allows for a more nuanced and robust decision-making process, accommodating the inherent variability in renewable generation and load demand. Compared to traditional techniques, our model achieved superior outcomes in loss reduction, voltage profile improvement, and computational efficiency.

We also included a comprehensive sensitivity analysis to demonstrate the approach's flexibility under different loading scenarios and DG penetration levels. This ensures that planners can adapt the method to a range of practical grid configurations. As distribution systems evolve toward greater complexity, this paper contributes a scalable and intelligent toolset for engineers aiming to enhance the sustainability, reliability, and economics of modern grids.

📖 Access the Full Paper: DOI: 10.1016/J.EGYR.2022.09.019
📢 Contribute Your Insights
What optimisation techniques have you used in DG planning?
Are fuzzy systems or hybrid models part of your toolkit? I would welcome your perspectives and potential collaboration opportunities.
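To make the fuzzy component of such hybrids concrete, here is a minimal sketch (illustrative only, not the paper's actual membership functions or rules): a triangular membership function grading bus voltages, the kind of score a fuzzy DG-placement scheme could combine with loss-sensitivity indices before handing candidates to the metaheuristic.

```python
# Triangular fuzzy membership sketch (illustrative, not the paper's rules):
# grade how "acceptable" a bus voltage is, peaking at the 1.00 p.u.
# nominal value and falling to 0 at +/- 0.05 p.u.

def tri_membership(x, lo, peak, hi):
    """Triangular membership function on [lo, hi], peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Hypothetical bus voltages in per-unit
voltages = {"bus4": 1.00, "bus7": 0.97, "bus9": 1.04}
scores = {b: tri_membership(v, 0.95, 1.00, 1.05) for b, v in voltages.items()}
print(scores)  # candidate DG sites could be ranked by combining such scores
```

Fuzzifying several crisp indices this way and aggregating them (e.g. by min or product) is what lets the hybrid tolerate imprecise load and generation data.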
-
Applied Energy: Together with my postdocs Dr. Shang Jiang and Dr. Quoc Cong Tran, we proposed a novel approach to forecasting transportation energy demand in New Zealand. Our study presents a new AI-driven model that combines context-specific variables (like oil prices, GDP, tourism, and travel data) with advanced techniques (a Graph Recurrent Unit (GRU) and a Denoising Diffusion Probabilistic Model (DDPM)) to improve energy demand forecasting, even in disruptive times. The proposed approach effectively captures temporal dynamics and provides robust probabilistic predictions.

📍 Case study: New Zealand
✅ Outperforms traditional methods, especially during COVID-19
📈 Offers robust predictions for future energy demand under various recovery scenarios

Using this model to predict demand for hydrogen, biofuels, and other energy sources can support smarter decision-making for policymakers and energy providers.

#CTSLAB #EnergyForecasting #AI #SustainableTransport #DataScience #EnergyPolicy #SmartPlanning #NewZealand #Research https://lnkd.in/gfcqjVp8
-
Direct lookahead approximations IX – Parameterized deterministic lookaheads for stochastic problems

The approach that has been most overlooked in the literature (but widely used in practice) is the idea of making decisions with a deterministic approximation that is parameterized to work well in practice. The gif below illustrates the problem of planning how much energy to draw from a wind farm and the grid to meet a time-varying load, using a storage device to help smooth over the variations. There is a rolling forecast of the wind energy that changes quickly over the day. The problem is solved by using a deterministic lookahead policy, where forecasts are multiplied by coefficients \theta_\tau, where \tau = 1, 2, …, 24 is how many hours into the future we are forecasting. When we optimize (tune) these parameters, we get performance that is 30 percent better than if we just set \theta_\tau = 1. Tuning these parameters is hard, but using the tuned policy is no more difficult than a vanilla deterministic lookahead. Parameterized deterministic optimization models can be thought of as bridging classical deterministic optimization and parametric machine learning. The key is recognizing that the objective function is the performance of the policy over time, not the cost function at a point in time.
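The idea above can be sketched in miniature (a hypothetical toy model of my own, with a single scalar theta and no storage device, not the author's actual benchmark): plan grid purchases against a scaled wind forecast, then tune theta by simulating the policy's total cost, exactly in the spirit of "the objective is the performance of the policy over time".

```python
# Parameterized deterministic lookahead sketch (hypothetical toy model):
# plan grid purchases against a discounted forecast theta * f_t, then
# tune theta by simulating the policy against noisy wind realizations.
import random

HOURS = 24
load = [100.0] * HOURS                                  # flat load, MW
forecast = [60.0 + 30.0 * (t % 6) / 5 for t in range(HOURS)]  # wind forecast

def simulate(theta, grid_price=50.0, shortfall_price=500.0):
    """Total cost of the policy: buy load - theta*forecast from the grid,
    then pay a heavy penalty for any shortfall once actual wind arrives."""
    cost = 0.0
    rng = random.Random(42)             # fixed scenario for fair comparison
    for t in range(HOURS):
        buy = max(0.0, load[t] - theta * forecast[t])
        actual_wind = forecast[t] * rng.uniform(0.5, 1.1)  # forecasts run high
        shortfall = max(0.0, load[t] - actual_wind - buy)
        cost += grid_price * buy + shortfall_price * shortfall
    return cost

# Tune theta by grid search; theta = 1 means trusting the forecast literally.
best_theta = min((round(th * 0.1, 1) for th in range(0, 16)), key=simulate)
assert simulate(best_theta) <= simulate(1.0)  # the tuned policy cannot lose
```

Because forecasts here are biased high and shortfalls are expensive, the tuned theta discounts the forecast, which is precisely the behavior the parameterization is meant to discover.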
-
🔍 1. Volumetric Method
Principle: Estimates hydrocarbons in place (STOIIP/GIIP) based on the reservoir's geometry, porosity, saturation, and formation volume factor. Applies before production begins (static method).
Strengths:
- Useful in early field life (before production data).
- Straightforward and quick.
- Requires only geological and petrophysical data.
Weaknesses:
- Accuracy depends on data quality (porosity, thickness, area).
- Assumes uniformity; doesn't capture heterogeneity or compartmentalization.
- Does not account for reservoir connectivity.

🔍 2. Material Balance Method (MBE)
Principle: Uses the law of conservation of mass to estimate Original Hydrocarbon in Place (OHIP) by relating cumulative production to pressure depletion.
Strengths:
- Applicable after some production data is available.
- Good for estimating drive mechanisms.
- Integrates PVT and production data.
Weaknesses:
- Assumes average reservoir pressure is known accurately.
- Requires reliable PVT data.
- Sensitive to aquifer behavior assumptions.

🔍 3. Decline Curve Analysis (DCA)
Principle: Projects future production using historical trends (rate-time data), assuming reservoir behavior remains consistent. Types include exponential, harmonic, and hyperbolic.
Strengths:
- Simple and fast.
- Requires only production data.
- Effective in mature reservoirs.
Weaknesses:
- Poor prediction in early life or unstable production.
- Doesn't directly estimate hydrocarbons in place.
- Assumes constant operating conditions and no interventions.

🔍 4. Reservoir Simulation (Numerical Modeling)
Principle: Uses mathematical models and computer simulations to predict reservoir performance under different scenarios. Integrates geology, petrophysics, PVT, SCAL, and production history.
Strengths:
- Handles complex reservoir geometries.
- Simulates different development strategies.
- Powerful for optimization and forecasting.
Weaknesses:
- Data- and labor-intensive.
- Requires skilled personnel and calibration.
- Can produce misleading results if poorly constrained.

🔍 5. Analog/Analytical Models
Principle: Estimates reserves by comparing with similar, previously developed fields (analogs).
Strengths:
- Quick and low cost.
- Useful for frontier areas with little data.
Weaknesses:
- Assumes similarity; can be misleading.
- Not suitable for unique or heterogeneous reservoirs.

🔍 6. Probabilistic Methods (Monte Carlo Simulation)
Principle: Applies probability distributions to input variables (porosity, saturation, area, etc.) to generate a range (P90, P50, P10) of reserves.
Strengths:
- Accounts for uncertainty.
- Provides risk-based estimates.
- Useful for decision-making and portfolio management.
Weaknesses:
- Requires proper input distributions.
- Computational resources needed.
- Can give false confidence if assumptions are wrong.
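Methods 1 and 6 combine naturally: sample the volumetric inputs from distributions and read P90/P50/P10 off the resulting STOIIP spread. A minimal Monte Carlo sketch with hypothetical input distributions (not field data):

```python
# Monte Carlo volumetric STOIIP sketch (hypothetical distributions):
# STOIIP = GRV * N/G * porosity * (1 - Sw) / Bo, sampled many times to
# produce P90 (conservative) / P50 / P10 (optimistic) estimates.
import random

N = 20_000

def sample_stoiip(rng):
    grv = rng.uniform(80e6, 120e6)        # gross rock volume, m^3
    ntg = rng.uniform(0.6, 0.8)           # net-to-gross ratio
    phi = rng.normalvariate(0.22, 0.03)   # porosity
    sw = rng.uniform(0.25, 0.40)          # water saturation
    bo = rng.uniform(1.1, 1.3)            # formation volume factor, rm3/sm3
    return grv * ntg * max(phi, 0.0) * (1 - sw) / bo

rng = random.Random(7)
draws = sorted(sample_stoiip(rng) for _ in range(N))
# P90 = value exceeded by 90% of draws, i.e. the 10th percentile of volume
p90, p50, p10 = (draws[int(N * q)] for q in (0.10, 0.50, 0.90))
print(f"P90={p90:.3g}  P50={p50:.3g}  P10={p10:.3g}  stock-tank m^3")
```

The spread between P90 and P10 is itself a diagnostic: it shrinks as the input distributions tighten, which is why the method is only as honest as its assumed distributions.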
-
🔍 Solar Heating System Modeling | Sustainability Note

In the renewable energy landscape, agility matters. When it comes to designing or scaling solar heating systems, it's not just about estimating peak output; it's about understanding the impact of change. What happens if we tweak the inclination? Reduce the number of panels? Vary the sunlight hours?

💡 This is where mathematical modeling (steady-state and dynamic) proves invaluable. Using dynamic models, we're able to simulate hundreds of sensitivity cases in minutes, adjusting factors like panel angle, solar irradiance, and operational hours to evaluate performance before physical implementation. Instead of static spreadsheets or trial-and-error decisions, we rely on data-backed simulations to:
1- Quantify power generation under different design scenarios
2- Optimize for cost, output, and footprint
3- Support investment decisions with confidence

Whether it's a 20-panel rooftop or a utility-scale field, modeling gives us the power to plan smarter and move faster.

🌞 Energy output in kW/m² isn't just a number. It's a decision driver.

#Sustainability #AspenTech #AspenCustomModelr #SolarEnergy #DigitalEngineering #EnergyTransition #MathematicalModeling #SensitivityAnalysis #CleanTech #Simulation #ProcessOptimization #OptimizeXP #UAE #Emerson
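In the spirit of those sensitivity cases, here is a minimal tilt-angle sweep (a deliberately crude clear-sky geometry model with hypothetical numbers, nothing like an Aspen dynamic model): vary the panel inclination and compare daily energy collected per unit area.

```python
# Panel-tilt sensitivity sweep (simplified model, hypothetical numbers):
# plane-of-array irradiance ~ DNI * cos(incidence angle); sweep the tilt
# and compare daily energy yield per m^2.
import math

DNI = 0.9  # kW/m^2, assumed constant clear-sky direct normal irradiance

def daily_energy(tilt_deg, lat_deg=25.0):
    """Sum hourly irradiance on a south-facing panel (toy solar geometry)."""
    total = 0.0
    for hour in range(6, 19):                         # 06:00 to 18:00
        elev = 90.0 - lat_deg - 60.0 * abs(hour - 12) / 6  # crude sun elevation
        if elev <= 0:
            continue                                  # sun below horizon
        incidence = abs(90.0 - elev - tilt_deg)       # angle off panel normal
        total += DNI * max(math.cos(math.radians(incidence)), 0.0)
    return total  # kWh per m^2 per day (1-hour steps)

results = {tilt: round(daily_energy(tilt), 2) for tilt in range(0, 61, 15)}
print(results)  # yield falls off on either side of the best tilt
```

Even this toy sweep shows the point of the post: one loop answers "what if we tweak the inclination?" across many cases before anything is built.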
-
I am happy to share that our article "Application of quantum computing to temporal aggregation for efficient capacity expansion in power systems" is now available open access on ScienceDirect. In this work, we explore a quantum-inspired approach to selecting representative days for capacity expansion planning, cutting solve times while keeping cost accuracy high.

Some highlights:
• Formulate a QUBO for selecting k representative days for temporal aggregation in CEP
• Use PCA to reduce daily feature dimensions before QUBO construction
• Half-month segmentation keeps each QUBO within current qubit hardware limits
• With k = 3 days, we see only ~4–6% cost deviation on IEEE 9-, 30-, and 118-bus systems
• Benchmark against classical MCMC k-medoids representative-day selection

Many thanks to Ankana Singha and Prof. Sambeet Mishra; it has been a pleasure working together on this topic.

🔗 Final version (open access): https://lnkd.in/dXbw6whB

#OpenAccess #QuantumComputing #PowerSystems #EnergySystems #CapacityExpansion #EnergyTransition #Optimization #OperationsResearch #GridPlanning #RenewablesIntegration #TemporalAggregation #QUBO #PyPSA
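A QUBO for day selection can be sketched in miniature (hypothetical feature vectors and a simplified diversity objective of my own, with brute force standing in for quantum hardware; the paper's actual objective and data differ): binary variables mark chosen days, and a quadratic penalty enforces picking exactly k of them.

```python
# Toy QUBO for picking k = 2 representative days out of n = 6
# (hypothetical feature vectors; exhaustive search stands in for a
# quantum annealer). Selected days should be mutually dissimilar, and a
# quadratic penalty term enforces exactly k selections.
from itertools import product

days = [(0.9, 0.1), (0.85, 0.15), (0.2, 0.8),
        (0.25, 0.7), (0.5, 0.5), (0.55, 0.45)]   # e.g. (solar, wind) features
n, k, lam = len(days), 2, 10.0

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def energy(x):
    """QUBO energy: -sum_{i<j} d2_ij x_i x_j + lam * (sum_i x_i - k)^2."""
    pair = -sum(dist2(days[i], days[j]) * x[i] * x[j]
                for i in range(n) for j in range(i + 1, n))
    return pair + lam * (sum(x) - k) ** 2

best = min(product((0, 1), repeat=n), key=energy)
chosen = [i for i, xi in enumerate(best) if xi]
print(chosen)  # the two most dissimilar profiles win
```

The cardinality constraint stays quadratic because x² = x for binary variables, which is exactly what keeps such formulations inside the QUBO class that annealing hardware accepts.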
-
This study focuses on optimizing an isolated solar-wind-diesel microgrid to diminish dependency on diesel generators, reduce operational expenses, and alleviate environmental pollution in remote regions. The optimization process involves combining the arithmetic optimization algorithm with the golden jackal optimization to achieve optimal capacity planning while considering economic and emission dispatch aspects. This fusion improves the optimization process by balancing exploration and exploitation through the arithmetic operators of the arithmetic optimization algorithm and the adaptive search capabilities of the golden jackal optimization. A performance analysis is carried out by simulating and comparing three scenarios: utilizing only diesel generators, employing a solar-wind-diesel system, and utilizing a solar-wind system with a reduced number of diesel generators. The findings reveal substantial cost reductions when implementing the solar-wind-diesel microgrid under the proposed combined optimization approach, showcasing superior outcomes compared to using solely the arithmetic optimization algorithm, the golden jackal algorithm, and traditional metaheuristic optimization methods based on genetic algorithms.
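The economic-and-emission dispatch trade-off at the heart of that study can be illustrated with a minimal weighted-sum sketch (hypothetical unit data, and a plain exhaustive search standing in for the hybrid arithmetic/golden-jackal optimization):

```python
# Weighted-sum economic/emission dispatch sketch (hypothetical unit data;
# exhaustive search stands in for the hybrid metaheuristic). Split a
# 100 kW load across sources, trading fuel cost against CO2 emissions.

# (cost $/kWh, kg CO2/kWh, max kW) for each source
units = {"diesel": (0.30, 0.8, 100),
         "solar": (0.05, 0.0, 60),
         "wind": (0.06, 0.0, 50)}

def objective(alloc, w_cost=1.0, w_em=0.5):
    """Weighted sum of fuel cost and emissions for an allocation in kW."""
    cost = sum(units[u][0] * kw for u, kw in alloc.items())
    emis = sum(units[u][1] * kw for u, kw in alloc.items())
    return w_cost * cost + w_em * emis

def best_dispatch(load=100):
    best, best_val = None, float("inf")
    for s in range(0, units["solar"][2] + 1, 10):      # solar kW, 10 kW steps
        for w in range(0, units["wind"][2] + 1, 10):   # wind kW
            d = load - s - w                           # diesel covers the rest
            if 0 <= d <= units["diesel"][2]:
                val = objective({"diesel": d, "solar": s, "wind": w})
                if val < best_val:
                    best, best_val = {"diesel": d, "solar": s, "wind": w}, val
    return best

print(best_dispatch())  # renewables carry the load; diesel drops to zero
```

Metaheuristics like the hybrid in the study replace this brute-force loop with guided sampling, which is what makes the same idea tractable when the decision space includes sizing, sitting, and multi-period dispatch.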