Data Center Operations

Explore top LinkedIn content from expert professionals.

  • View profile for Guy Massey

    Scale the networks satisfying AI demand | $1.6B delivered for Google, Microsoft & Meta | Top 10 Data Centre LinkedIn Voice | CommScope | “The Hyperscale Hero”

    59,293 followers

    Microsoft and Google just rewrote the map for data centre growth. The next chapter starts now. Let’s break it down 👇

    → Microsoft: 55 MW, all-renewable, new cloud region in Sweden (that’s five in the Nordics!). Powered by wind and hydro, built for AI, connected with fiber and subsea cables. Fast, green, and built for low-latency workloads.

    → Google: 150 MW hyperscale campus in Chile, using solar and geothermal. The goal? 60% renewables by 2035. This site will drive local jobs, fuel AI, and unlock a new digital backbone for South America.

    But here’s the real headline: hyperscaler CapEx will almost DOUBLE in two years: $170B (2024) → $320B (2026). Let that sink in!

    Why the surge? Three forces:
    1️⃣ AI is exploding: training and inference need more power, everywhere.
    2️⃣ Data sovereignty: regulations demand regional builds.
    3️⃣ Sustainability: green certification is now a must-have.

    What’s changed?
    The old way: centralised mega-cities with “cheap land.”
    The new way: go where clean power, fast networks, and local laws align.

    Here’s what I’m seeing on the ground:
    → Every site selection starts with power: how green? How reliable?
    → Latency matters: close to users, close to the edge.
    → Policy is king: compliance first, not an afterthought.

    For companies looking to scale:
    • Pick sites with renewable energy as a priority. Power = competitive edge.
    • Build for low latency and local rules. It pays off later.
    • Watch where the hyperscalers invest. The ecosystem follows.

    Microsoft in Sweden. Google in Chile. These are not one-offs. They’re signals for the whole industry. The new geography of hyperscale is about green power, distributed sites, and AI at the core.

    Where do you see the biggest shift: energy, policy, or connectivity? Share how you’re navigating the new map.

  • View profile for Kris McGee

    Advisor, Senior VP, eXp Commercial | Dirt Dawg | I Sell Land, Sometimes It Has Stuff On It | 32 Years Helping Visionary Investors See What Others Miss

    5,517 followers

    "How to Evaluate a Building for Data Center Conversion"

    Earlier this week I shared how Chicago developers turned a $12 million office building into a $40 million data center in 15 months. Today, let's talk about what to look for.

    The Five Critical Factors:

    1. Power Infrastructure
    This is the dealbreaker. Can you increase capacity to 30-50 megawatts? Existing transformers? Proximity to substations? The Chicago building had substantial electrical infrastructure from its trading floor days. Without power capacity, you don't have a deal.

    2. Building Structure
    You need: wide, column-free floors; high ceilings for cooling; floor load capacity for server weight; cavernous layouts. The Cboe building was designed for trading floors, which converts perfectly to data centers.

    3. Existing Connectivity
    "This building is very heavily wired from its time as a trading platform," said buyer Daniel English. Look for heavy wiring, fiber proximity, and urban locations near connectivity hubs.

    4. Cooling Potential
    CRE Daily reports liquid cooling is becoming standard as power densities jump from 120 kW per rack today to 600 kW by 2027. Can the building support liquid cooling systems and upgraded HVAC?

    5. Urban Location Advantage
    English explained why urban conversions command premiums: "Just like Amazon last-mile delivery, data centers take less time to deliver when they're close." Low-latency applications (trading, streaming, gaming) pay premiums for urban proximity.

    The Best Candidates: former trading floors, financial services buildings, telecom facilities, and heavy industrial with power infrastructure.

    My Take: The Chicago flip proves it. The biggest returns aren't in greenfield development. They're in buying assets where someone else already solved the hard problems and the market hasn't caught up.

    What building in your market has these five factors? Because while everyone else sees obsolete real estate, you might be looking at a 233% return in 15 months.

    What are you seeing that others are missing?

    Sources: "Flip of former Cboe Global Markets headquarters in Chicago shows soaring data storage values" by Ryan Ori, CoStar News, October 23, 2025; "Data Centers Driving Growth In AI And Real Estate," CRE Daily, PrincipalAM research.
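    The "233% return" headline can be checked from the two figures the post cites. A minimal sketch, noting that conversion capex isn't given in the post, so this is gross return on the purchase price only:

    ```python
    # Figures from the post: $12M office building, sold as a $40M data
    # center 15 months later. Conversion costs are not cited, so this is
    # gross return on the purchase price, not a full project IRR.
    purchase_price = 12_000_000
    sale_price = 40_000_000

    profit = sale_price - purchase_price
    roi = profit / purchase_price  # simple return on cost

    print(f"Profit: ${profit:,}")      # Profit: $28,000,000
    print(f"Gross return: {roi:.0%}")  # Gross return: 233%
    ```

    The true equity return would be lower once conversion capex is subtracted, but the order of magnitude is what makes the thesis work.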

  • View profile for Patrick Collins

    CEO at Novaro Capital • $9bn+ of Transaction Experience • Opportunistic Real Estate Investments

    14,688 followers

    We're actively negotiating on several data center sites: 200MW+ greenfield and brownfield projects, a site ready for new construction, and multiple edge portfolios. The due diligence has been a masterclass.

    Data center investing isn't real estate anymore. It's a hybrid of real estate, energy, and technology. And if you're a lifelong learner, this is one of the most fascinating spaces to be in right now.

    -The Basics Everyone Knows-
    You need land. You need power, fiber, and water. That's the easy part. What qualifies that land is where it gets complex.

    -Power-
    It's not just "do you have power?" It's:
    • What type of power: grid, renewable, on-site generation?
    • How many redundant sources are available?
    • What's the timeline to energize: months or years?
    • What's the cost per MW, and how does it escalate?
    • Can you secure a PPA that makes the economics work?

    -Fiber-
    • How many points of connectivity do you have?
    • How close is your nearest point of presence (POP)?
    • Are there dark fiber lines available to light up?
    • What's the latency to major cloud on-ramps?

    -Water-
    • What's the source: municipal, well, reclaimed?
    • Is it sustainable at scale?
    • Will local stakeholders oppose the usage?
    • What are the cooling alternatives if water becomes constrained?

    Each of these has layers underneath. And that's before you get into the technical due diligence.

    -Where It Gets Interesting-
    Once you understand the fundamentals, a whole new world opens up. Alternative power sources: nuclear, SMRs, behind-the-meter solar. Equipment sourcing and lead times for transformers and switchgear. Understanding the energy draw from AI workloads versus traditional compute. The difference between training clusters and inference at the edge.

    Then there's the structuring: PPAs, offtake agreements, utility negotiations, local stakeholder alignment, tax incentives, and creative financing that makes projects pencil.

    -Why This Matters-
    The operators who win in this space won't just be real estate people. They'll be the ones who understand energy markets, technology roadmaps, and how to structure deals that work for utilities, communities, and capital partners simultaneously.

    Data centers reward curiosity. The more you learn, the more creative you can get, and the better the opportunities you can unlock.

    We're deep in this right now. It's been one of the steepest learning curves we've taken on, and one of the most rewarding.

    Who else is navigating the complexity of data center development?
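    The power/fiber/water questions above amount to a go/no-go screen. A minimal sketch of what that checklist might look like in code; the field names and thresholds here are hypothetical illustrations, not an underwriting model:

    ```python
    # Hypothetical site screen built from the questions in the post.
    # Every threshold (36 months, 2 feeds, 2 POPs) is illustrative only.
    def screen_site(site: dict) -> tuple[bool, list[str]]:
        """Return (passes, red_flags) for a candidate parcel."""
        flags = []
        if site["months_to_energize"] > 36:
            flags.append("power timeline beyond 36 months")
        if site["redundant_feeds"] < 2:
            flags.append("single point of failure on power")
        if site["fiber_pops_within_10mi"] < 2:
            flags.append("insufficient connectivity options")
        if site["water_source"] not in {"municipal", "reclaimed"} \
                and not site["air_cooling_viable"]:
            flags.append("no sustainable cooling path")
        return (len(flags) == 0, flags)

    candidate = {
        "months_to_energize": 24,
        "redundant_feeds": 2,
        "fiber_pops_within_10mi": 3,
        "water_source": "reclaimed",
        "air_cooling_viable": True,
    }
    ok, flags = screen_site(candidate)
    print(ok, flags)  # True []
    ```

    The point of encoding the screen isn't automation for its own sake; it forces every "layer underneath" to be named explicitly before capital is committed.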

  • Is your Data Center Baking Clay? Should you be worried about the ground under your data center's feet?

    I want to highlight the work Mara Zwicker at Substrata has done in visualizing a risk that most of the industry treats as a rounding error. She modeled five real-world sites to see how 100 MW of high-density AI infrastructure interacts with specific geological profiles over two decades.

    The modeling suggests that our standard approach to site selection often relies on treating the ground as an effectively infinite thermal sink. While this holds for legacy densities, that assumption can break as we move toward concentrated liquid-to-chip architectures with much higher densities, combined with BESS solutions and the like.

    In the world of regenerative infrastructure, we have to stop viewing the ground as just a static platform for weight. If you are building for a 30-year horizon, that soil is a thermal battery. Ignoring the "Ground Heat Bulb" doesn't lead to immediate failure, but it introduces long-term friction, with potential MEP misalignment and structural subsidence that cannot be fixed after the fact.

    Reliability starts at the bedrock. If the local geology can’t disperse the thermal stress of your specific cooling architecture, you aren't managing a Tier III+ facility; you’re managing an asset whose long-term structural behavior is not understood.

    This isn't a universal threat; it is highly site-dependent. Granular soils with high conductivity disperse heat effectively, while organic clays or volcanic ash can trap it, leading to the gradual consolidation that threatens high-voltage feeds and coolant piping.

    Sovereign power requires us to protect the literal foundation of our infrastructure by moving beyond simple weight-bearing capacity to include how that ground behaves thermally over time. It’s ironic that while we build for 24-month GPU cycles, we may need to consider ground risks that manifest over 10-30 years.
    If you'd like to follow Mara's work, please visit her here: https://lnkd.in/gu3ZF4CV
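    The scale of the "Ground Heat Bulb" effect can be sketched with the standard conduction result that sustained heat penetrates roughly the diffusion length L = √(αt) into soil. A minimal illustration using typical textbook thermal diffusivities; these values and the 20-year horizon are illustrative, not Substrata's model:

    ```python
    # Diffusion length L = sqrt(alpha * t) for sustained heating of soil.
    # Diffusivity values are typical textbook ranges, chosen to illustrate
    # why soil type matters; they are not site-specific measurements.
    import math

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    # Thermal diffusivity alpha in m^2/s (illustrative values)
    soils = {
        "saturated sand": 1.0e-6,  # conducts well: heat disperses
        "dense clay":     0.4e-6,
        "organic clay":   0.1e-6,  # conducts poorly: heat concentrates
    }

    for name, alpha in soils.items():
        depth_m = math.sqrt(alpha * 20 * SECONDS_PER_YEAR)
        print(f"{name:14s}: ~{depth_m:.0f} m diffusion length over 20 years")
    ```

    The asymmetry is the point: in a low-diffusivity clay the heat bulb stays small but hot, concentrating thermal stress (and the consolidation risk the post describes) right under the foundation instead of dispersing it.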

  • View profile for Ryne Ogren

    Investor | Marketer | Former Pro Baseball Player

    12,387 followers

    Most people think data center site selection is about proximity to fiber and population centers. That was true 5 years ago. It's not true anymore.

    Here's what actually matters now: power availability. Full stop.

    We've walked away from sites with perfect fiber, perfect location, perfect everything. Because the utility couldn't deliver power in a reasonable timeline. And we've pursued sites in the middle of nowhere. Because the utility had capacity and could move fast.

    The math has completely flipped. Proximity to end users matters less when you can build fiber. Proximity to talent matters less when you can operate remotely. Proximity to power generation matters more than anything else.

    Here's what we look for now:
    • Utilities with excess generation capacity or a clear path to new generation (hint: sometimes you have to create your own path).
    • Regions with natural gas pipeline infrastructure already in place.
    • Sites near existing substations with available capacity.
    • Regulatory environments that move fast on interconnection approvals.

    Everything else is secondary.

    The crazy thing is: this is creating opportunities in places nobody's looking. While everyone's fighting over Northern Virginia and Silicon Valley, there are regions with abundant power that nobody's paying attention to.

    The data center map is about to get redrawn. And it's going to be drawn by power availability, not proximity to users.

    *Here's a picture of my favorite beach for those in colder climates 😊*

  • View profile for Andrey Alekseenko

    Business Technology Consultant

    5,864 followers

    🧠 AI-Powered Configuration Management (CMDB): The Silent Brain Behind SLA, RCA, and the Future of Enterprise Architecture Management (EAM)

    This is not just automation. It’s a closed-loop intelligence system where AI transforms raw signals into strategic ITSM outcomes, continuously learning, adapting, and governing.

    1️⃣ AI-Powered Configuration Management – Core Modules (Strategic Layer / Execution):
    • Dynamic CI Maps & Impact Analysis: builds real-time service dependency maps using validated CMDB relationships, enabling accurate impact analysis for changes and outages.
    • Real-Time Availability Modeling: scores service health based on CI status and operational signals, providing objective, real-time visibility into service quality.
    • Predictive SLA Breach Forecasting: detects early signs of degradation using anomaly clustering and drift signals, triggering proactive alerts before SLA violations occur.
    • Accelerated Root Cause Analysis (RCA): correlates incidents, logs, and changes against a trustworthy CMDB timeline, pinpointing root causes faster than manual investigation.
    • Self-Healing CMDB Governance: automates hygiene, drift correction, and policy enforcement, ensuring continuous compliance with minimal manual effort.

    2️⃣ AI-Powered CMDB – Core Modules (Operational Layer / Engine):
    • Data Ingestion & Enrichment
    • Data Quality & Reconciliation
    • Relationship Discovery & Validation
    • Anomaly & Drift Detection
    • Automated Remediation & Update
    • Continuous Feedback & Learning

    For a detailed description see → 🧠 AI-Powered CMDB Internal Operations Algorithm: https://lnkd.in/eS3KSe63

    3️⃣ Interaction Between Strategic and Operational Modules
    Each strategic module is powered by a specific CMDB function, and feeds back into it:
    • Dynamic CI Maps & Impact Analysis ← powered by → Relationship Discovery & Validation. Feedback improves topology inference and dependency confidence.
    • Real-Time Availability Modeling ← powered by → Data Quality & Reconciliation. Feedback enhances CI health scoring and validation logic.
    • Predictive SLA Breach Forecasting ← powered by → Anomaly & Drift Detection. Feedback improves anomaly clustering and drift pattern recognition.
    • Accelerated RCA ← powered by → Automated Remediation & Update. Feedback refines incident/change correlation and update accuracy.
    • Self-Healing CMDB Governance ← powered by → all CMDB modules. Feedback strengthens policy orchestration and compliance scoring.
    • All strategic outcomes ← improve → Continuous Feedback & Learning. Outcome data retrains ML models to enhance all upstream CMDB functions.

    This is how Configuration Management evolves from static record-keeping into a dynamic, self-improving enterprise capability, driving proactive ITSM and measurable business impact.

    🚘 Next up: How BMW Group uses CMDB as a strategic asset for Enterprise Architecture Management and Digital Twin enablement.

    💡 Curious how this concept could reshape your ITSM strategy and Enterprise Architecture Management (EAM)? Let’s discuss!
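    The "Dynamic CI Maps & Impact Analysis" module described above is, at its core, a graph traversal over validated CMDB relationships. A minimal sketch of that idea; the CI names and the simple dict-based relationship store are hypothetical illustrations:

    ```python
    # Sketch of CMDB impact analysis: given reverse dependency edges,
    # compute every CI transitively affected by a failing CI.
    # CI names and the storage model are illustrative only.
    from collections import defaultdict

    # dependents[x] = set of CIs that depend directly on x
    dependents: defaultdict = defaultdict(set)

    def add_relationship(service: str, depends_on: str) -> None:
        dependents[depends_on].add(service)

    def impact_of(ci: str) -> set:
        """Transitive closure of everything impacted if `ci` fails."""
        impacted, stack = set(), [ci]
        while stack:
            node = stack.pop()
            for dep in dependents[node]:
                if dep not in impacted:
                    impacted.add(dep)
                    stack.append(dep)
        return impacted

    add_relationship("payments-api", depends_on="db-cluster-1")
    add_relationship("checkout-ui", depends_on="payments-api")
    add_relationship("reporting", depends_on="db-cluster-1")

    print(sorted(impact_of("db-cluster-1")))
    # ['checkout-ui', 'payments-api', 'reporting']
    ```

    The feedback loop in the post corresponds to updating these edges as Relationship Discovery & Validation confirms or retires dependencies, so the blast-radius answer stays trustworthy.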

  • View profile for Logan D. Freeman

    I Don’t Just List CRE 👉🏾 I Launch It | CRE Broker + Developer | $400M+ in Deals | Smart Leasing ➕ AI-Driven Strategy | 1031s | Land | Kansas City | Faith | Family | Fitness | Future

    37,548 followers

    Most people are talking about data centers. Our team at Midwest CRE Advisors is actually working them. There are two ways I'm engaged in the Kansas City data center market right now, and they couldn't be more different.

    1️⃣ Greenfield Sites & the Power Problem
    Finding raw land for a data center in KC isn't the hard part. The hard part is the utility conversation. How close are you to a substation? What's the available load? What does the interconnection timeline look like with Evergy? Can the site support 20 MW, 50 MW, 75 MW+?

    Kansas City recently rezoned data centers as industrial facilities, which matters for site selection. But even with the right zoning, the wrong power situation kills a deal before it starts.

    I'm working with landowners right now who have sites that look ideal on paper: highway access, industrial zoning, right-sized acreage. My job is to run the utility scenario before anyone wastes time or money. Evergy's pipeline is already over 15 GW in active agreements. The sites that can plug in fast are worth a premium. The ones that can't? You better know that before you buy.

    2️⃣ Conversions & Modular
    Not every data center gets built from the ground up. I'm also engaged on the conversion side: existing industrial and flex buildings that have the bones to become edge or modular data center deployments. Floor load. Clear height. Power access. Fiber proximity. Cooling options.

    The modular data center market is projected to grow at a 17.7% CAGR, and a lot of that growth isn't in brand-new hyperscale campuses. It's in adaptive reuse and modular deployments that can be stood up faster, closer to the edge, at lower capital cost.

    Kansas City's central geography, affordable power rates, and available industrial inventory make it one of the better-positioned markets in the country for this type of deployment.

    If you have land, a building, or capital that belongs in this conversation, reach out. This market is moving fast and the window for early positioning is closing.

    📍 Kansas City
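    For context on what a 17.7% CAGR (the modular data center projection cited above) actually implies, a quick sketch of the compounding math:

    ```python
    # What 17.7% compound annual growth means in practice.
    import math

    cagr = 0.177
    multiple_5yr = (1 + cagr) ** 5                    # size vs today in 5 years
    doubling_time = math.log(2) / math.log(1 + cagr)  # years to double

    print(f"5-year growth multiple: {multiple_5yr:.2f}x")    # ~2.26x
    print(f"Market doubles every {doubling_time:.1f} years")  # ~4.3 years
    ```

    A market doubling roughly every four years is why the "window for early positioning" framing holds: supply of convertible buildings and energizable land compounds far more slowly than demand.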

  • View profile for Obinna Isiadinso

    Global Sector Lead, Data Centers and Cloud Services Investments – Follow me for weekly insights on global data center and AI infrastructure investing

    22,578 followers

    Every billion-dollar data center begins with dirt. The land itself determines the limits of power, scale, and speed. Yet the best parcels today aren’t the cheapest; they’re the ones wired for megawatts and milliseconds.

    A few things define who wins:
    • Power access: sites within one mile of substations or transmission corridors.
    • Fiber connectivity: low-latency paths to major network exchanges.
    • Policy alignment: fast permitting, tax incentives, and community acceptance.

    Developers are now buying powered land years before construction starts. Others are co-locating next to generation (hydro, gas, or nuclear) to bypass grid congestion entirely. The model is shifting from single builds to multi-phase “AI corridors,” where power, fiber, and zoning are secured across entire regions.

    Microsoft’s 2024 Malaysia site captured this logic: a parcel beside a 500MW power plant, tied to new fiber routes, backed by state-level incentives.

    Data centers are no longer about square footage. They’re about control of electrons, connectivity, and permits. Whoever secures those first defines the next decade of digital infrastructure.

    Read the article below. #datacenters

  • View profile for Srini V. Srinivasan

    CTO & Founder at Aerospike, Inc.

    13,801 followers

    I’ve watched too many databases choke on SLAs. So we decided to build a system where speed and scale finally coexist.

    If you have a fixed SLA of 50–100 milliseconds, most databases let you read only a little before the clock runs out. This leads to less data, less processing time, and weaker results.

    Aerospike flips that. We keep the index in memory and the data on fast SSDs, so every lookup is just one quick hop, even at massive scale. In the same SLA, you can read more data, faster, and give your algorithms more time to work. Fraud scores become sharper, risk analysis becomes richer, and recommendation engines become smarter.

    These predictive AI workloads have been running in production for over a decade on Aerospike. And in many cases, the competitive edge is measured in orders of magnitude. This is why companies like PayPal and many e-commerce leaders use Aerospike for fraud detection, recommendations, and risk analysis at global scale.

    In AI, the edge isn’t just speed. It’s giving your models the breathing room to be smarter… without breaking the SLA.
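    The SLA argument above is a latency-budget calculation: cheaper lookups leave more of the fixed budget for data and compute. A minimal sketch with hypothetical per-read latencies (these are illustrative numbers, not Aerospike benchmarks, and real systems also parallelize reads):

    ```python
    # Illustrative latency-budget math: within a fixed SLA, how many
    # sequential reads fit after reserving time for the model itself?
    # All latency figures below are hypothetical.
    def reads_within_sla(sla_ms: float, read_latency_ms: float,
                         compute_ms: float) -> int:
        """Sequential reads that fit in the SLA after reserving compute time."""
        budget = sla_ms - compute_ms
        return max(0, int(budget // read_latency_ms))

    sla = 50.0      # fixed SLA in milliseconds
    compute = 10.0  # time reserved for scoring/inference

    print(reads_within_sla(sla, read_latency_ms=5.0, compute_ms=compute))  # 8
    print(reads_within_sla(sla, read_latency_ms=1.0, compute_ms=compute))  # 40
    ```

    Cutting per-lookup latency 5x doesn't make the response 5x faster; the SLA is fixed. It lets the model consult 5x more data in the same window, which is where the "sharper fraud scores" claim comes from.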

  • We were managing over 30 SAP systems across 3 continents with a team of just 7 engineers. The SLA was 99.9%.

    That sounds impressive, until you run the numbers. At 99.9%, you’re allowed just 43 minutes of downtime per month. Across 30+ systems, that means every second counts. And every mistake multiplies.

    In that kind of environment, manual recovery isn’t just inefficient, it’s unacceptable. There’s no time to wait for someone to notice a stuck job. No time to escalate. No time to log into four consoles to restart something by hand.

    We didn’t have a choice. We had to automate, not as a strategy, but as a survival mechanism. We wrote scripts. Built logic into workflows. Automated our monitoring. Codified our processes. That’s how we stayed ahead, not by scaling our team, but by scaling our capability.

    Those lessons from OZSOFT CONSULTING CORP. directly shaped what later became IT-Conductor. And today, when I talk to MSPs struggling with margin pressure, rising SLAs, and team burnout, I always come back to this: automation isn’t optional when expectations are this high. Not because it's a trend. Because it's the only thing that gives you time back at scale.
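    The "43 minutes" figure in the post falls directly out of the availability math, using a 30-day month:

    ```python
    # Allowed downtime per month at common SLA levels (30-day month).
    # 99.9% leaves 43.2 minutes, the figure cited in the post.
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

    for sla in (0.99, 0.999, 0.9999):
        allowed = MINUTES_PER_MONTH * (1 - sla)
        print(f"{sla:.2%} SLA -> {allowed:.1f} minutes of downtime per month")
    ```

    Each added nine cuts the budget tenfold, which is why manual recovery that is merely "slow" at 99% becomes disqualifying at 99.9% and above.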
