What a surprise for the EU 😱 😉 A recently published expert opinion commissioned by the German Federal Ministry of the Interior has sparked a pivotal discussion on data governance and sovereignty. According to the report, US authorities can exert far-reaching access rights over cloud data managed by US-based companies, even when that data is stored in European data centers and administered through local subsidiaries. This is because legal instruments such as the Stored Communications Act (as extended by the CLOUD Act) and Section 702 of FISA focus on the provider's control, not the physical location of the servers.

This finding is a firm reminder that simply hosting data on European soil does not guarantee protection from extraterritorial legal claims. It reveals structural risks in relying on dominant foreign cloud providers for sensitive data and critical digital infrastructure.

For Europe to truly uphold its data protection principles and strategic autonomy, the conversation must go beyond compliance checklists and contractual assurances. We need stronger investment in #opensource digital infrastructure and indigenous technologies that reduce dependency on non-European platforms. Open source fosters transparency and auditability while enabling communities and businesses to build on systems that are not bound by foreign legal regimes.

If #digitalsovereignty is to mean more than a buzzword, we must accelerate our efforts towards resilient, interoperable, and locally governed alternatives. Only then can Europe ensure that its data is governed by the laws and values that its citizens and organisations expect.

Source: https://lnkd.in/dtpXiwYN
Cloud Migration Challenges and Solutions
Explore top LinkedIn content from expert professionals.
-
If your CEO asks for deal updates in Slack, don't expect reps to update Salesforce. You can throw all the tech, training, and sales ops resources you want at CRM adoption - but if leadership isn't leading by example, none of it will stick.

Here's the tl;dr: Reps don't hate updating Salesforce because they're lazy. They hate it because they know no one actually uses it. When leaders bypass the CRM - asking for updates in Slack, emails, or meetings - they send a clear message: "This system doesn't matter. Your notes don't matter. Just tell me directly." And that's how $100k+ Salesforce investments turn into glorified Rolodexes.

So, how do you fix it?
1. Top-down adoption: Start with the CEO. If they want deal updates, they need to ask for them in Salesforce. Chatter, Slack integrations, whatever it takes...but it has to flow through the system.
2. Make sales managers accountable: Reps won't change unless their managers enforce it. Run pipeline reviews directly from Salesforce dashboards. No exceptions. If it's not in Salesforce, it doesn't exist.
3. Quantify the pain: Show reps how missing data costs them deals. Lost follow-ups, misaligned hand-offs, deals slipping through the cracks...all because the CRM isn't up to date.
4. Reward the right behaviors: Sales culture loves to celebrate closers. But what about the reps who close and keep a clean pipeline? Make data hygiene part of what gets recognized (and compensated).

The reality is that CRM adoption isn't a sales ops problem - it's a leadership problem. If the top isn't setting the example, the bottom won't follow. And until that changes, you'll keep throwing money at Salesforce while your reps keep their real pipeline in a Google Doc.
-
This EY incident underscores a truth we often overlook: the most common cloud vulnerability isn't a zero-day exploit; it's a configuration oversight. A single misstep in cloud storage permissions turned a database backup into a public-facing risk. These files often hold the "keys to the kingdom", i.e., credentials, API keys, and tokens that can lead to a much wider breach. How do we protect ourselves against these costly mistakes? Suggestions:

1. Continuous Monitoring: Implement a CSPM (Cloud Security Posture Management) tool for 24/7 configuration scanning. A CSPM is an automated security tool that continuously monitors cloud environments for misconfigurations, vulnerabilities, and compliance violations. It provides visibility, threat detection, and remediation workflows across multi-cloud and hybrid setups, including SaaS, PaaS, and IaaS services.
2. Least Privilege Access: Default to private. Grant access sparingly.
3. Data Encryption: For data at rest and in transit.
4. Automated Alerts: The moment something becomes public, you should know.
5. Regular Audits: Regularly review access controls and rotate secrets.
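The kind of rule a CSPM automates can be shown with a minimal sketch. The policy documents below are hypothetical examples (loosely shaped like an S3 bucket policy), and real tooling covers far more than this single check; the sketch only flags storage grants that open a resource to everyone:

```python
# Minimal sketch of one CSPM-style rule: flag "Allow" statements whose
# principal is the world. The policy structure loosely mirrors an S3
# bucket policy; the documents here are hypothetical examples.

PUBLIC_PRINCIPALS = {"*", "AllUsers", "AuthenticatedUsers"}

def is_public(principal) -> bool:
    """True if the principal grants access to everyone."""
    if isinstance(principal, str):
        return principal in PUBLIC_PRINCIPALS
    if isinstance(principal, dict):
        return principal.get("AWS") == "*"
    return False

def find_public_grants(policy: dict) -> list:
    """Return the Sids of statements that allow public access."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow" and is_public(stmt.get("Principal")):
            findings.append(stmt.get("Sid", "unknown"))
    return findings

backup_bucket_policy = {  # hypothetical misconfigured backup bucket
    "Statement": [
        {"Sid": "TeamRead", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
         "Action": "s3:GetObject"},
        {"Sid": "PublicRead", "Effect": "Allow",
         "Principal": "*", "Action": "s3:GetObject"},
    ]
}

print(find_public_grants(backup_bucket_policy))  # → ['PublicRead']
```

A production CSPM runs checks like this continuously against live configuration, which is exactly what turns "Automated Alerts" from an aspiration into a pipeline.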
-
Your AWS "EU cloud" just failed in Virginia. Again.

October 20: DNS problem in US-EAST-1. UK banks down. HMRC offline. Gov.uk dark. French telecoms dead. 6.5 million outage reports across Europe.

You're paying for EU-WEST-2. Your compliance team signed off. Your data "stays in Europe." Except IAM, control APIs, and replication endpoints all route through US-EAST-1, even for European workloads. US-EAST-1 is the control plane for all AWS locations.

European critical infrastructure - banking, government services, healthcare, telecoms - stops functioning when a DNS server fails in Virginia. Not a cyberattack. Not a cable cut. A monitoring subsystem in a US data center. Third Virginia outage in five years. Each time, Europe goes dark.

Your Frankfurt instances can't authenticate without Virginia. Your "sovereign" database can't resolve its own endpoint without US infrastructure online. GDPR compliance says your data stays within EU borders. The architecture says your services live or die based on a data center 3,000 miles away that answers to US jurisdiction.

Europe has no sovereign cloud infrastructure. Multi-region deployment is fiction when every region phones home to Virginia for permission to operate. Data sovereignty isn't where your data sits. It's who controls whether your systems can access it. Right now? That's US-EAST-1.

#DataSovereignty #CloudComputing #AWS #DigitalSovereignty #CriticalInfrastructure #Europe #GDPR #CloudArchitecture #TechPolicy #Infrastructure
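One practical response is to audit where each dependency's control plane actually lives, rather than where the data sits. Below is a minimal sketch of such an audit; the inventory is a hand-maintained example for illustration (the entries reflect the post's claim that IAM and other global control planes are anchored in US-EAST-1, not an authoritative list of AWS internals):

```python
# Sketch of a control-plane dependency audit: for each component a
# workload uses, record where its control plane is anchored, then flag
# anything that routes outside the intended home region. The inventory
# below is an illustrative, hand-maintained example.

INVENTORY = {
    "eu-west-2 EC2 data plane":   "eu-west-2",
    "S3 bucket (eu-west-2)":      "eu-west-2",
    "IAM / STS global endpoint":  "us-east-1",  # global control plane
    "Route 53 control plane":     "us-east-1",
    "CloudFront control plane":   "us-east-1",
}

def out_of_region(inventory: dict, home_region: str) -> list:
    """Components whose control plane sits outside the home region."""
    return sorted(name for name, region in inventory.items()
                  if region != home_region)

flagged = out_of_region(INVENTORY, "eu-west-2")
print(flagged)  # the three us-east-1 control-plane dependencies
```

An audit like this won't remove the dependency, but it makes the gap between "data residency" and "operational sovereignty" explicit before an outage does.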
-
In 2023, 70% of SAP customers were considering RISE. By mid-2024, that number dropped into the low 40s. The shift raises a big question: what made so many rethink the move? I've supported RISE customers around the world, from North America to Europe to APAC. And the patterns I'm seeing? They're consistent, and a bit unsettling.

Here's what's driving the hesitation:
1. Service Level Disappointment - 99.7% uptime sounds fine until it hits your critical workflows.
2. Hidden Complexity - What looked like a bundled solution often turns into 100+ pages of service exclusions.
3. Slow Execution - Simple tickets take weeks. Why? Lack of automation and fragmented delivery teams.
4. The Premium Paywall - Want the service you thought you were buying? There's a premium tier for that.

What started as a CFO-friendly commercial model has hit resistance from Ops teams who are left holding the bag. RISE isn't failing. But the experience gap is real. And for many customers, that's been enough to hit pause...or backpedal entirely. If you've evaluated RISE or lived through the shift, what was your biggest surprise? I'd love to hear it in the comments.
-
Lift and shift is the most expensive way to avoid real cloud transformation. Moving your mess to the cloud just gives you an expensive mess.

At Mayfair IT, we have built cloud platforms using fundamentally different approaches. The difference in outcomes is dramatic.

Lift and shift is seductive. Take existing servers, virtualise them, run them in Azure or AWS. Call it cloud migration. Declare victory. The infrastructure is now in the cloud. The problems are unchanged. Applications still assume they run on dedicated hardware. Scaling requires manual intervention. Failures cascade because nothing was designed for distributed failure. You pay cloud prices for on-premises architecture.

What cloud native actually means: we have built greenfield platforms on Azure designed from the beginning for cloud, with Platform as a Service and Software as a Service components doing what they do best. Azure Data Factory orchestrating data pipelines instead of custom ETL running on virtual machines. Cosmos DB providing distributed databases instead of clustered SQL servers. Serverless functions handling event-driven workloads instead of always-on application servers. The difference is economic and operational.

What changes with cloud native architecture:
→ Scaling happens automatically based on demand, not manual capacity planning
→ Failures in individual components do not bring down entire services
→ You pay only for resources actually used, not capacity provisioned for peak load
→ Updates deploy without downtime because the architecture assumes continuous change

We have also migrated legacy systems to cloud where complete refactoring was not feasible. The challenge is knowing which approach fits which situation. Greenfield builds should always be cloud native. Legacy migrations require honest assessment of whether lift and shift provides enough value to justify the effort. Sometimes the answer is yes.
Moving a stable system with known workloads to cloud can reduce operational overhead even without refactoring. But presenting lift and shift as cloud transformation is dishonest. You moved the location. You did not change the architecture. The organisations getting real cloud value are the ones willing to rebuild applications to use cloud capabilities properly. How much of your cloud spending is on virtualised servers that could be replaced by managed services? #CloudNative #Azure #DigitalTransformation
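The "pay only for resources actually used" point can be made concrete with back-of-the-envelope arithmetic. All prices below are illustrative assumptions (roughly shaped like a consumption-plan serverless tariff and a mid-sized always-on VM), not current list prices for any provider:

```python
# Back-of-the-envelope comparison: an always-on VM priced for peak
# capacity versus pay-per-use serverless for a bursty workload.
# All rates are illustrative assumptions, not current list prices.

HOURS_PER_MONTH = 730

def vm_monthly_cost(hourly_rate: float) -> float:
    """Always-on VM: billed every hour, busy or idle."""
    return hourly_rate * HOURS_PER_MONTH

def serverless_monthly_cost(executions: int, avg_seconds: float,
                            memory_gb: float,
                            per_gb_second: float = 0.000016,
                            per_million_exec: float = 0.20) -> float:
    """Pay only for compute-seconds consumed plus a per-invocation fee."""
    compute = executions * avg_seconds * memory_gb * per_gb_second
    requests = executions / 1_000_000 * per_million_exec
    return compute + requests

vm = vm_monthly_cost(0.20)                         # 24/7 at $0.20/hr
fn = serverless_monthly_cost(1_000_000, 0.5, 0.5)  # 1M invocations/month

print(f"always-on VM: ${vm:.2f}/month  serverless: ${fn:.2f}/month")
```

At low or bursty utilisation the gap is dramatic; at sustained high utilisation the comparison can invert, which is exactly why the honest assessment of workload shape matters before choosing an approach.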
-
On International Data Centre Day, my hope is that the "rest of Africa" doesn't get left behind in the AI investment boom. It's critical for giving millions of Africans opportunities to progress and lead better lives. A few years back, I worked on building a data center real estate business at Agility with multiple data-center-ready sites in Africa and engaged with a wide range of data center operators and hyperscalers. Some thoughts:

1. The lion's share of investment in data centers is (still) in South Africa and four other countries (Nigeria, Kenya, Morocco & Egypt). The "rest of Africa" has very little data center capacity and investment - risking leaving those economies and societies behind and disadvantaged. The African continent accounts for only 0.6% of global data center capacity, according to the Africa Data Centres Association.

2. Demand for capacity is expected to rise from about 0.4 GW today to 1.5 to 2.2 GW by 2030, according to McKinsey & Company research by Kartik Jayaram, Luca Bennici & colleagues. It will require $10 billion to $20 billion in new investment to unlock an estimated revenue pool of $20-30 billion across the value chain by 2030. Critical to unlocking that demand will be the pace of AI adoption and large-scale digitalization by governments and enterprises, enterprise cloud adoption and aggregation of consumer demand, investable sites, a lower cost of capital, and affordable power.

3. From my experience, multiple challenges exist for greenfield development in Africa, including land acquisition, power and fiber connectivity (problems I was working on solving), and regulatory environments.
The war stories I have heard from others and seen directly show that data center development in Africa requires a different level of grit and commitment - a lot of that will come from great entrepreneurs that I have had the opportunity of knowing and learning from, including Amine K., Ayotunde (Tunde) Coker, Ike Nnamani, Ranjith Cherickel, Robert Mullins and others like Strive Masiyiwa and Funke Opeke - and hopefully many more! It's also good to see global giants like Digital Realty & Equinix expand on the continent.

The video clip below is a throwback to a conversation I had with Andy Davis on the Inside Data Centre Podcast a few years back - link in the comments. Africa Data Centres Association | DIGITAL COUNCIL AFRICA
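The growth figures above imply a steep compound rate. A quick sketch, assuming "today" means roughly 2025 so the horizon to 2030 is five years (the horizon is an assumption for illustration):

```python
# Implied compound annual growth rate (CAGR) for African data-center
# demand, from ~0.4 GW today to 1.5-2.2 GW by 2030 per the McKinsey
# figures. The five-year horizon is an assumption for illustration.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate from start to end over `years`."""
    return (end / start) ** (1 / years) - 1

low  = cagr(0.4, 1.5, 5)   # lower bound of the 2030 demand range
high = cagr(0.4, 2.2, 5)   # upper bound

print(f"implied CAGR: {low:.0%} to {high:.0%}")
```

Roughly 30-40% annual growth, sustained for half a decade, is the scale of buildout the "rest of Africa" risks missing out on if investment stays concentrated in five countries.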
-
Everyone's chasing data center land. Almost everyone is missing the real constraint. It's not fiber. It's not even land. It's power.

U.S. Interior Secretary Doug Burgum said at the Prologis conference: "To win the AI arms race against China, we've got to figure out how to build these artificial intelligence factories close to where the power is produced, and just skip the years of trying to get permitting for pipelines and transmission lines."

Translation: The next generation of data centers won't be built where the land is cheap. They'll be built where the power is available. Three implications for dirt investors:

1. Nuclear Proximity = New Premium: Amazon already signed deals with Dominion Energy near the North Anna nuclear power station in Virginia and expanded partnerships with Talen Energy at the Susquehanna nuclear plant. Sites within transmission distance of existing nuclear facilities just became exponentially more valuable.
2. Warehouse Conversions Accelerate: If Prologis is eyeing their 6,000 buildings for data center conversion, every industrial site with surplus power capacity needs re-evaluation. What looks like a struggling warehouse today might be a data center tomorrow.
3. Grid Capacity > Geographic Desirability: Constellation Energy CEO Joseph Dominguez noted that data economy customers "want to run their systems 24-7" with "firm pricing so that they know the price for energy for 20 years". Long-term power contracts are becoming the new land entitlements.

But here's what nobody's talking about: The same power constraints driving this opportunity are also creating massive project risks. According to a recent CoStar analysis, data centers will account for up to 60% of total power load growth through 2030. But there's a timing mismatch: data centers take 2-3 years to build, while power system upgrades take 8 years. That gap is forcing developers to either wait or find sites with existing capacity.
The Community Resistance Factor: Data Center Watch estimates $64 billion in data center projects were blocked or delayed over a recent two-year period. There are now 142 activist groups across 24 states organizing against data center development. Northern Virginia alone, the nation's largest data center market, has 42 activist groups fighting projects. Reasons cited: water consumption, higher utility bills, noise, decreased property values, loss of open space.

Translation for land investors: Sites with existing power capacity + community support just became exponentially more valuable than sites with just land and zoning. The power infrastructure thesis isn't just about finding available capacity. It's about finding that capacity in counties that actually want data centers. Not every market will roll out the welcome mat. Are you evaluating community sentiment alongside power infrastructure access?
-
Thailand plans dozens of data centers. Locals ask: where will the water come from?

Thailand's eastern seaboard is becoming a focal point for the global expansion of data centers, reports Gerry Flynn. Developers are planning dozens of facilities in Chonburi and neighboring Rayong province as the country seeks to position itself as a regional hub for artificial intelligence infrastructure. Investment has accelerated rapidly. In 2025 alone, Thailand's Board of Investment approved more than $23 billion in data-center projects. Many of the new facilities are concentrated in the Eastern Economic Corridor, a special economic zone southeast of Bangkok established to modernize Thailand's industrial base. Petrochemicals, automobile assembly and electronics manufacturing already dominate the region.

Data centers represent a different type of industry. Their physical footprint is modest compared with factories, but their demand for electricity and water can be substantial. One example is a hyperscale facility known as QHI01, now under construction in Chonburi province. Developers say the project will draw about 3.3 million cubic meters of water each year to cool computer processors. That volume is roughly equivalent to the annual water consumption of tens of thousands of residents. Contractors working on related infrastructure have suggested the facility's eventual demand could be higher.

Water availability is already a concern in the corridor. Reservoir levels have fluctuated in recent years, and waterways have long absorbed wastewater from surrounding industrial estates. Local activists say little information has been released about how much water new data centers will use or how wastewater from cooling systems will be treated. Many developers declined to answer questions about environmental assessments or resource consumption. Officials maintain that existing infrastructure can handle additional demand.
Provincial water authorities note that treatment plants in parts of the corridor still operate below capacity. Industry groups emphasize the economic benefits of the sector, including investment and high-skilled jobs. The rapid expansion nevertheless raises broader questions about how resource-intensive digital infrastructure will fit into regions already shaped by decades of industrial development. Data centers may occupy less land than traditional factories, but the scale of their energy and water needs suggests they could become a significant new pressure on local systems. ⚡ The investigation: https://lnkd.in/gZ7w_8PK
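The "tens of thousands of residents" comparison can be sanity-checked with simple arithmetic. The per-capita figure below is an assumed round number for domestic use (actual consumption in Thailand varies by region and source):

```python
# Sanity check on the water comparison for the QHI01 facility:
# 3.3 million m³/year of cooling water versus household consumption.
# The per-capita figure is an assumption; actual usage varies.

FACILITY_M3_PER_YEAR = 3_300_000
PER_CAPITA_M3_PER_YEAR = 50   # assumed domestic use per resident per year

equivalent_residents = FACILITY_M3_PER_YEAR / PER_CAPITA_M3_PER_YEAR
print(f"≈ {equivalent_residents:,.0f} residents")
```

Under that assumption, one facility's cooling demand matches the domestic consumption of roughly 66,000 people, which is consistent with the article's "tens of thousands" framing and explains why residents in a water-stressed corridor are asking questions.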