Successful Data Migration


  • Shobha Moni

    25+ years transforming industries with ERP systems | Partner founder Triad Software Solutions

    23,154 followers

    I’ve audited 120+ ERP data migrations in the last 5 years. 80% of them failed.

    Most ERP failures are not because it’s SAP, Oracle, or Dynamics. Not even the custom build from 2012. They fail because the data going in was never cleaned.

    Here’s what I keep seeing (even in $10M+ projects). In 80% of failed ERP migrations, I found:
    ☠️ UOM mismatches that break inventory
    ☠️ Customer and vendor duplicates
    ☠️ Zombie SKUs and dead warehouses
    ☠️ Orphaned transactions
    ☠️ No audit trail of what got transformed

    Here’s my Data Migration Checklist (to use before go-live):

    ✅ Units of Measure (UOM):
    → Are all UOMs mapped 1:1 between legacy and new ERP?
    → Have we tested conversion logic in live transactions?

    ✅ Master Data Uniqueness:
    → Do we have duplicate SKUs, vendors, or customers?
    → What’s the deduplication logic? Who owns it?

    ✅ Historical Data Mapping:
    → Are all past transactions (GR/IR, payments, returns) traceable?
    → Can we audit them after go-live?

    ✅ Open Transactions Review:
    → How many open POs, SOs, and GRNs exist in legacy?
    → Who validated the carry-forward rules?

    ✅ Dummy Runs with Real Data:
    → Did we run full-cycle transactions with migrated data in UAT?
    → Were accounting, tax, and inventory balances reconciled?

    ✅ Cleanup Ownership:
    → Who is responsible for final data sign-off: IT or Finance?
    → Is it documented?

    ERP is not an Excel import. It’s a financial and operational rebirth. And the data is either your foundation or your downfall.

    How confident are you in the quality of the data being loaded into your next ERP?

    ♻️ REPOST so others can learn.
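The UOM-mapping and deduplication checks in the checklist above are easy to automate before go-live. A minimal sketch, with invented SKUs and a made-up legacy-to-new UOM map (field names are illustrative, not from any specific ERP):

```python
from collections import Counter

# Hypothetical legacy extract and UOM mapping -- illustrative data only.
legacy_items = [
    {"sku": "A100", "uom": "EA"},
    {"sku": "A100", "uom": "EA"},   # duplicate SKU
    {"sku": "B200", "uom": "CS"},
    {"sku": "C300", "uom": "PAL"},  # UOM with no mapping to the new ERP
]
uom_map = {"EA": "EACH", "CS": "CASE"}  # legacy code -> new ERP code

def audit(items, uom_map):
    """Return (duplicate SKUs, UOM codes with no 1:1 mapping)."""
    counts = Counter(item["sku"] for item in items)
    duplicates = sorted(sku for sku, n in counts.items() if n > 1)
    unmapped = sorted({item["uom"] for item in items} - uom_map.keys())
    return duplicates, unmapped

dups, unmapped = audit(legacy_items, uom_map)
print("Duplicate SKUs:", dups)      # ['A100']
print("Unmapped UOMs:", unmapped)   # ['PAL']
```

Running checks like these on every trial load, and logging the output, also gives you the audit trail of what got transformed.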

  • Puneet Patwari

    Principal Software Engineer @Atlassian | Ex-Sr. Engineer @Microsoft | Sharing insights on SW Engineering, Career Growth & Interview Preparation

    68,304 followers

    This is a dependency-discovery, traffic-control, and safe-cutover problem under organizational failure. Here is how I would think about it as a Principal Engineer.

    [1] Migration emails are not your source of truth. Traffic is.
    The first mistake is trusting team responses more than production evidence. When 6 teams do not reply, I do not assume they are migrated. I assume I do not have enough visibility yet. So the first thing I want is a hard dependency map from:
    - request logs
    - tracing
    - service mesh / gateway metrics
    - caller identity headers
    - endpoint-level RPM by consumer

    That immediately tells me:
    - which 17 services still call me
    - which endpoints they use
    - how much traffic each one contributes
    - whether the 6 silent teams are actually active or just stale docs

    Deprecation starts with observability, not reminders.

    [2] I would classify consumers before I touch traffic.
    Not all dependents are equal. I would split them into:
    - migrated and verified
    - low-risk consumers with a fallback path
    - high-risk consumers on critical paths
    - unknown / unowned callers

    Now the 40,000 RPM becomes manageable. Maybe 28,000 RPM is already on the new path. Maybe 8,000 RPM comes from 2 critical consumers. Maybe 4,000 RPM is zombie traffic nobody owns. That changes the plan completely.

    [3] I would make shutdown a routing exercise, not a trust exercise.
    At Staff level, I do not want a plan that depends on every team doing the right thing manually by the deadline. I want control at the boundary. So I would put the old service behind an enforced migration layer:
    - per-caller routing rules
    - feature flags
    - canary cutover to the replacement service
    - shadow traffic where safe
    - endpoint-level denylist / allowlist
    - kill switch for fast rollback

    This lets me migrate consumer by consumer, not through one big-bang shutdown.

    And for silent teams, I can start with:
    - warning headers
    - lower-environment blocks first
    - rate limiting on deprecated endpoints
    - scheduled production cutoff windows with leadership visibility

    [4] The 6 silent teams are now a leadership and risk issue.
    After 3 weeks of silence, this is no longer an engineer politely waiting for replies. I would escalate with hard data:
    - current RPM by silent team
    - business criticality
    - cutoff date
    - fallback / replacement readiness
    - exact blast radius if they do nothing

    Now the conversation becomes: “These 2 teams still send 11k RPM to deprecated endpoint X. Here is the cutoff plan. Here is the owner. Here is the risk.”

    [5] The actual shutdown should happen in stages.
    Week 1: Map dependencies, classify consumers, dashboards, alerts
    Week 2: Canary cutover for responsive teams, shadow traffic
    Week 3: Enforce for silent teams, escalate with data
    Week 4: Drain remaining callers, block gradually, keep rollback ready

    By the way, I’ve compiled ideas like this, plus 90+ other system design fundamentals, into a guide for Senior-to-Principal engineers preparing for interviews. You can check it out here: puneetpatwari.in
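The consumer-classification step in [2] is a small aggregation over caller-identified traffic. A minimal sketch, assuming hypothetical caller names and per-caller RPM already extracted from request logs or mesh metrics (the bucketing rules are illustrative):

```python
# Hypothetical aggregated traffic to the deprecated service:
# (caller, endpoint, requests per minute). All names are made up.
requests = [
    ("billing-svc", "/v1/charge", 8000),
    ("search-svc", "/v1/lookup", 20000),
    ("unknown-cron", "/v1/lookup", 4000),
    ("orders-svc", "/v1/charge", 8000),
]
migrated = {"search-svc", "orders-svc"}   # verified on the new path
critical = {"billing-svc"}                # on a revenue-critical path

def classify(requests, migrated, critical):
    """Bucket deprecated-endpoint traffic by consumer risk class."""
    buckets = {"migrated": 0, "critical": 0, "unowned": 0, "other": 0}
    for caller, _endpoint, rpm in requests:
        if caller in migrated:
            buckets["migrated"] += rpm
        elif caller in critical:
            buckets["critical"] += rpm
        elif caller.startswith("unknown"):
            buckets["unowned"] += rpm
        else:
            buckets["other"] += rpm
    return buckets

print(classify(requests, migrated, critical))
# {'migrated': 28000, 'critical': 8000, 'unowned': 4000, 'other': 0}
```

With the 40,000 RPM split like this, the enforcement plan in [3] can target each bucket differently instead of one big-bang shutdown.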

  • Hirenkumar G.

    Sr Technical Support Engineer | DevOps Engineer | AWS | Azure | GCP | CI/CD | PowerShell | Windows Server | Linux | Azure Az104 Certified

    12,022 followers

    On-prem to Cloud Migration: Step-by-Step AWS Cloud Migration Process

    1. Plan the Migration
    - Assessment: Identify the current environment (servers, databases, dependencies, and configurations).
    - Inventory: Document application components and dependencies.
    - Sizing: Determine AWS resources (EC2 instance types, RDS configurations, etc.) based on current usage.
    - Network Design: Plan VPC setup, subnets, security groups, and connectivity.
    - Backup Plan: Create a fallback plan for any issues during migration.

    2. Prepare the AWS Environment
    - VPC Setup: Create a VPC with subnets across multiple Availability Zones (AZs).
    - Security: Configure security groups, IAM roles, and policies.
    - Database Configuration: Set up an Amazon RDS instance or an EC2-based database for the migration.
    - AD Server: Use AWS Managed Microsoft AD or deploy your AD on EC2.
    - Application Server: Launch EC2 instances and configure the operating system and required dependencies.

    3. Migrate the Database
    - Backup: Create a backup of the current database.
    - Export/Import: Use database migration tools (e.g., AWS DMS or native database tools) to migrate data to the AWS database.
    - Replication: Set up database replication for real-time sync with the on-prem database.
    - Validation: Verify data consistency and integrity post-migration.

    4. Migrate the Application Server
    - Packaging: Package the application (e.g., as Docker containers, AMIs, or plain binaries).
    - Deployment: Deploy the application on AWS EC2 instances or use AWS Elastic Beanstalk.
    - DNS Configuration: Update DNS records to point to the AWS environment.

    5. Migrate Active Directory (AD)
    - Replication: Create a replica of the on-prem AD in AWS using an AD trust setup.
    - DNS Sync: Sync DNS entries between the on-prem and AWS environments.
    - Validation: Test authentication and resource access.

    6. Test and Validate
    - End-to-End Testing: Validate the complete environment (application, database, and AD).
    - Performance Check: Monitor performance using CloudWatch and address any issues.
    - Failover Testing: Simulate failure scenarios to ensure HA/DR readiness.

    7. Cutover and Go Live
    - Schedule Downtime: Coordinate with stakeholders and users for a minimal downtime window.
    - Final Sync: Perform a final sync of the database and switch traffic to AWS.
    - DNS Propagation: Update DNS settings to route traffic to the AWS environment (may take up to 24 hours).
    - Monitoring: Continuously monitor AWS resources and performance post-migration.

    8. Post-Migration Optimization
    - Scaling: Implement auto-scaling policies for the application.
    - Security: Regularly review and improve security configurations.
    - Cost Optimization: Use AWS Cost Explorer to analyze and optimize resource usage.

    Downtime Considerations
    - Database Migration: Plan a maintenance window of 2–4 hours for the final database sync and cutover.
    - DNS Propagation: Approx. 15 minutes to 24 hours, depending on TTL settings. Use short TTLs during migration to minimize delays.

    #AWSMigration #CloudMigration #MinimalDowntime #DatabaseToAWS #ApplicationToAWS #ADToAWS
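The "use short TTLs during migration" advice comes down to simple TTL arithmetic: resolvers may cache the old record for up to the TTL that was in effect when they last resolved it. A minimal sketch of the timing, with illustrative values (an old 24-hour TTL lowered to 5 minutes before cutover):

```python
# A rough sketch of DNS cutover timing. TTL values are illustrative;
# check your zone's actual settings.
def cutover_timeline(old_ttl_s, low_ttl_s):
    """Return (hours to wait after lowering the TTL, worst-case
    post-cutover propagation in minutes). Resolvers can cache the old
    record for up to the TTL in effect when they last resolved it."""
    wait_after_lowering_h = old_ttl_s / 3600       # let the old TTL drain
    worst_case_after_cutover_min = low_ttl_s / 60  # then caches expire fast
    return wait_after_lowering_h, worst_case_after_cutover_min

wait_h, worst_min = cutover_timeline(old_ttl_s=86400, low_ttl_s=300)
print(f"Lower the TTL at least {wait_h:.0f}h before cutover")        # 24h
print(f"Post-cutover propagation bounded by ~{worst_min:.0f} min")   # 5 min
```

This is why the post's "15 minutes to 24 hours" range is really a function of whether the TTL was lowered ahead of time.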

  • Omer Robinowitz

    Co-Founder and Chief Growth Officer @Faddom | Spearheading Marketing and Business Development to drive growth and fuel the top-of-the-funnel

    13,088 followers

    I constantly hear shocking stories of cloud migration mistakes that spiral into unexpected, skyrocketing costs beyond what anyone ever imagined.

    Most companies underestimate the complexity. Skip dependency mapping. Pay the price.

    Cloud migrations go beyond moving workloads - they require knowing what to move, when, and how it affects the rest of your environment. Without a solid plan, you risk unplanned downtime, security gaps, and overspending on misconfigured cloud resources.

    Here’s how to migrate without chaos:

    1. Start with full visibility. Map every application, service, and dependency before migration. Unknown connections lead to downtime, security risks, and hidden costs. Many organizations don’t realize how interconnected their systems are until something breaks.

    2. Assess workloads before moving them. Not everything belongs in the cloud. Classify applications by criticality, complexity, and cloud readiness. Legacy systems often need refactoring or special configurations, while certain workloads may be better off staying on-premises.

    3. Move in phases, not all at once. A "lift and shift" migration can break critical systems. Migrate in controlled stages, test thoroughly, and adjust before moving forward. Pilot test with non-critical workloads first, gather insights, then move mission-critical systems.

    4. Optimize before the migration. Unused resources drain your budget. Right-size workloads, eliminate redundant services, and continuously monitor costs. Cloud sprawl - where forgotten instances keep running - can waste thousands per month.

    5. Avoid compliance blind spots. Migrating nodes without visibility can lead to regulatory violations and security gaps. Ensure sensitive workloads follow security best practices before, during, and after migration.

    The hard truth? You can’t migrate what you don’t know about.

    Map -> Plan -> Migrate. NO SHORTCUTS.
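Points 1 and 3 together are an ordering problem: once the dependency map exists, the phased migration plan falls out of a topological sort, with each "wave" containing everything whose dependencies have already moved. A minimal sketch using Python's standard-library graphlib, with a made-up dependency map:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each service maps to what it depends on.
# Dependencies should move before (or with) their dependents, or you end
# up with a migrated service whose backend is still on-prem.
deps = {
    "web-frontend": {"orders-api", "auth"},
    "orders-api": {"orders-db"},
    "auth": {"users-db"},
    "orders-db": set(),
    "users-db": set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())  # everything migratable in parallel now
    waves.append(sorted(ready))
    ts.done(*ready)

for i, wave in enumerate(waves, 1):
    print(f"Wave {i}: {wave}")
```

Here the databases land in wave 1, the services that depend on them in wave 2, and the frontend last - exactly the "non-critical first, mission-critical later" phasing the post recommends, derived from the map instead of guesswork.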

  • Jaswindder Kummar

    Engineering Director | Cloud, DevOps & DevSecOps Strategist | Security Specialist | Published on Medium & DZone | Hackathon Judge & Mentor

    22,860 followers

    After migrating 50+ enterprise workloads to Azure, here's the decision framework that saved teams millions in cloud spend.

    Most engineers jump straight to Kubernetes because it's popular. But I've seen organizations burn 60% of their budget running AKS for workloads that needed App Service.

    Here's my battle-tested approach:

    🎯 Start with these questions:
    • Already running on-prem? Consider lift-and-shift first, optimize later.
    • Need OS-level control? VMs are still valid (don't let anyone shame you).
    • Containerized? Great, but that doesn't automatically mean Kubernetes.
    • Event-driven with short bursts? Functions will cut your costs dramatically.

    💡 My recommendations from the trenches:

    For new builds:
    • Default to managed services (App Service, Container Apps) unless you have a compelling reason not to.
    • Functions for APIs under 5 minutes of execution; I've seen 80% cost reductions.
    • AKS only when you need multi-cloud portability or complex orchestration.

    For migrations:
    • Lift-and-shift to VMs first, then containerize incrementally.
    • Azure Batch for HPC: underrated and incredibly cost-effective.
    • Service Fabric if you're deep in .NET (but evaluate carefully; it's legacy).

    For containers:
    • Container Apps for 80% of microservices workloads.
    • AKS when you need Kubernetes API access or custom controllers.
    • Container Instances for CI/CD agents and batch jobs.

    ⚠️ Red flags I've seen:
    • Running stateful databases on Functions.
    • Using VMs when you just need to run a web app.
    • Choosing AKS without a dedicated platform team.

    The truth? There's no "best" service, only the right fit for your workload, team skills, and operational maturity.

    What's your compute selection horror story? Let's learn from each other. 👇

    ♻️ Repost if you found it valuable.
    ➕ Follow Jaswindder for more insights on Cloud Strategy, DevOps, and AI-led Engineering.

    #DevOps #Azure #CloudArchitecture
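The framework above can be read as a decision function. Here is a rough sketch with illustrative attribute names and deliberately simplified rules - a compression of the post's heuristics, not official Azure guidance:

```python
# A minimal sketch of the compute-selection heuristics above.
# Attribute names and rule ordering are illustrative assumptions.
def pick_compute(workload: dict) -> str:
    if workload.get("needs_os_control"):
        return "Virtual Machines"          # OS-level control is still valid
    if workload.get("event_driven") and workload.get("max_runtime_min", 0) <= 5:
        return "Azure Functions"           # short bursts: big cost savings
    if workload.get("containerized"):
        if workload.get("needs_k8s_api") or workload.get("multi_cloud"):
            return "AKS"                   # only with a real K8s requirement
        return "Container Apps"            # covers most microservices
    return "App Service"                   # managed default for the rest

print(pick_compute({"event_driven": True, "max_runtime_min": 3}))    # Azure Functions
print(pick_compute({"containerized": True}))                         # Container Apps
print(pick_compute({"containerized": True, "needs_k8s_api": True}))  # AKS
```

The useful property of writing it down like this: AKS only appears behind an explicit requirement check, which is exactly the discipline the post argues most teams skip.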

  • Stephen Sumner

    Lead, Cloud Adoption Framework @ Microsoft

    8,672 followers

    NEW MIGRATION GUIDANCE

    Cloud migrations can be complex, but they don’t have to be uncertain, whether you're moving from on-premises environments or other clouds. To help bring more clarity, we published new Cloud Migration guidance in Microsoft’s Cloud Adoption Framework.

    This guidance offers a structured roadmap for migrating workloads to Azure from both on-premises and other cloud platforms. It’s the result of close collaboration with Microsoft experts and Microsoft MVPs, and it reflects lessons learned from thousands of real-world migrations. The goal is to support teams at any stage of their cloud journey with clear, actionable steps.

    Migration Process Overview:

    1️⃣ Plan Your Migration
    1. Assess readiness and team skills
    2. Choose data migration paths
    3. Define migration sequencing and rollback plans
    4. Engage stakeholders

    2️⃣ Prepare Workloads for the Cloud
    1. Fix compatibility issues
    2. Validate workloads' functionality
    3. Build reusable infrastructure
    4. Document deployment steps

    3️⃣ Execute Migration to the Cloud
    1. Prepare stakeholders and freeze changes
    2. Finalize the production environment
    3. Execute cutover and validate success
    4. Provide stabilization support

    4️⃣ Optimize Workloads After Migration
    1. Fine-tune configurations in the cloud
    2. Collect and act on user feedback
    3. Review workloads regularly
    4. Optimize hybrid and multicloud dependencies

    5️⃣ Decommission Source Workloads
    1. Confirm decommissioning with stakeholders
    2. Reclaim or reassign licenses
    3. Preserve data for compliance
    4. Update documentation and architecture records

    🔗 Explore the new migration guidance here: https://lnkd.in/e2VgCU8m

    If you're navigating a cloud migration or supporting those who are, I hope this provides the guidance you need.

    📣 Acknowledgments: This work reflects the contributions of many across the Microsoft community.

    Microsoft MVPs: Stéphane Eyskens, Michael Stephenson, Danny McDermott, Stanislav Zhelyazkov, Joe Carlyle, Scott Corio, Simon Wåhlin, Bert Wolters, Elton Bordim, Haiko Hertes, Robert Hogg, Vladimir Stefanović, Andrew Wilson

    Microsoft colleagues: Daniel Söderholm, Ivan Bondy, Rob Rinear, Brody Schulke, Philip Sills, Sandra Patricia Sánchez Martínez, Jack Tracey, Sunil Seth, Timo Salomäki, Michael Lemire, Tomas Kovarik, Larz Stridh, Konstantinos Pantos, Ryan Pfalz, Oscar Zamora, Courtney Taylor, PMP, Kevin Bell, John Lunn, Mannan Mohammed, Mark Piggott, Phani Kumar Teluguti, Yudhbir Singh, Alvaro Guadamillas Herranz

    CAF Engineering Lead: Jason Bouska

    Luke Nyswonger, Martin Ekuan, Hans Yang

  • Prafful Agarwal

    Software Engineer at Google

    33,120 followers

    In 2018, Etsy made a bold move: leaving behind its self-managed data centers to embrace Google Cloud Platform (GCP). The goal? Stop wasting time on hardware and start focusing on what matters: building features that make Etsy the marketplace we all love.

    Here’s how they pulled off this massive migration, and the lessons engineers can learn:

    ➥ Identifying and Scoping Projects
    - What they did:
      - Divided the migration into 8 major projects (e.g., production render path, search services) and further into 30+ sub-projects.
      - Used a RACI model to assign roles: Responsible, Accountable, Consulted, and Informed.
    - Key insight: Clear ownership and scope ensure smooth coordination across teams, even in large-scale migrations.

    ➥ Architectural Reviews
    - What they did:
      - Conducted 25 architectural reviews and 8 workshops to evaluate tools and workflows.
      - Decided on Terraform and Packer for provisioning, prioritizing flexibility, security, and centralized access.
    - Key insight: Peer-reviewed architectural decisions reduce risks and align tools with long-term goals.

    ➥ Experimentation with Cloud Services
    - What they did:
      - Ran Hadoop jobs on cloud services to understand migration challenges.
      - Tested GCP’s Dataproc and Dataflow but opted for Airflow due to alpha-stage limitations in GCP services.
    - Key insight: Early experiments help identify gaps and inform decisions on using vendor tools versus custom-built solutions.

    ➥ Dependency Mapping and Planning
    - What they did:
      - Used dependency graphs to map interactions between systems, such as caching pools, monitoring tools, and streaming services.
      - Created Gantt-style plans to estimate effort, timing, and interdependencies.
    - Key insight: Visualizing dependencies minimizes surprises during migration and ensures systematic execution.

    ➥ Decision Matrix for Vendor Selection
    - What they did:
      - Evaluated vendors using a matrix of 200+ requirements, weighted across seven functional areas (e.g., cost, security, scalability).
      - Scored each vendor on a 0–9 scale, with GCP emerging as the best fit by exceeding competitors by 10%.
    - Key insight: A structured decision-making process aligns engineering needs with vendor capabilities.

    ➥ Building Partnerships
    - What they did:
      - Engaged with GCP through deep-dive sessions on container services and infrastructure tools.
      - Consulted reference customers to learn best practices and potential pitfalls.
    - Key insight: Collaboration with vendors and peers accelerates learning and fosters a shared engineering culture.

    Check the first comment for bonus insights!
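The weighted decision matrix described above reduces to a weighted sum per vendor. A minimal sketch - the weights, functional areas, scores, and vendor names here are invented for illustration (Etsy's real matrix weighed 200+ requirements across seven areas):

```python
# Illustrative weights and 0-9 scores; not Etsy's actual data.
weights = {"cost": 0.25, "security": 0.25, "scalability": 0.2,
           "tooling": 0.15, "support": 0.15}
scores = {
    "Vendor A": {"cost": 7, "security": 8, "scalability": 9,
                 "tooling": 8, "support": 7},
    "Vendor B": {"cost": 8, "security": 6, "scalability": 7,
                 "tooling": 6, "support": 8},
}

def weighted_total(vendor_scores, weights):
    """Weighted sum of area scores; higher is better."""
    return sum(vendor_scores[area] * w for area, w in weights.items())

ranked = sorted(scores, key=lambda v: weighted_total(scores[v], weights),
                reverse=True)
for vendor in ranked:
    print(vendor, round(weighted_total(scores[vendor], weights), 2))
```

The value of the exercise is less the final number than the forced agreement, up front, on what the weights are - that is what makes the outcome defensible.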

  • Before you move a single SAP system, you need to answer 5 questions. Miss even one and your migration might fail before it starts.

    Most teams skip this part. They jump straight into provisioning cloud resources, copying environments, and trying to meet a go-live deadline. But that’s like building a train schedule without knowing how many trains you’ve got, or where they’re going.

    Back when I consulted for large SAP migrations - from Colgate to Fortune 100 manufacturers - we never started with tooling. We started with assessment. Because without a clear understanding of what you’re moving, how it’s connected, and what it impacts, you’re flying blind.

    These are the 5 things I always map before touching a single system:

    1. System inventory — what exists, and what’s connected.
    You’d be surprised how many environments have orphaned or undocumented dependencies. Miss one? That’s your failure point.

    2. Business criticality — what can’t go down, even for a minute.
    Not all systems are equal. Some run background jobs. Others run revenue. You migrate those differently.

    3. Resource constraints — who’s available, when, and for how long.
    Most IT teams are already overloaded. You need to know what talent you have before committing to timelines.

    4. Downtime thresholds — what’s the business actually willing to tolerate?
    I’ve seen 80-hour migration estimates get crammed into 24-hour windows. You don’t negotiate after you start. You plan ahead.

    5. Migration sequencing — what moves first, and what moves in parallel.
    Dependencies aren’t just technical — they’re operational. Order matters, or everything stalls.

    Assessment isn’t overhead. It’s insurance. And the cost of skipping it? Blown deadlines. Missed shipments. Angry execs. And a team stuck in recovery mode for weeks.

    Every successful migration I’ve ever led had this phase built in from the start. And every failed one I’ve seen? Didn’t.
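Point 4 (downtime thresholds) is ultimately arithmetic you can check during assessment, before committing to a window. A minimal sketch with made-up systems and an assumed buffer multiplier for validation and possible rollback:

```python
# Illustrative systems; est_hours is the dry-run migration estimate,
# window_hours is what the business has agreed to tolerate.
systems = [
    {"name": "ERP-PROD", "est_hours": 80, "window_hours": 24},
    {"name": "BW",       "est_hours": 10, "window_hours": 48},
]

def feasibility(systems, buffer=1.25):
    """Flag systems whose estimate (padded by a rollback/testing buffer,
    here an assumed 25%) exceeds the agreed window. Negotiate these
    BEFORE starting, not mid-cutover."""
    report = {}
    for s in systems:
        needed = s["est_hours"] * buffer
        report[s["name"]] = "OK" if needed <= s["window_hours"] else "AT RISK"
    return report

print(feasibility(systems))  # {'ERP-PROD': 'AT RISK', 'BW': 'OK'}
```

An 80-hour estimate against a 24-hour window fails this check immediately - which is exactly the conversation the post says must happen during planning, not after go-live starts.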

  • Nathaniel Alagbe CISA CISM CISSP CRISC CCAK CFE AAIA FCA

    IT Audit & GRC Leader | AI & Cloud Security | Cybersecurity | Transforming Risk into Boardroom Intelligence

    22,324 followers

    Dear IT Auditors,

    Auditing Data Migration

    Data migration projects are among the riskiest IT initiatives an organization can undertake. Whether it’s moving from on-prem to cloud, consolidating legacy systems, or integrating after a merger, the stakes are high. A single error can lead to data corruption, compliance violations, or business downtime. That’s why data migration assurance has become a critical part of IT audit and GRC.

    Here’s how auditors can add value when reviewing migration projects:

    📌 Pre-Migration Planning: The foundation of assurance is in the planning. Review project charters, migration strategies, and risk assessments. Confirm that the scope is clearly defined (which data, which systems, what timelines). Lack of upfront clarity is often the root cause of failed migrations.

    📌 Data Mapping and Transformation Rules: Check whether data mapping is documented and transformation logic is validated. Auditors should ensure data formats, field lengths, and relationships are consistent across systems. If this step is rushed, errors cascade downstream.

    📌 Test Migration Runs: Review evidence of test migrations. Were trial loads conducted with sample data? Did the organization reconcile totals and critical records? This is where issues surface early, and auditors should confirm there’s evidence of structured testing.

    📌 Reconciliation and Validation: After migration, controls should validate that all data migrated accurately and completely. Audit procedures include reconciling record counts, financial totals, and critical data fields between legacy and new systems. Spot checks on high-risk data (like customer balances) are essential.

    📌 Access and Security Controls: Migrations often involve temporary elevated access for IT teams. Confirm that privileged access was approved, monitored, and revoked post-migration. Review whether sensitive data was encrypted in transit.

    📌 Business Continuity and Rollback: Strong migration assurance requires asking what happens if the migration fails. Auditors should verify rollback procedures, data backups, and business continuity testing. It’s not enough to hope the migration works; the plan must cover failure scenarios.

    📌 Post-Migration Monitoring: The job isn’t done after cutover. Review post-migration monitoring reports, error logs, and end-user acceptance testing. Assurance means confirming that business processes continue smoothly without disruption.

    Data migration assurance goes beyond ticking boxes. It provides stakeholders with confidence that systems, data, and compliance remain intact during one of the most disruptive IT events. For auditors, this presents an opportunity to demonstrate real business value, not just control testing.

    #DataMigration #ITAudit #RiskManagement #InternalAudit #DataGovernance #GRC #CyberSecurityAudit #ITControls #CloudAudit #ITRisk #CyberYard #CyberVerge
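The reconciliation step above (record counts, financial totals, key-level spot checks) can be sketched as a simple comparison between legacy and target extracts. Record keys and balances here are illustrative; note how the zero-balance record makes the count check catch what the totals check would miss:

```python
# Illustrative extracts: (customer key, balance) pairs.
legacy = [("C001", 150.00), ("C002", 99.50), ("C003", 0.00)]
target = [("C001", 150.00), ("C002", 99.50)]   # C003 dropped in migration

def reconcile(legacy, target):
    """Return a list of reconciliation failures between the two extracts."""
    issues = []
    if len(legacy) != len(target):
        issues.append(f"record count mismatch: {len(legacy)} vs {len(target)}")
    legacy_total = sum(bal for _, bal in legacy)
    target_total = sum(bal for _, bal in target)
    if abs(legacy_total - target_total) > 0.005:  # tolerance for rounding
        issues.append(f"balance total mismatch: {legacy_total} vs {target_total}")
    missing = {k for k, _ in legacy} - {k for k, _ in target}
    if missing:
        issues.append(f"missing keys: {sorted(missing)}")
    return issues

for issue in reconcile(legacy, target):
    print("FAIL:", issue)
```

For audit purposes, the point is that each check produces evidence: a reconciliation report an auditor can review, rather than a verbal assurance that "everything moved".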
