The biggest threat to your data isn’t happening tomorrow. It happened yesterday.

If you haven’t heard of HNDL (Harvest Now, Decrypt Later), your long-term data strategy has a massive blind spot.

Here is the reality: State actors and cybercriminals are capturing your encrypted data today. They can’t read it yet, so they’re storing it in massive data vaults, waiting for the "Qday"—the moment quantum computers become powerful enough to break current encryption. If your data needs to stay private for 5, 10, or 20 years, it’s already at risk.

What’s on the line?
↳ Intellectual Property (IP) and trade secrets.
↳ Government and identity data.
↳ Long-term financial records and contracts.
↳ Sensitive customer health data.

How do we solve it? 🛠️
We cannot wait for quantum supremacy to react. The fix starts now:
↳ Inventory: Identify which data has a long shelf-life.
↳ Crypto-Agility: Move toward systems that can swap encryption methods without a total overhaul.
↳ Hybrid PQC: Implement Post-Quantum Cryptography alongside classical methods to ensure traffic captured today remains a mystery tomorrow (a minimal sketch follows this post).

The transition to quantum-resistant security is a marathon, not a sprint.

Are you tracking HNDL on your current risk register? Let’s discuss in the comments. 👇

P.S. If you want help mapping your exposure or building a PQC migration plan, drop me a message.

♻️ Share this post if it speaks to you, and follow me for more.

#QuantumSecurity #PQC
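As a minimal illustration of the Hybrid PQC point above, here is a hedged Python sketch that derives a session key from both a classical X25519 exchange and a post-quantum KEM secret, so recorded traffic stays protected as long as either scheme holds. It assumes the `cryptography` package; the PQC secret is a stand-in function, since the post does not name a specific ML-KEM library.

```python
# Minimal sketch of a hybrid key exchange: derive the session key from BOTH a
# classical X25519 secret and a post-quantum KEM secret, so an attacker who
# records traffic today must break both schemes later.
# Assumes the 'cryptography' package; pqc_shared_secret is a placeholder,
# because a real deployment would use an ML-KEM implementation.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def pqc_shared_secret() -> bytes:
    """Placeholder for an ML-KEM encapsulation result (hypothetical)."""
    return os.urandom(32)

# Classical X25519 exchange (both sides generated here for illustration only).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Combine both secrets; the session key is safe while either scheme holds.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-pqc-demo",
).derive(classical_secret + pqc_shared_secret())
print(f"derived {len(session_key)}-byte hybrid session key")
```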
Data Migration
Explore top LinkedIn content from expert professionals.
-
The imperative to prepare for the transition to quantum-safe cryptography doesn't necessarily mean an immediate switch. Consider these two critical aspects:

☝ Complexity of Cryptographic Algorithm Transition: Transitioning cryptographic algorithms is a complex undertaking. A quick examination within your organization or with your service providers may reveal the use of obsolete algorithms like SHA-1 or TDEA. For example, the payment card industry still employs TDEA, even though its obsolescence was announced in 2019. It's essential to enhance your organization's cryptography management capabilities before embarking on the transition to quantum-safe cryptography.

✌ Scrutiny Required for New PQC Algorithms: The new Post-Quantum Cryptography (PQC) algorithms are relatively recent and warrant careful examination. Historically, we have deployed cryptographic algorithms on a production scale only after several years of existence, allowing comprehensive scrutiny. While PQC standardization offers some security assurances, it doesn't cover the software implementations deployed in your environment. Consider employing phased deployments and hybrid implementations to avoid compromising the existing security provided by classical cryptography.

Recent news, as mentioned in this article, highlights the immaturity of implementations of new PQC algorithms. While the title might be somewhat misleading, it's crucial to recognize that occasional flaws in implementations, like those found (and fixed) in various instances of Kyber, serve as reminders. As we transition to these new implementations, we must first gain control over our cryptography.

Here's a suggested action plan:
🚩 Cryptography Management: Prioritize gaining control over your cryptography (a small inventory sketch follows this post).
🚩 Understanding Quantum-Safe Cryptography: Familiarize yourself with the development of quantum-safe cryptography.
🚩 Transition Plan Preparation: Follow recommendations to prepare a comprehensive transition plan. Some of my favourite resources are:
- Federal Office for Information Security (BSI)'s "Quantum-safe cryptography" (https://lnkd.in/dqkSAQSP)
- Government of Canada CFDIR's "BEST PRACTICES AND GUIDELINES" (https://lnkd.in/d-w_Nbfj)
- National Institute of Standards and Technology (NIST)'s "Migration to Post-Quantum Cryptography" (https://lnkd.in/dYMKnqBb)
🚩 Decision-Making: Make informed decisions based on the acquired knowledge.

In summary, a thoughtful and phased approach is key to ensuring a smooth transition to quantum-safe cryptography.

https://lnkd.in/dxAgF2ac

#cryptography #quantumcomputing #security #pqc #cybersecurity
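To make the "gain control over your cryptography" step concrete, here is a rough inventory sketch, an assumption of how one might start rather than the author's tooling: it walks a folder of PEM certificates and flags legacy signature hashes such as SHA-1 and undersized RSA keys, using the `cryptography` package. The directory path and thresholds are illustrative.

```python
# Rough sketch of a cryptographic inventory pass: walk a directory of PEM
# certificates and flag legacy signature hashes (e.g. SHA-1) and small RSA keys.
# Paths and thresholds are illustrative; assumes the 'cryptography' package.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

WEAK_HASHES = {"sha1", "md5"}

def audit_certs(cert_dir: str) -> None:
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        findings = []
        algo = cert.signature_hash_algorithm
        if algo is not None and algo.name in WEAK_HASHES:
            findings.append(f"weak signature hash: {algo.name}")
        pub = cert.public_key()
        if isinstance(pub, rsa.RSAPublicKey) and pub.key_size < 2048:
            findings.append(f"small RSA key: {pub.key_size} bits")
        if findings:
            print(f"{pem.name}: {', '.join(findings)}")

audit_certs("./certs")  # illustrative directory
```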
-
🛡️ The Quantum Clock Is Quietly Ticking: Is Your Financial Infrastructure Ready?

The financial industry is built on a foundation of digital trust, currently secured by #cryptographic standards like RSA and ECC. However, the rise of Cryptographically Relevant Quantum Computers (CRQC) poses an existential threat to this foundation. As we navigate this transition, here are 3 key pillars from the latest Mastercard R&D white paper that every financial leader must prioritize:

1. Addressing the 'Harvest Now, Decrypt Later' (HNDL) Threat 📥
Malicious actors are already intercepting and storing sensitive #encrypted data today, intending to decrypt it once powerful quantum computers are available.
Financial Use Case: Protecting long-term assets such as credit histories, investment records, and loan documents. Unlike transient transaction data (which uses dynamic cryptograms), this "shelf-life" data requires immediate risk analysis and the adoption of quantum-safe encryption for back-end systems.

2. Quantum Resource Estimation & The 10-Year Horizon ⏳
While a CRQC capable of breaking RSA-2048 in hours might be 10 to 20 years away, the migration process itself will take years.
Financial Use Case: Developing Agile Cryptography Plans. Financial institutions should set "action alarms": for instance, once a quantum computer reaches 10,000 qubits, a pre-prepared 10-year migration plan must be triggered to ensure infrastructure is updated before the "meteor strike" occurs.

3. Hybrid Implementations: The Bridge to Security 🌉
The transition won't happen overnight. The paper highlights the importance of Hybrid Key Encapsulation Mechanisms (KEM), which combine classical security with PQC.
Financial Use Case: Enhancing TLS 1.3 and OpenSSL 3.5 deployments. By implementing hybrid models now, banks can protect against current quantum threats (like HNDL) while maintaining compatibility with existing classical systems, ensuring a smooth and safe transition.

The Bottom Line: A reactive approach is no longer an option. Early adopters who evaluate their data's "time value" and begin the migration today will be the ones to maintain resilience and protect global financial assets tomorrow.

#QuantumComputing #PostQuantumCryptography #FinTech #CyberSecurity #DigitalTrust #MastercardResearch
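The "time value" argument in this post can be written down as a simple planning check in the spirit of Mosca's inequality: if the years a record must stay confidential plus the years migration will take exceed the years until a cryptographically relevant quantum computer, that record is already exposed to harvest-now-decrypt-later. The sketch below is illustrative only; the asset names, shelf lives, and the 12-year horizon are assumptions, not figures from the Mastercard paper.

```python
# Back-of-the-envelope "time value of data" check in the spirit of Mosca's
# inequality: if shelf_life + migration_time > time_to_quantum, data recorded
# today can be decrypted while it still matters. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    shelf_life_years: float   # how long the data must remain confidential
    migration_years: float    # how long moving it to quantum-safe encryption will take

TIME_TO_CRQC_YEARS = 12.0     # planning assumption, not a prediction

def hndl_exposed(asset: Asset, horizon: float = TIME_TO_CRQC_YEARS) -> bool:
    return asset.shelf_life_years + asset.migration_years > horizon

portfolio = [
    Asset("card transaction cryptograms", shelf_life_years=1, migration_years=3),
    Asset("loan documents", shelf_life_years=25, migration_years=5),
    Asset("credit histories", shelf_life_years=10, migration_years=5),
]
for a in portfolio:
    status = "EXPOSED to harvest-now-decrypt-later" if hndl_exposed(a) else "within horizon"
    print(f"{a.name}: {status}")
```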
-
𝗗𝗮𝘁𝗮 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀: 𝗦𝗲𝗿𝘃𝗲𝗿𝗹𝗲𝘀𝘀 𝗗𝗲𝗹𝘁𝗮 𝗟𝗮𝗸𝗲 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗼𝗻 𝗔𝗪𝗦
===========================================

Imagine you have data in your company's local servers (on-premises) and want to:
1. Move this data to AWS
2. Analyze it without managing servers
3. Use an event-driven approach

Here's how TrueBlue, a company facing this challenge, solved it using AWS services:

𝟭. 𝗗𝗮𝘁𝗮 𝗠𝗶𝗴𝗿𝗮𝘁𝗶𝗼𝗻
-----------------
• Used AWS Database Migration Service to copy data from local databases to Amazon S3
• Ensures up-to-date information for jobs, job requests, and workers
• Enables accurate job matching

𝟮. 𝗘𝘃𝗲𝗻𝘁-𝗗𝗿𝗶𝘃𝗲𝗻 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲
------------------------------
• Set up S3 event notifications when new data arrives
• Used Amazon SQS (Simple Queue Service) to capture these events
• Created 3 SQS queues for different update frequencies:
  - 10-minute updates
  - 60-minute updates
  - 3-hour updates
• AWS EventBridge rules trigger Step Functions based on these time intervals
• Step Functions orchestrate AWS Glue jobs for data processing

𝟯. 𝗦𝗲𝗿𝘃𝗲𝗿𝗹𝗲𝘀𝘀 𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴
--------------------------
• Chose AWS Glue over Amazon EMR (Elastic MapReduce) for serverless data processing
• Reasons for choosing Glue:
  - Team's expertise in serverless development
  - Easier to manage and debug
  - Achieves similar results to EMR without server management
• Glue jobs transform and load data into the Delta Lake format

𝟰. 𝗔𝗻𝗮𝗹𝘆𝘁𝗶𝗰𝘀
------------
• Data scientists use PySpark SQL to query the Delta Lake
• Delta Lake has three tiers:
  1. Bronze: Raw data from source systems
  2. Silver: Cleaned and joined data from the bronze tier
  3. Gold: Prepared data for machine learning (feature store)
• Glue jobs keep the Delta Lake up-to-date with reliable upserts (updates and inserts); see the merge sketch after this post
• Enables data scientists to:
  - Perform accurate job matches
  - Extract datasets for analysis
  - Build and train machine learning models

𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝗼𝗳 𝘁𝗵𝗶𝘀 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲:
------------------------------
1. Serverless: No need to manage infrastructure
2. Scalable: Can handle increasing data volumes
3. Cost-effective: Pay only for resources used
4. Real-time: Event-driven updates keep data fresh
5. Flexible: Supports various data processing needs

This architecture showcases how to build a modern, serverless data lake using AWS services, enabling efficient data migration, processing, and analytics without the complexity of managing servers.

#dataengineer #dataengineering #deltalake #aws
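The "reliable upserts" step can be pictured with a short Glue-style PySpark job that merges a newly landed bronze batch into a silver Delta table. This is a hedged sketch, not TrueBlue's actual code: the S3 paths, table layout, and `job_id` join key are assumptions, and it presumes the delta-spark package is available to the Spark session (as in a Glue job configured for Delta Lake).

```python
# Minimal sketch of the "reliable upserts" step: merge a freshly landed batch
# into a silver Delta table. S3 paths and the join key are illustrative.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (SparkSession.builder
         .appName("silver-upsert")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Newly arrived bronze batch (path is a placeholder).
updates = spark.read.format("delta").load("s3://example-bucket/bronze/jobs/")
silver = DeltaTable.forPath(spark, "s3://example-bucket/silver/jobs/")

# Upsert: update existing rows, insert new ones, keyed on an assumed job_id.
(silver.alias("t")
 .merge(updates.alias("s"), "t.job_id = s.job_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```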
-
The “Before & After” Data Transformation Story

In the lead-up to our SAP migration, we weren’t just preparing systems — we were unearthing years of neglected, inconsistent, and chaotic data. If we are honest, most of the time it felt less like digital transformation and more like an archaeological excavation.

We were buried in layers of spreadsheets, conflicting legacy reports, and systems that hadn’t seen a clean-up in over a decade. Each click revealed more clutter: customer names spelled five different ways, address fields mixing “St.” and “Street” like it was a coin toss, duplicate records stacked on top of each other, and critical fields left blank or filled with guesswork.

It was more than just messy. It was risky. A complete nightmare! Data was being pulled from everywhere and nowhere. No single source of truth. No consistency. Just a patchwork of outdated inputs fuelling vital business operations.

The worst part? We had to tackle it manually.
A Time Sink: Highly skilled people stuck doing low-value, repetitive tasks.
An Error Magnet: Fatigue set in. Errors crept through. Fix one issue, uncover two more.
A Business Risk: Dirty data meant dirty output. Reports couldn’t be trusted. Customers were misbilled. Orders were sent to the wrong place. And confidence in the system? Gone.

We knew we couldn’t carry that baggage into SAP. Something had to change.

At this point, we built a purpose-specific solution to automate and streamline data cleansing and validation, giving us the ability to:
Proactively identify and rectify errors with precision.
Ensure data consistency across all records.
Validate information against business rules before migration.

The business impact:
🔹 Reducing Pre-Migration Data Cleansing and Validation Effort by Up to 75%
Freeing up SMEs for strategic tasks, cutting contractor costs, and accelerating migration timelines.
🔹 Delivering >99% Accuracy in Key Master Data
Minimising migration errors, de-risking go-live, and building trust in the new SAP system from day one.
🔹 Reducing Migration Delays and Rework by 20–40%
Fewer surprises in load cycles and UAT, protecting timelines, budgets, and overall project momentum.
🔹 Achieving 100% Data Auditability and Compliance
Ensuring full traceability, streamlining audits, and providing a defensible position on data quality from day one.
🔹 Reducing Post-Go-Live Errors by 15–30%
Fewer issues like misbilling and mis-shipments, leading to smoother operations, faster user adoption, and trusted SAP insights.

If any of this sounds familiar, you're not alone. The good news is that we have built a solution which has already helped others through their migration journey, and we’d be happy to share it if it’s useful. Just drop us a message.

Created in collaboration with Pawel Lipko ↗️
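For readers wondering what rule-based cleansing of this kind looks like in practice, here is a small pandas sketch. It is not the solution described in the post: the column names, abbreviation map, and mandatory-field rules are assumptions, but it shows the pattern of normalising values, flagging blank mandatory fields, and surfacing duplicate candidates before load.

```python
# Illustrative pre-migration cleansing pass: normalize address abbreviations,
# flag blank mandatory fields, and detect duplicate customers.
# Column names and rules are assumptions, not the actual solution. Uses pandas.
import pandas as pd

ABBREVIATIONS = {r"\bSt\.?(?!\w)": "Street",
                 r"\bRd\.?(?!\w)": "Road",
                 r"\bAve\.?(?!\w)": "Avenue"}
MANDATORY = ["customer_name", "address", "country"]

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Normalize names and common address abbreviations.
    df["customer_name"] = df["customer_name"].str.strip().str.title()
    for pattern, full in ABBREVIATIONS.items():
        df["address"] = df["address"].str.replace(pattern, full, regex=True)
    # Flag records that would fail business-rule validation before load.
    df["missing_mandatory"] = df[MANDATORY].isna().any(axis=1)
    df["duplicate_candidate"] = df.duplicated(
        subset=["customer_name", "address"], keep=False)
    return df

raw = pd.DataFrame({
    "customer_name": ["acme ltd", "Acme Ltd", None],
    "address": ["1 Main St.", "1 Main Street", "5 High Rd"],
    "country": ["GB", "GB", None],
})
print(cleanse(raw)[["customer_name", "address",
                    "missing_mandatory", "duplicate_candidate"]])
```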
-
On-Prem to Cloud Migration: Step-by-Step AWS Cloud Migration Process

1. Plan the Migration
Assessment: Identify the current environment (servers, databases, dependencies, and configurations).
Inventory: Document application components and dependencies.
Sizing: Determine AWS resources (EC2 instance types, RDS configurations, etc.) based on current usage.
Network Design: Plan VPC setup, subnets, security groups, and connectivity.
Backup Plan: Create a fallback plan for any issues during migration.

2. Prepare the AWS Environment
VPC Setup: Create a VPC with subnets across multiple Availability Zones (AZs).
Security: Configure security groups, IAM roles, and policies.
Database Configuration: Set up an Amazon RDS instance or EC2-based database for the migration.
AD Server: Use AWS Managed Microsoft AD or deploy your AD on EC2.
Application Server: Launch EC2 instances and configure the operating system and required dependencies.

3. Migrate Database (see the DMS sketch after this post)
Backup: Create a backup of the current database.
Export/Import: Use database migration tools (e.g., AWS DMS or native database tools) to migrate data to the AWS database.
Replication: Set up database replication for real-time sync with the on-prem database.
Validation: Verify data consistency and integrity post-migration.

4. Migrate Application Server
Packaging: Package the application (e.g., as Docker containers, AMIs, or simple binaries).
Deployment: Deploy the application on AWS EC2 instances or use AWS Elastic Beanstalk.
DNS Configuration: Update DNS records to point to the AWS environment.

5. Migrate Active Directory (AD)
Replication: Create a replica of the on-prem AD in AWS using an AD Trust setup.
DNS Sync: Sync DNS entries between on-prem and AWS environments.
Validation: Test authentication and resource access.

6. Test and Validate
End-to-End Testing: Validate the complete environment (application, database, and AD).
Performance Check: Monitor performance using CloudWatch and address any issues.
Failover Testing: Simulate failure scenarios to ensure HA/DR readiness.

7. Cutover and Go Live
Schedule Downtime: Coordinate with stakeholders and users for a minimal downtime window.
Final Sync: Perform a final sync of the database and switch traffic to AWS.
DNS Propagation: Update DNS settings to route traffic to the AWS environment (may take up to 24 hours).
Monitoring: Continuously monitor AWS resources and performance post-migration.

8. Post-Migration Optimization
Scaling: Implement auto-scaling policies for the application.
Security: Regularly review and improve security configurations.
Cost Optimization: Use AWS Cost Explorer to analyze and optimize resource usage.

Downtime Considerations
Database Migration: Plan a maintenance window of 2–4 hours for the final database sync and cutover.
DNS Propagation: Approx. 15 minutes to 24 hours, depending on TTL settings. Use short TTLs during migration to minimize delays.

#AWSMigration #CloudMigration #MinimalDowntime #DatabaseToAWS #ApplicationToAWS #ADToAWS
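Step 3 (database migration) is where AWS DMS does the heavy lifting. The sketch below shows what creating a full-load-plus-CDC replication task looks like with boto3 so the source stays in sync until cutover; the ARNs, region, and table-selection rule are placeholders, since the post does not specify an environment.

```python
# Sketch of step 3 (database migration) using AWS DMS via boto3: a full-load
# plus ongoing-replication (CDC) task so the on-prem database stays in sync
# until cutover. All ARNs and the table selection rule are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-app-schema",
        "object-locator": {"schema-name": "APP", "table-name": "%"},
        "rule-action": "include",
    }]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-to-aws-full-load-cdc",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",      # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",      # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",    # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["Status"])
```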
-
🚨 NEW PEER-REVIEWED RESEARCH: PQC Migration Timelines

Excited to share my latest paper published in MDPI Computers: "Enterprise Migration to Post-Quantum Cryptography: Timeline Analysis and Strategic Frameworks."

The transition to Post-Quantum Cryptography (PQC) represents a watershed moment in the history of our digital civilization. Organizations planning for a 3-5 year "upgrade" will fail. The reality is a 10-15-year systemic transformation.

Key Contributions:
📊 Realistic Timeline Estimates by Enterprise Size:
Small (≤500 employees): 5-7 years
Medium (500-5K): 8-12 years
Large (>5K): 12-15+ years
⚠️ Critical Finding: With FTQC expected 2028-2033, large enterprises face a 3-5 year vulnerability window—migration may not complete before quantum computers break RSA/ECC.
🔬 Novel Framework Analysis:
Causal dependency mapping (HSM certification, partner coordination as critical paths)
"Zombie algorithm" maintenance overhead quantified (20-40%)
Zero Trust Architecture implications for PQC
💡 Practical Guidance: Crypto-agility frameworks and phased migration strategies for immediate action.

Strategic Recommendations for Leadership:
1. Prioritize by Data Value, Not System Criticality: Invert the traditional triage model. Systems protecting long-lived data (IP, PII, secrets) must migrate first, regardless of their operational uptime criticality, to mitigate SNDL (store now, decrypt later).
2. Fund the "Invisible" Infrastructure: Budget immediately for the expansion of PKI repositories, bandwidth upgrades, and HSM replacements. These are long-lead items that cannot be rushed.
3. Establish a Crypto-Competency Center: Do not rely solely on generalist security staff. Invest in specialized training or retain dedicated PQC counsel to navigate the mathematical and implementation nuances. The talent shortage will only worsen.
4. Demand Vendor Roadmaps: Contractual language must shift. Procurement should require vendors to provide binding roadmaps for PQC support. "We are working on it" is no longer an acceptable answer for critical supply chain partners.
5. Embrace Hybridity: Accept that the future is hybrid. Design architectures that can support dual-stack cryptography indefinitely, viewing it not as a temporary bridge but as a long-term operational state.
6. Implement Automated Discovery: You cannot migrate what you cannot see. Deploy automated cryptographic discovery tools to continuously map the cryptographic posture of the estate, identifying shadow IT and legacy instances that manual surveys miss (a small probe sketch follows this post).

The quantum clock is ticking. Start planning NOW.

https://lnkd.in/eHZBD-5Y
📄 DOI: https://lnkd.in/ejA9YpsG

#PostQuantumCryptography #Cybersecurity #QuantumComputing #PQC #InfoSec #NIST #CryptoAgility
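Recommendation 6 (automated discovery) can start as small as a script that probes known endpoints and records what they negotiate. The sketch below uses only the Python standard library; the endpoint list is an assumed inventory, and a real discovery tool would also parse certificates, scan code and configuration, and feed an inventory database rather than print to stdout.

```python
# Rough sketch of automated cryptographic discovery: probe endpoints and record
# the negotiated TLS version and cipher so classical (quantum-vulnerable) key
# exchange can be mapped across the estate. Hostnames are illustrative.
import socket
import ssl

ENDPOINTS = [("example.com", 443), ("internal-app.example.com", 443)]  # assumed inventory

def probe(host: str, port: int) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher, version, bits = tls.cipher()
            return {"host": host, "tls_version": version,
                    "cipher": cipher, "bits": bits}

for host, port in ENDPOINTS:
    try:
        print(probe(host, port))
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```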
-
8 years ago, as a Jr. Engineer at Amazon, I learned an expensive lesson about “Green Dashboards.”

Just because the alarms are not ringing does not mean the house is not on fire.

We migrated from Oracle to DynamoDB. It looked perfect. Until the bill arrived.

This was early in my time at Amazon. DynamoDB had been around for years, but most core systems still ran on Oracle. Then came a company-wide migration push, internally called Rolling Stone, to move consumer workloads from Oracle to AWS services like DynamoDB and Aurora.

One of the first services I touched looked simple: a few Oracle tables, some batch writes, read-heavy APIs. We redesigned the schema for DynamoDB, ran backfills, validated row counts, replayed test traffic. Latency, error rates, CPU all looked green. We cut over production.

For the first days, nothing broke. Dashboards looked great. About two weeks in, p95 and p99 latency started creeping up for a small slice of traffic. No alerts fired. Averages still looked fine.

Behind those green graphs, three things were going wrong:
– Our Oracle-style access patterns turned into inefficient DynamoDB queries.
– Traffic created hot partitions we never hit in testing.
– Short throttling spikes were hidden inside healthy-looking table averages.

Once customers started timing out, we did the only thing that worked fast: cranked up provisioned capacity. Throttling dropped. Latency went back to normal. And the DynamoDB bill quietly exploded.

By the time finance asked questions, DynamoDB was the system of record. Rolling back to Oracle would have meant data reconciliation, traffic freezes, and real downtime. So we fixed forward: redesigned keys, cleaned up access patterns, added real p99 and cost visibility (a small CloudWatch sketch follows this post).

Since then my rule is simple: if dashboards are green but tail latency or cost is drifting, you are not healthy, you are blind. And if your rollback only works when everything is perfect, it is not a plan, it is hope.

If you want to see the broader Oracle to AWS migration story, AWS has a public write-up on its news blog: https://lnkd.in/gEn3iSSW
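The "real p99 and cost visibility" fix is mostly about asking CloudWatch the right questions. Here is a hedged sketch that pulls p99 SuccessfulRequestLatency and summed ThrottledRequests for one DynamoDB table; the table name, operation, and six-hour window are illustrative assumptions rather than the team's actual dashboards.

```python
# Sketch of the "real p99 visibility" fix: query CloudWatch for p99 request
# latency and throttled requests on a DynamoDB table instead of trusting
# averages. Table name, operation, and time window are illustrative.
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)

latency = cw.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="SuccessfulRequestLatency",
    Dimensions=[{"Name": "TableName", "Value": "orders"},      # assumed table
                {"Name": "Operation", "Value": "Query"}],
    StartTime=start, EndTime=end, Period=300,
    ExtendedStatistics=["p99"],
)
throttles = cw.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ThrottledRequests",
    Dimensions=[{"Name": "TableName", "Value": "orders"},
                {"Name": "Operation", "Value": "Query"}],
    StartTime=start, EndTime=end, Period=300,
    Statistics=["Sum"],
)
for point in sorted(latency["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], "p99(ms):", point["ExtendedStatistics"]["p99"])
print("throttled requests (6h):",
      sum(p["Sum"] for p in throttles["Datapoints"]))
```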
-
🔐 Word o’ the Day | Year | Decade: Crypto-agility, Baby!

Yesterday morning, I did a fun fireside chat with Bethany Gadfield - Netzel at the FIA, Inc. Expo in Chicago. We talked about cyber resilience, artificial intelligence, Rubik’s cubes, and that thing called quantum!

A question came up at the end: “What can firms actually do today to begin transitioning to post-quantum cryptography?” So I thought I would take the opportunity to share my thoughts more broadly on this important, but not super well understood, topic:

1. Don’t wait. The clock for quantum-safe cryptography is already ticking. NIST released its first set of post-quantum standards last year (https://lnkd.in/esTm8uPw) and CISA put out a “Strategy for Migrating to Automated Post-Quantum Discovery and Inventory Tools” last year as part of its broader Post Quantum Cryptography (PQC) Initiative (https://lnkd.in/evpF4umv). h/t Garfield Jones, D.Eng.!

2. Inventory & prioritize. Map all cryptographic usage: what keys, certificates, protocols, and data streams exist today? Which assets hold long-lived value and are at risk of “harvest-now, decrypt-later”? Build a migration roadmap that prioritizes highest-risk systems (e.g., financial settlement platforms, inter-bank links, legacy encryption).

3. Establish crypto-agility. Ensure your architecture supports swapping algorithms, updating certificates, & layering classical + post-quantum primitives without a full system rebuild. This kind of flexibility is key for resilience (a minimal sketch follows this post).

4. Pilot and migrate. Use the new NIST-approved algorithms; experiment first on less time-sensitive systems, validate performance and interoperability, then scale to mission-critical applications. NIST’s IR 8547 report provides a framework for this transition.

5. Vendor & supply-chain alignment. Ask your vendors & service providers: “What’s your PQC transition plan? When will you support NIST-approved post-quantum algorithms? Are your update paths crypto-agile?” If the answer isn’t clear or (as a former boss of mine used to say) they look at you like a “pig at a wristwatch,” you’ve got a potentially serious third-party risk.

6. Board and Exec engagement. Position this not as an IT problem but as a fiduciary risk and resilience imperative. The transition to quantum-safe cryptography is multi-year and multi-layered—waiting until it’s urgent means it will be too late.
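For point 3, crypto-agility can be as mundane as never letting call sites name an algorithm directly. The sketch below shows the idea with a config-keyed signer registry; it uses the `cryptography` package for the classical entries, and the ML-DSA entry is deliberately a stub because the post does not prescribe a PQC library.

```python
# Minimal crypto-agility sketch: call sites ask for "the signer" by config
# name, so swapping algorithms (eventually to a PQC scheme) is a configuration
# change, not a rewrite. The ML-DSA entry is a stub; the classical entries use
# the 'cryptography' package. Keys are generated per call only for brevity.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

def _sign_rsa(data: bytes) -> bytes:
    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    return key.sign(data, padding.PKCS1v15(), hashes.SHA256())

def _sign_ed25519(data: bytes) -> bytes:
    return ed25519.Ed25519PrivateKey.generate().sign(data)

def _sign_mldsa(data: bytes) -> bytes:
    raise NotImplementedError("plug in an ML-DSA implementation here")

SIGNERS = {"rsa-3072": _sign_rsa, "ed25519": _sign_ed25519, "ml-dsa-65": _sign_mldsa}

def sign(data: bytes, algorithm: str = "ed25519") -> bytes:
    """Call sites never name a concrete algorithm; configuration does."""
    return SIGNERS[algorithm](data)

print(len(sign(b"settlement batch #42", algorithm="ed25519")), "byte signature")
```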
-
We almost brought a 20-year-old mistake into S/4HANA.

During a recent S/4 migration for a pharma client, "Clean Core" was the mandate from the steering committee. But when we ran the readiness check, the system flagged over 12,000 custom Z-programs.

The project timeline was tight. The business sponsor panicked. "Just lift and shift them all," he said. "We can’t risk breaking operations. We will clean up the custom code in Phase 2."

If you’ve been in the SAP world long enough, you know the ugly truth: Phase 2 never happens.

Instead of arguing, I asked our Basis team to run a simple background job: a 12-month usage report on those 12,000 custom programs. The results were staggering.

The Reality Check:
Custom objects in the system: 12,000
Objects executed in the last year: 2,400
Objects executed in the last 30 days: 850

They were about to spend hundreds of thousands of dollars and risk the stability of their new S/4 system, just to migrate digital ghosts. Code that belonged to employees who had retired a decade ago. Workarounds for business processes that no longer existed.

We didn't just delete the code. We printed the report and put it on the sponsor's desk. The conversation shifted instantly from "How do we migrate this?" to "Why are we hoarding this?"

An S/4HANA migration is not an IT infrastructure project. It is a corporate garage sale. If you don't have the courage to throw things away before you move, you aren't transforming. You're just relocating your mess.

What is the craziest piece of legacy Z-code you’ve seen someone try to drag into an S/4HANA system?
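The usage-report triage in this story translates to a few lines of analysis once the data is exported. The sketch below is hypothetical: the CSV file names and columns are assumptions standing in for whatever the Basis team pulled from the system's workload statistics.

```python
# Sketch of the usage-report triage described above: given an inventory of
# custom Z-programs and an exported execution log (file names and columns are
# hypothetical), report how many objects actually ran in the last 12 months
# and 30 days. Uses pandas.
from datetime import datetime, timedelta
import pandas as pd

inventory = pd.read_csv("z_program_inventory.csv")        # column: program
usage = pd.read_csv("workload_export.csv",                # columns: program, last_run
                    parse_dates=["last_run"])

now = datetime.now()
last_run = usage.groupby("program")["last_run"].max()

total = len(inventory)
ran_12m = (last_run >= now - timedelta(days=365)).sum()
ran_30d = (last_run >= now - timedelta(days=30)).sum()

print(f"Custom objects in the system:  {total}")
print(f"Executed in the last year:     {ran_12m}")
print(f"Executed in the last 30 days:  {ran_30d}")
print(f"Candidates to leave behind:    {total - ran_12m}")
```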