Post-Quantum Data Security
From Confidentiality-First to Detectability-Driven Protection
Executive Summary
Quantum computing challenges a long-standing assumption in enterprise data security: that encryption strength alone is sufficient to protect sensitive information.
While post-quantum cryptography (PQC) is essential to ensure long-term cryptographic survivability, it does not address the most damaging and common failure mode observed in real-world breaches — silent data compromise.
In both pre-quantum and post-quantum environments, the primary business risk is not eventual decryption, but undetected access to sensitive data. When meaningful access remains invisible for weeks or months, cryptographic strength becomes largely irrelevant to business, regulatory, and reputational impact.
This paper argues for an evolution in data security strategy: from a confidentiality-only model to a layered approach that treats detectability as a first-class objective alongside encryption.
In a post-quantum world, secrecy duration matters — but awareness speed matters more.
1. The Post-Quantum Threat Reality
Quantum computing introduces credible long-term risk to widely deployed public-key cryptographic algorithms. Adversaries may already be harvesting encrypted data today with the expectation that future computational advances will enable retrospective decryption.
However, well before quantum threats materialize at scale, organizations already face a more immediate and proven challenge: data breaches that remain undetected for extended periods.
Attackers routinely access, validate, and exfiltrate sensitive data quietly, with detection occurring only after the data has been misused, sold, or publicly disclosed.
This reveals a critical reality:
Encryption is increasingly time-bound, while data value is time-sensitive.
Security effectiveness must therefore be measured not only by cryptographic strength, but by how quickly meaningful data access is detected and contained.
2. Post-Quantum Cryptography: Necessary but Insufficient
Post-quantum cryptography is a critical foundation for future-proofing data confidentiality.
It provides long-term confidentiality against quantum-capable adversaries, including protection for data harvested today with the intent of future decryption.
However, PQC does not provide visibility into how, when, or by whom decrypted data is used.
It does not reveal that sensitive data has been accessed, validated, or exfiltrated.
PQC protects mathematical secrecy, not operational awareness.
It must therefore be treated as a foundational control, not a complete data security solution.
3. Redefining Data Security Success
Traditional security models equate encryption with protection and define breaches primarily as confirmed data disclosure.
In modern environments, breach impact is determined less by whether data was accessed and more by how long that access went unnoticed. Silent access is often more damaging than noisy failure.
This necessitates a reframing of security objectives.
Detectability must become a first-class goal alongside confidentiality.
Detectability is defined as: The ability to observe, with high confidence and low latency, that sensitive data has been meaningfully accessed or validated.
Detectability augments, rather than replaces, preventive controls such as encryption, access control, and key management.
While post-quantum cryptography applies to both data at rest and data in transit, the detectability model described in this paper operates at the data usage and persistence layers, rather than at the transport encryption layer.
4. Data Containers as a Security Control
To operationalize detectability, this paper introduces a data container model.
Sensitive information is deliberately structured into multiple datasets, each encrypted and realistic, but serving distinct defensive purposes.
This approach is not deception or obscurity. It does not rely on fake systems or fabricated data. Instead, it introduces structured uncertainty designed to force observable attacker commitment.
Attackers cannot determine, through cryptography alone, which dataset represents true operational success. To gain confidence, they must interact with the data, and that interaction creates signal.
5. The Data Container Model
Sensitive data is distributed across multiple containers, each with a defined security role.
Decoy Container: realistic, encrypted data with no authoritative value; meaningful interaction with it serves purely as a detection trigger.
Protected Container: encrypted sensitive data whose access is instrumented and monitored.
Core (Authoritative) Container: the true operational dataset, subject to the strictest access control and key management.
The key characteristic of the model is that attackers cannot declare success without validating data, and validation creates signal.
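As a purely illustrative sketch, the three-container layout might be modeled as follows. The container roles, field names, and helper functions are assumptions for illustration, not part of any specification:

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Container:
    """One encrypted dataset with a defined security role."""
    role: str  # "decoy", "protected", or "core"
    records: list
    # Unique per-container canary identifier, usable later for attribution.
    canary: str = field(default_factory=lambda: secrets.token_hex(8))

def build_layout(authoritative_records, decoy_records):
    """Distribute data across containers with distinct defensive roles.

    All containers look equally realistic once encrypted; only "core"
    is authoritative. An attacker cannot tell the roles apart through
    cryptography alone and must validate the data to confirm success,
    which is exactly the interaction that creates detection signal.
    """
    return {
        "decoy": Container("decoy", list(decoy_records)),
        "protected": Container("protected", list(authoritative_records)),
        "core": Container("core", list(authoritative_records)),
    }
```

Here the "protected" container is assumed to hold monitored copies of sensitive data; a real deployment would define each role, and its encryption keys, according to its own risk model.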
6. Alerting on Confidence, Not Just Access
A critical distinction of this model is how alerting is triggered.
Alerts are not generated by raw access events alone: a file read or a decryption attempt, in isolation, proves little about attacker intent.
Instead, alerts are generated when attacker behaviour demonstrates confidence of success, such as validation of records against other sources, bulk interaction with decoy elements, or reconstruction attempts across multiple data structures.
By focusing on confidence-building behaviour rather than raw access, the model reduces false positives while producing high-confidence indicators of compromise.
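A minimal sketch of this alerting logic, assuming a simple event stream; the event names and threshold are illustrative assumptions, not a prescribed detection rule:

```python
# Raw access events never trigger alerts on their own; only
# confidence-building behaviour does.
RAW_ACCESS = {"file_read", "decrypt_attempt"}
CONFIDENCE_BUILDING = {
    "record_validation",      # checking records against another source
    "cross_container_query",  # comparing containers to find the real one
    "canary_lookup",          # touching embedded decoy identifiers
}

def should_alert(events, threshold=3):
    """Alert once behaviour shows the attacker is confirming success."""
    confidence_score = sum(1 for e in events if e in CONFIDENCE_BUILDING)
    return confidence_score >= threshold
```

Note that any volume of raw access alone never crosses the threshold, which is what keeps the false-positive rate low.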
7. Forced Attacker Commitment Principle
This approach relies on a principle that applies equally in pre- and post-quantum environments:
Attackers must commit time, behaviour, and resources to confirm success, and that commitment is observable.
Even insiders or repeat attackers must interact with the data to determine which container is authoritative, and that interaction is observable.
The objective is not to deceive attackers indefinitely, but to force early declaration of malicious intent.
8. Breach Scenario Comparison
Encryption-Only Model
Sensitive data is accessed and exfiltrated silently; the compromise is discovered only after the data is misused, sold, or publicly disclosed.
Outcome: Business, regulatory, and reputational damage.
Layered Data Container Model
Attacker validation behaviour triggers high-confidence alerts early in the attack lifecycle, before operational certainty is achieved.
Outcome: Early awareness, limited exposure, controlled response.
9. Metrics That Matter
This model introduces measurable outcomes aligned to business risk, including the time from meaningful data access to detection, attacker dwell time, and the ratio of high-confidence alerts to false positives.
Security success becomes quantifiable rather than theoretical. These metrics are directional indicators, not industry-standard benchmarks.
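For example, detection latency and its average across incidents could be tracked with a small helper; the function and field names here are hypothetical:

```python
from datetime import datetime, timedelta

def detection_latency(first_meaningful_access, first_alert):
    """Time between meaningful data access and the first alert."""
    return first_alert - first_meaningful_access

def mean_detection_latency(incidents):
    """Average latency over (first_access, first_alert) incident pairs."""
    total = sum((alert - access for access, alert in incidents), timedelta())
    return total / len(incidents)
```

Tracking this number over time turns "detectability" from a design goal into an operational metric the organization can report against.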
10. Applicability Pre- and Post-Quantum
This approach is cryptography-agnostic: it strengthens security posture today and remains equally effective after the transition to post-quantum algorithms.
Quantum computing increases urgency, but does not create the problem. Silent compromise already exists.
Conclusion
Post-quantum security is not about building unbreakable locks.
It is about ensuring that when locks are tested or even broken, failure is visible, contained, and operationally irrelevant.
Encryption delays compromise. Detectability limits damage. Mature data security requires both.
Appendix
Limitations, Extensions, and Governance Considerations
Limits of Real-Time Detectability
The data container model prioritizes early awareness through observable attacker interaction. However, like all security controls, it operates within defined constraints.
There are legitimate scenarios in which a data container may be accessed or removed without any active connection to an internal network or the Internet. Examples include physical theft of storage media, compromise of offline backups, air-gapped environments, or insider-driven data extraction using removable media.
In such cases, real-time alerting cannot be assumed. This limitation is not unique to the proposed model; it applies equally to traditional encryption, access logging, and network-based detection systems. Encryption therefore remains the primary control for protecting data confidentiality in offline scenarios, and post-quantum cryptography further strengthens this protection against long-term cryptographic threats.
Importantly, the absence of immediate alerts does not negate the value of structured data containers. Even when data is accessed offline, attackers must still validate, interpret, and operationalize the information they obtain. Embedded decoy elements, canary identifiers, and controlled inconsistencies increase the likelihood of delayed but attributable detection when the data is later used, shared, or monetized.
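One hypothetical way to make stolen data attributable after the fact is to stamp record copies with per-container canary identifiers; the function names and the "ref_id" field below are illustrative assumptions:

```python
import secrets

def embed_canary(record, container_id, registry):
    """Return a record copy stamped with a unique canary identifier.

    If the stamped record later surfaces (sold, shared, or reused),
    the canary maps it back to the container it was stolen from,
    enabling delayed but attributable detection.
    """
    canary = secrets.token_hex(12)
    registry[canary] = container_id
    return {**record, "ref_id": canary}

def attribute_leak(leaked_record, registry):
    """Identify the source container of a leaked record, if any."""
    return registry.get(leaked_record.get("ref_id"))
```

The registry stays inside the defender's environment, so attribution works even when the theft itself happened entirely offline.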
Detection Beyond the Moment of Theft
The proposed model explicitly recognizes that detection does not always occur at the moment of access. Instead, detectability is extended into later stages of the attack lifecycle, including validation, interpretation, operationalization, and the eventual use, sharing, or monetization of the stolen data.
By structuring data into multiple containers with differing confidence levels, the model increases the probability that attackers interact with monitored elements before achieving operational certainty. This remains true even when initial access occurs in offline or constrained environments.
Optional Extension: Fallback Detection Channels for High-Value Data
For a narrow class of high-value, high-risk data assets, organizations may introduce an optional outbound-only detection channel as an extension to the data container model.
This capability is not required for baseline security guarantees and should not be treated as a dependency. Instead, it functions as a last-resort detection signal, comparable to a tamper alarm or flight recorder.
Key characteristics of this extension include outbound-only communication, activation restricted to high-confidence compromise conditions, and independence from the baseline security model.
Typical activation conditions may include bulk interaction with decoy elements, reconstruction attempts across multiple data structures, or cryptographic integrity violations.
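The activation logic might be sketched as a simple predicate over observed statistics; the counter names and the bulk threshold are assumptions for illustration:

```python
def should_activate_fallback(stats, bulk_threshold=100):
    """Fire the outbound-only beacon only under high-confidence signs
    of compromise, mirroring the activation conditions above."""
    return (
        # Bulk interaction with decoy elements
        stats.get("decoy_interactions", 0) >= bulk_threshold
        # Reconstruction attempts across multiple data structures
        or stats.get("reconstruction_attempts", 0) > 0
        # Cryptographic integrity violations (tampered ciphertext)
        or stats.get("integrity_violations", 0) > 0
    )
```

Keeping the predicate strict is deliberate: the channel is a last-resort tamper alarm, not a routine telemetry feed.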
This extension is most appropriate for environments where data value is exceptionally high, offline or physical-theft scenarios are credible, and the additional hardware and governance overhead can be justified.
Risk and Governance Considerations
The inclusion of fallback detection mechanisms introduces additional considerations, including hardware trust assumptions, supply-chain complexity, regulatory approvals, and insider tampering risks. As such, these mechanisms must be governed explicitly and deployed selectively.
Crucially, the security model does not assume the availability or success of such channels. Their role is additive rather than foundational. The absence of fallback connectivity does not invalidate the data container approach; its presence merely increases the probability of earlier detection under extreme conditions.
What This Model Does and Does Not Claim
This approach does not claim to guarantee real-time detection in offline or air-gapped scenarios, to prevent data theft outright, or to replace encryption and access control.
It does claim to force observable attacker commitment, to increase the probability of delayed but attributable detection, and to shorten the interval between meaningful compromise and organizational awareness.
Appendix Summary
The data container model acknowledges that no single control can eliminate all risk, particularly in offline or air-gapped scenarios. Instead, it introduces architectural resilience by combining confidentiality, uncertainty, and detectability.
Optional extensions, such as fallback detection channels, further enhance awareness for the most sensitive data assets without becoming prerequisites for security.
In a post-quantum world, the objective is not perfect prevention, but controlled exposure with timely awareness.