The Era of Confidential Computing on Cloud
Security Whitepaper
Originally published: January 20, 2023. Substantially revised: April 2026.
Author: Sam Sumit Kaul, MSc CyberSec Oxford University (CISSP, TOGAF) — Security, Multicloud Architect and Engineer
CISSP Domains: Domain 1 — Security and Risk Management; Domain 2 — Asset Security; Domain 3 — Security Architecture and Engineering
CPE Credit Guidance: Authors of published professional articles may claim 1 Group A CPE upon verified publication. Readers undertaking independent study may self-report Group B CPE based on documented reading and research time in accordance with (ISC)² CPE Handbook guidelines.
Abstract
Trusted computing, once the preserve of the enterprise space, is now ready for primetime on the cloud, just as cloud computing continues to make inroads into territory once occupied by dedicated niche datacentres. As new trust frameworks emerge, the cloud is becoming the first line of defence. Trusted computing (known in the cloud as confidential computing) provides the highest level of security for workloads that require complete isolation, verification, protection, and assurance. This whitepaper provides both a grounding in confidential computing and a comparison of the offerings from the three major cloud providers: AWS, Azure, and GCP. Its audience is trusted computing infrastructure architects and engineers, as well as technologists advising on or planning cloud adoption for trusted computing workloads.
Introduction
Cloud computing has gradually become ubiquitous in modern business operations, providing cost-effective and scalable solutions for data storage and processing. As more sensitive information is stored and processed in the cloud, the need for enhanced security and privacy cannot be ignored, driven not only by business case and workload sensitivity but also by regulations such as the GDPR in the EU, the CCPA in California, and equivalent frameworks in other jurisdictions.
Confidential computing on the cloud addresses use cases ranging from protecting intellectual property and collaborating securely, to protecting against data-hungry machine learning algorithms and providing assurance in environments where the cloud provider itself offers competing business services. Traditional cloud security measures — access control and firewalls — are no longer sufficient in isolation to protect sensitive data from cyber threats.
Where a workload is security-sensitive, as in healthcare, national security, financial services, or retail environments where competing actors present real security challenges, confidential computing offers trusted computing with the full benefits of cloud scale.
While end-to-end encryption (E2E) became the dominant security narrative during the remote working expansion of the pandemic years, E2E falls short in a critical respect: it protects data in transit between endpoints (and, combined with storage encryption, at rest), but it does not protect data while it is being actively used in memory.
Confidential cloud computing is a Trusted Execution Environment (TEE)-based approach to data storage and processing that addresses this gap, providing end-to-end protection, including data in active memory, through hardware-enforced isolation. It can also support secure multi-party computation, in which multiple parties jointly compute over sensitive data without revealing their individual inputs, enabling the sharing of sensitive information without compromising security or privacy. Such workloads include algorithms running on medical data, federated workloads on privacy-sensitive datasets, and machine learning on raw data. Confidential computing shrinks both the attack surface and the trust boundary.
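The multi-party pattern described above can be illustrated with additive secret sharing, the simplest building block of secure multi-party computation. This is a toy sketch: the hospital scenario, input values, ring size, and party count are illustrative assumptions, not from any provider's SDK.

```python
import secrets

MOD = 2**64  # all arithmetic is done modulo a fixed ring size

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares; any n-1 shares reveal nothing."""
    parts = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % MOD)
    return parts

# Hypothetical scenario: three hospitals jointly compute a total patient
# count without any party revealing its individual figure.
inputs = [1200, 850, 430]
all_shares = [share(v, 3) for v in inputs]

# Each party locally sums the shares it received (one per input), then
# the public partial sums are combined into the final total.
partials = [sum(column) % MOD for column in zip(*all_shares)]
total = sum(partials) % MOD
assert total == sum(inputs)
```

Only the masked partial sums are ever exchanged; no single party's input can be reconstructed from fewer than all of its shares.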
This whitepaper compares the confidential cloud computing offerings from AWS (Nitro), Microsoft Azure Confidential Computing, and Google Cloud Confidential Computing. It assumes knowledge of trusted computing and its building blocks and is not a deep dive into the Intel Safer Computing initiative, virtualisation, or E2E encryption. For brevity, ARM and IBM cloud computing are not discussed except when relevant to hardware context.
Technical Underpinnings of Confidential Cloud Computing
Threats such as interception of, and tampering with, valuable information in off-chip memory are no longer novel. Spectre and Meltdown, hardware bus probing, and DRAM-targeting attacks such as Rowhammer [1] have kept security architects focused on the hardware layer. While software-based defence mechanisms exist, the defence-in-depth principle holds that securing the software requires securing the hardware beneath it.
The elements of trusted computing span both software and hardware. A trusted system is one that behaves predictably, even under stress or attack, according to well-defined trusted computing properties. Securing cloud data in use employs hardware-enabled features to isolate and process encrypted data in memory, reducing exposure to compromise from both co-located workloads and the underlying system and platform.
In the application, business, and personal computing space, Intel SGX has been the most widely deployed enclave solution, providing a secure enclave that protects data and applications while in use; its historically small enclave size, however, has limited its suitability for cloud workloads. Secure enclaves are common in most smartphones for processing identity and payment data, and both AMD and ARM provide their own Trusted Execution Environments (TEEs). IBM Z (s390x) provides large enclaves capable of holding both the application and the virtual machine it resides in. AMD EPYC processors with SEV-SNP (Secure Encrypted Virtualisation with Secure Nested Paging) provide a hardware-based security feature that not only encrypts system memory and VMs but also isolates VM memory from the hypervisor [2]. AMD EPYC's memory enclave tops out at 896 gigabytes, while the latest Intel Xeon processors with TDX (Trust Domain Extensions, Intel's successor TEE architecture for the cloud) can address a terabyte. AWS Nitro Enclaves are similarly free of the small enclave-size ceilings of early SGX implementations.
As security violations continue to rise and supply chain integrity becomes a primary concern, trusted computing — built on hardware root of trust — is ever more critical for secure operation of cloud computing stacks. Trusted Computing Base (TCB) components encompass system hardware, firmware, cryptographic functions, and software components that enable measured and secure boot. Protection of data in use is accomplished through computation in a hardware-based TEE. With data at rest secured by encryption, data in transit secured by TLS, and data in active memory secured by the TCB, each stage can now be independently verified and attested.
The verification process provides assurance through remote attestation for certificate and enclave validation, TPM for secure boot and boot integrity measurement, root of trust, and trusted launch, as well as HSM for key generation where required. This stack not only prevents unauthorised access but enables secure collaboration while meeting regulatory requirements and provides blind processing on private data.
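The final step of the remote attestation flow described above, appraising reported measurements against a known-good policy, can be sketched as follows. This is an illustrative model only: the PCR names and digests are placeholders, and a real verifier first validates the quote's signature against the TPM's EK/AIK certificate chain.

```python
# Hypothetical appraisal step: after a quote's signature has been verified
# against the TPM's key chain, the verifier checks that the reported
# measurements match known-good values (placeholder digests below).
EXPECTED = {
    "pcr0": "a" * 64,  # firmware measurement (placeholder)
    "pcr4": "b" * 64,  # bootloader measurement (placeholder)
}

def appraise(quote: dict[str, str]) -> bool:
    """Accept the platform only if every policy PCR matches the report."""
    return all(quote.get(name) == digest for name, digest in EXPECTED.items())

reported = {"pcr0": "a" * 64, "pcr4": "b" * 64, "pcr7": "c" * 64}
assert appraise(reported)        # measurements match the policy
reported["pcr4"] = "d" * 64      # a tampered bootloader measurement
assert not appraise(reported)    # the assurance claim is rejected
```

Extra PCRs outside the policy are ignored; any mismatch on a policy PCR rejects the platform, which is the fail-closed behaviour attestation requires.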
Assurance in a Nutshell
The table below maps the STRIDE threat model elements to confidential computing protections and CIA triad guidance.
| Threat model (STRIDE) | Confidential computing protection | Guide (Confidentiality, Integrity, Availability) |
| --- | --- | --- |
| Spoofing | TEE, attestation, TPM extend, hardware identity | Impersonation (not limited to authentication) |
| Tampering | SRTM and DRTM with TPM | Integrity violation |
| Repudiation | PKI and attestation | Non-repudiation ("I didn't do it") |
| Information Disclosure | Memory encryption, TEE isolation | Confidentiality |
| Denial of Service | Limited protection | Availability |
| Elevation of Privilege | Measured boot (elevation controls) | Authorisation (platform level) |
To provide assurance of what software is running on a given system, a boot process monitor must be available and an anchor established from a root of trust (RTM). Once the system is in a defined state, the measurement must be reported either locally or remotely for an assurance claim. A TPM fulfils this function through its Platform Configuration Registers (PCRs), which store hash measurements of entities running on the platform. PCRs can only be extended at runtime and are initialised at boot. The RTM, together with an Endorsement Key (EK) — a PKI key backed by an EK certificate issued by the manufacturer — is used with the Root of Trust for Reporting (RTR) to communicate with an external party in a secured way. The Root of Trust for Storage (RTS) allows access to the platform only when it is in a defined state, so only the TPM to which the key pair belongs can decrypt the data. Cryptographic keys can be stored in tamper-resistant modules such as TPM, HSM, or smart card. Communication and memory buses of the platform are physically and cryptographically protected against eavesdropping and tampering.
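The extend-only behaviour of PCRs can be sketched in a few lines. This is an illustrative model of the SHA-256 PCR bank semantics (new value = hash of old value concatenated with the measurement digest), not TPM library code; the component names are hypothetical.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: new value = SHA-256(old PCR value || SHA-256(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs in the SHA-256 bank are initialised to 32 zero bytes at boot.
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)

# The final value commits to the entire measured boot sequence: extending
# the same components in a different order yields a different PCR value,
# and no sequence of extends can return the register to a chosen state.
```

This hash chaining is why a PCR can only be extended, never set: matching a known-good value requires replaying the exact measurement sequence.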
Confidential Computing Across AWS, Azure and GCP
AWS Nitro
AWS Nitro is a proprietary hypervisor developed by Amazon Web Services that provides a secure environment for running virtual machines. It is designed to provide enhanced security for AWS instances by isolating them from the underlying host using a purpose-built Nitro security chip as the hardware root of trust. AWS Nitro provides a secure boot process that verifies the integrity of the host and instance at start-up, and AWS Nitro Enclaves provide isolated compute environments within EC2 instances for processing highly sensitive data.
Azure Confidential Computing
Azure Confidential Computing uses hardware-based Trusted Execution Environments (TEEs) to protect data in use and ensure that only authorised parties can access it. Azure's confidential VM portfolio has evolved significantly since its initial offering. Intel SGX-based confidential computing remains available on DCsv2 and DCsv3 series VMs for legacy workloads; however, Intel has deprecated SGX on client platforms, and Azure's primary path for new confidential VM deployments is now AMD EPYC processors with SEV-SNP, alongside Intel Xeon with TDX (Trust Domain Extensions) — Intel's successor cloud TEE architecture that provides VM-level isolation without the enclave size constraints of SGX. Microsoft Azure Attestation (MAA) provides remote attestation across all three hardware roots of trust. Azure's confidential computing stack also includes Azure SQL queries in TEE and confidential containers via Azure Kubernetes Service.
Google Cloud Confidential Computing
Google Cloud Confidential Computing allows organisations to protect sensitive data stored and processed in the cloud using a combination of hardware and software security features, including secure enclaves, that isolate sensitive data and ensure it is accessible only to authorised parties. Google Cloud Confidential Computing (GCCC) uses AMD EPYC processors with AMD Secure Encrypted Virtualisation (SEV) hardware-based memory encryption. Google has also released open-source homomorphic encryption tooling, allowing computations to be performed on encrypted data without decryption. Google additionally offers Confidential GKE Nodes, providing hardware-level isolation for container workloads.
Comparison
| Stack | AWS | Azure | GCP |
| --- | --- | --- | --- |
| Solution | AWS Nitro | Azure Confidential Computing | Google Cloud Confidential Computing |
| Basis | Purpose-built (Nitro chip) | AMD EPYC (SEV-SNP) / Intel TDX | AMD EPYC |
| Offering | Confidential EC2 | Confidential VM | Shielded and Confidential VM |
| Memory encryption | Total memory encryption by default on Graviton-based instances | Full memory isolation and encryption, RTM | Secure encrypted virtualisation; memory encrypted with keys accessible only to the secure processor and never outside the confidential boundary; RTM and isolation |
| Container security | | | Binary Authorisation (images must be signed by a trusted CA) |
| Enclaves | AWS Nitro Enclaves | Azure confidential computing stack with enclaves based on Intel Xeon TDX and AMD EPYC SEV-SNP | Confidential GKE Nodes; enclaves based on AMD EPYC with Shielded VM using TPM and RTM with verified integrity checks |
| Certificate management | AWS Certificate Manager | Microsoft Entra Certificate Services | Certificate Authority Service |
| HSM | AWS CloudHSM | Azure Dedicated HSM | Cloud HSM |
| Zero trust | | Microsoft Entra ID Conditional Access | BeyondCorp Enterprise |
| FIPS validation | FIPS 140-2 | FIPS 140-2/3 | FIPS 140-2 |
Comparison: Key Differences
All three providers — AWS Nitro, Azure Confidential Computing, and Google Cloud Confidential Computing — provide advanced security features that allow organisations to protect sensitive data stored and processed in the cloud. Key differences include the following.
AWS Nitro is based on a proprietary hypervisor and purpose-built Nitro security chip, while Azure and GCP use commercially available AMD EPYC and Intel Xeon processor-based TEEs with publicly documented architectures. AWS Nitro provides enhanced security for AWS instances by isolating them from the underlying host at the hypervisor level, while Azure and Google Cloud use hardware enclaves to isolate sensitive data at the VM or container level.
All three providers offer secure boot processes that verify the integrity of the host and instance at start-up, along with memory encryption and isolation. Google provides open-source homomorphic encryption tooling; AWS and Azure offer toolkits that can achieve comparable outcomes. Binary authorisation, while not exclusive to confidential computing, is a native feature in Google Cloud and can be implemented in both AWS and Azure with additional configuration.
A key difference between the three is the hardware root of trust. AWS uses a purpose-built Nitro security chip. Google Cloud Confidential Computing uses a hardware root of trust through either the Titan chip, lockable firmware, or vTPM with microcontroller verification. Azure's hardware root of trust for cloud confidential VMs is provided by AMD EPYC SEV-SNP and Intel TDX processor hardware, backed by Microsoft Azure Attestation (MAA) for remote verification.
In terms of industry compliance, all three platforms provide features that can help customers meet encryption, secure storage, and access control requirements across SOC, PCI-DSS, ISO 27001, FIPS, and HIPAA frameworks.
Analysis
While all three cloud computing providers deliver confidential computing, AWS operates a proprietary technology that is not based on either AMD EPYC or Intel SGX/TDX enclaves, though it uses similar principles of RTM and TPM via the Nitro security chip and Nitro cards. Azure and GCP utilise proven commercial chipsets combined with their own firmware and attestation infrastructure.
One distinction that stands out between AWS and the other two is Consortium membership. Both GCP and Azure are members of the Confidential Computing Consortium (CCC), which promotes openness, collaboration, and shared vulnerability research. AWS joined the CCC as a general member in 2023, though its Nitro-based proprietary architecture remains outside the AMD/Intel TEE ecosystem that forms the basis of most CCC technical workstreams. While the internals of Intel SGX/TDX and AMD EPYC SEV-SNP are publicly documented, some elements of AWS Nitro's implementation are not. CCC participation enables knowledge sharing and earlier dissemination of vulnerability disclosures — a meaningful operational advantage given the pace of microarchitectural side-channel research.
Confidential cloud computing is relatively new as a cloud technology and remains susceptible to new research disclosures in microarchitectural side channels. Research in this area has accelerated significantly, not only from bug-bounty hunters and cloud providers, but also from chip designers and state actors. Supply chain security is a particular concern: the entire confidential computing stack relies on measurements established at day zero, and supply chains scattered across the globe present unique challenges. Two incidents illustrate the risk. In the SolarWinds/SUNBURST compromise of 2020, a trusted software update mechanism was subverted to distribute a backdoor to approximately 18,000 organisations, including US federal agencies. In the XZ Utils backdoor (CVE-2024-3094, 2024), a patient multi-year social engineering campaign embedded malicious code in a widely deployed open-source compression library. Both demonstrate how supply chain integrity failures can undermine the measurement foundations on which confidential computing depends [3a, 3b]. Hardware-anchored attestation and measurement at day zero are the primary technical defences against this class of attack.
Post-Quantum Cryptography: An Emerging Imperative
Since this whitepaper was first published, the post-quantum threat to confidential computing's cryptographic foundations has moved from theoretical to imminent. Confidential computing's attestation, key exchange, and certificate chains currently depend on public-key cryptography — RSA, ECDSA, and ECDH — which is vulnerable to sufficiently capable quantum computers running Shor's algorithm. In August 2024, NIST finalised its first post-quantum cryptography (PQC) standards: FIPS 203 (ML-KEM, for key encapsulation, based on CRYSTALS-Kyber), FIPS 204 (ML-DSA, for digital signatures, based on CRYSTALS-Dilithium), and FIPS 205 (SLH-DSA, a hash-based signature alternative). All three major cloud providers are actively implementing PQC hybrid schemes: Google has deployed X25519/ML-KEM hybrid key exchange in Chrome and GCP TLS; AWS KMS supports post-quantum hybrid TLS; and Microsoft is incorporating FIPS 203/204 into Azure services. Architects deploying confidential computing solutions today should verify that their attestation and key management chains have a documented migration path to PQC-compliant algorithms. Any solution with no PQC roadmap carries unquantified long-term cryptographic risk — particularly relevant for data classified at sensitivity levels requiring multi-decade confidentiality.
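The hybrid pattern the providers are deploying can be sketched with a minimal HKDF: both shared secrets feed a single key derivation, so the session key stays secret as long as either primitive holds. This is a sketch under stated assumptions; the random byte strings below are placeholders standing in for real X25519 and ML-KEM-768 outputs, and the `info` label is hypothetical.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869): extract, then expand to `length` bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets: in a real handshake these come from an
# X25519 exchange and an ML-KEM-768 encapsulation respectively.
classical_ss = secrets.token_bytes(32)
pq_ss = secrets.token_bytes(32)

# Concatenating both secrets before derivation means an attacker must
# break BOTH primitives to recover the session key.
session_key = hkdf_sha256(classical_ss + pq_ss, salt=bytes(32), info=b"hybrid-hs")
```

The design choice to concatenate rather than choose one secret is what makes the scheme "hybrid": a future quantum break of X25519 leaves the ML-KEM contribution intact, and a flaw found in ML-KEM leaves the classical contribution intact.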
Advancements in quantum computing present a further challenge broadly, though Google, AWS, and Microsoft all now operate quantum computing research programmes and are positioned to trial PQC integration across their confidential computing stacks. From a consumer standpoint, running a trusted workload in one's own infrastructure versus running it on a cloud-based confidential computing stack carries the same comparative pros and cons as cloud computing generally. Organisations not yet on a confidential computing stack risk a competitive and security disadvantage as peers adopt it.
Conclusion
AWS Nitro, Azure Confidential Computing, and Google Cloud Confidential Computing are the leading confidential cloud computing offerings, each providing advanced security features that allow organisations to protect sensitive data stored and processed in the cloud. While all three deliver similar core capabilities — hardware-rooted trust, memory encryption, secure boot, attestation, and TEE isolation — key differences exist in approach, cost, complexity, multi-cloud interoperability, and consortium participation.
Organisations should evaluate options against their specific requirements: complexity and operational overhead, total cost of ownership, availability of specialised resources, environment footprint, and regulatory obligations. Threat actors targeting confidential computing workloads are generally highly motivated, well-funded, and sophisticated — including state actors. It is therefore essential that confidential computing solutions remain invested in new research, engage with vulnerability disclosures through consortiums such as the CCC, and incorporate post-quantum migration planning into their architectural roadmap. All three cloud providers are well positioned in their research and development programmes in this space.
Further Reading
Glossary of Terms
Confidential Computing: The use of secure enclaves or Trusted Execution Environments (TEEs) to protect sensitive data while it is being processed. TEEs are isolated areas of a computer or server where data is protected from access by unauthorised parties, including the cloud provider itself.
Threat Model: The set of assumptions and expectations about the possible attacks and attackers that a system must defend against. A threat model for confidential computing in the cloud considers potential risks to the confidentiality, integrity, and availability of sensitive data being processed in the cloud — including unauthorised data access, data breaches, denial of service attacks, malicious insiders, side-channel attacks, tampering, supply chain attacks, and privilege escalation. A TEE is not a silver bullet: a comprehensive security strategy incorporating encryption, network security, and access controls must complement the TEE to address all possible threat vectors.
Trusted Execution Environment (TEE): A secure area of a processor that ensures sensitive data is processed in a trusted, isolated environment. TEEs use a combination of hardware and software security measures to protect sensitive data from unauthorised access, including from the host operating system, hypervisor, and other co-located workloads.
Memory Protection: The use of hardware and software security measures to protect the memory of a computer or device from unauthorised access, including memory encryption and secure boot.
Attestation: The process of verifying the integrity and authenticity of a device or system, typically using a trusted third-party service. Remote attestation provides assurance that the expected software stack is running on the expected hardware, and that neither has been tampered with. Intel SGX attestation originally used the Intel Attestation Service (IAS), since superseded by DCAP-based datacentre attestation; Google Cloud uses a remote attestation service backed by the Titan chip; Azure uses Microsoft Azure Attestation (MAA) across AMD SEV-SNP and Intel TDX hardware roots of trust.
Post-Quantum Cryptography (PQC): Cryptographic algorithms designed to be secure against attacks by quantum computers. NIST finalised its first PQC standards in August 2024 — FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA) — replacing the public-key algorithms (RSA, ECDSA, ECDH) on which current attestation and key exchange chains depend.
Submitted for CISSP Continuing Professional Education credit. Authors of published professional articles may claim 1 Group A CPE upon verified publication; readers may self-report Group B hours for documented study time, per the (ISC)² CPE Handbook. Originally published January 2023; substantially revised April 2026 to reflect current cloud provider architectures, Intel TDX, Microsoft Entra ID, NIST post-quantum cryptography standards (FIPS 203/204/205), and updated supply chain attack references. Views are the author's own and do not represent any employer or client organisation.