The short version: PCCI’s privacy is not based on promises — it’s based on hardware isolation and cryptographic proof. The processors themselves (made by AMD, Intel, and NVIDIA) seal your data so that nothing outside the sealed environment can read it, and you can verify this independently. This page explains how, and where the limits are.

Why “Trust Us” Isn’t Enough

Every AI API provider tells you your data is safe. But traditional security relies on trust:
  • Trust that employees won’t access server memory
  • Trust that the infrastructure provider won’t inspect VMs
  • Trust that the privacy policy reflects the actual implementation
  • Trust that a breach hasn’t already happened
PCCI takes a different approach: don’t trust, verify. Every security claim we make is backed by hardware-enforced isolation and cryptographic attestation that you can check independently.

Trusted Execution Environments — The Foundation

A Trusted Execution Environment (TEE) is a sealed area inside a processor with its own encrypted memory. The key property: nothing outside the TEE can read or modify what’s inside it — not the operating system, not the hypervisor, not the server administrator, not even someone with physical access to the machine.

Sealed Memory

All data inside the TEE is encrypted in hardware. Reading the RAM chip directly yields only scrambled data.

Isolation from Everything

The host OS, the hypervisor, and all other software on the machine are locked out. Only the code inside the TEE can access its memory.

Tamper Detection

Every piece of code loaded into the TEE is measured (fingerprinted). Change even one byte, and the fingerprint changes — making tampering detectable.

Provable Integrity

The TEE can produce a hardware-signed report proving exactly what code is running and that isolation is intact. This is called attestation.
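The “measured” property can be illustrated with a toy hash chain in Python. This is a conceptual sketch only: it mimics the extend-style construction that real launch measurements use, but the component names, register size, and hash choice are invented for the example.

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """Fold the next component's hash into the running measurement,
    mirroring how measured boot chains fingerprints together."""
    return hashlib.sha384(measurement + hashlib.sha384(component).digest()).digest()

def measure(components: list[bytes]) -> bytes:
    """Measure a whole boot chain, starting from a zeroed register."""
    m = b"\x00" * 48  # initial measurement register (48 bytes for SHA-384)
    for c in components:
        m = extend(m, c)
    return m

baseline = measure([b"firmware-v1.2", b"kernel-6.8", b"enclave-app-3.0"])

# Change a single component: the final measurement diverges completely.
tampered = measure([b"firmware-v1.2", b"kernel-6.9", b"enclave-app-3.0"])
assert baseline != tampered
```

Because every later measurement depends on every earlier one, a one-byte change anywhere in the chain produces a different final fingerprint — which is exactly what attestation exposes.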

An Analogy

Imagine sending a sealed, tamper-evident envelope to a bank vault. The bank can’t open the envelope. The vault processes your request inside a sealed chamber, puts the result in a new tamper-evident envelope, and sends it back. You can verify the vault’s seal is genuine by checking the manufacturer’s stamp — and if anyone opened the vault or changed what’s inside, the stamp would be different. That’s essentially what a TEE does — at the silicon level, enforced by physics rather than policy.

CPU Confidential Computing

PCCI enclaves run on Confidential Virtual Machines (CVMs) using:
  • AMD SEV-SNP — Encrypts VM memory with per-VM keys managed by a dedicated security processor inside the AMD chip. The hypervisor cannot read or tamper with guest memory.
  • Intel TDX — Equivalent isolation using Intel’s Trust Domain Extensions. Provides memory encryption, integrity protection, and its own attestation system.
Both are industry-standard technologies used across cloud providers globally. PCCI supports both and can attest to either.
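On Linux guests, a rough way to tell which technology you are running under is to check for the confidential-guest device nodes. A minimal sketch follows; the `/dev/sev-guest` and `/dev/tdx_guest` paths are the upstream kernel driver defaults and an assumption about your kernel and distro — this is not a PCCI API.

```python
import os

# Guest-side device nodes exposed by the Linux confidential-guest drivers.
# Paths are the upstream defaults; a given kernel/distro may differ.
CVM_DEVICES = {
    "/dev/sev-guest": "AMD SEV-SNP",
    "/dev/tdx_guest": "Intel TDX",
}

def detect_cvm(devices=CVM_DEVICES):
    """Return the CVM technology this guest appears to run under, or None."""
    for path, tech in devices.items():
        if os.path.exists(path):
            return tech
    return None
```

On an ordinary (non-confidential) machine this returns `None`; presence of the device node is what lets guest software request attestation reports from the hardware.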

GPU Confidential Computing

AI inference runs on GPUs, so CPU isolation alone isn’t enough. PCCI uses NVIDIA GPUs (Hopper and Blackwell architectures) in confidential compute mode:
  • GPU memory is encrypted and isolated from the host
  • Data moves between CPU and GPU over an encrypted channel
  • The GPU produces its own attestation report, verifiable independently
  • The host driver cannot access GPU memory contents
Your data stays encrypted on both the CPU and the GPU — there is no gap where it’s exposed.

Where the Infrastructure Lives

PCCI operates a hybrid infrastructure — hardware we own and capacity we rent from providers. This is intentional: it gives us geographic flexibility and avoids lock-in. The security is the same in both environments, for three reasons:
  • All machines are unattended. There is no operator access — no SSH, no console, no debug interfaces.
  • The TEE hardware enforces isolation regardless of who owns the physical server.
  • Attestation provides identical cryptographic proof whether the machine is ours or a provider’s.
  • Owned infrastructure (Switzerland) — Based in Switzerland, operating under Swiss data protection law. We control the physical supply chain. Lower physical tampering risk, but attestation proves the enclave state regardless.
  • Rented infrastructure (Europe, US) — Primarily located in Europe, with some deployments in the United States. The provider controls the physical hardware, but TEE guarantees are identical: encrypted memory, measured boot, hardware-signed attestation. The provider cannot access what runs inside the CVM.
For compliance teams: The attestation report is the same regardless of infrastructure ownership. It does not say “trust this machine” — it says “this specific code is running in a hardware-isolated environment with these exact measurements, signed by the hardware manufacturer.” This holds whether the machine is in our Swiss facility, a European provider’s data center, or a US deployment.

Everything Runs Inside CVMs

The sealed environment isn’t just the main enclave. Every service that touches your unencrypted data runs inside Confidential Virtual Machines:
  • PCCI Enclave — decryption, encryption, and request orchestration. In CVM: yes.
  • Model Router — directs requests to the right AI model. In CVM: yes.
  • LLM Inference — runs the AI model on your prompt (all models self-hosted). In CVM: yes, with GPU TEE.
  • Speech-to-Text (Deepgram, Whisper) — audio transcription. In CVM: yes.
The only component outside the CVM is the proxy gateway — and it handles only encrypted data with no access to decryption keys. There is no point in the processing chain where your data exists in plaintext outside a CVM.

Attestation — Verify, Don’t Trust

Attestation is the mechanism that makes everything above provable rather than promissory. An attestation report tells you four things:
  1. What code is running — A cryptographic fingerprint of every component, from firmware to application. Change one byte, and the fingerprint changes.
  2. That the hardware is genuine — The report is signed by keys rooted in the manufacturer (AMD, Intel, or NVIDIA). It cannot be faked by software.
  3. That isolation is active — Debug interfaces are off, security patches are applied, confidential compute is enabled.
  4. That the report is fresh — Your random challenge (nonce) is embedded in the report, proving it was generated in response to your specific request, not replayed from an earlier session.
The verification can run directly in your browser — our attestation libraries compile to WebAssembly, so you don’t need to trust any server to verify the proof.
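The four checks can be sketched end to end. This is a deliberately simplified model: the report format is invented for illustration, and an HMAC under a shared key stands in for the manufacturer’s asymmetric signature (real reports are signed with keys rooted in AMD/Intel/NVIDIA certificate chains).

```python
import hashlib
import hmac
import secrets

# Stand-in for the manufacturer's signing key (illustration only; real
# attestation uses asymmetric signatures, e.g. ECDSA, not a shared secret).
HARDWARE_KEY = b"stand-in-for-manufacturer-root-key"

# The measurement the client expects, published alongside the enclave image.
EXPECTED_MEASUREMENT = hashlib.sha384(b"enclave-image-v3.0").hexdigest()

def sign_report(measurement: str, nonce: bytes) -> dict:
    """What the hardware does: bind measurement + nonce under its key."""
    body = measurement.encode() + nonce
    return {"measurement": measurement, "nonce": nonce,
            "signature": hmac.new(HARDWARE_KEY, body, hashlib.sha384).digest()}

def verify_report(report: dict, expected_nonce: bytes) -> bool:
    """What the client does: check signature, code identity, and freshness."""
    body = report["measurement"].encode() + report["nonce"]
    sig_ok = hmac.compare_digest(                      # checks 2-3: genuine
        report["signature"],                           # hardware, isolation on
        hmac.new(HARDWARE_KEY, body, hashlib.sha384).digest())
    code_ok = report["measurement"] == EXPECTED_MEASUREMENT  # check 1
    fresh = report["nonce"] == expected_nonce                # check 4
    return sig_ok and code_ok and fresh

nonce = secrets.token_bytes(32)                  # client's random challenge
report = sign_report(EXPECTED_MEASUREMENT, nonce)
assert verify_report(report, nonce)
assert not verify_report(report, secrets.token_bytes(32))  # replay fails
```

Note the division of labor: the client never trusts the server’s claims, only the signature it can check against the manufacturer’s key and the nonce it chose itself.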
For non-technical readers: Attestation is like a notarized audit report for a server. Instead of a human auditor, the hardware manufacturer’s silicon does the auditing — and instead of a signature on paper, it’s a cryptographic signature that’s mathematically impossible to forge. You can check the signature yourself, instantly, from your browser.
For the full technical deep-dive on attestation (CPU reports, GPU EAT tokens, certificate chains, verification code examples), see Attestation.

Layers of Defense

PCCI doesn’t rely on a single security mechanism. Multiple independent layers protect your data — so even if one layer were somehow bypassed, the others hold:
  • Transport encryption (TLS 1.3) — encrypts the connection, preventing network eavesdropping.
  • End-to-end encryption — encrypts data from your device to the enclave; even our own servers can’t read your data in transit.
  • Quantum-resistant cryptography — uses algorithms that resist quantum computers, so your data stays safe even if quantum computing matures.
  • CPU isolation (AMD SEV-SNP / Intel TDX) — seals enclave memory at the hardware level; server operators and cloud providers can’t access data during processing.
  • GPU isolation (NVIDIA Confidential Computing) — seals GPU memory during inference; AI model processing is hardware-protected too.
  • Attestation (hardware-signed, Rust/WASM verified) — proves the enclave is running the expected code; you can verify our security claims yourself.
  • Key sovereignty — you hold the master encryption key; even if our entire platform were compromised, your encrypted data remains unreadable.

Common Questions

What if an attacker gains full root access to a server?
The proxy only holds encrypted data — no keys, no plaintext. The enclave runs inside hardware-sealed memory that the host OS cannot access. An attacker with full root access to the server sees only ciphertext.
What if the proxy gateway is compromised?
The proxy sees only encrypted payloads and metadata (API keys, timestamps, request sizes). It has no decryption keys and no mechanism to read your data. A compromised proxy is a billing/auth risk, not a data exposure risk.
What if someone intercepts my traffic?
End-to-end encryption means intercepting traffic gets you encrypted bytes. Breaking the encryption requires defeating both a quantum-resistant algorithm (ML-KEM768) and a classical algorithm (X25519) simultaneously.
What about future quantum computers?
PCCI uses hybrid encryption that combines quantum-resistant and classical algorithms. Even if a future quantum computer breaks the classical part, the quantum-resistant part keeps your data safe. This also protects against “harvest now, decrypt later” attacks — where someone stores your encrypted traffic today, hoping to crack it with a quantum computer in the future.
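The hybrid construction can be illustrated with a short sketch: two shared secrets feed one HKDF-style derivation, so the session key falls only if both key exchanges fall. The function below is a conceptual example using SHA-256; it is not PCCI’s actual key schedule, and the two inputs stand in for real X25519 and ML-KEM-768 shared secrets.

```python
import hashlib
import hmac
import secrets

def combine_shared_secrets(classical: bytes, post_quantum: bytes,
                           info: bytes = b"hybrid-kdf-demo") -> bytes:
    """Derive one 32-byte session key from both shared secrets (HKDF-style).
    An attacker must recover BOTH inputs to learn the output."""
    ikm = classical + post_quantum
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()    # HKDF-Extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest() # Expand, 1 block

# Stand-ins for the two key exchanges (real code would run X25519 and ML-KEM-768):
x25519_secret = secrets.token_bytes(32)
mlkem_secret = secrets.token_bytes(32)
session_key = combine_shared_secrets(x25519_secret, mlkem_secret)
```

Because the derivation mixes both secrets, breaking the classical exchange alone (for example, with a quantum computer) still leaves the attacker without the session key.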
Could a rogue employee access my data?
There is no operator access mechanism in the enclave images — no SSH, no debug ports, no admin backdoor. The enclave code is measured and attested, so any modification is detectable. A rogue employee cannot access what runs inside the TEE.
What if PCCI modified the enclave code?
Attestation catches this. Every component loaded into the enclave is cryptographically fingerprinted. If a single byte changes, the fingerprint changes, and verification fails on your device. You would know before sending any data.
What if the verification code itself has bugs?
The entire attestation verification stack is written in Rust — a language that eliminates entire classes of memory-safety vulnerabilities (buffer overflows, memory corruption) at compile time — and runs as WebAssembly in a sandboxed environment. This rules out the memory-corruption exploits that typically target parsing and verification code, and the sandbox confines whatever remains.
Is rented infrastructure less secure than hardware PCCI owns?
All machines are unattended — no operator access exists. TEE isolation is enforced by the CPU and GPU hardware, not by the data center operator. Attestation proves the enclave state with the same hardware-signed proof whether the machine is in our Swiss facility or a rented data center.

Honest Limitations

No security system is absolute. We believe transparency about limitations builds more trust than pretending they don’t exist.
What PCCI does not protect against:
  • Hardware-level side-channel attacks — Researchers have found theoretical and practical side-channel vulnerabilities in TEE hardware. Manufacturers patch these, and attestation reports include firmware versions, so you can check patch levels and we can enforce minimum requirements. PCCI also selects deployment locations that meet strict security criteria and may apply countermeasures to detect invalid states. This is an industry-wide challenge, not unique to PCCI.
  • Compromised hardware manufacturer — If AMD’s, Intel’s, or NVIDIA’s root signing keys were compromised, attestation guarantees would weaken. This is the shared root of trust for the entire confidential computing industry.
  • Model behavior — PCCI protects the privacy of your data during processing. It does not control what the AI model itself does with context during a single inference pass (e.g., model memorization is a model-level concern, not an infrastructure concern).
  • Metadata — Some metadata is visible to the proxy: when you make requests, how large the payloads are, which API key you use, and rate limit counters. The content of your requests and responses is always encrypted.
For the full cryptographic specification, see the Encryption reference. For the attestation deep-dive (reports, tokens, verification code), continue to Attestation.