The short version: PCCI’s privacy is not based on promises — it’s based on hardware isolation and cryptographic proof. The processors themselves (made by AMD, Intel, NVIDIA) seal your data so that nobody can access it, and you can verify this independently. This page explains how, and where the limits are.
Why “Trust Us” Isn’t Enough
Every AI API provider tells you your data is safe. But traditional security relies on trust:

- Trust that employees won’t access server memory
- Trust that the infrastructure provider won’t inspect VMs
- Trust that the privacy policy reflects the actual implementation
- Trust that a breach hasn’t already happened
Trusted Execution Environments — The Foundation
A Trusted Execution Environment (TEE) is a sealed area inside a processor with its own encrypted memory. The key property: nothing outside the TEE can read or modify what’s inside it — not the operating system, not the hypervisor, not the server administrator, not even someone with physical access to the machine.

Sealed Memory
All data inside the TEE is encrypted in hardware. Reading the RAM chip directly yields only scrambled data.
Isolation from Everything
The host OS, the hypervisor, and all other software on the machine are locked out. Only the code inside the TEE can access its memory.
Tamper Detection
Every piece of code loaded into the TEE is measured (fingerprinted). Change even one byte, and the fingerprint changes — making tampering detectable.
Provable Integrity
The TEE can produce a hardware-signed report proving exactly what code is running and that isolation is intact. This is called attestation.
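The measurement idea behind tamper detection and attestation can be sketched in a few lines of Python. This is an illustration, not PCCI’s implementation: real TEEs accumulate hashes in hardware measurement registers (commonly SHA-384), and the component names here are made up.

```python
import hashlib

def measure(components: list[bytes]) -> str:
    """Illustrative 'measured boot': extend a running digest with each
    component, the way TEE measurement registers accumulate hashes."""
    digest = b"\x00" * 48  # start from a known zeroed register
    for blob in components:
        digest = hashlib.sha384(digest + hashlib.sha384(blob).digest()).digest()
    return digest.hex()

# Hypothetical enclave components, for illustration only
firmware = b"firmware image v1.0"
kernel = b"enclave kernel"
app = b"inference service"

baseline = measure([firmware, kernel, app])
# Flip a single byte in the application...
tampered = measure([firmware, kernel, app[:-1] + b"!"])

assert baseline != tampered  # any one-byte change yields a different fingerprint
```

Because each step folds the previous digest into the next, changing any component — or even reordering them — produces a completely different final fingerprint.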
An Analogy
Imagine sending a sealed, tamper-evident envelope to a bank vault. The bank can’t open the envelope. The vault processes your request inside a sealed chamber, puts the result in a new tamper-evident envelope, and sends it back. You can verify the vault’s seal is genuine by checking the manufacturer’s stamp — and if anyone opened the vault or changed what’s inside, the stamp would be different. That’s essentially what a TEE does — at the silicon level, enforced by physics rather than policy.

CPU Confidential Computing
PCCI enclaves run on Confidential Virtual Machines (CVMs) using:

- AMD SEV-SNP — Encrypts VM memory with per-VM keys managed by a dedicated security processor inside the AMD chip. The hypervisor cannot read or tamper with guest memory.
- Intel TDX — Equivalent isolation using Intel’s Trust Domain Extensions. Provides memory encryption, integrity protection, and its own attestation system.
GPU Confidential Computing
AI inference runs on GPUs, so CPU isolation alone isn’t enough. PCCI uses NVIDIA GPUs (Hopper and Blackwell architectures) in confidential compute mode:

- GPU memory is encrypted and isolated from the host
- Data moves between CPU and GPU over an encrypted channel
- The GPU produces its own attestation report, verifiable independently
- The host driver cannot access GPU memory contents
Where the Infrastructure Lives
PCCI operates a hybrid infrastructure — hardware we own and capacity we rent from providers. This is intentional: it gives us geographic flexibility and avoids lock-in. The security is the same in both environments. Here’s why: all machines are unattended. There is no operator access — no SSH, no console, no debug interfaces. The TEE hardware enforces isolation regardless of who owns the physical server. Attestation provides identical cryptographic proof whether the machine is ours or a provider’s.

- Owned infrastructure (Switzerland) — Based in Switzerland, operating under Swiss data protection law. We control the physical supply chain. Lower physical tampering risk, but attestation proves the enclave state regardless.
- Rented infrastructure (Europe, US) — Primarily located in Europe, with some deployments in the United States. The provider controls the physical hardware, but TEE guarantees are identical: encrypted memory, measured boot, hardware-signed attestation. The provider cannot access what runs inside the CVM.
For compliance teams: The attestation report is the same regardless of infrastructure ownership. It does not say “trust this machine” — it says “this specific code is running in a hardware-isolated environment with these exact measurements, signed by the hardware manufacturer.” This holds whether the machine is in our Swiss facility, a European provider’s data center, or a US deployment.
Everything Runs Inside CVMs
The sealed environment isn’t just the main enclave. Every service that touches your unencrypted data runs inside Confidential Virtual Machines:

| Service | Purpose | In CVM? |
|---|---|---|
| PCCI Enclave | Decryption, encryption, request orchestration | Yes |
| Model Router | Directs requests to the right AI model | Yes |
| LLM Inference | Runs the AI model on your prompt (all models self-hosted) | Yes (GPU TEE) |
| Speech-to-Text (Deepgram, Whisper) | Audio transcription | Yes |
Attestation — Verify, Don’t Trust
Attestation is the mechanism that makes everything above provable rather than promissory. It works like this:

What an attestation report tells you:

- What code is running — A cryptographic fingerprint of every component, from firmware to application. Change one byte, and the fingerprint changes.
- That the hardware is genuine — The report is signed by keys rooted in the manufacturer (AMD, Intel, or NVIDIA). It cannot be faked by software.
- That isolation is active — Debug interfaces are off, security patches are applied, confidential compute is enabled.
- That the report is fresh — Your random challenge (nonce) is embedded in the report, proving it was generated in response to your specific request, not replayed from an earlier session.
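The freshness check is the easiest of these to sketch. The code below is a toy model, not PCCI’s verifier: a real check validates a signature chain rooted in the manufacturer (AMD, Intel, or NVIDIA), and the report structure here is invented. It only shows why embedding your own random nonce defeats replay.

```python
import hashlib
import secrets

def request_attestation(nonce: bytes) -> dict:
    """Stand-in for the enclave: a real TEE embeds the caller's nonce in a
    hardware-signed report. Here the report is faked as a plain dict."""
    return {
        "measurement": hashlib.sha384(b"enclave code").hexdigest(),
        "nonce": nonce.hex(),
    }

def verify_freshness(report: dict, expected_nonce: bytes) -> bool:
    # A replayed report carries a nonce from an earlier session and fails here.
    return report["nonce"] == expected_nonce.hex()

nonce = secrets.token_bytes(32)   # fresh random challenge for this session
report = request_attestation(nonce)

assert verify_freshness(report, nonce)                        # fresh report passes
assert not verify_freshness(report, secrets.token_bytes(32))  # replayed report fails
```

Since the nonce is unpredictable and chosen per request, an attacker cannot pre-compute or reuse an old report that happens to contain it.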
Layers of Defense
PCCI doesn’t rely on a single security mechanism. Multiple independent layers protect your data — so even if one layer were somehow bypassed, the others hold:

| Layer | What It Does | Plain-Language Impact |
|---|---|---|
| Transport encryption (TLS 1.3) | Encrypts the connection | Prevents network eavesdropping |
| End-to-end encryption | Encrypts data from your device to the enclave | Even our own servers can’t read your data in transit |
| Quantum-resistant cryptography | Uses algorithms that resist quantum computers | Your data stays safe even if quantum computing matures |
| CPU isolation (AMD SEV-SNP / Intel TDX) | Seals enclave memory at the hardware level | Server operators and cloud providers can’t access processing data |
| GPU isolation (NVIDIA Confidential Computing) | Seals GPU memory during inference | AI model processing is hardware-protected too |
| Attestation (hardware-signed, Rust/WASM verified) | Proves the enclave is running expected code | You can verify our security claims yourself |
| Key sovereignty | You hold the master encryption key | Even if our entire platform were compromised, your encrypted data remains unreadable |
Common Questions
What if someone hacks your servers?
The proxy only holds encrypted data — no keys, no plaintext. The enclave runs inside hardware-sealed memory that the host OS cannot access. An attacker with full root access to the server sees only ciphertext.
What if the proxy is compromised?
The proxy sees only encrypted payloads and metadata (API keys, timestamps, request sizes). It has no decryption keys and no mechanism to read your data. A compromised proxy is a billing/auth risk, not a data exposure risk.
What if someone intercepts the network traffic?
End-to-end encryption means intercepting traffic gets you encrypted bytes. Breaking the encryption requires defeating both a quantum-resistant algorithm (ML-KEM768) and a classical algorithm (X25519) simultaneously.
What about quantum computers?
PCCI uses hybrid encryption that combines quantum-resistant and classical algorithms. Even if a future quantum computer breaks the classical part, the quantum-resistant part keeps your data safe. This also protects against “harvest now, decrypt later” attacks — where someone stores your encrypted traffic today, hoping to crack it with a quantum computer in the future.
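The hybrid construction can be sketched with the standard library. This is a conceptual model only: Python’s stdlib implements neither X25519 nor ML-KEM768, so the two shared secrets are simulated as random bytes, and the `pcci-hybrid` salt and `session v1` label are invented. What the sketch does show accurately is the combiner pattern — both secrets feed a single key-derivation step, so recovering the session key requires breaking both algorithms.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) over the stdlib: extract, then expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two shared secrets a real handshake would produce:
classical_secret = secrets.token_bytes(32)  # X25519 ECDH output (simulated)
pq_secret = secrets.token_bytes(32)         # ML-KEM768 decapsulation output (simulated)

# The combiner: both secrets are concatenated into one KDF input, so an
# attacker must recover BOTH to derive the session key.
session_key = hkdf_sha256(classical_secret + pq_secret,
                          salt=b"pcci-hybrid", info=b"session v1")
assert len(session_key) == 32
```

Concatenating the secrets before the KDF means an attacker who breaks only the classical half (say, with a quantum computer) still faces a KDF input containing 32 unknown post-quantum bytes.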
What if a PCCI employee goes rogue?
There is no operator access mechanism in the enclave images — no SSH, no debug ports, no admin backdoor. The enclave code is measured and attested, so any modification is detectable. A rogue employee cannot access what runs inside the TEE.
What if someone tampers with the enclave code?
Attestation catches this. Every component loaded into the enclave is cryptographically fingerprinted. If a single byte changes, the fingerprint changes, and verification fails on your device. You would know before sending any data.
What if there's a bug in the verification code itself?
The entire attestation verification stack is written in Rust — a language whose compiler rules out entire classes of security vulnerabilities (buffer overflows, use-after-free, other memory corruption) in safe code. It runs as WebAssembly in a sandboxed environment. This doesn’t make bugs impossible, but it eliminates the memory-corruption exploits that dominate real-world attacks, and the sandbox contains whatever remains.
What if the cloud provider or data center operator is malicious?
All machines are unattended — no operator access exists. TEE isolation is enforced by the CPU and GPU hardware, not by the data center operator. Attestation proves the enclave state with the same hardware-signed proof whether the machine is in our Swiss facility or a rented data center.
Honest Limitations
No security system is absolute. We believe transparency about limitations builds more trust than pretending they don’t exist.

For the full cryptographic specification, see the Encryption reference. For the attestation deep-dive (reports, tokens, verification code), continue to Attestation.

