Prem API is an end-to-end encrypted, OpenAI-compatible API: chat, audio, and models behind the same familiar request shapes. Use the TypeScript SDK, or run the bundled local proxy and connect with any OpenAI client — you can be up and running in a few lines of code.

Using the Prem API SDK

1. Create an API key

  1. Open the Dashboard and sign in (or register).
  2. Go to the API section.
  3. Create a new API key and copy it somewhere safe.
Store the key in an environment variable (for example PREM_API_KEY) and avoid committing it to source control.
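One common pattern for local development (an assumption, not a Prem requirement) is to keep the key in a .env file that is excluded from version control, or to export it for the current shell session:

```shell
# .env (add this file to .gitignore; never commit it)
PREM_API_KEY=your-api-key

# or export it for the current shell session only
export PREM_API_KEY=your-api-key
```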

2. Install the SDK and make your first call

Install the TypeScript SDK from npm:
npm install @premai/pcci-sdk-ts
Then initialize the client and call the API:
import createRvencClient from "@premai/pcci-sdk-ts";

const client = await createRvencClient({
  apiKey: process.env.PREM_API_KEY!,
});

const stream = await client.chat.completions.create({
  model: "openai/gpt-oss-120b",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
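If you would rather handle the reply as one string, the streamed deltas can be accumulated with a small helper. A minimal sketch: `collectStream` is our own function, not part of the SDK, and it assumes the OpenAI-style chunk shape shown above.

```typescript
// Shape of the streamed chunks (a simplification of the OpenAI format).
type Chunk = { choices: { delta?: { content?: string } }[] };

// Concatenate the delta fragments of a streamed completion into one string.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```

With the client above, `const reply = await collectStream(stream);` gives you the full completion once the stream ends.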
The SDK requires three environment variables. Add them to your .env file:
PREM_API_KEY=your-api-key
PROXY_URL=https://your-proxy-url
ENCLAVE_URL=https://your-enclave-url
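Since a missing variable usually surfaces as a confusing failure later, you can check for all three before constructing the client. A small sketch: `assertEnv` is a hypothetical helper of ours, not part of the SDK.

```typescript
// Throw early, with a clear message, if any required variable is unset.
function assertEnv(names: string[]): void {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Usage, before constructing the client:
// assertEnv(["PREM_API_KEY", "PROXY_URL", "ENCLAVE_URL"]);
```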

3. Run a request

That’s it — run your script and you should see a response printed to the console.
For error codes and HTTP conventions see Errors. For request limits see Rate limits.

Using the OpenAI SDK

You can run a small bundled Express server that exposes OpenAI-compatible routes. Any OpenAI client can then point its baseURL at this server; no SDK changes are required.
npx -p @premai/pcci-sdk-ts pcci-proxy
The proxy requires the same three environment variables. Add them to your .env file:
PREM_API_KEY=your-api-key
PROXY_URL=https://your-proxy-url
ENCLAVE_URL=https://your-enclave-url
With the proxy running, install the OpenAI JavaScript SDK and point it at your local /v1 URL:
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.PREM_API_KEY!,
  baseURL: "http://127.0.0.1:3000/v1",
});

const stream = await client.chat.completions.create({
  model: "openai/gpt-oss-120b",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
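Because the proxy speaks the OpenAI wire format, plain HTTP clients work as well. For example, with curl against the same local address used above (a sketch; the route mirrors OpenAI's /v1/chat/completions):

```shell
curl http://127.0.0.1:3000/v1/chat/completions \
  -H "Authorization: Bearer $PREM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```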

Next steps

API reference

Chat completions and other endpoints in detail.

Recipes

Step-by-step guides for common flows (chat, audio, and more).