Documentation Index
Fetch the complete documentation index at: https://docs.prem.io/llms.txt
Use this file to discover all available pages before exploring further.
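As a starting point, the index can be fetched and scanned for page links. A minimal sketch, assuming the file follows the common llms.txt convention of markdown-style links (`- [Title](url): description`); the `parseIndex` helper is illustrative, not part of the SDK:

```typescript
// Extract [title](url) pairs from an llms.txt body.
// Assumes markdown-style links, the usual llms.txt convention.
function parseIndex(body: string): { title: string; url: string }[] {
  const pages: { title: string; url: string }[] = [];
  const linkPattern = /\[([^\]]+)\]\((https?:\/\/[^)\s]+)\)/g;
  let match: RegExpExecArray | null;
  while ((match = linkPattern.exec(body)) !== null) {
    pages.push({ title: match[1], url: match[2] });
  }
  return pages;
}

// Usage (fetches the live index):
// const body = await (await fetch("https://docs.prem.io/llms.txt")).text();
// console.log(parseIndex(body));
```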
Basic Audio Transcription
Transcribe audio files to text:
import createRvencClient from "@premai/api-sdk";
import fs from "fs";

const client = await createRvencClient({
  apiKey: process.env.API_KEY,
  clientKEK: process.env.CLIENT_KEK,
});

const audioFile = fs.createReadStream("./audio.mp3");

const transcription = await client.audio.transcriptions.create({
  file: audioFile,
  model: "openai/whisper-large-v3",
});

console.log(transcription.text);
Transcription with Language Specification
Improve accuracy by specifying the language:
const transcription = await client.audio.transcriptions.create({
  file: audioFile,
  model: "openai/whisper-large-v3",
  language: "en",
});

console.log(transcription.text);
Transcription with Timestamps
Get word-level or segment-level timestamps:
const transcription = await client.audio.transcriptions.create({
  file: audioFile,
  model: "openai/whisper-large-v3",
  language: "en",
  response_format: "verbose_json",
  timestamp_granularities: ["word", "segment"],
});

console.log(transcription);
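Segment-level timestamps are enough to produce subtitles. A minimal sketch, assuming each segment in the verbose_json response carries `start`, `end` (seconds), and `text`, as in Whisper's verbose output; the exact response shape should be checked against the API reference:

```typescript
// Assumed segment shape from a verbose_json transcription.
interface Segment {
  start: number; // seconds
  end: number;   // seconds
  text: string;
}

// Format seconds as an SRT timestamp: HH:MM:SS,mmm
function toSrtTime(seconds: number): string {
  const ms = Math.round(seconds * 1000);
  const pad = (n: number, w: number) => String(n).padStart(w, "0");
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  return `${pad(h, 2)}:${pad(m, 2)}:${pad(s, 2)},${pad(ms % 1000, 3)}`;
}

// Render segments as numbered SRT cues.
function segmentsToSrt(segments: Segment[]): string {
  return segments
    .map((seg, i) =>
      `${i + 1}\n${toSrtTime(seg.start)} --> ${toSrtTime(seg.end)}\n${seg.text.trim()}\n`)
    .join("\n");
}

// Usage:
// console.log(segmentsToSrt(transcription.segments));
```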
Transcription with Custom Prompt
Provide context to improve accuracy:
const transcription = await client.audio.transcriptions.create({
  file: audioFile,
  model: "openai/whisper-large-v3",
  prompt: "This is a medical consultation discussing patient symptoms and treatment options.",
});

console.log(transcription.text);
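When the recording contains domain jargon or uncommon names, the prompt can be assembled from a glossary so the model is primed to spell those terms correctly. The `buildPrompt` helper and its wording are illustrative, not part of the SDK:

```typescript
// Build a transcription prompt from a context sentence plus key terms.
// Hypothetical helper for assembling the `prompt` parameter.
function buildPrompt(context: string, terms: string[]): string {
  const glossary = terms.length > 0 ? ` Key terms: ${terms.join(", ")}.` : "";
  return `${context}${glossary}`;
}

// Usage:
// const prompt = buildPrompt(
//   "This is a medical consultation discussing patient symptoms.",
//   ["hypertension", "metformin"]
// );
// then pass it as `prompt` to client.audio.transcriptions.create(...)
```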