Wizper (Whisper v3 -- fal.ai edition) Speech to Text
About
Transcribe an audio file using the Whisper model.
1. Calling the API
Install the client
The client provides a convenient way to interact with the model API.
npm install --save @fal-ai/client
Migrate to @fal-ai/client
The @fal-ai/serverless-client package has been deprecated in favor of @fal-ai/client. Please check the migration guide for more information.
Set up your API Key
Set FAL_KEY as an environment variable in your runtime.
export FAL_KEY="YOUR_API_KEY"
Submit a request
The client API handles the submit protocol for you: it tracks request status updates and returns the result once the request completes.
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/wizper", {
  input: {
    audio_url: "https://ihlhivqvotguuqycfcvj.supabase.co/storage/v1/object/public/public-text-to-speech/scratch-testing/earth-history-19mins.mp3"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});

console.log(result.data);
console.log(result.requestId);
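Once the request completes, result.data follows the Output Schema documented below. A minimal sketch of reading it; the chunk shape here mirrors that schema and is otherwise an assumption:

// Sketch: pull the full transcription and the per-segment chunks out of
// result.data, assuming the Output Schema documented below.
const { text, chunks } = result.data as {
  text: string;
  chunks?: { text: string }[];
};
console.log(text);
chunks?.forEach((chunk) => console.log(chunk.text));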
2. Authentication
The API uses an API Key for authentication. It is recommended that you set the FAL_KEY environment variable in your runtime when possible.
API Key
import { fal } from "@fal-ai/client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});
Protect your API Key
When running code on the client side (e.g. in a browser, mobile app, or GUI application), make sure not to expose your FAL_KEY. Instead, use a server-side proxy to make requests to the API. For more information, check out our server-side integration guide.
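As an illustration, the client can be pointed at such a proxy so credentials never ship to the browser. A sketch, where the route path is a placeholder and the backend setup itself is covered in the server-side integration guide:

import { fal } from "@fal-ai/client";

// Sketch: route browser-side calls through your own backend so the
// FAL_KEY never reaches the client. The proxy route is hypothetical.
fal.config({
  proxyUrl: "/api/fal/proxy",
});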
3. Queue
Submit a request
The client API provides a convenient way to submit requests to the model.
import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit("fal-ai/wizper", {
  input: {
    audio_url: "https://ihlhivqvotguuqycfcvj.supabase.co/storage/v1/object/public/public-text-to-speech/scratch-testing/earth-history-19mins.mp3"
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});
Fetch request status
You can fetch the status of a request to check if it is completed or still in progress.
import { fal } from "@fal-ai/client";

const status = await fal.queue.status("fal-ai/wizper", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});
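If you are not using webhooks, one option is to poll this status endpoint until the request completes. A minimal sketch, where the 3-second interval is an arbitrary choice and the terminal status value is an assumption to verify against the client docs:

import { fal } from "@fal-ai/client";

// Sketch: poll the queue until the request reports COMPLETED, then let
// the caller fetch the result. Production code should also time out
// and handle failed requests.
async function waitUntilDone(requestId: string) {
  for (;;) {
    const status = await fal.queue.status("fal-ai/wizper", {
      requestId,
      logs: true,
    });
    if (status.status === "COMPLETED") return status;
    await new Promise((resolve) => setTimeout(resolve, 3000));
  }
}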
Get the result
Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.
import { fal } from "@fal-ai/client";

const result = await fal.queue.result("fal-ai/wizper", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});

console.log(result.data);
console.log(result.requestId);
4. Files
Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass your own URL or a Base64 data URI.
Data URI (base64)
You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that, while convenient for small files, this alternative can impact request performance for large ones.
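For example, a small local file can be encoded as a data URI before being passed as audio_url; a Node.js sketch, with the file path hypothetical:

import { readFile } from "node:fs/promises";

// Sketch: turn a short local mp3 into a Base64 data URI. Convenient
// for small clips; prefer an uploaded or hosted URL for long recordings.
const bytes = await readFile("./clip.mp3");
const audio_url = `data:audio/mpeg;base64,${bytes.toString("base64")}`;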
Hosted files (URL)
You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or treat the request as coming from a bot.
Uploading files
We provide convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.
import { fal } from "@fal-ai/client";
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);
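The returned URL can be passed directly as the audio_url input. A sketch tying upload and transcription together, with a hypothetical local path:

import { fal } from "@fal-ai/client";
import { readFile } from "node:fs/promises";

// Sketch: upload a local recording to fal storage, then transcribe it
// from the hosted URL. Requires a runtime with the File global
// (e.g. Node 20+).
const bytes = await readFile("./recording.mp3");
const audioFile = new File([bytes], "recording.mp3", { type: "audio/mpeg" });
const audio_url = await fal.storage.upload(audioFile);

const result = await fal.subscribe("fal-ai/wizper", {
  input: { audio_url },
});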
Auto uploads
The client will auto-upload the file for you if you pass a binary object (e.g. File, Data).
Read more about file handling in our file upload guide.
5. Schema
Input
audio_url string (required)
URL of the audio file to transcribe. Supported formats: mp3, mp4, mpeg, mpga, m4a, wav or webm.

task TaskEnum
Task to perform on the audio file. Either transcribe or translate. Default value: "transcribe"
Possible enum values: transcribe, translate

language LanguageEnum
Language of the audio file. If translate is selected as the task, the audio will be translated to English, regardless of the language selected. Default value: "en"
Possible enum values: af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, yue, zh

chunk_level ChunkLevelEnum
Level of the chunks to return. Default value: "segment"
Possible enum values: segment

version VersionEnum
Version of the model to use. All of the models are the Whisper large variant. Default value: "3"
Possible enum values: 3
{
  "audio_url": "https://ihlhivqvotguuqycfcvj.supabase.co/storage/v1/object/public/public-text-to-speech/scratch-testing/earth-history-19mins.mp3",
  "task": "transcribe",
  "language": "en",
  "chunk_level": "segment",
  "version": "3"
}
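As a variation on the example above, setting task to translate returns English text regardless of the source language; a sketch with a placeholder URL:

import { fal } from "@fal-ai/client";

// Sketch: translate French speech to English text. The audio URL is a
// placeholder; language describes the source audio.
const result = await fal.subscribe("fal-ai/wizper", {
  input: {
    audio_url: "https://example.com/interview-fr.mp3",
    task: "translate",
    language: "fr",
  },
});
console.log(result.data.text);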
Output
text string (required)
Transcription of the audio file

chunks list
Timestamp chunks of the audio file
{
  "text": "",
  "chunks": [
    {
      "text": ""
    }
  ]
}