Qwen 3 TTS - Clone Voice [0.6B]

fal-ai/qwen-3-tts/clone-voice/0.6b
Clone your voice with the Qwen3-TTS Clone-Voice model's zero-shot cloning, then use the resulting speaker embedding with the text-to-speech endpoint to generate speech in your own voice.
Inference
Commercial use

About

Clone Voice 0.6B

1. Calling the API#

Install the client#

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/client

Setup your API Key#

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"

Submit a request#

The client handles the submit protocol for you: it tracks request status updates and returns the result once the request completes.

import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/qwen-3-tts/clone-voice/0.6b", {
  input: {
    audio_url: "https://storage.googleapis.com/falserverless/example_inputs/qwen3-tts/clone_in.mp3"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);

2. Authentication#

The API uses an API Key for authentication. It is recommended that you set the FAL_KEY environment variable in your runtime whenever possible.

API Key#

In case your app is running in an environment where you cannot set environment variables, you can set the API Key manually as a client configuration.

import { fal } from "@fal-ai/client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Queue#

Submit a request#

The client API provides a convenient way to submit requests to the model.

import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit("fal-ai/qwen-3-tts/clone-voice/0.6b", {
  input: {
    audio_url: "https://storage.googleapis.com/falserverless/example_inputs/qwen3-tts/clone_in.mp3"
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});

Fetch request status#

You can fetch the status of a request to check if it is completed or still in progress.

import { fal } from "@fal-ai/client";

const status = await fal.queue.status("fal-ai/qwen-3-tts/clone-voice/0.6b", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});

Get the result#

Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.

import { fal } from "@fal-ai/client";

const result = await fal.queue.result("fal-ai/qwen-3-tts/clone-voice/0.6b", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);

4. Files#

Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass your own URL or a Base64 data URI.

Data URI (base64)#

You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that for large files this approach, although convenient, can impact request performance.
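As a sketch (assuming a Node.js runtime with the global `Buffer`), a data URI can be built from raw audio bytes like this; the helper name, file path, and `audio/mpeg` mime type are illustrative, not part of the client:

```javascript
// Encode raw bytes as a data URI suitable for file inputs such as `audio_url`.
// `toDataUri` is an illustrative helper, not part of @fal-ai/client.
function toDataUri(bytes, mimeType) {
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

// Example: read a local sample and pass it as the `audio_url` input.
// const audioBytes = fs.readFileSync("voice_sample.mp3"); // hypothetical path
// const input = { audio_url: toDataUri(audioBytes, "audio/mpeg") };
```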

Hosted files (URL)#

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or treat the request as bot traffic.

Uploading files#

We provide a convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.

import { fal } from "@fal-ai/client";

const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

Read more about file handling in our file upload guide.

5. Schema#

Input#

audio_url string* required

URL to the reference audio file used for voice cloning.

reference_text string

Optional reference text that was used when creating the speaker embedding. Providing this can improve synthesis quality when using a cloned voice.

{
  "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/qwen3-tts/clone_in.mp3",
  "reference_text": "Okay. Yeah. I resent you. I love you. I respect you. But you know what? You blew it! And it is all thanks to you."
}
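As a minimal sketch of the input contract above (the `validateCloneInput` helper is illustrative, not part of the client):

```javascript
// Validate a clone-voice input payload against the schema above:
// `audio_url` is required; `reference_text` is an optional string.
function validateCloneInput(input) {
  const errors = [];
  if (typeof input.audio_url !== "string" || input.audio_url.length === 0) {
    errors.push("audio_url is required and must be a non-empty string");
  }
  if (input.reference_text !== undefined && typeof input.reference_text !== "string") {
    errors.push("reference_text must be a string when provided");
  }
  return errors;
}
```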

Output#

speaker_embedding File* required

The generated speaker embedding file in safetensors format.

{
  "speaker_embedding": {
    "file_size": 16288,
    "file_name": "tmpe71u7t4j.safetensors",
    "content_type": "application/octet-stream",
    "url": "https://storage.googleapis.com/falserverless/example_outputs/qwen3-tts/clone_out.safetensors"
  }
}

Other types#

Qwen3TTSInput06b#

text string* required

The text to be converted to speech.

prompt string

Optional prompt to guide the style of the generated speech. This prompt will be ignored if a speaker embedding is provided.

voice VoiceEnum

The voice to be used for speech synthesis; ignored if a speaker embedding is provided. Check out the documentation for each voice's details and which language it primarily supports.

Possible enum values: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee

language LanguageEnum

The language of the voice. Default value: "Auto"

Possible enum values: Auto, English, Chinese, Spanish, French, German, Italian, Japanese, Korean, Portuguese, Russian

speaker_voice_embedding_file_url string

URL to a speaker embedding file in safetensors format, produced by the fal-ai/qwen-3-tts/clone-voice/0.6b endpoint. If provided, the TTS model will use the cloned voice for synthesis instead of the predefined voices.

reference_text string

Optional reference text that was used when creating the speaker embedding. Providing this can improve synthesis quality when using a cloned voice.

top_k integer

Top-k sampling parameter. Default value: 50

top_p float

Top-p sampling parameter. Default value: 1

temperature float

Sampling temperature; higher => more random. Default value: 0.9

repetition_penalty float

Penalty to reduce repeated tokens/codes. Default value: 1.05

subtalker_dosample boolean

Sampling switch for the sub-talker. Default value: true

subtalker_top_k integer

Top-k for sub-talker sampling. Default value: 50

subtalker_top_p float

Top-p for sub-talker sampling. Default value: 1

subtalker_temperature float

Temperature for sub-talker sampling. Default value: 0.9

max_new_tokens integer

Maximum number of new codec tokens to generate. Default value: 200
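To connect the two endpoints, the clone-voice output's `speaker_embedding.url` can be passed as `speaker_voice_embedding_file_url` in this TTS input. A minimal sketch; the `buildTtsInput` helper is illustrative, and the exact text-to-speech endpoint id should be checked on the model page:

```javascript
// Build a Qwen3TTSInput06b payload from a clone-voice result.
// `voice` and `prompt` are omitted because the schema notes they are
// ignored once a speaker embedding is provided.
function buildTtsInput(cloneOutput, text, referenceText) {
  const input = {
    text,
    speaker_voice_embedding_file_url: cloneOutput.speaker_embedding.url,
  };
  if (referenceText !== undefined) input.reference_text = referenceText;
  return input;
}

// Usage with a clone-voice result like the sample output shown earlier:
// const ttsInput = buildTtsInput(result.data, "Hello from my cloned voice!");
// await fal.subscribe("fal-ai/qwen-3-tts/<text-to-speech-endpoint>", { input: ttsInput });
```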

Qwen3TTSOutput06b#

audio AudioFile* required

The generated speech audio file.

AudioFile#

url string* required

The URL where the file can be downloaded from.

content_type string

The mime type of the file.

file_name string

The name of the file. It will be auto-generated if not provided.

file_size integer

The size of the file in bytes.

file_data string

File data

duration float

The duration of the audio.

channels integer

The number of channels in the audio.

sample_rate integer

The sample rate of the audio.

bitrate string | integer

The bitrate of the audio (e.g., '192k' or 192000).

File#

url string* required

The URL where the file can be downloaded from.

content_type string

The mime type of the file.

file_name string

The name of the file. It will be auto-generated if not provided.

file_size integer

The size of the file in bytes.

file_data string

File data
