MiniMax Voice Cloning Text to Speech

fal-ai/minimax/voice-clone
Clone a voice from a sample audio clip and generate speech from text prompts using the MiniMax model, which leverages advanced AI techniques to create high-quality text-to-speech.
Inference
Commercial use
Partner

About

Clone a voice from an audio URL. Optionally, generate a TTS preview with the cloned voice.

1. Calling the API#

Install the client#

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/client

Setup your API Key#

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"

Submit a request#

The client handles the full submit protocol for you: it tracks request status updates and returns the result once the request is completed.

import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/minimax/voice-clone", {
  input: {
    audio_url: "https://storage.googleapis.com/falserverless/model_tests/zonos/demo_voice_zonos.wav"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);

2. Authentication#

The API uses an API Key for authentication. It is recommended that you set the FAL_KEY environment variable in your runtime whenever possible.

API Key#

In case your app is running in an environment where you cannot set environment variables, you can set the API Key manually as a client configuration.

import { fal } from "@fal-ai/client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Queue#

Submit a request#

The client API provides a convenient way to submit requests to the model.

import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit("fal-ai/minimax/voice-clone", {
  input: {
    audio_url: "https://storage.googleapis.com/falserverless/model_tests/zonos/demo_voice_zonos.wav"
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});

Fetch request status#

You can fetch the status of a request to check if it is completed or still in progress.

import { fal } from "@fal-ai/client";

const status = await fal.queue.status("fal-ai/minimax/voice-clone", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});
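
If you prefer to poll instead of using fal.subscribe, a small loop over fal.queue.status can wait for the request to finish. This is a minimal sketch; in production you would likely add error handling and a timeout, and the request ID is the placeholder from the example above.

import { fal } from "@fal-ai/client";

// Poll the queue until the request completes (sketch; add error handling as needed).
const requestId = "764cabcf-b745-4b3e-ae38-1200304cf45b";

let status = await fal.queue.status("fal-ai/minimax/voice-clone", {
  requestId,
  logs: true,
});
while (status.status !== "COMPLETED") {
  // Wait a couple of seconds between checks to avoid hammering the queue.
  await new Promise((resolve) => setTimeout(resolve, 2000));
  status = await fal.queue.status("fal-ai/minimax/voice-clone", {
    requestId,
    logs: true,
  });
}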

Get the result#

Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.

import { fal } from "@fal-ai/client";

const result = await fal.queue.result("fal-ai/minimax/voice-clone", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);

4. Files#

Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass your own URL or a Base64 data URI.

Data URI (base64)#

You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that, while convenient, this alternative can impact request performance for large files.
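
For instance, in a Node.js runtime you could read a local recording and embed it as a data URI before passing it as audio_url. This is a sketch; "sample.wav" is a placeholder for your own clip of at least 10 seconds.

import { fal } from "@fal-ai/client";
import { readFile } from "node:fs/promises";

// Read a local audio file and embed it as a Base64 data URI (Node.js assumed).
const bytes = await readFile("sample.wav");
const dataUri = `data:audio/wav;base64,${bytes.toString("base64")}`;

const result = await fal.subscribe("fal-ai/minimax/voice-clone", {
  input: { audio_url: dataUri },
});
console.log(result.data);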

Hosted files (URL)#

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, apply rate limits, or treat the request as bot traffic.

Uploading files#

We provide convenient file storage that allows you to upload files and use them in your requests. Upload files using the client API and reference the returned URL in your request payload.

import { fal } from "@fal-ai/client";

const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);
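
For voice cloning you would typically upload an audio recording rather than a text file. A sketch, assuming you already have the audio bytes in memory (for example from a recorder):

import { fal } from "@fal-ai/client";

// `audioBlob` is a placeholder for audio bytes you already hold (e.g. from a recorder).
const audioFile = new File([audioBlob], "voice-sample.wav", { type: "audio/wav" });
const audioUrl = await fal.storage.upload(audioFile);

// Use the returned URL as the audio_url input of the voice-clone request.
const result = await fal.subscribe("fal-ai/minimax/voice-clone", {
  input: { audio_url: audioUrl },
});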

Read more about file handling in our file upload guide.

5. Schema#

Input#

audio_url string* required

URL of the input audio file for voice cloning. Should be at least 10 seconds long. Default value: undefined

noise_reduction boolean

Enable noise reduction for the cloned voice Default value: false

need_volume_normalization boolean

Enable volume normalization for the cloned voice Default value: false

accuracy float

Text validation accuracy threshold (0-1) Default value: undefined

text string

Text to generate a TTS preview with the cloned voice (optional) Default value: "Hello, this is a preview of your cloned voice! I hope you like it!"

model ModelEnum

TTS model to use for preview. Options: speech-02-hd, speech-02-turbo, speech-01-hd, speech-01-turbo Default value: "speech-02-hd"

Possible enum values: speech-02-hd, speech-02-turbo, speech-01-hd, speech-01-turbo

{
  "audio_url": "https://storage.googleapis.com/falserverless/model_tests/zonos/demo_voice_zonos.wav",
  "text": "Hello, this is a preview of your cloned voice! I hope you like it!",
  "model": "speech-02-hd"
}
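
A request that also sets the optional fields could look like this (values are illustrative):

{
  "audio_url": "https://storage.googleapis.com/falserverless/model_tests/zonos/demo_voice_zonos.wav",
  "noise_reduction": true,
  "need_volume_normalization": true,
  "accuracy": 0.8,
  "text": "Hello, this is a preview of your cloned voice! I hope you like it!",
  "model": "speech-02-turbo"
}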

Output#

custom_voice_id string* required

The cloned voice ID for use with TTS Default value: undefined

audio File

Preview audio generated with the cloned voice (if requested) Default value: undefined

{
  "custom_voice_id": "",
  "audio": {
    "url": "https://fal.media/files/kangaroo/kojPUCNZ9iUGFGMR-xb7h_speech.mp3"
  }
}
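
The returned custom_voice_id can then be used as the voice_id of a VoiceSetting in a MiniMax TTS request. A sketch, assuming the fal-ai/minimax/speech-02-hd text-to-speech endpoint (substitute the MiniMax TTS endpoint you actually use):

import { fal } from "@fal-ai/client";

// Sketch: speak new text with the cloned voice. The endpoint ID is an assumption;
// replace it with the MiniMax TTS endpoint you are calling.
const tts = await fal.subscribe("fal-ai/minimax/speech-02-hd", {
  input: {
    text: "This is my cloned voice speaking.",
    voice_setting: {
      voice_id: "YOUR_CUSTOM_VOICE_ID", // custom_voice_id from the clone result
      speed: 1,
      vol: 1,
      pitch: 0,
    },
  },
});
console.log(tts.data);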

Other types#

TextToSpeechTurboRequest#

text string* required

Text to convert to speech (max 5000 characters) Default value: undefined

voice_setting VoiceSetting

Voice configuration settings

audio_setting AudioSetting

Audio configuration settings Default value: undefined

language_boost LanguageBoostEnum

Enhance recognition of specified languages and dialects Default value: undefined

Possible enum values: Chinese, Chinese,Yue, English, Arabic, Russian, Spanish, French, Portuguese, German, Turkish, Dutch, Ukrainian, Vietnamese, Indonesian, Japanese, Italian, Korean, Thai, Polish, Romanian, Greek, Czech, Finnish, Hindi, auto

output_format OutputFormatEnum

Format of the output content (non-streaming only) Default value: "hex"

Possible enum values: url, hex

pronunciation_dict PronunciationDict

Custom pronunciation dictionary for text replacement Default value: undefined

TextToVideoLiveRequest#

prompt string* required

Default value: undefined

prompt_optimizer boolean

Whether to use the model's prompt optimizer Default value: true

TextToVideoDirectorRequest#

prompt string* required

Text prompt for video generation. Camera movement instructions can be added using square brackets (e.g. [Pan left] or [Zoom in]). You can use up to 3 combined movements per prompt. Supported movements: Truck left/right, Pan left/right, Push in/Pull out, Pedestal up/down, Tilt up/down, Zoom in/out, Shake, Tracking shot, Static shot. For example: [Truck left, Pan right, Zoom in]. For a more detailed guide, refer to https://sixth-switch-2ac.notion.site/T2V-01-Director-Model-Tutorial-with-camera-movement-1886c20a98eb80f395b8e05291ad8645 Default value: undefined

prompt_optimizer boolean

Whether to use the model's prompt optimizer Default value: true

File#

url string* required

The URL where the file can be downloaded from. Default value: undefined

content_type string

The mime type of the file. Default value: undefined

file_name string

The name of the file. It will be auto-generated if not provided. Default value: undefined

file_size integer

The size of the file in bytes. Default value: undefined

file_data string

File data Default value: undefined

ImageToVideoRequest#

prompt string* required

Default value: undefined

image_url string* required

URL of the image to use as the first frame Default value: undefined

prompt_optimizer boolean

Whether to use the model's prompt optimizer Default value: true

SubjectReferenceRequest#

prompt string* required

Default value: undefined

subject_reference_image_url string* required

URL of the subject reference image to use for consistent subject appearance Default value: undefined

prompt_optimizer boolean

Whether to use the model's prompt optimizer Default value: true

AudioSetting#

sample_rate SampleRateEnum

Sample rate of generated audio Default value: "32000"

Possible enum values: 8000, 16000, 22050, 24000, 32000, 44100

bitrate BitrateEnum

Bitrate of generated audio Default value: "128000"

Possible enum values: 32000, 64000, 128000, 256000

format FormatEnum

Audio format Default value: "mp3"

Possible enum values: mp3, pcm, flac

channel ChannelEnum

Number of audio channels (1=mono, 2=stereo) Default value: "1"

Possible enum values: 1, 2
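
For reference, an audio_setting object using the documented defaults would look like this:

{
  "audio_setting": {
    "sample_rate": "32000",
    "bitrate": "128000",
    "format": "mp3",
    "channel": "1"
  }
}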

MiniMaxTextToImageWithReferenceRequest#

prompt string* required

Text prompt for image generation (max 1500 characters) Default value: undefined

image_url string* required

URL of the subject reference image to use for consistent character appearance Default value: undefined

aspect_ratio AspectRatioEnum

Aspect ratio of the generated image Default value: "1:1"

Possible enum values: 1:1, 16:9, 4:3, 3:2, 2:3, 3:4, 9:16, 21:9

num_images integer

Number of images to generate (1-9) Default value: 1

prompt_optimizer boolean

Whether to enable automatic prompt optimization Default value: false

MiniMaxTextToImageRequest#

prompt string* required

Text prompt for image generation (max 1500 characters) Default value: undefined

aspect_ratio AspectRatioEnum

Aspect ratio of the generated image Default value: "1:1"

Possible enum values: 1:1, 16:9, 4:3, 3:2, 2:3, 3:4, 9:16, 21:9

num_images integer

Number of images to generate (1-9) Default value: 1

prompt_optimizer boolean

Whether to enable automatic prompt optimization Default value: false

VoiceSetting#

voice_id string

Predefined voice ID to use for synthesis Default value: "Wise_Woman"

speed float

Speech speed (0.5-2.0) Default value: 1

vol float

Volume (0-10) Default value: 1

pitch integer

Voice pitch (-12 to 12) Default value: 0

emotion EmotionEnum

Emotion of the generated speech Default value: undefined

Possible enum values: happy, sad, angry, fearful, disgusted, surprised, neutral

english_normalization boolean

Enables English text normalization to improve number reading performance, with a slight increase in latency Default value: false

TextToVideoRequest#

prompt string* required

Default value: undefined

prompt_optimizer boolean

Whether to use the model's prompt optimizer Default value: true

ImageToVideoDirectorRequest#

prompt string* required

Text prompt for video generation. Camera movement instructions can be added using square brackets (e.g. [Pan left] or [Zoom in]). You can use up to 3 combined movements per prompt. Supported movements: Truck left/right, Pan left/right, Push in/Pull out, Pedestal up/down, Tilt up/down, Zoom in/out, Shake, Tracking shot, Static shot. For example: [Truck left, Pan right, Zoom in]. For a more detailed guide, refer to https://sixth-switch-2ac.notion.site/T2V-01-Director-Model-Tutorial-with-camera-movement-1886c20a98eb80f395b8e05291ad8645 Default value: undefined

image_url string* required

URL of the image to use as the first frame Default value: undefined

prompt_optimizer boolean

Whether to use the model's prompt optimizer Default value: true

TextToSpeechHDRequest#

text string* required

Text to convert to speech (max 5000 characters) Default value: undefined

voice_setting VoiceSetting

Voice configuration settings

audio_setting AudioSetting

Audio configuration settings Default value: undefined

language_boost LanguageBoostEnum

Enhance recognition of specified languages and dialects Default value: undefined

Possible enum values: Chinese, Chinese,Yue, English, Arabic, Russian, Spanish, French, Portuguese, German, Turkish, Dutch, Ukrainian, Vietnamese, Indonesian, Japanese, Italian, Korean, Thai, Polish, Romanian, Greek, Czech, Finnish, Hindi, auto

output_format OutputFormatEnum

Format of the output content (non-streaming only) Default value: "hex"

Possible enum values: url, hex

pronunciation_dict PronunciationDict

Custom pronunciation dictionary for text replacement Default value: undefined

PronunciationDict#

tone_list list<string>

List of pronunciation replacements in format ['text/(pronunciation)', ...]. For Chinese, tones are 1-5. Example: ['燕少飞/(yan4)(shao3)(fei1)'] Default value: undefined
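
Based on the format above, a pronunciation_dict passed in a TTS request could look like this (sketch):

{
  "pronunciation_dict": {
    "tone_list": ["燕少飞/(yan4)(shao3)(fei1)"]
  }
}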
