
fal-ai/bytedance/seed/v2/mini

Seed 2.0 Mini is a high-performance multimodal model optimized for low latency and high concurrency. It supports text, image, and video input with 256K context and configurable thinking/reasoning modes.
Inference · Commercial use

About

Multimodal understanding using ByteDance's Seed 2.0 Mini model.

Seed 2.0 Mini is a high-performance multimodal model optimized for low latency and high concurrency. It supports text, image, and video input with a 256K context window and configurable output length. Multi-turn conversations are supported: pass back the messages field from a previous response to preserve context. For complex tasks, you can optionally enable deep thinking and tune it with reasoning_effort.
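
For example, enabling deep reasoning for a complex task only takes two extra input fields. A minimal request sketch (the prompt is illustrative):

{
  "prompt": "Plan a three-day itinerary for Kyoto on a tight budget.",
  "thinking": "enabled",
  "reasoning_effort": "high"
}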

1. Calling the API

Install the client

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/client

Set up your API Key

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"

Submit a request

The client implements the queue submit protocol for you: it tracks request status updates and returns the result once the request completes.

import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/bytedance/seed/v2/mini", {
  input: {
    prompt: "What can you do?"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
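
Per the Output schema in section 5, result.data contains the response text, optional reasoning content, and the running conversation history. An illustrative sketch (values are not real model output):

{
  "output": "I can answer questions and analyze text, images, and videos...",
  "reasoning_content": "Present when thinking is enabled or auto.",
  "messages": []
}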

2. Authentication

The API uses an API Key for authentication. We recommend setting the FAL_KEY environment variable in your runtime whenever possible.

API Key

If your app runs in an environment where you cannot set environment variables, you can pass the API Key manually as a client configuration option.

import { fal } from "@fal-ai/client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Queue

Submit a request

The client API provides a convenient way to submit requests to the model.

import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit("fal-ai/bytedance/seed/v2/mini", {
  input: {
    prompt: "What can you do?"
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});

Fetch request status

You can fetch the status of a request to check if it is completed or still in progress.

import { fal } from "@fal-ai/client";

const status = await fal.queue.status("fal-ai/bytedance/seed/v2/mini", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});
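
If you are not using webhooks, you can poll the status until the request completes. A minimal sketch (the interval and request ID are illustrative, and error handling is omitted):

import { fal } from "@fal-ai/client";

const requestId = "764cabcf-b745-4b3e-ae38-1200304cf45b";
let status = await fal.queue.status("fal-ai/bytedance/seed/v2/mini", {
  requestId,
  logs: true,
});
while (status.status !== "COMPLETED") {
  // Wait a couple of seconds between polls to avoid hammering the API.
  await new Promise((resolve) => setTimeout(resolve, 2000));
  status = await fal.queue.status("fal-ai/bytedance/seed/v2/mini", {
    requestId,
    logs: true,
  });
}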

Get the result

Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.

import { fal } from "@fal-ai/client";

const result = await fal.queue.result("fal-ai/bytedance/seed/v2/mini", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);

4. Files

Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass either your own URL or a Base64 data URI.

Data URI (base64)

You can pass a Base64 data URI as a file input and the API will decode it for you. Keep in mind that, while convenient, this approach can hurt request performance for large files.
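
For instance, you could read a local image and encode it as a data URI. A sketch assuming a Node.js runtime (the file path is hypothetical):

import { readFile } from "node:fs/promises";

// Read the image bytes and build a Base64 data URI that can be passed
// anywhere the API expects a file URL (e.g. image_urls).
const bytes = await readFile("./photo.png");
const dataUri = `data:image/png;base64,${bytes.toString("base64")}`;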

Hosted files (URL)

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts may block cross-site requests, apply rate limits, or treat the request as bot traffic.

Uploading files

We provide convenient file storage that lets you upload files and use the returned URLs in your requests. You can upload files using the client API and reference the returned URL in your input.

import { fal } from "@fal-ai/client";

const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

Read more about file handling in our file upload guide.
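
Putting it together, you can upload an image and reference the returned URL as model input. A sketch assuming Node.js 20+ (where File is available globally; the file path is hypothetical):

import { fal } from "@fal-ai/client";
import { readFile } from "node:fs/promises";

// Upload a local image to fal storage and get back a hosted URL.
const file = new File([await readFile("./photo.png")], "photo.png", {
  type: "image/png",
});
const url = await fal.storage.upload(file);

// Use the hosted URL as visual input for the model.
const result = await fal.subscribe("fal-ai/bytedance/seed/v2/mini", {
  input: {
    prompt: "What is in this image?",
    image_urls: [url],
  },
});
console.log(result.data.output);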

5. Schema

Input

prompt string* required

The text prompt or question for the model.

image_urls list<string>

URLs of images for visual understanding. Supported formats: JPEG, PNG, WebP. A maximum of 6 images is supported. Any additional images will be ignored.

video_urls list<string>

URLs of videos for video understanding. Supported formats: MP4, MOV. Audio comprehension is not supported. A maximum of 3 videos is supported. Any additional videos will be ignored.

system_prompt string

Optional system prompt to guide the model's behavior.

messages list<Seed2MiniMessage>

Optional prior conversation history for multi-turn conversations. Pass back the messages field from a previous response to provide context. The current prompt, image_urls, video_urls, and system_prompt are always appended as the latest user turn.

thinking ThinkingEnum

Controls the model's chain-of-thought reasoning. enabled always includes reasoning, disabled never includes reasoning, auto lets the model decide based on the query. Default value: "enabled"

Possible enum values: enabled, disabled, auto

reasoning_effort Enum

Controls the depth of reasoning before the model responds. Only applicable when thinking is enabled or auto. minimal for immediate response, low for faster response with light reasoning, medium for balanced speed and depth, high for deep analysis of complex issues.

Possible enum values: minimal, low, medium, high

max_completion_tokens integer

Controls the maximum length of the model's output, including both the model's response and its chain-of-thought content, measured in tokens. Default value: 4096

temperature float

Controls randomness in the response. Lower values make output more focused and deterministic, higher values make it more creative. Default value: 1

top_p float

Nucleus sampling parameter. The model considers tokens with top_p cumulative probability mass. Lower values narrow the token selection. Default value: 0.7

{
  "prompt": "What can you do?",
  "messages": [],
  "thinking": "enabled",
  "max_completion_tokens": 4096,
  "temperature": 1,
  "top_p": 0.7
}
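
For reference, a request with video input might look like this (the URL is a placeholder):

{
  "prompt": "Summarize what happens in this clip.",
  "video_urls": ["https://example.com/clip.mp4"],
  "thinking": "auto",
  "max_completion_tokens": 2048
}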

Output

output string* required

The model's text response.

reasoning_content string

The model's chain-of-thought reasoning content. Only present when thinking is enabled or auto.

messages list<Seed2MiniMessage>* required

The full conversation history including the model's response. Pass this back as the messages input field to continue the conversation.

{
  "output": "",
  "messages": [
    {
      "role": "system"
    }
  ]
}
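
The messages round-trip makes multi-turn chat straightforward. A minimal two-turn sketch (prompts are illustrative):

import { fal } from "@fal-ai/client";

// First turn: no prior history.
const first = await fal.subscribe("fal-ai/bytedance/seed/v2/mini", {
  input: { prompt: "Name three uses for a paperclip." },
});

// Second turn: pass the returned messages back to preserve context.
const second = await fal.subscribe("fal-ai/bytedance/seed/v2/mini", {
  input: {
    prompt: "Pick the most practical one and explain why.",
    messages: first.data.messages,
  },
});
console.log(second.data.output);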

Other types

Image

url string* required

The URL where the file can be downloaded from.

content_type string

The mime type of the file.

file_name string

The name of the file. It will be auto-generated if not provided.

file_size integer

The size of the file in bytes.

width integer

The width of the image in pixels.

height integer

The height of the image in pixels.

File

url string* required

The URL where the file can be downloaded from.

content_type string

The mime type of the file.

file_name string

The name of the file. It will be auto-generated if not provided.

file_size integer

The size of the file in bytes.

ImageSize

width integer

The width of the generated image. Default value: 512

height integer

The height of the generated image. Default value: 512

Seed2MiniMessage

role RoleEnum* required

The role of the message author.

Possible enum values: system, user, assistant

content string | list<object>* required

The content of the message. Can be a string for text-only messages, or a list of content parts for multimodal messages (e.g. with images).
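
For example, a text-only message is just a role plus a string (the exact shape of multimodal content parts is not specified on this page):

{
  "role": "user",
  "content": "What can you do?"
}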