nvidia/nemotron-3-nano-omni

Open, efficient reasoning model from NVIDIA: a 30B-A3B hybrid Transformer-Mamba mixture-of-experts (MoE) model built for enterprise agentic workflows. Available on fal for inference, with commercial use permitted.

1. Calling the API#

Install the client#

The client provides a convenient way to interact with the model API.

npm install --save @fal-ai/client

Setup your API Key#

Set FAL_KEY as an environment variable in your runtime.

export FAL_KEY="YOUR_API_KEY"
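On Windows, the PowerShell equivalent is:

$env:FAL_KEY = "YOUR_API_KEY"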

Submit a request#

The client handles the full request lifecycle: it submits the request, tracks status updates, and returns the result once the request completes.

import { fal } from "@fal-ai/client";

const result = await fal.subscribe("nvidia/nemotron-3-nano-omni", {
  input: {
    prompt: "Summarize the key capabilities of a multimodal agent."
  },
  logs: true,
  // Print queue logs as the request runs.
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);

2. Authentication#

The API uses an API Key for authentication. We recommend setting the FAL_KEY environment variable in your runtime whenever possible.

API Key#

If your app runs in an environment where you cannot set environment variables, you can set the API Key manually via the client configuration.

import { fal } from "@fal-ai/client";

fal.config({
  credentials: "YOUR_FAL_KEY"
});

3. Queue#

Submit a request#

You can also submit requests to the queue directly and fetch the result later; this is useful for long-running requests or when you want results delivered to a webhook.

import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit("nvidia/nemotron-3-nano-omni", {
  input: {
    prompt: "Summarize the key capabilities of a multimodal agent."
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});

Fetch request status#

You can fetch the status of a request to check if it is completed or still in progress.

import { fal } from "@fal-ai/client";

const status = await fal.queue.status("nvidia/nemotron-3-nano-omni", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});

Get the result#

Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.

import { fal } from "@fal-ai/client";

const result = await fal.queue.result("nvidia/nemotron-3-nano-omni", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);
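Putting the three queue calls together, a minimal polling loop might look like the sketch below. The status name "COMPLETED" and the call signatures follow the client usage documented above; in most apps, fal.subscribe or a webhook is preferable to hand-rolled polling.

import { fal } from "@fal-ai/client";

// Poll the queue until the request completes, then fetch the result.
async function waitForResult(requestId: string) {
  for (;;) {
    const status = await fal.queue.status("nvidia/nemotron-3-nano-omni", {
      requestId,
      logs: true,
    });
    if (status.status === "COMPLETED") break;
    // Back off briefly between polls.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  return fal.queue.result("nvidia/nemotron-3-nano-omni", { requestId });
}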

4. Files#

Some attributes in the API accept file URLs as input. Whenever that's the case, you can pass either your own URL or a Base64 data URI.

Data URI (base64)#

You can pass a Base64 data URI as a file input and the API will handle the decoding for you. Keep in mind that for large files this approach, although convenient, can impact request performance.
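As a minimal sketch (Node.js, standard library only), you can build a data URI from a local file like this:

import { readFile } from "node:fs/promises";

// Read a local file and encode it as a Base64 data URI.
// Base64 inflates the payload by roughly a third, so prefer
// hosted URLs or fal storage for large files.
const bytes = await readFile("hello.txt");
const dataUri = `data:text/plain;base64,${bytes.toString("base64")}`;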

Hosted files (URL)#

You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts may block cross-site requests, apply rate limits, or treat the request as bot traffic.

Uploading files#

We provide convenient file storage: upload a file with the client API and use the returned URL in your requests.

import { fal } from "@fal-ai/client";

// Upload a file to fal storage; the returned URL can be used
// as a file input in subsequent requests.
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);

Read more about file handling in our file upload guide.

5. Schema#

Input#

prompt string* required

Text prompt to send to the model. English only.

system_prompt string

Optional system prompt to steer the model. If omitted, the reasoning_mode control token is used as the system message.

reasoning_mode ReasoningModeEnum

Whether the model should emit an explicit reasoning trace. no_think returns a direct answer; think returns chain-of-thought followed by the final answer (see the sketch after the example below). Default value: "no_think"

Possible enum values: think, no_think

max_tokens integer

Maximum number of tokens to generate. Default value: 1024

temperature float

Sampling temperature. Lower is more deterministic. Default value: 0.7

top_p float

Nucleus sampling probability mass. Default value: 0.95

{
  "prompt": "Summarize the key capabilities of a multimodal agent.",
  "system_prompt": "You are a concise enterprise assistant.",
  "reasoning_mode": "no_think",
  "max_tokens": 1024,
  "temperature": 0.7,
  "top_p": 0.95
}
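As a sketch of how these parameters combine, the following request enables an explicit reasoning trace. With reasoning_mode set to "think", the response contains the chain-of-thought before the final answer, so a larger max_tokens budget is usually sensible:

import { fal } from "@fal-ai/client";

// Request an explicit reasoning trace with a larger token budget.
const result = await fal.subscribe("nvidia/nemotron-3-nano-omni", {
  input: {
    prompt: "Summarize the key capabilities of a multimodal agent.",
    reasoning_mode: "think",
    max_tokens: 2048,
    temperature: 0.7,
    top_p: 0.95,
  },
});
console.log(result.data.output);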

Output#

output string* required

Generated text response.

finish_reason string

Reason why generation stopped. Default value: "stop"

usage UsageInfo* required

Token usage for the request.

{
  "output": "The image shows a golden retriever puppy sitting on a wooden floor.",
  "finish_reason": "stop",
  "usage": {
    "output_tokens": 87,
    "input_tokens": 412
  }
}

Other types#

UsageInfo#

input_tokens integer* required

Number of input tokens processed.

output_tokens integer* required

Number of output tokens generated.
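For reference, the schema above maps onto TypeScript types roughly as follows. This is a sketch for orientation, not types exported by the client:

type ReasoningMode = "think" | "no_think";

interface NemotronInput {
  prompt: string;                 // required; English only
  system_prompt?: string;         // overrides the reasoning_mode control token
  reasoning_mode?: ReasoningMode; // default: "no_think"
  max_tokens?: number;            // default: 1024
  temperature?: number;           // default: 0.7
  top_p?: number;                 // default: 0.95
}

interface UsageInfo {
  input_tokens: number;  // input tokens processed
  output_tokens: number; // output tokens generated
}

interface NemotronOutput {
  output: string;         // generated text response
  finish_reason?: string; // default: "stop"
  usage: UsageInfo;
}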
