Migrate to @fal-ai/client
The @fal-ai/serverless-client package has been deprecated in favor of @fal-ai/client. Please check the migration guide for more information.
fal-ai/moondream/batched
The client provides a convenient way to interact with the model API.
npm install --save @fal-ai/client
Set FAL_KEY as an environment variable in your runtime.
export FAL_KEY="YOUR_API_KEY"
The client handles the queue submit protocol for you: it tracks request status updates and returns the result once the request completes.
import { fal } from "@fal-ai/client";
const result = await fal.subscribe("fal-ai/moondream/batched", {
  input: {
    inputs: [
      {
        prompt: "What is the girl doing?",
        image_url: "https://github.com/vikhyat/moondream/raw/main/assets/demo-1.jpg"
      }
    ]
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result.data);
console.log(result.requestId);
The API uses an API Key for authentication. It is recommended you set the FAL_KEY environment variable in your runtime when possible.
import { fal } from "@fal-ai/client";
fal.config({
  credentials: "YOUR_FAL_KEY"
});
When running code on the client side (e.g. in a browser, mobile app, or GUI application), make sure not to expose your FAL_KEY. Instead, use a server-side proxy to make requests to the API. For more information, check out our server-side integration guide.
The client API provides a convenient way to submit requests to the model.
import { fal } from "@fal-ai/client";
const { request_id } = await fal.queue.submit("fal-ai/moondream/batched", {
  input: {
    inputs: [
      {
        prompt: "What is the girl doing?",
        image_url: "https://github.com/vikhyat/moondream/raw/main/assets/demo-1.jpg"
      }
    ]
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});
You can fetch the status of a request to check if it is completed or still in progress.
import { fal } from "@fal-ai/client";
const status = await fal.queue.status("fal-ai/moondream/batched", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});
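If you submit via the queue without a webhook, you can poll the status yourself until the request completes (note that fal.subscribe already does this for you). Below is a minimal sketch of such a loop with the status call injected as a parameter so it runs standalone; pollUntilDone, the interval values, and the stubbed status function are illustrative, not part of the client API.

```javascript
// Illustrative polling loop (not part of @fal-ai/client):
// repeatedly fetch the request status until it reports COMPLETED.
async function pollUntilDone(fetchStatus, { intervalMs = 1000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status.status === "COMPLETED") return status;
    // Not done yet: wait before checking again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Request did not complete in time");
}

// Example with a stubbed status function. In real code you would pass
// () => fal.queue.status("fal-ai/moondream/batched", { requestId }) instead.
const responses = [
  { status: "IN_QUEUE" },
  { status: "IN_PROGRESS" },
  { status: "COMPLETED" },
];
pollUntilDone(() => Promise.resolve(responses.shift()), { intervalMs: 1 })
  .then((s) => console.log(s.status)); // prints "COMPLETED"
```

Once the loop resolves, fetch the result with fal.queue.result as shown below.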
Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.
import { fal } from "@fal-ai/client";
const result = await fal.queue.result("fal-ai/moondream/batched", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b"
});
console.log(result.data);
console.log(result.requestId);
Some attributes in the API accept file URLs as input. Whenever that's the case you can pass your own URL or a Base64 data URI.
You can pass a Base64 data URI as a file input; the API handles the file decoding for you. Keep in mind that, while convenient, this approach can hurt request performance for large files.
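For small payloads, a data URI can be built directly from the raw bytes. The sketch below uses Node's Buffer; the helper name and the text/plain payload are illustrative, and for an image you would read the file and use its real MIME type (e.g. "image/jpeg").

```javascript
// Build a Base64 data URI from raw bytes (Node.js).
function toDataUri(bytes, mimeType) {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
}

const uri = toDataUri("Hello, World!", "text/plain");
console.log(uri); // data:text/plain;base64,SGVsbG8sIFdvcmxkIQ==
```

The resulting string can then be passed wherever the API accepts a file URL, such as the image_url field.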
You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or treat the request as coming from a bot.
We provide a convenient file storage that allows you to upload files and use them in your requests. You can upload files using the client API and use the returned URL in your requests.
import { fal } from "@fal-ai/client";
const file = new File(["Hello, World!"], "hello.txt", { type: "text/plain" });
const url = await fal.storage.upload(file);
The client will auto-upload the file for you if you pass a binary object (e.g. File, Data).
Read more about file handling in our file upload guide.
model_id (ModelIDEnum): Model ID to use for inference. Default value: "vikhyatk/moondream2". Possible enum values: vikhyatk/moondream2, fal-ai/moondream2-docci
inputs: List of input prompts and image URLs
max_tokens (integer): Maximum number of new tokens to generate. Default value: 64
temperature (float): Temperature for sampling. Default value: 0.2
top_p (float): Top P for sampling. Default value: 1
repetition_penalty (float): Repetition penalty for sampling. Default value: 1
{
  "model_id": "vikhyatk/moondream2",
  "inputs": [
    {
      "prompt": "What is the girl doing?",
      "image_url": "https://github.com/vikhyat/moondream/raw/main/assets/demo-1.jpg"
    }
  ],
  "max_tokens": 64,
  "temperature": 0.2,
  "top_p": 1,
  "repetition_penalty": 1
}
List of generated outputs
partial (boolean): Whether the output is partial
Timings for different parts of the process
Filenames of the images processed
{}
image_url (string, required): URL of the image to be processed
prompt (string): Prompt to be used for the image. Default value: "Describe this image."
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks
Predict the probability of an image being NSFW.
Use any vision language model from our selected catalogue (powered by OpenRouter)