The client library handles the queue submit protocol for you: it submits the request, tracks status updates, and returns the result once the request completes.
```javascript
import * as fal from "@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/llavav15-13b", {
  input: {
    image_url: "https://llava-vl.github.io/static/images/monalisa.jpg",
    prompt: "Do you know who drew this painting?",
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
```
When running code on the client side (e.g. in a browser, mobile app, or GUI application), make sure not to expose your FAL_KEY. Instead, use a server-side proxy to make requests to the API. For more information, check out our server-side integration guide.
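As a sketch of the proxy setup, the client can be pointed at a server-side route that holds the key and forwards requests on your behalf. The `proxyUrl` option and the `/api/fal/proxy` route below are assumptions for illustration; see the server-side integration guide for the exact setup.

```javascript
import * as fal from "@fal-ai/serverless-client";

// Configure the client to send requests through your own backend route
// instead of embedding FAL_KEY in client-side code. The proxy holds the
// key and forwards requests to the fal API.
fal.config({
  proxyUrl: "/api/fal/proxy", // hypothetical route exposed by your server
});
```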
For long-running requests, such as training jobs or models with slower inference times, it is recommended to check the Queue status and rely on Webhooks instead of blocking while waiting for the result.
The client API also provides a convenient way to submit requests to the queue directly, without blocking on the result.
```javascript
import * as fal from "@fal-ai/serverless-client";

const { request_id } = await fal.queue.submit("fal-ai/llavav15-13b", {
  input: {
    image_url: "https://llava-vl.github.io/static/images/monalisa.jpg",
    prompt: "Do you know who drew this painting?",
  },
  webhookUrl: "https://optional.webhook.url/for/results",
});
```
You can fetch the status of a request to check whether it is completed or still in progress.
```javascript
import * as fal from "@fal-ai/serverless-client";

const status = await fal.queue.status("fal-ai/llavav15-13b", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
  logs: true,
});
```
Once the request is completed, you can fetch the result. See the Output Schema for the expected result format.
```javascript
import * as fal from "@fal-ai/serverless-client";

const result = await fal.queue.result("fal-ai/llavav15-13b", {
  requestId: "764cabcf-b745-4b3e-ae38-1200304cf45b",
});
```
You can pass a Base64 data URI as a file input. The API will handle the file decoding for you. Keep in mind that, while convenient, this approach can hurt request performance for large files.
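For example, raw image bytes can be encoded into a data URI and passed directly as the `image_url` input. The helper below is a hypothetical sketch (not part of the client library) using Node's `Buffer`:

```javascript
// Build a Base64 data URI from raw image bytes so it can be passed
// directly as the `image_url` field of the request input.
function toDataUri(bytes, mimeType) {
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

// Example with a tiny placeholder payload (a real call would use the
// full bytes of an image file).
const dataUri = toDataUri([0x89, 0x50, 0x4e, 0x47], "image/png");
console.log(dataUri); // data:image/png;base64,iVBORw==
```

The resulting string can then be used anywhere the API expects a URL, e.g. `input: { image_url: dataUri, prompt: "..." }`.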
You can also pass your own URLs as long as they are publicly accessible. Be aware that some hosts might block cross-site requests, rate-limit you, or treat the request as coming from a bot.
We provide convenient file storage that allows you to upload files and use them in your requests. Upload a file with the client API, then use the returned URL in your request.
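A minimal sketch of that flow, assuming the client exposes a `fal.storage.upload` helper that returns a hosted URL (check the client reference for the exact API). The in-memory `Blob` stands in for a file you would normally get from user input:

```javascript
import * as fal from "@fal-ai/serverless-client";

// Stand-in for a real file, e.g. from an <input type="file"> element.
const file = new Blob([new Uint8Array([0x89, 0x50, 0x4e, 0x47])], {
  type: "image/png",
});

// Upload the file to fal storage and use the returned URL as input.
const url = await fal.storage.upload(file);

const result = await fal.subscribe("fal-ai/llavav15-13b", {
  input: {
    image_url: url,
    prompt: "Do you know who drew this painting?",
  },
});
```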
- `max_tokens` (`integer`): Maximum number of tokens to generate. Default value: `64`
- `temperature` (`float`): Temperature for sampling. Default value: `0.2`
- `top_p` (`float`): Top P for sampling. Default value: `1`
```json
{
  "image_url": "https://llava-vl.github.io/static/images/monalisa.jpg",
  "prompt": "Do you know who drew this painting?",
  "max_tokens": 64,
  "temperature": 0.2,
  "top_p": 1
}
```