
The fal client

The client libraries offer convenient ways to interact with fal functions.

Installation

The fal client is available through the standard package manager for each supported language:

npm install --save @fal-ai/serverless-client

Authentication

Navigate to our dashboard's keys page and generate a key from the UI.

All our clients expect the FAL_KEY environment variable to be set.

export FAL_KEY=89733b28-••••••••

Initialize the client (JavaScript)

Alternatively, the JS client lets you set credentials manually, so you can handle them yourself in your app code.

import * as fal from "@fal-ai/serverless-client";

fal.config({
  credentials: "FAL_KEY", // or a function that returns a string
});
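As the comment above notes, credentials can also be a function that returns a string, which is handy when keys live somewhere other than the environment. A minimal sketch, using an in-memory map as a stand-in for your secret store (the map and resolveFalKey are hypothetical names, not part of the client):

```typescript
// Stand-in for your secret store (hypothetical; could be a vault client, etc.)
const secrets = new Map<string, string>([["fal", "key-id:key-secret"]]);

// Resolver suitable for passing as the `credentials` function.
function resolveFalKey(): string {
  const key = secrets.get("fal");
  if (!key) throw new Error("FAL key not configured");
  return key;
}

// fal.config({ credentials: resolveFalKey });
console.log(resolveFalKey()); // → key-id:key-secret
```

Because the function is called by the client, you can rotate keys without re-running config.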

Subscribe to queue updates

The client offers a way for you to subscribe to queue updates. This is useful if you want to get notified when a function is done running, or if you want to get the logs as they are being generated.

import * as fal from "@fal-ai/serverless-client";
 
const result = await fal.subscribe("fal-ai/fast-lightning-sdxl", {
  input: {
    prompt: "a cute puppy",
  },
  pollInterval: 500,
  logs: true,
  onQueueUpdate: (update) => {
    console.log(update.status);
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
console.log(result);

The onQueueUpdate callback will be called every time the queue status changes. The update object contains the queue status data as documented on the status types section.
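The log-filtering logic inside the callback can be factored into a small, testable helper. A sketch assuming the update shape used above (the QueueUpdate type and its status list are illustrative here, not the client's exported types):

```typescript
// Illustrative shape of a queue update, matching the fields the callback reads.
type QueueUpdate = {
  status: "IN_QUEUE" | "IN_PROGRESS" | "COMPLETED";
  logs?: { message: string }[];
};

// Return the log messages worth printing; only IN_PROGRESS updates carry logs.
function messagesToPrint(update: QueueUpdate): string[] {
  if (update.status !== "IN_PROGRESS") return [];
  return (update.logs ?? []).map((log) => log.message);
}

console.log(messagesToPrint({ status: "IN_QUEUE" })); // → []
```

You would then pass `(update) => messagesToPrint(update).forEach(console.log)` as the onQueueUpdate callback.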

Run functions

The client also offers a way to run functions directly. This is useful for functions that execute quickly, where you simply want to wait for the result.

import * as fal from "@fal-ai/serverless-client";
 
const result = await fal.run(FUNCTION_ID, {
  input: {
    prompt: "a cute puppy",
  },
});
console.log(result);

Streaming

Some endpoints support streaming, which allows you to get partial results as they are being generated. This is particularly useful for long-running functions that produce intermediate results, such as Visual LLMs and Workflows with multiple steps.

The API should feel familiar if you have used the other methods, but it returns a stream object that you can use to consume all the events produced during the request. Here's an example with fal-ai/llavav15-13b:

import * as fal from "@fal-ai/serverless-client";
 
const stream = await fal.stream("fal-ai/llavav15-13b", {
  input: {
    image_url: "https://llava-vl.github.io/static/images/monalisa.jpg",
    prompt: "Do you know who drew this painting?",
  },
});
 
for await (const event of stream) {
  console.log("partial", event);
}
 
const result = await stream.done();
console.log("final result", result);
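The same `for await` consumption pattern can be exercised without the network by substituting any async iterable for the stream. A sketch with a local async generator standing in for a fal stream (fakeStream and collect are hypothetical helpers, not part of the client):

```typescript
// A local async generator standing in for a streaming endpoint.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Leonardo";
  yield " da Vinci";
}

// Accumulate partial chunks into the final text, as a UI might do.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk; // each partial event extends the result so far
  }
  return text;
}

collect(fakeStream()).then((t) => console.log(t)); // prints "Leonardo da Vinci"
```

This makes it easy to unit-test your event-handling code before pointing it at a real endpoint.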

2023 © Features and Labels Inc.