Model APIs give you instant access to state-of-the-art AI models for image, video, audio, and multimodal generation. Every model is already optimized and production-ready, so you can authenticate and start generating immediately. Each model runs on fal’s infrastructure with automatic scaling, queue-based reliability, and pay-per-use billing. You call every model the same way, whether through the Python or JavaScript client or raw HTTP. If you need to deploy your own model instead, see Serverless.

Quick Example

Generate an image in a few lines of code. Install the client, set your API key, and call a model.
import fal_client

result = fal_client.subscribe("fal-ai/nano-banana-2", arguments={
    "prompt": "a futuristic cityscape at sunset"
})
print(result["images"][0]["url"])
The response includes a CDN URL for the generated image, along with metadata like dimensions and seed. Every model follows the same pattern: send inputs as JSON, receive outputs as JSON with media URLs.
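Before running the example, the client needs to be installed and a key configured. A minimal setup, assuming pip and an API key created in the fal dashboard:

```shell
# Install the official Python client from PyPI
pip install fal-client

# The client reads your API key from the FAL_KEY environment variable
export FAL_KEY="your-api-key"
```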

How It Works

Every model on fal is exposed as an HTTP endpoint. You can call it directly, or go through the queue for automatic retries, status tracking, and scaling. There are several calling patterns depending on your use case.

Direct (run) sends a synchronous HTTP request to fal.run and returns the result directly. This is the simplest approach for quick scripts and prototyping.

Subscribe uses the queue under the hood but handles polling automatically, so it feels synchronous. This is what the Quick Example above uses.

Asynchronous (submit) gives you full control over the queue. Submit a request and return immediately, then poll for status or receive results via webhook. This is the recommended approach for production workloads with parallel processing.

Streaming delivers output progressively as the model generates it. This is useful for LLMs that produce tokens incrementally, or for showing generation progress in a UI.

Real-time (realtime) uses WebSockets for persistent connections, bypassing the queue entirely for sub-100ms latency. It is only available for models with an explicit real-time endpoint.
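The asynchronous pattern can be sketched with the Python client. This is a sketch, not a definitive implementation: it assumes the fal_client package is installed, FAL_KEY is set, and that the queue handle exposes request_id, iter_events, and get as in the client's queue API; the model id and prompt are reused from the Quick Example, and the helper name generate_async is hypothetical.

```python
def generate_async(prompt: str, model: str = "fal-ai/nano-banana-2") -> str:
    """Queue a generation job, watch queue events, and return the image URL.

    Hypothetical helper illustrating the submit pattern; requires a valid FAL_KEY.
    """
    import fal_client  # imported here so the sketch can be read without the package

    # submit() enqueues the request and returns a handle immediately,
    # instead of blocking like run() or subscribe()
    handle = fal_client.submit(model, arguments={"prompt": prompt})
    print("queued as request", handle.request_id)

    # iter_events() yields queue updates (position, logs) until the job completes
    for event in handle.iter_events(with_logs=True):
        print(event)

    # get() blocks until the result is ready and returns the JSON payload
    result = handle.get()
    return result["images"][0]["url"]

# Usage (requires credentials):
# url = generate_async("a futuristic cityscape at sunset")
```

For fire-and-forget production jobs, the submit path also supports webhook delivery, so fal calls your endpoint with the result instead of you polling for it.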

What You Can Generate

The model gallery has 1,000+ models spanning several categories. Here are some popular starting points.

Image Generation and Editing

Nano Banana 2: Google’s fast image generation and editing model
Nano Banana Pro: State-of-the-art image generation with realism and typography
Flux 2 Flex: Enhanced typography and text rendering from BFL
Recraft V4 Pro: Professional design and marketing visuals
Nano Banana 2 Edit: Intelligent image editing with Google’s latest model
FLUX Kontext Pro: Targeted edits and complex scene transformations

Video Generation

Veo 3.1: Google DeepMind’s latest video model with sound
Kling 3.0 Pro: Cinematic image-to-video with fluid motion
Kling O3: Start and end frame animation with scene guidance
Sora 2: OpenAI’s video model with audio generation
LTX-2 19B: Video with audio from images using LTX-2
Sora 2 Pro: OpenAI’s premium video model with enhanced quality

Audio and Speech

Chatterbox TTS: Natural text-to-speech from Resemble AI
MiniMax Speech-02 HD: High-quality multi-voice text-to-speech
Dia TTS: Multi-speaker dialogue with voice cloning
Beatoven Music: Royalty-free instrumental music generation
Beatoven SFX: Professional sound effect generation
ElevenLabs Music: High-quality, realistic music generation

Explore All Models

Browse 1,000+ models across image, video, audio, 3D, and more
Every model page on fal.ai includes a Playground for testing, full API documentation with input/output schemas, pricing, and ready-to-copy code examples.

Next Steps

Playground

Test and compare models interactively before integrating

Inference

Learn the different ways to call models

Client Setup

Install and configure the fal client for Python, JavaScript, and more

Examples

Step-by-step tutorials for image, video, and audio generation