fal gives you two ways to work with AI models. To generate images, video, audio, or other media, the Model APIs let you call 1,000+ production-ready models with a single request, either through fal's client libraries or over plain HTTP. To deploy your own model, Serverless covers the full lifecycle — develop, test, deploy, and scale on the same GPU infrastructure that powers the marketplace, using the same fal.App framework behind every model on the platform. Both paths start with an API key and take only a few minutes to set up.
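The HTTP path needs nothing beyond the standard library. Below is a minimal, hedged sketch of building an authenticated request: the synchronous endpoint pattern (`https://fal.run/<model-id>`) and the `Authorization: Key ...` scheme follow fal's HTTP conventions, but check the API reference for your specific model before relying on them.

```python
import json
import os
import urllib.request


def build_fal_request(model_id: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for fal's synchronous HTTP endpoint."""
    return urllib.request.Request(
        url=f"https://fal.run/{model_id}",  # run-to-completion endpoint (assumed pattern)
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # FAL_KEY comes from the fal dashboard; see the steps below.
            "Authorization": f"Key {os.environ['FAL_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example use (performs a real network call, so it is left commented out):
# req = build_fal_request("fal-ai/flux/schnell", {"prompt": "a watercolor fox"})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["images"][0]["url"])
```

The client libraries shown later wrap this same request plus queueing and retries, so prefer them when you can.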

What do you want to build?

Generate Images

Create images from text prompts with FLUX, Nano Banana 2, and more

Generate Videos

Transform images into videos with Kling 3.0, Sora 2, and other models

Transcribe Audio

Convert speech to text with Whisper

Use LLMs

Build with Llama, Mistral, and other large language models

Fast FLUX

Ultra-fast image generation with optimized FLUX

Build a Workflow UI

Create interfaces for complex AI workflows

Next.js Integration

Build full-stack AI apps with Next.js and fal

Vercel Integration

Deploy AI-powered apps on Vercel with fal

n8n Integration

Automate workflows by connecting fal models to n8n

Quick example

Generate your first image in under a minute.
1. Install the client

pip install fal-client
2. Set your API key

Get a key from the fal dashboard and set it as an environment variable:
export FAL_KEY="your-api-key-here"
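The client libraries pick up `FAL_KEY` from the environment automatically. If you want to fail fast with a clear message instead of a deep auth error, a small guard like this (a sketch, not part of the fal SDK) can help:

```python
import os


def require_fal_key() -> str:
    """Return FAL_KEY from the environment, or fail with an actionable message."""
    key = os.environ.get("FAL_KEY")
    if not key:
        raise RuntimeError(
            "FAL_KEY is not set; export it before calling fal models."
        )
    return key
```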
3. Generate an image

import fal_client

# subscribe() queues the request and blocks until the result is ready.
result = fal_client.subscribe(
    "fal-ai/flux/schnell",
    arguments={"prompt": "a futuristic cityscape at sunset"},
)
print(result["images"][0]["url"])
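Under the hood, `subscribe` submits the request to fal's queue and polls a status URL until the result is ready. The loop itself can be sketched in a transport-agnostic way; the `COMPLETED` status value and the queue workflow are assumptions based on fal's queue API, so verify them against the API reference. The `fetch` callable is a hypothetical hook standing in for whatever HTTP client you use.

```python
import time


def poll_until_done(status_url, fetch, interval=0.5, timeout=300.0):
    """Poll a status URL until the queued request reports COMPLETED.

    `fetch` is any callable that GETs a URL and returns the decoded JSON
    body, so this loop works with urllib, requests, or a test double.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = fetch(status_url)
        # Assumed queue states also include values like IN_QUEUE / IN_PROGRESS.
        if body.get("status") == "COMPLETED":
            return body
        time.sleep(interval)
    raise TimeoutError("fal request did not complete in time")
```

In practice `fal_client.subscribe` does all of this for you; reach for a manual loop only when integrating from a language without an official client.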

Next Steps

Get Your API Key

Create an API key to authenticate your requests

AI Tools

Use AI coding assistants to build with fal faster

Explore Models

Browse 1,000+ available models