Optimize Models Using fal's Inference Engine

Runtime Model Optimizations

fal's inference engine bindings take a torch module and apply all relevant dynamic compilation and quantization techniques to make it faster out of the box, without leaking any of that complexity to the user.

This API is currently experimental and may change in the future.
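Conceptually, fal.toolkit.optimize is a drop-in wrapper: it takes a module, prepares an optimized version, and returns an object with the same call signature. The toy sketch below (plain Python, no torch; the flag merely stands in for real compilation and quantization, and the names are illustrative, not fal's implementation) shows that wrap-and-return shape:

```python
from functools import wraps


def optimize(module_call):
    # Toy stand-in for fal.toolkit.optimize (illustration only, not the
    # real implementation): wrap a callable, do one-time preparation
    # lazily on the first call, and return a drop-in replacement.
    state = {"prepared": False}

    @wraps(module_call)
    def wrapper(*args, **kwargs):
        if not state["prepared"]:
            # In fal's engine this is where compilation/quantization
            # would happen; here we only flip a flag.
            state["prepared"] = True
        return module_call(*args, **kwargs)

    return wrapper


def unet_forward(x):
    # Hypothetical forward pass standing in for a torch module.
    return x * 2


unet_forward = optimize(unet_forward)
print(unet_forward(3))  # prints 6
```

Because the wrapper preserves the original call signature, optimized and unoptimized modules are interchangeable in the rest of the pipeline.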

Example usage:

import fal
import fal.toolkit
from fal.toolkit import Image
from pydantic import BaseModel, Field


class Input(BaseModel):
    prompt: str = Field(
        description="The prompt to generate an image from.",
        examples=[
            "A cinematic shot of a baby racoon wearing an intricate italian priest robe.",
        ],
    )


class Output(BaseModel):
    image: Image = Field(
        description="The generated image.",
    )


class FalModel(fal.App):
    machine_type = "GPU"
    requirements = [
        "diffusers",
        "torch",
        "transformers",
        "accelerate",
    ]

    def setup(self) -> None:
        import torch
        from diffusers import AutoPipelineForText2Image

        # Load SDXL
        self.pipeline = AutoPipelineForText2Image.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16,
        ).to("cuda")

        # Apply fal's spatial optimizer to the pipeline.
        self.pipeline.unet = fal.toolkit.optimize(self.pipeline.unet)
        self.pipeline.vae = fal.toolkit.optimize(self.pipeline.vae)

        # Warm up the model.
        self.pipeline(prompt="a cat")

    @fal.endpoint("/")
    def text_to_image(self, input: Input) -> Output:
        result = self.pipeline(prompt=input.prompt)
        [image] = result.images
        return Output(image=Image.from_pil(image))

2023 © Features and Labels Inc.