Endpoint: POST https://fal.run/fal-ai/kling-video/o1/video-to-video/edit
Endpoint ID: fal-ai/kling-video/o1/video-to-video/edit


Quick Start

import fal_client

def on_queue_update(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/kling-video/o1/video-to-video/edit",
    arguments={
        "prompt": "Replace the character in the video with @Element1, maintaining the same movements and camera angles. Transform the landscape into @Image1",
        "video_url": "https://v3b.fal.media/files/b/rabbit/ku8_Wdpf-oTbGRq4lB5DU_output.mp4"
    },
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result)

Input Schema

prompt
string
required
Use @Element1, @Element2 to reference elements and @Image1, @Image2 to reference images in order.
video_url
string
required
Reference video URL. Only .mp4/.mov formats are supported. Constraints: max file size 200.0MB; width 720-2160px; height 720-2160px; duration 3.0-10.05s; FPS 24.0-60.0; download timeout 30.0s.
keep_audio
boolean
default:"false"
Whether to keep the original audio from the video.
image_urls
list<string>
Reference images for style/appearance. Reference in prompt as @Image1, @Image2, etc. Maximum 4 total (elements + reference images) when using video.
elements
list<OmniVideoElementInput>
Elements (characters/objects) to include. Reference in prompt as @Element1, @Element2, etc. Maximum 4 total (elements + reference images) when using video.
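The combined limit on elements and reference images can be checked client-side before submitting a request. A minimal sketch; the `validate_references` helper is illustrative and not part of the fal_client API:

```python
def validate_references(arguments: dict) -> None:
    """Check a request payload against the schema above:
    prompt and video_url are required, and elements + image_urls
    may total at most 4 when a video is supplied.

    Illustrative helper, not part of fal_client.
    """
    if not arguments.get("prompt"):
        raise ValueError("prompt is required")
    if not arguments.get("video_url"):
        raise ValueError("video_url is required")
    n_elements = len(arguments.get("elements", []))
    n_images = len(arguments.get("image_urls", []))
    if n_elements + n_images > 4:
        raise ValueError(
            f"too many references: {n_elements} elements + {n_images} images "
            "(maximum 4 combined)"
        )


validate_references({
    "prompt": "Replace the character in the video with @Element1",
    "video_url": "https://example.com/input.mp4",  # placeholder URL
    "image_urls": ["https://example.com/style.png"],
})
```

Running this check locally avoids a round trip to the API for requests that would be rejected anyway.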

Output Schema

video
File
required
The generated video.

Input Example

{
  "prompt": "Replace the character in the video with @Element1, maintaining the same movements and camera angles. Transform the landscape into @Image1",
  "video_url": "https://v3b.fal.media/files/b/rabbit/ku8_Wdpf-oTbGRq4lB5DU_output.mp4",
  "keep_audio": false,
  "image_urls": [
    "https://v3b.fal.media/files/b/lion/MKvhFko5_wYnfORYacNII_AgPt8v25Wt4oyKhjnhVK5.png"
  ],
  "elements": [
    {
      "frontal_image_url": "https://v3b.fal.media/files/b/panda/MQp-ghIqshvMZROKh9lW3.png",
      "reference_image_urls": [
        "https://v3b.fal.media/files/b/kangaroo/YMpmQkYt9xugpOTQyZW0O.png",
        "https://v3b.fal.media/files/b/zebra/d6ywajNyJ6bnpa_xBue-K.png"
      ]
    }
  ]
}

Output Example

{
  "video": {
    "content_type": "video/mp4",
    "file_name": "output.mp4",
    "file_size": 7533071,
    "url": "https://v3b.fal.media/files/b/0a86603b/YAlbB2535l07BTy1wpDeI_output.mp4"
  }
}
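The result payload nests the generated file under a `video` key, as in the output example above. A small sketch for pulling out the download URL and size (the payload literal here just mirrors that example):

```python
def extract_video(result: dict) -> tuple:
    """Return (url, file_size) from a result shaped like the
    output schema above: a single File object under "video"."""
    video = result["video"]
    return video["url"], video["file_size"]


# Payload copied from the output example above.
result = {
    "video": {
        "content_type": "video/mp4",
        "file_name": "output.mp4",
        "file_size": 7533071,
        "url": "https://v3b.fal.media/files/b/0a86603b/YAlbB2535l07BTy1wpDeI_output.mp4",
    }
}
url, size = extract_video(result)
```

The returned URL can then be fetched with any HTTP client to save the .mp4 locally.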
Kuaishou’s Kling O1 Edit delivers natural-language video transformation at $0.168 per second, trading traditional masking workflows for prompt-based editing that preserves the original motion structure. The model accepts up to 4 combined reference elements and images, enabling complex character swaps and environment transformations through simple text commands.
Built for:
  • Character replacement in existing footage
  • Scene environment transformations
  • Style transfer while maintaining motion

Context-Aware Video Transformation Without Masking

Kling O1 Edit operates on a fundamentally different approach than frame-by-frame editing tools: it understands the entire motion structure of your input video and applies transformations that respect camera angles, movement patterns, and spatial relationships. Where traditional video editing requires manual masking and frame-level adjustments, this model interprets natural language instructions and applies them across the full video duration. What this means for you:
  • Multi-reference editing: Combine up to 4 total elements and reference images in a single transformation, enabling complex character swaps with specific style references
  • Motion preservation: Original camera movements and subject motion remain intact while subjects, settings, and visual style transform according to your prompt
  • Natural language control: Direct the edit through conversational instructions rather than technical parameters. Example: “Replace the character with @Element1, maintaining the same movements and camera angles. Transform the landscape into @Image1”
  • Audio preservation: Choose to keep original audio from your source video or generate silent output through the keep_audio parameter
  • Element structure: Each element accepts one frontal image plus multiple reference angles (frontal_image_url + reference_image_urls array), giving the model comprehensive visual context for accurate transformations
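The per-element structure described above is a plain object: one required frontal image plus optional extra angles. A sketch of a builder for `OmniVideoElementInput` entries (the helper name and URLs are illustrative):

```python
def make_element(frontal_image_url: str, reference_image_urls=None) -> dict:
    """Build one OmniVideoElementInput entry: a required frontal image
    plus an optional list of additional reference angles.

    Illustrative helper, not part of the fal_client API.
    """
    element = {"frontal_image_url": frontal_image_url}
    if reference_image_urls:
        element["reference_image_urls"] = list(reference_image_urls)
    return element


# Placeholder URLs; in practice these point at hosted character images.
elements = [
    make_element(
        "https://example.com/character_front.png",
        ["https://example.com/character_side.png",
         "https://example.com/character_back.png"],
    )
]
```

The resulting list drops straight into the `elements` field of the request arguments.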

Performance That Scales

Kling O1 Edit’s per-second pricing model reflects the computational complexity of motion-preserving video transformation, with costs scaling directly to your input video duration.
Cost per video: $0.50-$1.68 (3-10 second input duration at $0.168/second)
Input duration: 3-10 seconds (supports .mp4, .mov, .webm, .m4v, .gif up to 200MB)
Resolution range: 720-2160px (accepts standard HD through 4K input resolutions)
Reference capacity: up to 4 total (combined limit for elements and style reference images)
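Because pricing is strictly per second of input, the charge for a clip is easy to estimate up front. A quick sketch; the 3-10.05 s bounds mirror the input constraints, and the rate comes from the pricing above:

```python
PRICE_PER_SECOND = 0.168  # USD, per the pricing above


def estimate_cost(duration_seconds: float) -> float:
    """Estimate the charge for one edit from the input clip duration."""
    if not 3.0 <= duration_seconds <= 10.05:
        raise ValueError("input duration must be 3-10 seconds")
    return round(duration_seconds * PRICE_PER_SECOND, 3)


estimate_cost(5)  # a 5-second clip costs $0.84
```

This matches the $0.50-$1.68 range in the table for 3-10 second inputs.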

Technical Specifications

Architecture: Kling O1 Edit
Input formats: video (.mp4, .mov, .webm, .m4v, .gif); reference images (.jpg, .jpeg, .png, .webp, .gif, .avif)
Output format: .mp4 video
Video duration: 3-10 seconds (output matches input duration)
Audio handling: optional audio preservation via keep_audio parameter (default: false)
Prompt syntax: @Element1, @Element2 for tracked elements; @Image1, @Image2 for style references
License: commercial use via fal partnership
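The @Element/@Image syntax is positional, so the tokens available to a prompt can be derived from the request payload itself. An illustrative helper (not part of the API) that lists them in the order the endpoint assigns:

```python
def reference_tokens(arguments: dict) -> list:
    """List the @ElementN / @ImageN tokens a prompt may use,
    numbered positionally from the elements and image_urls lists."""
    tokens = [f"@Element{i}"
              for i in range(1, len(arguments.get("elements", [])) + 1)]
    tokens += [f"@Image{i}"
               for i in range(1, len(arguments.get("image_urls", [])) + 1)]
    return tokens


reference_tokens({
    "elements": [{}, {}],
    "image_urls": ["https://example.com/a.png"],  # placeholder URL
})
# ["@Element1", "@Element2", "@Image1"]
```

A prompt that references a token outside this list (e.g. @Element3 with only two elements) has nothing to resolve to, so generating the list first is a cheap sanity check.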

How It Stacks Up

Sora 2 Video to Video: Kling O1 Edit prioritizes multi-reference element control, with up to 4 combined inputs for complex character and environment transformations. Sora 2’s remix capabilities emphasize broader creative reinterpretation and style transfer across longer video durations, making it a better fit for narrative content that requires substantial visual reimagining.

Wan Video to Video: Kling O1 Edit’s natural-language interface eliminates technical parameter tuning, making it accessible for creators who want direct prompt-based control. Wan’s video-to-video endpoint offers granular parameter control for users who need precise technical adjustments in their transformation workflows.

AnimateDiff Video to Video: Kling O1 Edit maintains the original motion structure while transforming visual content, preserving the exact camera movements and subject actions from the source footage. AnimateDiff focuses on animation-style transformations and motion synthesis, serving creators building stylized or animated content from video references.

Limitations

  • aspect_ratio restricted to: auto, 16:9, 9:16, 1:1
  • duration restricted to: 3, 4, 5, 6, 7, 8, 9, 10