Endpoint:
POST https://fal.run/fal-ai/kling-video/o1/video-to-video/edit
Endpoint ID: `fal-ai/kling-video/o1/video-to-video/edit`
Quick Start
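A minimal sketch of calling the endpoint with the official fal Python client (`pip install fal-client`). The `keep_audio` parameter is documented below; the `video_url` and `prompt` field names and the output shape are assumptions inferred from the schema descriptions, so verify them against the playground's generated code before relying on them.

```python
# Minimal quick-start sketch for the Kling O1 Edit endpoint.
# Assumed field names: video_url, prompt (keep_audio is documented).
import os

ENDPOINT_ID = "fal-ai/kling-video/o1/video-to-video/edit"

def build_arguments(video_url: str, prompt: str, keep_audio: bool = False) -> dict:
    """Assemble the request payload for the edit endpoint."""
    return {
        "video_url": video_url,    # 3-10s, .mp4/.mov, 720-2160px, <=200MB
        "prompt": prompt,          # may reference @Element1 / @Image1
        "keep_audio": keep_audio,  # preserve source audio (default: false)
    }

arguments = build_arguments(
    "https://example.com/clip.mp4",
    "Transform the landscape into a snowy mountain range.",
)

if os.environ.get("FAL_KEY"):  # only call the API when credentials are set
    import fal_client
    result = fal_client.subscribe(ENDPOINT_ID, arguments=arguments)
    print(result["video"]["url"])  # assumed output shape: {"video": {"url": ...}}
```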
Input Schema
- Prompt: Use `@Element1`, `@Element2` to reference elements and `@Image1`, `@Image2` to reference images, in order.
- Video: Reference video URL. Only `.mp4`/`.mov` formats are supported. Constraints: max file size 200.0 MB; width and height 720-2160 px; duration 3.0-10.05 s (nominally 3-10 seconds); FPS 24.0-60.0; timeout 30.0 s.
- Keep audio: Whether to keep the original audio from the video.
- Reference images: Reference images for style/appearance. Reference them in the prompt as `@Image1`, `@Image2`, etc. Maximum 4 total (elements + reference images) when using video.
- Elements: Elements (characters/objects) to include. Reference them in the prompt as `@Element1`, `@Element2`, etc. Maximum 4 total (elements + reference images) when using video.
Output Schema
The generated video.
Input Example
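The example payload did not survive extraction; the following is a hypothetical request body. The `keep_audio` key and the `frontal_image_url`/`reference_image_urls` element structure come from this page, while `video_url`, `prompt`, and `elements` are assumed names to check against the playground:

```json
{
  "video_url": "https://example.com/clip.mp4",
  "prompt": "Replace the character with @Element1, maintaining the same movements and camera angles.",
  "keep_audio": false,
  "elements": [
    {
      "frontal_image_url": "https://example.com/character-front.png",
      "reference_image_urls": ["https://example.com/character-side.png"]
    }
  ]
}
```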
Output Example
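The example response also did not survive extraction; a hypothetical response, assuming the output schema's "generated video" is returned under a `video` key as is common for fal video endpoints:

```json
{
  "video": {
    "url": "https://example.com/output.mp4"
  }
}
```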
Context-Aware Video Transformation Without Masking
Kling O1 Edit operates on a fundamentally different approach than frame-by-frame editing tools: it understands the entire motion structure of your input video and applies transformations that respect camera angles, movement patterns, and spatial relationships. Where traditional video editing requires manual masking and frame-level adjustments, this model interprets natural language instructions and applies them across the full video duration.
What this means for you:
- Multi-reference editing: Combine up to 4 total elements and reference images in a single transformation, enabling complex character swaps with specific style references
- Motion preservation: Original camera movements and subject motion remain intact while subjects, settings, and visual style transform according to your prompt
- Natural language control: Direct the edit through conversational instructions rather than technical parameters. Example: “Replace the character with @Element1, maintaining the same movements and camera angles. Transform the landscape into @Image1”
- Audio preservation: Choose to keep original audio from your source video or generate silent output through the keep_audio parameter
- Element structure: Each element accepts one frontal image plus multiple reference angles (frontal_image_url + reference_image_urls array), giving the model comprehensive visual context for accurate transformations
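The element structure and the combined reference budget described above can be sketched as follows. The `frontal_image_url` and `reference_image_urls` keys are named on this page; everything else (helper names, top-level handling) is illustrative:

```python
# Sketch of assembling elements and style reference images under the
# documented combined cap of 4 (elements + reference images) with video input.

def make_element(frontal_image_url: str, reference_image_urls=None) -> dict:
    """One element: a frontal image plus optional extra reference angles."""
    return {
        "frontal_image_url": frontal_image_url,
        "reference_image_urls": list(reference_image_urls or []),
    }

def check_reference_budget(elements: list, reference_images: list) -> None:
    """Enforce the combined limit of 4 elements + reference images."""
    total = len(elements) + len(reference_images)
    if total > 4:
        raise ValueError(f"{total} references given; maximum is 4 combined")

elements = [make_element("https://example.com/hero-front.png",
                         ["https://example.com/hero-side.png"])]
reference_images = ["https://example.com/style.png"]
check_reference_budget(elements, reference_images)  # 2 total, within budget

# Elements are addressed as @Element1..., reference images as @Image1..., in order:
prompt = "Replace the character with @Element1. Transform the landscape into @Image1."
```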
Performance That Scales
Kling O1 Edit’s per-second pricing model reflects the computational complexity of motion-preserving video transformation, with costs scaling directly to your input video duration.
| Metric | Result | Context |
|---|---|---|
| Cost per Video | $0.50-$1.68 | 3-10 second input duration at $0.168/second |
| Input Duration | 3-10 seconds | Supports .mp4, .mov, .webm, .m4v, .gif up to 200MB |
| Resolution Range | 720-2160px | Accepts standard HD through 4K input resolutions |
| Reference Capacity | Up to 4 total | Combined limit for elements and style reference images |
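Since pricing is purely per-second, cost is a linear function of input duration. A quick sketch of the arithmetic at the listed $0.168/second rate:

```python
# Per-second pricing: cost scales linearly with input duration
# at $0.168/second, with input clips limited to 3-10 seconds.
RATE_PER_SECOND = 0.168

def edit_cost(duration_seconds: float) -> float:
    if not 3 <= duration_seconds <= 10:
        raise ValueError("input video must be 3-10 seconds")
    return round(duration_seconds * RATE_PER_SECOND, 3)

print(edit_cost(3))   # 0.504 -> about $0.50 for the shortest clip
print(edit_cost(10))  # 1.68  -> $1.68 for the longest clip
```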
Technical Specifications
| Spec | Details |
|---|---|
| Architecture | Kling O1 Edit |
| Input Formats | Video (.mp4, .mov, .webm, .m4v, .gif), reference images (.jpg, .jpeg, .png, .webp, .gif, .avif) |
| Output Formats | .mp4 video |
| Video Duration | 3-10 seconds (output matches input duration) |
| Audio Handling | Optional audio preservation via keep_audio parameter (default: false) |
| Prompt Syntax | @Element1, @Element2 for tracked elements; @Image1, @Image2 for style references |
| License | Commercial use via fal partnership |
How It Stacks Up
- Sora 2 Video to Video: Kling O1 Edit prioritizes multi-reference element control with up to 4 combined inputs for complex character and environment transformations. Sora 2’s remix capabilities emphasize broader creative reinterpretation and style transfer across longer video durations, ideal for narrative content that requires substantial visual reimagining.
- Wan Video to Video: Kling O1 Edit’s natural language interface eliminates technical parameter tuning, making it accessible for creators who want direct prompt-based control. Wan’s video-to-video endpoint offers granular parameter control for users who need precise technical adjustments in their transformation workflows.
- AnimateDiff Video to Video: Kling O1 Edit maintains the original motion structure while transforming visual content, preserving the exact camera movements and subject actions from your source footage. AnimateDiff focuses on animation-style transformations and motion synthesis, serving creators building stylized or animated content from video references.
Related
- Kling O1 First Frame Last Frame to Video [Pro] — Video Generation
- Kling O1 Reference Image to Video [Pro] — Video Generation
- Kling O1 Edit Video [Standard] — Video Generation
- Kling O1 Reference Video to Video [Pro] — Video Generation
Limitations
- `aspect_ratio` restricted to: `auto`, `16:9`, `9:16`, `1:1`
- `duration` restricted to: `3`, `4`, `5`, `6`, `7`, `8`, `9`, `10`