OmniHuman Image to Video
ByteDance's OmniHuman model generates audio-synchronized videos from a single reference image at $0.14 per second of output. Trained on 18,700 hours of human motion data, it delivers precise lip sync and keeps character movement tightly correlated with the audio input. It is built for content creators who need realistic talking-head videos without motion capture equipment.
Use Cases: Social Media Content | Product Demos with Presenters | Educational Video Production
Performance
At $0.14 per second, OmniHuman sits in the mid-range for image-to-video generation on fal; that price buys specialized audio-sync capabilities that general-purpose models don't prioritize.
| Metric | Result | Context |
|---|---|---|
| Audio Sync Quality | Tight emotion/movement correlation | Trained on 18,700 hours of human motion data |
| Max Audio Duration | 30 seconds | Hard limit enforced at API level |
| Cost per Second | $0.14 | Billed on actual audio/video duration |
| Output Quality | High-fidelity video | Specialized for human figure animation |
| Related Endpoints | OmniHuman v1.5, Seedance Pro, Seedance Lite | ByteDance family variants for different quality/cost tradeoffs |
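As a rough illustration of the pricing and duration limit above, the sketch below estimates the billed cost of a clip. The helper name is hypothetical, and the rejection of audio over 30 seconds simply mirrors the hard limit noted in the table.

```python
# Minimal cost sketch based on the table above: $0.14 per second of output,
# with audio capped at 30 seconds. Helper name is hypothetical.
PRICE_PER_SECOND = 0.14
MAX_AUDIO_SECONDS = 30


def estimate_cost(audio_seconds: float) -> float:
    """Estimate the billed cost for a clip of the given audio duration."""
    if audio_seconds > MAX_AUDIO_SECONDS:
        # The table notes a hard 30-second limit enforced at the API level.
        raise ValueError(f"Audio exceeds the {MAX_AUDIO_SECONDS}s limit")
    return round(audio_seconds * PRICE_PER_SECOND, 2)


print(estimate_cost(12))  # 1.68
print(estimate_cost(30))  # 4.2
```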
Audio-First Video Generation
OmniHuman flips the standard image-to-video workflow by making audio the primary control signal rather than text prompts or motion parameters. Where most models animate based on text descriptions or keyframes, this architecture analyzes audio waveforms to drive facial expressions, lip movements, and body language simultaneously.
What this means for you:
- Natural speech synchronization: Upload any audio file under 30 seconds and get matching lip movements without manual keyframe adjustment or phoneme mapping
- Emotion-driven animation: The model interprets audio tone and pacing to generate corresponding facial expressions and body gestures, not just mouth shapes
- Single-image input: Start with one reference photo rather than building character rigs or providing multiple angles
- Production-ready output: High-fidelity video maintains visual consistency across the full duration without drift or quality degradation
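To make the audio-first workflow concrete, here is a minimal Python sketch using fal's `fal_client` library. The endpoint id, the `image_url`/`audio_url` argument names, and the `result["video"]["url"]` response shape are assumptions for illustration; check the API documentation linked below for the exact schema.

```python
# Minimal sketch of an audio-first generation request via fal's Python client.
# Assumptions (verify against the API docs): the endpoint id, the
# image_url/audio_url argument names, and the result["video"]["url"] shape.
import urllib.request

import fal_client

# Upload local inputs and get hosted URLs (any accepted image/audio format).
image_url = fal_client.upload_file("presenter.png")
audio_url = fal_client.upload_file("narration.mp3")  # must be under 30 seconds

# Audio is the primary control signal: no text prompt or motion parameters
# are needed to drive lip sync, expressions, and gestures.
result = fal_client.subscribe(
    "fal-ai/bytedance/omnihuman",  # assumed endpoint id
    arguments={
        "image_url": image_url,  # single reference photo
        "audio_url": audio_url,  # drives expressions and body language
    },
)

# Assumed response shape: save the generated MP4 locally.
video_url = result["video"]["url"]
urllib.request.urlretrieve(video_url, "talking_head.mp4")
print("Saved talking_head.mp4")
```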
Technical Specifications
| Spec | Details |
|---|---|
| Architecture | OmniHuman |
| Input Formats | Single image (JPEG, PNG, WebP, GIF, AVIF) + audio file (MP3, OGG, WAV, M4A, AAC) |
| Output Formats | MP4 video |
| Max Audio Duration | 30 seconds |
| License | Commercial use allowed via fal partnership |
API Documentation | Quickstart Guide | Enterprise Pricing
How It Stacks Up
ByteDance OmniHuman v1.5 ($0.14/sec) – OmniHuman v1.5 offers enhanced audio processing and improved motion quality at the same $0.14 per second price point. The original OmniHuman remains a solid choice for projects whose quality requirements are already met without v1.5's refinements.
Seedance 1.0 Pro ($0.14/sec) – OmniHuman prioritizes audio-driven human animation with specialized lip-sync capabilities. Seedance Pro trades audio control for broader creative motion generation across any subject type, ideal for product animations or scene transitions where audio sync isn't critical.
Seedance 1.0 Lite ($0.08/sec) – Seedance Lite delivers 43% cost savings ($0.08 vs $0.14 per second) by simplifying motion generation without audio processing. Best for budget-conscious projects that can handle reduced quality or don't require speech synchronization.
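To put the price differences in per-clip terms, the short sketch below compares costs at the rates quoted above; the 30-second duration is purely illustrative.

```python
# Per-clip cost comparison at the quoted per-second rates.
RATES = {
    "OmniHuman": 0.14,
    "OmniHuman v1.5": 0.14,
    "Seedance 1.0 Pro": 0.14,
    "Seedance 1.0 Lite": 0.08,
}

CLIP_SECONDS = 30  # illustrative duration

for model, rate in RATES.items():
    print(f"{model}: ${rate * CLIP_SECONDS:.2f} per {CLIP_SECONDS}s clip")

# Savings of Seedance Lite relative to the $0.14/sec models (~43%).
savings = (0.14 - 0.08) / 0.14
print(f"Seedance Lite savings: {savings:.0%}")
```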