Model Gallery
Featured Models
Check out some of our most popular models
Generate video clips from your prompts using the MiniMax model
Generate video clips from your images using the MiniMax Video model
Transform text into hyper-realistic videos with Haiper 2.0. Experience industry-leading resolution, fluid motion, and rapid generation for stunning AI videos.
Search Results
48 models found
The video upscaler endpoint applies RealESRGAN to each frame of the input video, upscaling it to a higher resolution.
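The per-frame approach described above can be sketched roughly as follows. This is a hedged illustration, not the endpoint's actual implementation: `upscale_frame` and `upscale_video` are hypothetical names, and the RealESRGAN inference itself is replaced by a stub that only scales the frame dimensions.

```python
# Hedged sketch (not the actual endpoint implementation): a video
# upscaler of this kind decodes the clip, runs a super-resolution
# model on every frame independently, and reassembles the result.
# RealESRGAN is stubbed out here; frames are (width, height) tuples.

def upscale_frame(frame, scale=2):
    """Stand-in for the RealESRGAN call: returns the frame at
    scale-times the original resolution."""
    width, height = frame
    return (width * scale, height * scale)

def upscale_video(frames, scale=2):
    # Each frame is processed independently, then kept in order.
    return [upscale_frame(f, scale) for f in frames]

clip = [(640, 360)] * 3            # a 3-frame 640x360 clip
print(upscale_video(clip))         # [(1280, 720), (1280, 720), (1280, 720)]
```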
Generate videos from prompts using CogVideoX-5B
Generate videos from videos and prompts using CogVideoX-5B
Generate videos from images and prompts using CogVideoX-5B
Hunyuan Video is an open video generation model with high visual quality, motion diversity, text-video alignment, and generation stability.
Generate videos from prompts using LTX Video
Generate videos from images using LTX Video
Generate short video clips from your prompts using SVD v1.1
Generate short video clips from your images using SVD v1.1 at lightning speed
Re-animate your videos with evolved consistency!
Generate short video clips from your prompts
Generate video clips from your images using the MiniMax Video model
Generate video clips from your prompts using the MiniMax model
Transform text into hyper-realistic videos with Haiper 2.0. Experience industry-leading resolution, fluid motion, and rapid generation for stunning AI videos.
Generate short video clips from your images using SVD v1.1
Generate video clips from your prompts using Luma Dream Machine v1.5
Generate video clips from your prompts using Kling 1.0
Generate video clips from your images using Kling 1.0
Generate video clips from your prompts using Kling 1.0 (pro)
Generate video clips from your images using Kling 1.0 (pro)
Generate video clips from your images using Kling 1.5 (pro)
Generate video clips from your prompts using Kling 1.5 (pro)
Generate video clips from your images using Luma Dream Machine v1.5
MMAudio generates synchronized audio given video and/or text inputs. It can be combined with video models to get videos with audio.
Interpolate between video frames
Re-animate your videos at lightning speed!
This endpoint delivers seamlessly localized videos by generating lip-synced dubs in multiple languages, ensuring natural and immersive multilingual experiences.
Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation.
Re-animate your videos!
Transfer expression from a video to a portrait.
SAM 2 is a model for segmenting images and videos in real-time.
Multimodal vision-language model for video understanding
Animate a reference image with a driving video using ControlNeXt.
Generate realistic lipsync animations from audio using advanced algorithms for high-quality synchronization.
Interpolate between image frames
Animate your ideas!
Animate your ideas at lightning speed!
Animate Your Drawings with Latent Consistency Models!
Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
MuseTalk is a real-time high quality audio-driven lip-syncing model. Use MuseTalk to animate a face with your own audio.