LongCat Video [Image to Video]
Transform static images into dynamic, high-quality 720p videos at 30fps with intelligent motion generation.
Overview
LongCat Video delivers professional-grade image-to-video generation that brings still images to life with natural, coherent motion. Built to handle complex scenes and maintain visual fidelity, this model excels at creating smooth animations, dynamic camera movements, and realistic object motion without requiring specialized video editing skills or complex post-processing workflows.
Key Capabilities
- High-resolution output: Generate 720p videos at 30fps for professional applications
- Flexible duration control: Create videos from short clips to extended sequences
- Prompt-guided motion: Direct movement and animation style with text descriptions
- Single-image input: Transform any static image into engaging video content
Pricing and Usage
Your request costs $0.04 per generated second of video. Generated seconds are calculated at 30 frames per second, making it straightforward to estimate costs based on your desired video length.
Cost calculation example:
- 90 frames at 30fps = 3 seconds Ă— $0.04 = $0.12
- 180 frames at 30fps = 6 seconds Ă— $0.04 = $0.24
For volume pricing or enterprise solutions, contact sales to discuss custom plans that fit your production needs.
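The arithmetic above can be wrapped in a small helper. This is an illustrative sketch, not part of the fal client; `estimateCost` and its constants are assumptions based on the pricing stated here:

```javascript
// Illustrative helper: estimate the cost of a generation request
// from its frame count, at 30 fps and $0.04 per generated second.
const FPS = 30;
const PRICE_PER_SECOND = 0.04;

function estimateCost(numFrames) {
  const seconds = numFrames / FPS;
  return { seconds, cost: seconds * PRICE_PER_SECOND };
}

console.log(estimateCost(90));  // 3 seconds of video
console.log(estimateCost(180)); // 6 seconds of video
```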
Popular Use Cases
Marketing and Advertising Content
Transform product photography into attention-grabbing video ads. Take a static shot of a car and add dynamic camera movements that circle the vehicle, or animate a fashion photo with subtle fabric movement and atmospheric effects. Perfect for social media campaigns, landing pages, and digital advertising where motion increases engagement.
Content Creation and Social Media
Breathe life into existing photo libraries for YouTube thumbnails, Instagram Reels, or TikTok content. Convert travel photography into cinematic sequences with parallax effects and camera pans, or animate portrait shots with natural head movements and environmental motion. Ideal for creators who need video content but work primarily with still images.
Concept Visualization and Prototyping
Rapidly prototype video concepts from storyboard frames or concept art. Generate multiple motion variations from a single image to explore different creative directions before committing to full video production. Particularly valuable for agencies pitching ideas or filmmakers testing shot compositions.
Getting Started
Getting up and running with LongCat Video takes just a few minutes. Here's how to begin:
- Get your API key at fal.ai/login — sign up and access your credentials from the dashboard
- Install the client library using npm, yarn, or your preferred package manager
- Make your first API call with an image URL and optional prompt to guide the motion
Basic Example
```javascript
import { fal } from "@fal-ai/client";

fal.config({ credentials: "YOUR_FAL_KEY" });

const result = await fal.subscribe("fal-ai/longcat-video/image-to-video/720p", {
  input: {
    image_url: "https://your-image-url.jpg",
    prompt: "Smooth forward camera movement through the scene, maintaining focus on the subject with natural environmental motion"
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  }
});

console.log(result.data.video.url);
```
Python Example
```python
import fal_client

# Credentials are read from the FAL_KEY environment variable.
result = fal_client.subscribe(
    "fal-ai/longcat-video/image-to-video/720p",
    arguments={
        "image_url": "https://your-image-url.jpg",
        "prompt": "Smooth forward camera movement through the scene, maintaining focus on the subject with natural environmental motion",
    },
)
print(result["video"]["url"])
```
Advanced Example with Frame Control
```javascript
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("fal-ai/longcat-video/image-to-video/720p", {
  input: {
    image_url: "https://your-image-url.jpg",
    prompt: "Slow circular camera pan around the subject with depth-of-field effects",
    num_frames: 180 // 6 seconds at 30fps
  }
});

console.log(`Generated video: ${result.data.video.url}`);
```
Technical Specifications
Model Architecture
- Output resolution: 720p (1280Ă—720) at 30 frames per second
- Frame control: Configurable video length through frame count parameter
- Input formats: Supports JPG, JPEG, PNG, WebP, GIF, and AVIF image formats
- Processing: Queue-based system for efficient handling of generation requests
Input Capabilities
- Image sources: Direct URLs, base64 data URIs, or uploaded files through the fal storage API
- Prompt guidance: Optional text descriptions to control motion style, camera movement, and animation characteristics
- Seed control: Reproducible results through seed parameter for consistent outputs
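As a minimal Node.js sketch of the base64 option: local image bytes can be encoded as a data URI and supplied directly as `image_url` (the `toDataUri` helper below is illustrative, not part of the fal client):

```javascript
// Illustrative helper: encode raw image bytes as a data URI that
// can be passed as image_url instead of a hosted URL.
function toDataUri(bytes, mimeType = "image/png") {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
}

// Typical usage with a local file (path is a placeholder):
// const { readFileSync } = await import("node:fs");
// const imageUrl = toDataUri(readFileSync("./photo.png"));
// Then pass imageUrl as image_url in the request input.
```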
Performance
- Generation time: Varies based on requested frame count and scene complexity
- Queue system: Asynchronous processing with status updates and webhook support for long-running requests
- Output delivery: Direct MP4 video URLs hosted on fal's CDN infrastructure
Best Practices
Achieve optimal results with these proven approaches:
Craft Specific Motion Prompts
Instead of generic descriptions, specify exactly what should move and how. Rather than "make it dynamic," try "slow forward dolly movement with the camera tracking the subject while background elements show subtle parallax motion." Include details about camera behavior (pan, tilt, zoom), subject animation (walking, turning, gesturing), and environmental effects (wind, lighting changes). The more precise your motion description, the more control you have over the final result.
Choose Images with Clear Depth
Images with distinct foreground, midground, and background elements produce more convincing motion and parallax effects. Photos with obvious depth cues—like a subject in front of a landscape or architectural elements at varying distances—allow the model to generate more natural camera movements. Avoid flat, head-on shots where depth is ambiguous, as these limit the model's ability to create dimensional motion.
Match Frame Count to Content Type
Select frame counts that align with your intended use case and motion complexity. Shorter sequences (60–90 frames, or 2–3 seconds) work well for social media loops and product showcases where you want punchy, repeatable content. Longer sequences (120–180 frames, or 4–6 seconds) suit narrative moments, establishing shots, or content requiring more elaborate camera movements. Consider that costs scale linearly with duration—test shorter versions first to validate your creative direction.
Leverage Seed for Iteration
Use the seed parameter when you want to explore variations while maintaining reproducibility. Generate a video with a specific seed, evaluate the results, then adjust your prompt while keeping the seed constant to isolate the effect of your prompt changes. This approach helps you refine motion quality without introducing random variation from different seeds.
Advanced Features
Seed-Based Generation
Control randomness and ensure reproducible results by specifying a seed value in your requests. This proves particularly valuable when you need to generate variations of the same motion or maintain consistency across multiple video generations from similar source images. Use the same seed with identical parameters to recreate exact results, or vary the seed to explore different motion interpretations of the same prompt.
```javascript
import { fal } from "@fal-ai/client";

// Generate with reproducible results
const result = await fal.subscribe("fal-ai/longcat-video/image-to-video/720p", {
  input: {
    image_url: "https://your-image-url.jpg",
    prompt: "Gentle camera zoom with soft focus transitions",
    seed: 42 // Same seed with identical parameters produces identical output
  }
});
```
Webhook Integration
For production workflows processing multiple videos, implement webhooks to receive completion notifications rather than polling for status. Submit requests to the queue with a webhook URL, and the system will POST results to your endpoint when generation completes. This approach reduces API calls, improves efficiency, and enables asynchronous processing pipelines where video generation happens in the background while your application handles other tasks.
```javascript
import { fal } from "@fal-ai/client";

const { request_id } = await fal.queue.submit(
  "fal-ai/longcat-video/image-to-video/720p",
  {
    input: {
      image_url: "https://your-image-url.jpg",
      prompt: "Dynamic camera movement through architectural space"
    },
    webhookUrl: "https://your-domain.com/webhook/video-complete"
  }
);

console.log(`Request queued: ${request_id}`);
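On the receiving side, your endpoint parses the POSTed JSON and extracts the video URL. A minimal framework-agnostic sketch: the `handleWebhookPayload` helper and the assumption that the model output arrives under a `payload` field are illustrative, so verify the envelope against the actual body your endpoint receives:

```javascript
// Illustrative handler: pull the generated video URL out of a
// webhook body. Assumes the model output is nested under `payload`;
// check the real payload shape before relying on this in production.
function handleWebhookPayload(body) {
  const data = typeof body === "string" ? JSON.parse(body) : body;
  const url = data?.payload?.video?.url;
  if (!url) throw new Error("No video URL in webhook payload");
  return url;
}

// Wire this into your web framework's POST route, e.g.
// app.post("/webhook/video-complete", (req, res) => { ... })
```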
Queue Status Management
For long-running requests, check the queue status and retrieve results asynchronously without blocking your application. This pattern is essential for production systems handling multiple concurrent video generations.
```javascript
import { fal } from "@fal-ai/client";

// Check status
const status = await fal.queue.status("fal-ai/longcat-video/image-to-video/720p", {
  requestId: "your-request-id",
  logs: true
});

// Retrieve result when ready
if (status.status === "COMPLETED") {
  const result = await fal.queue.result("fal-ai/longcat-video/image-to-video/720p", {
    requestId: "your-request-id"
  });
  console.log(result.data.video.url);
}
```
API Reference
Input Parameters
```typescript
interface LongCatVideoInput {
  image_url: string;   // Required: URL of the source image (jpg, jpeg, png, webp, gif, avif)
  prompt?: string;     // Optional: Text description guiding motion generation
  num_frames?: number; // Optional: Number of frames to generate (default varies by model)
  seed?: number;       // Optional: Random seed for reproducible results
}
```
Output Structure
```typescript
interface LongCatVideoOutput {
  video: {
    url: string;          // MP4 video URL hosted on fal CDN
    content_type: string; // MIME type (video/mp4)
  };
}
```
Response Example
```json
{
  "video": {
    "url": "https://v3b.fal.media/files/b/panda/4-MoAje_CCMAGH8d-9kmA_nQEkcRc2.mp4",
    "content_type": "video/mp4"
  }
}
```
Support and Resources
We're here to help you succeed with LongCat Video:
- Documentation: Complete integration guides and API reference at docs.fal.ai
- Playground: Test the model interactively and explore parameters at the LongCat Video playground
- API Schema: Review detailed input/output specifications and example requests in the API documentation
- Client Libraries: Access maintained SDKs for JavaScript, Python, and other languages with auto-upload support and queue management
Ready to transform your images into compelling video content? Sign up now at fal.ai and start creating with LongCat Video.