
Wan 2.7

Every Frame, Refined.


Smoother Motion. Sharper Detail. Full Control.

First & Last Frame

Define the Start and End

Provide a starting image and an ending image, and Wan 2.7 generates everything in between. Control exactly where your scene begins and ends while the model handles smooth, coherent motion across every frame.
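As a sketch of how such a request might be assembled before handing it to the SDK: the snippet below builds an arguments dict for a first-and-last-frame generation. The `first_frame_url` and `last_frame_url` parameter names are illustrative assumptions, not confirmed API fields; check the endpoint documentation for the exact schema.

```python
# Sketch: building a first-and-last-frame request payload.
# NOTE: "first_frame_url" and "last_frame_url" are assumed parameter
# names for illustration -- consult the fal.ai endpoint docs for the
# exact schema before calling the API.
def build_flf_arguments(prompt, first_frame_url, last_frame_url, duration=5):
    """Assemble the arguments dict for a first-and-last-frame generation."""
    if not 2 <= duration <= 15:
        raise ValueError("duration must be between 2 and 15 seconds")
    return {
        "prompt": prompt,
        "first_frame_url": first_frame_url,
        "last_frame_url": last_frame_url,
        "duration": duration,
    }

args = build_flf_arguments(
    "A flower bud opening into full bloom",
    "https://example.com/bud.png",
    "https://example.com/bloom.png",
    duration=8,
)
# These arguments would then be passed to fal_client.run(...).
```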

Instruction-Based Editing

Edit Videos with Words

Transform existing videos with natural language instructions. Apply style transfers, modify scenes, swap elements, or recreate footage with a reference image. Preserve or regenerate audio automatically.
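A minimal sketch of how an edit request could be assembled client-side, assuming parameter names like `video_url`, `prompt`, and `preserve_audio` for illustration (the real edit-video endpoint schema may differ):

```python
# Sketch: building an instruction-based edit request payload.
# Parameter names here ("video_url", "prompt", "preserve_audio",
# "reference_image_url") are illustrative assumptions -- check the
# edit-video endpoint docs for the actual schema.
def build_edit_arguments(video_url, instruction,
                         reference_image_url=None, preserve_audio=True):
    """Assemble the arguments dict for an instruction-based video edit."""
    arguments = {
        "video_url": video_url,
        "prompt": instruction,
        "preserve_audio": preserve_audio,
    }
    if reference_image_url is not None:
        # Optional reference image for guided edits such as style transfer.
        arguments["reference_image_url"] = reference_image_url
    return arguments

args = build_edit_arguments(
    "https://example.com/clip.mp4",
    "Make the scene look like a watercolor painting",
)
```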

Character Reference

Consistent Characters Across Scenes

Supply reference images or videos to maintain character appearance and voice across generations. Multi-subject referencing and multi-shot segmentation keep identity locked even in complex scenes.
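A sketch of a multi-subject reference payload; the `reference_image_urls` field name is an assumption for illustration, not a confirmed parameter:

```python
# Sketch: a reference-to-video payload with multiple subject references.
# "reference_image_urls" is an assumed field name for illustration only.
def build_reference_arguments(prompt, reference_image_urls):
    """Assemble arguments for a character-consistent generation."""
    refs = list(reference_image_urls)
    if not refs:
        raise ValueError("at least one reference image is required")
    return {"prompt": prompt, "reference_image_urls": refs}

args = build_reference_arguments(
    "The same explorer walks through a desert, then a jungle",
    ["https://example.com/explorer-front.png",
     "https://example.com/explorer-side.png"],
)
```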


Endpoints

Generate, animate, reference, and edit

Create videos from text or images, maintain character consistency with references, and edit existing footage with natural language.


For Developers

A few lines of code.
Cinematic video.

fal.ai handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPUs to manage.

  • Serverless: scales to zero, scales to millions
  • Pay per second, no minimums
  • Python and JavaScript SDKs, plus REST API
import fal_client

result = fal_client.run(
  "fal-ai/wan/v2.7/text-to-video",
  arguments={
    "prompt": "A woman walks along the shore at dawn",
    "resolution": "1080p",
    "duration": 10,
  }
)

# result["video"]["url"] → your generated video

FAQ

Common questions about Wan 2.7

What can I create with Wan 2.7?

Wan 2.7 supports text-to-video, image-to-video (with first-and-last-frame control), reference-to-video for character consistency, and instruction-based video editing. Output is available at 720p or 1080p, in durations of 2 to 15 seconds, with multiple aspect ratios including 16:9, 9:16, 1:1, 4:3, and 3:4.
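These output limits can be mirrored in a small client-side sanity check before a request is sent; this is a convenience sketch based on the values listed above, not the API's own validation:

```python
# Client-side sanity check mirroring the documented output limits.
# A convenience sketch, not the API's own validation logic.
VALID_RESOLUTIONS = {"720p", "1080p"}
VALID_ASPECT_RATIOS = {"16:9", "9:16", "1:1", "4:3", "3:4"}

def validate_generation_params(resolution, duration, aspect_ratio):
    """Raise ValueError if any parameter falls outside the documented limits."""
    if resolution not in VALID_RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(VALID_RESOLUTIONS)}")
    if not 2 <= duration <= 15:
        raise ValueError("duration must be 2-15 seconds")
    if aspect_ratio not in VALID_ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")

validate_generation_params("1080p", 10, "16:9")  # passes silently
```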

What's new compared to Wan 2.6?

Wan 2.7 introduces native 1080p output, extended 15-second duration, first-and-last-frame video generation, 9-grid multi-image input, instruction-based video editing, combined subject and voice referencing, and significantly improved motion smoothness and visual coherence.

What is first-and-last-frame mode?

Provide a starting image and an ending image, and Wan 2.7 generates the video transition between them. This gives you precise control over where a scene begins and ends while the model handles all the motion in between.

How does video editing work?

The edit-video endpoint accepts an existing video (2-10 seconds) and a text instruction describing the changes. You can apply style transfers, modify scenes, or use a reference image for guided editing. The model can preserve or regenerate the original audio.

How much does Wan 2.7 cost on fal.ai?

All endpoints are priced at $0.10 per second of generated video. A 10-second 1080p video costs $1.00. Pay-per-use with no minimums or subscriptions.
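The flat per-second rate makes cost estimation a one-line calculation:

```python
# Cost estimate at the flat rate quoted above ($0.10 per generated second).
PRICE_PER_SECOND = 0.10

def estimate_cost(duration_seconds, num_videos=1):
    """Return the total cost in dollars for one or more generations."""
    return round(duration_seconds * PRICE_PER_SECOND * num_videos, 2)

estimate_cost(10)     # a single 10-second video -> 1.0 ($1.00)
estimate_cost(15, 4)  # four 15-second videos   -> 6.0 ($6.00)
```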

How do I get started with the API?

Install the fal.ai SDK (Python or JavaScript), grab an API key from your dashboard, and make your first request in a few lines of code. The API is serverless, so no GPUs to manage. Check the API documentation for all available parameters.

Can I use Wan 2.7 for commercial projects?

Yes. Content generated through the fal.ai API can be used in commercial projects. Check fal.ai's terms of service for full details on usage rights and licensing.

Ready to create?

Start generating HD AI video with Wan 2.7 on fal.ai.