HappyHorse-1.0: The Top-Ranked AI Video Model
#1 on the Artificial Analysis Video Arena in both Text-to-Video and Image-to-Video, ranked by blind human preference votes. Joint audio-video generation in a single pass. Available on fal, an official API partner, as of April 27, 2026.
Start building with the HappyHorse-1.0 API
Text-to-video, image-to-video, video editing, and reference-to-video endpoints, all available via API.

Generate 1080p video with synchronized native audio from a text prompt. Aspect ratios: 16:9, 9:16, 1:1, 4:3, 3:4. Duration: 3–15s.

Alibaba's #1-ranked Happy Horse 1.0 — generate 1080p video with synchronized native audio and multilingual lip-sync from text prompts or images.
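As a sketch of what an image-to-video request could look like, the payload below mirrors the text-to-video example further down this page. The `image_url` field name and the example values are assumptions, not confirmed parameter names; check the endpoint schema on fal before relying on them.

```javascript
// Hypothetical payload for the image-to-video endpoint. Only aspect_ratio,
// resolution, and duration appear in the text-to-video example on this page;
// image_url is an assumed field name.
const imageInput = {
  image_url: "https://example.com/portrait.jpg", // source frame to animate
  prompt: "The subject turns toward the camera and smiles; soft window light.",
  resolution: "1080p",
  duration: 5,
};

// With @fal-ai/client installed and FAL_KEY set, the call would mirror
// the text-to-video example:
//   const result = await fal.subscribe("alibaba/happy-horse/image-to-video", { input: imageInput });
console.log(JSON.stringify(imageInput, null, 2));
```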

The HappyHorse video-edit endpoint supports advanced editing through natural-language instructions, allowing local or global edits to video elements guided by up to 5 reference images.
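A minimal sketch of a video-edit request, under stated assumptions: the `video_url`, `instruction`, and `reference_image_urls` field names are hypothetical, chosen to illustrate the instruction-plus-references pattern described above. Verify the real parameter names in the endpoint docs on fal.

```javascript
// Hypothetical payload for the video-edit endpoint. All field names here
// are assumptions for illustration, not confirmed API parameters.
const editInput = {
  video_url: "https://example.com/clip.mp4",
  instruction: "Replace the parked car with the red vintage model from the references.",
  reference_image_urls: [
    "https://example.com/ref-front.jpg",
    "https://example.com/ref-side.jpg",
  ], // the endpoint accepts up to 5 reference images
};

// Guard the documented limit of 5 reference images before submitting.
if (editInput.reference_image_urls.length > 5) {
  throw new Error("video-edit accepts at most 5 reference images");
}
// const result = await fal.subscribe("alibaba/happy-horse/video-edit", { input: editInput });
console.log(`${editInput.reference_image_urls.length} reference image(s)`);
```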

Generate 1080p video with synchronized native audio from a text prompt and references. Aspect ratios: 16:9, 9:16, 1:1, 4:3, 3:4. Duration: 3–15s.
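A reference-to-video request could be sketched as below; `reference_image_urls` is an assumed field name mirroring the video-edit description, while the aspect-ratio and duration values come from the supported ranges listed above.

```javascript
// Hypothetical payload for the reference-to-video endpoint.
// reference_image_urls is an assumed field name; aspect_ratio, resolution,
// and duration use the documented supported values.
const refInput = {
  prompt: "The character from the reference walks through a rainy night market.",
  reference_image_urls: ["https://example.com/character.jpg"],
  aspect_ratio: "9:16",   // one of 16:9, 9:16, 1:1, 4:3, 3:4
  resolution: "1080p",
  duration: 8,            // within the supported 3–15s range
};

// const result = await fal.subscribe("alibaba/happy-horse/reference-to-video", { input: refInput });
console.log(JSON.stringify(refInput, null, 2));
```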
How to access the HappyHorse-1.0 API
The client library handles the request-submission protocol: it submits the request, tracks status updates, and returns the result once generation is complete.
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("alibaba/happy-horse/text-to-video", {
  input: {
    prompt: "A young woman in a red coat walks down a wet city street at night, neon reflections.",
    aspect_ratio: "16:9",
    resolution: "1080p",
    duration: 5,
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});

console.log(result.data);
console.log(result.requestId);
Artificial Analysis Video Arena Rankings
Elo ratings are based on blind human preference votes: users see two videos generated from the same prompt, without knowing which model produced which, and vote for the one they prefer. The winning model gains rating points and the loser loses them. Sample generations posted by the benchmark provider showed Happy Horse performing strongly, leading to #1 results in the following arenas.
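The update rule behind such ratings can be sketched in a few lines. The K-factor of 32 is a conventional choice for illustration, not the arena's published constant.

```javascript
// Minimal Elo update: after a blind pairwise vote, the winner gains points
// and the loser drops by the same amount, scaled by how surprising the
// result was. K = 32 is a conventional constant, used here for illustration.
function eloUpdate(winner, loser, K = 32) {
  // Expected score of the winner given the current ratings.
  const expectedWin = 1 / (1 + 10 ** ((loser - winner) / 400));
  const delta = K * (1 - expectedWin);
  return [winner + delta, loser - delta]; // zero-sum exchange of points
}

// A higher-rated model gains relatively little from beating a lower-rated one:
const [newWinner, newLoser] = eloUpdate(1200, 1000);
console.log(newWinner.toFixed(1), newLoser.toFixed(1));
```

Upset wins move ratings more than expected ones, which is why early scores can shift noticeably as votes accumulate.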
Source: Artificial Analysis Video Arena, April 2026. Scores reflect early vote counts and may shift as more votes accumulate.
Why HappyHorse-1.0 Is #1
#1 in Blind Human Preference
HappyHorse-1.0 holds the top Elo rating on the Artificial Analysis Video Arena in both Text-to-Video and Image-to-Video (no audio). Rankings are based on blind preference votes from real users who do not know which model produced the output they are voting on.
Video and Sound in a Single Pass
The model reportedly generates video and audio jointly in a single forward pass using a unified 40-layer self-attention Transformer with no cross-attention modules. This architecture produces synchronized audiovisual output without separate audio post-processing.
1080p in Under 40 Seconds
The team claims approximately 38-second generation time for 1080p output on a single NVIDIA H100 GPU, and roughly 2 seconds for a 5-second clip at 256p — a significant speed advantage over current alternatives.
The Team Behind Happy Horse
On April 9, 2026, Alibaba Group Holding Ltd revealed that it had created the “Happy Horse” video AI model, claiming ownership of a platform that debuted at #1 on global rankings and sent ripples across the AI industry. Happy Horse is the product of Alibaba Token Hub's innovation business unit and is now available on fal as an official API partner.
What is Alibaba Token Hub (ATH)?
Alibaba Token Hub is a high-level division that brings together all of Alibaba's AI expertise, from research labs to real-world software, under one roof. Led by CEO Eddie Wu, it's designed to turn advanced models like Qwen into practical, everyday tools by focusing on the “token” as the essential fuel for the modern AI economy.
Led by Zhang Di
Zhang Di is a veteran AI engineer with 15+ years in the field. He served as Director at Alibaba Group from 2010–2022, then joined Kuaishou as Vice President where he was the technical architect of Kling AI. He rejoined Alibaba in late 2025 to lead the Taotian Future Life Lab under ATH, and within months delivered Happy Horse 1.0.
Open-Source Status
Although some industry players have speculated that HappyHorse-1.0 will be open source, we can confirm that it is closed source: it will not be open-sourced or licensable.
See what HappyHorse-1.0 can create
Sample outputs from the Artificial Analysis Video Arena and community-shared generations.
HappyHorse-1.0 API Integration Steps
Get up and running in minutes. No GPUs to manage, no infrastructure to set up.
1. Install the client
Pick your package manager. For Python, use pip.
npm install --save @fal-ai/client
2. Create an account on fal
Sign up to get access to the dashboard and your API keys.
3. Get your API key
Locate your API credentials in the developer dashboard. Set FAL_KEY as an environment variable in your runtime.
4. Submit a request
Use fal.subscribe() to submit your request with a prompt and parameters. The client handles the async queue automatically and returns the final video URL when generation is complete.
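For long generations you may prefer explicit queue control over fal.subscribe(). The sketch below assumes submit/status/result queue methods on the client; verify the exact method names and signatures against the fal client docs for your version. The queue object is injected so the polling logic runs without network access.

```javascript
// Sketch of explicit queue control: submit, poll status, fetch the result.
// The method names (submit/status/result) are assumed from the fal queue
// pattern; check the client docs for your installed version.
async function generateViaQueue(queue, endpoint, input, pollMs = 2000) {
  const { request_id } = await queue.submit(endpoint, { input });
  for (;;) {
    const status = await queue.status(endpoint, { requestId: request_id });
    if (status.status === "COMPLETED") break;
    await new Promise((resolve) => setTimeout(resolve, pollMs)); // wait before re-polling
  }
  return queue.result(endpoint, { requestId: request_id });
}

// Real usage would pass the imported client's queue object:
//   import { fal } from "@fal-ai/client";
//   const result = await generateViaQueue(fal.queue, "alibaba/happy-horse/text-to-video",
//     { prompt: "A lighthouse at dawn", resolution: "1080p", duration: 5 });
```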
No setup required
Start generating HappyHorse-1.0 videos instantly in the playground. No API key needed, just describe your scene and hit generate.
Open Playground →
Integrate via API
Grab an API key from your dashboard and integrate HappyHorse-1.0 into your app with a few lines of code. Python and JavaScript SDKs available, plus a REST API for any language.
API Documentation →
Learn how to prompt HappyHorse-1.0
Tips, techniques, and best practices for getting the most out of HappyHorse-1.0 across camera moves, lighting, and multi-beat shots.
Prompting Guide →
Common questions about HappyHorse-1.0
What is HappyHorse-1.0?
HappyHorse-1.0 is an AI video generation model that appeared on the Artificial Analysis Video Arena on April 7, 2026, immediately ranking #1 in both Text-to-Video and Image-to-Video (no audio) categories. It uses blind human preference voting where real users compare outputs without knowing which model produced them.
Who built HappyHorse-1.0?
The model was submitted pseudonymously to the Artificial Analysis leaderboard. The team's own marketing materials claim it was built by the Future Life Lab team at Taotian Group (Alibaba), led by Zhang Di, described as the former VP of Kuaishou and technical lead of Kling AI. This claim has not been independently verified.
What are the technical specs?
According to the team's own sites: 15 billion parameters, a unified 40-layer self-attention Transformer that generates video and audio jointly in a single forward pass with no cross-attention modules. Claimed inference speed is approximately 38 seconds for a 1080p clip on a single NVIDIA H100 GPU. These specs have not been independently verified.
Can I use HappyHorse-1.0 right now?
Yes. HappyHorse-1.0 is available now on fal via both the playground and the API. Text-to-video, image-to-video, video editing, and reference-to-video endpoints are live at /models/alibaba/happy-horse/text-to-video, /models/alibaba/happy-horse/image-to-video, /models/alibaba/happy-horse/video-edit, and /models/alibaba/happy-horse/reference-to-video.
How much does HappyHorse-1.0 cost?
Pricing is $0.14/second of generated video at 720p and $0.28/second at 1080p. Pay per second, no minimums or subscriptions. Contact sales for enterprise pricing.
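As a quick sanity check on the per-second rates, a small cost calculator:

```javascript
// Published per-second rates for generated video.
const RATE_PER_SECOND = { "720p": 0.14, "1080p": 0.28 };

function videoCost(seconds, resolution) {
  const rate = RATE_PER_SECOND[resolution];
  if (rate === undefined) throw new Error(`no published rate for ${resolution}`);
  return seconds * rate;
}

// A 5-second 1080p clip costs 5 × $0.28:
console.log(videoCost(5, "1080p").toFixed(2)); // "1.40"
```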
How does the Artificial Analysis ranking work?
The Artificial Analysis Video Arena uses an Elo rating system based on blind human preference votes. Users see two videos generated from the same prompt, do not know which model produced which, and vote for the one they prefer. Rankings reflect what real people prefer under blind conditions, not self-reported benchmarks.
What languages does HappyHorse-1.0 support?
The model supports native lip-sync across seven languages: Mandarin, Cantonese, English, Japanese, Korean, German, and French. Test it yourself in the fal playground.
Is HappyHorse-1.0 open source?
No. HappyHorse-1.0 is closed source — it will not be open source or licensable. Access is via the official API on fal.
When will HappyHorse-1.0 be available on fal?
HappyHorse-1.0 is available now on fal as an official API partner. Text-to-video, image-to-video, video editing, and reference-to-video endpoints are all live in the playground and via the API.
Official API Access: HappyHorse-1.0 live on fal
fal hosts the leading generative video models, and HappyHorse-1.0 is the latest. As an official API partner, we're enabling access for enterprises and developers with HappyHorse-1.0's joint audio-video capabilities live in our generative media cloud. Try the playground or integrate via API in minutes.
Ready to transform your enterprise with AI?
Take the first step towards AI-driven innovation. Our team of ML engineers is ready to help you prototype, develop, and scale your AI solutions.

