HappyHorse-1.0: The Top-Ranked AI Video Model
#1 on the Artificial Analysis Video Arena in both Text-to-Video and Image-to-Video, ranked by blind human preference votes. Joint audio-video generation in a single pass.
Artificial Analysis Video Arena Rankings
Elo ratings based on blind human preference votes. Users compare two videos from the same prompt without knowing which model produced which.
Source: Artificial Analysis Video Arena, April 2026. Scores reflect early vote counts and may shift as more votes accumulate.
Why HappyHorse-1.0 Is #1
#1 in Blind Human Preference
HappyHorse-1.0 holds the top Elo rating on the Artificial Analysis Video Arena in both Text-to-Video and Image-to-Video (no audio). Rankings are based on blind preference votes from real users who do not know which model produced the output they are voting on.
Video and Sound in a Single Pass
The model reportedly generates video and audio jointly in a single forward pass using a unified 40-layer self-attention Transformer with no cross-attention modules. This architecture produces synchronized audiovisual output without separate audio post-processing.
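The core idea behind "no cross-attention" is that audio and video tokens are concatenated into one sequence, so a single self-attention layer lets every audio token attend to every video token and vice versa. The sketch below illustrates that mechanism in plain NumPy; the token counts, dimensions, and weight shapes are illustrative only and are not based on any published HappyHorse-1.0 details.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention over one token sequence."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 16                                   # toy embedding dimension
video_tokens = rng.normal(size=(8, d))   # e.g. patch tokens from video frames
audio_tokens = rng.normal(size=(4, d))   # e.g. spectrogram tokens

# Joint modeling: concatenate both modalities into one sequence, so audio
# and video tokens mix inside the same self-attention layer, with no
# separate cross-attention module.
joint = np.concatenate([video_tokens, audio_tokens], axis=0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(joint, Wq, Wk, Wv)
print(out.shape)  # (12, 16): one output per video and audio token
```

In a real model this layer would be stacked (the team claims 40 layers) with multi-head attention, feed-forward blocks, and positional encodings; the point here is only how one sequence can carry both modalities.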
1080p in Under 40 Seconds
The team claims approximately 38-second generation time for 1080p output on a single NVIDIA H100 GPU, and roughly 2 seconds for a 5-second clip at 256p. If verified, this would represent a significant speed advantage over current alternatives.
See what HappyHorse-1.0 can create
Sample outputs from the Artificial Analysis Video Arena and community-shared generations.
Common questions about HappyHorse-1.0
What is HappyHorse-1.0?
HappyHorse-1.0 is an AI video generation model that appeared on the Artificial Analysis Video Arena on April 7, 2026, immediately ranking #1 in both Text-to-Video and Image-to-Video (no audio) categories. It uses blind human preference voting where real users compare outputs without knowing which model produced them.
Who built HappyHorse-1.0?
The model was submitted pseudonymously to the Artificial Analysis leaderboard. The team's own marketing materials claim it was built by the Future Life Lab team at Taotian Group (Alibaba), led by Zhang Di, described as the former VP of Kuaishou and technical lead of Kling AI. This claim has not been independently verified.
What are the technical specs?
According to the team's own sites: 15 billion parameters, a unified 40-layer self-attention Transformer that generates video and audio jointly in a single forward pass with no cross-attention modules. Claimed inference speed is approximately 38 seconds for a 1080p clip on a single NVIDIA H100 GPU. These specs have not been independently verified.
Can I use HappyHorse-1.0 right now?
Not yet. As of April 2026, there is no public API, no downloadable model weights, and no confirmed pricing. The team has announced open-source availability with commercial licensing, but no weights or license have been published. It is coming soon to fal.
How does the Artificial Analysis ranking work?
The Artificial Analysis Video Arena uses an Elo rating system based on blind human preference votes. Users see two videos generated from the same prompt, do not know which model produced which, and vote for the one they prefer. Rankings reflect what real people prefer under blind conditions, not self-reported benchmarks.
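Each blind vote updates both models' ratings with the standard Elo rule: the winner gains more when it beats a higher-rated opponent. The arena's exact K-factor and update schedule are not published, so the values below are illustrative.

```python
def elo_update(r_winner, r_loser, k=32):
    """Apply one Elo update after a blind preference vote.

    expected_win is the winner's predicted win probability given the
    rating gap; the rating change scales with how surprising the win was.
    """
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

# Two equally rated models: the winner gains half the K-factor.
print(elo_update(1000, 1000))  # (1016.0, 984.0)
```

This is also why the arena note above warns that early scores may shift: with few votes, each result still moves a model's rating noticeably.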
What languages does HappyHorse-1.0 support?
The team claims native lip-sync support across seven languages: Mandarin, Cantonese, English, Japanese, Korean, German, and French. This has not been independently tested.
Is HappyHorse-1.0 open source?
The team has described it as fully open source with complete commercial licensing. However, as of April 2026, no weights have been published and no license file is available. The open-source claim is a stated intention, not a current reality.
When will HappyHorse-1.0 be available on fal?
HappyHorse-1.0 is coming soon to fal. We will make it available via playground and API as soon as access is possible. Check back for updates.
Coming soon to fal
HappyHorse-1.0 will be available via playground and API as soon as access is possible. Check back for updates.