Mysterious "HappyHorse" lands unexpectedly on the charts, crushing Seedance 2.0. Has the video AI landscape shifted again?
Late Tuesday night, the AI community blew up.
On the Video Arena leaderboard of Artificial Analysis, the globally known AI evaluation platform, a mysterious text-to-video generation model code-named "HappyHorse-1.0" quietly appeared: no launch event, no technical blog, no company endorsement of any kind. Yet it jumped straight to the top of the chart with a crushing lead.
As of publication, its Elo score in the text-to-video track has surged to 1357, 84 points ahead of Seedance 2.0, which had taken the #1 spot just five days earlier, and more than 100 points ahead of the #3 and #4 entries, SkyReels V4 and Kling 3.0 1080p Pro. With a single model, HappyHorse-1.0 has widened the gaps across the industry's entire tiered lineup.
In the image-to-video track, it posted a staggering 1402, setting a new all-time record for that leaderboard.
The only place it is slightly less impressive is the combined "video + audio" ranking, which includes native sound effects: there HappyHorse places second, a bit below Seedance 2.0.
This leaderboard isn’t that easy to game
Many people's first reaction was: isn't this just score-farming?
The skepticism is not unreasonable. But Artificial Analysis's ranking mechanism is harder to manipulate than a typical benchmark leaderboard: every ranking comes from blind, pairwise A/B votes cast by real users worldwide. Shown two generated results with no identifying information, users simply pick the one they prefer, and the final Elo scores are aggregated from those votes.
Model teams can't cheat by "practicing the questions." What the leaderboard reflects is people's most genuine preference after seeing the outputs.
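The aggregation described above can be illustrated with a minimal Elo sketch. This is not Artificial Analysis's actual implementation; the starting rating, K-factor, and the sample votes below are all assumptions for illustration only.

```python
# Illustrative sketch: aggregating blind pairwise A/B votes into Elo scores.
# NOT Artificial Analysis's real algorithm; K=32 and the 1000-point
# starting rating are conventional defaults, chosen here as assumptions.

def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (r_a, r_b) after one A-vs-B vote."""
    e_a = expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    delta = k * (s_a - e_a)
    return r_a + delta, r_b - delta

# Start every model at 1000 and replay votes in order.
ratings = {"ModelA": 1000.0, "ModelB": 1000.0}
hypothetical_votes = [("ModelA", "ModelB", True),
                      ("ModelA", "ModelB", True),
                      ("ModelA", "ModelB", False)]
for a, b, a_won in hypothetical_votes:
    ratings[a], ratings[b] = update(ratings[a], ratings[b], a_won)
```

Because each vote only shifts ratings relative to the expected outcome, a model can't inflate its score by memorizing test prompts; it has to win head-to-head preferences against whatever opponent it is paired with.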
Of course, others have pointed out that in Artificial Analysis's blind-test samples, portrait generation and lip-sync/talking-head content make up more than 60% of the prompts. HappyHorse has a natural advantage in portrait scenarios, which may, to some extent, open a gap between its evaluation scores and its real-world overall capability.
Discussions on X have accordingly split into two camps: skeptics think HappyHorse still shows visible gaps versus Seedance 2.0 in person detail and motion coherence, while supporters place great expectations on its potential, especially hoping it can solve the industry pain point of visual consistency across multi-shot sequences.
That said, judging from the evaluations circulating online, ordinary users generally rate this model highly.
Whose horse is "HappyHorse," exactly?
This is the question the entire AI community most wants to figure out.
Speculation on X came fast. The first thing to draw attention was the ordering of languages on the official website: Mandarin and Cantonese come before English. For a product aimed at a global user base, that ordering is unusual, and it all but confirms the team behind it is from China.
The name itself is also a clue. 2026 is the Year of the Horse in the Chinese lunar calendar, and "HappyHorse" is a not-so-subtle pun on it; earlier this year, "Pony Alpha" played a similar trick. So the list of suspects quickly grew: the founders of Tencent and Alibaba both carry the surname Ma, which means "horse," making them natural picks; some bet on Xiaomi, reasoning that Lei Jun has always been low-key and likes to suddenly show a card; others think it feels more like DeepSeek, which had previously quietly launched a vision model and later just as quietly took it offline.
X user Passluo put it pointedly in a comment: "Whose happy horse is this? Alibaba, Tencent, or Xiaomi?"
The “case” from a technical perspective
Guessing from the name isn't enough, so the tech community switched into Sherlock Holmes mode.
X user Vigo Zhao took HappyHorse-1.0's public benchmark data and matched it point by point against known models, finding a highly consistent match: daVinci-MagiHuman, the open-source "DaVinci Magic Human" model that launched on GitHub in March of this year.
Visual quality, text alignment, physical consistency, and several other metrics all line up one by one, and the website structure is almost identical. Both are single-stream Transformer architectures, both do joint audio-video generation, and the lists of supported languages are exactly the same. At this level of overlap, it is hard to explain away as coincidence.
The conclusion currently gaining the most traction in the tech community is that HappyHorse comes from Sand.ai, one of the developers behind daVinci-MagiHuman, and is an iterated version optimized on top of that open-source model. Its core purpose would be to probe the model's performance ceiling under real user preferences, paving the way for later commercial deployment.
daVinci-MagiHuman was officially released as open source on March 23, 2026, the product of a collaboration between two young teams:
One is the Generative AI Research Lab at Shanghai Chuangzhi College; the other is Beijing-based Sand.ai (San Dai Technology). The model uses a single-stream, pure self-attention Transformer with 15 billion parameters, placing tokens from text, video, and audio into the same sequence for joint modeling.
Another clue points to Alibaba Taotian
At the same time, another version of the speculation has been circulating:
that the core team behind HappyHorse comes from Alibaba Taotian Group's "Future Life Laboratory," led by Zhang Di, former VP of Kuaishou and technical lead of Keling (Kling).
Public information shows that Zhang Di joined Alibaba at the end of 2025 to head Taotian Group's "Future Life Laboratory," the group's core e-commerce algorithms team. The lab brings together top technical talent and core compute resources, focusing on frontier areas such as large models and multimodal technology; though established only a little over a year ago, it has already published more than 10 high-quality papers at top-tier international conferences.
Worth mentioning: the timing of this rumor's spread overlaps exactly with Alibaba's strong performance in today's Hong Kong stock market. Of course, this is just an interesting coincidence; there is currently no hard evidence linking the two, and it shouldn't be over-interpreted.
The truly important signal in this matter
No matter who ultimately owns HappyHorse, the industry signal this event sends is already crystal clear.
For a long time there has been a visible, output-quality gap between open-source video models and closed-source products: in scenarios where work must actually be delivered to customers, open-source models have consistently failed to cross the threshold from "usable" to "deliverable." The pricing power of closed-source products like Keling and Seedance is, to a very significant extent, built on that gap.
This time, a product based on an open-source model has, for the first time, directly matched the current mainstream closed-source competitors on a blind-test leaderboard grounded in real user perception.
For closed-source vendors that rely on this gap to establish their pricing power, this is at least a signal that deserves serious attention.
Going by Artificial Analysis's usual handling of anonymous blind-test entries, once such a model draws enough attention, the team behind it typically claims it officially within a week.
Maybe just in the next few days, we’ll know the answer.
In this Year of the Horse, what’s truly worth watching might not be which horse runs the fastest, but the track itself—which is getting wider.