Zhou Hongyi on Seedance's Popularity: From "Pixel-Level Imitation" to "Dimensionality-Reduction Strike," Chinese AI Defines World Standards

When it comes to the hottest recent buzzwords in the AI community, nothing beats ByteDance's latest large video model, Seedance.
On February 12, ByteDance officially launched Seedance 2.0, its new-generation AI video generation model, integrating it into its generative AI creative platform "Jimeng" and the Doubao app. With just a few brief prompts, it can generate cinematic-quality video. On release it sparked worldwide discussion, with some American directors even exclaiming after trying Seedance 2.0 that Hollywood might be finished.
Thanks to its multimodal input and strong text- and image-driven generation capabilities, Seedance quickly became a top AI trend worldwide. On February 14, Zhou Hongyi, founder of Qihoo 360, said in an exclusive interview with The Paper that the rapid rise of DeepSeek and Seedance signals that Chinese AI is shifting from past "pixel-level imitation" to a "dimensionality-reduction strike."
Zhou told the reporter that while Silicon Valley is still debating the philosophy of AGI in its laboratories, China's Seedance and Vidu (a large video model from Shengshu Technology) have already entered the trillion-yuan markets of short dramas, games, and advertising infrastructure. Chinese AI is defining "application as the standard," skipping the laboratory stage and honing its technology into "nuclear weapons" on the real battlefield.
[The following is a transcript of the interview]
The Paper: How do you view the popularity of Seedance and its underlying disruptive technologies?
Zhou Hongyi: Seedance 2.0 has been trending everywhere recently. Many people's first reaction is that AI now makes videos look more realistic. But after watching, what I want to say is: this isn't about whether the output "looks real"; it's that AI has started to generate worlds according to physical laws.
When you see a mech scraping along the ground, sparks flying and then fading; when a heavy object falls, dust swirling and a shockwave spreading; even when glass shatters, the visuals and sound are synchronized, "growing" together. This isn't material stitching or templated effects; the model is performing underlying deduction: how force is transmitted, how energy is released.
Visual models are shifting from imitating the world to understanding the world. This is a generational change.
The Paper: DeepSeek's breakout during last year's Spring Festival is still fresh in memory. Do you think Seedance's recent surge is a replay of that moment?
Zhou Hongyi: In the past, everyone thought you couldn't build top-tier AI without tens of thousands of GPUs, but DeepSeek proved that algorithms and engineering optimization can turn the tide. This time, Seedance is doing the same. It isn't stacking computing power to generate images; it genuinely understands the laws of physics and cinematic storytelling.
The rise of DeepSeek and Seedance both mean we've finally moved from "pixel-level imitation" to a "dimensionality-reduction strike." While Silicon Valley debates the philosophy of AGI in its labs, our Seedance and Vidu have already entered the trillion-yuan markets of short dramas, games, and advertising infrastructure. Chinese AI is defining "application as the standard," skipping the lab stage and transforming technology into "nuclear weapons" on the real battlefield.
The Paper: Does the popularity of Seedance indicate that China's large models are closing the gap with their overseas counterparts, or even surpassing them?
Zhou Hongyi: There is still an objective gap between domestic large models and overseas technology. But beyond competition at the parameter level, the factors that decide the outcome are shifting. If all you see is "stronger models," you're only watching the first half. Once models start understanding the world, what truly determines how high the industry can go is no longer the model itself; it's whether you can turn the model's abilities into stable, high-quality, rapidly deliverable engineering systems.
Why do I say this? Because the stronger the model, the higher the demands it places on deployment, especially on consistency. For example: is the same person recognizable from the front, the side, and behind? Do scene elements such as columns, windows, and lighting stay matched when the camera angle changes? When characters enter a scene, are they positioned sensibly? If these problems can't be solved, even a powerful model may only produce demos, not finished works.
Even the Seedance team openly acknowledges that the model still has shortcomings in detail stability, multi-character matching, and complex editing, and needs ongoing optimization. In other words, in the short term we are unlikely to see a model that guarantees a perfect result every time.
Therefore, the current competition isn’t just about how many bugs a model has, but about who can run imperfect models into stable, deliverable pipelines.
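In engineering terms, "running imperfect models into stable, deliverable pipelines" amounts to wrapping a stochastic generator in a generate-validate-retry loop. A minimal Python sketch of that idea follows; `generate_shot` and `check_consistency` are hypothetical stand-ins (a random score simulates run-to-run variance), not any real Seedance or Jimeng API:

```python
import random
from dataclasses import dataclass


@dataclass
class Shot:
    prompt: str
    seed: int
    quality: float  # stand-in for an automated consistency score


def generate_shot(prompt: str, seed: int) -> Shot:
    """Hypothetical model call: output quality varies from run to run."""
    rng = random.Random(seed)
    return Shot(prompt, seed, rng.random())


def check_consistency(shot: Shot, threshold: float = 0.7) -> bool:
    """Stand-in for QC checks: character identity, scene continuity, framing."""
    return shot.quality >= threshold


def produce(prompt: str, max_attempts: int = 10) -> Shot:
    """Turn a 'lottery' generator into a deliverable: retry until QC passes."""
    for attempt in range(max_attempts):
        shot = generate_shot(prompt, seed=attempt)
        if check_consistency(shot):
            return shot
    raise RuntimeError(f"no acceptable take for {prompt!r} in {max_attempts} tries")


shot = produce("mech skids across tarmac, sparks flying")
print(shot.seed, round(shot.quality, 2))
```

The point of the sketch is that the caller of `produce` never sees the rejected takes: the pipeline, not the model, is what guarantees a deliverable result.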
The Paper: What is your forecast for the overall competitive trend of the AI industry this year? Amid rapid iteration, what will be the most important "barometer" for the AI industry by 2026?
Zhou Hongyi: AI has entered the stage of "hundred-billion intelligent agent collaboration." The real gap is shifting from the models themselves to the application layer and system capabilities. AI is no longer just about model parameters; it's about content-production paradigms and the ability to genuinely solve industrial problems. For example, Anthropic, the well-known AI unicorn, may not outperform OpenAI at the model level, but through intelligent agents it has recently gained the advantage in many scenarios.
I have a clear judgment: the industry's true watershed is shifting from "whose model is stronger" to "who can run models into production lines," moving AI video generation from a lottery-style process to engineered production. The logic is fundamentally the same as in the film industry.
Based on this, Qihoo 360 recently launched China's first industrial-grade AI animated-drama production platform, the Nano Animated Drama Pipeline. We integrate the industry's top models and use intelligent scheduling to match the most cost-effective and expressive model to each storyboard's needs. The pipeline covers splitting scripts into scenes, generating reusable character and scene assets, managing them in an asset database, and producing storyboards, story video, and the final edit. We are applying film-industry methods to AI systems.
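The scheduling idea Zhou describes (pick the cheapest model that is still expressive enough for a given storyboard) reduces to a constrained cheapest-pick over a model roster. A minimal Python sketch, in which every model name, cost, and score is invented for illustration and does not reflect the Nano pipeline's actual roster:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Model:
    name: str
    cost_per_shot: float  # hypothetical relative cost
    expressiveness: int   # hypothetical capability score, higher is better


# Invented roster; a real platform would register actual model endpoints.
ROSTER = [
    Model("fast-draft", cost_per_shot=1.0, expressiveness=3),
    Model("balanced", cost_per_shot=3.0, expressiveness=6),
    Model("cinematic", cost_per_shot=9.0, expressiveness=9),
]


def schedule(required_expressiveness: int) -> Model:
    """Pick the cheapest model that meets a storyboard's requirement."""
    candidates = [m for m in ROSTER if m.expressiveness >= required_expressiveness]
    if not candidates:
        raise ValueError("no model in the roster meets the requirement")
    return min(candidates, key=lambda m: m.cost_per_shot)


# A simple dialogue shot needs less than a cinematic set piece.
print(schedule(2).name)  # → fast-draft
print(schedule(8).name)  # → cinematic
```

Per-storyboard scheduling like this is what keeps overall production cost down: only the shots that truly need the expensive model get routed to it.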
I believe that industrial pipelines like Nano Animated Drama, combined with visual models that understand physical laws like Seedance, will mark the true start of a content-production revolution, and that is where the future battle will be fought.