Runway Custom Voice: Real-time multimodal is becoming infrastructure
Custom Voice and Runway’s Real-Time Multimodal Layout
Runway has quietly added custom voices to Characters. This is more than a feature add: it moves enterprise AI from static text agents to dynamic video personas, further squeezing ElevenLabs and Synthesia on integrated inference. The feature launched about a month after Characters debuted on March 9, 2026.
People are watching the ethical issues around "voice cloning," but what truly deserves attention is Modal's low-latency, scalable inference, which turns conversational AI into deployable infrastructure. Investors still betting on fragmented voice tools may be overlooking this integration path. Runway's API is also positioned to ride the funding momentum in acoustic AI, which drew roughly $1.23 billion in January 2026.
My take: With Modal’s global low-latency network, Runway turns voice from a functional module into part of enterprise-grade multimodal infrastructure.
Market and Hype: No Buzz Doesn’t Mean It Isn’t Important
Few KOLs are retweeting it on Twitter, and there is little discussion at the technical level, but this is mostly a communications problem. The news was released midweek with no flashy demo, so it was passively "de-noised"; that is a separate matter from the changes underway in the industry. Rather than obsessing over cloning ethics (Runway explicitly requires authorization, which is industry practice), the real deciding factors are scale, SLAs, and systems integration. From the perspective of enterprise rollout:
Conclusion: Enterprise customers want P&L results. An integration-oriented tech stack is more likely to be embedded into processes, win SLAs, and iterate steadily.
Valuation Repricing in the Quiet
"No retweets or replies" doesn't mean "not important." The voice segment has plenty of fundraising, but it generally gets stuck at system integration. The global low-latency inference collaboration between Runway and Modal, reached on March 26, 2026, clearly confirms Characters' enterprise positioning (customer service, training, marketing, etc., with partners including the BBC). This challenges the old belief that "voice is just a plug-in module," and it will force Google DeepMind and Meta to accelerate their video-agent roadmaps. Industry data: 88% of companies are using AI, but only 6% use it well; Runway's multimodal tech stack is closer to the structural need for workflows that can actually ship.
Bottom-line judgment: Runway’s custom voice strengthens its multimodal moat, and an integration-oriented tech stack is becoming the default choice—the profit margins of standalone voice tools are very likely to be compressed.
Importance: High
Category: Product Launch | Industry Trends | Market Impact
Conclusion: The thesis of an "integration-oriented multimodal tech stack" is still at the "right, but early" stage. The winners are builders and early-to-mid-stage funds willing to embed voice-video agents directly into workflows; pure-play voice vendors and late entrants are relatively disadvantaged.