A Brief Analysis of McKinsey's Lilli: What Development Ideas Does It Provide for the Enterprise AI Market?
The McKinsey Lilli case offers key development insights for the enterprise AI market, pointing to a potential market opportunity in Edge Computing plus small models. This AI assistant, which integrates 100,000 internal documents, has not only achieved a 70% adoption rate among employees but is used an average of 17 times per week, a level of product stickiness that is rare among enterprise tools. Below are my thoughts:
Enterprise data security is a pain point: the core knowledge assets McKinsey has accumulated over 100 years, like the domain-specific data accumulated by small and medium-sized enterprises, are highly sensitive and unsuitable for processing on public clouds. Finding a balance where "data does not leave the local environment and AI capability is not compromised" is a real market demand, and Edge Computing is one direction for exploring it;
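The "data does not leave the local environment" idea can be sketched as a minimal on-premises retrieval step: documents are scored entirely locally, with no network call. The sample documents and the toy keyword-overlap scoring below are purely illustrative, not McKinsey's actual implementation.

```python
# Minimal sketch of on-premises document retrieval: all data stays local.
# Documents and the scoring function are illustrative placeholders.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (toy relevance score)."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents by keyword overlap, computed locally."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_k]

docs = [
    "internal memo on pricing strategy for retail clients",
    "engagement report on supply chain optimization",
    "knowledge note on digital transformation roadmaps",
]
print(retrieve("supply chain report", docs))
```

A production system would replace the keyword score with embeddings from a locally hosted small model, but the security property is the same: query and corpus never leave the machine.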
Professional small models will replace general large models: enterprise users do not need a "billion-parameter, all-purpose" general model, but a professional assistant that can accurately answer questions in a specific domain. The generality of large models is inherently at odds with professional depth, and in enterprise scenarios small models are often valued more.
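One concrete reason small models fit enterprise and edge deployment is raw memory footprint: inference weight memory scales roughly as parameter count times bytes per parameter. A back-of-envelope sketch (the model sizes and precisions below are illustrative, not tied to any specific product):

```python
def inference_mem_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory for inference, ignoring activations and KV cache."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model quantized to 4-bit (0.5 bytes/param) fits in a few GB,
# within reach of a single workstation or edge device.
print(inference_mem_gb(7, 0.5))   # 3.5

# A 400B-parameter model at fp16 (2 bytes/param) needs hundreds of GB
# of accelerator memory, i.e. a multi-GPU server cluster.
print(inference_mem_gb(400, 2))   # 800.0
```

The gap of two-plus orders of magnitude is what makes "one small model per enterprise domain" economically plausible where a general frontier model is not.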
Cost trade-off between self-built AI infrastructure and API calls: although the combination of Edge Computing and small models requires a large upfront investment, long-term operating costs drop significantly. If the large model frequently used by 45,000 employees were accessed through API calls, the resulting vendor dependency, growth in usage-driven costs, and data-feedback concerns would make self-built AI infrastructure the rational choice for medium and large enterprises.
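The cost argument can be made concrete with a toy break-even model. All dollar figures below are made-up placeholders, not McKinsey numbers; the point is only that per-call API fees scale with usage while self-built infrastructure is largely a fixed, amortized cost.

```python
def annual_api_cost(employees: int, queries_per_week: float, cost_per_query: float) -> float:
    """Usage-proportional cost of calling an external model API."""
    return employees * queries_per_week * 52 * cost_per_query

def annual_selfhost_cost(capex: float, years_amortized: float, opex_per_year: float) -> float:
    """Fixed cost of self-built inference infrastructure, amortized."""
    return capex / years_amortized + opex_per_year

# Hypothetical inputs: 45,000 employees at 17 queries/week (figures from the
# article), $0.05 per query; $3M of hardware amortized over 3 years plus
# $500k/year to operate.
api = annual_api_cost(45_000, 17, 0.05)
self_hosted = annual_selfhost_cost(3_000_000, 3, 500_000)
print(api, self_hosted, api > self_hosted)
```

With these placeholder numbers the API bill exceeds the self-hosted cost; at lower usage the comparison flips, which is exactly why the break-even depends on scale.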
New opportunities in the edge hardware market: large-model training relies on high-end GPUs, but the hardware requirements of edge inference are entirely different. Chip makers such as Qualcomm and MediaTek are seizing the market opportunity with processors optimized for edge AI. As every company looks to build its own "Lilli", edge AI chips designed for low power consumption and high efficiency will become essential infrastructure.
The decentralized Web3 AI market also stands to benefit: once enterprises drive demand for computing power, fine-tuning, and algorithms for small models, resource scheduling becomes a problem that traditional centralized approaches handle poorly. This will directly create significant market demand for Web3 AI decentralized small-model fine-tuning networks, decentralized computing power service platforms, and the like.
While the market is still debating the boundaries of AGI's general capabilities, it is encouraging to see so many enterprise users already extracting practical value from AI. Clearly, compared with the earlier leaps driven by monopolies on computing power and algorithms, a market focus shifted to Edge Computing plus small models will unleash far greater market vitality.