Industry experts gather to discuss thoughts and breakthroughs in the AI Agent era
Today, the Agent economy is no longer a sci-fi concept. It brings not only a leap in efficiency, but also a reshaping and reallocation of how economic organizations are structured. In particular, the open-source project OpenClaw has gone viral worldwide, further pushing large models from lab experiments into large-scale, production-ready applications, as players rush into the battle for the Agent entry point.
So, which large model should you choose? Will token resources suffice for long-term use? If you don't follow the OpenClaw (lobster) trend, will you be left behind by the times? In this accelerating AI transformation, how should individuals position themselves and break through?
With these questions in mind, on April 3, Xujiahui Innovation & Technology, the Shanghai Distributed Consensus Technology Association, PANews, and the ManKun Law Firm jointly hosted a themed event called “Don’t ‘Lobster’ Panic.”
In the keynote speech "Embracing the Unpredictable AI Wave," Li Chenxing, Chief Architect at Conflux Tree, said that at the current stage of the technology, giving AI more autonomy, rather than over-constraining it with humans' limited experience, is an inevitable trend. The "careless" mistakes AI currently exhibits essentially stem from its difficulty, in complex scenarios, in reliably capturing key contextual constraints and remembering them over time. From a technical-architecture perspective, AI relies mainly on parameter memory, context memory, and external memory, but these mechanisms still face problems such as difficulty of updating, limited window sizes, and inefficient retrieval. Future efforts should therefore focus on strengthening external-memory retrieval, exploring continuous learning and experience-reuse mechanisms, and gradually consolidating experience-based memory through hands-on practice in vertical domains, so as to improve the completeness and reliability of AI decisions in real-world complex scenarios.
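The three memory layers Li Chenxing names can be illustrated with a toy sketch: parameter memory is frozen in the model's weights, context memory is rebuilt per prompt, and external memory is a store the agent queries on demand. Every name below (`ExternalMemory`, `recall`, `build_prompt`) is invented for illustration, not taken from any real system he described.

```python
from dataclasses import dataclass, field

@dataclass
class ExternalMemory:
    """A toy external-memory store: facts are written once and later
    recalled by keyword overlap, outside the model's context window."""
    notes: list = field(default_factory=list)

    def remember(self, text: str) -> None:
        self.notes.append(text)

    def recall(self, query: str, k: int = 2) -> list:
        # Score each note by how many words it shares with the query.
        words = set(query.lower().split())
        scored = sorted(
            self.notes,
            key=lambda n: len(words & set(n.lower().split())),
            reverse=True,
        )
        return scored[:k]

def build_prompt(memory: ExternalMemory, question: str) -> str:
    """Context memory is rebuilt per call: retrieved notes + question."""
    context = "\n".join(memory.recall(question))
    return f"Known facts:\n{context}\n\nQuestion: {question}"

memory = ExternalMemory()
memory.remember("The deploy pipeline requires a signed release tag.")
memory.remember("Team standup is at 10:00 on weekdays.")
print(build_prompt(memory, "What does the deploy pipeline require?"))
```

The point of the sketch is the division of labor: the store persists across calls and can grow without bound, while only the few recalled notes consume space in the limited context window.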
He also pointed out that the core progress of AI today is mainly reflected in strengthened autonomous analysis and reflection capabilities. In the future, as memory capabilities improve, it is expected to break through key bottlenecks and have a profound impact on various industries. For example, currently, the potential of digital identity and digital payment systems has long been constrained by development and user barriers, while AI may unlock that value by lowering development costs and replacing parts of the user learning process with agent-based approaches. Overall, AI should not be viewed as a threat to jobs, but as a key tool for boosting productivity and creating new opportunities. Individuals and industries should maintain an open mindset and proactively explore paths for integrating AI.
As Feng Heqing, Product Architect of Tencent Cloud WorkBuddy, noted, with the significant improvement in large-model capabilities, AI has evolved from early basic assistance such as code completion to independently completing complex tasks. The core capabilities of a customizable Agent are reflected in end-to-end task support, multi-role collaboration, a layered memory system, and context-based intelligent task decomposition. By coordinating multiple agents, it enables data handoff between tasks and parallel processing; at the security layer, it relies on local data storage and human confirmation of critical operations to ensure data safety. In application terms, WorkBuddy already covers typical office scenarios such as resume screening, automatic PPT generation, data analysis, and weekly-report consolidation, and its enterprise-level integration capabilities can connect to systems such as WeCom (WeChat Work) for unified task management. Its technical architecture emphasizes full-stack in-house development, isolated execution environments, and enterprise-grade permission controls, supporting both local and cloud deployment. As for the business model, it targets enterprise R&D teams and users in high-frequency digital office roles. Overall, WorkBuddy aims to improve enterprise productivity through customizable Agent capabilities and multi-task coordination, and, by continuously optimizing task decomposition and expanding its ecosystem, to further strengthen its fit and deployability in complex enterprise scenarios.
Biteye and XHunt founder Teddy shared insights on digital-employee practice, large-model applications and costs, technical configuration and security risks, and better collaboration methods. On digital-employee practice: to reduce model hallucinations and code error rates, a higher-level review Agent should perform a second verification of code generated by lower-level Agents, forming a mandatory code-review process. Since Agent-written code still contains bugs, errors can be reduced through standardized development processes, stronger prompt design, and multi-round validation mechanisms. In operational scenarios, posting frequency needs careful control, and stability is best ensured by unified scheduling through backend APIs. In complex team-collaboration environments, Discord is usually better suited than Telegram for Agent coordination and task distribution. In resource management, token consumption deserves special attention. In addition, Agent systems still require human time for training, tuning, and behavior correction.
Regarding installation and deployment of OpenClaw, Teddy suggested running it on an idle computer or a Mac mini for a higher degree of autonomous control. The code is fully open source, emphasizes privacy protection, and connects to an international ecosystem, but the installation and configuration threshold is relatively high. During use, be especially cautious when modifying model and channel configurations, to avoid system abnormalities caused by misconfiguration; if problems occur, tools such as Grok and Gemini can assist with troubleshooting. At the security level, risks such as prompt-injection attacks and malicious skill injection must also be guarded against. On resources and cost, token consumption needs to be kept under control to avoid excessive running costs.
In his keynote, Zhao Xuan, a partner at the ManKun Law Firm, laid out three major legal issues entrepreneurs must address in the AI era, along with solutions. The first is the "organizational shell": the "false separation" created by a one-person company (OPC), which forms an independent entity on the surface but struggles to truly isolate responsibility and risk. The fix is to establish real physical and legal separation, including bringing partners into the structure, using dedicated corporate credit cards, and inserting AI disclaimers and indemnification caps into contracts. The second is ownership of core assets: effort does not equal rights. You need to prove your own control, fully record the creation process, and preserve evidence. The third is the systemic "pulled plug" risk created by platform hegemony, including one-sided platform terms and technical lock-in; the answer is to separate core data from third-party services, plan replacement solutions in advance, and introduce decentralized technologies.
In the roundtable discussion titled “From frenzy to clarity—AI’s real needs and false propositions in the eyes of VC,” multiple investors shared perspectives on AI’s development stage, application boundaries, and investment logic.
Jupiter Capital founding partner Ju Xiezi believes AI is still in an early phase of development; reaching the stage where the user experience is mature and AI is widely considered "meaningful" will take more time. He pointed out that AI technology iterates extremely fast, and technical leadership alone rarely forms a long-term moat, so investments should focus on irreplaceable foundational capabilities, such as core resources like compute. At the application layer, he gave an example: tools like the "lobster" are unfriendly to ordinary programming users, but in the future they may be better encapsulated into vertical applications such as a "family doctor" that gives professional advice based on real-time health data. He added that on the enterprise side, AI can replace information-production tools such as research reports, but not the final decision-maker; it can only exist as a decision-support tool.
Tang Yi, founding partner of Enlight Capital, said that in the current AI investment landscape, it’s hard to form clearly non-consensus opportunities. The rapid iteration of large models may continue to “flatten” the advantages of application-layer companies. He is relatively optimistic about the direction combining Web3 and AI, believing that they each represent advanced productive forces in their respective fields. Regarding open-source tools like OpenClaw, he believes they are like giving large models “hands” and “feet,” strengthening their connectivity with external systems and social applications. But at the same time, they also bring higher security and data risks, so they require complex configuration and are not suitable for ordinary users. For now, a more ideal path is to improve overall usability and experience through encapsulation.
Yinghao, an investor at First Rule Ventures, looked at opportunities from the user and product perspective, focusing on hard-to-crack "deep water" industry applications, AI creation, and scenarios combining software and hardware. He evaluates a project's potential through user behavior and interaction data. He said that not personally trying every emerging AI product does not mean missing key trends, because technical capabilities are often rapidly modularized and folded into existing product ecosystems.
Compared with a single product, he pays more attention to three long-term structural changes: first, whether AI interaction is forming new memory carriers, so that users’ cognition and work are stored and accumulated in a particular system; second, whether this memory has the ability to transfer across products, or whether it will gradually become bound to a single product—thereby creating high migration costs and experience lock-in; third, whether new super entry points will emerge to become the core hub for AI interaction and traffic distribution.
In AI product use, Zhao Xuan, a partner at the ManKun Law Firm, said he most often uses tools for data processing, retrieval, and analysis, and expects more integrated products to combine these capabilities in the future. He also stressed that in AI entrepreneurship it is especially important to avoid catastrophic single-point failures: companies should prioritize key legal design early on, such as data compliance, arbitration clauses, and disclaimer clauses, so that when uncontrollable risks arise they achieve as much risk isolation and liability protection as possible, rather than collapsing from a single point of failure. Looking ahead, he expects Agents to become the main economic actors, responsible for data acquisition, information purchasing, strategy execution, and even cross-system transactions, forming a machine-to-machine system of economic activity and payments.
In a roundtable discussion themed "N Ways to Open AI—Let's Talk About Innovators' Opportunities," guests explored the changes AI is bringing from different perspectives. Matrix Intelligence CEO Zeno proposed that users can link multiple devices into one by modifying their own scripts or plugins, enabling synchronized multi-device memory and a consistent state so that information isn't lost and tasks don't break off; daily cleanup and review mechanisms can then maintain system stability. Compared with off-the-shelf tools, deep customization built on enterprise-grade permissions or platform capabilities is more efficient, more flexible, and better suited to building workflows that match personal habits. Looking ahead, he believes AI will become a unified entry point: users interact through a single AI core that calls various tools and systems to complete every task. With use, the AI keeps accumulating a user's memories, preferences, and workflows, creating a data-and-capability flywheel that understands the user better and works more efficiently over time. Under this trend, individuals may achieve productivity far beyond traditional human labor simply by configuring AI systems and paying for subscriptions, significantly widening efficiency gaps between people.
ClawFirm.dev co-founder 0xOlivia disclosed that in actual AI use, there are still issues such as system instability and fragmented memory and automation capabilities. Users need to keep piecing together various tools and scripts like assembling LEGO. For non-advanced users, directly adopting mature commercial platforms and combining official applications with continuous iteration capabilities is often more stable and efficient than highly fragmented self-built systems. In addition, introducing open-source components can further enhance data processing and content generation capabilities. She emphasized that the main limitations of AI today are not inherent in model capability itself, but rather that the engineering way of using it hasn’t fully matched what models can do—so there is still tremendous room for optimization and practical deployment. In the future, as large-model capabilities are rapidly improving, AI application scenarios will progressively cover all aspects of work and life, and will continue to integrate with different product forms.
Biteye/XHunt founder Teddy, discussing AI digital employees, pointed out that AI can be connected to internal systems via APIs or automation interfaces to take on specific execution tasks such as code generation, implementing requirements, and processing content, while humans focus on product design and requirement definition, thereby preserving key decision-making power. This collaboration model is more stable and extensible: it not only improves overall development efficiency but also significantly reduces error rates, making AI less a single tool than a schedulable, manageable outsourcing team. Teddy also emphasized that any highly process-based, repetitive work is something AI can potentially transform or replace. Even if early results are unstable, over the long run it will keep improving and gradually raise productivity. In complex task and management-decision domains, AI is already showing notable assistive capability and penetrating into higher-level business scenarios.
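The division of labor Teddy outlines can be sketched as a dispatcher that routes execution-type tasks to AI handlers while refusing decision-type tasks, which stay with humans. The `AgentDispatcher` class and its task names are invented for illustration, not taken from any real product.

```python
from typing import Callable

class AgentDispatcher:
    """Routes execution-type tasks to registered AI handlers;
    anything outside the whitelist is escalated to a human,
    preserving human decision-making power."""
    EXECUTION_TASKS = {"generate_code", "process_content"}

    def __init__(self):
        self.handlers: dict[str, Callable[[str], str]] = {}

    def register(self, task_type: str, handler: Callable[[str], str]):
        self.handlers[task_type] = handler

    def dispatch(self, task_type: str, payload: str) -> str:
        if task_type not in self.EXECUTION_TASKS:
            return f"ESCALATED to human: {task_type}"
        return self.handlers[task_type](payload)

dispatcher = AgentDispatcher()
dispatcher.register("generate_code", lambda spec: f"# code for: {spec}")
print(dispatcher.dispatch("generate_code", "CSV parser"))
print(dispatcher.dispatch("product_design", "roadmap"))  # stays with a human
```

The whitelist is the key design choice: the AI "outsourcing team" can only be invoked for tasks explicitly classified as execution work, so product design and requirement definition never get silently automated.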
Senior AI application development engineer Douge added that people broadly agree on the trend toward AI outsourcing, automation, and tool-based collaboration. From an enterprise perspective, however, the more important concerns are security, permission management, employee-collaboration mechanisms, and asset accumulation. The market currently offers multiple AI development frameworks and tool ecosystems, each emphasizing a different direction such as lightweight design, low-code, deep integration, or security controls; enterprises choosing among them need to balance flexibility against controllability and design architectures around real business scenarios. But truly understanding and deploying these AI systems cannot stay at the theoretical level; it also takes real investment in implementation and use. He stressed that AI is accelerating the reshaping of workflows and organizational structures. Individuals and enterprises alike must adapt quickly, continuously learning and applying AI as a tool to improve efficiency, or risk being left behind by the pace of technological iteration.