"Raising Lobsters" Goes Viral in Finance Circle, Fund Companies Take Cautious Stance: Gap Between Ideal and Reality Remains


Daily Economic News reporter: Li Lei | Editor: Peng Shuiping

Recently, the open-source AI agent OpenClaw (commonly known in the industry as the "Lobster") has become a hot topic in financial circles thanks to its proactive execution and chained task-processing capabilities. The public fund industry is also paying close attention, with the market watching whether fund companies will follow suit and deploy it.

According to interviews with several domestic fund companies, because OpenClaw requires broad, high-permission system access and, in use, touches sensitive research and investment data, most fund companies remain cautious; some have outright prohibited deploying the tool on office devices, and some risk control departments have issued early risk alerts.

“Fund companies have high requirements for compliance and data security. For AI agents that need unlimited open permissions, they are likely to be cautious,” an industry insider said.

On the other hand, although companies currently find it difficult to put "raising lobsters" into practice, the tool's value in foundational tasks such as data collection and organization has been recognized. Some companies are exploring it through cloud deployment and sandbox testing while holding to security and compliance bottom lines. Many practitioners are privately testing it on personal devices or cloud hosts and actively learning related skills to embrace this new AI tool.

Industry Attitude Cautious, Practitioners Maintain Exploration Enthusiasm

With its proactive execution and chained task-processing capabilities, OpenClaw has recently gained popularity in the financial sector, becoming a new fintech hotspot.

However, the Daily Economic News learned that public funds, as a data-intensive and heavily regulated industry, are generally cautious about OpenClaw. Many companies have drawn clear red lines for "raising lobsters" in office scenarios, citing concerns over data security, the breadth of permissions it requires, and compliance management.

A senior executive from a large public fund in South China pointed out that fund companies hold high standards for compliance and data security and are bound to be cautious about AI agents that require unrestricted permissions. Another senior executive, from a Shanghai-based public fund, further revealed that the company prohibits deployment on office computers mainly out of "fear of internal confidential data leaks." Since OpenClaw inevitably accesses sensitive research data while collecting and organizing information, this has become a key point of internal oversight.

In practice, many interviewees said their company computers do not allow installing the tool, so deployment in office environments is currently impossible. A professional at a leading public fund told reporters that their risk control team has issued specific reminders about OpenClaw to strengthen compliance defenses in advance. Some practitioners choose safer routes: an employee at a North China public fund said that although various business lines have trialed it, "most people use cloud hosts, which are safer," mitigating data leakage risks through technical isolation.

However, private testing has become routine.

Interviewees revealed that many fund practitioners have started testing OpenClaw on personal devices. Even though some believe “it’s unlikely to be deployed at scale in the short term,” they are still actively learning related skills privately.

The enthusiasm for testing mainly stems from OpenClaw’s efficiency improvements in basic tasks. For example, the aforementioned South China public fund professional pointed out that currently, the tool mainly focuses on data collection, organization, and repetitive work, which are high-frequency, tedious tasks in daily research. It can effectively alleviate practitioners’ workload.

Another mid-sized public fund professional said that AI empowerment of research has become an industry consensus. As a new generation open-source AI agent, OpenClaw naturally becomes a direction for exploration. “Some companies plan to hold internal AI competitions to encourage employees to explore and apply AI tools. Essentially, it’s about actively embracing AI’s role in research and related work.”

Leading Institutions Actively Explore, but Gaps Remain Between Ideals and Reality

In line with the trend of AI technology empowering the asset management industry, some leading fund companies are exploring through cloud deployment and sandbox testing while adhering to safety and compliance standards to fully tap into the potential of open-source AI tools in research, risk management, and other fields.

For example, regarding OpenClaw deployment, GF Fund stated that the company adopts a “proactive research, cautious practice, safe and controllable” approach. Currently, they are steadily exploring new boundaries of AI applications under the premise of ensuring safety.

It is understood that GF Fund uses a cloud deployment model that isolates it from the company’s core production systems. The main purpose of this phased approach is to reduce initial risks by thoroughly testing capabilities in an isolated environment, ensuring no impact on existing business and data. It also encourages internal exploration, providing a safe and convenient environment for employees to test functions and innovate application scenarios, accumulating practical experience.

Tianhong Fund’s relevant leader said that the company is quite positive about OpenClaw, planning to install it via sandbox for employee trials with some permission controls. Discussions are ongoing about connecting to external paid models or internal large models.

“Currently, there is a gap between ideals and reality: task stability is an issue, with complex tasks prone to failure; security and privacy pose high risks of vulnerabilities and data leaks; the ecosystem quality has flaws; and it will take time for it to become ‘usable by ordinary people, trusted by enterprises, and safe and reliable,’” the leader noted. “Our approach is cautious testing.”

He also emphasized that various open-source AI tools will accelerate the digital and intelligent transformation of asset management businesses. Some AI applications embedded in workflows have already demonstrated efficiency gains and are gradually gaining acceptance among research, sales, and other business personnel. As technology evolves, this will create a virtuous cycle of accelerated development.

“Artificial intelligence itself has very powerful functions. GF Fund’s exploration and practice in AI will continue,” the relevant leader added. “But we always insist that the application of fintech must prioritize protecting investors’ interests and ensuring system security. In the future, we will steadily promote the deep integration and application of related technologies under the premise of safety, controllability, and compliance.”
