"Farming Lobsters" - Is the Banking Industry Refusing to "Follow the Trend"?


Recently, the open-source AI agent OpenClaw has attracted widespread industry attention. Because its icon is a red lobster, it has been nicknamed “Lobster,” and deploying and running it locally has come to be called “raising lobsters.”

The most attractive feature of “raising lobsters” is the agent’s autonomous execution of complex tasks, which offers new possibilities for improving work efficiency. OpenClaw can connect communication software with large AI models and independently handle file management, email sending and receiving, data processing, and other operations on a local computer. Its flexible deployment and autonomous capabilities have made it popular among many users.
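How such an agent works is easiest to see in code. Below is a minimal, hypothetical sketch of the tool-calling loop that agents of this kind typically run; it is not OpenClaw’s actual implementation, and `call_llm`, the tool names, and the JSON request format are all assumptions made for illustration.

```python
import json
import os

def list_files(path="."):
    """Local file management stand-in: list a directory."""
    return os.listdir(path)

def send_email(to, subject, body):
    """Stand-in for a real mail integration; only prints here."""
    print(f"Would send to {to}: {subject}")

TOOLS = {"list_files": list_files, "send_email": send_email}

def call_llm(messages):
    """Placeholder for any large-model API. Assumed to return either a JSON
    tool request like {"tool": ..., "args": {...}} or plain final text."""
    raise NotImplementedError("wire up a real model API here")

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # bound the loop so autonomy stays finite
        reply = call_llm(messages)
        try:
            request = json.loads(reply)
        except (TypeError, ValueError):
            return reply  # plain text means the model is done
        if not isinstance(request, dict):
            return reply
        tool = TOOLS.get(request.get("tool"))
        if tool is None:
            return reply
        result = tool(**request.get("args", {}))
        messages.append({"role": "tool", "content": str(result)})
    return "step limit reached"
```

The loop is what gives the agent its autonomy, and also what worries banks: each iteration lets the model decide, unsupervised, which local operation runs next.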

Currently, how does the banking industry view “raising lobsters”? Will it be widely deployed and promoted in the future?

A reporter from the Financial Times found that, so far, no bank has deployed OpenClaw across the entire organization.

“Outside of banking scenarios, experiencing ‘raising lobsters’ in daily life should be fine, but connecting it to the bank’s internal network is not allowed,” said an employee of a joint-stock bank.

It is reported that some banks have issued internal risk warnings, prohibiting employees from building or deploying OpenClaw during business operations. Others have conducted internal risk self-assessments to define application boundaries.

At present, the banking industry generally takes a cautious attitude toward OpenClaw, strictly prohibiting its deployment in core business scenarios and holding the line on financial data security and compliance.

The banking industry’s caution stems from the sector’s extremely high requirements for security and compliance. “OpenClaw defaults to high system permissions and weak security configurations, making it easy for attackers to exploit; it could become a breach point for stealing sensitive data or illegally controlling transactions. This conflicts with banks’ high standards for security and compliance,” Tian Lihui, a finance professor at Nankai University, told the Financial Times.
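To make the mitigation concrete, here is a hypothetical least-privilege wrapper of the kind a bank would demand before any agent touched internal systems: tools must be explicitly whitelisted, and file access is confined to a sandbox directory. The tool names and policy fields are assumptions for illustration, not OpenClaw configuration.

```python
# Hypothetical least-privilege wrapper for agent tool calls; illustrative only.
ALLOWED = {
    # Only directory listing is permitted, and only under /sandbox; no email,
    # no shell, no network tools in a banking setting.
    "list_files": {"path_prefix": "/sandbox"},
}

def guarded(tool_name, tool_fn):
    """Wrap a tool so every call is checked against the whitelist."""
    def wrapper(**kwargs):
        policy = ALLOWED.get(tool_name)
        if policy is None:
            raise PermissionError(f"tool {tool_name!r} is not whitelisted")
        prefix = policy.get("path_prefix")
        path = str(kwargs.get("path", ""))
        if prefix and not path.startswith(prefix):
            raise PermissionError(f"{path!r} is outside {prefix!r}")
        return tool_fn(**kwargs)
    return wrapper

# Usage: wrap every tool before handing the registry to the agent, e.g.
# TOOLS = {name: guarded(name, fn) for name, fn in RAW_TOOLS.items()}
```

The design inverts the permissive default Tian describes: a tool absent from the whitelist simply cannot run.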

The security risks associated with “raising lobsters” have also attracted the attention of relevant authorities.

On March 11, the Cybersecurity Threats and Vulnerabilities Information Sharing Platform of the Ministry of Industry and Information Technology issued “Six Do’s and Six Don’ts” recommendations for preventing security risks of the OpenClaw (“Lobster”) open-source intelligent agent, warning that using the agent in financial transaction scenarios may lead to erroneous operations or account takeovers. The China Internet Finance Association has also issued risk alerts highlighting security risks in financial applications.

Luo Feipeng, a researcher at Postal Savings Bank of China, said that ambiguity over compliance and responsibility further deepens banks’ hesitation to adopt the tool. “Currently, there are no unified standards for AI agent applications in the financial industry, and OpenClaw’s autonomous execution makes it difficult to draw the responsibility boundary between machine and human.”

In fact, the case of “raising lobsters” points to a broader question worth discussing: where should the boundaries of AI applications in finance lie? Banks’ cautious attitude toward OpenClaw is not a rejection of AI technology; it reflects a problem-specific, industry-aware approach that balances development with security and advances through exploration.

On March 11, the People’s Bank of China’s 2026 Technology Work Conference explicitly required that, in 2026, the integration of industry and technology be deepened and that AI applications in finance be advanced in a steady, safe, and orderly manner to unleash the momentum of digital and intelligent development.

Industry experts note that banks have never stopped exploring AI agents. Applications have already been implemented in low-risk, non-core scenarios such as customer service assistance, policy document retrieval, and meeting minutes generation.

Tian Lihui believes that banks should continue to explore AI agent applications cautiously. “This requires small-scale validation. Models must be deeply adapted and deployed privately, and a comprehensive AI governance system should be established to ensure data security at the source. Only once the technology matures and industry standards are clear should expansion into core business areas be cautiously evaluated.”

Luo Feipeng suggests that banks should adhere to a principle of cautious deployment, starting with small pilot projects, focusing on low-risk scenarios, and gradually expanding after verifying effectiveness; they should establish a full-process data security system, using anonymization and encryption technologies, and clearly define data usage boundaries.
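As an illustration of the anonymization step Luo describes, the sketch below masks card and phone numbers before any text could leave the bank’s environment. The regular expressions are deliberately simplified assumptions; real deployments would need patterns matched to each institution’s data formats.

```python
import re

# Hypothetical PII-masking pass run before text reaches any model.
ACCOUNT_RE = re.compile(r"\b\d{12,19}\b")  # card / account numbers
PHONE_RE = re.compile(r"\b1\d{10}\b")      # mainland-China mobile numbers

def mask_account(match):
    digits = match.group()
    return digits[:4] + "*" * (len(digits) - 8) + digits[-4:]

def anonymize(text):
    text = ACCOUNT_RE.sub(mask_account, text)
    text = PHONE_RE.sub("***********", text)
    return text

print(anonymize("Card 6222020200112233445, contact phone 13812345678"))
# Card 6222***********3445, contact phone ***********
```

Masking at the source keeps sensitive fields out of the model’s context entirely, which is stronger than trusting the agent to handle them correctly.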

Source: Financial Times Client

Reporter: Zhao Meng

Editor: Liu Nengjing
