Articles

Newsletter: The Code of Law | A Lawyer Went Lobster Farming

Introduction – OpenClaw and AI Agents

 

OpenClaw has taken the world by storm. The excitement is especially vivid in Beijing and Shenzhen where thousands of people have queued up to have it installed by Tencent and Alibaba engineers.

 

For the uninitiated, OpenClaw is an open-source AI application recognizable by its lobster logo. “Lobster farming” has become shorthand for setting up and running your own OpenClaw AI agent.

 

The author has tried “farming a lobster” firsthand, and this article shares that experience.

Profile pic of my OpenClaw agent, generated by Nano Banana 2

 

What sets OpenClaw apart is its agentic design and accessible interface. Traditional AI interactions were essentially a back-and-forth Q&A: you asked, the AI answered. OpenClaw operates differently. Rather than simply responding to queries, it acts as a personal agent – one that can plan, reason, use tools and execute tasks on your behalf, independently and proactively, within the boundaries you set. It can also self-improve by learning from its own mistakes.

 

In practical terms, this means OpenClaw can understand your natural language instructions, break them into steps, and carry them out – including reading and modifying files on your computer (for instance, editing and saving code files). Give it more tools, and it can do more: browse the web, send emails, or even execute stock trades through your online brokerage.
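The plan-then-execute pattern described above can be sketched in a few lines. This is a toy illustration only, not OpenClaw’s actual code: the “planner” below is a hard-coded stand-in for the language model (a real agent would ask the model to produce the plan), and the only tools exposed are the file read/write operations mentioned in the text. All function names here (`toy_planner`, `run_agent`, and the tool names) are invented for the sketch.

```python
from pathlib import Path

# Toy "tools" the agent may invoke. Real agents expose many more
# (web browsing, email, shell); file read/write mirrors the example
# in the text.
def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

TOOLS = {"read_file": read_file, "write_file": write_file}

def toy_planner(instruction: str) -> list[tuple[str, tuple]]:
    """Stand-in for the language model: breaks a natural-language
    instruction into tool-call steps. A real agent would obtain this
    plan from the model itself."""
    if instruction.startswith("copy "):
        src, dst = instruction.removeprefix("copy ").split(" to ")
        return [("read_file", (src,)), ("write_file_from_last", (dst,))]
    return []  # instruction not understood: no steps

def run_agent(instruction: str) -> list[str]:
    """Plan first, then execute each step, feeding each result
    forward to the next step - the core agentic loop."""
    transcript, last_result = [], ""
    for name, args in toy_planner(instruction):
        if name == "write_file_from_last":
            result = TOOLS["write_file"](args[0], last_result)
        else:
            result = TOOLS[name](*args)
        last_result = result
        transcript.append(f"{name}{args} -> {result[:40]}")
    return transcript
```

Given an instruction like `run_agent("copy notes.txt to backup.txt")`, the loop reads the source file, then writes its contents to the destination, returning a transcript of the steps taken. The real system adds the parts that make this powerful and risky in equal measure: model-generated plans, a much larger tool set, and autonomy over when to act.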

 

Another distinctive feature is that OpenClaw works through instant messaging apps such as WhatsApp, Telegram, and Discord – giving interactions the feel of chatting with a real human assistant rather than operating a piece of software.

 

Legal and Security Concerns

 

When you install OpenClaw, you are greeted with a notice that it was built as a hobby project. Being open-source also means that its code is publicly visible – which, in the wrong hands, creates opportunities for bad actors to exploit its vulnerabilities or craft malicious plug-ins designed to extract your data. Some industry observers have gone so far as to describe personal AI agents like OpenClaw as a “security nightmare”[i].

 

When your AI agent exposes someone else’s data

 

Leaking your own personal information is, at worst, your own problem. But if an attack through OpenClaw causes you to expose someone else’s data, it can lead to serious legal consequences.

 

In Hong Kong, personal data is protected under the Personal Data (Privacy) Ordinance (Cap. 486) (“PDPO”). Under PDPO, the party that controls how the personal data is collected, held, processed, or used is called the “data user”. A “data processor” is any person who processes personal data on behalf of another person (a data user), instead of for his/her own purpose(s). “Data subjects” are those individuals whose data is being processed.

 

One important nuance: data processors are not directly regulated by the PDPO. It is the data user who bears the responsibility to ensure, through contracts or other means, that the data processor they engage meets the required standards.

 

The PDPO sets out six data protection principles (“DPPs”) governing how personal data must be handled. The one most relevant here is DPP 4 – Data Security, which requires data users to take all practicable steps to guard against unauthorised or accidental access, processing, erasure, or loss of personal data. The assessment of what is “practicable” takes into account factors such as:

 

  • The kind of data and the potential harm that could result from a breach;
  • Where the data is physically stored;
  • Security measures built into the storage equipment;
  • Steps taken to ensure that those with data access are trustworthy and competent; and
  • Measures in place to protect data in transit.

 

Critically, the Office of the Privacy Commissioner for Personal Data (“PCPD”) has made clear that where data is entrusted to a data processor, the data user remains responsible for that data processor’s actions[ii].

 

What this means in practice: if you are a data user and your outsourced data processor deploys an AI agent that goes rogue and leaks your data subjects’ personal data, you may well be on the hook – even if the breach was not caused by you and even if the data processor resides outside Hong Kong.

 

The PCPD also published the Artificial Intelligence: Model Personal Data Protection Framework in 2024. While agentic AI was not yet on the public radar at that time, the framework offers sound and practical guidance on AI governance, risk management, and human oversight – all of which apply with full force in the OpenClaw context.

 

Criminal liabilities

 

Breaching a DPP is not, by itself, a criminal offence. However, if the Privacy Commissioner for Personal Data (the “Commissioner”) finds a contravention on the part of a data user, the Commissioner has the power to issue an enforcement notice to the data user. Failure to comply with that notice becomes a criminal matter – carrying a fine of up to HK$50,000 and up to two years’ imprisonment on first conviction.

 

There is also the “doxxing” offence to consider. Introduced by the Personal Data (Privacy) (Amendment) Ordinance 2021, it is an offence to disclose a person’s personal data without their consent with an intent to cause harm, or being reckless as to whether harm would or would likely be caused, to that person or their family member[iii]. A conviction for the more serious form of this offence carries a fine of up to HK$1,000,000 and five years’ imprisonment.

 

At first glance, AI agents and doxxing seem entirely separate concerns. But the doxxing offence catches not only intentional wrongdoing – it also covers recklessness. Given that using an agentic AI to handle personal data is inherently unpredictable, the risk of unwittingly satisfying that mental element should not be dismissed. If agentic AI is genuinely necessary in your capacity as a data user, proper ringfencing and careful configuration designed with professional input are not optional extras.

 

One further point: criminal liability for doxxing generally rests with the individual who commits the act, so a data user will not typically face vicarious criminal liability for their data processor’s conduct.[iv] That said, directors and senior officers who consent to or connive in an offence may be personally liable.[v]

 

Reflections of a Lobster Farming Hobbyist

 

This article is not a technical review and the author has not tested OpenClaw’s full feature set. The following observations are those of an enthusiastic amateur:

 

  • Safety first – OpenClaw should be approached with caution. I host it on a brand-new dedicated virtual private server (VPS) with no personal data stored there. If the agent behaves unexpectedly, I can simply wipe the VPS. Some users recommend a brand-new or decommissioned device for this purpose. One lesson learned the hard way: do not use your personal WhatsApp account as the communication channel with OpenClaw.

 

  • Technical know-how required – As of writing (March 2026), setting up and maintaining OpenClaw still demands a fair amount of computer knowledge. When things go wrong – and they will – the pressure mounts quickly. If technology is not your domain and support is out of reach, it may be worth waiting for a more user-friendly AI agent to enter the market.

 

  • Cost – This is a common complaint. OpenClaw can consume tokens at a significant rate (“tokens” are the units of information the AI processes; more units processed means higher costs). In my experience, these costs can be managed by selecting the more economical AI models and applying a few usage techniques (a topic for another article).

 

  • Performance – How capable is OpenClaw as an AI agent? Two examples stand out. First, I asked it to write a simple “Hello, World!” programme – a classic beginner’s exercise. Rather than producing a handful of lines of code, OpenClaw went further and established a full professional project structure from scratch. Far from being overkill, I found this a genuine demonstration of the agent’s capacity for forward planning.

 

Project structure and folders created by my “coding sub-agent”.

 

In the second instance, I encountered a security incident with my WhatsApp channel. I put the problem to OpenClaw, and after several exchanges, it identified the root cause, reasoned through the errors and resolved the issue by modifying the relevant configuration files. It took a few iterations – but it got there.

 

Concluding Remarks

 

This article neither endorses nor discourages the use of OpenClaw. What is clear is that it represents a genuinely new mode of human-AI interaction. Joining the community means tapping into the collective experience of millions of users worldwide, and the knowledge gained along the way will serve you well when the next AI product arrives – which it surely will, and soon.

 

As with any new and popular pursuit – safety first. Good luck with your AI journey.

 

[ii] See PCPD’s information leaflet on Outsourcing the Processing of Personal Data to Data Processors, available on PCPD’s website. See also section 65(2) of PDPO.

[iii] Section 64(3C), PDPO.

[iv] Section 65(4), PDPO.

[v] Section 101E, Criminal Procedure Ordinance (Cap. 221).