Lately I’ve been experimenting heavily with “OpenClaw”-style tools, both in the office and on my personal computer. But the more I use them, the more exhausted I feel.
01
There are far too many “stories” circulating online about the “one-person company”—so many, in fact, that they’ve become a source of real anxiety. The narrative goes: simply tend to your “lobsters” (your automated systems), take a nap, and wake up to find a flood of massive orders or revenue waiting for you. It sounds almost too good to be true.
Naturally, I’ve begun scrutinizing every step of my workday through the lens of “Can I claw this?”—as if wielding an invisible hammer, ready at any moment to pound every task into a nail that fits the tool.
But the problem is, within a corporate setting, the truly complex issues are never about how to do something.
Rather, they boil down to: Who is actually supposed to do this? Do I even have the authority to do it? And once it’s done, will my superiors actually validate or accept the result?
These are questions that AI cannot actually answer. It merely exposes them, leaving them for you to resolve before the AI can step in and do its part.
Ironically, the “one-person company” model is actually far better suited for leveraging AI effectively. This is because the very nature of the “one-person” structure inherently eliminates the most fatal forms of collaborative friction: there are no squabbles over jurisdictional boundaries, no inter-departmental tug-of-wars over authority, and no internal drain of energy caused by passively inheriting tasks you didn’t ask for.
02
Of course, quite a few repetitive tasks genuinely have been automated away.
However, those past tasks—the ones that were “time-consuming and hard on the eyes, but didn’t tax the brain”—actually served as a form of mental buffer.
Much like the mindless act of peeling potatoes, these moments allowed the brain to wander, recharge, and even subconsciously process complex problems in the background.
Now, these precious “slacking-off moments” have been paved over by AI. I find myself thrust into a state of continuous, high-intensity cognitive engagement.
“Yet another new AI tool I need to learn…”
“Another new AI buzzword has popped up… ‘HARNESS’…”
“DeepSeek just proposed a brilliant approach—something I’d never considered before; it’s definitely worth a try…”
“The AI just spat out this massive summary; I’d better check it carefully to make sure there aren’t any ‘hallucinations’ (factual errors) lurking inside…”
AI has successfully eliminated inefficient manual labor, yet in doing so it has multiplied the cognitive burden of “high-quality thinking.”
And layered on top of this is another reality: the rest of the team is using AI, too.
In the past, I could simply glance at a subordinate’s work—perhaps judging their level of effort by the sheer number of pages they had written. That is no longer possible.
AI has caused the “visible traces of effort” to vanish, leaving behind only one metric: the quality of the thinking itself. If the goal is to assess their level of competence, I am compelled to ask: “Just how much of this did you actually write?” Or, more directly: “What prompts did you use?”
If the goal is simply to obtain results, I find myself having to think even harder: “By what standards should I evaluate this output?”
03
It isn’t just my own team using AI; other departments are churning out AI-generated reports in droves. Beneath the voluminous text, more and more of the content amounts to, as a popular Chinese meme puts it, “listening to you speak is exactly like listening to you speak.” In other words, it says nothing at all.
In the past, typing things out manually—even if it was just nonsense—required physical effort. Now, however, the cost of producing such “nonsense literature” has dropped dramatically.
When everyone’s cost of production approaches zero, the cost of consumption (the effort required to process all that information) begins to skyrocket. So I’m forced to fight fire with fire, or rather to use AI to defeat AI, employing AI tools to summarize the reports and distill them down to their core conclusions for me.
Yet, despite this, a faint sense of unease lingers.
“What the AI suggests here sounds novel—but could it just be a hallucination?”
“What the AI suggests here sounds novel—if I don’t give it a try, will I miss out on something? And what if my boss has already seen it?”
“The AI has generated so much content—is there perhaps an even better AI tool out there that could summarize it all, or simply cut straight to the definitive conclusion?”
…
04
I’ve come to a new realization: the raw capabilities of large language models don’t differ all that much from one another. The true differentiator is their capacity for memory: their ability to understand context.
To make AI a more effective aid, we’ve begun doing something we never had to do before: constantly and explicitly “explaining ourselves.”
Cast your mind back to the era of traditional search engines; the system would infer our needs by reverse-engineering them from our search terms. Back then, our intentions were often “hidden”—we provided only keywords, and the machine had no access to the full background context.
Even so, Big Data was still able to piece things together from those sparse fragments and offer up “personalized recommendations.”
Now, however, in order to coax more accurate results from an AI, we are compelled to provide background context, specific objectives, limiting constraints, and usage scenarios. We have to articulate the problem even more clearly than we would to another human being. It feels like a scene from an American TV drama where someone visits a therapist and rambles on endlessly.
All in an effort to make the AI “like” us.
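To make that shift concrete, here is a minimal sketch of what this “explaining ourselves” looks like in practice. The `TaskBrief` fields and the `build_prompt` helper are my own hypothetical illustration, not any particular tool’s API; the point is only that a search engine once got three keywords, while an AI now gets all of this.

```python
# Hypothetical illustration: the four things we now spell out for an AI
# that we never spelled out for a search engine.
from dataclasses import dataclass


@dataclass
class TaskBrief:
    background: str      # context the AI cannot infer on its own
    objective: str       # the specific outcome we actually want
    constraints: str     # limits: length, tone, format, scope
    usage_scenario: str  # who will consume the result, and where


def build_prompt(brief: TaskBrief) -> str:
    """Assemble the brief into one explicit, unambiguous prompt."""
    return (
        f"Background: {brief.background}\n"
        f"Objective: {brief.objective}\n"
        f"Constraints: {brief.constraints}\n"
        f"Usage scenario: {brief.usage_scenario}\n"
    )


print(build_prompt(TaskBrief(
    background="Q3 sales dipped in the eastern region; leadership wants causes.",
    objective="Draft a one-page analysis of the three most likely causes.",
    constraints="Under 500 words, no jargon, cite only internal data.",
    usage_scenario="Read aloud at Monday's management meeting.",
)))
```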
And when assigning tasks to my own team, I’ve noticed that my instructions are increasingly beginning to resemble AI prompts: complete with comprehensive information, clear structure, and zero ambiguity.
Yet, at the same time, I find myself subconsciously expecting my team to behave just like an AI—responding instantly, delivering consistent output, and remaining “online” and available at all times.
I get the distinct feeling that if this trend continues, we’ll gradually begin to evolve into something akin to the Trisolarans from The Three-Body Problem: ever more transparent, and thereby ever more efficient. But should humans be completely transparent?
Those parts of us that are vague, contradictory, or even self-deceptive may well be the very source of our creativity—or perhaps our last remaining zone of psychological safety.
In the Three-Body Problem novels, it was precisely through cryptic metaphors and allegories that humanity managed to hold in check the Trisolarans, a civilization operating at a far higher level of efficiency.
Does complete transparency—the act of compressing oneself into data intelligible to an AI—also imply being completely seen through by that AI?
05
Yesterday, I uninstalled QCLAW. While the installation process was indeed convenient, the tool itself proved difficult to use.
For instance, I set up a scheduled task to send a daily news briefing; it worked perfectly during testing. However, when the time came, nothing was sent. Upon checking, I was informed that I needed to configure my WeChat ID; after a frustrating back-and-forth exchange, the system finally managed to locate it.
This entire rigmarole shouldn’t have happened in the first place. Shouldn’t WeChat—the platform itself—know my WeChat ID better than I do?
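QCLAW’s internals aren’t public, so the following Python sketch is only my guess at the failure mode: the scheduled delivery path depended on a piece of config (the WeChat ID) that the manual test path never exercised, so the missing value only surfaced at send time, silently. A sturdier design would validate required config when the task is created. Every name here, including `REQUIRED_CONFIG` and the two functions, is hypothetical.

```python
# Hypothetical sketch of the failure mode; QCLAW's actual code is not public.
REQUIRED_CONFIG = ["wechat_id"]  # assumption: delivery needs a recipient ID


def schedule_daily_briefing(config: dict) -> None:
    """Sturdier design: fail loudly at setup time, not silently at send time."""
    missing = [key for key in REQUIRED_CONFIG if not config.get(key)]
    if missing:
        raise ValueError(f"cannot schedule: missing config {missing}")
    print("Briefing scheduled for 08:00 daily.")


def send_briefing(config: dict) -> None:
    """What seemingly happened instead: the check only runs at send time."""
    if not config.get("wechat_id"):
        return  # fails silently; the user simply never receives anything
    print(f"Sent briefing to {config['wechat_id']}.")


# The manual test "works" because it happens to supply the value directly:
send_briefing({"wechat_id": "test-account"})

# Validating up front would have surfaced the problem while I was watching:
try:
    schedule_daily_briefing({})  # wechat_id never configured
except ValueError as err:
    print(err)
```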
Of course, I have no doubt that issues like this will eventually be resolved, and that QCLAW will become increasingly user-friendly.
Yet, the incident itself served as a reminder that a new intermediary layer has now inserted itself between the AI and me. If I am to be transparent to the AI, I must, by extension, be transparent to QCLAW as well. Even if mechanisms exist to safeguard privacy, the inevitable trend will be for me to upload all my data and content to the platform’s cloud, thereby enabling a more seamless and efficient workflow.
Consequently, my cognitive patterns will become crystallized within “Skills”; my historical data will remain stored on the platform; and my very style of communication will begin to resemble that of an AI agent.
In today’s corporate landscape, we often hear the assertion that every employee requires an AI agent.
Before long, this sentiment will evolve into the notion that every role requires an AI agent.
And subsequently, the role itself will become the AI agent.
This naturally leads to a pivotal question: In the future, will the primary responsibility of an employee simply be to “maintain the AI agent assigned to their role”?
I do not know.
However, I find myself increasingly convinced that the answer to this question may not depend on how powerful AI becomes, but rather on how we choose to value those things that lie beyond AI’s capabilities.
Those elements of human experience that remain vague, opaque, and ineffable—
Such as composing a truly beautiful poem,
Or crafting a story that leaves an indelible mark on the listener—
These seem to be realms into which AI, thus far, has been unable to penetrate.
Is this merely a temporary technical shortcoming?
Or does it represent a more fundamental, intrinsic boundary?
I do not yet have the answer.
