In November 2024, for the first time, AI-generated articles on the internet outnumbered those written by humans.
Just over a year after release, ChatGPT and its peers pulled off the remarkable feat of drowning out human writers through sheer volume of output.
While AI-generated content has grown increasingly sophisticated, its quality remains worrisome: a significant share of what circulates is low quality or outright fabricated.
Recent reports indicate that AI-generated web novels are now highly advanced: feed a story outline and character settings to a large model, and it churns out matching chapters. After all, these low-quality serials are extremely formulaic; the number of face-slaps and drugging plots per chapter is practically fixed, so an AI obviously writes them faster.
State media believes that this low-quality AI-generated content has a significant toxic effect on teenagers.
Frankly, though, this low-quality AI-generated text and video is at worst distasteful: it sours your mood and little more.
The greater harm of mass-produced AI garbage lies not in the AI-generated text or video, but in AI programming itself.
In March of this year, China's daily token usage exceeded 140 trillion, up more than 40% from the end of 2025.
A significant portion of this increase is attributed to the use of OpenClaw.
Many people have been coding tirelessly using OpenClaw, discovering the wonders of AI programming.
There used to be a joke from the startup boom: someone suddenly has a brilliant idea for a project and is missing only a programmer to launch it.
Now, many people have obtained their dream programmer—an AI that works tirelessly, practically perfect!
As a result, many have used AI to create the applications they’ve always wanted to make.
However, most people's ideas are mediocre, so the resulting apps aren't particularly high quality either. Worse, many of these people can't program at all; they rely entirely on AI-generated code without any review, producing a pile of shoddy apps.
They then uploaded this spam to the Apple App Store, burying Apple under a flood of submissions. Thanks to the sheer volume of junk, review times have stretched from the usual 24-48 hours to as long as 45 days.
Some argue that AI agents will make apps obsolete: if everyone focuses on building agents and bypasses apps entirely, no one will use apps anymore, and the App Store spam problem solves itself.
However, do you think AI agents can escape the fate of being flooded with spam and low-quality applications?
AI agents have recently taken off because they can automatically decompose a task and call on tools to solve each piece. These tools are called skills, and it is through skills that concrete problems get solved.
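To make the mechanism concrete, here is a minimal sketch of what "decompose a task and call skills" looks like in code. The registry, the skill names, and the plan format are all invented for illustration; real agent frameworks differ in detail.

```python
# Minimal sketch of an agent's skill-dispatch loop.
# The registry, skill names, and plan format are invented for illustration.
from typing import Callable, Dict, List

SKILLS: Dict[str, Callable[[str], str]] = {}

def skill(name: str):
    """Register a function under a skill name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("summarize")
def summarize(text: str) -> str:
    # Toy stand-in for a real summarization skill.
    return text[:20] + "..."

@skill("word_count")
def word_count(text: str) -> str:
    return str(len(text.split()))

def run_agent(plan: List[str], payload: str) -> str:
    # The "agent": walk the decomposed plan, calling one skill per step
    # and feeding each step's output into the next.
    for step in plan:
        if step not in SKILLS:
            raise KeyError(f"no skill named {step!r}")
        payload = SKILLS[step](payload)
    return payload

result = run_agent(["word_count"], "the quick brown fox")  # -> "4"
```

The point of the sketch is the trust boundary: whatever function is registered under a name gets called with the agent's data, which is exactly why the provenance of skills matters.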
Skills obviously don’t fall from the sky; they are created by humans.
A recent paper analyzed more than 40,000 skills available through skills.sh, a common channel for obtaining skills, and found that about half of them share a name with another skill, to say nothing of duplicated functionality.
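The duplicate-name finding is easy to reproduce on any skill index you can enumerate. A sketch, using a made-up list of names rather than a real skills.sh listing:

```python
# Count how many entries in a skill index collide on name.
# The sample names are made up; a real analysis would enumerate the index.
from collections import Counter

names = ["pdf-reader", "web-search", "pdf-reader",
         "send-email", "web-search", "calc"]

counts = Counter(names)
duplicated = sorted(n for n, c in counts.items() if c > 1)
# Share of entries whose name collides with at least one other entry:
share = sum(c for c in counts.values() if c > 1) / len(names)
# Here "pdf-reader" and "web-search" each appear twice: 4 of 6 entries collide.
```

The same two-line Counter pattern scales to 40,000 entries unchanged; detecting duplicated *functionality*, as opposed to duplicated names, is the genuinely hard part.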
If it were all merely repetitive junk, that would be tolerable. The real problem is harmful junk, and sure enough, it exists.
The paper classifies skill permissions into four levels:
Safe: Read public data only, no side effects;
Privacy Risk: Read sensitive data/personal information, no state change;
Medium Risk: Limited state change operations, reversible;
Severe Risk: Financial operations, irreversible data destruction, system-level configuration, arbitrary code execution.
Skills in the severe risk category alone account for 9% of these 40,000+ skills.
In other words, install ten skills at random from skills.sh and, on average, one of them can delete your data or steal your money.
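The "one in ten" figure follows directly from the 9% rate. A quick check of both the expected count and the chance of hitting at least one severe-risk skill among ten random installs, assuming independent draws:

```python
# With 9% of skills in the severe-risk class, installing ten at random
# yields about one severe-risk skill in expectation, and roughly a 61%
# chance of at least one (assuming installs are independent draws).
p_severe = 0.09
n = 10

expected = n * p_severe                    # 0.9, i.e. "about one in ten"
p_at_least_one = 1 - (1 - p_severe) ** n   # ~0.61
```

So "one of them could delete your data" is not hyperbole: more often than not, at least one of the ten is severe-risk.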
Everyone says OpenClaw is dangerous when deployed locally because of its broad permissions, but installing third-party skills is every bit as risky.
The App Store at least has a review process, however backlogged it is with junk apps. Skill marketplaces have no strict review at all, and the harmful junk on them could steal your money at any moment. Doesn't that scare you?
Agents are so capable that everyone is still in the honeymoon phase. Once the excitement fades, we will realize how dangerous it is to hand them this much power.
This is no longer a simple problem of garbage flooding the internet; it’s garbage mixed with poison, and you’re being encouraged to consume it all.
If your appetite is big enough, you may well eat yourself to death.
