Why Can’t AI Replace Human Writing?

Recently, a couple who reportedly earned 2 million yuan a year by using AI to write articles sparked widespread public outcry, and their WeChat public accounts were completely banned by the platform. The WeChat team responded that the WeChat public platform has consistently encouraged human creation, and that public accounts are prohibited from using AI to replace human creators in the processes of content creation and publishing. This amounts to a clear "no" to AI writing.

Many media outlets have explained that the so-called “couple earning 2 million yuan a year by using AI to write articles” actually relies primarily not on writing itself, but on a “revenue sharing” model they established. Ninety percent of their 2 million yuan income came from deposit fees charged to “content creators” or students.

However, the strong “AI flavor” can still be detected in many public account articles, and AI writing seems to be becoming increasingly common. Perhaps this is the reason why the platform has taken action to regulate it.

The Essence of AI Writing is "Probability Maximization"

I vaguely recall a period of discussion surrounding the "hand-written economy," a small but profitable business model that emerged after AI boosted productivity. Logically, AI-written articles should fall under the same category: they significantly improve writing efficiency and, with savvy management, can generate substantial profits. Why, then, do content platforms so explicitly oppose them?

This brings us back to the recurring question: Why can’t AI replace human writing? What writing standards and ethics are involved?

I’ve read some media commentaries, most of which argue that writing is not merely a combination of words, but also an expression of emotion and a presentation of values—qualities that AI cannot replace. These points are valid, highlighting the shortcomings of AI writing. However, I believe most commentaries fail to reveal the underlying technological logic that distinguishes AI writing from human writing. I would like to share my own perspective on this.

To begin with a general conclusion: from the perspective of artificial intelligence engineering, AI writing is essentially statistical prediction based on training over massive amounts of data. Having produced one word, its goal is to "predict the most likely next word," not to "predict the most correct word."

In other words, the essence of AI writing is probabilistic calculation. AI writing pursues “maximizing the probability of the next word or phrase.” It doesn’t ensure “what is correct,” but rather strives to achieve “what looks most correct.” Clearly, this is significantly different from human writing.
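The mechanism described above can be sketched in a few lines of Python. This is a toy illustration, not a real language model: the hand-made bigram probability table and the word choices in it are invented for demonstration. A real model estimates such probabilities with a neural network trained on a huge corpus, but the decoding principle is the same: at each step, pick what is most *likely*, with no notion of what is most *correct*.

```python
# Hypothetical bigram table: probability of each next word given the current one.
# The words and numbers are made up purely for illustration.
bigram_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "theorem": 0.05},
    "cat": {"sat": 0.5, "ran": 0.3, "is": 0.2},
    "sat": {"on": 0.6, "down": 0.4},
}

def next_word(current: str) -> str:
    """Return the most probable next word after `current` (greedy choice)."""
    candidates = bigram_probs[current]
    return max(candidates, key=candidates.get)

def generate(start: str, length: int) -> list[str]:
    """Greedily extend `start` by repeatedly maximizing next-word probability."""
    words = [start]
    for _ in range(length):
        candidates = bigram_probs.get(words[-1])
        if not candidates:  # no continuation known; stop
            break
        words.append(max(candidates, key=candidates.get))
    return words

print(generate("the", 3))  # ['the', 'cat', 'sat', 'on']
```

Note that nothing in this loop checks whether "the cat sat on" is true, apt, or what the writer meant; it only maximizes probability step by step, which is precisely the contrast with human word choice drawn below.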

To clarify this issue, we must return to the ancient human craft of writing itself.

Writing may seem like a subjective affair: what to write and how to write it depend on the writer's personal judgment and feelings. Yet at the level of each word and sentence, it is incredibly precise work. The world's languages are vast and profound; many similar words may describe the same thing, but the best writing always settles on the one most appropriate word to achieve perfection.

Flaubert once said something to this effect to his student Maupassant: "Whatever you want to say, there is only one noun to name it, only one verb to mark its action, and only one adjective to describe it. You must therefore search for that noun, that verb, and that adjective until you find them, and never be satisfied with approximations; never resort to deception, however clever, or to linguistic tricks to evade a difficulty."

From this perspective, because AI writing pursues only "maximizing probability" rather than "maximizing accuracy," it can never, as human writing can, settle on the single most fitting expression, and therefore cannot render the intended human meaning with full accuracy.

AI Writing Cannot Provide Personalized Experience

Besides insufficient accuracy, the erasure of personal experience is another weakness that current large-scale model writing cannot overcome.

The training data upon which large-scale models rely is essentially a collection of human writing output. Although a questioner can supply personalized prompts reflecting their identity and purpose, none of the generated content can correspond to the questioner's own lived experience. The model can only probabilistically recombine existing text, yielding a structured, templated result.

Furthermore, excellent works of human writing carry, beyond the direct meaning of the words, a great deal of implicit meaning. The emotions flowing behind the pen, the feelings left unspoken, the most profound ideas: these sometimes cannot be conveyed through characters alone but require deep interaction between writer and reader. This is an insurmountable chasm for large-scale models. Even if artificial intelligence continues to evolve, and even if something like consciousness emerges, it will not be comparable to a consciousness grown from human flesh and blood.

This is precisely the fundamental reason why some people often say that “large-scale model writing has no soul.”

Let’s borrow a phrase from Confucius: “Why don’t you young men study the Book of Poetry? The Book of Poetry can inspire, can be observed, can foster social interaction, and can express grievances. It teaches you how to serve your father in the near term and your ruler in the long term; it also broadens your knowledge of the names of birds, beasts, plants, and trees.” According to Confucius, “inspiring, observing, fostering social interaction, and expressing grievances” represent the highest level of learning the Book of Poetry, while “broadening your knowledge” is at the lowest. By analogy, current large-scale model writing falls at the “broadening your knowledge” level of human writing.

Ultimately, AI writing cannot capture the personal experience at the core of human writing. The essence of human writing is "seeking it in oneself," while the essence of AI writing is "seeking it in others": what AI produces depends on the strategy of whoever controls it. Human writing is a production of meaning, while AI writing is merely prediction according to mathematical logic.

This brings to mind the “science vs. metaphysics” debate among intellectuals in the 1920s. The “scientific school,” represented by Ding Wenjiang and Hu Shi, believed science could solve problems of worldview, while the “metaphysical school,” including Zhang Junmai, insisted that life’s questions concerned emotions, intuition, and free will, beyond the scope of scientific logic. At the heart of this debate was the boundary between instrumental rationality and humanistic values.

Today’s AI writing debate may not necessarily present two clearly defined camps, but at least for now, no consensus has been reached. From a writer’s perspective, I can raise a question: In the process of generating text, do large models experience the kind of “inspirational flashes” mentioned above? I have serious doubts about this.
