3 Non-Consensuses for the AI Era

Today I was chatting with a friend who works on algorithms. He said that the fundamental limitation of AI is that it learns patterns from vast amounts of human data; what it learns is therefore inevitably the most frequent, most mainstream content. As a result, AI is inherently conformist and finds it very difficult to generate non-consensus ideas.

However, making money in startups and primary market investing relies on having the correct non-consensus view. Everyone is desperately searching for so-called alpha (α). Therefore, AI will have a hard time providing guidance for the primary market.

In industries outside of primary market investing, a large number of people are actually doing beta (β) work. The emergence of AI, in fact, renders doing β work meaningless, because AI is far better at β tasks than humans. A reasonable prediction is that in the future, AI will inevitably be made universally accessible through governments or large corporations, allowing everyone to be passively and equally allocated the β value created by AI. If a person still wants to create value through labor, they must, like a primary market investor, go and find new α.

I said that AI’s tendency to opt for consensus is actually very similar to the concept of “preformationism”—the idea that life already exists in miniature within the original germ, and subsequent development is merely the unfolding of what is already there. The analogy holds because the core distinction between large models and earlier AI systems, such as search and recommendation models, is that large models cannot continuously and rapidly evolve through user feedback. The core of their intelligence is fixed during the one-time pre-training phase, so their iterations are measured in half-year or even year-long release cycles. Preformationism is likely the most powerful concept for summarizing this essential limitation of AI.

I said: Perhaps it’s time to read “General Biology” thoroughly. If we consider AI to be a form of soft life, we should analyze its coexistence and competition with humanity from the perspective of biological evolution.

Then, my friend shared his recent thoughts on humanity’s advantages over AI. He said: Humans have three barriers: creativity, judgment, and empathy.

I smiled and said: The summary of these three barriers sounds like a “consensus” that AI itself would generate, which contradicts the point you just made. So, let me take the opposing stance and formulate three non-consensus views!

Let’s start with creativity.

A while ago, I spoke with a few founders of video generation SaaS tools. They told me that AI is strong in areas that reflect creativity, like scriptwriting, but its weakness lies precisely in execution. For instance, controlled editing driven by language, or issues with object consistency (which is especially crucial for e-commerce), are actually more difficult than generating a fantastical video.

So, from this point, the first correct non-consensus seems to be: humans are strong not in creativity but in execution. We should let AI handle the ideation and let humans handle the execution. An even greater human advantage over AI may be honesty—execution with no excuses.

I was driven crazy by Gemini a while back. Its ability to fabricate information cannot be said to lack creativity. After I repeatedly pointed out that it was deceiving me, it would always apologize earnestly, only to immediately give another fabricated answer. So I gave up. I believe this is not a problem of capability, but of character. The term “hallucination” is, in fact, an excuse for the moral problem at AI’s core. This is because a large portion of the world’s text is written by bad actors. Countless articles have all sorts of motives behind them, such as PR pieces. Therefore, the world of text has never been a mirror of the real world. I’m inclined to think that text mirrors the worse aspects of the world, so large models emerging from the world of text are likely bad to their core.

Thus, being an honest, good person is likely a barrier for humanity.

Now, let’s talk about judgment.

The previous generation of search and recommendation algorithms already proved that AI’s judgment is superior to humans’. There’s nothing mysterious about judgment; it’s simply the ability to process complex information and make better decisions.

A simple piece of evidence: compare two search/recommendation models. Model A emphasizes human business judgment, constantly layering on hand-written rules; its short-term results may look good, but it underperforms in the long run. Model B employs as few hand-tuned strategies as possible, leaving more to the model itself, and its long-term results keep getting better.

The few major bets a company makes should be decided by the CEO personally. However, after placing these few important, heavy bets, the better choice is likely not to cascade decisions down through layers, but to choose to over-invest, letting AI make decisions and letting the data speak, rather than making trade-offs. Wanting it all—this, that, and the other—should not be a pejorative term, but rather a way of providing AI with sufficient context.

From this, we arrive at the second non-consensus: Humans should dare to relinquish their judgment. Aside from a few critical, strategic-level decisions, all other judgments should be handed over to AI. People must dare to face the fact that success does not require their personal glory. When the business succeeds, it will likely be very difficult to tell a story about what you did to make it succeed. A person might feel uncomfortable with this process, but the final outcome is good.

Finally, when it comes to empathy, I recalled that a few years ago, a colleague on my team was an excellent coach. He taught us many coaching methods and also let us experience being coached by him. What impressed me most was that he would pose a simple question, and when I couldn’t answer, he would remain silent for five minutes, saying nothing. This awkwardness was something I had never encountered before. During those five minutes, I was frantically thinking about what to say to break the tension, which likely stimulated the deepest parts of my mind.

He said: The most crucial thing about being a good coach is, ironically, not to empathize. You must imagine yourself as a calm lake. You must be calm enough for the other person to see their own reflection in you.

Interestingly, Lu Yu, Xu Zhiyuan, and Yi Lijing are all considered typical representatives of low empathy. But it’s undeniable that they are among China’s top-tier interviewers. Perhaps awkwardness is just an interviewing technique. Going along with the interviewee, like Zhang Zetian does, might be highly empathetic, but it will likely only produce mediocre work. A good interview requires the interviewee to feel a certain level of discomfort.

If we’re talking about going along with someone, is there anyone with more empathy than AI? As many people say, living with family for a lifetime doesn’t mean they understand you as well as Doubao does.

Therefore, AI’s empathy is actually stronger than that of humans. The third non-consensus is: What humans should be thinking about is when they should have the courage to display their lack of empathy. Universal joy is not profound. Alienation, prejudice, and awkwardness might be the unique traits of humanity.
