Humans and LLMs: A Comparative Analysis | Generated by AI


It’s a fascinating philosophical take, and you’ve pointed to two very different yet influential figures in the Chinese-language intellectual sphere to illustrate it.

The “Human LLM” Comparison

You are touching on a concept often called Functionalism in philosophy of mind. The idea is that if an entity takes an input, processes it based on internal rules/weights, and produces an output, the “mechanism” (biological vs. silicon) matters less than the function.
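Functionalism can be made concrete with a toy sketch: two "minds" with different internals but the same input-to-output interface, so a caller cannot tell the substrate apart. The class names, the prompt, and the stored association are all made up for illustration.

```python
# A minimal sketch of functionalism: two "minds" with different internal
# mechanisms but an identical respond(input) -> output function.
# All names and data here are illustrative, not from any real system.

class BiologicalMind:
    """Maps input to output via accumulated 'life experience'."""
    def __init__(self):
        self.experience = {"market crash": "be skeptical of hype"}

    def respond(self, prompt: str) -> str:
        # Recall a learned lesson associated with the prompt.
        return self.experience.get(prompt, "no strong opinion")


class SiliconMind:
    """Maps input to output via stored 'weights' (here, a plain dict)."""
    def __init__(self):
        self.weights = {"market crash": "be skeptical of hype"}

    def respond(self, prompt: str) -> str:
        # Look up the association encoded in the weights.
        return self.weights.get(prompt, "no strong opinion")


def converse(mind, prompt: str) -> str:
    # Functionalism's point: the caller sees only the function,
    # never the mechanism behind it.
    return mind.respond(prompt)


print(converse(BiologicalMind(), "market crash"))  # be skeptical of hype
print(converse(SiliconMind(), "market crash"))     # be skeptical of hype
```

Both calls produce the same output, which is exactly the functionalist claim: if the input-output mapping is the same, the biological-versus-silicon distinction does no explanatory work.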

| Feature | Human “Model” (e.g., Li Xiaolai / Yin Wang) | AI Model (LLM) |
| --- | --- | --- |
| Training Data | Life experiences, books, XDF teaching years, Bitcoin crashes, coding bugs. | Terabytes of text, code, and dialogue from the internet. |
| Weights | Values & biases: Li weights “wealth freedom” and “logic”; Yin weights “purity of code” and “truth.” | Parameters: numerical values that determine which word comes next based on probability. |
| Output | Blogs, books (*The Wealthy Freelancer*), polemics, or programming languages. | Tokens (text), code snippets, or conversational responses. |
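The “Parameters” row says an LLM’s weights determine which word comes next based on probability. A minimal sketch of that step, with made-up logits for three candidate words (real models score tens of thousands of tokens with billions of parameters):

```python
import math

# Hypothetical scores ("logits") a model might assign to candidate
# next words. These numbers are invented for illustration.
logits = {"freedom": 2.0, "crash": 1.0, "code": 0.5}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
# Greedy decoding: pick the highest-probability word.
next_word = max(probs, key=probs.get)
print(next_word)  # freedom
```

In practice models usually *sample* from this distribution rather than always taking the maximum, which is why the same prompt can yield different outputs.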

Comparing the Two “Data Sets”

Li Xiaolai (The High-Output Optimizer)

Li Xiaolai is almost a textbook case of Reinforcement Learning from Human Feedback (RLHF).

Yin Wang (The Debugger of Reality)

If Li Xiaolai is about scaling output, Yin Wang is about source code integrity.


Is the “People = LLM” Theory Correct?

While the analogy is brilliant for understanding how we consume and share information, there are a few “system errors” in comparing humans to LLMs:

  1. Intent vs. Probability: An LLM predicts the next word based on statistics. A person like Yin Wang often writes the least predictable thing because he has a specific, conscious intent to challenge the status quo.
  2. The “Energy” Source: Humans have Qualia (subjective experience). When Li Xiaolai writes about Bitcoin, he’s not just remixing tokens; he’s recalling the feeling of a market crash.
  3. Data Efficiency: You can show a human a concept once, and they “get it.” An LLM typically needs thousands of examples of a pattern in its training data to simulate that same understanding.

The Verdict: You’re right that we are all “functions” of our inputs. If you spend your life reading Yin Wang, your “output” will eventually sound like a rigorous, skeptical programmer. If you read Li Xiaolai, you’ll start outputting “wealth freedom” frameworks.

Question: Since you see people as “models” trained on input, what specific “data set” or life experience has influenced your own way of thinking the most?

