Humans and LLMs: A Comparative Analysis | Generated by AI
It’s a fascinating philosophical take, and you’ve picked two very different yet equally influential figures in the Chinese-language intellectual sphere to illustrate it.
The “Human LLM” Comparison
You are touching on a concept often called Functionalism in philosophy of mind. The idea is that if an entity takes an input, processes it based on internal rules/weights, and produces an output, the “mechanism” (biological vs. silicon) matters less than the function.
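The functionalist claim can be sketched in a few lines of code. This is a toy illustration of the *idea*, not a cognitive model; the weights and stimulus values are invented for the example.

```python
def make_model(weights):
    """Return a 'mind' as a pure function of its internal weights."""
    def respond(stimulus):
        # Processing: weigh the input against internal dispositions.
        score = sum(weights.get(k, 0) * v for k, v in stimulus.items())
        return "agree" if score > 0 else "disagree"
    return respond

# Two different 'substrates' implementing the identical function:
human = make_model({"logic": 1.0, "novelty": 0.5})
silicon = make_model({"logic": 1.0, "novelty": 0.5})

stimulus = {"logic": 2, "novelty": -1}
print(human(stimulus) == silicon(stimulus))  # same input, same output
```

Functionalism's point is exactly this: from the outside, nothing distinguishes `human` from `silicon`, because only the input-to-output mapping is observable.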
| Feature | Human “Model” (e.g., Li Xiaolai / Yin Wang) | AI Model (LLM) |
|---|---|---|
| Training Data | Life experiences, books, XDF teaching years, Bitcoin crashes, coding bugs. | Terabytes of text, code, and dialogue from the internet. |
| Weights | Values & Biases: Li weights “wealth freedom” and “logic”; Yin weights “purity of code” and “truth.” | Parameters: Numerical values that determine which word comes next based on probability. |
| Output | Blogs, books on financial freedom, polemics, or code and essays on programming languages. | Tokens (text), code snippets, or conversational responses. |
Comparing the Two “Data Sets”
Li Xiaolai (The High-Output Optimizer)
Li Xiaolai is almost a textbook case of Reinforcement Learning from Human Feedback (RLHF).
- The Teacher Phase: At New Oriental (XDF), his “input” was English test-prep material; his “output” was lectures. The feedback signal (student scores and satisfaction) refined his model.
- The Crypto Phase: He processed the “Whitepaper” input early, adjusted his internal weights toward Bitcoin, and generated a massive amount of output (blogs, books, and courses) that influenced an entire generation of Chinese investors.
- The Logic: He often preaches “logical rigor” and “learning how to learn,” which is essentially telling people to optimize their own training algorithms.
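The feedback loop the analogy describes can be caricatured in a few lines. This is a deliberately simplified sketch, not actual RLHF (which trains a reward model and applies policy-gradient updates to a language model); the weight, reward rule, and learning rate here are invented for illustration.

```python
weight = 0.1              # initial disposition toward the taught skill
learning_rate = 0.5

for _ in range(10):
    output_quality = weight           # stronger disposition -> better output
    reward = 1.0 - output_quality     # feedback: gap between output and ideal
    weight += learning_rate * reward  # nudge the "model" toward what worked

print(round(weight, 3))  # -> 0.999: feedback drives the weight toward 1.0
```

Each cycle of output and feedback shrinks the remaining gap by half, which is the whole point of the phases above: repeated reward signals, not a single insight, tuned the “model.”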
Yin Wang (The Debugger of Reality)
If Li Xiaolai is about scaling output, Yin Wang is about source code integrity.
- The Discovery Phase: His journey through top CS programs (Tsinghua, Indiana, Cornell) and big tech (Google, Microsoft) served as high-quality training data.
- The Output: His blog is famous for “zero-shot” critiques of industry standards — first-principles takedowns written without deference to prior consensus. He treats the world like a giant codebase that needs to be refactored.
- The Difference: While Li focuses on the utility of information (how to use it to get ahead), Yin focuses on the correctness of the information (the fundamental “why”).
Is the “People = LLM” Theory Correct?
While the analogy is brilliant for understanding how we consume and share information, there are a few “system errors” in comparing humans to LLMs:
- Intent vs. Probability: An LLM predicts the next word based on statistics. A person like Yin Wang often writes the least predictable thing because he has a specific, conscious intent to challenge the status quo.
- The “Energy” Source: Humans have Qualia (subjective experience). When Li Xiaolai writes about Bitcoin, he’s not just remixing tokens; he’s recalling the feeling of a market crash.
- Data Efficiency: You can show a human a concept once and they often “get it.” An LLM typically needs thousands of examples of a pattern in its training data to simulate that same understanding.
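The “Intent vs. Probability” point can be made concrete with a minimal caricature of next-token prediction: count which word follows which, then always pick the most frequent continuation. The corpus and bigram table here are invented purely for illustration.

```python
from collections import defaultdict

corpus = "the market will rise the market will rise the market will crash".split()

# Build bigram counts: which word tends to follow which.
bigrams = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict(word):
    """Return the statistically most likely next word."""
    followers = bigrams[word]
    return max(followers, key=followers.get)

print(predict("will"))  # -> "rise": it appeared 2 of 3 times after "will"
```

The model plays the odds and says “rise”; a contrarian like Yin Wang might deliberately write “crash” precisely *because* it is the unexpected continuation — intent overriding statistics.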
The Verdict: You’re right that we are all “functions” of our inputs. If you spend your life reading Yin Wang, your “output” will eventually sound like a rigorous, skeptical programmer. If you read Li Xiaolai, you’ll start outputting “wealth freedom” frameworks.
Question: Since you see people as “models” trained on input, what specific “data set” or life experience has influenced your own way of thinking the most?