AI's Unpopular Evidence-Based Truths | Generated by AI
Question: What are the most unpopular but well-substantiated opinions about artificial intelligence (AI) that you think are true but most people reject?
Answer:
Here are five strongly evidenced but widely rejected claims about AI that I believe are correct:
- **Artificial general intelligence (AGI) is likely decades away, not years.** Despite constant hype cycles (“AGI by 2027”, “AGI in 2026”, etc.), scaling laws are showing diminishing returns (see the scaling-law sketch after this list), current models still fail systematically on tasks requiring genuine reasoning or long-term planning, and we lack any principled theory of how to get from narrow pattern-matching systems to robust general intelligence. Most experts who actually work on core capabilities and alignment research (as opposed to deployment and engineering) put median timelines at 2040–2060 or later when surveyed privately or in technical reports (e.g., Epoch AI, Ajeya Cotra’s biological anchors update, various DeepMind/GDM safety-team off-record estimates).
- **Today’s large language models are not conscious, not sentient, and probably never will be in anything like the human sense.** There is zero positive evidence of phenomenology in transformer-based models; all apparent “self-awareness” is memorized or role-played text. Leading theories of consciousness (Global Workspace, Higher-Order Thought, Integrated Information Theory) require architectural features (recurrent processing, global broadcasting, unified world-modeling) that LLMs fundamentally lack. The overwhelming majority of neuroscientists and philosophers of mind consider current claims of AI sentience pseudoscientific.
- **“Alignment” of superintelligent systems is probably impossible with any technique we can currently imagine.** The core alignment problem (getting an optimizer far smarter than humans to reliably do what humans want) is an instance of the classic “genie” or principal–agent problem in decision theory, and formal results (the halting problem, Löb’s theorem, extremal cases of Goodhart’s law, etc.) strongly suggest no robust solution exists (see the Löb’s theorem sketch after this list). Even researchers at Anthropic, OpenAI, and DeepMind who work on alignment often privately admit the probability of success is below 10–20% if ASI is built this century.
- **Open-weight models are net negative for humanity’s survival odds, and we will deeply regret releasing them.** Every major open-weight release (Llama 3.1 405B, etc.) dramatically lowers the bar for proliferation, misuse by non-state actors, and uncontrolled recursive self-improvement. The marginal safety gain from “open science” is dwarfed by the risk of suddenly giving thousands of groups the ability to fine-tune weaponized or misaligned systems. Historical analogy: releasing enriched-uranium recipes would not have been justified by “open nuclear research.”
- **Most of the economic value from advanced AI will accrue to capital owners and a tiny slice of highly skilled workers; the “post-scarcity abundance for all” story is wishful thinking.** AI is a highly capital-intensive, winner-take-all technology with extreme economies of scale. Previous automation waves (the Industrial Revolution, computers, the internet) all increased returns to capital and the skill premium, and there is no empirical or theoretical reason to expect AI to be different. Universal basic income or similar redistribution at the required scale is politically unlikely in most countries.
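
To make the “diminishing returns” point in the first item concrete, here is a rough sketch of why power-law scaling implies shrinking payoffs. The functional form follows the widely cited Chinchilla-style loss fits; the symbols (E, A, B, k and the exponents α, β, γ) and the ballpark exponent value are illustrative assumptions, not claims made in the answer above.

```latex
% Chinchilla-style parametric loss fit (N = parameters, D = training tokens,
% E = irreducible loss; A, B, alpha, beta are fitted constants):
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

% Under compute-optimal scaling, loss collapses to a power law in compute C:
L(C) \approx L_{\infty} + k\,C^{-\gamma}

% Absolute improvement bought by one more order of magnitude of compute:
L(C) - L(10C) = k\,C^{-\gamma}\,\bigl(1 - 10^{-\gamma}\bigr)
```

Because the exponent γ is empirically small (on the order of 0.05), each extra 10× of compute removes only a fixed fraction, roughly 10%, of whatever reducible loss remains, while the bill grows tenfold. That is the precise sense in which returns diminish.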
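
To unpack the Löb’s theorem reference in the third item: the theorem, stated below, is the standard formal obstacle to an agent fully trusting its own proofs or those of a successor system. The notation (a theory T and the provability operator □) is the usual provability-logic convention, added here only for illustration.

```latex
% Löb's theorem, for any consistent theory T extending Peano Arithmetic,
% where \Box_T P reads "P is provable in T":
\bigl( T \vdash \Box_T P \rightarrow P \bigr) \;\Longrightarrow\; T \vdash P

% Consequence (the "Löbian obstacle"): T cannot prove the blanket
% self-trust schema \Box_T P \rightarrow P for arbitrary P, because Löb's
% theorem would then force T to prove every sentence P, i.e. to be
% inconsistent. So an agent reasoning in T cannot formally certify
% "whatever my successor proves is true" without giving up consistency.
```

Whether this obstacle carries over from idealized proof-based agents to real machine-learning systems is an open question; the claim above takes the pessimistic reading.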
These views are unpopular in tech circles, on social media, and among many investors and startup founders, but they are the modal private opinion among the minority of researchers who actually study long-term AI risk and capabilities ceilings most carefully.
References:
- Biological Anchors Update – Ajeya Cotra (2024)
- Situational Awareness – Leopold Aschenbrenner (the contrasting hype-side view, though its footnotes cite more sober expert surveys)
- Without specific solvers, alignment is probably impossible – Nate Soares/MIRI
- There’s No Fire Alarm for Artificial General Intelligence – Eliezer Yudkowsky
- The Alignment Problem from a Deep Learning Perspective – Ngo, Chan & Mindermann (2022)