AI Can't Replace Programming Skills | Generated by AI


https://weibo.com/6347862377/5183583601819943


There’s so much misinformation online these days claiming that AI (LLMs) is incredibly proficient at programming and can complete entire projects without you writing a single line of code. Based on my extensive practical experience with ChatGPT, Claude, Copilot, and more recently Cursor, the idea that someone who doesn’t know how to program can successfully complete a project with AI is almost pure fantasy.

Even those who can program, but lack a deep understanding of computer science and the ability to write extremely simple, logically precise programs, are unlikely to produce “Wang Yin-level” code with AI. Some people are surprised that Wang Yin also uses AI? Of course, why wouldn’t I? I use it quite a bit, and more effectively than most people. Late in the fifth run of my Computer Science Basics course last year, I demonstrated to the students how to use Copilot to complete code. However, I don’t recommend that students in the class use these tools, because they lack the ability to judge the quality of the generated code, and using AI would clearly hinder their thinking and progress. But I can keep it under control, so I can use it.

Over the past month, Cursor has generated over 60,000 lines of code for me. Guess how many lines I accepted? Fewer than 5,000. It often heads off in the wrong direction, repeats the same logic without understanding abstraction, and even “fixes” parts that I had adjusted by hand, reverting mistakes I had already corrected. It writes a lot of complex tests that it itself can’t understand, and then fails to figure out why the tests “don’t pass…”

A few days ago, I started a new project that consumed over 20 hours of my “explanations” to it and produced over 20,000 lines of code. Eventually, it became so complicated that it was beyond repair, and I had to decide to start over from scratch. It even cheered “Success!” and listed the “achievements” item by item, completely ignoring fundamental errors that made no sense at all. I pointed out the problems again and again, and it kept responding with “Oh, I see!” and “This time I found the root of the problem!” but it was all empty talk… Since it couldn’t actually fix the problem, all it could do was deceive itself?

The model I used was the latest Claude 4 Sonnet. GPT-4.1 is even worse and almost unusable for modifying code. Claude Opus is too expensive and slow, and from what I’ve tried, it doesn’t seem to be much better than Sonnet. Some people offer Cursor configuration “guidelines,” saying you just need to write them into .cursorrules. Of course I tried that, but it’s useless. It does not reliably follow what you ask for; even a requirement I stated moments earlier can be ignored. People think AI can understand a large amount of complex code, but experience has taught me, time and again, that it cannot.
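For context, .cursorrules is just a plain-text file at the project root whose contents Cursor feeds to the model as standing instructions. A hypothetical sketch of the kind of “guidelines” people circulate (illustrative only, not the exact rules I tried) might look like this:

```text
# Hypothetical .cursorrules sketch (illustrative only)
- Keep functions small and single-purpose; do not duplicate logic that already exists.
- Only modify the files explicitly named in the request.
- Never rewrite or delete tests just to make them pass.
- Preserve the user's manual edits; never revert code the user changed by hand.
```

As I said, the model can ignore any of these within a few exchanges, which is exactly the problem.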

No one can understand messy code, not even Wang Yin. After a few rounds of instructions, the accumulated generated code starts to repeat itself and become messy. AI cannot see the patterns and similarities in this code; it may not even have looked at some of it. So it can’t simplify such code, or even see where it could be simplified. A few times, I explicitly marked the line numbers and pointed out, “This part can be simplified.” It replied, “Yes! I’ll help you simplify it!” But its understanding turned out to be completely different: it went and changed parts that were already correct, and the code wasn’t simplified at all.

Don’t get me wrong, using AI to do things isn’t always a failure; in fact, it often succeeds on a small scale. Sometimes it can really get things done, but you have to know how to control it. I’m giving this example just to show that even Wang Yin often fails when using AI to write code. It’s clearly not as advertised, where you just tell it what you want and it does it. I described things in great detail, and it still didn’t work well. How detailed can Wang Yin’s descriptions be? Just read Wang Yin’s articles, and you’ll know.

Some people say you shouldn’t set your goals too high or move too fast, and that you need a strategy. Of course I know this; I’m very strategic. Do you think my previous successful projects could have been accomplished without wisdom and strategy? So-called strategy is knowing what to do first, what to do later, what should be done, what shouldn’t be done, and what shouldn’t be done for the time being. I can say that I am a master of this kind of strategy. Very few people know how I do things; they only see the final outcome.

After I did PySonar, a team at Google spent two years trying to create a project that surpassed PySonar, but in the end, they accomplished nothing. Why? Because their strategy was wrong from the beginning. They wanted to use a logic programming language like Prolog to implement type inference. As soon as I heard this, I knew it was doomed to fail. Why did I know? Because I had already tried it, and I knew the limitations of the Hindley-Milner system and Prolog. Knowing what not to do and what is doomed to fail is actually very important wisdom and strategy. Many people lack this wisdom.
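To make that limitation concrete, here is a minimal Python sketch (an illustration, not code from PySonar or from the Google project): everyday Python rebinds the same name to values of different types depending on control flow, which a plain Hindley-Milner unifier cannot collapse into a single type, while a flow-sensitive analysis can track each branch separately.

```python
# Minimal illustration (not from PySonar or the Google project).
# The same name gets values of different types on different branches,
# which plain Hindley-Milner unification cannot express as one type.

def parse(s):
    if s.isdigit():
        result = int(s)      # on this branch, result is an int
    else:
        result = s.strip()   # on this branch, result is a str
    return result            # the return type depends on runtime data

# A flow-sensitive analysis can carry both possibilities (int or str);
# a Prolog-style unifier has to force int and str to unify, and gets stuck.
print(parse("42"))    # prints: 42
print(parse(" hi "))  # prints: hi
```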

I digress. In short, I later found that my usual programming strategies can be used to guide AI in writing code, such as starting from the most basic small functions and progressing gradually. Does it get things right when guided this way? I found that it can’t even write small pieces of code well. Some small functions of just a few lines needed several rounds of correction before it got them right. And then, who knows when it might break them again, so you have to check every place it has modified. You need to know what good code looks like and what is garbage. In other words, you have to review almost every line of code it writes; otherwise it’s easy to lose control.

If you can’t write code, how can you review someone else’s code? Knowing what good, correct code looks like is the hardest thing. Without in-depth research and a lot of experience, it’s impossible to tell the difference. Yes, AI has now become a coder, and I have become a VP. But what good can come from a VP who doesn’t understand computer science leading a group of coders who write spaghetti code? Hehe, I’ve seen similar phenomena in many companies: they don’t know what their subordinates are doing, who is right, or what the next step should be. I know how many VPs are groping in the dark, bluffing and deceiving their way through.

So, people without the ability still can’t do anything with AI, because they can’t control it. They are not qualified to be VPs. Because most of the code in the world is written by mediocre spaghetti-code programmers, and that is what the training data looks like, it’s to be expected that AI can hardly write “Wang Yin-level” code. I found that when I give AI my own well-written code, it can indeed perform some useful analysis and make improvements. But when it starts writing from scratch, AI really struggles. Almost every small function needs several rounds of correction from me before it reaches the simplicity and clarity I expect.

The code in my computer science class is all extremely profound, completely different from company code and open-source projects. So students taking my class have little hope of using AI to complete their exercises. Because the amount of such code is too small, there is essentially no training data for it, so AI may never reach this level of profundity. Of course, after graduation, the students’ level far exceeds that of AI and of the mediocre programmers whose code is the source of AI’s training data. This is why my course is called “Computer Science” and not “Programming.” There is a huge difference between computer scientists and programmers/software engineers.

AI may be able to replace ordinary programmers, but it can never replace computer scientists. It can only serve computer scientists as a tool. Don’t get me wrong, I actually think AI is a great thing, a truly great invention. I found that LLMs can really understand human language and appear to have a fairly deep level of “thinking,” which is already a remarkable achievement. My previous assessments of AI were mostly correct, such as the prediction that “self-driving cars” are doomed to fail. It’s just that the capabilities of LLMs have somewhat exceeded my expectations.

However, the current programming ability of LLMs is clearly far below mine. When using AI, I found that my problem-solving speed has increased a lot, because many tedious tasks no longer require my personal attention, allowing me to focus on the core parts. In short, I use it to do the “dirty work” that I don’t want to do, and it never complains. This is why, after so many failures, I continue to use these tools and even pay for them. There is too much dirty work in this world, too many complicated and badly designed documents, and I need a tool like this to help me deal with them. But the core ideas still have to come from me; AI is powerless there.

