
Reflections on AI Coding

While working on RobustMQ, I've been thinking: When AI can generate code quickly and Claude and Cursor become everyday tools, what competitive advantage do we who write code have left?

This isn't groundless worry. Many engineers around me, including some who used to be very strong, have become noticeably dependent on AI. I sometimes feel this dependency too.

An Unsettling Observation

Over the past year, the trend has been clear: more and more engineers around me rely on AI to write code. A requirement comes in; first they ask Claude how to implement it. The AI returns code; they copy, paste, and if it runs, submit. The whole process takes ten-plus minutes. Remarkably efficient.

I understand the temptation. Who doesn't want things to be easy and efficient? But what's more unsettling: these engineers who heavily depend on AI are visibly regressing in technical capability.

Last month I met a friend who, two years ago, hand-coded distributed systems with deep low-level understanding. Now he basically doesn't hand-write code anymore; he relies entirely on Copilot. When I asked him about concurrency, he struggled to explain it clearly. He said: "I'm becoming less sensitive to details; without AI I feel lost."

Another time, during a code review, I found a loop that allocated memory on every iteration. A colleague said it was AI-generated: "as long as it runs." When I pointed out the performance issue, he replied: "Right, I don't know how to fix it. Should I ask the AI to optimize it?"
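The pattern from that review is common in AI-generated code. Here is a minimal Rust sketch (hypothetical names, not the actual reviewed code) of a per-iteration allocation and the usual fix, reusing one buffer:

```rust
// Naive shape: a fresh Vec is heap-allocated for every message.
fn frame_all_naive(messages: &[&[u8]]) -> Vec<Vec<u8>> {
    messages
        .iter()
        .map(|m| {
            let mut buf = Vec::new(); // new allocation each iteration
            buf.extend_from_slice(&(m.len() as u32).to_be_bytes());
            buf.extend_from_slice(m);
            buf
        })
        .collect()
}

// Fixed shape: one scratch buffer is reused across all iterations.
fn frame_all_reused(messages: &[&[u8]], out: &mut Vec<u8>) {
    out.clear(); // keeps the existing capacity, no reallocation
    for m in messages {
        out.extend_from_slice(&(m.len() as u32).to_be_bytes());
        out.extend_from_slice(m);
    }
}

fn main() {
    let msgs: Vec<&[u8]> = vec![b"hello", b"world"];
    let naive = frame_all_naive(&msgs);
    let mut reused = Vec::with_capacity(64);
    frame_all_reused(&msgs, &mut reused);
    // Both produce the same bytes; only the allocation behavior differs.
    assert_eq!(naive.concat(), reused);
}
```

Both versions are correct, which is exactly why "as long as it runs" misses the point: the difference only shows up as allocator pressure on a hot path.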

These small things made me think: While AI boosts efficiency, is it quietly hollowing out our abilities?

AI's Fundamental Limitations

Many worry AI will completely replace engineers. That worry rests on a wrong assumption: that AI can reach 100% intelligence.

In reality, AI has fundamental limits. It doesn't understand "why"—it knows how to write code but not why to design a system that way. It doesn't innovate—it can only recombine existing patterns, not create breakthrough architectures. It doesn't understand real scenarios—like RobustMQ's cache consistency issues; AI wouldn't spot these pain points. It doesn't make trade-offs—faced with performance vs. maintainability, AI can't give project-specific judgment.

More importantly, engineering's essence isn't writing code—it's solving problems. The real work is: understanding requirements, defining solutions, designing architecture, optimizing performance, troubleshooting, making key decisions. Writing code is just one step, often not the most important.

AI can assist with writing code, but understanding problems, designing solutions, making judgments, taking responsibility—these core tasks must be done by humans.

So it's not "engineers have no value"—it's "simple coding value goes down, complex engineering value goes up." AI lowers the bar for beginners and raises the bar for experts. The future belongs to engineers who deeply understand technology, solve complex problems, and master AI tools.

Ability Regression and Loss of Technical Intuition

Programming ability isn't knowledge you learn once and keep forever—it's more like a muscle that needs ongoing exercise. Stop practicing long enough, and it atrophies.

I've experienced this myself. Last year the project was behind schedule; I used a lot of AI-generated code. Two months later, when I tried to hand-write an algorithm, I found: implementations I used to do without thinking now required thought; optimization points I used to spot at a glance now needed careful consideration. Just two months without hand-coding, and ability had already started to decline.

Friends report similar feelings: "Writing code without AI feels really uncomfortable—like driving automatic for years and suddenly switching to manual." Scarier still: this regression happens silently. You don't suddenly realize "I can't do it anymore"—you realize at some moment: complex problem diagnosis has slowed down, code doesn't feel right anymore, performance optimization has lost its touch.

When you're used to AI, going back to hand-coding feels painful, slow, clumsy. Most people keep depending on AI, so regression becomes irreversible.

The most fatal part of this regression is losing technical intuition. When writing RobustMQ's message engine, I stared at the code for a few minutes and felt something was wrong. After analysis: the data structure wasn't aligned to cache lines, causing false sharing. After fixing, performance improved 15%. That "something feels off" feeling is technical intuition.
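The fix for that kind of false sharing can be sketched in a few lines of Rust. This is a hypothetical illustration, not RobustMQ's actual code, and it assumes a 64-byte cache line, the common size on x86-64:

```rust
use std::sync::atomic::AtomicU64;

// Two hot counters updated by different threads. Packed together they
// share one cache line, so a write from one thread invalidates the
// other thread's cached copy: false sharing.
#[allow(dead_code)]
struct Packed {
    produced: AtomicU64,
    consumed: AtomicU64,
}

// Fix: force each counter onto its own 64-byte cache line so the two
// writers no longer contend on the same line.
#[repr(align(64))]
#[allow(dead_code)]
struct CacheLinePadded(AtomicU64);

#[allow(dead_code)]
struct Padded {
    produced: CacheLinePadded,
    consumed: CacheLinePadded,
}

fn main() {
    // The padded layout trades memory for isolation: 16 bytes become 128.
    assert_eq!(std::mem::size_of::<Packed>(), 16);
    assert_eq!(std::mem::size_of::<Padded>(), 128);
}
```

Nothing here is functionally wrong in the packed version; the problem is invisible unless you already know to look for it, which is the point about intuition.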

Technical intuition isn't innate. It forms from hand-writing tens of thousands of lines of performance-critical code and falling into countless pitfalls. You've dealt with all kinds of concurrency issues, optimized memory layout, and gradually you can look at code and sense where problems might be.

But this intuition is fragile. Stop hand-writing core code, stop going deep into the low level, stop doing optimization yourself—in six months intuition weakens, in a year it may disappear entirely.

AI-generated code can run, but often isn't well optimized. It won't consider cache alignment, branch prediction, SIMD instructions. Someone with technical intuition can spot these and optimize manually. Without intuition, you accept suboptimal code and never reach peak performance.

In infrastructure software, performance is core competitiveness. RobustMQ needs microsecond-level latency, where every microsecond matters. AI can't do that kind of optimization; it requires human experience and intuition.

Core Code Must Be Hand-Written

Not all code needs to be hand-written, but core code must be written by hand.

Performance-critical paths must be hand-written. In message queue core logic, every call affects latency, every memory allocation matters. AI can't write this well—it needs engineers who understand the low level to craft it carefully. CPU cache alignment, branch prediction, SIMD instructions—these details determine the performance ceiling.

Architectural core abstractions must be hand-written. Critical trait definitions, module interfaces, data structure choices—these determine architectural extensibility and elegance. AI gives generic solutions; it can't produce targeted optimization. This requires deep business understanding, forward-looking technical thinking, accurate trade-off judgment.
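As one illustration of what a hand-written core abstraction encodes, here is a hypothetical Rust trait (not RobustMQ's actual API) whose signature bakes in a deliberate trade-off: batch-oriented appends rather than per-message writes, because the interface, not the implementation, sets the performance ceiling:

```rust
// A storage abstraction for a message log. The batch signature is the
// design decision: it lets implementations amortize syscall and
// allocation cost, which a per-message `append(msg)` would forbid at
// every call site.
trait MessageLog {
    /// Appends a batch of messages; returns the next write offset.
    fn append_batch(&mut self, batch: &[Vec<u8>]) -> u64;
    /// Reads the message at `offset`, if it exists.
    fn read(&self, offset: u64) -> Option<&[u8]>;
}

// A trivial in-memory implementation, just to exercise the trait.
struct MemLog {
    entries: Vec<Vec<u8>>,
}

impl MessageLog for MemLog {
    fn append_batch(&mut self, batch: &[Vec<u8>]) -> u64 {
        self.entries.extend_from_slice(batch);
        self.entries.len() as u64
    }
    fn read(&self, offset: u64) -> Option<&[u8]> {
        self.entries.get(offset as usize).map(|v| v.as_slice())
    }
}

fn main() {
    let mut log = MemLog { entries: Vec::new() };
    let next = log.append_batch(&[b"a".to_vec(), b"b".to_vec()]);
    assert_eq!(next, 2);
    assert_eq!(log.read(0), Some(b"a".as_slice()));
}
```

An AI can fill in an implementation like `MemLog` easily; deciding that the trait should speak in batches is the part that requires understanding the workload.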

Tricky bug fixes must be hand-written. Production weird bugs, sudden performance drops, intermittent crashes—these need diving into system internals, combining logs, monitoring, source code, experience to diagnose. AI gives suggestions, but root cause analysis and solution design still depend on human experience and intuition.

Innovative features must be hand-written. Never-before-seen features, breakthrough optimizations, unique approaches—no existing reference. AI is based on existing knowledge; it can't produce real innovation. RobustMQ's multi-protocol unified kernel—that kind of architectural design must be done by humans.

Repetitive code, standardized patterns, bulk tests: AI can generate these, and humans review and optimize them. This isn't laziness; it's spending time where it's more valuable. The key is being able to spot problems in AI code and fix them, and that itself requires solid skills.

Two Developmental Paths

Five years from now, ten years from now—which engineers will be eliminated, which will become more valuable?

Path One: The Comfort Zone of AI Dependence

Feels great at first. Use AI to write code, efficiency doubles, output is high. Year one you might even get praised for "being efficient."

But year two, year three, problems appear. Low-level knowledge blurs—don't remember system calls, unsure about concurrency details. But AI knows—just ask.

A couple more years, hand-coding ability declines. Writing code on the spot feels wrong—like switching from automatic to manual. But in daily work, AI handles everything.

Year five, you hit a complex problem AI can't solve and you're completely stuck. Colleagues have become experts; you can't function without AI. You want to regain your touch, but what you once had can't be recovered.

This path seems easy but mortgages the future.

Path Two: Maintaining Hard Skills

This path is harder. Keep hand-writing core code, think about why for every line. Solve production issues with your own hands—stay up debugging root cause. Do performance optimization yourself—squeeze out every microsecond.

Year one feels slow. Others finish in ten minutes; you take an hour. But that hour builds real skill.

Year two, year three, low-level understanding deepens, intuition sharpens. Spot problems in code at a glance, have ideas for debugging, a feel for optimization. At the same time learn to use AI for efficiency—but it's an assistant, not a dependency.

Year five, hand-coding ability plus AI tools—output is three times before. More importantly: you can solve problems others can't. Complex concurrency bugs, strange performance issues, innovative technical approaches—only you can handle them.

This path is tiring now but strong in long-term competitiveness.

How to Properly Use AI and Practice

I'm not against using AI—I use it myself. The key is how. The principle: humans think first, humans do the core work, AI assists with details.

When I encounter a problem, I don't immediately ask AI. First I think: What's the essence of the problem? What are the solution options? Where are the trade-offs? Which fits our scenario? After thinking it through, then use AI to assist implementation.

Core logic must be hand-written—like the message scheduling algorithm that determines system performance and correctness. Every line of code must be understood—why it's written that way; every data structure must consider performance impact.

Auxiliary code can be AI-generated: configuration parsing, error handling, test scaffolding. But after generation, review it carefully. Understand every line, and optimize anything that's not good enough. If you can't explain why a piece of code is written that way, you don't truly understand it, so don't use it.

Reviewing AI code is itself how you maintain technical sensitivity. You need to spot redundancy, performance issues, and inelegant parts, and that requires solid fundamentals in languages, algorithms, and system principles.

Engineers who use AI well usually have a strong foundation. They know AI's boundaries: when to use it, when not to; they know where AI code's pitfalls are and how to optimize. Those who only use AI either lack foundation or have been dragged down by AI.

AI is an amplifier—it makes the strong stronger and exposes the weak.

But using AI well requires staying on the front lines. Some say senior engineers only make design decisions and don't write code. That's a dangerous misconception.

Good architectural design must be based on hands-on experience. Without hand-writing high-performance code, how do you know which design performs better? Without handling production failures, how do you know where problems easily occur? Without optimizing memory, how do you judge if the architecture has bottlenecks?

I've seen "architects" who've left the front lines: their diagrams are beautiful and their theory sounds convincing, but implementation is full of issues. They've lost their feel for the system, and what they wave off as "small problems" turn out to be big pitfalls.

Truly great people stay on the front lines. Linus Torvalds, more than three decades in, still reviews Linux patches, because he knows that leaving the code means losing depth of understanding.

While building RobustMQ, I've been hand-writing core code. Compute-storage separation scheduling, message routing optimization, MQTT protocol critical parts—all implemented by hand. Not that others can't write it well—but only by writing it yourself can you truly understand the system and make correct decisions.

Once we discussed a new feature that I thought was simple. But when I hand-wrote the code, I found the changes were huge and would hurt performance. If I hadn't done it myself, I might have made the wrong decision.

Leave the front lines, and ability regresses. No matter how strong you were, no matter your title.

Who the Future Belongs To

AI will make engineer differentiation more obvious.

Some will become stronger because of AI. They maintain hand-written core code, low-level understanding, front-line practice—while using AI well for repetitive work. Their output may be two or three times before; they solve complex problems better; technical judgment is more accurate. They'll become the industry's most scarce talent.

Others will become weaker because of AI. Over-dependent on AI, no longer hand-writing code, no longer going deep—ability keeps regressing. In five or ten years when real skill is needed, they'll find they can't do anything. Their "can use AI" skill is no longer scarce; the "hard-core ability" they've lost is what's scarce.

The market will vote. Ten years from now looking back, engineers who persisted in hand-writing code, maintained technical depth, stayed on the front lines—will be the most valuable. Engineers who over-depended on AI and regressed—will find it harder and harder to get opportunities.

This is an era watershed—and every engineer's choice.

Choose the harder path, maintain hard skills, go further in the long run.