
AI-Era Foundational Software Development: How Energy Allocation Has Changed

Over the past year of developing RobustMQ, AI-assisted programming has become an important part of my workflow. Working with AI on code day after day, my strongest impression is this: energy allocation has fundamentally changed.

Before, a lot of energy went into repetitive checking, low-level bugs, and mechanical work. Now AI handles much of that, and my energy can go to what really matters: architecture design, performance optimization, and core functionality. The speedup from this shift is larger than from "writing code faster" alone.

What Consumes Energy

When building foundational software, many assume most time is spent writing new features. In reality, a lot goes to:

Fixing low-level bugs: Unhandled edge cases, incomplete error handling, functions that can return None whose result is never checked, panics under rare conditions. Individually these aren’t hard, but they’re numerous, scattered, and tedious.

Repetitive checking: Reviewing code for memory leaks, concurrency issues, and performance risks. Each pass needs care, but people tire and lose focus; when reviewing thousands of lines, attention drops toward the end.

Refactoring tech debt: Code written hastily early on needs refactoring later, and one change often cascades; fixing one place forces changes elsewhere.

Mechanical work: Writing tests, docs, issues, formatting. All important, but all repetitive and draining.
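To make the first category concrete, here is a minimal Rust sketch of the None-handling bug pattern described above and its fix. The function names and the port-parsing scenario are hypothetical, not RobustMQ code.

```rust
// Hypothetical example of a low-level bug that eats energy;
// `parse_port` is illustrative only.

// Buggy version: unwrap() panics on non-numeric input such as "abc" or "".
#[allow(dead_code)]
fn parse_port_buggy(s: &str) -> u16 {
    s.parse::<u16>().unwrap()
}

// Fixed version: the edge case is surfaced to the caller instead of panicking.
fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse::<u16>().ok()
}
```

The fix itself is trivial; the cost is that bugs like this are scattered across thousands of lines and each one must be noticed.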

Look at Kafka release notes: many versions are mainly bug fixes, module refactors, and legacy optimization. Pulsar is similar—often fixing edge cases, concurrency deadlocks, data inconsistency. These are the main energy sinks.

What AI Changes

AI’s biggest value isn’t "writing code for me" but taking on a lot of repetitive checking.

I finish code and immediately ask AI to review. It points out missed edge cases, poor error handling, possible panic points, and concurrency risks. These low-level issues get caught early instead of turning into tech debt.

Just as important: AI’s checks are consistent. No "feeling sharp today, careful tomorrow, careless the day after." It applies the same bar every time. Problems don’t slip through because of fatigue or oversight.

I ask AI to generate tests; it covers many scenarios and quickly builds baseline coverage. I still verify critical scenarios aren’t missed, but the baseline is much faster.
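As a sketch of what that baseline coverage looks like, suppose AI is asked to test a small topic-name validator. The validator and its rules are hypothetical, but the enumeration of obvious edge cases is typical of what AI drafts:

```rust
// Hypothetical validator; the function and its rules are illustrative only.
fn is_valid_topic(name: &str) -> bool {
    !name.is_empty()
        && name.len() <= 255
        && name
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '/' || c == '_')
}

#[cfg(test)]
mod tests {
    use super::*;

    // The kind of edge cases an AI-generated baseline tends to cover.
    #[test]
    fn accepts_normal_name() {
        assert!(is_valid_topic("sensors/room_1"));
    }

    #[test]
    fn rejects_empty() {
        assert!(!is_valid_topic(""));
    }

    #[test]
    fn rejects_too_long() {
        assert!(!is_valid_topic(&"a".repeat(256)));
    }

    #[test]
    fn rejects_bad_chars() {
        assert!(!is_valid_topic("sensors room"));
    }
}
```

The empty-string, length-limit, and bad-character cases are the mechanical part; what I still verify by hand is whether the rules themselves (the 255-byte limit, the allowed character set) match the protocol's requirements.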

I ask AI for docs and formatting. It handles these mechanical tasks quickly and consistently. I explain the core ideas; AI organizes them into structured output.

A Concrete Example: Auto-Creating Issues

Recently I found a very useful pattern. Writing issues used to feel like a burden: I’d know what needed doing but didn’t want to spend time writing it up. Now I describe the idea to AI; it understands the code context and produces a clear, well-structured issue.

This has removed a lot of friction. Before, I often skipped documenting because "writing issues is annoying." Now I can capture ideas as they come. More and better issues help community building, and ideas don’t get lost. This is a pattern I only discovered through ongoing AI use.

How Energy Allocation Changed

Before, much of my time went to fixing bugs, refactoring debt, repetitive checks, and mechanical work. Only a small fraction went to architecture, core functionality, and performance optimization. Now the ratio has flipped: AI handles checking and mechanical work, so most of my energy goes to moving the project forward.

It’s not only about speed. The quality of energy has changed. Before, a lot went into bug fixes—forced, reactive, non-accumulating. Now most goes to architecture, optimization, and core work—deliberate, accumulating. Every optimization and feature makes the project better.

Energy isn’t just time; it’s state. In the morning you’re sharp and can handle complex design; in the afternoon you’re tired and only good for repetitive tasks. If too much time goes to repetitive work, by the time you need to do architecture, you’re exhausted. Worse is the mental load: you know you should review every line, but you’re tired, distracted, and that "should do it but can’t" feeling is a hidden cost. AI lets me set that aside and keep energy for core work.

This change compounds. Energy spent on what moves the project forward makes progress faster, direction clearer, and code quality higher. High code quality means fewer bugs later, which frees more energy.

Standing on Mature Shoulders

The change in energy allocation isn’t only from AI; mature components help too.

Using RocksDB instead of building storage from scratch saves tens of thousands of lines and a lot of energy. No need to debug file management, index maintenance, or crash recovery—Facebook solved those. I focus on key layout, tuning configs, adapting to the domain.
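The "key layout" work mentioned above can be sketched without RocksDB itself. This is a hypothetical scheme, not RobustMQ's actual layout: each record key gets a namespace byte, the topic, a separator, and a big-endian offset, so that lexicographic key order (which a RocksDB-style prefix scan relies on) matches numeric offset order within a topic.

```rust
// Hypothetical key layout, not RobustMQ's actual scheme.
// Layout: [namespace byte][topic bytes][0x00 separator][offset as big-endian u64]

const NS_RECORD: u8 = 0x01;

fn record_key(topic: &str, offset: u64) -> Vec<u8> {
    let mut key = Vec::with_capacity(1 + topic.len() + 1 + 8);
    key.push(NS_RECORD);
    key.extend_from_slice(topic.as_bytes());
    // Separator so keys for "topic" and "topicX" never interleave in a scan.
    key.push(0x00);
    // Big-endian bytes make byte-wise order equal numeric offset order.
    key.extend_from_slice(&offset.to_be_bytes());
    key
}
```

Designing this layout (and tuning it against real access patterns) is exactly the kind of high-leverage work that remains once RocksDB handles file management, indexing, and crash recovery.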

The same idea applies to tokio instead of rolling our own async, io_uring instead of hand-tuning I/O. Save energy on basics and spend it where we add value.

AI and mature components together let energy concentrate on core work. No reinventing wheels, no getting stuck on low-level issues—focus on what actually creates value.

How Big Is This Change?

These days, building foundational software is a lot faster. But the main speedup comes from the change in energy allocation, not just "writing code faster."

It’s fast not because AI writes core code for me. It’s because it does a lot of repetitive work: surfaces low-level issues, fills in test coverage, handles mechanical tasks. These used to consume a lot of energy; now AI takes them.

If writing were faster but we still spent most energy on bugs and repetitive checks, overall speedup would be limited.

But if energy allocation shifts—from a little time on core work to a lot—the speedup is large. Because core work moves the project; that’s where change compounds.

That’s the power of shifting energy allocation.

But the Core Must Stay Under Control

Having said all that about AI, I must emphasize: the core must stay under human control.

AI can help with checking, testing, formatting, and writing issues, but architecture, core code, and key judgments must be mine. Not because AI can’t do it, but because even if it could, I wouldn’t delegate. Those are the soul of the project; delegating means losing control.

The change in energy allocation lets me spend more energy on core work—it doesn’t mean AI does core work for me.

That’s an important distinction. Not "AI does core work for me," but "AI handles repetitive work so I can focus on core work."

Summary

In AI-era foundational software development, the biggest change isn’t "writing code faster"—it’s energy allocation changing.

From energy consumed by low-level problems to energy focused on core work. From being dragged down by repetitive work to focusing on what actually moves the project forward.

The speedup from this change is larger than from simply writing faster. Because energy used well brings quality gains, clearer direction, and project progress.

In developing RobustMQ, I’ve felt this strongly. AI lets me concentrate on architecture, performance, and core functionality instead of being drained by low-level bugs, repetitive checks, and mechanical work. That’s the main value AI gives me.

Energy is finite. Use it where it counts, and you’ll go further.