RobustMQ: We Want to Be the Definers, Not the Followers

Over the past few months, many people have asked me: There are already mature products like Kafka, EMQX, and RabbitMQ. Why build RobustMQ? What makes you think you can do better?

My answer is direct: RobustMQ was never meant to be "a better Kafka" or "EMQX in Rust." What we're doing is defining what the next generation of message platforms should look like.

Followers vs Definers

Over the past decade, the messaging space has seen many excellent projects. Redpanda rewrote Kafka in C++ and delivered severalfold performance gains. Pulsar separated compute from storage to address Kafka's architectural limitations. Cloud vendors launched managed services. These are all valuable efforts, but fundamentally they all optimize and improve within the framework Kafka defined: the protocol is still the Kafka protocol, the model is still publish-subscribe, and the core concepts remain topic, partition, and consumer group.

They are followers. They follow the standard Kafka defined and do their best within that framework.

Followers can succeed too, but followers always live in the shadow of the definers. No matter how well you do, you're playing by someone else's rules.

RobustMQ doesn't want to be a follower. We want to be definers.

What is a definer? Kafka defined the log model for streaming data in 2011, defined the concepts of partitions and replicas—and for over a decade, everyone has worked within that framework. Kubernetes defined container orchestration standards, defined abstractions like Pod and Service, and the entire cloud-native ecosystem was built on these concepts. Rust defined the ownership system and changed the rules of systems programming languages.

Definers create new concepts, new standards, new paradigms. Everyone after them works within that framework. That's true impact.

What We're Defining

The first concept we're defining: message infrastructure should be unified.

Today, enterprises deploy EMQX for IoT connectivity, Kafka for stream processing, and RabbitMQ for microservice communication. Three systems, three protocols, three sets of operational overhead, three data silos. Why? Because each product was designed for a single scenario and optimized in isolation, so none of them can unify the others.

We say: this shouldn't be the end state. The message platform of the future should be unified. One system, through a powerful kernel, supporting all message protocols and scenarios.

The second concept we're defining: the kernel of a message platform should be protocol-agnostic.

The traditional approach is to design one system for MQTT and another for Kafka, with protocols deeply coupled to implementations. We say: protocols are just manifestations—the essence is message routing, storage, and distribution. There should be a protocol-agnostic kernel, with different protocols adapted on top.
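To make the idea concrete, here is a minimal sketch of what a protocol-agnostic kernel with protocol adapters on top could look like. All names (`Kernel`, `ProtocolAdapter`, the adapter structs) are illustrative assumptions, not RobustMQ's actual API; real adapters would parse actual wire frames instead of the stubbed handlers shown here.

```rust
/// The kernel's view of a message: a routing key and a payload,
/// with no protocol-specific details attached.
struct Message {
    topic: String,
    payload: Vec<u8>,
}

/// The protocol-agnostic kernel: it routes, stores, and distributes
/// messages, and knows nothing about MQTT or Kafka.
struct Kernel {
    log: Vec<Message>,
}

impl Kernel {
    fn new() -> Self {
        Kernel { log: Vec::new() }
    }

    /// Every protocol adapter funnels into this single entry point.
    fn publish(&mut self, topic: &str, payload: Vec<u8>) {
        self.log.push(Message {
            topic: topic.to_string(),
            payload,
        });
    }

    fn message_count(&self) -> usize {
        self.log.len()
    }
}

/// Each wire protocol implements this trait, translating its own
/// frames into kernel calls. Protocols become thin adapters.
trait ProtocolAdapter {
    fn name(&self) -> &'static str;
    fn handle_frame(&self, kernel: &mut Kernel, raw: &[u8]);
}

struct MqttAdapter;
impl ProtocolAdapter for MqttAdapter {
    fn name(&self) -> &'static str {
        "mqtt"
    }
    fn handle_frame(&self, kernel: &mut Kernel, raw: &[u8]) {
        // A real adapter would parse an MQTT PUBLISH packet here.
        kernel.publish("mqtt/topic", raw.to_vec());
    }
}

struct KafkaAdapter;
impl ProtocolAdapter for KafkaAdapter {
    fn name(&self) -> &'static str {
        "kafka"
    }
    fn handle_frame(&self, kernel: &mut Kernel, raw: &[u8]) {
        // A real adapter would decode a Kafka Produce request here.
        kernel.publish("kafka-topic", raw.to_vec());
    }
}

fn main() {
    let mut kernel = Kernel::new();
    let adapters: Vec<Box<dyn ProtocolAdapter>> =
        vec![Box::new(MqttAdapter), Box::new(KafkaAdapter)];
    for adapter in &adapters {
        adapter.handle_frame(&mut kernel, b"hello");
    }
    // Two protocols, one shared kernel, one shared log.
    println!("messages in kernel: {}", kernel.message_count());
}
```

The point of the sketch is the direction of the dependency: adapters depend on the kernel, never the reverse, so adding a new protocol means adding one more `ProtocolAdapter` implementation without touching routing or storage.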

The third concept we're defining: foundational software should be rebuilt with the most suitable technology.

Kafka was built on the JVM because in 2011 the JVM was the mainstream choice. But it's 2025 now, and we have better options, Rust among them: zero-GC performance, compile-time memory safety guarantees, and a modern concurrency model. We want to prove that building from scratch with the right technology can produce fundamentally superior products.

Compatibility Is the Start, Definition Is the Goal

Some see us implementing MQTT and Kafka protocols and say: "Aren't you also following?"

Adapting to existing protocols is only the first step. We implement MQTT because IoT devices use it. We support Kafka because many applications already use the Kafka API. This is a pragmatic choice to reduce user migration costs.

But while we're compatible, we're building something entirely different underneath. We don't use Kafka's log model or EMQX's Erlang architecture. We have our own compute-storage separation architecture, our own Rust kernel, our own understanding of multi-protocol unification.

More importantly, we won't stop at adapting existing protocols. When RobustMQ's kernel is mature enough, when we've built sufficient influence, we'll start defining new things. Perhaps a protocol better suited for cloud-native. Perhaps a more efficient multi-protocol routing mechanism. Perhaps a transmission protocol designed specifically for AI scenarios.

That's when we truly transform from followers into definers.

Definition from the Kernel Up

Definition must start from the fundamentals.

If we merely forked Kafka's code and rewrote it in Rust, that would just be a language change—the essence would still be Kafka's approach. If we only added features on top of EMQX, it would still be EMQX's architecture. That's not defining; that's following.

True definition starts from the lowest, most core layer. Rethinking the essence of message queues, redesigning what capabilities the kernel should provide, redefining how protocols are abstracted.

That's why we spend so much time on the kernel, why we insist on hand-writing every line of critical code. We believe that only by building differentiation from the ground up, from the most core layer, can we truly surpass existing products.

The message routing engine, the storage abstraction layer, the scheduling mechanism: these core components must be perfected. If the kernel isn't strong enough, stacking features on top is useless. If the kernel brings no innovation and we merely rewrite in another language, any gap over existing products will stay marginal.
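A storage abstraction layer of the kind named above could, under hypothetical naming, be a trait the kernel programs against, so that in-memory, local-disk, or object-storage backends become interchangeable. The `StorageLayer` trait and `MemoryStorage` backend below are illustrative sketches, not RobustMQ's real interfaces.

```rust
use std::collections::HashMap;

/// Hypothetical storage abstraction: the kernel appends and reads
/// through this trait and never touches a concrete backend directly.
trait StorageLayer {
    /// Appends a record to a shard and returns its offset.
    fn append(&mut self, shard: &str, record: Vec<u8>) -> u64;
    /// Reads the record at `offset`, if it exists.
    fn read(&self, shard: &str, offset: u64) -> Option<&[u8]>;
}

/// In-memory backend, useful for tests; a local-disk or S3 backend
/// would implement the same trait with no kernel changes.
struct MemoryStorage {
    shards: HashMap<String, Vec<Vec<u8>>>,
}

impl MemoryStorage {
    fn new() -> Self {
        MemoryStorage {
            shards: HashMap::new(),
        }
    }
}

impl StorageLayer for MemoryStorage {
    fn append(&mut self, shard: &str, record: Vec<u8>) -> u64 {
        let log = self.shards.entry(shard.to_string()).or_default();
        log.push(record);
        (log.len() - 1) as u64
    }

    fn read(&self, shard: &str, offset: u64) -> Option<&[u8]> {
        self.shards
            .get(shard)
            .and_then(|log| log.get(offset as usize))
            .map(|record| record.as_slice())
    }
}

fn main() {
    let mut store = MemoryStorage::new();
    let offset = store.append("orders", b"m1".to_vec());
    println!("stored at offset {}", offset);
}
```

Separating the kernel from storage this way is what makes a compute-storage split possible at all: the routing and scheduling layers see only offsets and shards, never disks.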

MQTT and Kafka are just means to validate kernel design. When the kernel is truly mature, it should support any protocol, any scenario—even new protocols we define ourselves.

That's definition from the kernel up.

How Long Will It Take

Defining a new standard takes time. Kafka took five or six years to become the streaming standard. Kubernetes took three or four years to dominate container orchestration.

We've given ourselves ten years.

In the early years, focus on polishing the technology. Make the kernel excellent and reach 100% protocol coverage for both MQTT and Kafka. Make the technology solid, the performance outstanding, the stability verifiable.

In the middle years, expand influence. Join Apache, become a top-level project. Speak in the tech community, attract more contributors, build an active community.

In the later years, begin real definition. Propose new concepts, design new standards, advance new paradigms. If the community accepts them, the ecosystem adopts them, and they become de facto standards—then we've succeeded.

This process won't be smooth. There will be skepticism, challenges, failures. But we'll persist, because we believe the direction is right.

Our Choice

Being a follower is relatively safe. You learn best practices, you optimize within mature frameworks, and the risk stays controllable.

Being a definer is full of risk. The direction might be wrong, the capability might fall short, the timing might be off. But if we succeed, the impact is incomparable to that of followers.

We choose to be definers.

Not out of arrogance, but because we believe: message infrastructure needs new paradigms. The existing framework is no longer enough. Someone has to step up to define new standards. Why can't it be us?

We have clear technical convictions—unified kernel, multi-protocol support, cloud-native architecture. We have solid technical capability—every line of core code is hand-written, every design decision is carefully considered. We have long-term patience—ten years is enough to polish the technology to perfection.

Maybe we'll succeed and become the next project that defines standards. Maybe we'll fail and become one attempt in the evolution of technology. Either way, we're seriously doing something meaningful.

We're not trying to be "a better Kafka." We're defining what the next generation of message platforms should look like.

This is RobustMQ's choice, and our belief.

Eyes on the stars, feet on the ground. The future belongs to those who dare to define.


RobustMQ is defining the next generation of unified message platforms from the kernel up. Follow us on GitHub to witness this exploration.