
RobustMQ: Strategic Thinking on Building the Next-Generation Unified Message Platform

After months of deep reflection, we have clarified RobustMQ's strategic positioning and development path. This is not a fast commercialization project, but a long-term undertaking to build great infrastructure software. Below is our complete exposition of technical vision, strategic choices, and execution philosophy.

Vision and Technical Foundation

RobustMQ's core vision is to build a unified message platform on one powerful kernel, supporting multiple protocols and adapting to multiple scenarios. Enterprises would no longer need to run EMQX, Kafka, RabbitMQ, and other systems side by side: a single RobustMQ deployment would cover IoT connectivity, real-time stream processing, and microservice communication.

This vision is ambitious, but we believe it's the inevitable direction for message middleware development. Just as Linux unified Unix fragmentation and Kubernetes unified container orchestration, the next-generation message platform needs to unify today's scattered message infrastructure.

To realize this vision, we use a three-tier architecture: at the bottom is a unified message kernel implemented in Rust; in the middle is the multi-protocol adaptation layer; at the top is scenario optimization.

The unified kernel is the foundation, implementing core capabilities such as high-performance message routing engine, compute-storage separation, pluggable storage, and elastic scheduling. These capabilities are protocol-agnostic and scenario-agnostic, forming a technical moat.
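To illustrate what "pluggable storage" can mean at the kernel layer, here is a minimal Rust sketch: the kernel talks to an abstract store trait, and concrete backends are swapped in behind it. The trait name, method signatures, and in-memory backend below are our own illustrative assumptions, not RobustMQ's actual API.

```rust
use std::collections::HashMap;

// Hypothetical storage abstraction: the kernel appends and reads
// opaque message bytes; backends (memory, local disk, object
// storage, ...) implement this trait and can be swapped without
// touching kernel logic.
trait MessageStore {
    /// Append a message to a topic log; returns its offset.
    fn append(&mut self, topic: &str, payload: Vec<u8>) -> u64;
    /// Read the message at a given offset, if present.
    fn read(&self, topic: &str, offset: u64) -> Option<&[u8]>;
}

// A minimal in-memory backend, useful for tests.
#[derive(Default)]
struct MemoryStore {
    topics: HashMap<String, Vec<Vec<u8>>>,
}

impl MessageStore for MemoryStore {
    fn append(&mut self, topic: &str, payload: Vec<u8>) -> u64 {
        let log = self.topics.entry(topic.to_string()).or_default();
        log.push(payload);
        (log.len() - 1) as u64
    }

    fn read(&self, topic: &str, offset: u64) -> Option<&[u8]> {
        self.topics
            .get(topic)
            .and_then(|log| log.get(offset as usize))
            .map(|v| v.as_slice())
    }
}

fn main() {
    let mut store = MemoryStore::default();
    let off = store.append("sensor/temp", b"21.5".to_vec());
    assert_eq!(off, 0);
    assert_eq!(store.read("sensor/temp", off), Some(&b"21.5"[..]));
    println!("offset {} stored and read back", off);
}
```

A disk-backed or object-storage backend would implement the same trait, which is what lets the kernel stay storage-agnostic.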

Above the kernel is the protocol adaptation layer. MQTT, Kafka, AMQP, and other protocols are different manifestations of kernel capabilities. MQTT needs pub/sub and QoS; Kafka needs partitioned stream processing; AMQP needs complex routing—these needs have unified abstractions at the kernel level.
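To make the "unified abstraction" idea concrete, the following hypothetical Rust sketch normalizes each protocol's addressing scheme into one kernel-level route before dispatch. The type names and mapping rules are illustrative assumptions, not RobustMQ's real design.

```rust
// Hypothetical unified routing abstraction: each protocol's
// addressing scheme reduces to a single kernel-level route.
#[derive(Debug, PartialEq)]
struct KernelRoute {
    topic: String,  // logical destination inside the kernel
    partition: u32, // ordering/parallelism unit (Kafka-style)
    qos: u8,        // delivery guarantee level (MQTT-style)
}

enum ProtocolAddress {
    Mqtt { topic: String, qos: u8 },
    Kafka { topic: String, partition: u32 },
    Amqp { exchange: String, routing_key: String },
}

fn normalize(addr: ProtocolAddress) -> KernelRoute {
    match addr {
        // MQTT: one ordered stream per topic, no partitions.
        ProtocolAddress::Mqtt { topic, qos } => KernelRoute { topic, partition: 0, qos },
        // Kafka: partition preserved; at-least-once semantics assumed.
        ProtocolAddress::Kafka { topic, partition } => KernelRoute { topic, partition, qos: 1 },
        // AMQP: exchange + routing key collapse into one kernel topic.
        ProtocolAddress::Amqp { exchange, routing_key } => KernelRoute {
            topic: format!("{}/{}", exchange, routing_key),
            partition: 0,
            qos: 1,
        },
    }
}

fn main() {
    let r = normalize(ProtocolAddress::Amqp {
        exchange: "orders".into(),
        routing_key: "created".into(),
    });
    assert_eq!(r.topic, "orders/created");
    println!("{:?}", r);
}
```

The point of the sketch is the shape of the layering: protocol adapters translate, and only the kernel routes.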

The application scenario layer optimizes for specific businesses. IoT needs massive long-connection management; AI needs high throughput and low latency; real-time analytics needs stream processing. These optimizations share the same kernel.

The advantage of layering: once the kernel is strong and stable, adding protocols and scenarios becomes easier. We don't need to re-implement low-level logic for each protocol—we only need to develop adaptation layers. This is the power of "build once, reuse many times."

Execution Strategy: Depth First and Multi-Scenario Parallel

With limited resources, the biggest risk is "wanting to do everything and doing nothing well." Too many open source projects have died from over-expansion: they ship many features, each only 60–70% complete, and are ultimately abandoned.

Our execution principle: Only start development on the next protocol after one protocol reaches 100% completeness. This isn't conservatism—it's responsibility to users and technology.

We chose MQTT as the first protocol: the specification is clear, the implementation scope is well defined, the testing ecosystem is mature, and it fully exercises kernel capabilities. MQTT features like pub/sub, QoS, session management, and will messages all need strong kernel support. If MQTT works well, it proves the kernel is stable and reliable.
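One example of the kernel support MQTT demands is the spec's QoS downgrade rule: a message is delivered to a subscriber at the lower of the publisher's QoS and the QoS granted to the subscription. A minimal Rust sketch of that rule (the type names are our own, not RobustMQ's):

```rust
// MQTT QoS levels, ordered by delivery guarantee strength.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum QoS {
    AtMostOnce = 0,  // fire and forget
    AtLeastOnce = 1, // acknowledged, duplicates possible
    ExactlyOnce = 2, // four-step handshake, no duplicates
}

// Per the MQTT spec, delivery happens at the minimum of the
// publish QoS and the subscription's granted QoS.
fn effective_qos(publish: QoS, granted: QoS) -> QoS {
    if publish <= granted { publish } else { granted }
}

fn main() {
    assert_eq!(effective_qos(QoS::ExactlyOnce, QoS::AtLeastOnce), QoS::AtLeastOnce);
    assert_eq!(effective_qos(QoS::AtMostOnce, QoS::ExactlyOnce), QoS::AtMostOnce);
    println!("QoS downgrade rule holds");
}
```

Tracking this per session, across reconnects and in-flight messages, is exactly the kind of state management the kernel must get right.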

100% means: complete functionality, production-grade stability, industry-leading performance, comprehensive documentation and tooling, and sufficient user validation. We won't rush to the next protocol because something "basically works"—we'll keep refining until it's excellent.

When MQTT reaches 100%, we'll start Kafka development. With MQTT experience and a mature kernel, Kafka will go more smoothly. Similarly, Kafka to 100%, then AMQP, then other protocols. This is a serial, incremental process—each step solid, each protocol a quality product.

Although protocols are serial, we won't wait for MQTT to be fully done before touching other scenarios. While focusing on MQTT, we'll use some resources to explore AI training, stream processing, and other scenarios. This isn't splitting focus—it's validating kernel strength.

AI scenarios place extremely strict demands on message queues: microsecond-level latency, high throughput, large file transfer, elastic scaling. If the kernel can support AI training, the design is successful and we can demonstrate technical strength to the community. More importantly, AI is hot—it helps build attention. Through AI community engagement, tech conference presentations, and performance reports, we attract the tech community and build a "technically leading" brand impression.

Similarly, we'll build a basic Kafka protocol framework. Although full development waits until MQTT reaches 100%, building it in advance, showing progress, and publishing plans lets Kafka users know what we're doing, expanding our potential user base.

This is "multi-wheel drive": MQTT is the main wheel, spinning fast and steady; AI and Kafka are auxiliary wheels, building momentum, validating technology, expanding influence. The main wheel carries capability; auxiliary wheels create visibility.

Community Building and the Open Source Way

Great infrastructure software is necessarily community-driven. Linux, PostgreSQL, and Kubernetes have all proven this.

From day one of the project, we've placed community building in an important position. Continuous technical blog output, Discord operations, timely GitHub Issue responses, tech conference participation, nurturing external contributors—these are daily work.

We won't "wait for the product to mature before building community"—we'll let the community grow with the product. Early user feedback improves the product; external contributors raise code quality; technical discussion avoids working in isolation. Community activity is the best measure of whether we're just pleasing ourselves.

The Apache Foundation is strategically important. Joining the incubator isn't just brand certification—more importantly it establishes healthy community culture through a governance framework. Apache's "community over code" philosophy, transparent decision process, and standardized release mechanisms can help RobustMQ become a truly community-driven project.

But joining Apache shouldn't be too early. Only when the product is mature, community is active, and influence is sufficient will it be natural. Our goal is to enter with strength and graduate quickly as a top-level project, not stagnate in the incubator for years.

Technical Purity and Long-termism

Choosing Rust isn't trend-chasing; it's a deliberate technical decision. Most mainstream message queues are written in Java, which brings a mature ecosystem but also GC pauses, limited memory efficiency, and constrained concurrency. C++ offers better raw performance but lower development efficiency and persistent memory safety risks.

Rust gives us the best of both: near-C++ performance, memory safety guarantees, modern development experience. Zero GC pauses mean more stable latency; ownership system means safer concurrency; strong typing means more reliable code. These characteristics have obvious value in message queue scenarios with extreme performance and reliability requirements.
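The "safer concurrency" claim is easy to demonstrate with a few lines of standard Rust: shared mutable state compiles only when wrapped in thread-safe types such as `Arc` and `Mutex`, so a data race is a compile error rather than a production incident.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: Arc gives shared ownership across threads,
    // Mutex serializes mutation. Removing either is a compile
    // error, not a latent race condition.
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();

    for _ in 0..8 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // The lock is required by the type system; there is
                // no way to reach the u64 without taking it.
                *counter.lock().unwrap() += 1;
            }
        }));
    }

    for h in handles {
        h.join().unwrap();
    }

    // 8 threads x 1000 increments, with no lost updates.
    assert_eq!(*counter.lock().unwrap(), 8000);
    println!("final count: {}", counter.lock().unwrap());
}
```

In a broker handling millions of concurrent connections, moving this class of bug from runtime to compile time is a material reliability win.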

Compute-storage separation isn't about novelty—it's the inevitable choice in the cloud-native era. Stateless compute can scale quickly; independent storage can be flexibly configured; separated scheduling enables intelligent resource optimization. This architecture gives RobustMQ innate Serverless capability, adapting to cloud-native elasticity needs.

We don't compromise on technology. We won't lower code quality for quick releases, won't sacrifice architectural elegance for compatibility, won't deviate from technical essence for market demands. This purity makes us move slower, but ensures we go far and steady.

We have no commercialization plans or pressure—this is both advantage and test. Advantage is focus on technology without short-term profit interference; test is how to maintain long-term commitment and motivation.

Great open source projects take time: Linux needed roughly 30 years to become the server standard; PostgreSQL took about 25 years to surpass MySQL in developer mindshare; Rust took 10 years to be accepted into the Linux kernel; Kubernetes took 7 years to become the cloud-native standard. Infrastructure software needs sustained investment and firm belief.

We're giving ourselves 10 years. The first 5 years focus on technical refinement and community building—making one protocol excellent, accumulating user trust. When technology is mature, community is active, and influence is sufficient, commercialization will come naturally. But we have no such plans or thinking in the short term.

Strategic Keywords

RobustMQ's strategy can be summarized as: unified, deep, patient, open.

Unified is the vision—one kernel, one system, solving all message scenario needs. Deep is the execution principle—one protocol to 100% before the next. Patient is the time view—10 years to refine technology, no rush. Open is the attitude—embrace community, accept scrutiny, keep learning.

This road is long and not easy. The market already has excellent message queue products; building differentiation isn't easy, earning user trust isn't easy, building an active community isn't easy. But precisely because it's not easy, it's worth doing.

We believe that with the right direction, solid technology, healthy community, and sustained investment, RobustMQ will become an important presence in message middleware. Maybe it won't dominate the market, but it will be a technically excellent, community-active, trustworthy open source project.

This is what we want to do: spend 10 years building the next-generation unified message platform, providing better infrastructure for the cloud-native and AI era.

The road is long, but the direction is clear. We're already on the way.