IBM Acquires Confluent: Talking About RobustMQ

Today I saw the news that IBM acquired Confluent for $11 billion. Honestly, my first reaction was surprise: so people are still willing to invest this much in message queue technology, in this kind of infrastructure. In building RobustMQ, the biggest skepticism we've faced is that the message queue space is already saturated, that there's no room left. In an era when AI concepts are everywhere, message queues, as traditional infrastructure software, have neither the sex appeal of large models nor the immediate feedback of application software. They seem to have long since left the spotlight.

But today's news made me think: maybe we're doing something that could be valuable. I'm thinking—if our technical direction is sound and we do it well, it could be really cool. I also want to share some thoughts from developing RobustMQ.

Over the past few months, many people have asked: There are already mature products like Kafka, EMQX, RabbitMQ; the market landscape is basically set. Why build RobustMQ? What makes you think you can do better?

Our answer is simple: To surpass, you must start from scratch.

Many projects today are doing optimization: Redpanda rewrote Kafka in C++ for better performance; some projects add features on top of Kafka to expand use cases. These are valuable improvements. But we firmly believe that true differentiation and competitiveness must be built from the lowest, most foundational level. Only by designing from scratch can you build something truly different.

Kafka was designed over a decade ago—back then there was no cloud-native concept, no memory-safe systems language like Rust, no thinking in terms of compute-storage separation. EMQX is implemented in Erlang; it performs well in IoT scenarios, but its architecture makes multi-protocol unification difficult. These products are all excellent and solved the key problems of their time. But times change, technology advances, and requirements evolve.

So we chose to start from the kernel and rebuild completely. We use Rust instead of Java or Erlang, because we need GC-free performance and memory-safety guarantees. We use compute-storage separation instead of a traditional monolithic architecture, because the cloud-native era needs true elastic scaling. We designed a protocol-agnostic unified kernel instead of optimizing for a single protocol, because we believe the future needs a unified message platform.

This path is hard and slow. Starting from zero means writing every line of code ourselves, hitting every pitfall ourselves, and polishing every detail repeatedly. But we believe that only this way can we establish true differentiation at the lowest, most core level.

RobustMQ's core design philosophy is "unified kernel, multi-protocol adaptation." We've implemented a high-performance message routing engine, compute-storage separation architecture, pluggable storage abstraction, and elastic scheduling at the kernel layer. These capabilities are protocol-agnostic and scenario-agnostic. On top of this strong kernel, MQTT, Kafka, and AMQP are just different protocol adaptation layers.
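To make "unified kernel, multi-protocol adaptation" concrete, here is a minimal sketch of the idea: a pluggable storage trait that the kernel depends on, and a protocol-agnostic publish/fetch surface that any protocol adapter could call into. All names here (`StorageAdapter`, `MemoryStorage`, `Kernel`) are illustrative, not RobustMQ's actual API.

```rust
// Illustrative sketch only: StorageAdapter, MemoryStorage, and Kernel are
// hypothetical names, not RobustMQ's actual API.
use std::collections::HashMap;

// Pluggable storage abstraction: the kernel sees only this trait, so the
// backend (memory, local disk, object storage) can be swapped out.
trait StorageAdapter {
    fn append(&mut self, topic: &str, payload: Vec<u8>) -> u64;
    fn read(&self, topic: &str, offset: u64) -> Option<&[u8]>;
}

// A trivial in-memory backend standing in for a real storage engine.
struct MemoryStorage {
    topics: HashMap<String, Vec<Vec<u8>>>,
}

impl StorageAdapter for MemoryStorage {
    fn append(&mut self, topic: &str, payload: Vec<u8>) -> u64 {
        let log = self.topics.entry(topic.to_string()).or_default();
        log.push(payload);
        (log.len() - 1) as u64 // offset of the appended record
    }
    fn read(&self, topic: &str, offset: u64) -> Option<&[u8]> {
        self.topics.get(topic)?.get(offset as usize).map(|v| v.as_slice())
    }
}

// The protocol-agnostic kernel: MQTT, Kafka, or AMQP adapters would each
// translate their own wire format into these same publish/fetch calls.
struct Kernel<S: StorageAdapter> {
    storage: S,
}

impl<S: StorageAdapter> Kernel<S> {
    fn publish(&mut self, topic: &str, payload: &[u8]) -> u64 {
        self.storage.append(topic, payload.to_vec())
    }
    fn fetch(&self, topic: &str, offset: u64) -> Option<&[u8]> {
        self.storage.read(topic, offset)
    }
}

fn main() {
    let mut kernel = Kernel { storage: MemoryStorage { topics: HashMap::new() } };
    let off = kernel.publish("sensors/temp", b"21.5");
    assert_eq!(kernel.fetch("sensors/temp", off), Some(&b"21.5"[..]));
    println!("record stored at offset {}", off);
}
```

The point of the sketch is the seam: because the kernel talks to a trait rather than a concrete store, compute and storage can scale independently, and a new protocol adds an adapter rather than a new broker.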

This design lets us "build once, reuse many times." The standard we've set for ourselves: only start the next protocol once the current one reaches 100%. MQTT is our first protocol—not because MQTT has the largest market, but because MQTT can fully validate the kernel's capabilities. The pub/sub model, QoS guarantees, session management, massive connection handling—all of these need strong kernel support.
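As one small example of why MQTT exercises the kernel well: its pub/sub model requires the router to match topics against wildcard filters. A minimal sketch of MQTT-style filter matching (`+` matches exactly one level, `#` matches all remaining levels); this is illustrative, not RobustMQ's actual router, and it skips spec details like filter validation and `$SYS` topics:

```rust
// Sketch of MQTT topic-filter matching. '+' matches exactly one level;
// '#' matches all remaining levels, including the parent level itself
// (per the MQTT spec, "sensors/#" also matches "sensors").
// Simplification: assumes the filter is already valid.
fn topic_matches(filter: &str, topic: &str) -> bool {
    let mut f = filter.split('/');
    let mut t = topic.split('/');
    loop {
        match (f.next(), t.next()) {
            (Some("#"), _) => return true,            // multi-level wildcard
            (Some("+"), Some(_)) => continue,         // single-level wildcard
            (Some(fl), Some(tl)) if fl == tl => continue,
            (None, None) => return true,              // both fully consumed
            _ => return false,                        // lengths or levels differ
        }
    }
}

fn main() {
    assert!(topic_matches("sensors/+/temp", "sensors/room1/temp"));
    assert!(topic_matches("sensors/#", "sensors"));
    assert!(!topic_matches("sensors/+", "sensors/room1/temp"));
    println!("all filters behaved as expected");
}
```

Doing this matching efficiently for millions of subscriptions is exactly the kind of problem that stresses the routing engine, which is why MQTT is a good first validation of the kernel.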

We're not chasing short-term convenience—we're chasing long-term correctness. We chose Rust because it provides near-C++ performance while guaranteeing memory safety. We chose compute-storage separation because the cloud-native era needs true elasticity. We chose multi-protocol unification because we believe that's the direction of the future.

Looking at Confluent's journey—11 years from founding to acquisition—it proves a truth: excellent infrastructure software takes time. We're giving ourselves 10 years to make each protocol excellent, one by one, and validate each scenario deeply.

Some say our vision is too big—a "next-generation unified message platform" supporting all protocols and all scenarios. But we're also clear-eyed. Vision is vision; execution must be grounded. Our current focus is clear: bring MQTT to 100%, and use MQTT to validate the kernel's stability and reliability. At the same time, we're exploring AI scenarios to validate the kernel's generality and performance.

Look up to know where you're going; look down to know how to take each step. That's our rhythm.

Today's news makes me more convinced we're doing the right thing. AI-era demand for data infrastructure is growing rapidly. Building differentiation from the kernel is the path to truly surpassing existing products.

Of course, the road is long. But we're not in a rush. We want to do something with technical ambition, something that can influence technology development long-term. Confluent took 11 years; Linux took 30. We're giving ourselves enough time to refine our technology.

Technology itself is the best answer. If this is valuable, the technology will speak.