
What Should a Messaging System Look Like in the Age of AI — Thoughts and Explorations on mq9

1. Where mq9 Stands Today

mq9 is a protocol designed by RobustMQ for AI Agent communication. Its core semantic is the mailbox — each Agent has a persistent communication address where messages wait until their TTL expires or they are processed. Senders and receivers do not need to be online at the same time.

The foundational capabilities of mq9 are now in place.

Each Agent has its own dedicated mailbox as a communication address. Messages are persisted until TTL expires, and neither sender nor receiver needs to be online simultaneously. Messages support three priority levels (critical / urgent / normal), with higher-priority messages delivered first and FIFO ordering guaranteed within the same priority. Multiple Agents can form a queue group to compete for consumption from a single mailbox, ensuring each message is processed exactly once — naturally supporting task distribution and load balancing. Public mailboxes support custom names and descriptions, and Agents can discover the existence and capabilities of other Agents via PUBLIC.LIST.
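The mailbox semantics above — three priority levels, FIFO within a level, and queue-group consumers each taking a message exactly once — can be sketched with a toy in-memory model. This is an illustration of the semantics only, not the real broker or SDK API:

```python
import heapq
import itertools

# Toy model of an mq9 mailbox: higher-priority messages are delivered
# first, FIFO order holds within the same priority, and consumers in a
# queue group compete so each message is taken exactly once.
PRIORITY = {"critical": 0, "urgent": 1, "normal": 2}

class Mailbox:
    def __init__(self):
        self._heap = []                 # (priority, seq, payload)
        self._seq = itertools.count()   # monotonic counter keeps FIFO within a priority

    def send(self, payload, priority="normal"):
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._seq), payload))

    def receive(self):
        """Pop the highest-priority, oldest message, or None if empty."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

box = Mailbox()
box.send("routine report")                       # normal
box.send("server on fire", priority="critical")
box.send("please review", priority="urgent")

# Two workers in a queue group compete: each message is consumed once.
worker_a = box.receive()   # "server on fire"  (critical first)
worker_b = box.receive()   # "please review"   (then urgent)
```

Because the heap key is `(priority, seq)`, two messages at the same priority are ordered by arrival, which is exactly the FIFO-within-priority guarantee described above.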

SDKs are available in six languages (Python, Go, JavaScript, Java, Rust, C#), integrations with LangChain and LangGraph are complete, and MCP Server support is in progress.

So what comes next? We want mq9 to be RobustMQ's entry point into the AI architecture era. That is why we have been thinking carefully about what a messaging system should look like in the age of AI. Here is where we stand.


2. The Landscape Has Changed — Existing MQ Systems Are No Longer Enough

Kafka is great. RabbitMQ is great. NATS is great.

But they were designed for a different era.

Kafka solved this problem: how to move massive volumes of data between systems at high throughput. Its core assumption is that messages are data, the middleware is a pipeline, and consumers are data processors. That assumption was entirely correct in the big data era.

The emergence of AI Agents changes the picture.

What Agents pass between each other is not just data — it is intent: "Help me analyze this contract," "Task complete, here are the results," "I need a collaborator who can handle legal questions." Intent and data are fundamentally different: data is structured, intent is semantic; data is routed by Topic, intent is routed by understanding.

When Agent A wants to hand off a legal question to "the most suitable Agent," Kafka asks: which Topic should you publish to? You need to know the destination in advance. But Agent A does not know — it only knows what its problem is.

This is not a flaw in Kafka. The landscape has simply changed.

The deeper difference lies in the time model. Traditional MQ design assumes consumers are online and that minimizing message wait time is the goal. But Agents are autonomous — they have their own rhythm. An Agent may be executing another task, waiting for human confirmation, or sitting idle. Communication between Agents inherently requires asynchrony, persistence, and the ability to wait.

There is also the security model. Traditional MQ assumes senders are trusted, and the middleware's job is simply delivery. But Agents make decisions autonomously. A misconfigured or compromised Agent could issue dangerous instructions — "delete the database," "transfer funds." A traditional message broker would faithfully deliver that message without any interception.

The landscape has changed. The tools need to change with it.


3. What We Think a Messaging System Should Look Like in the Age of AI

This is an open question, and we do not have all the answers. But we have formed some views.

Asynchronous Communication Is the Foundation, Not a Feature

In the world of AI Agents, asynchrony is not an optional communication mode — it is the default.

Agents are autonomous entities with their own execution rhythm. Forcing synchronous communication between Agents is equivalent to forcing them to wait on each other, which undermines the very foundation of their autonomy. The mailbox semantic — each Agent has a persistent communication address where messages wait for it — is the most natural base abstraction for communication in the AI era.

This is also the starting point for mq9's design. Not because asynchrony is technically more complex, but because it aligns more naturally with how Agents actually work.

Service Discovery Should Be Built Into the Communication Protocol

Traditional service discovery is a separate piece of infrastructure — Consul, Etcd, ZooKeeper. Service registration lives in one place, message communication lives in another, and two systems need to be coordinated.

But mq9's public mailboxes are inherently a service registry. An Agent creates a public mailbox, fills in a name and description, and other Agents discover it via PUBLIC.LIST. No external registry is needed — the communication protocol itself takes on the responsibility of service discovery.

This was not a deliberate design decision so much as a natural extension of the mailbox semantic. When you give each Agent a meaningful address, service discovery is already done.

Routing Should Understand Semantics, Not Just Match Addresses

This is a direction we are actively exploring — we do not have a definitive answer yet.

Traditional routing is hard-coded: you know who you are sending to, you fill in the destination address, and the middleware handles delivery. This works well in system-to-system integration scenarios where boundaries are clearly defined.

But Agent collaboration is different. Agent A has a problem it needs to handle, it does not know who to send it to, it only knows what the problem is.

If the middleware could understand the semantics of a message, be aware of each Agent's capability description, and perform vector similarity matching — then "route to the most suitable Agent" would no longer be the Agent's own responsibility, but a capability provided by the communication infrastructure.

Vectorize the capability descriptions, vectorize the message content, and perform matching at the routing layer. The middleware evolves from a "post office" into an "intelligent dispatcher." This direction is worth pursuing.
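The matching step described above can be sketched in a few lines. This is a minimal stand-in: a real implementation would use learned embeddings, while here a plain bag-of-words cosine similarity plays that role, and the agent names and descriptions are invented for illustration:

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words count vector (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Capability descriptions, as registered via public mailboxes.
agents = {
    "legal-agent":   "review contracts and answer legal questions",
    "finance-agent": "handle invoices payments and budget analysis",
}

def route(message):
    """Return the mailbox whose capability description best matches the message."""
    msg_vec = vectorize(message)
    return max(agents, key=lambda name: cosine(msg_vec, vectorize(agents[name])))

print(route("help me analyze this contract for legal risk"))  # → legal-agent
```

The sender never names a destination; the routing layer picks one from the registered capability descriptions, which is the shift from "post office" to "intelligent dispatcher."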

Middleware Should Understand Intent, Not Just Move Bits

This is a more ambitious position.

Existing message middleware operates as a "mindless relay" — it does not know what a message contains, does not judge whether a message should be delivered, and only concerns itself with getting it from A to B. This design is reasonable in human-built systems, where security policy is enforced at the application layer.

But in the world of AI Agents, application-layer security policy is fragile. A misled Agent may send a message carrying the semantic of "delete the database." Application-layer policies might be bypassed — but if the middleware can recognize that intent at the transport layer and block it, that becomes an infrastructure-level security boundary.

No reliance on application-layer discipline. A physical safety valve added at the lowest level.

This requires the middleware to have intent-understanding capabilities — a lightweight policy engine, configurable rules, and intent auditing for high-risk operations. This is not a small change; it is a redefinition of the role of message middleware.
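A first approximation of such a policy engine can be sketched as configurable rules evaluated in transit. This is a deliberately simple keyword-rule model, assuming the real engine would combine rules with semantic intent classification; the rule names are invented for illustration:

```python
import re

# Transport-layer policy rules: (pattern, rule name). A message matching
# any rule is blocked before delivery, independent of application code.
POLICY_RULES = [
    (re.compile(r"\b(delete|drop)\b.*\bdatabase\b", re.I), "destructive-db-operation"),
    (re.compile(r"\btransfer\b.*\bfunds?\b", re.I), "financial-transfer"),
]

def check(message):
    """Return (allowed, matched_rule); non-compliant messages are blocked."""
    for pattern, rule_name in POLICY_RULES:
        if pattern.search(message):
            return False, rule_name
    return True, None

allowed, rule = check("please delete the production database")
# allowed is False: the message never reaches the recipient, and an
# audit event could be emitted here for the triggered rule
```

The point is where the check runs: in the broker, below every Agent, so a compromised Agent cannot route around it the way it might bypass an application-layer guard.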

We believe this direction has value, but it is an exploration, not a commitment we are making today.

Context Should Flow at the Infrastructure Layer, Not Just Be Passed at the Application Layer

There is enormous waste in how AI Agents communicate today: every interaction redundantly transmits context.

Agent A tells Agent B: "Hello, I'm A, we previously discussed X, and now I need you to help me do Y." This contextual information consumes a large number of tokens and is retransmitted in every interaction.

If the communication infrastructure could be session-aware — remembering the conversation history between Agents and automatically supplying missing context as messages flow — then Agent A would only need to say "do Y," and the middleware, knowing the history between A and B, would carry the necessary context along with the message.

This transforms the middleware from a "stateless pipeline" into a "stateful context network."
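A toy model of such a "stateful context network" follows. Everything here is an assumption for illustration — how much history to attach, how to scope sessions, and the message shape are all open design questions, not mq9's actual design:

```python
from collections import defaultdict

class ContextNetwork:
    """Middleware that remembers per-pair history and enriches messages with it."""

    def __init__(self):
        self._sessions = defaultdict(list)   # (sender, receiver) -> prior messages

    def deliver(self, sender, receiver, body):
        key = (sender, receiver)
        enriched = {
            "from": sender,
            "body": body,
            "context": list(self._sessions[key]),  # history supplied by the broker
        }
        self._sessions[key].append(body)
        return enriched

net = ContextNetwork()
net.deliver("A", "B", "we discussed contract X")
msg = net.deliver("A", "B", "do Y")
# msg["context"] == ["we discussed contract X"]: A said only "do Y",
# and the middleware carried the history along with the message
```

The saving is exactly the one described above: the sender stops retransmitting context on every interaction, because the infrastructure owns the session state.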

This is the most ambitious direction we are exploring, and also the one furthest from reality today. But we believe it points toward the right future — in AI systems, context is the most valuable resource, and infrastructure-level awareness and management of context will be one of the most important differentiators of the next generation of communication systems.


4. What's Next for mq9

The foundational mailbox communication capabilities are complete. Going forward, mq9 will progressively implement the capabilities outlined above, guided by the views we have described.

Phase 1: Semantic Service Discovery

Vectorize the desc field of public mailboxes, and add semantic search support to PUBLIC.LIST. Agents no longer need to know the exact name of a target — they simply describe what they need, and mq9 returns the best-matching mailbox list.
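The query shape of such a search can be sketched as follows. This assumes embedding-based vectorization of the desc field as described above; plain token overlap (Jaccard similarity) stands in for it here, and the mailbox names and descriptions are invented:

```python
# Public mailboxes: name -> desc field.
mailboxes = {
    "contract-review": "analyze contracts and flag legal risk",
    "data-pipeline":   "schedule and monitor ETL jobs",
    "translator":      "translate documents between languages",
}

def public_list(query, top_k=2):
    """Rank public mailboxes by similarity between the query and each desc."""
    q = set(query.lower().split())

    def score(name):
        d = set(mailboxes[name].lower().split())
        return len(q & d) / len(q | d)       # Jaccard similarity as a stand-in

    ranked = sorted(mailboxes, key=score, reverse=True)
    return ranked[:top_k]

# The Agent describes what it needs rather than naming a mailbox:
matches = public_list("review a contract for legal issues")
```

The calling Agent gets back a ranked list of candidate mailboxes instead of having to know an exact name in advance.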

This positions mq9 as the natural service registry and discovery infrastructure for AI Agent systems.

Phase 2: Semantic Routing

Messages no longer need to specify an exact target mailbox at send time. The sender describes its intent, and mq9 performs vector matching between the message content and registered Agent capability descriptions, automatically routing to the most suitable recipient.

Evolving from "I know who to send this to" to "I only know what I need done."

Phase 3: Intent-based Policy

Configure policy rules on a mailbox. As a message transits the policy engine, its semantic content is evaluated against those rules, and non-compliant messages are blocked outright.

This is a security boundary at the transport layer — independent of application-layer implementation and Agent self-discipline. High-risk operations are stopped at the infrastructure level.

There is a natural advantage here from the multi-protocol architecture. When the policy engine identifies a risky message, no additional system is needed to handle the audit events — RobustMQ has a built-in risk Topic, and messages that trigger policy rules are automatically written to it. Risk analysis systems can consume directly via the Kafka protocol. No data crosses system boundaries, no changes are needed in business code, and no additional connections need to be made between mq9, the message queue, and the risk system.

One infrastructure. One dataset. Multiple protocols, each consuming what it needs. This is a concrete example of RobustMQ's multi-protocol architecture applied to AI security scenarios — not supporting multiple protocols for its own sake, but rather using multiple protocols to genuinely reduce the overall complexity of the system.

Phase 4: Context Awareness (Exploratory)

mq9 becomes aware of the session context between Agents, and messages automatically carry the necessary historical information as they flow. Agents no longer need to retransmit the full context in every interaction, reducing token consumption and making Agent collaboration more efficient.

This is the longest-horizon direction we are pursuing.


These four phases are not strictly sequential — priorities will be adjusted based on feedback from real-world use cases. But the direction is clear: mq9 aims to become the communication infrastructure of the AI era, starting from message delivery and progressively acquiring semantic understanding, intelligent routing, and intent auditing capabilities.


mq9 protocol documentation: https://github.com/robustmq/robustmq-sdk/blob/main/docs/mq9-protocol.md
RobustMQ: https://github.com/robustmq/robustmq

🎉 Since you're already signed in to GitHub, why not give us a Star while you're here? ⭐ Your support is our greatest motivation 🚀