VOLATILE EVENT INGESTION. ROUTING. DISPATCH. STREAMING. UNLEASHED.

Built to saturate the bandwidth of the next generation of hardware.

# VERY ALPHA - CHILL OUT
# Download and run
curl -fsSL https://bustermq.dev/install.sh | sh
bustermq --host 0.0.0.0 --port 4222

# Or with Docker
docker run -p 4222:4222 bustermq/bustermq
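
BusterMQ speaks the NATS wire protocol, so an existing NATS client should connect to it unchanged. A minimal smoke test with the standard nats.go client, assuming the server from the commands above is listening locally on port 4222:

package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the locally running BusterMQ instance over the NATS protocol.
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Flush forces a PING/PONG round trip, confirming the server is responsive.
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to", nc.ConnectedUrl())
}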

BUILT FOR WHAT YOU BUILD.

From edge devices to planet-scale systems.

IoT / Edge: millions of devices
Microservices: event-driven
Real-time Analytics: stream processing
Gaming: low-latency sync
Ad Tech: real-time bidding
Financial: market data feeds

PERFORMANCE. AT THE SPEED OF SILICON.

128 clients. 50M messages. 8 cores. No tricks.

BusterMQ: 251.9M msg/s
NATS (Go): 16.4M msg/s
Redis Streams: 7.5M msg/s
AWS Kinesis: 1M msg/s
Google Pub/Sub: 200K msg/s
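
The harness behind these numbers isn't shown here, but a rough single-connection throughput check against a local instance can be sketched with the standard nats.go client (one publisher, 128-byte payloads; purely illustrative, nothing like the 128-client setup quoted above):

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	const total = 1_000_000
	payload := make([]byte, 128)

	start := time.Now()
	for i := 0; i < total; i++ {
		// Publish is fire-and-forget; the client batches writes internally.
		if err := nc.Publish("bench", payload); err != nil {
			log.Fatal(err)
		}
	}
	// Flush drains the write buffer and waits for the server's PONG,
	// so the elapsed time covers every message reaching the server.
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}

	elapsed := time.Since(start)
	fmt.Printf("%d msgs in %v (%.1fM msg/s)\n",
		total, elapsed, float64(total)/elapsed.Seconds()/1e6)
}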

NATS PROTOCOL COMPATIBLE.

Use your existing clients. More features coming.


CORE PROTOCOL

Publish / Subscribe: PUB, SUB, UNSUB
Wildcard Subscriptions: * and >
Queue Groups: load balancing
Request / Reply: INBOX pattern
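
Here is a sketch of those core features driven through the standard nats.go client against a local BusterMQ; the subject names are made up for illustration:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Wildcard subscription: '*' matches one token, '>' matches the rest of the subject.
	nc.Subscribe("orders.*.created", func(m *nats.Msg) {
		fmt.Printf("order event on %s: %s\n", m.Subject, m.Data)
	})

	// Queue group: messages on "jobs" are load-balanced across the "workers" group.
	nc.QueueSubscribe("jobs", "workers", func(m *nats.Msg) {
		fmt.Printf("picked up job: %s\n", m.Data)
	})

	// Request / Reply: the responder answers on the request's INBOX reply subject.
	nc.Subscribe("time.now", func(m *nats.Msg) {
		m.Respond([]byte(time.Now().Format(time.RFC3339)))
	})

	nc.Publish("orders.eu.created", []byte(`{"id":42}`))
	nc.Publish("jobs", []byte("resize-image"))

	reply, err := nc.Request("time.now", nil, time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", string(reply.Data))

	nc.Flush()
	time.Sleep(100 * time.Millisecond) // give async handlers a moment to print
}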

CONNECTORS COMING SOON

Redpanda: Kafka protocol
VictoriaMetrics: metrics sink
gRPC Stream: bidirectional
ClickHouse: analytics sink
Webhook: HTTP POST

CHOOSE YOUR DRIVERS. UNLOCK PERFORMANCE.

We ship native drivers in Go, Rust, and Zig that unlock shard-aware routing and real load balancing. Same protocol, real scale.

Go Rust Zig

NATS Compatible

PROTO V1

Drop-in compatible with NATS. Point your existing clients at BusterMQ and the protocol just works. Nothing breaks, nothing to rewrite.

Shard-Aware Drivers RECOMMENDED

GO / RUST / ZIG

But our native drivers unlock shard-aware routing and server-guided load balancing, keeping traffic local and cores evenly fed. That is where the real throughput comes from.
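
BusterMQ's driver API isn't documented on this page, so the snippet below is only a conceptual sketch of what shard-aware routing means: hash each subject to a shard, hold one connection per shard, and always publish on the connection that owns that shard so a subject's traffic stays on one core. Every type and name here is hypothetical.

package sharding

import (
	"hash/fnv"

	"github.com/nats-io/nats.go"
)

// shardedPublisher is a hypothetical illustration, not the real BusterMQ driver:
// it pins each subject to a shard and keeps one dedicated connection per shard.
type shardedPublisher struct {
	conns []*nats.Conn // conns[i] carries traffic for shard i
}

// shardFor maps a subject to a shard with a stable hash, so the same subject
// always lands on the same server core.
func (p *shardedPublisher) shardFor(subject string) int {
	h := fnv.New32a()
	h.Write([]byte(subject))
	return int(h.Sum32() % uint32(len(p.conns)))
}

// Publish routes the message over the connection that owns the subject's shard.
func (p *shardedPublisher) Publish(subject string, data []byte) error {
	return p.conns[p.shardFor(subject)].Publish(subject, data)
}

In the real drivers the shard map would presumably come from the server rather than a client-side hash, which is what "server-guided load balancing" refers to.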

CHOOSE YOUR I/O BACKEND.

Same architecture. Different syscalls. Pick what fits.


io_uring RECOMMENDED

--io-backend=io_uring

Asynchronous I/O over shared submission/completion rings. Near-zero syscall overhead, batched completions. Requires Linux 5.1+.

Throughput: 6M msg/s · Latency: ~8µs

DPDK COMING Q2 2025

--io-backend=dpdk

Full kernel bypass. Userspace networking. Dedicated NIC required.

Throughput: 20M+ msg/s · Latency: ~1µs

INSPIRED BY GREATNESS.

Standing on the shoulders of giants.


TigerBeetle

tigerbeetle.com

Popularized deterministic simulation testing. We adopted their VOPR approach because finding bugs before production is the only way.

VOPR — Deterministic simulation testing

Redpanda

redpanda.com

Proved that thread-per-core architecture delivers. No locks, no contention, just raw throughput.

thread-per-core — This is the way

ZML

zml.ai

Relentless pursuit of speed. Pushing boundaries, never settling. The mindset that faster is always possible.

performance — No ceiling, only horizons

Zig

ziglang.org

A language for systems programming that prioritizes performance and correctness. No hidden allocations, no hidden control flow.

comptime — Zero-cost abstractions done right

COMING SOON.