Sequence Diagrams for Kafka, RabbitMQ, and Event-Driven Architectures
Table of Contents
- Kafka producer-consumer flow with offset commits
- Fan-out pattern: one event, multiple consumers
- Saga pattern with compensating transactions
- Retry and dead-letter queue handling
- Event sourcing with snapshots
Event-driven systems with Kafka, RabbitMQ, or similar message brokers are notoriously hard to document in prose. Who publishes what? Who consumes it? What happens on failure? A sequence diagram captures the flow in a form that both developers and architects can read at a glance. Here are five common patterns with copy-paste Mermaid code you can use as templates.
Every example renders in our free sequence diagram tool. Paste the code, see the diagram, export as PNG or SVG for your architecture documentation.
Pattern 1: Basic Kafka Producer and Consumer
sequenceDiagram
    participant Producer
    participant Broker as Kafka Broker
    participant Consumer
    participant DB as Consumer DB
    Producer->>Broker: Publish event to topic
    Broker->>Broker: Write to partition log
    Broker-->>Producer: Ack
    Note over Broker: Event sits in log
    Consumer->>Broker: Poll for new events
    Broker-->>Consumer: Event batch
    loop For each event
        Consumer->>Consumer: Process event
        Consumer->>DB: Persist result
        DB-->>Consumer: Stored
    end
    Consumer->>Broker: Commit offset
    Broker-->>Consumer: Offset committed
This is the fundamental Kafka pattern. The producer publishes, the broker stores, the consumer polls. The critical detail is the offset commit at the end. If the consumer processes events but does not commit the offset, it will re-process them after a restart. If it commits the offset before processing (or crashes between commit and processing), events are lost. The diagram makes this ordering explicit.
The Note "Event sits in log" represents time passing between publish and consume. This could be milliseconds or days, depending on your retention policy and how quickly consumers poll.
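If you want to document the failure mode explicitly, a small variant of the diagram (a sketch, not one of the five patterns) makes the at-most-once hazard visible by putting the commit before processing:

```mermaid
sequenceDiagram
    participant Broker as Kafka Broker
    participant Consumer
    Consumer->>Broker: Poll for new events
    Broker-->>Consumer: Event batch
    Consumer->>Broker: Commit offset
    Note over Consumer: Crash before processing
    Note over Broker: Offset already advanced, events never processed
```

Showing the bad ordering next to the good one is often the fastest way to explain offset semantics in a design review.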
Pattern 2: Fan-Out (One Event, Multiple Consumers)
sequenceDiagram
    participant Source as Order Service
    participant Broker as Kafka
    participant Inv as Inventory Service
    participant Email as Email Service
    participant Analytics
    Source->>Broker: Publish OrderPlaced event
    Broker-->>Source: Ack
    par Multiple consumers process independently
        Inv->>Broker: Poll OrderPlaced
        Broker-->>Inv: Event
        Inv->>Inv: Reserve inventory
        Inv->>Broker: Commit offset
    and
        Email->>Broker: Poll OrderPlaced
        Broker-->>Email: Event
        Email->>Email: Send confirmation
        Email->>Broker: Commit offset
    and
        Analytics->>Broker: Poll OrderPlaced
        Broker-->>Analytics: Event
        Analytics->>Analytics: Update dashboards
        Analytics->>Broker: Commit offset
    end
Fan-out is the killer feature of event-driven architecture. The order service publishes one event. Three independent services consume it for different purposes. The order service does not need to know about inventory, email, or analytics. It just publishes and continues.
The "par" block is essential here. It shows that all three consumers process the event in parallel, independently. If the email service is slow, it does not block inventory reservation. If analytics crashes, it does not prevent emails from sending. This decoupling is what makes event-driven systems resilient.
Each consumer is in its own consumer group in Kafka terms, which is why they all see the same event. If they were in the same consumer group, Kafka would distribute events among them instead of delivering to each one.
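For contrast, here is a sketch of the same-group case, where Kafka splits partitions across instances instead of fanning out (the consumer and partition counts are illustrative):

```mermaid
sequenceDiagram
    participant Broker as Kafka
    participant C1 as Consumer 1 of group A
    participant C2 as Consumer 2 of group A
    Note over Broker: Topic with 2 partitions
    par Partition assignment splits the work
        C1->>Broker: Poll partition 0
        Broker-->>C1: Events from partition 0
        C1->>Broker: Commit offset for partition 0
    and
        C2->>Broker: Poll partition 1
        Broker-->>C2: Events from partition 1
        C2->>Broker: Commit offset for partition 1
    end
```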
Pattern 3: Saga (Distributed Transaction with Compensation)
sequenceDiagram
    participant Gateway as API Gateway
    participant Orders
    participant Broker as Kafka
    participant Payment
    participant Inventory
    participant Shipping
    Gateway->>Orders: Create order
    Orders->>Broker: OrderCreated
    Broker-->>Orders: Ack
    Orders-->>Gateway: Order received
    Payment->>Broker: Consume OrderCreated
    Payment->>Payment: Charge card
    alt Payment success
        Payment->>Broker: PaymentCompleted
        Inventory->>Broker: Consume PaymentCompleted
        Inventory->>Inventory: Reserve items
        alt Inventory available
            Inventory->>Broker: InventoryReserved
            Shipping->>Broker: Consume InventoryReserved
            Shipping->>Shipping: Schedule delivery
            Shipping->>Broker: OrderFulfilled
        else Inventory unavailable
            Inventory->>Broker: InventoryUnavailable
            Note over Payment: Compensation starts
            Payment->>Broker: Consume InventoryUnavailable
            Payment->>Payment: Refund card
            Payment->>Broker: PaymentRefunded
        end
    else Payment failed
        Payment->>Broker: PaymentFailed
        Orders->>Broker: Consume PaymentFailed
        Orders->>Orders: Mark order failed
    end
The saga pattern handles distributed transactions without distributed locks. Each step publishes an event, and the next step consumes it. When a step fails, a compensating event is published and earlier steps undo their work.
The key insight: there is no global transaction. If payment succeeds but inventory fails, the payment service consumes the InventoryUnavailable event and issues a refund. This is eventual consistency, and it is how real microservice architectures handle failures across services.
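If a design review only needs the failure path, the compensation leg of the saga above can stand alone as a minimal template:

```mermaid
sequenceDiagram
    participant Broker as Kafka
    participant Payment
    Payment->>Broker: Consume InventoryUnavailable
    Note over Payment: Compensation starts
    Payment->>Payment: Refund card
    Payment->>Broker: PaymentRefunded
```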
For simpler API flows without the event-driven complexity, see our REST API sequence diagrams.
Pattern 4: Retry and Dead-Letter Queue
sequenceDiagram
    participant Producer
    participant MainTopic as Main Topic
    participant Consumer
    participant RetryTopic as Retry Topic
    participant DLQ as Dead Letter Queue
    Producer->>MainTopic: Publish event
    Consumer->>MainTopic: Consume event
    Consumer->>Consumer: Try to process
    alt Processing success
        Consumer->>MainTopic: Commit offset
    else Transient failure
        Consumer->>RetryTopic: Republish with attempt=1
        Consumer->>MainTopic: Commit offset
        Note over RetryTopic: Delay 30 seconds
        Consumer->>RetryTopic: Consume
        Consumer->>Consumer: Retry processing
        alt Success on retry
            Consumer->>RetryTopic: Commit offset
        else Still failing after N retries
            Consumer->>DLQ: Send to DLQ
            Consumer->>RetryTopic: Commit offset
            Note over DLQ: Manual inspection required
        end
    end
Production Kafka systems need a retry strategy. Not every failure is permanent. Network blips, temporary database unavailability, or rate-limit responses from external APIs often succeed on retry. But you cannot retry forever; eventually, persistently failing events need human attention.
The pattern: retry a small number of times with increasing delays, then send to a dead-letter queue (DLQ). Operations teams monitor the DLQ for events that need manual intervention. This approach handles transient failures automatically while surfacing persistent failures for human review.
The Note "Delay 30 seconds" represents a delayed retry. This is typically implemented with a separate topic and a consumer that sleeps before processing. The delay prevents hammering a failing downstream system.
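A common production variant (a sketch, not the only way to do it) uses one retry topic per delay tier instead of a single retry topic, so each delay is encoded in the topic itself. The topic names and delays here are illustrative:

```mermaid
sequenceDiagram
    participant Consumer
    participant R1 as retry-30s Topic
    participant R2 as retry-5m Topic
    participant DLQ as Dead Letter Queue
    Consumer->>R1: Republish with attempt=1
    Note over R1: Delay 30 seconds
    Consumer->>R1: Consume and retry
    Consumer->>R2: Still failing, republish with attempt=2
    Note over R2: Delay 5 minutes
    Consumer->>R2: Consume and retry
    Consumer->>DLQ: Max attempts exceeded, send to DLQ
```

Each tier gets its own consumer with a fixed delay, which avoids one retry consumer juggling variable sleeps.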
Pattern 5: Event Sourcing with Snapshots
sequenceDiagram
    participant Client
    participant API
    participant EventStore as Event Store
    participant ReadModel as Read Model
    participant Snapshot as Snapshot Store
    Client->>API: Command (e.g., UpdateOrder)
    API->>EventStore: Append event
    EventStore-->>API: Event stored
    API-->>Client: Command accepted
    EventStore->>ReadModel: Publish event
    ReadModel->>ReadModel: Update projection
    Note over Client: Later, client reads data
    Client->>API: Query order state
    API->>Snapshot: Get latest snapshot
    alt Recent snapshot exists
        Snapshot-->>API: Snapshot (as of event N)
        API->>EventStore: Get events after N
        EventStore-->>API: Events
        API->>API: Apply events to snapshot
    else No recent snapshot
        API->>EventStore: Get all events
        EventStore-->>API: Full event history
        API->>API: Replay all events
    end
    API-->>Client: Current state
Event sourcing stores state as a series of events rather than mutable records. To get the current state, you replay all events. For performance, you periodically save snapshots and only replay events newer than the snapshot.
The diagram makes the read/write asymmetry clear. Writes are fast (append an event). Reads can be slow (replay events), which is why snapshots exist. The read model is a projection that denormalizes the events for fast queries.
Event sourcing is complex, but the diagram form makes the data flow much clearer than prose descriptions. This pattern is common in financial systems, auditing-heavy applications, and systems where you need to query historical state.
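The read path above assumes snapshots already exist. If you also want to document how they are produced, a background snapshot worker (a hypothetical component, not part of the pattern above) can be sketched like this:

```mermaid
sequenceDiagram
    participant Worker as Snapshot Worker
    participant EventStore as Event Store
    participant Snapshot as Snapshot Store
    loop Every N events or on a schedule
        Worker->>EventStore: Get events since last snapshot
        EventStore-->>Worker: Events
        Worker->>Worker: Fold events into current state
        Worker->>Snapshot: Save snapshot as of event M
    end
```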
Document Your Event-Driven System
Copy any pattern above, adapt to your services, export PNG or SVG for your architecture wiki.
Open Free Sequence Diagram Maker
Frequently Asked Questions
How do I draw a sequence diagram for Kafka?
Include the producer, Kafka broker, and consumer as participants. Show the publish-to-topic message from producer to broker, the broker ack, the consumer poll, and the critical offset commit after processing. Use Notes to indicate time gaps between publish and consume.
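A minimal template covering those elements (a stripped-down version of Pattern 1):

```mermaid
sequenceDiagram
    participant Producer
    participant Broker as Kafka Broker
    participant Consumer
    Producer->>Broker: Publish to topic
    Broker-->>Producer: Ack
    Note over Broker: Time passes
    Consumer->>Broker: Poll
    Broker-->>Consumer: Event batch
    Consumer->>Broker: Commit offset
```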
What is the sequence diagram for a Kafka consumer group?
Show multiple consumer instances polling from the same topic, with the broker distributing events across them based on partition assignment. Each consumer processes independently and commits its own offset. Use the par block to show parallel consumption across consumers.
How do I document the saga pattern with a sequence diagram?
Show each service consuming events from a message broker and publishing follow-up events. Use alt blocks for failure paths where compensating actions are triggered. Make the event names explicit (OrderCreated, PaymentCompleted, InventoryUnavailable) so the flow is clear.
Can sequence diagrams show async messaging?
Yes. Use open arrowheads (instead of filled) to indicate async messages. For Kafka specifically, treat the broker as a participant and show publish and consume as separate interactions separated by Note annotations indicating time may pass between them.
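In Mermaid specifically, the async open arrowhead is written with "-)" for a solid line and "--)" for a dotted line. A minimal sketch:

```mermaid
sequenceDiagram
    participant Producer
    participant Broker as Kafka
    Producer-)Broker: Publish event, fire and forget
    Note over Broker: Producer does not wait
    Broker--)Producer: Async delivery report
```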

