Design: Real-Time Event Bus and SSE Streaming
Context
Spotter runs several long-running asynchronous operations -- AI mixtape generation, playlist enhancement, similar artist discovery, and background history sync. These operations execute in background goroutines and need to communicate their progress and results to the browser in real time. Without a pub/sub mechanism, handlers would either need to block (unacceptable for operations taking seconds to minutes) or the UI would require polling.
This design provides an in-memory channel-based event bus that publishers (background goroutines) write to and the SSE handler reads from, streaming Templ-rendered HTML fragments directly into the DOM via HTMX's SSE extension -- no JavaScript state management, no JSON serialization, no external message broker.
Governing ADRs: [📝 ADR-0007](../../adrs/📝 ADR-0007-in-memory-event-bus), [📝 ADR-0001](../../adrs/📝 ADR-0001-htmx-templ-server-driven-ui).
Goals / Non-Goals
Goals
- Per-user in-memory pub/sub with fan-out to multiple browser tabs
- Strongly-typed Go event structs (14 event types) with convenience publish methods
- Non-blocking publish that drops events when a subscriber's buffer is full
- SSE endpoint that streams Templ-rendered HTML fragments as named SSE events
- HTMX `hx-ext="sse"` integration for zero-JavaScript DOM updates
- Clean subscriber lifecycle: subscribe on connection, cleanup on disconnect
Non-Goals
- Event persistence or replay (events are ephemeral; missed events show on next page load)
- Horizontal scaling across multiple app instances (single-instance by design, per 📝 ADR-0007)
- Client-to-server messaging over the SSE connection (SSE is unidirectional)
- Custom subscriber filtering (all events for a user are delivered to all subscribers)
Decisions
In-Memory Channels over Redis Pub/Sub
Choice: `Bus` struct with a `sync.RWMutex`-protected `map[int][]chan Event`.
Rationale: Spotter is a single-instance personal server. An external broker (Redis, NATS) would require operating another service for zero benefit. Go channels provide zero-serialization delivery of typed structs, natural cleanup semantics via `close()`, and integration with `select` for context cancellation.
Alternatives considered:
- Redis pub/sub: enables horizontal scaling but events still lack persistence; adds infrastructure complexity and JSON serialization overhead.
- Database polling: SSE handlers poll for new events on an interval. Adds 1-5s latency and unnecessary read load.
- WebSockets: bidirectional but HTMX's SSE extension is already integrated; WebSockets would require a different extension and more JavaScript.
Buffered Channels (Capacity 10) with Silent Drop
Choice: Each subscriber channel has a buffer of 10 events. If the buffer is full, `Publish()` uses a non-blocking `select` with a `default` branch to silently drop the event.
Rationale: A slow or disconnected browser must never block publisher goroutines. Buffer capacity 10 absorbs normal event bursts (e.g., multiple listens synced in rapid succession). If a subscriber truly cannot keep up, dropping is acceptable -- the UI shows the latest state when the user next loads the page.
Alternatives considered:
- Unbuffered channels: publisher blocks until subscriber reads -- unacceptable for background goroutines.
- Larger buffers (100+): wastes memory for the rare burst scenario.
- Oldest-event eviction: more complex than silent drop with marginal benefit.
SSE over WebSockets
Choice: Server-Sent Events via `GET /events` with HTMX `hx-ext="sse"`.
Rationale: SSE is simpler than WebSockets for server-to-client streaming. HTMX's SSE extension directly maps named SSE events to DOM swap targets using `sse-swap="{event-type}"`. No JavaScript event handlers, no state management, no reconnection logic (browsers auto-reconnect SSE).
Templ-Rendered HTML Fragments as SSE Payload
Choice: The SSE handler renders each event's Templ component to a buffer and sends the raw HTML as the SSE `data:` payload.
Rationale: This eliminates the JSON serialization layer entirely. The server renders the exact HTML fragment that HTMX swaps into the DOM. Components are the same Templ components used in page renders -- no duplication between SSE and page-load rendering.
Architecture
Component Topology
Event Lifecycle
Event Type Catalog
Key Implementation Details
- Bus struct: `internal/events/bus.go` -- `subscribers map[int][]chan Event` protected by `sync.RWMutex`. `NewBus()` creates the instance in `cmd/server/main.go` and injects it into handlers and all services.
- Subscribe: Creates a `make(chan Event, 10)` and appends it to the user's subscriber slice. Returns the channel (read-only) and a cleanup function that removes and closes the channel under the write lock.
- Publish: Acquires the read lock, iterates all channels for the user, and uses `select { case ch <- event: default: }` for a non-blocking send. Zero subscribers is a no-op.
- 14 typed event constants: `EventTypeRecentListen`, `EventTypeNotification`, 6 mixtape events, 3 playlist enhancement events, 3 similar artists events.
- Convenience methods: `PublishNotification()`, `PublishMixtapeGenerated()`, `PublishSimilarArtistsFound()`, etc. -- encapsulate payload struct construction so call sites never build `Event{}` structs directly.
- SSE handler: `internal/handlers/sse.go` -- `Events()` method on `Handler`. Authenticates via `RequireUser()`, sets SSE headers, asserts `http.Flusher`, and enters a `select` loop on `ctx.Done()` and `eventChan`. Each event is matched by type, rendered to a `bytes.Buffer` via the appropriate Templ component, and written as a named SSE frame with `event:` and multi-line `data:` fields.
- Toast component: `internal/views/components/toast.templ` -- renders `NotificationPayload` and most event types as toast notifications.
- HTMX integration: `internal/views/layouts/base.templ` loads the HTMX SSE extension from CDN. UI elements declare `hx-ext="sse"`, `sse-connect="/events"`, and `sse-swap="{event-type}"` to receive and swap fragments.
Risks / Trade-offs
- Events lost on process restart: In-flight AI operations completing after restart will not notify the browser. Acceptable because the final state is visible on the next page load, and restarts during AI generation are rare.
- Slow subscriber drops events silently: If a browser tab falls behind and the 10-event buffer fills, events are dropped. The UI shows stale state until the user refreshes. Mitigated by the buffer absorbing normal bursts.
- Single-instance constraint: Multiple Spotter instances cannot share the in-memory bus. Consistent with the project's single-instance deployment model (📝 ADR-0007).
- No delivery guarantee: The non-blocking `default` branch means events can be lost even during normal operation if a subscriber is momentarily slow. This is a deliberate trade-off -- publisher goroutines must never block.
- SSE reconnection creates a new subscription: When a browser auto-reconnects after a network hiccup, it gets a new channel with no backfill of missed events. The user sees the current state on the next page load.
Migration Plan
The event bus was implemented as part of the initial architecture:
- Created `internal/events/bus.go` with `Bus`, `Event`, typed constants, and payload structs
- Created `internal/handlers/sse.go` with the `/events` SSE endpoint
- Added `events.NewBus()` in `cmd/server/main.go`, injected into the handler and all service constructors
- Added the HTMX SSE extension to `internal/views/layouts/base.templ`
- Added the `Toast` component in `internal/views/components/toast.templ`
- Updated each service (MixtapeGenerator, PlaylistEnhancer, SimilarArtistsService, Syncer) to call the appropriate `Publish*` convenience method
- Added `sse-connect="/events"` and `sse-swap` attributes to the relevant UI components
No database migration required (the bus is purely in-memory).
Open Questions
- Should the buffer capacity (10) be configurable? Currently hardcoded. If event bursts from rapid sync operations cause drops, increasing the buffer is a simple change but adds a config knob for a value most users will never tune.
- Should the SSE handler send periodic keep-alive comments (`: keep-alive\n\n`) to prevent intermediate proxies from closing idle connections? Currently not implemented; browser auto-reconnect handles connection drops.
- Should the bus expose a `SubscriberCount(userID)` method for observability? It would allow logging when no subscribers are connected (i.e., when all published events are being dropped).
- Should the bus support event filtering per subscriber (e.g., a tab that only cares about mixtape events)? Currently all events are delivered to all subscribers, which is simple but sends unnecessary data to tabs that don't use it.