Dashboards that show stale data are not dashboards. They are historical records with a live URL. By the time you see yesterday's numbers, decisions have already been made. The order has shipped late. The alert has been missed. The customer has called to ask why nothing happened. Effective real-time systems complement thoughtful dashboard design with the infrastructure to deliver data the moment it changes.
Building real-time interfaces requires a fundamentally different architecture than traditional request-response applications. Instead of the client asking for updates, the server pushes data as events occur. This sounds simple. In practice, it introduces connection management, state synchronisation, scaling constraints, and failure modes that most tutorials ignore.
The Constraint: Why Polling Fails at Scale
The naive approach to live data is polling: set a timer, fetch the latest data every N seconds. It works in development. It works with ten users. It fails in production.
Consider a dashboard showing order counts for a warehouse operation. You poll every five seconds. With 100 concurrent users, that is 1,200 requests per minute hitting your database. Most of those requests return nothing new. The order count has not changed since the last poll. You are burning server resources to repeatedly confirm that nothing has happened.
The maths: 100 users polling every 5 seconds = 20 requests/second. Add a second dashboard widget, and it doubles. Add a third, and you are at 60 requests/second for zero new information. Your database connection pool exhausts before you have 500 users.
Polling also introduces latency. If you poll every five seconds, an event that occurs one second after your last poll takes four seconds to appear. Reduce the interval to one second, and your request volume increases fivefold. You are trading latency for load, and you lose on both counts.
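The trade-off can be captured in a back-of-envelope calculation. `pollingCost` is a hypothetical helper, and the average latency figure assumes events arrive uniformly within the poll interval:

```javascript
// Hypothetical back-of-envelope calculator for polling cost.
// requestsPerSecond: total fetches hitting the server.
// averageLatencySeconds: mean delay before an event appears, assuming
// events are uniformly distributed within the poll interval.
function pollingCost(users, widgets, intervalSeconds) {
    return {
        requestsPerSecond: (users * widgets) / intervalSeconds,
        averageLatencySeconds: intervalSeconds / 2,
    };
}
```

`pollingCost(100, 3, 5)` reproduces the figures above: 60 requests/second with an average added latency of 2.5 seconds. Halving the interval doubles the load while only halving the latency.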
The constraint is clear: for data that changes frequently and must appear immediately, polling does not scale. You need the server to push updates when they happen, not have clients ask repeatedly if anything has changed.
Transport Mechanisms: Choosing the Right Push Technology
Three mechanisms allow servers to push data to browsers: long polling, Server-Sent Events (SSE), and WebSockets. Each has trade-offs that matter in production.
Long Polling
Client makes a request. Server holds it open until data is available, then responds. Client immediately makes another request.
Pros: Works everywhere, no special infrastructure
Cons: Connection overhead on every update, higher latency than true push, still more load than WebSockets
Server-Sent Events (SSE)
Client opens a persistent HTTP connection. Server sends text events over that connection. One-way: server to client only.
Pros: Simple protocol, built-in browser reconnection, works through proxies, HTTP/2 multiplexing
Cons: One-way only, some browser connection limits, text-only (no binary)
WebSockets
Persistent bidirectional connection upgraded from HTTP. Full-duplex: both sides can send at any time.
Pros: Lowest latency, bidirectional, binary and text support, efficient for high-frequency updates
Cons: More infrastructure complexity, some proxies and firewalls interfere, requires connection management
Decision framework
SSE is the right choice for most dashboards. The server pushes; the client displays. You rarely need bidirectional communication for dashboard use cases. SSE is simpler to implement, easier to debug, and works through more network configurations. WebSockets make sense when you need bidirectional communication (collaborative editing, chat) or when you are sending high-frequency updates (gaming, trading). Long polling is a fallback for environments where neither SSE nor WebSockets work.
SSE in Practice
Server-Sent Events use a simple text protocol. The server sends lines prefixed with data:, separated by blank lines. The browser's EventSource API handles reconnection automatically.
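The frame format is simple enough to sketch as a helper (hypothetical `sseFrame`; the `event:` and `id:` fields are optional parts of the SSE specification — `id:` feeds the `Last-Event-ID` header the browser sends back on reconnect):

```javascript
// Hypothetical helper showing the SSE wire format: optional `event:` and
// `id:` fields, a `data:` line, and a blank line terminating the frame.
function sseFrame(data, { event, id } = {}) {
    let frame = '';
    if (event) frame += `event: ${event}\n`;
    if (id !== undefined) frame += `id: ${id}\n`;
    frame += `data: ${JSON.stringify(data)}\n\n`;
    return frame;
}
```

On the client, `new EventSource('/events/orders')` consumes the stream; frames without an `event:` field arrive via `onmessage`, named events via `addEventListener`.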
A minimal SSE endpoint in Laravel looks like this:
// routes/web.php
Route::get('/events/orders', function () {
    return response()->stream(function () {
        while (true) {
            // Stop the loop when the client disconnects,
            // or the process lingers after the tab closes
            if (connection_aborted()) {
                break;
            }
            $count = Order::whereDate('created_at', today())->count();
            echo "data: " . json_encode(['count' => $count]) . "\n\n";
            ob_flush();
            flush();
            sleep(1);
        }
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no',
    ]);
});
This is still polling, just moved to the server. The database query runs every second. Better than client polling (one connection per user instead of one request per second per user), but still inefficient. The robust pattern uses events.
The Robust Pattern: Event-Driven Updates
The architectural shift is straightforward: instead of querying for changes, listen for events. When an order is created, an event fires. Subscribed clients receive that event. The database is not queried repeatedly. The update appears instantly. This event-driven pattern builds on the same principles that power workflow engines, where state changes trigger downstream actions.
Event Publishing
Business logic publishes events when state changes:
- Order created: publish OrderCreated
- Status changed: publish OrderStatusUpdated
- Payment received: publish PaymentReceived
Events are published regardless of whether anyone is listening. This decouples the action from the notification.
Event Distribution
A message broker distributes events to subscribers:
- Redis Pub/Sub for single-server or small clusters
- Pusher, Ably, or Soketi for managed WebSocket infrastructure
- Centrifugo for self-hosted, high-performance broadcasting
- Laravel Reverb for native Laravel WebSocket serving
Channel Architecture
Events are published to channels. Clients subscribe to the channels relevant to them. This is the key to scaling: not every client receives every event.
| Channel Type | Example | Use Case |
|---|---|---|
| Public | orders | Global metrics visible to all authenticated users |
| Private | private-team.{teamId} | Team-specific updates, authorised by team membership |
| Presence | presence-dashboard.{id} | Know who else is viewing the same dashboard |
| User | private-user.{userId} | Personal notifications, assignment alerts |
Private channels require authorisation. When a client attempts to subscribe to private-team.42, the server verifies they are a member of team 42. This prevents data leakage. The channel name encodes the access control.
Implementation: Laravel Broadcasting
Laravel's broadcasting system provides the abstractions for event-driven real-time updates. Events implement the ShouldBroadcast interface. The broadcast driver (Redis, Pusher, Ably, Reverb) handles delivery. Laravel Echo on the client manages subscriptions.
Event Structure
// app/Events/OrderCreated.php
class OrderCreated implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public function __construct(
        public Order $order
    ) {}

    public function broadcastOn(): array
    {
        return [
            new PrivateChannel('team.' . $this->order->team_id),
        ];
    }

    public function broadcastWith(): array
    {
        // Send only the data the client needs
        return [
            'id' => $this->order->id,
            'total' => $this->order->total,
            'status' => $this->order->status,
            'created_at' => $this->order->created_at->toIso8601String(),
        ];
    }

    public function broadcastAs(): string
    {
        return 'order.created';
    }
}
The broadcastWith method controls the payload. Send IDs and summary data, not entire models with relationships. The client can fetch additional details if needed through a standard API endpoint.
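On the client, the minimal payload pairs naturally with lazy, memoised detail fetching — a sketch with a hypothetical `makeDetailLoader` factory and an injected fetch function:

```javascript
// Hypothetical lazy-detail loader: the broadcast payload carries only the id;
// full details come from a normal authorised endpoint, fetched once per id.
function makeDetailLoader(fetchJson) {
    const cache = new Map();
    return function load(orderId) {
        if (!cache.has(orderId)) {
            // Cache the promise itself so concurrent calls share one request
            cache.set(orderId, fetchJson(`/api/orders/${orderId}`));
        }
        return cache.get(orderId);
    };
}
```

Caching the promise rather than the resolved value means two events for the same order that arrive in quick succession trigger a single HTTP request.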
Channel Authorisation
Private channels require authorisation callbacks. These run when a client attempts to subscribe:
// routes/channels.php
Broadcast::channel('team.{teamId}', function (User $user, int $teamId) {
    // Qualify the column: the pivot join makes a bare 'id' ambiguous
    return $user->teams()->where('teams.id', $teamId)->exists();
});

Broadcast::channel('user.{userId}', function (User $user, int $userId) {
    return $user->id === $userId;
});

Broadcast::channel('order.{orderId}', function (User $user, int $orderId) {
    $order = Order::find($orderId);
    return $order && $user->can('view', $order);
});
Authorisation runs once when the subscription is established, not on every event. For long-lived connections, consider whether access can change during the session. If a user is removed from a team, they should not continue receiving team events.
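One pragmatic pattern is to broadcast membership changes and have the client drop the channel itself. This is a sketch — `shouldLeaveChannel` and the event shape are assumptions, and the server should still revoke the subscription authoritatively:

```javascript
// Hypothetical guard: after a membership-change event, decide whether this
// client must drop the team channel immediately.
function shouldLeaveChannel(event, currentUserId) {
    return event.type === 'team.member.removed'
        && event.userId === currentUserId;
}

// Usage sketch with Echo:
// Echo.private(`team.${teamId}`).listen('.team.member.removed', (e) => {
//     if (shouldLeaveChannel({ type: 'team.member.removed', ...e }, currentUserId)) {
//         Echo.leave(`team.${teamId}`);
//     }
// });
```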
Client Subscription
Laravel Echo provides a clean API for subscribing to channels and listening for events:
// resources/js/dashboard.js
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';

// Echo expects the Pusher client on the window
window.Pusher = Pusher;

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: import.meta.env.VITE_PUSHER_APP_KEY,
    cluster: import.meta.env.VITE_PUSHER_APP_CLUSTER,
    forceTLS: true
});

// Subscribe to team channel
Echo.private(`team.${teamId}`)
    .listen('.order.created', (event) => {
        orderCount.value++;
        recentOrders.unshift(event);
        recentOrders.splice(10); // Keep only latest 10
    })
    .listen('.order.status.updated', (event) => {
        updateOrderInList(event.id, event.status);
    });
The dot prefix in .order.created indicates a custom event name (from broadcastAs). Without the prefix, Echo looks for App\Events\OrderCreated.
Connection Management
Persistent connections introduce failure modes that request-response applications do not have. Networks drop. Browsers sleep. Laptops close. The dashboard tab sits in the background for hours. Connection management is not optional.
Reconnection Strategy
Both SSE and WebSocket implementations should handle disconnection gracefully. The EventSource API reconnects automatically. Pusher and Echo handle reconnection with exponential backoff. Custom WebSocket implementations need explicit reconnection logic:
class ReconnectingSocket {
    constructor(url, options = {}) {
        this.url = url;
        this.maxRetries = options.maxRetries || 10;
        this.baseDelay = options.baseDelay || 1000;
        this.maxDelay = options.maxDelay || 30000;
        this.retries = 0;
        this.connect();
    }

    connect() {
        this.socket = new WebSocket(this.url);

        this.socket.onopen = () => {
            this.retries = 0; // Reset on successful connection
            this.onconnect?.();
        };

        this.socket.onclose = (event) => {
            if (!event.wasClean && this.retries < this.maxRetries) {
                const delay = Math.min(
                    this.baseDelay * Math.pow(2, this.retries),
                    this.maxDelay
                );
                this.retries++;
                setTimeout(() => this.connect(), delay);
            }
        };

        this.socket.onmessage = (event) => {
            this.onmessage?.(JSON.parse(event.data));
        };
    }
}
Exponential backoff prevents thundering herd problems when the server restarts. If 10,000 clients all reconnect simultaneously, they will overwhelm the server. Staggered reconnection spreads the load.
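The deterministic backoff above still synchronises clients that disconnected at the same moment; adding jitter spreads them out. A sketch of "full jitter" — a random delay between zero and the capped exponential (hypothetical `backoffDelay`, not part of the class above):

```javascript
// Full-jitter backoff: pick a uniform random delay in [0, cappedExponential].
// After a server restart, 10,000 clients reconnect spread across the window
// instead of in one synchronised wave.
function backoffDelay(retries, baseDelay = 1000, maxDelay = 30000) {
    const capped = Math.min(baseDelay * Math.pow(2, retries), maxDelay);
    return Math.random() * capped;
}
```

Replacing the fixed `delay` computation in `ReconnectingSocket.connect` with this function is a one-line change.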
Missed Events and Catch-up
When a client reconnects, events may have been missed during the disconnection. Two patterns address this:
Snapshot on Reconnect
On reconnection, fetch the current state from a REST endpoint. This is simple and works for counters and aggregates where order does not matter.
socket.onopen = () => {
    fetch('/api/dashboard/current')
        .then(r => r.json())
        .then(syncState);
};
Event Replay
Track the last event ID received. On reconnection, request events since that ID. Essential for sequences where order matters (chat, activity feeds).
socket.send(JSON.stringify({
    type: 'replay',
    since: lastEventId
}));
For most dashboards, snapshot on reconnect is sufficient. The dashboard shows current state, not historical events. Fetch the current numbers, subscribe to updates, continue.
Tab Visibility
When a browser tab is in the background, updates are still received but rendering is pointless. Worse, accumulated updates can cause memory pressure and CPU spikes when the tab returns to focus. Pause updates when hidden:
document.addEventListener('visibilitychange', () => {
    if (document.hidden) {
        socket.pause?.();
        // Or: just stop processing messages
        isPaused = true;
    } else {
        isPaused = false;
        // Fetch current state to catch up
        refreshDashboardState();
    }
});
This also saves bandwidth for mobile users and extends battery life for laptop users.
Data Aggregation and Pre-computation
Real-time does not mean real-time computation. If every incoming event triggers a database aggregation query, you have moved the load problem from polling to event processing. The solution is pre-computation.
Aggregate Tables
Instead of computing COUNT(*) FROM orders WHERE DATE(created_at) = TODAY on every request, maintain an aggregate table updated on each order creation. This is a data modelling decision: trading storage for query performance:
// app/Listeners/UpdateOrderAggregates.php
class UpdateOrderAggregates
{
    public function handle(OrderCreated $event): void
    {
        OrderDailyAggregate::updateOrCreate(
            [
                'date' => $event->order->created_at->toDateString(),
                'team_id' => $event->order->team_id,
            ],
            []
        )->increment('count');
    }
}
Now the dashboard reads a single row instead of aggregating thousands. The write happens once at order creation. The read happens on every dashboard load and can be cached in Redis for sub-millisecond access.
Redis for Hot Data
For high-frequency metrics (active users, current queue depth, orders per minute), Redis provides atomic counters with minimal latency:
// On order creation
Redis::incr('orders:today:' . $teamId);
Redis::expire('orders:today:' . $teamId, 86400);
// On dashboard request
$count = Redis::get('orders:today:' . $teamId) ?? 0;
// Broadcast the increment
broadcast(new OrderCountUpdated($teamId, $count));
Redis operations are sub-millisecond. You can safely update counters on every event without bottlenecking event processing.
Time-Window Aggregations
For metrics like "orders in the last 5 minutes", maintain a sliding window using Redis sorted sets:
// On order creation
$now = now()->timestamp;
Redis::zadd('orders:recent:' . $teamId, $now, $event->order->id);
// Clean old entries
Redis::zremrangebyscore('orders:recent:' . $teamId, 0, $now - 300);
// Get count
$count = Redis::zcard('orders:recent:' . $teamId);
The sorted set stores order IDs with timestamps as scores. Removing entries older than 5 minutes keeps the set bounded. The count operation is O(1).
Frontend State Management
Incoming events must update the UI without triggering excessive re-renders. The browser is not infinitely fast. Poorly managed state updates will make your real-time dashboard feel sluggish.
Batch Updates
If events arrive rapidly (multiple per second), batch them before rendering:
class EventBatcher {
    constructor(callback, delay = 100) {
        this.callback = callback;
        this.delay = delay;
        this.queue = [];
        this.timeout = null;
    }

    add(event) {
        this.queue.push(event);
        if (!this.timeout) {
            this.timeout = setTimeout(() => {
                this.callback(this.queue);
                this.queue = [];
                this.timeout = null;
            }, this.delay);
        }
    }
}

const batcher = new EventBatcher((events) => {
    // Update state once with all events
    events.forEach(e => orderCount += e.delta);
    renderOrderCount();
}, 100); // Batch updates every 100ms
Users cannot perceive updates faster than about 100ms. Batching at this interval gives the perception of instant updates while reducing render cycles by 90% under high load.
Immutable State Updates
With Vue, React, or similar frameworks, ensure state updates are immutable to trigger reactivity correctly:
// Vue 3 Composition API
const recentOrders = ref([]);
function addOrder(order) {
    // Create new array reference
    recentOrders.value = [order, ...recentOrders.value.slice(0, 9)];
}

// React with useState
const [recentOrders, setRecentOrders] = useState([]);
function addOrder(order) {
    setRecentOrders(prev => [order, ...prev.slice(0, 9)]);
}
Mutating arrays in place (push, splice) often fails to trigger reactivity. Creating new references is explicit about what changed.
Virtualised Lists
If your dashboard shows a feed of recent activity, do not render thousands of DOM nodes. Use virtualised lists that only render visible items. Libraries like TanStack Virtual handle this efficiently:
// With @tanstack/react-virtual or vue-virtual-scroller
<VirtualList
:items="activityFeed"
:item-height="48"
:visible-items="20"
>
<template #item="{ item }">
<ActivityRow :activity="item" />
</template>
</VirtualList>
A 1,000-item feed renders 20 DOM nodes instead of 1,000. Scrolling swaps which items are rendered. Memory usage stays constant regardless of data volume.
Dashboard Components
Different data types require different update strategies. A counter increments. A list prepends. A chart appends. A map moves markers. Each has implementation considerations.
Counters and KPIs
Single numbers that update in real-time: orders today, revenue this hour, active users now.
Implementation: Listen for increment/decrement events. Animate the transition (count up/down over 300ms). Avoid flashing on rapid updates by using CSS transitions rather than JavaScript animation.
.counter {
    transition: transform 0.3s;
}
.counter.updated {
    transform: scale(1.1);
}
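The count-up itself reduces to interpolating the displayed value over time — a sketch with a hypothetical pure `countAt` helper, which a `requestAnimationFrame` loop would drive in the browser:

```javascript
// Hypothetical interpolation helper: the value to display `elapsedMs` into
// a count-up animation from `from` to `to` over `durationMs`.
function countAt(from, to, elapsedMs, durationMs = 300) {
    const t = Math.min(Math.max(elapsedMs / durationMs, 0), 1); // clamp to [0, 1]
    return Math.round(from + (to - from) * t);
}

// Browser usage sketch:
// const start = performance.now();
// (function frame(now) {
//     el.textContent = countAt(oldCount, newCount, now - start);
//     if (now - start < 300) requestAnimationFrame(frame);
// })(performance.now());
```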
Lists and Tables
Recent items that appear as they happen: latest orders, alerts, activity feeds.
Implementation: Prepend new items with entrance animation. Cap the list length (keep last 50, remove oldest). For sortable tables, insert at correct position rather than always prepending.
// Insert in sorted position (fall back to the end when no older item exists,
// since findIndex returns -1 and splice(-1) would insert in the wrong place)
const idx = orders.findIndex(
    o => o.created_at < newOrder.created_at
);
orders.splice(idx === -1 ? orders.length : idx, 0, newOrder);
Time Series Charts
Line or bar charts that add new data points as time progresses.
Implementation: Append new points, remove old ones to maintain window. Most chart libraries (Chart.js, ECharts, ApexCharts) support dynamic updates. Some handle animation better than others. Test with rapid updates.
chart.data.labels.push(newPoint.time);
chart.data.datasets[0].data.push(newPoint.value);
if (chart.data.labels.length > 60) {
    chart.data.labels.shift();
    chart.data.datasets[0].data.shift();
}
chart.update('none'); // Skip animation
Maps and Spatial Data
Markers that move, appear, or change state: vehicle tracking, delivery progress, incident locations.
Implementation: Update marker positions with smooth transitions. Cluster markers at low zoom levels. Remove stale markers (vehicle offline for 5 minutes). Consider update frequency carefully: GPS updates every second can overwhelm both server and client.
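Per-vehicle throttling keeps second-by-second GPS streams manageable — a sketch with a hypothetical `makeThrottle` factory that forwards at most one update per vehicle per interval:

```javascript
// Hypothetical per-key throttle: returns true at most once per `intervalMs`
// for each vehicleId; intermediate positions are dropped.
function makeThrottle(intervalMs) {
    const lastSent = new Map();
    return function shouldSend(vehicleId, now) {
        const prev = lastSent.get(vehicleId) ?? -Infinity;
        if (now - prev >= intervalMs) {
            lastSent.set(vehicleId, now);
            return true;
        }
        return false;
    };
}
```

Applied on the server before broadcasting, this caps fan-out regardless of how often the GPS units report.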
Graceful Degradation
Real-time connections fail. WebSocket servers restart. Corporate firewalls block non-HTTP traffic. The dashboard must remain functional when real-time updates are unavailable.
Fallback to Polling
When WebSocket connection fails after multiple retries, fall back to polling:
class DashboardConnection {
    constructor() {
        this.mode = 'realtime';
        this.retries = 0;
        this.connectWebSocket();
    }

    connectWebSocket() {
        this.socket = new WebSocket(wsUrl);

        this.socket.onopen = () => {
            this.retries = 0;
            this.mode = 'realtime';
            // Stop polling once the real-time connection is restored
            clearInterval(this.pollInterval);
            clearInterval(this.restoreInterval);
        };

        this.socket.onclose = () => {
            if (this.retries++ > 5) {
                this.mode = 'polling';
                this.startPolling();
                this.showDegradedModeNotice();
            } else {
                setTimeout(() => this.connectWebSocket(),
                    Math.min(1000 * Math.pow(2, this.retries), 30000));
            }
        };
    }

    startPolling() {
        this.pollInterval = setInterval(() => {
            fetch('/api/dashboard/state')
                .then(r => r.json())
                .then(state => this.updateState(state));
        }, 5000);

        // Periodically try to restore the WebSocket connection
        this.restoreInterval = setInterval(() => {
            this.retries = 0;
            this.connectWebSocket();
        }, 60000);
    }
}
Connection Status Indicator
Users should know when they are seeing potentially stale data:
<div class="connection-status" :class="connectionClass">
    <span v-if="mode === 'realtime'">Live</span>
    <span v-else-if="mode === 'polling'">Updates every 5s</span>
    <span v-else>Reconnecting...</span>
</div>
A small indicator in the corner. Green dot for live. Yellow for degraded. Red for disconnected. Users can make informed decisions about data freshness.
Stale Data Detection
If no events arrive for an extended period, the data may be stale even if the connection appears healthy:
let lastEventTime = Date.now();

socket.onmessage = (event) => {
    lastEventTime = Date.now();
    // ... handle event
};

setInterval(() => {
    const staleness = Date.now() - lastEventTime;
    if (staleness > 60000) {
        showStaleWarning('No updates in ' + Math.round(staleness / 1000) + 's');
        // Optionally fetch fresh state
        refreshDashboardState();
    }
}, 10000);
Infrastructure for Real-Time
Real-time systems have different infrastructure requirements than traditional web applications. Connection persistence, horizontal scaling, and message routing all require consideration.
WebSocket Server Scaling
A single WebSocket server can handle thousands of connections. But connections are stateful. You cannot load-balance with round-robin like HTTP requests. Two approaches:
Sticky Sessions
Route each client to the same server consistently (by IP hash, cookie, or connection ID). Events must be broadcast to all servers, which then deliver to their connected clients.
Works with: nginx ip_hash, AWS ALB sticky sessions
Shared State Backend
All WebSocket servers subscribe to a Redis Pub/Sub channel. Events are published to Redis. Every server receives them and delivers to their connected clients.
Works with: Redis adapter for Socket.io, Laravel Echo Server, Centrifugo
Load Balancer Configuration
WebSocket connections require specific load balancer settings: HTTP/1.1 with the Upgrade and Connection headers forwarded, and idle timeouts long enough that quiet connections are not dropped.
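As a concrete example, an nginx reverse-proxy block might look like this (a sketch; the upstream name and path are placeholders — the essentials are HTTP/1.1, the Upgrade/Connection headers, and read timeouts longer than nginx's 60-second default):

```nginx
location /ws/ {
    proxy_pass http://websocket_backend;   # placeholder upstream
    proxy_http_version 1.1;                # required for the Upgrade handshake
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;              # keep idle connections alive
    proxy_send_timeout 3600s;
}
```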
Managed Services vs Self-Hosted
| Aspect | Managed (Pusher, Ably) | Self-Hosted (Soketi, Centrifugo) |
|---|---|---|
| Setup | API key and done | Server provisioning, configuration, monitoring |
| Scaling | Automatic | Manual horizontal scaling required |
| Cost at scale | Can be expensive (per-message pricing) | Fixed server costs |
| Latency | Extra hop through third party | Direct to your infrastructure |
| Data sovereignty | Events transit third-party servers | Events stay in your infrastructure |
For most projects, start with a managed service. The operational overhead of self-hosting WebSocket infrastructure is significant. Move to self-hosted when message volumes make managed pricing prohibitive, or when data sovereignty requirements demand it.
Security Considerations
Real-time connections introduce security concerns beyond standard web application security. Persistent connections mean longer exposure. Broadcast events can leak data if authorisation is not enforced.
- Channel authorisation: every private channel subscription must be authorised. The channel name is not secret; verification happens server-side before the subscription is accepted.
- Minimal payloads: broadcast only IDs and minimal summary data. Let clients fetch details through authorised API endpoints. This prevents accidental exposure of sensitive fields.
- Token expiration: WebSocket connections can outlive token validity. Implement periodic re-authentication or token refresh over the WebSocket connection.
- Rate limiting: limit connection attempts per IP, subscriptions per connection, and messages per second for bidirectional protocols.
- Origin validation: verify the Origin header on WebSocket upgrade requests. Reject connections from unauthorised origins.
Data Consistency Patterns
Real-time systems are eventually consistent. Events may arrive out of order. Updates may be missed during reconnection. The dashboard must handle this gracefully.
Initial State Loading
Fetch current state, then subscribe — and be aware that each ordering has a race. Subscribe-then-fetch risks the snapshot arriving after newer events and overwriting them with stale state. Fetch-then-subscribe leaves a small gap between the snapshot and the subscription, which periodic reconciliation covers. For dashboards, the latter is usually the simpler trade-off.
// Risky: subscribe then fetch
subscribe();
fetchState(); // The stale snapshot can overwrite events received during the fetch

// Simpler: fetch then subscribe
fetchState().then(() => {
    subscribe(); // Events in the gap are picked up by reconciliation
});
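A race-free variant subscribes first, buffers incoming events, fetches the snapshot, then replays only buffered events newer than the snapshot — a sketch assuming events carry a monotonically increasing `sequence` and the snapshot reports the last sequence it includes (all names hypothetical):

```javascript
// Hypothetical loader: buffer events until the snapshot lands, then replay
// only those newer than the snapshot; later events apply directly.
function makeInitialLoader() {
    let buffer = [];
    let snapshotSeq = null;

    return {
        // Called for every incoming event; `apply` mutates dashboard state
        onEvent(event, apply) {
            if (snapshotSeq === null) {
                buffer.push(event);               // snapshot not loaded yet
            } else if (event.sequence > snapshotSeq) {
                apply(event);                     // newer than the snapshot
            }
        },
        // Called once when the snapshot fetch resolves
        onSnapshot(snapshot, applySnapshot, apply) {
            applySnapshot(snapshot);
            snapshotSeq = snapshot.sequence;
            buffer.filter(e => e.sequence > snapshotSeq).forEach(apply);
            buffer = [];
        },
    };
}
```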
Event Ordering
For counters and aggregates, order rarely matters. For sequences (messages, activities), include a sequence number or timestamp and sort client-side.
function insertOrdered(list, item) {
    const idx = list.findIndex(
        i => i.sequence > item.sequence
    );
    list.splice(idx >= 0 ? idx : list.length, 0, item);
}
Idempotent Updates
The same event may be delivered multiple times (network retries, reconnection replay). Updates must be idempotent.
function handleOrderCreated(event) {
    // Skip events already processed (delivered twice by retries or replay)
    if (orderIds.has(event.id)) return;
    orderIds.add(event.id);
    orders.push(event);
}
Periodic Reconciliation
Even with reliable delivery, drift can accumulate. Periodically fetch authoritative state and reconcile with client state.
setInterval(() => {
    fetchState().then(serverState => {
        if (orderCount !== serverState.count) {
            orderCount = serverState.count;
            logDrift();
        }
    });
}, 60000); // Every minute
Monitoring and Debugging
Real-time systems are harder to debug than request-response applications. Events are ephemeral. Connection state is distributed. Timing matters.
Key Metrics
Track at minimum: active connection count, connection churn (connects and disconnects per minute), message delivery latency from publish to client receipt, messages published per second per channel, and subscription authorisation failures. Rising churn or authorisation failures usually surface problems before users report them.
Client-Side Logging
Log connection state changes and event receipts to help debug issues:
Echo.connector.pusher.connection.bind('state_change', (states) => {
    console.log('Connection state:', states.previous, '->', states.current);
    analytics.track('websocket_state_change', states);
});

Echo.connector.pusher.connection.bind('error', (error) => {
    console.error('Connection error:', error);
    analytics.track('websocket_error', { error: error.message });
});
The Business Link
Real-time dashboards are more complex than polling. They require additional infrastructure (WebSocket servers, message brokers), more sophisticated client code (connection management, state reconciliation), and ongoing operational attention (monitoring, scaling). This complexity is justified when:
- Decisions depend on current state: warehouse operations, dispatch systems, and trading floors need data measured in seconds, not minutes. Stale data means wrong decisions.
- User experience requires immediacy: collaborative tools, chat systems, and multiplayer applications feel broken with visible update delays.
- Polling load is unsustainable: hundreds of concurrent users polling every few seconds will exhaust database connections and server resources. Push is more efficient.
- Alerts require immediacy: system monitoring, security alerts, and incident response dashboards need instant notification, not periodic discovery.
For dashboards refreshed once a day, polling is fine. For dashboards where a five-minute-old number causes operational errors, real-time is not a luxury. It is infrastructure.
Build Your Real-Time Dashboard
We build real-time dashboards that show data as it changes. WebSockets or SSE, event-driven architecture, pre-computed aggregates, efficient client updates: the infrastructure for live visibility. Numbers that update themselves. Lists that grow as events happen. Maps that track movement in real time.
Let's talk about real-time dashboards →