Dashboard Design

What Actually Belongs on a Dashboard


Dashboards become wallpaper. Requested enthusiastically, ignored after launch. Metrics displayed because they exist, not because anyone uses them. You've seen it happen. Whether it's an executive KPI dashboard, a performance dashboard, or an operations dashboard, the question at the heart of good dashboard design is straightforward: what does a dashboard that people actually look at every day look like?


The Problem with Most Dashboards

The typical dashboard project starts with enthusiasm. Someone requests "a dashboard to see everything." Dashboard ideas get gathered. Every department adds their metrics. The result is a screen crammed with numbers, charts, and widgets. It launches to fanfare, gets looked at for a week, then becomes expensive wallpaper.

This pattern repeats because dashboards are built backwards. Teams start with "what data do we have?" instead of "what decisions do we need to make?" They measure what's easy to measure, not what matters. They build for "the business" instead of specific people with specific questions. The same thinking that leads to effective data visualisation applies here: start with what you need to understand, not what data you have available.

The core problem: Most dashboards are data displays, not decision tools. They show metrics without context, numbers without meaning, information without insight. The dashboard becomes a reporting obligation, not a working tool.

The failure modes are predictable. Too many metrics competing for attention. Charts that require study to interpret. Numbers without comparison or context. Metrics chosen because they were available, not because they drive action. Even a well-chosen KPI dashboard example can fall flat when loaded with data nobody acts on. The dashboard tries to be everything to everyone and ends up useful to no one.


How We Approach Dashboard Design

We start every dashboard project with one question: What decision does this help someone make?

If we can't name the decision, the metric doesn't belong on the dashboard. This simple filter eliminates most of what typically clutters business dashboards. The best dashboard design connects every metric to action. Everything else is noise.

One audience, one purpose

We don't build dashboards for "the company." We build them for specific people with specific questions. An operations manager checking morning status needs different information than a CEO reviewing monthly performance. Trying to serve both audiences on one screen serves neither well.

Exceptions over averages

People shouldn't study the dashboard to find problems. Problems should announce themselves. We design for exception-based attention: green means ignore, amber means watch, red means act. The dashboard earns attention only when attention is needed.

Glanceability

A dashboard should communicate its message in seconds. If users need to study it carefully to understand what's happening, it's not working. We design for the glance: the five-second check that tells you whether things are fine or need investigation.


The Building Blocks of Dashboard Design

Good dashboard design gets dozens of small decisions right: which charts to use, where to place critical numbers, how to handle drill-down, when to alert. We've written detailed guides on each of these building blocks.

Data Visualisation

Choosing the right chart type for the question your data answers. When sparklines beat full charts, why 3D effects distort, and when a plain number with context is the best visualisation of all.

Layout & Visual Hierarchy

Where things go and why it matters. Z-pattern scanning, the inverted pyramid, grouping related metrics, and the density trade-off between showing enough and showing too much.

UX & Interaction Design

Why dashboards get abandoned and how to prevent it. Progressive disclosure, cognitive load, drill-down patterns, performance budgets, and accessible design that works for everyone.


Dashboard Patterns by Function

Every role has different questions. An executive needs a KPI dashboard example focused on five key numbers. An ops manager needs a real-time operations dashboard showing what's stuck. These patterns draw from the same data sources, but the presentation must match how each person works and what they need to know. We've written dedicated guides for the most common dashboard types.

Executive Dashboards

Five numbers and the truth. Revenue vs target, pipeline value, projects at risk, escalations, cash position. Each number shows comparison and colour-coded status. The executive checks it in 30 seconds during their morning coffee. If nothing's red, nothing needs attention.

High-level health check. Few metrics, strong summarisation, exception-focused. Answers "do I need to worry about anything?" in seconds.

Operational Dashboards

Real-time status for people managing daily work. Orders in each stage with stuck items highlighted, today's deliveries, team capacity, issues opened and resolved. Updates continuously on a second monitor or wall screen.

Answers "what needs attention right now?" without requiring investigation. The ops manager glances at it every 15 minutes and knows if intervention is needed.

Sales Dashboards

Pipeline health, target tracking, and what to chase next. This month's target and current position, deals by stage, stale deals that haven't been touched, follow-ups due today. Serves both the sales manager reviewing the team and the rep planning their day.

Answers "are we on track?" and "what should I be asking about in the team meeting?" Starts the standup and guides the one-to-ones.

Finance & Client Dashboards

Finance dashboards track cash position, aged invoices, bills due, and revenue recognised vs target. Weekly review focus, highlighting cash flow risks before they become crises. Client dashboards provide external-facing project status: progress, deliverables, and upcoming milestones with appropriate transparency.

Each serves a distinct audience. Finance answers "do we need to chase anything?" Client views answer "where are we on the project?" without exposing internal detail.


What We Put on Dashboards

Not everything measurable belongs on a dashboard. We're selective about what earns a place, and everything that appears has a specific job to do.

Key Metrics with Context

Numbers need comparison to be meaningful. Revenue: £47,000. Good or bad? The number alone tells you nothing. We always show context: vs target, vs last period, vs average. A number in isolation is data. A number with context is information.

Context takes different forms depending on the metric:

  • Progress metrics: Show current value against target. Revenue £47,000 of £60,000 target. The gap is visible.
  • Trend metrics: Show current value against previous period. Revenue £47,000 (up 12% vs last month). Direction is clear.
  • Health metrics: Show current value against threshold. Response time 2.3 seconds (target: under 3 seconds). Status is obvious.

Trends That Show Direction

Sparklines and small charts showing where things are heading. Is it improving or declining? How fast? Trends often matter more than point-in-time values. A revenue figure of £47,000 means something different if you're trending up from £35,000 than if you're trending down from £60,000.

We use sparklines for compact trend display: a small chart showing the last 12 data points, just enough to see direction and volatility. No axis labels, no legends. The shape tells the story. For deeper guidance on choosing the right visualisation for your data, see our guide to dashboard data visualisation.
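As an illustration of how little a sparkline needs, the shape can even be rendered in text with Unicode block characters. This is a common trick for logs and terminals, not our production renderer:

```python
BARS = "▁▂▃▄▅▆▇█"

def sparkline(points: list[float]) -> str:
    """Map each value to one of eight block heights: shape over precision."""
    lo, hi = min(points), max(points)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat series
    return "".join(BARS[round((p - lo) / span * (len(BARS) - 1))] for p in points)

# Last 12 data points: enough to see direction and volatility.
print(sparkline([35, 36, 38, 37, 40, 41, 43, 44, 43, 45, 46, 47]))
```

No axis, no legend, eight levels of resolution, and the trend is still unmistakable. That is the entire argument for sparklines in one line of output.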

Status Indicators

Traffic light colours, status icons, progress bars. Binary or categorical states communicated faster than numbers. The brain processes "green" faster than "94%."

We use a simple vocabulary: green means on track (no attention needed), amber means watch (potential issue developing), red means act (intervention required now). This vocabulary stays consistent across every dashboard we build. Users learn it once and apply it everywhere. Colour never carries meaning alone: we pair it with icons and labels so the dashboard remains accessible to all users.
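A minimal sketch of that vocabulary as code, with hypothetical threshold values. Note that the rendered status always carries colour, icon, and label together, never colour alone:

```python
from enum import Enum

class Status(Enum):
    """One vocabulary across every dashboard: learn it once, apply it everywhere."""
    GREEN = ("green", "✓", "On track")   # no attention needed
    AMBER = ("amber", "!", "Watch")      # potential issue developing
    RED   = ("red",   "✗", "Act")        # intervention required now

    def render(self) -> str:
        colour, icon, label = self.value
        # Colour never carries meaning alone: always pair icon and label.
        return f"[{colour}] {icon} {label}"

def status_for(value: float, amber_at: float, red_at: float) -> Status:
    """Thresholds for a lower-is-better metric, e.g. count of overdue invoices."""
    if value >= red_at:
        return Status.RED
    if value >= amber_at:
        return Status.AMBER
    return Status.GREEN

print(status_for(5, amber_at=3, red_at=8).render())  # [amber] ! Watch
```

Keeping the mapping in one place is what makes the vocabulary consistent: every widget calls the same function rather than inventing its own thresholds.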

Exceptions and Alerts

Items that need attention right now. Late deliveries, overdue invoices, stuck orders. The dashboard surfaces problems rather than hiding them in averages.

Exception lists are specific. Not "3 issues" but the three actual issues with enough context to act: "Invoice #1234 (Smith Ltd) overdue 14 days, £2,400." The user can decide whether to act without clicking through to another screen.
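A sketch of how an exception list like that might be assembled, using invented types and field names for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:
    number: str
    client: str
    due: date
    amount: float

def overdue_exceptions(invoices: list[Invoice], today: date) -> list[str]:
    """Surface the actual items, not a count, with enough context to act."""
    late = [i for i in invoices if i.due < today]
    late.sort(key=lambda i: i.due)  # oldest debt first
    return [
        f"Invoice #{i.number} ({i.client}) overdue {(today - i.due).days} days, £{i.amount:,.0f}"
        for i in late
    ]

today = date(2025, 1, 15)
print(overdue_exceptions([Invoice("1234", "Smith Ltd", date(2025, 1, 1), 2400)], today))
# ['Invoice #1234 (Smith Ltd) overdue 14 days, £2,400']
```

Each line carries the who, the how-late, and the how-much: everything the user needs to decide whether to pick up the phone.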


What We Keep Off Dashboards

Knowing what to exclude is as important as knowing what to include. Every metric competes for attention. We're ruthless about what earns a place.

Everything measurable

Just because data exists doesn't mean it belongs on a dashboard. The question is always: what decision does this help someone make? If the answer is "none," the metric stays in reports and ad-hoc queries, not on the dashboard.

Detailed tables

If users need to scroll through rows, it's not a dashboard. Tables are for drill-down and reports, not dashboard-level viewing. A dashboard might show "5 invoices overdue" with the top 3 listed. The full table is one click away for those who need it.

Complex charts

Multi-series line charts with legends. Stacked area charts requiring careful study. Anything that needs analysis belongs in analysis tools, not on dashboards. If someone has to ask "what am I looking at?", the visualisation choice has failed.

Vanity metrics

Numbers that look good but don't drive decisions. Total website visitors, social media followers, email list size. If no one would change their behaviour based on the metric, it doesn't belong. Dashboards are for steering, not celebrating.


Alerts and Notifications

Dashboards show current state. Alerts notify when state changes. The two work together: the dashboard for active monitoring, alerts for passive notification. Every alert that fires without requiring action trains users to ignore alerts, so we design them with strict criteria: actionable, specific, timely, and configurable.

We match the delivery channel to the urgency:

  • Critical: SMS + dashboard (payment processing down, major system failure)
  • High: push notification + email (deal at risk, SLA breach imminent)
  • Medium: email + dashboard badge (invoice overdue 7 days, task approaching deadline)
  • Low: dashboard only (weekly report ready, routine status update)

For dashboards with significant alert volume, we build a notification centre: a dedicated area showing recent alerts with status, acknowledgement, and resolution. Historical alerts create an audit trail and help identify patterns. The design of alert thresholds and escalation paths is covered in more detail in our guide to operational dashboards.


Personalisation and Customisation

No two users have identical needs. Even within the same role, individuals prioritise different metrics based on their current projects, responsibilities, or working style. We build dashboards that adapt to individual preferences without becoming chaotic.

What we allow users to customise
  • Widget arrangement: Reorder components to match personal scanning patterns.
  • Date ranges: Adjust defaults (last 7 days vs last 30 days vs current month).
  • Alert thresholds: Individual tolerance for when metrics turn amber or red.
  • Saved views: A "Pipeline Review" view for Mondays, a "Daily Activity" view for morning checks.

What we don't allow
  • Adding arbitrary metrics: The core metric set is designed deliberately. Random additions mislead and distract.
  • Changing colour meanings: Green always means healthy. Consistency enables team communication.
  • Hiding mandatory metrics: Some metrics must remain visible for governance or operational reasons.

The balance is giving users control over presentation while maintaining the integrity of the information architecture. Good dashboard design offers flexibility within guardrails, applying the same user experience principles that govern software design.


Performance and Real-Time Data

A slow dashboard is a useless dashboard. If users have to wait for data, they'll stop checking. A performance dashboard that takes ten seconds to load defeats its own purpose. We target under 2 seconds for initial render, under 3 seconds for primary metrics, and under 1 second for refresh interactions. When data takes time to load, skeleton screens maintain the layout while indicating progress.

Not all dashboards need real-time data. The appropriate update frequency depends on the use case:

Real-time (websocket updates)

For operations dashboards where things change minute-by-minute: order status, system health, live activity. Data pushes to the browser as it changes.

Frequent refresh (1-5 minutes)

For activity monitoring that doesn't need instant updates: sales activity, support tickets, task progress. Auto-refresh or pull-to-refresh.

Periodic refresh (hourly or daily)

For executive dashboards and trend views: revenue, pipeline, forecasts. Data changes slowly; frequent refresh adds load without value.

Stale data destroys trust. We always show "last updated" timestamps prominently, and handle error states gracefully ("Revenue data unavailable. Retry.") rather than leaving users guessing. For more on performance budgets and loading patterns, see our guide to dashboard UX.


How We Build Dashboards

Dashboard design is iterative. We don't design in isolation then reveal the finished product. We build understanding through conversation, prototype quickly, test with real users, and refine based on observation.

1. Interview the users

We talk to the people who'll use the dashboard. What questions do they ask regularly? What decisions do they make? What information do they currently dig for? The dashboard answers real questions, not hypothetical ones. We often shadow users for a day to see what they actually check and when.

2. Map decisions to metrics

For each question users ask, we identify the metric that answers it. For each decision they make, we identify the information that informs it. This mapping determines what belongs on the dashboard. Metrics without connected decisions don't make the cut.

3. Design for the worst day

Dashboards get tested when things go wrong. Late shipments, dropped revenue, client escalations. We design for these moments: does the dashboard help you understand and respond to problems? We prototype crisis scenarios alongside happy-path views.

4. Build rough, test early

We prototype quickly and put dashboards in front of real users with real data. Watch how they use them. What do they look at? What do they ignore? What's missing? Refinement comes from observation, not assumption. First versions are always wrong in interesting ways.

5. Prune continuously

Dashboards accumulate cruft. Metrics added "just in case" that no one uses. Views created for a project that ended. We build in review cycles: if something isn't earning attention, it gets removed. A quarterly audit asks: "What haven't we looked at in three months?"


Common Dashboard Mistakes

We've seen enough dashboard projects to recognise the patterns that lead to wallpaper. Avoiding these mistakes is as important as following dashboard design best practices.

  • Too many metrics. Symptom: users don't know where to look first. Fix: ruthlessly cut to 5-8 key metrics.
  • No context for numbers. Symptom: "is 47 good or bad?" Fix: always show comparison (vs target, vs last period).
  • Complex charts. Symptom: users study charts to understand them. Fix: simplify or move to analysis tools.
  • Stale data. Symptom: users don't trust the numbers. Fix: show last-updated time and ensure freshness.
  • No clear hierarchy. Symptom: all metrics look equally important. Fix: use size, position, and colour to guide attention.
  • Dead metrics. Symptom: metrics no one acts on. Fix: audit regularly and remove what's not used.

What You Get

Dashboards designed with these principles achieve something rare: they get used. Users check them because checking them is useful. The dashboard becomes part of the workflow, not an obligation.

  • Get looked at Because they answer questions people actually have
  • Save time Glanceable status replaces investigation and asking around
  • Surface problems early Exceptions visible before they become crises
  • Drive decisions Every metric connects to something someone can act on
  • Stay relevant Built for evolution as your questions change
  • Work everywhere Mobile, tablet, desktop: the right view for each context

The dashboard becomes a tool people use, not wallpaper they ignore. Whether it's a KPI dashboard on the boardroom screen or an operations dashboard on the warehouse wall, it earns its place in the morning routine and the weekly review. It answers questions before anyone has to ask them.


Go Deeper

This guide covers how we think about dashboards. These supporting guides cover the detail of how we build them:

Data Visualisation →

Chart selection, sparklines, the data ink ratio, and when the best visualisation is no chart at all.

Layout & Hierarchy →

Z-pattern scanning, the inverted pyramid, grouping, density trade-offs, and responsive breakpoints.

UX & Interaction →

Progressive disclosure, cognitive load, drill-down, performance, and accessible design.

Executive Dashboards →

KPI selection for leadership, the exception-based pattern, and designing for the 30-second check.

Operational Dashboards →

Real-time monitoring, wallboard design, alert thresholds, shift handover, and continuous status.

Sales Dashboards →

Pipeline visualisation, target tracking, stale deals, forecast accuracy, and CRM integration.


Build Dashboards People Use

We design dashboards that answer your actual questions. Your metrics, your decisions, your users. From executive views to real-time performance dashboards, integrated with your data, updated on appropriate schedules, accessible where your team works. Not generic BI tool configuration. Custom dashboards built for how your business actually operates.

Let's talk about your dashboard needs →