Dashboard Design

What Actually Belongs on a Dashboard

Most dashboards become wallpaper. Commissioned with enthusiasm, spec'd in detail, built on time, and quietly ignored within a fortnight. The data is not wrong. The technology works fine. The problem is that nobody applied enough editorial discipline to what earned a place on the screen.

Dashboard design is the discipline of deciding what belongs in front of a specific person based on what decisions they need to make. It is closer to editing a newspaper front page than to wiring up a database. The hard question is never "can we show this data?" (you almost always can). The hard question is "does this metric connect to something someone will actually do today?" When that filter is missing, dashboards fill up with charts that exist because the data was available, not because anyone uses them. That pattern of dashboard abandonment is so common it barely registers as a failure any more. It is just what dashboards do.

They do not have to. The dashboards that earn daily attention share a common trait: every element on screen passed a decision test before it was included. Decision-driven metric selection, applied ruthlessly, is the single biggest factor separating a dashboard someone relies on from one they tolerate. Pair that with glanceability (the ability to assess status in seconds, not minutes) and you have a tool that respects the user's time instead of wasting it.

We have been designing and building custom dashboards since 2005, across more than 50 applications. The pattern holds. This page covers how to go from a business question to a dashboard that people check every day: the principles that matter, the process that works, and the mistakes that cause most dashboards to fail. Sub-pages go deeper on specific dashboard types, layout patterns, and UX detail.


The Decision Test

The single most common reason dashboards fail is not bad design or wrong technology. It is that metrics were selected based on what data was available rather than what decisions needed support. A database full of customer records makes it easy to show total customers, new sign-ups this month, customers by region, average order value, churn rate, and a dozen more. So all of them go on the dashboard. The result is a screen full of numbers that are individually accurate and collectively useless, because nobody arrives at that screen needing to know all of those things at once. This is how vanity metrics colonise dashboards: they look important, they are easy to pull, and nobody applies a filter strict enough to keep them out.

The filter that works is decision-driven metric selection. Before any metric earns a place on a dashboard, it must pass a single test.

The Decision Test: If this metric changed significantly overnight, would someone take a specific action before lunchtime? If yes, it belongs on the dashboard. If no, it belongs in a report, a drill-down, or nowhere at all.

That filter sounds blunt, and it is meant to be. Stephen Few, whose work on information dashboard design remains the canonical reference in this field, makes the same argument in more measured terms: a dashboard should present the most important information needed to achieve one or more objectives, consolidated on a single screen so it can be monitored at a glance. The key phrase is "needed to achieve". Not "nice to see". Not "interesting to know". Needed, in the sense that its absence would leave someone unable to act.
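
For custom builds, the test can be enforced mechanically rather than left to good intentions: make the decision and its owner required fields in the metric definition itself. A minimal TypeScript sketch; the shape and field names are illustrative assumptions, not a prescribed schema.

```typescript
// A metric definition that cannot be created without naming the decision
// it supports and the person who makes it. Illustrative shape only.
interface DashboardMetric {
  name: string;                      // e.g. "Weighted pipeline value vs. target"
  decision: string;                  // the action taken if this moves sharply
  owner: string;                     // who takes that action
  fetchValue: () => Promise<number>; // how the value is retrieved
}

// The Decision Test applied at build time: a metric with no decision or
// no owner is rejected before it ever reaches the screen.
function addToDashboard(
  dashboard: DashboardMetric[],
  candidate: DashboardMetric,
): DashboardMetric[] {
  if (!candidate.decision.trim() || !candidate.owner.trim()) {
    throw new Error(`"${candidate.name}" fails the Decision Test`);
  }
  return [...dashboard, candidate];
}
```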

A worked example: quarterly pipeline health

Suppose the business question is: "Is the sales pipeline healthy enough to hit this quarter's revenue target?" That question supports a specific decision. If the pipeline is thin, sales leadership needs to either accelerate deals in later stages or increase activity at the top of the funnel. If the pipeline is healthy, they hold course and focus on conversion. The dashboard exists to make that call quickly, ideally within seconds of opening it.

Work backwards from that decision, and only a handful of metrics actually matter. Here are the ones that pass the Decision Test, and the ones that do not.

Weighted pipeline value vs. target: The single number that answers the question. If weighted value is below 2.5x target, action is needed now.
Deals by stage (count and value): Shows where pipeline mass is concentrated. A funnel heavy at the top but empty in later stages means revenue will not arrive this quarter.
Average deal age in current stage: Flags stalled opportunities. If deals have been sitting in "proposal sent" for three weeks, something is wrong.
New opportunities created this period: Leading indicator. If top-of-funnel activity has dried up, next quarter is the problem even if this quarter looks fine.

Four KPIs. Each one would prompt a specific conversation or action if it moved sharply in the wrong direction. Now compare those with the metrics that are genuinely interesting but fail the test.

Win rate (all time): Useful for quarterly reviews. Changes too slowly to drive a daily or weekly action.
Average deal size: Interesting context, but knowing it moved from £12k to £11.5k does not change what anyone does this week.
Deals by lead source: Valuable for marketing attribution. Not relevant to the question of whether the pipeline will hit target.

None of those failing metrics are bad. They are simply answers to different questions. They belong in drill-down views, weekly reports, or entirely separate dashboards built around different decisions. Putting them on this dashboard would not add insight; it would add cognitive load, forcing the viewer to scan past information they do not need in order to find the three numbers they do.
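
To ground the headline KPI, here is roughly how "weighted pipeline value vs. target" is calculated. A short TypeScript sketch; the deal shape and stage weights are illustrative assumptions, not recommended values.

```typescript
// Weighted pipeline coverage: sum of (deal value x stage win probability),
// divided by the quarterly target. Stage weights below are illustrative.
interface Deal {
  value: number;
  stage: string;
}

const STAGE_WEIGHT: Record<string, number> = {
  qualified: 0.2,
  demo: 0.4,
  proposal_sent: 0.6,
  negotiation: 0.8,
};

function pipelineCoverage(deals: Deal[], quarterlyTarget: number): number {
  const weighted = deals.reduce(
    (sum, d) => sum + d.value * (STAGE_WEIGHT[d.stage] ?? 0),
    0,
  );
  return weighted / quarterlyTarget;
}

// Applying the 2.5x rule of thumb from the list above: below 2.5, act now.
const deals: Deal[] = [
  { value: 80_000, stage: "proposal_sent" },
  { value: 150_000, stage: "qualified" },
];
const needsAction = pipelineCoverage(deals, 500_000) < 2.5;
```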

How many is too many?

Four KPIs. That number is not accidental. George Miller's research on working memory suggests people hold roughly seven items in mind at once, plus or minus two. For dashboards, stay at the low end of that range: five to nine KPIs per view is the outer limit, and most good dashboards land closer to five. Beyond that, each new metric competes for the same limited attention. Fifteen KPIs on one screen is not a dashboard. It is a report pretending to be one.

The discipline, then, is not finding enough metrics to fill the screen. It is having the editorial nerve to keep metrics off the screen when they do not pass the test. Every metric you exclude makes the ones that remain more powerful.


Dashboard Types and Audience Mapping

The type of dashboard dictates nearly every design decision: layout density, refresh cadence, interaction model, and how much the user is expected to explore versus glance. Getting the type wrong is the most expensive mistake in dashboard design because it means building the right thing for the wrong audience. Each type below serves a distinct role; sub-pages cover the full design treatment.

Executive Dashboard

Audience: founders, directors, senior leadership. Answers one question: are we on track? Five to seven KPIs, refreshed daily or hourly, designed for a 30-second glance between meetings. Sparklines, traffic-light indicators, and single numbers with trend context. No tables, no filters, no exploration.

Full guide to executive dashboard design →

Operational Dashboard

Audience: team leads, operations staff, on-call engineers. Answers: is anything broken right now? Data refreshes in seconds or minutes. The layout is exception-driven: everything green means normal, amber and red demand attention. Large status indicators, minimal text, detail on click.

Full guide to operational dashboard design →

Analytical Dashboard

Audience: analysts, marketing managers, anyone investigating a question rather than monitoring a status. Supports filtering, drill-down, and segment comparison. Refreshes daily or on demand. The layout accommodates more charts and controls because the user is spending minutes, not seconds. The danger is scope creep: an analytical dashboard that tries to answer every question answers none well.

Sales Dashboard

Audience: sales managers and commercial leadership. Answers: will we hit target this quarter, and where are deals stuck? Tracks pipeline health, conversion rates by stage, deal velocity, and individual performance. The most useful sales dashboards highlight exceptions (stalled deals, declining conversion rates) rather than simply totalling pipeline value.

Full guide to sales dashboard design →

A fifth category sits apart from these: the client-facing dashboard. When the dashboard is part of your product (a client portal, a SaaS analytics view, a reporting tool your customers log into) the design constraints change fundamentally. You are now designing for multiple audiences simultaneously, which introduces multi-tenancy and role-based views so that each customer sees only their own data in a layout appropriate to their permissions. White-labelling adds further complexity: the dashboard must carry your client's brand, not yours. Refresh cadence, data volumes, and user sophistication vary wildly across tenants. Client-facing dashboards are not a harder version of internal dashboards. They are a different design problem entirely, closer to product design than to reporting.


Designing for Attention

A well-designed dashboard is mostly boring. That sounds counterintuitive, but it is the single most important principle in attention design. When everything is working, the screen should communicate "all clear" in a fraction of a second. Green means ignore. Grey means informational. The dashboard earns your focused attention only when something changes, and only when that change demands a decision.

This is exception-based attention design. Normal states fade to grey. Problems push forward in colour and size. The alternative (treating every metric identically) forces users to scan the entire screen on every visit, mentally sorting what matters from what does not. That scanning is work, and it is the reason people stop checking dashboards after the first fortnight.

Dimension | Everything-equal display | Exception-based design
Visual weight | Every metric uses the same size, colour, and prominence regardless of status | Normal metrics fade to neutral tones; only exceptions carry strong colour and size
User effort | User must scan all items and mentally decide what matters | Dashboard does the filtering; user's eye is drawn to what needs action
Time to insight | Scales linearly with the number of metrics: more items, more scanning time | Constant: user spots exceptions in seconds regardless of total metric count
Behaviour over time | Users disengage because checking the dashboard feels like work | Users trust the dashboard to surface problems, so they keep checking it

Exception-based design depends on thresholds, and thresholds are where most implementations fall apart. A threshold indicator only works when it is tied to a genuine decision trigger. "Revenue is below target" is not specific enough. The threshold needs to reflect the point at which someone should actually do something: call a meeting, reallocate budget, escalate to leadership. Set it too loosely and everything is always green (the dashboard tells you nothing). Set it too tightly and everything is always amber or red, which leads to a far more dangerous problem.
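
In code, a decision-tied threshold is nothing more than a pair of bands, each mapped to a known trigger. A minimal sketch; the band values and comments are assumptions, not recommendations.

```typescript
type Status = "green" | "amber" | "red";

// A threshold only earns its place if each band maps to a real trigger.
interface Thresholds {
  amberBelow: number; // e.g. coverage < 2.5x target: monitor closely
  redBelow: number;   // e.g. coverage < 2.0x target: call the pipeline meeting
}

function statusOf(value: number, t: Thresholds): Status {
  if (value < t.redBelow) return "red";
  if (value < t.amberBelow) return "amber";
  return "green";
}

// statusOf(2.3, { amberBelow: 2.5, redBelow: 2.0 }) -> "amber"
```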

Alert Fatigue: When Everything Is Urgent, Nothing Is

Alert fatigue occurs when a dashboard presents so many warnings that users learn to ignore all of them. It is the same phenomenon that plagues hospital monitoring systems and IT operations centres: when every other indicator is amber or red, the brain stops treating colour as a signal. The green/amber/red status hierarchy only works when green is the dominant state. If your dashboard routinely shows more than 10 to 15 percent of its indicators in a warning or critical state, your thresholds are miscalibrated. Either the targets are unrealistic, the tolerances are too narrow, or the dashboard is tracking things that fluctuate normally and should not be flagged at all.

The fix is not to suppress alerts. It is to audit each threshold against a specific question: when this turns amber, who does what? If nobody can answer that, the threshold has no business being on the dashboard. Every status indicator should map to a named action and, ideally, a named person. This discipline keeps the alert count low and the signal-to-noise ratio high.
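
That audit question can be built into the configuration itself, so a threshold cannot exist without its answer. A sketch, with illustrative field names and example values.

```typescript
// A status indicator that cannot ship without a named action and a named
// owner for each warning band. If nobody can fill these fields in, the
// indicator has no business being on the dashboard.
interface StatusIndicator {
  metric: string;
  amberAction: { who: string; what: string };
  redAction: { who: string; what: string };
}

const pipelineIndicator: StatusIndicator = {
  metric: "Weighted pipeline coverage",
  amberAction: {
    who: "Head of Sales",
    what: "Review stalled deals in the Monday standup",
  },
  redAction: {
    who: "Head of Sales",
    what: "Call a pipeline meeting; reallocate top-of-funnel activity",
  },
};
```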

Making Exceptions Visible

Colour works for status indicators because the brain processes it preattentively, before conscious thought kicks in. You do not decide to notice a red square among grey squares; it registers automatically. Size, position, and motion work the same way. A number that briefly highlights when it crosses a threshold, or a card that shifts colour from grey to amber, captures attention without the user needing to scan. This is why dashboards that rely solely on numbers (without colour or size changes) fail to surface problems quickly. For a deeper treatment of these interaction patterns, see dashboard UX patterns.

One subtle trap: change blindness. If a value updates silently from 340 to 280 while a user is looking at the screen, they may not notice. Effective dashboards pair data changes with brief visual cues (a flash, a colour shift) so the screen communicates "something just changed" rather than relying on the user to spot it.
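
A common mitigation is a short-lived highlight whenever a value changes. A browser-side sketch, assuming a hypothetical `.just-changed` CSS class that animates a background colour and fades out.

```typescript
// Flash a KPI card briefly when its value changes, so the update registers
// even if the user is mid-glance.
function updateKpiValue(card: HTMLElement, newValue: string): void {
  const valueEl = card.querySelector(".kpi-value");
  if (!valueEl || valueEl.textContent === newValue) return;

  valueEl.textContent = newValue;
  card.classList.add("just-changed");
  // Remove the class after the animation so the next change re-triggers it.
  setTimeout(() => card.classList.remove("just-changed"), 1500);
}
```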


Visual Hierarchy, Colour, and Context

Exception-based design pulls the user's eye to the right problem. Visual hierarchy determines where on the screen that problem lives. The practical rules: put the most critical metrics top-left (where the eye lands first), make them visually heavier than their neighbours, and group related information so it reads as a single unit. Edward Tufte's data-ink ratio applies here: maximise the share of screen devoted to actual data, minimise everything decorative.

Information density sits at the heart of this. Too sparse, and the user scrolls or clicks to find what should be immediate. Too dense, and nothing stands out. The right density depends on your audience. An executive dashboard should feel open, with generous whitespace around five or six metrics. An analytical dashboard for a data team can pack in more, because the users are comfortable reading complex layouts. Dashboard layout patterns cover the structural side of this balance in detail. Here, the focus is on the visual grammar that makes density readable.

Colour as a Semantic System

Colour on a dashboard is not decoration. It is a vocabulary: a consistent language that users learn once and read fluently across every chart, card, and indicator.

Green: On Target
Healthy, within threshold, positive trend. No action needed.
Amber: Watch
Approaching a threshold or deviating from plan. Monitor closely.
Red: Act
Below target, breached threshold, requires immediate attention.
Grey: Neutral
Informational only. Context or reference data, no status implied.

The rule is simple: if green means "on target" in the revenue widget, it cannot mean "marketing department" in the channel breakdown. Colour as semantic vocabulary only works when it is consistent. Every deviation teaches the user to distrust the system.

Beyond consistency, accessibility demands attention. Roughly 8% of men and 0.5% of women have some form of colour blindness, most commonly red-green deficiency. WCAG 2.1 requires that colour is never the sole means of conveying information. Practically, this means pairing every colour signal with a secondary cue: a label ("On Target"), an icon (tick or cross), a shape difference, or a pattern. Dashboards designed this way are better for everyone, not just colour-blind users, because they remain legible on low-contrast screens, in bright sunlight, and in greyscale printouts.
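
In a custom build, the simplest way to keep the vocabulary consistent, and the secondary cues mandatory, is a single shared status definition that every widget imports. A sketch, with illustrative colour tokens.

```typescript
// One shared definition of what each status means visually. Every chart,
// card, and indicator reads from this map; no widget defines its own
// greens. Colour is always paired with a label and an icon so it is never
// the sole signal (WCAG 2.1, Use of Color).
const STATUS_STYLE = {
  green: { colour: "#1a7f37", label: "On Target", icon: "✓" },
  amber: { colour: "#b58105", label: "Watch",     icon: "!" },
  red:   { colour: "#c62828", label: "Act",       icon: "✕" },
  grey:  { colour: "#6b7280", label: "Info",      icon: "–" },
} as const;

type StatusKey = keyof typeof STATUS_STYLE;
```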

Context: The Memory the Dashboard Carries

A number by itself is noise. Revenue: £420,000. Pipeline: 37 deals. Server uptime: 99.2%. None of these mean anything until they sit next to something the user can compare them against. The dashboard carries the memory of what normal looks like, so the user does not have to.

Context comes in several practical forms. A delta indicator shows change from a previous period ("+12% vs last month" in green, or "-8% vs target" in red). A sparkline, the small inline chart Tufte championed, shows the trend over time without taking up card space. A target line on a bar chart shows where performance should be. A percentile band shows where a value falls relative to historical norms. Each of these transforms a raw number into a judgement the user can act on. The goal is that no metric on the dashboard ever prompts the question "compared to what?"
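
The delta indicator is the simplest of these to implement. A small sketch of the calculation, assuming "higher is better" for the metric in question.

```typescript
// Turn a raw number into a judgement by pairing it with its comparator.
// Returns the formatted delta and a direction, which the UI maps to
// colour plus an arrow icon.
function deltaVs(
  current: number,
  previous: number,
): { text: string; direction: "up" | "down" | "flat" } {
  if (previous === 0) return { text: "n/a", direction: "flat" };
  const pct = ((current - previous) / Math.abs(previous)) * 100;
  const direction = pct > 0 ? "up" : pct < 0 ? "down" : "flat";
  const sign = pct > 0 ? "+" : "";
  return { text: `${sign}${pct.toFixed(0)}% vs last month`, direction };
}

// deltaVs(470_000, 420_000) -> { text: "+12% vs last month", direction: "up" }
```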

Progressive Disclosure and Chart Selection

The interaction model follows a simple pattern: overview first, then drill down on demand. The top level shows status. Clicking a metric reveals the breakdown. Clicking a segment within that breakdown opens the underlying records. Each layer adds detail without cluttering the layer above it. A drill-down path from a red KPI card, through a filtered chart, to the specific orders causing the problem is far more useful than showing all that detail on the surface.

Chart selection follows the same discipline. Match the chart to the question: bar charts for comparisons, line charts for trends, single numbers with context for most KPI cards. Reserve full charts for the drill-down layer. For deeper guidance on matching data visualisation techniques to specific dashboard scenarios, that sub-page covers chart types, encoding principles, and common mistakes.


The Dashboard Design Process

Knowing what good looks like is only half the job. You also need a repeatable process for getting there. Ours follows five stages, refined across 50+ dashboard projects. It is not complicated, but it is disciplined about three things most processes skip: producing a decision map instead of a requirements document, prototyping with real data instead of placeholders, and testing against a crisis scenario before go-live.

1. Discovery: Users and Decisions

We conduct user interviews with the people who will actually use the dashboard, not the person who requested it. We ask what decisions they make regularly, what information they currently hunt for in spreadsheets or email, and what "something is wrong" looks like in their role. The output is a decision map: a list of decisions paired with the metrics that inform them. This is the document the rest of the process builds from.

2. Design: Layout and Real-Data Prototyping

With the decision map in hand, we design the layout and visual hierarchy. Every prototype uses real data from the start. A dashboard wireframe with dummy data hides the problems that matter most: scales that collapse when six months of flat data meet a sudden spike, labels that truncate at real string lengths, and distributions that make a chosen chart type unreadable. Dummy data looks tidy. Real data exposes design flaws early, when they are cheap to fix.

3. Build: Data Connections and Performance

Development connects the interface to live data sources through APIs, database queries, or a data pipeline matched to the required refresh cadence. A slow dashboard is an unused dashboard, so we optimise queries, cache where appropriate, and load the highest-priority metrics first.
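
One way to load the highest-priority metrics first is to make priority an explicit property of each data source. A sketch; the fetcher and render functions are hypothetical stand-ins for the real data layer.

```typescript
// Load KPIs in priority order so the numbers that drive decisions render
// first, while lower-priority charts fill in behind them.
interface KpiSource {
  id: string;
  priority: number; // 1 = render first
  fetch: () => Promise<number>;
}

async function loadDashboard(
  sources: KpiSource[],
  render: (id: string, value: number) => void,
): Promise<void> {
  const ordered = [...sources].sort((a, b) => a.priority - b.priority);
  for (const source of ordered) {
    // Await sequentially so priority 1 paints before priority 2 starts;
    // swap to Promise.allSettled if the backend handles parallel load well.
    render(source.id, await source.fetch());
  }
}
```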

4. Test: The Crisis Scenario

Before deployment, we run crisis-scenario testing. We ask: if something went seriously wrong right now, would this dashboard tell you? Would it tell you quickly enough? This catches missing alerts, poorly calibrated thresholds, and metrics that look fine under normal conditions but fail to surface the one thing that actually matters when it breaks.

5. Refine: Usage-Driven Improvement

After launch, we monitor which parts of the dashboard get used and which get ignored. Ignored charts are candidates for removal. Frequently filtered dimensions suggest a need for a dedicated view. The best dashboards improve over time because they are maintained, not just delivered.


Mobile Dashboards

Business owners check dashboards from phones. Between meetings, walking to the car, waiting for a client call to start. The phone is often the first screen of the day and the last check before switching off. Designing a mobile dashboard as an afterthought, or worse, relying on responsive design to reflow a desktop layout into a 375px viewport, produces something technically functional and practically useless.

A mobile dashboard is not a shrunk desktop. It is a different interface with a different job. The desktop view supports investigation: filtering, comparing segments, drilling into anomalies. The mobile view answers a single question: do I need to worry about anything right now?

Design the mobile view first. The metrics that survive the 375px constraint are the ones that truly matter. If a KPI does not earn its place on the smallest screen, it probably does not deserve prime position on the largest one either.

Three to five KPIs is the practical ceiling for a mobile dashboard. Each one needs a clear status indicator (on track, needs attention, off track) and a single-tap drill-down to context: the trend line, the comparison to target, the underlying numbers. Touch targets must be large enough for a thumb in motion, which means generous tap areas and no tiny filter dropdowns borrowed from the desktop version. Progressive disclosure does the heavy lifting here. The top level shows status at a glance. One tap reveals the detail. Two taps reaches the data. Nothing more than that.
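
That three-level structure is simple enough to capture as a tiny state machine. A sketch, with illustrative level names; the collapse-on-third-tap behaviour is an assumption.

```typescript
// Three levels of disclosure, one tap apart: status -> detail -> data.
// A simple state machine keeps the mobile card from ever showing more
// than one level at a time.
type DisclosureLevel = "status" | "detail" | "data";

const NEXT: Record<DisclosureLevel, DisclosureLevel> = {
  status: "detail", // tap 1: trend line, comparison to target
  detail: "data",   // tap 2: the underlying numbers
  data: "status",   // tap 3: collapse back to the glance view
};

function onCardTap(current: DisclosureLevel): DisclosureLevel {
  return NEXT[current];
}
```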

The metrics that survive the cut are the ones tied most directly to action. Revenue against target, pipeline coverage ratio, system uptime, support queue depth. The specifics depend on the audience, but the principle holds: if the mobile view surfaces a problem, the user can decide whether to act now or wait until they are at a desk. That handoff between mobile awareness and desktop investigation is the design relationship to get right.


When to Build a Custom Dashboard

Off-the-shelf business intelligence tools handle common reporting well. Power BI, Tableau, Looker Studio, Metabase, and Qlik Sense all produce functional dashboards from standard data sources. Geckoboard covers simpler KPI displays. If your requirements fit their templates and configuration options, use them. They are faster and cheaper than custom development, and for many organisations they are the right permanent choice.

Custom dashboard development makes sense when the gap between what a configured tool offers and what your business actually needs becomes too wide to bridge with plugins, workarounds, or compromise. Four signals indicate you have reached that point.

The dashboard is the product. Customer-facing analytics, client portals, embedded reporting within a SaaS platform. Off-the-shelf BI tools cannot be white-labelled into your product with your brand, your interaction patterns, and your data isolation model.
Business processes are too specific for templates. Your workflows, metric definitions, and drill-down paths do not map to any standard dashboard template. You have spent more time fighting the tool's assumptions than building views.
Performance requirements exceed BI tool capabilities. Sub-second response times on large datasets, real-time streaming updates, or offline access. BI tools optimise for flexibility, not raw performance at the edges.
The dashboard needs to trigger actions, not just display data. Clicking a flagged metric should open a ticket, send a notification, reassign a task, or initiate a workflow. That requires application-level integration that sits outside what reporting tools provide.

The tipping point, in practice, is when the dashboard needs to do something, not just show something. Most organisations arrive here gradually: they start with Looker or Tableau, customise heavily, and eventually reach a point where the BI tool is the constraint rather than the accelerator. Our broader guide to build vs buy decisions covers the framework for evaluating that transition across all business software, not just dashboards.


Dashboard Maintenance and Governance

Dashboards decay. The pattern is predictable: a dashboard launches to genuine enthusiasm, people use it daily for a month or two, and then someone asks to add "just one more metric." Six months later, the screen is cluttered with charts nobody remembers requesting, data cruft that no longer reflects how the business operates, and metrics that made sense when the company had twelve employees but mean nothing at sixty.

The root cause is that dashboards are treated as one-off projects. They get a design phase, a build phase, a launch, and then silence. Nobody owns the ongoing editorial decision of what stays and what goes. Metrics accumulate because adding is easy and removing feels risky. Dashboard abandonment follows: when everything is on screen, nothing stands out, and users drift back to spreadsheets.

Preventing this requires dashboard governance, which sounds heavy but breaks down into three practical habits.

The most revealing habit is usage auditing. Most dashboard tools (and custom builds) can track which charts get viewed, filtered, or clicked. Any metric that has not been interacted with in 90 days is a candidate for removal. If nobody looks at it for an entire quarter, it is not informing decisions.
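
A usage audit can be a single pass over an interaction log. A sketch, assuming a log with one event per view, filter, or click; the log shape is an assumption, not a standard schema.

```typescript
// Flag any chart with no interaction in the last 90 days as a removal
// candidate.
interface InteractionEvent {
  chartId: string;
  at: Date;
}

function removalCandidates(
  allChartIds: string[],
  log: InteractionEvent[],
  now: Date = new Date(),
): string[] {
  const cutoff = new Date(now.getTime() - 90 * 24 * 60 * 60 * 1000);
  const recentlyUsed = new Set(
    log.filter((e) => e.at >= cutoff).map((e) => e.chartId),
  );
  return allChartIds.filter((id) => !recentlyUsed.has(id));
}
```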

Once you have spotted the dead weight, act on it. Remove unused metrics. Not hide them. Remove them. A hidden metric still costs maintenance effort in the data pipeline and adds cognitive weight to the codebase. Pruning is the ongoing application of the Decision Test: if the metric no longer connects to an active decision, it does not earn screen space.

The harder question is whether the dashboard itself is still asking the right things. Business questions change. The KPIs that mattered during a growth phase may be irrelevant during a consolidation. Every quarter, check whether the questions the dashboard was built to answer are still the questions that matter.

The quarterly review principle: Every 90 days, audit usage, prune unused metrics, and check whether the business questions have changed. A dashboard that is not actively maintained will be passively abandoned.

While you are reviewing content, review the data refresh strategy too. A metric polling every five minutes that nobody checks more than once a day is wasting server resources and, in some architectures, slowing down the metrics people do care about. Match refresh cadence to actual usage patterns, not to what felt right at launch.
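
Matching cadence to usage can be as simple as deriving the polling interval from observed view counts. A sketch; the bands are illustrative and should be tuned to your own architecture.

```typescript
// Derive a polling interval from how often a metric is actually viewed,
// rather than from what felt right at launch.
function refreshIntervalMs(viewsPerDay: number): number {
  if (viewsPerDay > 100) return 60_000;      // checked constantly: every minute
  if (viewsPerDay > 10) return 15 * 60_000;  // checked a few times a day: 15 min
  return 24 * 60 * 60_000;                   // checked rarely: daily batch
}
```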


Getting Started

Dashboard design is an editorial discipline. Every metric must pass the Decision Test: what decision does it support, and who makes that decision? Different audiences need different views, from executive dashboards to operational and sales dashboards. Exception-based design is what makes a dashboard earn daily attention rather than weekly guilt. Governance (quarterly audits, metric pruning, decision reviews) keeps it useful after launch. And mobile is a primary viewing context, not an afterthought.

The end state is calm, informed operations. A screen you glance at and know, within seconds, whether things are on track or need your attention. We have been designing and building dashboards since 2005. If yours are not driving decisions, the problem is almost certainly design, not data.



Build Dashboards People Actually Use

No pitch deck. We will look at your current dashboards, discuss what is working and what is not, and outline what a better version looks like. If you are not sure whether you need a custom dashboard or a configured BI tool, a consulting session will clarify the options.

Book a discovery call →