Dashboards become wallpaper. Requested enthusiastically, ignored after launch. Metrics displayed because they exist, not because anyone uses them. You've seen it happen. The question is: what makes a dashboard that people actually check every day?
The Problem with Most Dashboards
The typical dashboard project starts with enthusiasm. Someone requests "a dashboard to see everything." Requirements get gathered. Every department adds their metrics. The result is a screen crammed with numbers, charts, and widgets. It launches to fanfare, gets looked at for a week, then becomes expensive wallpaper.
This pattern repeats because dashboards are built backwards. Teams start with "what data do we have?" instead of "what decisions do we need to make?" They measure what's easy to measure, not what matters. They build for "the business" instead of specific people with specific questions. The same thinking that leads to effective data visualisation applies here: start with what you need to understand, not what data you have available.
The core problem: Most dashboards are data displays, not decision tools. They show metrics without context, numbers without meaning, information without insight. The dashboard becomes a reporting obligation, not a working tool.
The failure modes are predictable. Too many metrics competing for attention. Charts that require study to interpret. Numbers without comparison or context. Metrics chosen because they were available, not because they drive action. The dashboard tries to be everything to everyone and ends up useful to no one.
How We Approach Dashboard Design
We start every dashboard project with one question: What decision does this help someone make?
If we can't name the decision, the metric doesn't belong on the dashboard. This simple filter eliminates most of what typically clutters business dashboards. A metric earns its place by connecting to action. Everything else is noise.
One audience, one purpose
We don't build dashboards for "the company." We build them for specific people with specific questions. An operations manager checking morning status needs different information than a CEO reviewing monthly performance. Trying to serve both audiences on one screen serves neither well.
Exceptions over averages
People shouldn't study the dashboard to find problems. Problems should announce themselves. We design for exception-based attention: green means ignore, amber means watch, red means act. The dashboard earns attention only when attention is needed.
Glanceability
A dashboard should communicate its message in seconds. If users need to study it carefully to understand what's happening, it's not working. We design for the glance: the five-second check that tells you whether things are fine or need investigation.
Dashboard Patterns for Different Roles
Every role has different questions. The data might come from the same sources, but the presentation must match how each person works and what they need to know.
The Executive Dashboard
Five numbers, visible at a glance:
- Revenue this month vs target
- Pipeline value
- Projects at risk (count)
- Open issues requiring escalation
- Cash position
Each number shows comparison (vs last month, vs target). Colour indicates status. One click reveals the detail behind any number. The executive checks it in 30 seconds during their morning coffee. If nothing's red, nothing needs attention.
The Operations Dashboard
Real-time status for people managing daily work:
- Orders in each stage (stuck items highlighted)
- Today's deliveries (on track, at risk, late)
- Team capacity and current assignments
- Issues opened today, resolved today
Updates continuously. Designed to be open all day on a second monitor. Answers "what needs attention right now?" without requiring investigation. The ops manager glances at it every 15 minutes and knows if intervention is needed.
The Sales Dashboard
Pipeline health and activity:
- This month's target and current position
- Deals by stage with value
- Deals that haven't been touched this week
- Follow-ups due today
The sales manager sees immediately: are we on track? What needs attention? Which deals are going stale? The dashboard starts the daily standup and guides the one-to-ones.
The Finance Dashboard
Cash and commitments:
- Current bank balance and forecast
- Invoices outstanding (aged)
- Bills due this week
- Revenue recognised vs target
Weekly review focus. Highlights cash flow risks before they become crises. Shows exactly which invoices are overdue and by how long. The finance lead knows within seconds if chasing is needed.
These aren't templates to configure. They're examples of the thinking. Your roles have different questions. Your dashboard should answer those specific questions, not generic ones. The user experience principles that make software effective apply equally to dashboards.
What We Put on Dashboards
Not everything measurable belongs on a dashboard. We're selective about what earns a place, and everything that appears has a specific job to do.
Key Metrics with Context
Numbers need comparison to be meaningful. Revenue: £47,000. Good or bad? The number alone tells you nothing. We always show context: vs target, vs last period, vs average. A number in isolation is data. A number with context is information.
Context takes different forms depending on the metric:
- Progress metrics: Show current value against target. Revenue £47,000 of £60,000 target. The gap is visible.
- Trend metrics: Show current value against previous period. Revenue £47,000 (up 12% vs last month). Direction is clear.
- Health metrics: Show current value against threshold. Response time 2.3 seconds (target: under 3 seconds). Status is obvious.
Trends That Show Direction
Sparklines and small charts showing where things are heading. Is it improving or declining? How fast? Trends often matter more than point-in-time values. A revenue figure of £47,000 means something different if you're trending up from £35,000 than if you're trending down from £60,000.
We use sparklines for compact trend display: a small chart showing the last 12 data points, just enough to see direction and volatility. No axis labels, no legends. The shape tells the story. Detailed charts are available one click away for those who need them.
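A text sparkline shows how little is needed to convey shape. A minimal sketch using Unicode block characters (illustrative only; production dashboards would render SVG or canvas):

```python
BARS = "▁▂▃▄▅▆▇█"  # eight block heights, lowest to highest

def sparkline(points: list[float]) -> str:
    """Map each data point to one of eight bar heights: shape over precision."""
    lo, hi = min(points), max(points)
    if hi == lo:
        return BARS[3] * len(points)  # flat series: mid-height bars
    scale = (len(BARS) - 1) / (hi - lo)
    return "".join(BARS[round((p - lo) * scale)] for p in points)
```

Calling `sparkline` on the last 12 data points yields a 12-character string: no axes, no legend, just direction and volatility at a glance.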
Status Indicators
Traffic light colours, status icons, progress bars. Binary or categorical states communicated faster than numbers. The brain processes "green" faster than "94%."
We use a simple vocabulary: green means on track (no attention needed), amber means watch (potential issue developing), red means act (intervention required now). This vocabulary stays consistent across every dashboard we build. Users learn it once and apply it everywhere.
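The green/amber/watch/act vocabulary reduces to two thresholds per metric. A minimal sketch of how such a classifier might look (names and defaults are assumptions, not a prescribed implementation):

```python
from enum import Enum

class Status(Enum):
    GREEN = "on track"   # no attention needed
    AMBER = "watch"      # potential issue developing
    RED = "act"          # intervention required now

def status_for(value: float, amber_at: float, red_at: float,
               higher_is_worse: bool = True) -> Status:
    """Classify a metric against two thresholds.

    higher_is_worse=True suits metrics like overdue days; set it False
    for metrics like revenue vs target, where falling short is the problem.
    """
    if not higher_is_worse:
        value, amber_at, red_at = -value, -amber_at, -red_at
    if value >= red_at:
        return Status.RED
    if value >= amber_at:
        return Status.AMBER
    return Status.GREEN
```

Keeping the classification in one place is what makes the vocabulary consistent across every dashboard: each metric supplies its thresholds, but the meaning of red never varies.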
Exceptions and Alerts
Items that need attention right now. Late deliveries, overdue invoices, stuck orders. The dashboard surfaces problems rather than hiding them in averages.
Exception lists are specific. Not "3 issues" but the three actual issues with enough context to act: "Invoice #1234 (Smith Ltd) overdue 14 days, £2,400." The user can decide whether to act without clicking through to another screen.
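Producing a line like that is mostly a matter of carrying the right fields through to the dashboard. A minimal sketch (the function name and field set are illustrative):

```python
from datetime import date

def overdue_line(number: str, client: str, due: date,
                 amount: float, today: date) -> str:
    """One actionable exception line: who, how late, and how much."""
    days = (today - due).days
    return f"Invoice #{number} ({client}) overdue {days} days, £{amount:,.0f}"
```

The discipline is in the data model, not the formatting: if the exception record doesn't carry the client name and amount, the user is forced to click through, and the glanceability is lost.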
What We Keep Off Dashboards
Knowing what to exclude is as important as knowing what to include. Every metric competes for attention. We're ruthless about what earns a place.
Everything measurable
Just because data exists doesn't mean it belongs on a dashboard. The question is always: what decision does this help someone make? If the answer is "none," the metric stays in reports and ad-hoc queries, not on the dashboard.
Detailed tables
If users need to scroll through rows, it's not a dashboard. Tables are for drill-down and reports, not dashboard-level viewing. A dashboard might show "5 invoices overdue" with the top 3 listed. The full table is one click away for those who need it.
Complex charts
Multi-series line charts with legends. Stacked area charts requiring careful study. Anything that needs analysis belongs in analysis tools, not on dashboards. If someone has to ask "what am I looking at?", the visualisation has failed.
Vanity metrics
Numbers that look good but don't drive decisions. Total website visitors, social media followers, email list size. If no one would change their behaviour based on the metric, it doesn't belong. Dashboards are for steering, not celebrating.
Data Density and Visual Hierarchy
Dashboard design is an exercise in information architecture. The challenge is showing enough information to be useful without overwhelming the viewer. This requires careful attention to density and hierarchy.
The Density Trade-off
Low-density dashboards (few metrics, lots of whitespace) are easy to scan but force users to navigate to other screens for common questions. High-density dashboards (many metrics, compact display) answer more questions in one place but risk overwhelming users.
The right density depends on the user. An executive checking once daily wants low density: five numbers, clear status, obvious exceptions. An operations manager monitoring continuously wants higher density: more metrics visible, quick access to detail, information-rich display.
We typically aim for what we call "comfortable density." The dashboard should feel informative, not cluttered. Users should be able to find what they're looking for without hunting, but shouldn't feel overwhelmed. This usually means 5-8 primary metrics for executive views, 12-20 for operational views.
Visual Hierarchy
Not all metrics are equally important. Visual hierarchy communicates priority. The most critical metrics should be most prominent: larger, higher on screen, bolder. Secondary metrics can be smaller, lower, lighter. This same principle of visual intelligence applies across all business data presentation.
We establish hierarchy through:
- Size: Critical metrics get more screen real estate. Secondary metrics are compact.
- Position: Most important information appears top-left (where Western readers start). Less critical information moves down and right.
- Contrast: Key metrics use stronger colours and bolder type. Supporting information is muted.
- Grouping: Related metrics cluster together. White space separates unrelated items.
The goal is that users can extract the main message from the dashboard without reading every element. The hierarchy guides attention to what matters most.
Colour Usage and Accessibility
Colour is a powerful tool for dashboard communication, but it needs to be used deliberately. Random colour choices create visual noise. Inconsistent colour meanings confuse users. And approximately 8% of men have some form of colour vision deficiency, so colour alone can't carry critical information.
Semantic Colour
We use colour to communicate meaning, not decoration. Our standard vocabulary:
- Green: On track, healthy, positive. No action needed.
- Amber: Warning, watch, potential issue. Attention may be needed soon.
- Red: Problem, urgent, negative. Action needed now.
- Blue: Neutral, informational. No status implication.
- Grey: Secondary, inactive, historical.
This vocabulary stays consistent. Red always means problem. Green always means healthy. Users learn it once and understand it everywhere.
Accessibility Requirements
Colour vision deficiency affects how users perceive dashboards. Red-green colour blindness is most common, which is problematic because red and green are our primary status colours.
We address this by never relying on colour alone:
- Icons: A red status also shows a warning icon. A green status shows a checkmark.
- Labels: "At risk" or "On track" appears alongside the colour indicator.
- Pattern: In charts, we use texture or line style in addition to colour.
- Contrast: We test that colour combinations work in greyscale, not just in full colour.
We also ensure sufficient contrast for readability. Text needs at least 4.5:1 contrast ratio against backgrounds (WCAG AA standard). Large text and icons can use 3:1. We test dashboards against contrast checkers during design.
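The 4.5:1 and 3:1 ratios come from the WCAG definition of relative luminance, which can be checked programmatically rather than by eye. A sketch of the standard formula in Python (the helper names are ours; the maths follows the WCAG 2.1 definition):

```python
def _channel(c: int) -> float:
    """Linearise one sRGB channel (0-255) per the WCAG definition."""
    s = c / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1 to 21."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg, large_text: bool = False) -> bool:
    """AA threshold: 4.5:1 for body text, 3:1 for large text and icons."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Black on white scores the maximum 21:1; a check like this can run in CI so a palette change never silently drops a status colour below the AA threshold.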
Mobile and Responsive Design
Dashboards are increasingly accessed on mobile devices. A sales rep checking pipeline while travelling. An executive reviewing numbers from their phone. The dashboard needs to work on screens of all sizes.
Mobile-First Thinking
We don't simply shrink desktop dashboards for mobile. We design mobile-specific views that prioritise the most critical information for on-the-go access.
Mobile dashboard design principles:
- Fewer metrics: Show only the 3-5 most critical numbers. Everything else is one tap away.
- Larger touch targets: Buttons and interactive elements sized for fingers, not mouse pointers.
- Vertical layout: Single column, scroll-based navigation. No multi-column layouts that require horizontal scrolling.
- Simplified charts: Sparklines work. Complex charts don't. Save detailed visualisation for larger screens.
Responsive Breakpoints
We design for three primary breakpoints: mobile (single column, the critical few metrics only), tablet (a condensed layout with the most-used widgets), and desktop (the full layout). The same underlying data powers all views. The presentation adapts to the context.
Alerts and Notifications
Dashboards show current state. Alerts notify when state changes. The two work together: the dashboard for active monitoring, alerts for passive notification.
Alert Design Principles
Alerts must earn attention. Every alert that fires without requiring action trains users to ignore alerts. We design alerts with strict criteria:
- Actionable: Every alert should prompt a specific action. If the user can't do anything about it, don't alert.
- Specific: "Order #1234 stuck in packing for 4 hours" not "Orders need attention."
- Timely: Alert when intervention can help, not after it's too late.
- Configurable: Users can adjust thresholds to match their risk tolerance.
Alert Channels
Different urgency levels warrant different channels:
| Urgency | Channel | Example |
|---|---|---|
| Critical | SMS + Dashboard | Payment processing down, major system failure |
| High | Push notification + Email | Deal at risk, SLA breach imminent |
| Medium | Email + Dashboard badge | Invoice overdue 7 days, task approaching deadline |
| Low | Dashboard only | Weekly report ready, routine status update |
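The urgency-to-channel mapping above, plus the "alerts must earn attention" rule, can be sketched as a small router that also suppresses repeats of the same alert within a window. This is illustrative Python, not a specific product's API:

```python
from datetime import datetime, timedelta

# Channel mapping mirrors the table above
CHANNELS = {
    "critical": ["sms", "dashboard"],
    "high": ["push", "email"],
    "medium": ["email", "dashboard_badge"],
    "low": ["dashboard"],
}

class AlertRouter:
    """Route alerts by urgency; suppress repeats to avoid alert fatigue."""

    def __init__(self, repeat_window: timedelta = timedelta(hours=1)):
        self.repeat_window = repeat_window
        self._last_sent: dict[str, datetime] = {}

    def route(self, key: str, urgency: str, now: datetime) -> list[str]:
        last = self._last_sent.get(key)
        if last is not None and now - last < self.repeat_window:
            return []  # same alert fired recently: stay quiet
        self._last_sent[key] = now
        return CHANNELS.get(urgency, ["dashboard"])
```

The suppression window is the operational half of "every alert that fires without requiring action trains users to ignore alerts": a stuck order should fire once, not every polling cycle.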
Notification Centre
For dashboards with significant alert volume, we build a notification centre: a dedicated area showing recent alerts with status. Users can see what's been flagged, what's been acknowledged, and what's been resolved. Historical alerts create an audit trail and help identify patterns.
Personalisation and Customisation
No two users have identical needs. Even within the same role, individuals prioritise different metrics based on their current projects, responsibilities, or working style. We build dashboards that adapt to individual preferences without becoming chaotic.
What We Allow Users to Customise
- Widget arrangement: Users can reorder dashboard components to match their scanning pattern.
- Date ranges: Default views can be adjusted (last 7 days vs last 30 days vs current month).
- Alert thresholds: Individual tolerance for when metrics turn amber or red.
- Favourites: Pin frequently accessed drill-downs for quick navigation.
What We Don't Allow
- Adding arbitrary metrics: The core metric set is designed deliberately. Users can't add random fields that might mislead or distract.
- Changing colour meanings: Green always means healthy. Consistency across users enables team communication.
- Hiding mandatory metrics: Some metrics must be visible for governance or operational reasons. Users can't hide them.
The balance is giving users control over presentation while maintaining the integrity of the information architecture.
Saved Views
For users who need different perspectives at different times, we support saved views. A sales manager might have a "Pipeline Review" view for Monday meetings and a "Daily Activity" view for quick morning checks. Each view has its own layout and filters, switchable with one click.
Performance and Loading States
A slow dashboard is a useless dashboard. If users have to wait for data, they'll stop checking. We design for speed from the start, and we handle loading states gracefully when data takes time.
Performance Targets
Our standard performance targets for dashboard loading:
- Initial render: Under 2 seconds. Users should see the dashboard structure immediately.
- Primary metrics: Under 3 seconds. Critical numbers appear quickly.
- Complete load: Under 5 seconds. All data visible and interactive.
- Refresh: Under 1 second. Updating existing data should feel instant.
These targets assume reasonable network conditions. For users on slower connections, we prioritise critical metrics and load secondary content progressively.
Loading State Design
When data takes time to load, users need feedback. Empty screens create uncertainty. We use loading states that maintain the dashboard structure while indicating progress:
- Skeleton screens: Placeholder shapes showing where content will appear. Users understand the layout before data arrives.
- Progressive loading: Critical metrics load first, secondary content follows. Users can start working before everything is ready.
- Stale data indicators: If cached data is being shown while fresh data loads, we indicate this clearly: "Updated 5 minutes ago, refreshing..."
- Error states: When data fails to load, we show what's missing and offer a retry. "Revenue data unavailable. Retry."
Real-Time vs Refresh
Not all dashboards need real-time data. The appropriate update frequency depends on the use case:
Real-time (websocket updates)
For operations dashboards where things change minute-by-minute: order status, system health, live activity. Data pushes to the browser as it changes.
Frequent refresh (1-5 minutes)
For activity monitoring that doesn't need instant updates: sales activity, support tickets, task progress. Auto-refresh or pull-to-refresh.
Periodic refresh (hourly or daily)
For executive dashboards and trend views: revenue, pipeline, forecasts. Data changes slowly; frequent refresh adds load without value.
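The three update tiers amount to a per-dashboard configuration choice. A minimal sketch of what that configuration might look like (the dashboard names and intervals are illustrative defaults, not recommendations for any specific system):

```python
# (mechanism, poll interval in seconds); 0 means push, no polling
REFRESH = {
    "operations": ("websocket", 0),   # push as data changes
    "sales_activity": ("poll", 120),  # every 2 minutes is enough
    "support": ("poll", 300),
    "executive": ("poll", 3600),      # hourly is plenty for trend views
}

def refresh_strategy(dashboard: str) -> tuple[str, int]:
    """Default to hourly polling for anything unclassified."""
    return REFRESH.get(dashboard, ("poll", 3600))
```

Making the cadence explicit per dashboard avoids the common failure of polling everything every few seconds "to be safe", which adds server load without adding value.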
Different Dashboards for Different Roles
We design role-specific views from the same underlying data. The information architecture starts with the data model, but the presentation layer is entirely role-driven.
Executive view
High-level health check. Few metrics, strong summarisation, exception-focused. Answers "do I need to worry about anything?" in seconds. Weekly or daily check-in frequency. Drill-down available but rarely used.
Manager view
Team performance and operational status. Resource allocation, project health, issue tracking. Answers "is my team on track?" Daily use, deeper engagement with the detail. Comparisons and trends prominent.
Individual view
Personal focus. My tasks, my metrics, my priorities. Answers "what should I be working on?" Continuous use throughout the day. Action-oriented: click to start work, not just to view status.
Client view
External-facing status. Project progress, deliverables, upcoming milestones. Appropriate transparency without internal detail. Read-only, focused on their specific engagement.
How We Build Dashboards
Dashboard design is iterative. We don't design in isolation then reveal the finished product. We build understanding through conversation, prototype quickly, test with real users, and refine based on observation.
Interview the users
We talk to the people who'll use the dashboard. What questions do they ask regularly? What decisions do they make? What information do they currently dig for? The dashboard answers real questions, not hypothetical ones. We often shadow users for a day to see what they actually check and when.
Map decisions to metrics
For each question users ask, we identify the metric that answers it. For each decision they make, we identify the information that informs it. This mapping determines what belongs on the dashboard. Metrics without connected decisions don't make the cut.
Design for the worst day
Dashboards get tested when things go wrong. Late shipments, dropped revenue, client escalations. We design for these moments: does the dashboard help you understand and respond to problems? We prototype crisis scenarios alongside happy-path views.
Build rough, test early
We prototype quickly and put dashboards in front of real users with real data. Watch how they use them. What do they look at? What do they ignore? What's missing? Refinement comes from observation, not assumption. First versions are always wrong in interesting ways.
Prune continuously
Dashboards accumulate cruft. Metrics added "just in case" that no one uses. Views created for a project that ended. We build in review cycles: if something isn't earning attention, it gets removed. A quarterly audit asks: "What haven't we looked at in three months?"
Common Dashboard Mistakes
We've seen enough dashboard projects to recognise the patterns that lead to wallpaper. Avoiding these mistakes is as important as following best practices.
| Mistake | Symptom | Fix |
|---|---|---|
| Too many metrics | Users don't know where to look first | Ruthlessly cut to 5-8 key metrics |
| No context for numbers | "Is 47 good or bad?" | Always show comparison (vs target, vs last period) |
| Complex charts | Users study charts to understand them | Simplify or move to analysis tools |
| Stale data | Users don't trust the numbers | Show last updated time, ensure freshness |
| No clear hierarchy | All metrics look equally important | Use size, position, colour to guide attention |
| Dead metrics | Metrics no one acts on | Regular audits, remove what's not used |
What You Get
Dashboards designed with these principles achieve something rare: they get used. Users check them because checking them is useful. The dashboard becomes part of the workflow, not an obligation.
- Get looked at: because they answer questions people actually have
- Save time: glanceable status replaces investigation and asking around
- Surface problems early: exceptions visible before they become crises
- Drive decisions: every metric connects to something someone can act on
- Stay relevant: built for evolution as your questions change
- Work everywhere: mobile, tablet, desktop; the right view for each context
The dashboard becomes a tool people use, not wallpaper they ignore. It earns its place on the second monitor, in the morning routine, in the weekly review. It answers questions before anyone has to ask them.
Further Reading
- Stephen Few - Information Dashboard Design - The foundational text on dashboard design principles from a recognised authority in the field.
- Nielsen Norman Group - Dashboard Design - Research-backed articles on dashboard usability from UX research leaders.
- Coblis Colour Blindness Simulator - Tool for testing dashboard colour choices against various types of colour vision deficiency.
Build Dashboards People Use
We design dashboards that answer your actual questions. Your metrics, your decisions, your users. Integrated with your data, updated in real-time or on appropriate schedules, accessible where your team works. Not generic BI tool configuration. Custom dashboards built for how your business actually operates.
Let's talk about your dashboard needs →