Make software people actually want to use
Most business software gets used because people have to use it, not because they want to. The screens are cluttered. The workflows make sense to the developer who built them but not to the person clicking through them eight hours a day. Every task takes three clicks too many.
Good user experience design changes that equation. When software fits the way people actually work, adoption stops being a battle. Support tickets drop. Training shrinks from weeks to days. The system becomes something the team relies on rather than something they work around.
Consumer apps get most of the UX attention: sleek onboarding flows, gamification, social nudges. Very little of that applies when you are building an order management system or a project tracking dashboard. Business software serves people who perform the same tasks hundreds of times a month, where every unnecessary step compounds into hours of lost productivity. That is a different design problem entirely.
We have been designing interfaces for custom business software since 2005, across more than 50 operational systems used daily by staff. Not marketing websites. Not mobile games. Tools that people open at 9am and close at 5pm, five days a week. That experience has taught us where business software UX goes wrong, and what it looks like when it goes right.
The Shadow Systems Diagnostic
Every business with poorly designed software develops the same pattern. Staff build workarounds. Spreadsheets that duplicate data the official system should hold. WhatsApp groups that replace broken notification features. Manual checklists taped beside monitors because the on-screen workflow misses steps. Tribal knowledge hoarded by the two people who have figured out how to make the system behave.
These are shadow systems: unofficial tools and processes that grow in the gaps between what the software does and what people actually need. They are not a sign of lazy or untrained staff. They are a diagnostic. When people build workarounds, they are telling you (through their behaviour, not their words) that the software is harder to use than the problem it was meant to solve.
The symptoms are consistent across industries and company sizes. Staff avoid the system when they can, entering data in batches at the end of the week instead of in real time. New hires take weeks to become productive because the interface makes sense only to people who have memorised its quirks. Error rates climb because the system does not prevent mistakes or make the correct path obvious. The person who approved the purchase of the software may not be the person who uses it daily, and that gap between buyer and user is where usability problems hide.
The instinct, when these symptoms appear, is to schedule more training. Sometimes that is the right call. But often it is not, and the distinction matters: if experienced staff who have already been trained still hesitate, make errors, or build workarounds, the problem is not knowledge. It is design.
If the symptoms point to UX, the fix is not more training. It is better design. Training teaches people to work around friction. Good design removes the friction entirely. The fix starts with understanding what causes the friction in the first place.
Cognitive load: the theory behind the friction
The previous section identified the symptoms. Shadow systems, workaround spreadsheets, staff avoidance. But diagnosing a UX problem is only useful if you understand why those problems form in the first place. The most common answer is cognitive load: the total mental effort a person spends to complete a task. (Process design, data quality, and permissions also contribute, but cognitive load is the factor most directly under the designer's control.) When an interface demands too much of that effort for the wrong reasons, people make mistakes, slow down, or find a way around it.
In the 1980s, educational psychologist John Sweller developed cognitive load theory to explain why some learning environments work and others overwhelm. The framework translates directly to interface design, and it gives you a practical way to identify what is actually causing friction in your software.
Sweller's three types of cognitive load:
Intrinsic load comes from the task itself. Configuring a multi-currency invoice is inherently complex, and no interface can eliminate that complexity entirely.
Extraneous load comes from the interface. Unclear labels, inconsistent layouts, and unnecessary steps all add mental effort that has nothing to do with the task.
Germane load is productive effort. It is the mental work of building understanding: learning domain patterns, recognising how data relates, getting better at the job through use of the tool.
Of the three, extraneous load is the one design should target. Intrinsic load belongs to the domain (you cannot make VAT rules simpler by redesigning a form). Germane load is desirable (a well-structured interface actually helps users learn the business domain faster). But extraneous load is pure waste. Every ounce of mental effort the interface adds through poor design is effort stolen from the actual work.
What extraneous load looks like in business software
Extraneous cognitive load hides in the details that teams stop noticing because they have adapted to them. A button labelled "Process" that could mean process the order, process the payment, or process the return. Navigation that works one way on the orders screen and a different way on the inventory screen. A form that presents 30 fields when the task at hand only needs five, forcing the user to scan past irrelevant data every single time. Date fields that accept three different formats across three different modules. Colour coding that means "urgent" on one dashboard and "overdue" on another.
None of these problems is catastrophic on its own. But they accumulate. Each one forces a small decision, a moment of hesitation, a conscious effort to interpret rather than act. This is where two foundational principles from cognitive science explain the compound effect. Hick's Law states that decision time increases logarithmically with the number of choices: a screen showing 15 actions where the user needs three is not just cluttered, it is measurably slower. Miller's Law describes the limits of working memory (roughly seven items, plus or minus two), which means that dense screens packed with unrelated information literally exceed what a person can hold in mind at once.
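Hick's Law can be made concrete with a back-of-envelope sketch. The constants below are illustrative, not measured values from any study; the point is the shape of the curve, not the exact milliseconds.

```python
import math

def decision_time_ms(n_choices: int, base_ms: float = 200.0, per_bit_ms: float = 150.0) -> float:
    """Hick's Law: decision time grows with log2(n + 1).
    base_ms and per_bit_ms are illustrative constants, not measured values."""
    return base_ms + per_bit_ms * math.log2(n_choices + 1)

# A screen offering 15 actions vs the 3 the task actually needs:
print(decision_time_ms(15))  # 800.0
print(decision_time_ms(3))   # 500.0
```

Quintupling the choices does not quintuple the time, but the penalty is real, and it is paid on every single interaction.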
The practical test is straightforward. For every element on a screen, ask: is this essential to the task the user is performing right now? If the answer is no, it should either be removed or made invisible until needed. That single principle, applied consistently, eliminates more extraneous load than any amount of visual polish. It is the difference between an interface that looks clean and one that actually thinks clearly on the user's behalf.
Designing for the person who uses it twice a month
The daily power user gets most of the design attention. They are vocal, they file feature requests, and their workflows are visible. But the hardest design challenge in business software is the person who logs in once or twice a month: the operations manager running end-of-month invoicing, the warehouse supervisor pulling quarterly stock reports, the finance director approving expenses every few weeks. These occasional users have forgotten where everything lives since last time. They cannot rely on muscle memory. Every session feels like a partial re-learning exercise.
This is precisely why occasional users are the best litmus test for interface quality. If someone who uses your software twelve times a year can complete their task without asking for help, your design is working. If they cannot, no amount of power-user shortcuts will compensate for the underlying clarity problem.
Progressive disclosure: show what matters now
The previous section identified extraneous cognitive load as the primary target for elimination. Progressive disclosure is the most direct technique for achieving that. Rather than presenting every option and data field at once, the interface reveals complexity only when the user needs it. A customer record shows contact details and recent activity by default. Order history, financial summary, and communication log sit one click away, visible but not competing for attention. The occasional user sees a manageable screen. The power user knows where to find depth.
The key is understanding which information each user role needs most often. That understanding comes from process mapping done before any screen is designed. Without it, progressive disclosure becomes guesswork: developers hiding fields they personally find unimportant rather than fields the workflow genuinely does not need at that moment.
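Progressive disclosure can be expressed as data rather than scattered conditionals, which keeps the default/detail split explicit and reviewable. A minimal sketch, assuming hypothetical field and panel names for a customer record:

```python
# Hypothetical customer-record layout: defaults always shown,
# panels one click away. Names are illustrative, not from a real system.
CUSTOMER_VIEW = {
    "default": ["name", "contact_details", "recent_activity"],
    "panels": {
        "Order history": ["orders", "returns"],
        "Financial summary": ["balance", "credit_limit"],
        "Communication log": ["emails", "calls"],
    },
}

def visible_fields(view: dict, expanded: set) -> list:
    """Fields to render: the defaults, plus any panels the user has opened."""
    fields = list(view["default"])
    for panel, panel_fields in view["panels"].items():
        if panel in expanded:
            fields.extend(panel_fields)
    return fields

print(visible_fields(CUSTOMER_VIEW, set()))               # occasional user: three fields
print(visible_fields(CUSTOMER_VIEW, {"Order history"}))   # power user: one click to depth
```

The occasional user sees a manageable screen by default; the power user pays one click for depth, and the mapping itself documents the design decision.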
Smart defaults that reduce decisions
Every blank field is a decision. For occasional users, many of those decisions require recalling information they have not thought about since last month. Smart defaults eliminate that recall burden entirely. When the invoicing form pre-selects the most common payment terms, pre-fills the VAT rate, and defaults to the current date, the occasional user can review and confirm rather than remember and enter. The difference in task completion time is substantial: seconds per field, minutes per form, hours over a year.
Good defaults depend on understanding the typical workflow. If process mapping reveals that 80% of purchase orders use the same supplier and the same delivery address, those become the defaults. The remaining 20% can override them. This connection between workflow knowledge and interface behaviour is where design stops being decorative and starts being structural.
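That 80/20 pattern suggests a simple rule for a default: pre-select the most frequent historical value and let the user override it. A sketch with hypothetical supplier data:

```python
from collections import Counter

def smart_default(past_values):
    """Default a field to its most frequent historical value, so the
    user reviews and confirms rather than recalls and enters."""
    if not past_values:
        return None  # no history yet: leave the field blank
    return Counter(past_values).most_common(1)[0][0]

# Hypothetical history reflecting the 80/20 split process mapping revealed:
supplier_history = ["Acme Ltd"] * 8 + ["Borealis plc"] * 2
print(smart_default(supplier_history))  # Acme Ltd
```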
Recognition over recall
Don Norman's design principles include a concept that matters enormously for occasional users: recognition is easier than recall. A dropdown showing the five most recent suppliers is faster than a text field requiring the user to type a supplier code from memory. Breadcrumb navigation showing where you are in a multi-step process is easier than remembering which step comes next. The interface should provide answers, not demand them. For power users, this same approach simply speeds up what they already know. For occasional users, it is the difference between completing the task and abandoning it to ask a colleague.
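The "five most recent suppliers" dropdown reduces to a small recency-with-deduplication routine. A sketch, assuming the history is stored oldest-first:

```python
def recent_options(history, limit=5):
    """Most recent distinct values first, for a recognition-based dropdown.
    history is assumed to be ordered oldest to newest."""
    options = []
    for value in reversed(history):   # walk newest to oldest
        if value not in options:      # keep each supplier once
            options.append(value)
        if len(options) == limit:
            break
    return options

history = ["Acme", "Borealis", "Acme", "Corvid", "Dyno", "Acme", "Elm"]
print(recent_options(history))  # ['Elm', 'Acme', 'Dyno', 'Corvid', 'Borealis']
```

The user recognises a name in the list instead of recalling a code from memory, which is the whole point of the pattern.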
Contextual help instead of training manuals
Traditional training documentation assumes the user will read it before they need it, remember it when they do, and find the relevant section quickly. None of these assumptions hold for someone who invoices once a month. Contextual help, placed exactly where confusion is likely to arise, replaces all three assumptions with a single one: the user will read a short explanation at the moment they need it. Tooltips on form fields, inline guidance below section headings, examples in placeholder text. Help at the point of confusion, not in a PDF nobody opens.
The comparison below shows how the same interface can serve both occasional and power users without compromise. The goal is not two separate experiences but a single design that scales gracefully across usage frequency.
| Design area | What occasional users need | What power users need | How the same interface serves both |
|---|---|---|---|
| Navigation | Clear labels and visible paths; no memorised shortcuts | Speed; fewer clicks to reach frequent screens | Descriptive menu labels with keyboard shortcuts shown alongside |
| Form fields | Pre-filled defaults so they confirm rather than recall | Quick override capability; tab-through efficiency | Smart defaults with full editability; tab order follows the task sequence |
| Information density | Only the fields relevant to their task | All data visible without extra clicks | Progressive disclosure: essential fields shown, detail panels expandable |
| Help | Inline guidance explaining what each field means | No help needed; possibly dismissible | Contextual tooltips that appear on hover; dismissible once learned |
| Error handling | Gentle, specific messages explaining what went wrong | Quick correction without losing flow | Inline validation with plain-language messages; field-level, not page-level |
Task-centred design and process mapping
Most business software is designed around features. The system has an invoicing module, a reporting module, a contacts module, and each gets its own screen, its own navigation item, its own logic. The problem is that users do not think in modules. They think in tasks: "send this client their monthly invoice," "find out why this order is late," "onboard a new supplier." A task might touch three modules in the space of two minutes. When the interface is organised around what the system can do rather than what the user needs to accomplish, every task becomes a scavenger hunt across screens.
This is the core of task-centred design. Rather than structuring navigation and screens around the software's architecture, you structure them around the work people actually perform. The concept overlaps with Clayton Christensen's Jobs to Be Done framework, which holds that people do not buy products for their features but to make progress on a specific job. Applied to interface design, JTBD shifts the organising question from "what can this screen display?" to "what is the user trying to get done right now?"
The practical challenge is knowing what those tasks are, in what order they happen, who hands off to whom, and where the pain points sit. That knowledge does not live in a requirements document. It lives in the habits, workarounds, and institutional memory of the people doing the work. This is where process mapping becomes essential. Before sketching a single wireframe, you map the real user flows: how information enters the business, where it needs to go, which steps are sequential and which run in parallel, and where bottlenecks form. How we approach process mapping is covered in depth on its own page, but the short version is that mapping reveals the tasks, and task-centred design translates those tasks into screens.
That translation is where the information architecture takes shape. Navigation follows task frequency. Default views surface the data each role needs most. Form layouts match the order information arrives in, not the order the database stores it. The interface mirrors the workflow instead of forcing the workflow to mirror the interface.
One consequence of this approach is that it exposes broken processes before any design work begins. A three-step approval chain that could be one step. A form that asks for data the system already holds. A notification that fires so often it gets ignored. Fixing these first is not optional. Wrapping a pretty interface around a broken process just makes a broken process faster.
Error prevention and recovery in business software
Errors in a consumer app are a minor frustration. Errors in an invoicing system cost real money: a wrong line item propagates through accounts receivable, triggers an incorrect payment, and damages a client relationship that took months to build. A miskeyed stock receipt creates a discrepancy that surfaces weeks later when a physical count does not match the system. In business software, errors cascade.
Good error handling works across three layers: preventing mistakes before they happen, making recovery easy when they do, and communicating clearly so the user knows exactly what went wrong.
Prevention: stop the error at the source
The most effective error strategy is making mistakes difficult to make in the first place. Inline validation that checks a field before the form is submitted catches problems at the point of entry, not five clicks later. Sensible constraints (a date picker instead of a free-text field, a dropdown instead of manual entry for known values) eliminate entire categories of input error. Smart defaults, drawn from the process mapping work covered earlier, pre-fill fields with the most likely value so the user only needs to correct exceptions rather than entering everything from scratch.
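Inline validation of this kind is ordinary conditional logic, run at the point of entry rather than on submit. A sketch with hypothetical field names and rules:

```python
from datetime import date

# Hypothetical constraints; a real system would derive these from the domain.
ALLOWED_TERMS = {"14 days", "30 days", "60 days"}

def validate_field(field, value):
    """Return an error message, or None if the value is acceptable.
    Checked per field at the point of entry, not five clicks later."""
    if field == "payment_terms" and value not in ALLOWED_TERMS:
        return "Payment terms must be 14, 30 or 60 days."
    if field == "due_date" and not isinstance(value, date):
        return "Pick the due date from the calendar."
    if field == "quantity" and (not isinstance(value, int) or value <= 0):
        return "Quantity must be a whole number greater than zero."
    return None

print(validate_field("quantity", -3))  # caught before the form is submitted
print(validate_field("quantity", 12))  # None
```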
Recovery: make mistakes reversible
Prevention cannot catch everything. When errors do happen, the interface should make recovery painless. Undo functionality for routine actions removes the need for "Are you sure?" confirmation dialogs that slow everyone down. Soft delete (moving records to a trash folder rather than destroying them permanently) means an accidental deletion is a two-click fix, not a support ticket. Confirmation dialogs should be reserved for genuinely destructive actions: deleting an entire project, sending a bulk invoice run, or changing permissions that affect other users.
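Soft delete is usually nothing more than a nullable timestamp on the record. A minimal sketch, not a full ORM model:

```python
from datetime import datetime, timezone

class Record:
    """Soft delete: flag the record rather than destroy it, so an
    accidental deletion is reversible."""

    def __init__(self, name):
        self.name = name
        self.deleted_at = None  # None = live; a timestamp = in the trash

    def soft_delete(self):
        self.deleted_at = datetime.now(timezone.utc)

    def restore(self):
        self.deleted_at = None  # the two-click fix, not the support ticket

    @property
    def is_deleted(self):
        return self.deleted_at is not None

r = Record("Q3 stock count")
r.soft_delete()
r.restore()
print(r.is_deleted)  # False
```

Queries for everyday screens filter on `deleted_at IS NULL`; the trash view shows the rest.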
Communication: tell the user what actually happened
When something does go wrong, the error message is the interface's last chance to be helpful. The difference between a good error message and a bad one is the difference between a three-second fix and a five-minute investigation.
Specific, inline error messages placed next to the problem field are always more effective than a generic banner at the top of the page. The best error messages name the problem, explain why it matters, and suggest how to fix it. That specificity protects data integrity across the entire system, not just on one screen.
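One way to stop the three parts of a good error message collapsing into a vague banner is to make the structure explicit in code. A sketch with illustrative content and a hypothetical field name:

```python
from dataclasses import dataclass

@dataclass
class FieldError:
    """Name the problem, explain why it matters, suggest the fix,
    and attach it to the field it belongs to. Content is illustrative."""
    field: str     # where to render the message, inline next to the input
    problem: str   # what is wrong
    why: str       # why it matters downstream
    fix: str       # what to do about it

    def render(self):
        return f"{self.problem} {self.why} {self.fix}"

err = FieldError(
    field="vat_number",
    problem="This VAT number has 8 digits; UK VAT numbers have 9.",
    why="Invoices with an invalid VAT number will be rejected.",
    fix="Check the number on the supplier's most recent invoice.",
)
print(err.render())
```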
Accessibility as baseline, not bolt-on
Accessibility in business software is not a finishing touch applied before launch. It is a structural decision that shapes how every screen, form, and interaction works. In the UK, this is not optional.
Legal context: The UK Equality Act 2010 places a duty on employers to make reasonable adjustments for disabled employees. Where business software is a core part of someone's job, an inaccessible interface can create a barrier the employer is expected to address. This is not the same as public-sector WCAG compliance, but the practical outcome is similar: accessibility in workplace software is a reasonable-adjustments question, not an optional polish.
Beyond permanent disability, workplace software faces situational impairment constantly. A warehouse worker picking orders in bright sunlight cannot read low-contrast text. A manager approving purchase orders one-handed while on a phone call needs keyboard shortcuts or large tap targets. A field engineer wearing gloves on a noisy site needs clear visual feedback because audio cues are useless. These are not edge cases. They are Tuesday morning.
Accessibility rests on four pillars, and each one benefits every user, not just those with disabilities.
- Keyboard navigation: everything operable without a mouse. This also serves power users who work faster from the keyboard than they ever could with a mouse.
- Screen reader support: semantic HTML, proper labelling, and meaningful alt text. This forces clear information architecture, which helps everyone.
- Colour independence: meaning conveyed through shape, text, or position as well as colour. A status indicator that relies solely on red versus green fails for the roughly 8% of men who have a colour vision deficiency.
- Sufficient contrast: text readable in varied lighting conditions. WCAG 2.2 AA requires a minimum contrast ratio of 4.5:1 for body text, and meeting that standard makes every screen easier to read.
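The 4.5:1 threshold is not a matter of taste; it is checkable in a few lines using the WCAG relative-luminance formula:

```python
def _linear(channel):
    # sRGB channel in 0-1, linearised per the WCAG relative-luminance definition
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_linear(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0 — black on white
print(contrast_ratio((100, 100, 100), (255, 255, 255)) >= 4.5)   # True — passes AA
```

A check like this belongs in the design-system tests, so a colour tweak that drops body text below 4.5:1 fails the build rather than shipping.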
The GOV.UK Design System is the strongest publicly available reference for accessible interface patterns. It was built to serve the entire UK population, including users with low digital confidence, and the patterns it established (clear labels, generous spacing, minimal reliance on colour alone) translate directly to business software. When we build accessible interfaces, the result is not a separate "accessible version." It is simply a better interface for everyone.
Measuring UX: from gut feeling to evidence
Most businesses evaluate their software's usability by feel. The team seems to manage. Complaints are infrequent. Nobody has quit over it. That is not measurement. It is absence of catastrophe. Genuine UX measurement uses standardised instruments and repeatable metrics that let you track improvement over time and compare across systems.
The most practical standardised instrument is the System Usability Scale (SUS). Ten statements, five-point scale, five minutes to complete. John Brooke designed it in 1986 and it has held up remarkably well since. The scoring is simple: below 50 means the system has serious problems. Between 50 and 68 is marginal (usable but frustrating). Above 68 is good, with 68 being the published average. Above 80 puts you in the top 10%. No specialist training needed to run it or interpret the results, which makes SUS practical for teams without a dedicated UX researcher.
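The SUS scoring rule is simple enough to express directly: odd-numbered (positively worded) items contribute their score minus one, even-numbered items contribute five minus their score, and the total is multiplied by 2.5 to land on a 0-100 scale.

```python
def sus_score(responses):
    """Score a completed SUS questionnaire.
    responses: ten answers, each 1-5, in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten answers between 1 and 5")
    # Items 1,3,5,7,9 (index 0,2,...) are positive; items 2,4,6,8,10 negative.
    contributions = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0 — above the 68 average
```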
A SUS score tells you how the system feels to use. To understand where the friction actually sits, you need four operational metrics tracked over time.
Task completion time
How long does it take to complete a core workflow (processing an order, raising an invoice, updating a record)? Measure before and after design changes. A 30% reduction in task time across 20 daily users recovers hours every week.
Error rate per task
How often do users make mistakes during a given workflow, and how many of those errors reach downstream systems? Track validation catches (the interface prevented the error) separately from escaped errors (the mistake made it through). The ratio tells you whether your prevention patterns are working.
Time-to-competency
How long does it take a new user to complete core tasks without assistance? Measure from first login to unassisted task completion. If this number is days rather than hours, the interface is failing regardless of what the SUS score says.
Support ticket volume
Count tickets categorised as "how do I..." questions. These are direct evidence of interface confusion. A sustained drop after a redesign is one of the clearest signals that UX improvements are working.
One important nuance on testing scope. Jakob Nielsen's often-cited finding that five users uncover roughly 85% of usability problems holds true for a homogeneous user group. Business software serving multiple roles (warehouse staff, sales reps, finance managers) is not homogeneous. Each role interacts with different screens, different data, and different workflows. Five warehouse operators will find warehouse problems. They will not find the issues a finance manager encounters in the reporting module. Multi-role software needs role-specific testing rounds: five users per role, focused on that role's core tasks. The total participant count rises, but each round stays small and fast.
Keeping UX quality after launch
Software does not stay well-designed on its own. Every feature added, every edge case patched, every "quick fix" requested by a single user erodes the original interface quality. This is UX debt, and it accumulates the same way technical debt does: silently, steadily, until the system that launched cleanly has become the cluttered, inconsistent tool that everyone complains about.
The problem is structural. Features get added one at a time, each making local sense but degrading the whole. A new status field appears on a form because one team needed it. A second notification type gets bolted on. A report screen grows three new filters. None of these changes go through the same design rigour as the original build, because they feel too small to warrant it. Over 18 months, the cumulative effect is an interface that has drifted far from the low-friction principles it launched with.
Small teams cannot afford a dedicated UX review board, but they can maintain three habits that prevent the worst of this drift.
Three governance habits for small teams:
1. Component library enforcement. Every new screen or feature must use existing UI patterns. If a new pattern is genuinely needed, it gets added to the library, not invented in isolation. This keeps interfaces consistent even as different developers build different features.
2. Quarterly "fresh eyes" testing. One new user, three core tasks, 30 minutes. Watch them work without helping. The friction points they hit are the ones your experienced team has stopped noticing.
3. Track your UX metrics over time. Task completion speed, error rates, and support ticket volume (the measures from the previous section), plus feature adoption. When these numbers start moving in the wrong direction, you catch the drift before it becomes a redesign.
None of this requires a large budget or a specialist team. It requires the discipline to treat UX as an ongoing practice rather than a launch-day achievement.
How design connects to development
Design decisions only matter if they survive contact with production code. At IGC, design and development happen together, which eliminates the handoff gap where good intentions get lost. Read more about our web application development approach for the full technical picture.
This tight loop means UX improvements ship quickly. When user testing reveals a friction point, it can often be fixed the same day.
Start with the friction
Shadow systems (the spreadsheets, the sticky notes, the "just ask Sarah" workarounds) are not a people problem. They are a design problem. People build workarounds when the official system creates more friction than the workaround does. Fix the friction and the workarounds disappear on their own.
Three ideas from this guide are worth holding onto. First, cognitive load is the real enemy of business software usability, not aesthetics, not feature count, but the mental effort each interaction demands. Second, you do not need a research lab to test your UX: one new user, three tasks, and 30 minutes of observation will reveal more than months of internal debate. Third, good user experience is measurable. Task completion speed, error rates, support tickets, and feature adoption give you hard numbers, not opinions.
If your team is working around your software rather than with it, that is a design problem worth fixing.
Design software people actually use
We have been building business interfaces since 2005. If your current system is generating workarounds instead of results, a UX friction review is the fastest way to find out why and what to fix first. A consulting session covers the friction audit, prioritised recommendations, and a clear path forward.
Book a UX friction review →