Most custom software projects that fail do not fail because of technology. They fail because of process. Unclear requirements that nobody challenged. Feedback loops that ran quarterly instead of fortnightly. A big-bang launch on a Friday afternoon with no rollback plan.
The custom software development process determines whether you end up with a system that fits your business or an expensive artefact that nobody uses. This page walks through how a project actually runs, from the first conversation to years of production use, with concrete deliverables, timelines, and decision points at each phase.
If you are evaluating how custom software is built, this is the practitioner's version. No methodology diagrams. No acronym soup. Just what happens, what gets produced, and where things go wrong.
Why process matters more than technology
A competent development team can build almost anything. Laravel, PostgreSQL, React, Vue: the technology choices matter, but they are rarely why a project succeeds or fails. Process failures usually are.
Three patterns account for most custom software project failures: requirements gathered once and never challenged, feedback loops measured in months rather than weeks, and big-bang launches with no rollback plan.
Each of these is a process failure, not a technology failure. The software development lifecycle we follow exists specifically to prevent them.
The naive approach versus what actually works
Most businesses commissioning custom software for the first time expect a linear process: gather all requirements, build everything, then launch. This is the waterfall model, and it sounds logical. In practice, it concentrates risk at the worst possible moment.
The comparison below shows why iterative delivery outperforms waterfall across every dimension that matters to the business owner.
| Dimension | Waterfall / big-bang | Iterative delivery |
|---|---|---|
| Requirements | Single phase upfront, then locked | Initial discovery, then refined every sprint based on working software |
| Feedback frequency | End of project (or never) | Every two weeks, against working software |
| Launch risk | All-or-nothing. One chance to get it right | Incremental releases. Each deployment is small and reversible |
| Budget control | Overruns discovered late. Scope creep invisible until invoice | Budget tracked per sprint. Overruns visible within two weeks |
| Change handling | Change requests are expensive and adversarial | Changes are expected. Scope is reprioritised, not frozen |
| Time to first value | Months (or longer) | Working software within 4 weeks |
The rest of this page describes the iterative model we use: four phases, each with specific outputs, decision points, and failure modes.
Phase 1: Discovery (1 to 2 weeks)
Discovery is the phase where we learn how your business actually operates. Not how you think it operates. Not how the org chart says it should operate. How work actually flows through your teams, where it gets stuck, and where manual effort is hiding.
We run structured interviews with the people who do the work. Not just the project sponsor or managing director (though they are in the room too). The person processing orders. The person chasing late invoices. The person who maintains the spreadsheet that holds everything together.
What we ask
The questions are specific and operational. They are designed to surface the real requirements, which rarely match the initial brief.
- What does your Monday morning look like? Which systems do you open first?
- Where do you re-enter data that already exists somewhere else? Duplicate data entry is often the first sign of a missing integration.
- What breaks when someone is on holiday? Key-person dependencies reveal processes that need encoding in software.
- Which decisions require checking three different places? Scattered data means slow decisions and inconsistent outcomes.
- What workaround have you built that you are slightly embarrassed about? The most honest answers come from this question.
What gets produced
Discovery produces four concrete deliverables. Each one forces decisions early, when they are cheap to change.
Discovery also surfaces whether process mapping is needed before development begins. If the business process itself is not yet stable, building software to encode it is premature.
Failure mode: "The person who actually does the work was not in the room." The managing director describes the invoicing process. Three weeks into the build, the accounts administrator sees the system and says, "That is not how we do it." Discovery interviews must include the people who perform the work, not just the people who manage it. If we cannot speak to end users during discovery, the specification will contain assumptions that cost weeks to correct later.
Phase 2: Build (6 to 16 weeks)
The build phase is where working software emerges. Not all at once. In two-week cycles, each producing a demonstrable increment.
How a sprint works
Each two-week sprint follows a consistent structure. The predictability is the point: both sides know what to expect and when.
Days 1 to 2: Sprint planning. The development team reviews the next batch of work from the backlog, clarifies ambiguities, and commits to what will be built in this cycle.
Days 3 to 9: Development. Code is written, tested, and deployed to a staging environment. The staging environment mirrors production: same database engine, same server configuration, same authentication. If it works on staging, it will work in production.
Day 10: Sprint demo and feedback. The client sees working software, clicks through it, and provides feedback. This feedback shapes the next sprint. Not a slide deck. Not a progress report. Working software that you can use.
How feedback works
Feedback is collected during the sprint demo, but it is also welcome at any point. The staging environment is available throughout the sprint. Clients can log in, test features, and raise questions.
Feedback falls into three categories, each handled differently:
Corrections
"This field should be a dropdown, not free text." Handled in the current or next sprint.
Refinements
"Can we add a filter to this report?" Added to the backlog, prioritised against other work.
Scope changes
"We need a whole new module for stock management." Discussed, scoped, and either added to the timeline (with budget adjustment) or deferred to a later phase.
The key principle: no feedback is wasted, and no feedback is free if it expands scope. Both sides know where they stand.
How scope changes are handled
Requirements change. This is normal. A business owner sees working software and realises they need something different from what they originally described. An external factor shifts priorities. A regulation changes.
In an iterative process, scope changes are managed, not prevented:
- Small changes (clarifications, UI adjustments) are absorbed into the current sprint.
- Medium changes (new features, workflow modifications) are added to the backlog and prioritised in the next sprint planning session.
- Large changes (new modules, significant architectural shifts) trigger a mini-discovery session. We scope the change, price it, and agree on timeline impact before any work begins.
The sprint structure means scope changes are never invisible. They surface within two weeks, not at the end of the project.
Failure mode: "Feedback delayed three weeks. The team built the wrong thing." Sprint demos are non-negotiable calendar commitments. They take 30 to 60 minutes. If the primary contact is unavailable, a delegate attends. The rule is simple: if we cannot show you working software every two weeks, we pause development until we can. Building without feedback is worse than not building at all.
Phase 3: Launch
Launch is not a single event. It is a sequence of controlled steps, each with its own verification and rollback procedure.
Zero-downtime deployment
Production deployments use zero-downtime techniques. The new version of the application is deployed alongside the existing version. Traffic is switched once automated smoke tests confirm the new version is healthy. If anything fails, traffic reverts to the previous version within seconds.
This means launches happen during business hours. No weekend deployments. No 2am maintenance windows. No crossing fingers.
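The switch-and-rollback logic above can be reduced to a simple control flow. The sketch below is illustrative only, not our deployment tooling; the `Deployment` class and `deploy` function are invented names standing in for whatever the infrastructure actually provides:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Deployment:
    """Tracks which of two application versions is receiving traffic."""
    live_version: str

def deploy(
    deployment: Deployment,
    new_version: str,
    smoke_tests: list[Callable[[str], bool]],
) -> Deployment:
    """Blue-green style switch: the new version only receives traffic
    once every smoke test passes against it."""
    if all(test(new_version) for test in smoke_tests):
        deployment.live_version = new_version  # instant traffic switch
    # On any failure, live_version is untouched: rollback is a no-op
    # because the old version never stopped serving traffic.
    return deployment

# Example: a failing health check leaves v1 serving traffic.
checks = [lambda v: True, lambda v: v != "v2-broken"]
d = Deployment(live_version="v1")
deploy(d, "v2-broken", checks)
print(d.live_version)  # → v1
```

The key property is that the old version keeps running until the new one has proven itself, which is why a failed deployment costs seconds rather than an outage.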
Data migration
If the new system replaces an existing system (spreadsheets, an Access database, legacy software), data migration is a project within the project. It follows its own process:
Schema mapping. Every field in the old system is mapped to its equivalent in the new system. Fields that do not map cleanly are flagged for manual review.
Validation rules. Automated checks verify data integrity after migration: record counts match, totals reconcile, relationships are intact.
Parallel running. For critical systems, both old and new systems run simultaneously for one to two weeks. Staff enter data in both systems. Discrepancies are investigated and resolved before the old system is retired.
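The validation step above can be illustrated with a minimal sketch. This is a hedged example, not our actual migration tooling; `validate_migration` and the row format are assumptions for illustration:

```python
from decimal import Decimal

def validate_migration(old_rows: list[dict], new_rows: list[dict]) -> list[str]:
    """Post-migration integrity checks: record counts match, monetary
    totals reconcile, and every legacy ID survived the move."""
    problems = []
    if len(old_rows) != len(new_rows):
        problems.append(f"record count mismatch: {len(old_rows)} -> {len(new_rows)}")
    # Decimal avoids float rounding when reconciling monetary totals.
    old_total = sum(Decimal(r["amount"]) for r in old_rows)
    new_total = sum(Decimal(r["amount"]) for r in new_rows)
    if old_total != new_total:
        problems.append(f"totals do not reconcile: {old_total} vs {new_total}")
    missing = {r["id"] for r in old_rows} - {r["id"] for r in new_rows}
    if missing:
        problems.append(f"missing records: {sorted(missing)}")
    return problems

old = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.50"}]
new = [{"id": 1, "amount": "10.00"}]
print(validate_migration(old, new))  # reports count, total, and missing-ID problems
```

Checks like these run after every rehearsal migration, so the final cutover is a repeat of something that has already passed cleanly.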
User training
Training happens before launch, not after. Users practise with real (migrated) data on the staging environment. Training covers the daily workflows they will use, not a tour of every feature. Questions and confusion during training often surface final refinements.
Failure mode: "Big-bang cutover on a Friday. Data migration had edge cases." The old system is switched off. The new system goes live. Monday morning: 200 customer records are missing. Parallel running eliminates this risk. Both systems operate simultaneously. Data migration is run multiple times in testing before the final cutover. Edge cases (null values, special characters, duplicate records, legacy encoding) are identified and handled in advance. The old system is only retired once the new system has proven itself over days, not hours.
Phase 4: Support and evolution
Launch is the beginning, not the end. A custom software system in production needs ongoing attention: monitoring, maintenance, and continued development.
Monitoring
Every production system we build includes monitoring from day one. Problems are caught before users report them.
Error tracking (Sentry)
Exceptions captured in real time with full stack traces, user context, and environment data. The development team is alerted immediately, not when a user reports a problem.
Performance monitoring
Slow queries, memory usage, and response times tracked continuously. Degradation is caught before it affects users.
Queue health
Background jobs (data imports, email sends, report generation) monitored through Laravel Horizon. Failed jobs retried automatically. Persistent failures trigger alerts.
Uptime monitoring
External checks verify the application is responding. Downtime triggers immediate notification.
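The retry-then-alert behaviour described under queue health is something Laravel Horizon provides out of the box; the underlying idea can be sketched in a few lines. `run_with_retries` below is an illustrative stand-in, not Horizon's API:

```python
def run_with_retries(job, max_attempts=3, alert=print):
    """Run a background job, retrying on failure. After max_attempts
    consecutive failures, escalate to a human via alert()."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            last_error = exc  # remember the failure, try again
    alert(f"job failed after {max_attempts} attempts: {last_error}")
    return None

# Example: a job that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "done"

print(run_with_retries(flaky))  # → done
```

Transient failures (a dropped connection, a rate limit) resolve themselves silently; only persistent failures interrupt a human.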
Ongoing maintenance
Production systems need regular maintenance. This is not optional; it is the cost of running software that works reliably.
Feature development
Most client relationships extend well beyond the initial launch. The system grows as the business grows. New reports. New integrations. New user roles. Additional workflows.
Post-launch feature development follows the same sprint structure as the initial build. The difference is that priorities shift. During the build, the specification drives priorities. Post-launch, usage data and business needs drive priorities. The features that seemed critical during discovery sometimes turn out to be less important than features nobody anticipated.
Some of our client relationships span over 15 years. The system we launched in year one bears little resemblance to the system running today. That is not a failure of the original design. It is proof that the architecture was flexible enough to evolve.
Failure mode: "No monitoring. The first they heard of the outage was from a customer." Monitoring is not optional, and it is configured before launch, not after the first incident. Automated alerts for errors, performance degradation, disk usage, and queue backlogs mean the development team knows about problems before users do.
How to know if the process is working
A well-run custom software development process produces observable signs at each phase: a specification you recognise after discovery, working software every two weeks during the build, a staging environment you can log into at any time, budgets tracked per sprint, and monitoring in place before launch. If you are not seeing these, the process is not working.
Good methodology is not invisible. It produces visible, measurable confidence at every stage.
Starting the process
The custom software development process begins with a conversation. Not a pitch. Not a requirements document. A conversation about what your business does, where it gets stuck, and whether custom software is the right answer.
Sometimes it is not. Sometimes a well-chosen SaaS product or a better-configured spreadsheet is the right move. We will tell you that honestly. You can read more about how we work and what to expect.
If custom software is the right path, discovery is the first step. It is low-risk, time-boxed, and produces a specification you own regardless of what happens next. Everything described on this page, from sprint demos to parallel running to production monitoring, flows from that first conversation.
For a broader view of what we build with this process, see our custom web application development page.
Start with a discovery call
The first conversation is free and comes with no obligation. We will walk through your situation and tell you honestly whether custom software is the right approach.
Book a discovery call →