Custom Software Development Process

The custom software development process: from discovery to production

Most custom software projects that fail do not fail because of technology. They fail because of process. Unclear requirements that nobody challenged. Feedback loops that ran quarterly instead of fortnightly. A big-bang launch on a Friday afternoon with no rollback plan.

The custom software development process determines whether you end up with a system that fits your business or an expensive artefact that nobody uses. This page walks through how a project actually runs, from the first conversation to years of production use, with concrete deliverables, timelines, and decision points at each phase.

If you are evaluating how custom software is built, this is the practitioner's version. No methodology diagrams. No acronym soup. Just what happens, what gets produced, and where things go wrong.


Why process matters more than technology

A competent development team can build almost anything. Laravel, PostgreSQL, React, Vue: the technology choices matter, but they are rarely the reason a project succeeds or fails. Process failures are.

Three patterns account for most custom software project failures:

• Unclear requirements that nobody tested against reality. A 60-page specification written by someone who does not do the daily work. The team builds exactly what was specified. It turns out the specification was wrong.
• No feedback loops during the build. The client sees the finished product for the first time after 12 weeks of development. Half the screens need reworking because assumptions made in week two were never validated.
• Big-bang launches with no safety net. The old system is switched off on Friday. The new system goes live on Monday. Data migration has edge cases nobody anticipated. There is no way back.

Each of these is a process failure, not a technology failure. The software development lifecycle (SDLC) we follow exists specifically to prevent them. It draws on Agile principles, specifically Scrum's sprint cadence (with elements of Kanban for backlog flow), but adapted for the reality of small-to-medium businesses commissioning their first bespoke system.


The naive approach versus what actually works

Most businesses commissioning custom software for the first time expect a linear process: gather all requirements, build everything, then launch. This is the waterfall model, and it sounds logical. In practice, it concentrates risk at the worst possible moment. The difference between waterfall and iterative (Agile) delivery is not ideology. It is practical risk management.

The comparison below shows why iterative delivery outperforms waterfall across every dimension that matters to the business owner.

Dimension | Waterfall / big-bang | Iterative delivery
--------- | -------------------- | ------------------
Requirements | Single phase upfront, then locked | Initial discovery, then refined every sprint based on working software
Feedback frequency | End of project (or never) | Every two weeks, against working software
Launch risk | All-or-nothing; one chance to get it right | Incremental releases; each deployment is small and reversible
Budget control | Overruns discovered late; scope creep invisible until invoice | Budget tracked per sprint; overruns visible within two weeks
Change handling | Change requests are expensive and adversarial | Changes are expected; scope is reprioritised, not frozen
Time to first value | Months (or longer) | Working software within 4 weeks

The rest of this page describes the iterative model we use: four phases, each with specific outputs, decision points, and failure modes.


Phase 1: Discovery (1 to 2 weeks)

Requirements gathering starts here. Discovery is the phase where we learn how your business actually operates. Not how you think it operates. Not how the org chart says it should operate. How work actually flows through your teams, where it gets stuck, and where manual effort is hiding.

We run structured interviews with the people who do the work. Not just the project sponsor or managing director (though they are in the room too). The person processing orders. The person chasing late invoices. The person who maintains the spreadsheet that holds everything together.

What we ask

The questions are specific and operational. They are designed to surface the real requirements, which rarely match the initial brief.

  • What does your Monday morning look like? Which systems do you open first?
  • Where do you re-enter data that already exists somewhere else? Duplicate data entry is often the first sign of a missing integration.
  • What breaks when someone is on holiday? Key-person dependencies reveal processes that need encoding in software.
  • Which decisions require checking three different places? Scattered data means slow decisions and inconsistent outcomes.
  • What workaround have you built that you are slightly embarrassed about? The most honest answers come from this question.

What gets produced

Discovery produces four concrete deliverables. Each one forces decisions early, when they are cheap to change. For a deeper exploration of what this phase involves, see our dedicated page on software discovery.

• Specification document. A focused document describing what the system does, who uses it, and which business rules it encodes. Typically 10 to 20 pages with diagrams.
• Architecture diagram. How the system fits together: database, application layer, external integrations, user roles.
• Data model sketch. Database tables, relationships, and key fields. Forces early decisions about data structure that are expensive to change later.
• Timeline and budget. A sprint-by-sprint breakdown of what gets built and when. Fixed pricing for defined scope, with clear mechanisms for handling changes.

Discovery also surfaces whether process mapping is needed before development begins. If the business process itself is not yet stable, building software to encode it is premature.

Failure mode: "The person who actually does the work was not in the room." The managing director describes the invoicing process. Three weeks into the build, the accounts administrator sees the system and says, "That is not how we do it." Discovery interviews must include the people who perform the work, not just the people who manage it. If we cannot speak to end users during discovery, the specification will contain assumptions that cost weeks to correct later.


Phase 2: Build (6 to 16 weeks)

The build phase is where working software emerges. Not all at once. In two-week cycles, each producing a demonstrable increment.

How a sprint works

Each two-week sprint follows a consistent structure. The predictability is the point: both sides know what to expect and when.

1. Days 1 to 2: Sprint planning. The development team reviews the next batch of work from the backlog, clarifies ambiguities, and commits to what will be built in this cycle. Work items are expressed as user stories with acceptance criteria: concrete descriptions of what the feature does, written in terms the client can verify.

2. Days 3 to 9: Development. Code is written, covered by automated tests (PHPUnit for back-end logic, browser tests for user-facing workflows), and deployed to a staging environment via continuous integration. Every code push triggers the full test suite through CI/CD pipelines on GitHub before reaching staging. The staging environment mirrors production: same database engine, same server configuration, same authentication. If it works on staging, it will work in production.

3. Day 10: Sprint demo and feedback. The client sees working software on the staging environment, clicks through it, and provides feedback. This is user acceptance testing (UAT) in practice: you verify the software against your actual workflows, not a test script someone else wrote. This feedback shapes the next sprint. Not a slide deck. Not a progress report. Working software that you can use.
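As a concrete illustration, a push-triggered pipeline of the kind described above might be expressed as a GitHub Actions workflow along these lines. This is a hedged sketch, not our actual configuration: the workflow name, PHP version, and staging-deploy step are assumptions.

```yaml
# .github/workflows/ci.yml — illustrative sketch only
name: ci
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
      - run: composer install --no-interaction --prefer-dist
      - run: php artisan test          # full PHPUnit suite on every push
  deploy-staging:
    needs: test                        # runs only if every test passed
    runs-on: ubuntu-latest
    steps:
      - run: echo "trigger staging deploy (e.g. a hosting deployment hook)"
```

The `needs: test` dependency is the point: code that fails the suite never reaches staging, so the demo environment always reflects passing builds.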

How feedback works

Feedback is collected during the sprint demo, but it is also welcome at any point. The staging environment is available throughout the sprint. Clients can log in, test features, and raise questions.

Feedback falls into three categories, each handled differently:

• Corrections. "This field should be a dropdown, not free text." Handled in the current or next sprint.
• Refinements. "Can we add a filter to this report?" Added to the backlog, prioritised against other work.
• Scope changes. "We need a whole new module for stock management." Discussed, scoped, and either added to the timeline (with budget adjustment) or deferred to a later phase.

The key principle: no feedback is wasted, and no feedback is free if it expands scope. Both sides know where they stand.

Your role during the build

The client is not a spectator. Your active involvement is what makes iterative delivery work. Specifically, this means:

  • Designate a primary contact. One person who can make day-to-day decisions without convening a committee. They do not need to be technical, but they need authority to approve direction.
  • Attend sprint demos. Every two weeks, 30 to 60 minutes. Non-negotiable. If you cannot attend, send a delegate who can make decisions.
  • Test on staging between demos. The staging environment is available throughout the sprint. Log in, try workflows, flag anything that feels wrong. Earlier feedback is cheaper feedback.
  • Prioritise the backlog. When there is more work than fits in a sprint, you decide what comes first. We advise on technical dependencies and effort, but business priority is yours to set.

How scope changes are handled

Requirements change. This is normal. A business owner sees working software and realises they need something different from what they originally described. An external factor shifts priorities. A regulation changes.

In an iterative process, scope changes are managed, not prevented. We use MoSCoW prioritisation (must-have, should-have, could-have, won't-have-yet) to keep scope decisions concrete rather than political:

  • Small changes (clarifications, UI adjustments) are absorbed into the current sprint.
  • Medium changes (new features, workflow modifications) are added to the backlog and prioritised in the next sprint planning session. Something else moves down to make room.
  • Large changes (new modules, significant architectural shifts) trigger a mini-discovery session. We scope the change, price it, and agree on timeline impact before any work begins.

The sprint structure means scope changes are never invisible. They surface within two weeks, not at the end of the project.

Failure mode: "Feedback delayed three weeks. The team built the wrong thing." Sprint demos are non-negotiable calendar commitments. They take 30 to 60 minutes. If the primary contact is unavailable, a delegate attends. The rule is simple: if we cannot show you working software every two weeks, we pause development until we can. Building without feedback is worse than not building at all.


Phase 3: Launch

Launch is not a single event. It is a sequence of controlled steps, each with its own verification and rollback procedure.

Zero-downtime deployment

Production deployments use zero-downtime techniques. With blue-green deployment, the new version of the application is deployed alongside the existing version on production infrastructure managed through Laravel Forge on DigitalOcean (or equivalent hosting). Traffic is switched only once automated smoke tests confirm the new version is healthy. If anything fails, traffic reverts to the previous version within seconds. This is the same continuous deployment pipeline used throughout the build phase, now pointed at the production environment.

This means launches happen during business hours. No weekend deployments. No 2am maintenance windows. No crossing fingers.
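Mechanically, the prepare-verify-flip pattern can be reduced to a few lines. The sketch below uses a "releases" directory and a repointed `current` symlink (the pattern used by common Laravel deployment tools); the paths, version markers, and smoke test are all invented for illustration, and `ln -sfn` assumes GNU coreutils. Real blue-green setups flip a load balancer or web-server upstream, but the principle is identical.

```shell
#!/bin/sh
# Sketch of an atomic release switch: prepare, verify, then flip in one step.
set -eu
APP=$(mktemp -d)                              # stand-in for /var/www/app
mkdir -p "$APP/releases/v1" "$APP/releases/v2"
echo "old" > "$APP/releases/v1/version"
echo "new" > "$APP/releases/v2/version"
ln -sfn "$APP/releases/v1" "$APP/current"     # v1 is currently live

# "Smoke test" the new release before it takes any traffic
grep -q "new" "$APP/releases/v2/version" || { echo "smoke test failed"; exit 1; }

# Atomic switch: one symlink update, no downtime
ln -sfn "$APP/releases/v2" "$APP/current"
cat "$APP/current/version"                    # prints: new

# Rollback is the same operation, pointed back at the previous release
ln -sfn "$APP/releases/v1" "$APP/current"
cat "$APP/current/version"                    # prints: old
```

Because the switch is a single filesystem operation, there is no window in which users see a half-deployed application, and rollback is as fast as the deploy.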

Data migration

If the new system replaces an existing system (spreadsheets, an Access database, legacy software), data migration is a project within the project. It is consistently the most underestimated phase. Businesses that have outgrown spreadsheet-based processes often discover their data has accumulated years of inconsistencies: duplicate records, missing fields, encoding mismatches, and relationships that only existed in someone's head. Data migration follows its own process:

1. Schema mapping. Every field in the old system is mapped to its equivalent in the new system using formal database migration scripts. Fields that do not map cleanly are flagged for manual review.

2. Validation rules. Automated checks verify data integrity after migration: record counts match, totals reconcile, relationships are intact.

3. Parallel running. For critical systems, both old and new systems run simultaneously for one to two weeks. Staff enter data in both systems. Discrepancies are investigated and resolved before the old system is retired.
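The validation step is mechanically simple, which is exactly why it should be automated and re-run on every migration attempt. A minimal sketch, reconciling a migrated export against a legacy export; the file names, fields, and figures are invented for illustration, and a real check would query the old and new databases directly:

```shell
#!/bin/sh
# Sketch: reconcile a migrated data export against the legacy export.
set -eu
DIR=$(mktemp -d)
printf 'id,total\n1,100\n2,250\n3,75\n' > "$DIR/legacy.csv"
printf 'id,total\n1,100\n2,250\n3,75\n' > "$DIR/migrated.csv"

# Record counts match (skip the header row)
old_count=$(tail -n +2 "$DIR/legacy.csv"   | wc -l | tr -d ' ')
new_count=$(tail -n +2 "$DIR/migrated.csv" | wc -l | tr -d ' ')
[ "$old_count" -eq "$new_count" ] || { echo "count mismatch"; exit 1; }

# Totals reconcile
old_sum=$(tail -n +2 "$DIR/legacy.csv"   | awk -F, '{s+=$2} END {print s}')
new_sum=$(tail -n +2 "$DIR/migrated.csv" | awk -F, '{s+=$2} END {print s}')
[ "$old_sum" = "$new_sum" ] || { echo "total mismatch"; exit 1; }

echo "migration checks passed"
```

Checks like these run after every trial migration, so edge cases surface in testing rather than on cutover day.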

User training

Training happens before launch, not after. Users practise with real (migrated) data on the staging environment. Training covers the daily workflows they will use, not a tour of every feature. We train role by role: the warehouse team learns their screens, the accounts team learns theirs. Nobody sits through a two-hour demo of features they will never touch.

Questions and confusion during training often surface final refinements. This is expected and budgeted for. If a user cannot complete their core workflow without asking for help, the interface design needs another pass, not a thicker training manual.

Failure mode: "Big-bang cutover on a Friday. Data migration had edge cases." The old system is switched off. The new system goes live. Monday morning: 200 customer records are missing. Parallel running eliminates this risk. Both systems operate simultaneously. Data migration is run multiple times in testing before the final cutover. Edge cases (null values, special characters, duplicate records, legacy encoding) are identified and handled in advance. The old system is only retired once the new system has proven itself over days, not hours.


Phase 4: Support and evolution

Launch is the beginning, not the end. A custom software system in production needs ongoing attention: monitoring, maintenance, and continued development. For a deeper look at what this involves and what it costs, see our page on software maintenance.

Monitoring

Every production system we build includes monitoring from day one. Problems are caught before users report them.

• Error tracking (Sentry). Exceptions captured in real time with full stack traces, user context, and environment data. The development team is alerted immediately, not when a user reports a problem.
• Performance monitoring. Slow queries, memory usage, and response times tracked continuously. Degradation is caught before it affects users.
• Queue health. Background jobs (data imports, email sends, report generation) monitored through Laravel Horizon. Failed jobs retried automatically. Persistent failures trigger alerts.
• Uptime monitoring. External checks verify the application is responding. Downtime triggers immediate notification.

Ongoing maintenance

Production systems need regular maintenance. This is not optional; it is the cost of running software that works reliably. Neglecting maintenance accumulates technical debt: shortcuts and deferred work that compound over time until the system becomes brittle, expensive to change, and eventually requires a costly rescue.

• Security patches. Framework and dependency updates applied promptly when vulnerabilities are disclosed. Laravel releases security updates regularly; staying current is non-negotiable.
• Database maintenance. Index optimisation, query performance reviews, storage management. The data model that served 1,000 records may need tuning at 100,000.
• Backup verification. Automated backups are tested regularly. A backup you have never restored is not a backup.
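The restore drill, reduced to its simplest form, is: take the backup, restore it somewhere disposable, and reconcile the result against the source. The sketch below uses an invented file in place of a database; a real drill restores a database dump into a scratch database and compares row counts and totals.

```shell
#!/bin/sh
# Sketch of a backup-restore drill: back up, restore, verify byte-for-byte.
set -eu
DIR=$(mktemp -d)
echo "orders,42" > "$DIR/live.csv"               # stand-in for live data
gzip -c "$DIR/live.csv" > "$DIR/backup.gz"       # the "nightly backup"
gunzip -c "$DIR/backup.gz" > "$DIR/restored.csv" # restore to a scratch location
cmp "$DIR/live.csv" "$DIR/restored.csv" && echo "restore verified"
```

The drill is scripted so it can run on a schedule; a backup that fails to restore triggers the same alerts as a production error.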

Feature development

Most client relationships extend well beyond the initial launch. The system grows as the business grows. New reports. New API integrations with third-party services. New user roles. Additional workflow automation. Real-time dashboards that were not in the original scope but become obvious once data starts flowing.

Post-launch feature development follows the same sprint structure as the initial build. The difference is that priorities shift. During the build, the specification drives priorities. Post-launch, usage data and business needs drive priorities. Sprint velocity (how much the team delivers per cycle) stabilises over time, making future work increasingly predictable to estimate. The features that seemed critical during discovery sometimes turn out to be less important than features nobody anticipated.

Some of our client relationships span over 15 years. The system we launched in year one bears little resemblance to the system running today. That is not a failure of the original design. It is proof that the architecture was flexible enough to evolve.

Failure mode: "No monitoring. The first they heard about the outage was from a customer." Monitoring is not optional, and it is configured before launch, not after the first incident. Automated alerts for errors, performance degradation, disk usage, and queue backlogs mean the development team knows about problems before users do.


How to know if the process is working

A well-run custom software development process has observable signs at each phase. If you are not seeing these, the process is not working.

• Discovery: You see your business described back to you accurately. The specification matches reality, not aspiration.
• Build: You see working software every two weeks. Each demo is more complete than the last. Your feedback is visibly incorporated.
• Launch: The transition is calm. Users know what to expect. There is a clear rollback plan that nobody needs to use.
• Support: Problems are reported to you as resolved, not as discovered. The system improves steadily without drama.

Good methodology is not invisible. It produces visible, measurable confidence at every stage.


Starting the process

The custom software development process begins with a conversation. Not a pitch. Not a requirements document. A conversation about what your business does, where it gets stuck, and whether custom software is the right answer.

Sometimes it is not. Sometimes a well-chosen SaaS product or a better-configured spreadsheet is the right move. We will tell you that honestly. If you are still weighing up the decision, our build vs buy analysis can help clarify the trade-offs. You can also read more about how we work and what to expect.

If custom software is the right path, discovery is the first step. It is low-risk and time-boxed, often scoped to deliver a minimum viable product (MVP) definition, and produces a specification you own regardless of what happens next. For context on what this investment typically looks like, see our custom software costs breakdown. Everything described on this page, from sprint demos to parallel running to production monitoring, flows from that first conversation.

For a broader view of what we build with this process, see our custom web application development page.

Start with a discovery call

The first conversation is free and comes with no obligation. We will walk through your situation and tell you honestly whether custom software is the right approach.

Start with a conversation →