Legacy Migration

Moving Old Systems to Modern Platforms

Every legacy system that still runs in production earned its place there. It survived because it does something the business depends on, usually in ways that nobody fully documents and everybody takes for granted. The Access database tracking 15 years of customer orders. The PHP 4 application handling job scheduling. The Excel workbook that somehow runs payroll.

Legacy code migration is the work of replacing those systems without breaking the business processes embedded in them. It sounds like a technology project. It is actually an archaeology project: excavating business logic from code that was written before anyone thought to explain why it works the way it does. Every brownfield development effort shares this trait, but legacy migration intensifies it because the original constraints have long since been forgotten.

Over the years, we have migrated systems built on Access, Excel, PHP 4, classic ASP, and .NET into modern Laravel applications. The patterns in this guide come from that work, including the failures that taught us more than the successes.


Why Most Legacy Code Migrations Fail

The most common mistake in legacy system migration is underestimating what the old system actually does. A system that looks simple from the outside (a few forms, a database, some reports) typically contains years of accumulated business logic that nobody remembers implementing.

Three factors make migration harder than it appears.

Undocumented business rules

The original developer added a rule that rejects orders below a certain value on Tuesdays. Nobody remembers why, but removing it breaks something downstream. These rules live in application code, database triggers, stored procedures, and sometimes in the gap between what the system does and what the documentation says it does.

Data quality decay

Legacy databases accumulate inconsistencies over years. Nullable fields that should be required. Duplicate records with slightly different spellings. Date formats that changed partway through 2012 when someone updated the input form. This is the technical debt that compounds silently in every long-lived system. A migration plan that ignores data quality will import these problems into the new system.

Integration dependencies nobody mapped

The legacy system sends a nightly CSV to the accounting package. It exposes an undocumented API that the warehouse team built a script against. It writes to a shared folder that three other processes read from. Each of these is a thread that, if pulled, unravels something.

The naive approach is to rebuild the system from requirements documents. This fails because the requirements documents, if they exist, describe what the system was supposed to do five years ago, not what it actually does today. The gap between specification and reality is where migrations die.

The feature parity trap: A related mistake is attempting feature-for-feature replication of the legacy system. The goal of migration is to replicate business outcomes, not screens. Many legacy features exist because of old constraints that no longer apply. A batch import that runs overnight because the original server could not handle it during business hours. A three-step approval flow added to work around a bug in 2014. During discovery, separate genuine business rules from historical workarounds. Migrate the rules. Retire the workarounds.


Big Bang Versus Incremental Migration

There are two fundamental approaches to legacy code migration. The first, big bang migration, replaces the old system in a single cutover. The second, incremental migration, replaces the system piece by piece over weeks or months.

Why big bang usually fails

Big bang migration has an appealing simplicity. Build the new system, migrate the data, switch over on a Friday evening, go live on Monday. In practice, this approach fails for systems of any real complexity. The testing window is too short. You discover on Sunday afternoon that the data migration missed 2,000 records because of a schema mismatch in a field you did not know existed.

The rollback trap: You cannot roll back because the old system's data is now 48 hours stale and the business has been entering new records into the new system since Saturday morning. Big bang works for simple, isolated systems. For anything else, it concentrates risk into a single weekend when you have the least capacity to deal with problems.

Incremental migration

Incremental migration replaces functionality in slices. Users might use the new system for order entry while the old system still handles reporting. Over weeks, each module transfers to the new platform until the old system has no remaining responsibilities. This approach reduces risk because each slice is small enough to test thoroughly and roll back independently. It increases complexity because you are running two systems simultaneously, which means maintaining data consistency between them. The trade-off is worth it for any system that the business cannot afford to lose for a weekend.


The Strangler Fig Pattern

The strangler fig is the most reliable pattern we use for legacy system migration. Named after the fig that grows around a host tree, gradually replacing it, the pattern works by intercepting requests to the legacy system and routing them to new code, one feature at a time.

The implementation uses an API facade that sits in front of both the old and new systems. All traffic flows through the facade. Initially, the facade routes everything to the legacy system. As new features are built and tested, the facade routes those specific requests to the new system instead.

Route-level switching

The facade decides per endpoint whether to send traffic to the old or new system. You migrate /orders/create to the new system while /orders/report still hits the old one.
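A minimal sketch of what the facade's routing table might look like, assuming a simple path-to-backend mapping (the endpoint names and backend labels are illustrative, not from a real system):

```python
# Minimal sketch of route-level switching in a strangler facade.
# Endpoint paths and backend labels are illustrative.

LEGACY = "legacy"
NEW = "new"

# The routing table is the single source of truth for migration progress.
# Flip one entry to move an endpoint to the new system; flip it back to roll back.
ROUTING_TABLE = {
    "/orders/create": NEW,     # migrated and validated
    "/orders/report": LEGACY,  # still served by the old system
}

def route(path: str) -> str:
    """Return which backend should serve this path. Unknown paths
    default to the legacy system, so nothing breaks mid-migration."""
    return ROUTING_TABLE.get(path, LEGACY)
```

The key property is that rollback is a one-line change to the table, not a redeployment of either system.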

Feature flags for gradual rollout

Before switching an entire endpoint, route 10% of traffic to the new system and compare results. This catches discrepancies that testing missed.
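One way to implement the percentage split is to bucket each user by a stable hash of their identifier, so the same user always gets the same system as the percentage ramps up. A sketch, assuming user ids are strings:

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) by hashing their id.
    The same user always lands in the same bucket, so raising the
    rollout percentage only ever adds users, never flips them back."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because the bucketing is monotonic, a user who saw the new system at 10% still sees it at 50%, which keeps their experience consistent during the ramp.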

Shared data layer

During migration, both systems need access to the same data. Options include a shared database (simplest but creates coupling), event-driven synchronisation using change data capture (more complex but cleaner), or dual-write (the facade writes to both systems).

The strangler fig works because it makes migration reversible at every step. If the new endpoint has a bug, the facade routes traffic back to the old system in seconds, not hours. Martin Fowler's original articulation of the pattern was aimed at large-scale systems, but the principle applies just as well to a 15-year-old Access database serving a team of twelve. The scale is different. The discipline is identical.

The 80% stall: A common failure mode with this pattern is letting the migration stall at 80% completion. The last 20% of functionality is always the hardest because it contains the most obscure business logic. Set a decommission deadline early and protect the budget for that final push.


Data Migration Strategies

Data migration is where legacy code migrations most commonly fail. Moving application logic is challenging; moving 15 years of accumulated data without losing records, corrupting relationships, or breaking downstream reports is harder.

The ETL pipeline

Extract, Transform, Load is the standard pattern for moving data between systems. Most guides present ETL as a single pass: extract once, transform once, load once, done. In practice, ETL for legacy data is an iterative loop. You extract, transform, load a sample, validate against the legacy system's outputs, discover edge cases, update your transformation rules, quarantine the failures, remediate, and replay. Plan for at least three full cycles before the numbers match.

Extract
Pull data from the legacy system. This is rarely as simple as a database dump. Legacy systems store data in unexpected places: configuration files, serialised blobs, file system paths that encode metadata, and application logs.
Transform
Schema mapping from legacy data structures to the new model. A legacy "customer" record might need to become three records in the new system: a contact, a company, and a billing entity. Transformation rules must handle every edge case in the source data, and the mapping document becomes the single most referenced artefact during migration.
Load
Insert transformed data into the new system. Run validation after loading to confirm record counts, check referential integrity, and verify that calculated fields produce the same results as the legacy system.
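The post-load validation step can be as simple as comparing record counts and key sets between source and target. A sketch, assuming rows are dicts keyed by an `id` field (the field name is illustrative):

```python
def validate_load(legacy_rows, new_rows, key="id"):
    """Compare record counts and per-key presence after a load.
    Returns a discrepancy report to review before sign-off;
    an empty missing/unexpected pair means the key sets match."""
    legacy_keys = {row[key] for row in legacy_rows}
    new_keys = {row[key] for row in new_rows}
    return {
        "legacy_count": len(legacy_rows),
        "new_count": len(new_rows),
        "missing_in_new": sorted(legacy_keys - new_keys),
        "unexpected_in_new": sorted(new_keys - legacy_keys),
    }
```

Matching counts alone are not enough: two missing records and two duplicates cancel out in a count but show up immediately in the key comparison.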

Handling schema mismatches

The most common data migration failures stem from schema differences that were not caught during planning.

Type changes

The legacy system stored phone numbers as integers. Leading zeros are gone. Dates stored as strings in inconsistent formats. Timestamps without timezone information.

Encoding and integrity

The legacy database used Latin-1. The new system uses UTF-8. Customer names with accented characters break. Foreign key relationships that should exist but do not, because the legacy system never enforced referential integrity.

Build a quarantine table for records that fail transformation. Do not skip them silently. Every quarantined record needs manual review before the migration is considered complete.

Migration is the one opportunity you get to fix years of data quality decay. Deduplicate customer records. Standardise address formats. Tighten nullable fields that should have been required from the start. Fill in orphaned foreign keys or archive the orphans deliberately. You will never have a better reason to clean the data than "we are moving it to a new home."
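A sketch of a transformation pass with quarantine, fixing the phone and date problems described above. The UK-style zero-padding rule and the list of candidate date formats are assumptions for illustration; real rules come out of discovery:

```python
import re
from datetime import datetime

def clean_phone(raw):
    """Phone numbers stored as integers lose their leading zeros;
    pad back to 11 digits (a UK-style rule, assumed for illustration)."""
    digits = re.sub(r"\D", "", str(raw))
    return digits.zfill(11) if digits else None

# Candidate formats for dates stored as strings (illustrative list).
DATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%d-%b-%y"]

def clean_date(raw):
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except (ValueError, TypeError):
            continue
    return None  # unparseable: signals quarantine

def transform(rows):
    """Route rows that fail any cleaning rule to a quarantine list
    for manual review rather than skipping them silently."""
    clean, quarantine = [], []
    for row in rows:
        date = clean_date(row.get("order_date"))
        phone = clean_phone(row.get("phone"))
        if date is None or phone is None:
            quarantine.append(row)
        else:
            clean.append({**row, "order_date": date, "phone": phone})
    return clean, quarantine
```

The quarantine list becomes the work queue for manual remediation; the migration is not done while it is non-empty.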

Dual-write and shadow reads

These are two distinct patterns that serve different purposes during migration. Most guides either conflate them or skip both entirely.

Dual-write is a data consistency mechanism. During the transition period, the facade writes every transaction to both the old and new databases. This keeps both systems in sync while you migrate modules incrementally. The risk is write failures: if the write to one system succeeds and the other fails, you have a data inconsistency. Handle this with a reconciliation job that runs hourly and flags mismatches. Use dual-write only during the active migration window, not as a permanent architecture. The moment you have validated the new system's data, stop writing to the old one.
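The failure-handling part of dual-write can be sketched as follows. The `write_old` and `write_new` callables stand in for whatever persistence calls the facade actually makes (an assumption for illustration); the point is that a one-sided failure is recorded, not swallowed:

```python
def dual_write(record, write_old, write_new, mismatch_log):
    """Write a record to both systems. If exactly one write fails,
    log the record id so the hourly reconciliation job can repair
    the inconsistency. write_old / write_new are injected callables."""
    ok_old = ok_new = False
    try:
        write_old(record)
        ok_old = True
    except Exception:
        pass  # in production: log the exception with context
    try:
        write_new(record)
        ok_new = True
    except Exception:
        pass
    if ok_old != ok_new:
        mismatch_log.append(record["id"])
    return ok_old, ok_new
```

This is deliberately not a distributed transaction: during a migration window, detecting and repairing the rare mismatch is usually cheaper than coordinating two-phase commits across an old and a new stack.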

Shadow reads are a verification mechanism. You query both systems with the same inputs and compare outputs, logging discrepancies without affecting users. Shadow reads answer the question "does the new system produce the same results as the old one?" Run them for long enough to cover a full business cycle: if the legacy system handles month-end closes, quarterly VAT returns, or annual renewals, your shadow reads need to cover each of those events at least once before you trust the new system.
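A shadow read can be sketched as a wrapper that always returns the legacy answer but logs any divergence from the new system. The reader callables are illustrative stand-ins for the real query paths:

```python
def shadow_read(query, read_legacy, read_new, discrepancy_log):
    """Serve the legacy result to the caller, but also run the same
    query against the new system and log any difference. Users never
    see the new system's output during the shadow period."""
    legacy_result = read_legacy(query)
    try:
        new_result = read_new(query)
        if new_result != legacy_result:
            discrepancy_log.append(
                {"query": query, "legacy": legacy_result, "new": new_result}
            )
    except Exception as exc:
        # A crash in the new path is itself a discrepancy worth logging.
        discrepancy_log.append({"query": query, "error": str(exc)})
    return legacy_result  # the legacy system remains the source of truth
```

An empty discrepancy log across a full business cycle is the evidence that lets you flip the endpoint with confidence.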


Testing and Risk Mitigation

Legacy system migration requires a testing strategy that goes beyond standard application testing. You are not just verifying that new code works; you are verifying that the new system produces identical outcomes to the old one for every scenario the business depends on.

Parallel running

Run both systems simultaneously with the same inputs and compare outputs. This catches the problems that unit tests miss: the edge case where a discount calculation rounds differently, the report that includes records the new system filters out, the nightly batch job that processes records in a different order and produces different totals.

The common advice is "run parallel for three months." That is arbitrary. The correct duration is one full business cycle. Define it operationally: a month-end close, a payroll run, scheduled invoicing, quarterly reporting. For a system that only handles monthly reconciliation, six weeks might be sufficient. For a system processing annual renewals, you need thirteen months. The cost of parallel running is real (two systems to maintain, two sets of outputs to compare), but it is always cheaper than discovering a discrepancy six months after decommissioning the old system.

Rollback plans

Every migration step needs a tested rollback plan. Not a theoretical rollback plan documented in a wiki. A tested one, rehearsed on production-equivalent data, with a measured rollback time.

The critical question: If we discover a critical issue at 3am on Tuesday, how long does it take to restore the previous state, and what data do we lose? If the answer is "we lose anything entered since the migration step," then your migration step is too large.

Feature flags and change freezes

Feature flags control which users see the new system. Start with internal users, expand to a pilot group, then roll out to everyone. During active migration, freeze changes to the legacy system. Any change to the old system during migration invalidates your testing baseline.
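The staged rollout described above can be sketched as a cohort check. The user fields (`is_internal`, `in_pilot`) and stage names are illustrative assumptions:

```python
# Rollout stages, in order: internal staff, then the pilot group, then all users.
ROLLOUT_STAGES = ["internal", "pilot", "everyone"]

def sees_new_system(user, stage):
    """Gate access to the new system by cohort. User dict fields
    are illustrative; map them to whatever your user model holds."""
    if stage == "everyone":
        return True
    if stage == "pilot":
        return bool(user.get("is_internal") or user.get("in_pilot"))
    if stage == "internal":
        return bool(user.get("is_internal"))
    return False
```

Each stage is a superset of the previous one, so advancing the rollout never takes the new system away from anyone who already has it.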

The real failure modes during migration are not dramatic. They are quiet: a database migration script that silently truncates a text field, a batch job that skips records with null values, an integration that stops receiving data because the API endpoint changed. Build monitoring that catches these discrepancies in hours, not weeks. Audit trails on both systems give you the evidence to trace exactly when and where a discrepancy was introduced.


When to Migrate Versus When to Wrap

Not every legacy system should be replaced. Sometimes the right decision is to leave the legacy system running and wrap it with a modern interface.

Wrap when: The core logic is correct and stable, but the interface is unusable.
Wrap when: The system integrates with hardware or protocols that the new stack cannot easily replicate.
Wrap when: The system is scheduled for decommission within 2-3 years and full migration is not worth the investment.
Be honest: Wrapping is not a long-term strategy. It defers the migration cost while adding a new layer to maintain.

An API facade pattern exposes the legacy system's functionality through a modern REST API. New applications consume the facade instead of talking to the legacy system directly. The legacy system continues running, but it is contained.

There are also options beyond migrate or wrap. The AWS 7-R framework (rehost, relocate, replatform, refactor/re-architect, repurchase, retire, retain) provides a useful lens, even at SMB scale. Two of those Rs deserve more attention than they usually get.

Repurchase (often called replace) means swapping the legacy system for an off-the-shelf product that did not exist when the original was built. A custom order management system written in 2008 might now be better served by a SaaS tool with an API. Consider the full picture when evaluating build vs buy decisions, and weigh custom vs SaaS honestly.

Retire means deliberately switching the system off and migrating its users to another existing system. If three departments each built their own tracking tool over the years, the right move might be to consolidate into one rather than migrating all three. This requires process mapping to confirm that the surviving system genuinely covers the workflows the retired ones supported.


What a well-run legacy migration looks like

Legacy migrations involve systems that most developers would rather not touch: Access databases with tens of thousands of records and no documentation, Excel workbooks with VBA macros that encode an entire pricing engine, PHP 4 applications that predate Laravel by a decade, and .NET systems running on Windows Server 2003.

Each migration follows the same structure.

1. Discovery (2-4 weeks)

We map what the legacy system actually does, not what anyone thinks it does. This code archaeology phase means reading VBA macros line by line, tracing stored procedures and database triggers, interviewing users about edge cases ("what happens at year-end?", "what about when a customer is also a supplier?"), and inventorying every scheduled task, CSV export, and shared-folder dependency. The output is a business-rule catalogue and a dependency map. If you have inherited software from a previous developer, this phase overlaps significantly with a system takeover assessment.

2. Architecture

We choose the migration pattern (usually strangler fig), design the data model for the new system, and plan the migration sequence. Migrate the features with the most pain first to build momentum.

3. Incremental build

We build and deploy in slices, with each slice tested against the legacy system's outputs. Users transition gradually. At no point does the business lose access to its data or its processes.

4. Hypercare (4-6 weeks)

After the final cutover, heightened monitoring catches post-migration issues. We set up automated reconciliation checks comparing key outputs (report totals, record counts, calculated fields) between the old and new systems. Most discrepancies surface within the first two weeks. The remainder tend to appear during the first month-end close or payroll run processed entirely on the new system.

5. Decommission

The legacy system runs in read-only mode for an agreed period (typically 3-6 months), then shuts down. Data is archived in a searchable format with a defined retention period appropriate to the business purpose (UK GDPR does not prescribe fixed retention periods, but requires that you justify how long you keep personal data). We disable scheduled jobs, disconnect integration feeds, cancel backups on the old infrastructure, and update service catalogues. Setting a firm decommission deadline early matters: without one, the old system lingers indefinitely, accumulating maintenance cost for a system that nobody is supposed to be using.

Timelines vary. A simple Access-to-web migration might take 6-8 weeks. A complex multi-system migration with data transformation and parallel running typically takes 3-6 months. We provide fixed-price quotes after discovery so there are no surprises.


Replace Your Legacy System

If you are running a system that everyone is afraid to touch, we are happy to talk it through. We will tell you honestly whether migration makes sense for your situation, or whether the smarter move is to wrap it and wait. If the system was built by another team and you need someone to pick it up, our software takeover service is where we start. For situations where you are not sure whether to migrate, wrap, or replace, a consulting session will clarify the options before committing to a path.

Discuss your legacy system →