A CTO's Guide to Migrating Legacy Banking Cores to Microservices

Banking core migrations fail more often than they succeed. The path that does work is incremental, evidence-driven, and treats the monolith with respect.

[Figure: core banking modernization architecture diagram]

Every banking CTO inherits the same problem: a system that runs the business, was last modernised before they joined, and cannot move at the speed the market now demands. Replacing it sounds heroic; doing it badly is career-ending.

Here is the playbook we use with global banks and challenger banks alike when the monolith must come down without taking the bank with it.

1. Why big-bang core replacements keep failing

The pattern is depressingly consistent: a multi-year programme, a new vendor core, parallel-run targets that slip, regulator concern, sunk-cost momentum, and an eventual write-down. Big-bang replacements fail because they ask the business to accept extended risk windows for benefits that are abstract until cut-over — and cut-over rarely happens cleanly.

2. Why the strangler fig pattern still wins

Martin Fowler's strangler fig pattern keeps earning its keep because it lets a bank ship value every quarter while the monolith shrinks. New capabilities are built as services in front of the legacy core; existing capabilities are extracted one domain at a time. The legacy core remains the source of truth until each domain is unambiguously safer in its new home.
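The routing decision at the heart of the pattern is small enough to sketch. The domain names and migration table below are illustrative assumptions, not a real bank's estate:

```python
# Minimal sketch of a strangler-fig routing facade: traffic for
# extracted domains goes to the new platform; everything else still
# hits the legacy core, which remains the source of truth.

LEGACY = "legacy-core"
NEW = "new-platform"

# Bounded contexts extracted so far (illustrative).
MIGRATED_DOMAINS = {"payments", "notifications"}

def route(domain: str) -> str:
    """Return the system that owns this domain today."""
    return NEW if domain in MIGRATED_DOMAINS else LEGACY

assert route("payments") == NEW   # extracted: served by the new platform
assert route("ledger") == LEGACY  # not yet extracted: stays on the core
```

In practice this facade lives in an API gateway or edge service, and the migration table is configuration rather than code, so a domain can be cut over (or rolled back) without a deploy.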

3. Choose domains before you choose technologies

Most architecture mistakes happen when the team picks Kafka, Kubernetes, and a new programming language before they have agreed on bounded contexts. Domain-Driven Design's strategic patterns — context maps, ubiquitous language, anti-corruption layers — pay for themselves many times over in a banking migration because they prevent the new estate from inheriting the old estate's coupling.

  • Map the bank's business capabilities to bounded contexts before any service split.
  • Document the legacy data model alongside the target ISO 20022 representation.
  • Identify the anti-corruption layer for every integration so the new model is never polluted by legacy quirks.
  • Only then choose persistence technology, messaging, and runtime.
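To make the anti-corruption layer concrete, here is a minimal sketch. The legacy field names, the minor-unit balance, and the numeric currency codes are hypothetical stand-ins for the kind of quirks a real core exposes:

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical legacy record: flat, stringly typed, balance in pence,
# currency as an ISO 4217 numeric code.
legacy_row = {"ACCT_NO": "00123456", "CCY": "826", "BAL_PENCE": "1050000"}

ISO_4217_NUMERIC = {"826": "GBP", "840": "USD"}  # subset, for illustration

@dataclass(frozen=True)
class AccountBalance:
    account_id: str
    currency: str    # ISO 4217 alpha code
    amount: Decimal  # major units

def from_legacy(row: dict) -> AccountBalance:
    """Anti-corruption layer: the new domain model never sees legacy
    field names, numeric currency codes, or minor-unit integers."""
    return AccountBalance(
        account_id=row["ACCT_NO"].lstrip("0"),
        currency=ISO_4217_NUMERIC[row["CCY"]],
        amount=Decimal(row["BAL_PENCE"]) / 100,
    )
```

The point of the layer is one-way protection: legacy quirks are absorbed at the boundary, so every service behind it works only with the clean model.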

4. Beating data gravity without freezing the business

Data is the gravitational centre of a banking core. Successful migrations decouple writes from reads early, use change-data-capture to feed a streaming pipeline, and migrate ownership of writes only when both the new system and the operational runbook are demonstrably ready. We never migrate ownership on a Friday, never migrate ownership without rollback rehearsed, and never migrate ownership during a regulatory reporting window.
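Those cut-over guardrails are simple enough to encode as a pre-flight check. This is a sketch of the rules stated above, not a real deployment gate; the parameters are assumptions:

```python
from datetime import date

def may_cut_over(day: date,
                 rollback_rehearsed: bool,
                 in_reporting_window: bool) -> bool:
    """Pre-flight check for migrating write ownership of a domain:
    never on a Friday, never without a rehearsed rollback, never
    during a regulatory reporting window."""
    if day.weekday() == 4:        # Friday
        return False
    if not rollback_rehearsed:
        return False
    if in_reporting_window:
        return False
    return True

assert may_cut_over(date(2024, 1, 8), True, False)        # Monday, all clear
assert not may_cut_over(date(2024, 1, 12), True, False)   # Friday: blocked
assert not may_cut_over(date(2024, 1, 8), False, False)   # no rollback rehearsal
```

Making the rules executable keeps them from living only in a runbook that nobody reads at 2 a.m.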

The takeaway

Core banking migrations are won by the team that respects the monolith enough to learn from it before they replace it. Incremental, evidence-driven extraction is unsexy — and it ships.

Frequently asked questions

Should we adopt a vendor core or build in-house?
Vendor cores accelerate non-differentiated capabilities (general ledger, regulatory reporting). Build in-house when the capability is competitively differentiating and you can sustain the engineering investment.
How do we communicate progress when the monolith is still alive?
Measure share of customer journeys served by the new platform. It is a more honest indicator of progress than 'percentage of services migrated'.
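One way to compute that indicator, sketched under the assumption that each journey is tracked as a set of steps and counts only when every step has left the legacy core:

```python
def journey_share(journeys: dict[str, dict[str, bool]]) -> float:
    """Share of customer journeys fully served by the new platform.
    A journey counts only when all of its steps have been migrated."""
    fully_migrated = sum(
        1 for steps in journeys.values() if all(steps.values())
    )
    return fully_migrated / len(journeys)

# Illustrative data: two journeys, one fully migrated.
journeys = {
    "open-account": {"kyc": True, "funding": True},
    "make-payment": {"initiate": True, "settle": False},
}
assert journey_share(journeys) == 0.5
```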