Banking · Migration

Core migration without the downtime drama

Teleport-X · February 9, 2026 · 8 min

Every core banking migration is sold as a modernization and remembered as a survival exercise. The deadlines are regulatory. The data is decades old. Half the business logic lives in stored procedures nobody has read since a merger in 2011. And the one thing you cannot do, at any point, is lose a customer’s money or a customer’s trust.

The teams that ship these cleanly have stopped treating the cutover as an event. They treat it as the last step of a months-long dress rehearsal in which the new system has already been running quietly, in the shadows, long enough that everyone believes in it.

Parallel run is a technique, not a phase

The default migration antipattern is the “big bang weekend.” Freeze writes on Friday, run a one-shot ETL, validate on Saturday, cut traffic over on Sunday, pray on Monday. This works until it does not, and when it does not, the failure is measured in customer-visible minutes and regulator phone calls.

A parallel run replaces the event with a duration. From the first week, the new core is receiving a change data capture (CDC) feed of every transaction on the legacy core: debits, credits, holds, reversals, overnight batches. The new core processes them through its real code paths. You do not yet act on its output. You compare it, continuously, against the legacy.

CDC is the substrate, idempotency is the law

The mechanics: a logical replication stream (Debezium against the Oracle archive logs, or the equivalent on DB2 or mainframe VSAM adapters) publishes every write to Kafka. A thin translation service maps legacy records into the new core’s canonical events. The new core consumes them through its ordinary event pipeline.
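What the translation service does can be sketched in a few lines. All field names and type codes below are illustrative stand-ins, not the real legacy or canonical schemas; the shape of the idea is a pure function from one legacy CDC record to one canonical event.

```python
# Hypothetical sketch: map one legacy ledger row (as emitted by a Debezium-style
# CDC event) into the new core's canonical transaction event. Every field name
# and type code here is invented for illustration.
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class CanonicalTxn:
    txn_id: str        # stable business key, carried over from the legacy core
    account: str
    amount_minor: int  # integer minor units; never floats for money
    kind: str          # "debit" | "credit" | "hold" | "reversal"

def translate(legacy_row: dict) -> CanonicalTxn:
    """Translate one legacy CDC record into a canonical event."""
    # The legacy core stores signed decimal amounts and space-padded account
    # numbers; the new core wants integer minor units and trimmed keys.
    amount = Decimal(legacy_row["AMT"])
    return CanonicalTxn(
        txn_id=legacy_row["TXN_ID"],
        account=legacy_row["ACCT_NO"].strip(),
        amount_minor=int((amount * 100).to_integral_value()),
        kind={"D": "debit", "C": "credit", "H": "hold", "R": "reversal"}[legacy_row["TC"]],
    )

event = translate({"TXN_ID": "L-0001", "ACCT_NO": "0042  ", "AMT": "12.30", "TC": "D"})
```

Keeping the translator a stateless pure function matters: it can be replayed over the whole stream from day zero, and unit-tested against a corpus of real legacy rows.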

Everything downstream of CDC must be idempotent. Every event carries a stable business key, typically legacy_txn_id or equivalent, and every write is an upsert keyed on it. Reprocess the stream from day zero and the new core’s state must be byte-identical. If it is not, you have a bug you cannot afford to discover during cutover.
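The idempotency rule is easiest to see as an upsert keyed on the business key. A minimal sketch, using SQLite's ON CONFLICT syntax and an invented ledger table; the point is that replaying the same event any number of times leaves identical state.

```python
# Idempotent event application: an upsert keyed on legacy_txn_id. The table and
# event shape are illustrative, not the real core schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ledger (
        legacy_txn_id TEXT PRIMARY KEY,  -- stable business key from the event
        account       TEXT NOT NULL,
        amount_minor  INTEGER NOT NULL
    )
""")

def apply_event(event: dict) -> None:
    """Reprocessing the same event is a no-op, never a duplicate row."""
    conn.execute(
        """
        INSERT INTO ledger (legacy_txn_id, account, amount_minor)
        VALUES (:legacy_txn_id, :account, :amount_minor)
        ON CONFLICT(legacy_txn_id) DO UPDATE SET
            account = excluded.account,
            amount_minor = excluded.amount_minor
        """,
        event,
    )

evt = {"legacy_txn_id": "L-0001", "account": "0042", "amount_minor": 1230}
for _ in range(3):          # replay the stream: state must not change
    apply_event(evt)
rows = conn.execute("SELECT COUNT(*), SUM(amount_minor) FROM ledger").fetchone()
```

Replaying three times leaves exactly one row with the same balance, which is the property the byte-identical replay test asserts at scale.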

Shadow reads catch what reconciliation misses

Reconciliation jobs that run at end of day will tell you that balances drifted by the time you notice. Shadow reads tell you which specific query shape breaks and why, within seconds.

Wrap the legacy read API in a proxy. Every production read is mirrored to the new core. The proxy returns the legacy result to the caller and asynchronously compares it to the new one, logging structured diffs. Within two weeks of turning this on, you will have a taxonomy of every place the new system disagrees with the old, ranked by frequency. Fix them in priority order. The parallel run clock is the time it takes this diff rate to reach your agreed tolerance, typically under 0.01% of reads, with any remaining diffs explicitly classified as “expected” (rounded differently, presented differently, intentionally changed).
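The proxy's core loop is small. A sketch under stated assumptions: the stores, keys, and diff format are invented, and in production the comparison runs asynchronously off the request path rather than inline as here.

```python
# Shadow-read proxy sketch: serve the legacy answer, mirror the read to the
# new core, and log a structured diff when they disagree. All names illustrative.
import json

diff_log: list = []

def shadow_read(key: str, legacy_read, new_read) -> dict:
    legacy = legacy_read(key)           # this is what the caller actually gets
    try:
        candidate = new_read(key)       # mirrored read against the new core
        if candidate != legacy:
            diff_log.append(json.dumps({
                "key": key,
                "fields": sorted(
                    f for f in legacy if candidate.get(f) != legacy[f]
                ),
            }))
    except Exception as exc:            # a failing shadow read must never fail the caller
        diff_log.append(json.dumps({"key": key, "error": repr(exc)}))
    return legacy

legacy_store = {"acct-1": {"balance": 1230, "status": "open"}}
new_store = {"acct-1": {"balance": 1230, "status": "OPEN"}}
result = shadow_read("acct-1", legacy_store.get, new_store.get)
```

Aggregating those structured diffs by field name is what produces the ranked taxonomy: here the log would show that status disagrees while balance does not.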

The cutover is boring on purpose

When the new core has been shadow processing every transaction for months, and shadow serving every read for weeks, cutover becomes a traffic routing decision, not a data migration decision. You are not copying data. The new system already has the data, validated.

Our runbook typically looks like this. At T-60, freeze writes on the legacy core. Let CDC drain: usually seconds, but wait until the lag is zero. Flip the read router to the new core. Begin accepting writes on the new core. Keep the legacy core running, receiving a reverse CDC stream from the new core, for a rollback window agreed in advance (30 days is reasonable). Monitor the same diffs in reverse. If anything anomalous shows up in that window, you can cut back just as quickly. It almost never does, because the dress rehearsal caught it.
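The freeze-drain-flip gate can be sketched as one function. The lag source, write freeze, and read router below are stand-ins for whatever your CDC tooling and traffic layer actually expose; the one non-negotiable is that the flip happens only after lag reaches exactly zero.

```python
# Cutover gate sketch: freeze legacy writes, wait for CDC replication lag to
# drain to zero, then flip the read router. All callables are hypothetical hooks.
import time

def cut_over(freeze_writes, cdc_lag, flip_reads, timeout_s: float = 3600.0) -> bool:
    freeze_writes()                       # T-60: stop accepting writes on legacy
    deadline = time.monotonic() + timeout_s
    while cdc_lag() > 0:                  # drain: usually seconds, but wait for zero
        if time.monotonic() > deadline:
            return False                  # abort without flipping: lag never drained
        time.sleep(0.01)
    flip_reads()                          # route reads (then writes) to the new core
    return True

# Simulated run: lag reports 3, then 0.
lag = [3]
state = {"frozen": False, "reads": "legacy"}
ok = cut_over(
    freeze_writes=lambda: state.update(frozen=True),
    cdc_lag=lambda: lag.pop(0) if lag else 0,
    flip_reads=lambda: state.update(reads="new"),
)
```

The timeout turns a stuck drain into an explicit abort rather than an open-ended freeze, which keeps the decision to roll back a calm one.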

What this actually costs

Parallel run migrations feel expensive because you are paying for two systems for months. The comparison is not “parallel run vs. no parallel run.” It is “parallel run vs. the incident response, regulatory exposure, and reputational cost of a bad cutover.” Every bank that has survived the second of those comes out preferring the first.

The teams that run this well have one thing in common: a senior engineer whose job, for the duration, is to read the diff logs every morning and refuse to let anyone downplay them. Migrations do not fail on the cutover weekend. They fail in the 12 weeks before, when someone decided a 0.3% diff rate was “probably fine.”