Product Engineering · Case Study · Mar 23, 2026 · 7 min read

Event Sourcing in Core Banking: A Case Study

How event sourcing and CQRS power a 48-module open-source banking platform. Decisions, trade-offs, and lessons from 6,500+ tests.


Why event sourcing for financial software

"You're a small consultancy. Why are you building core banking software?" We got this question a lot. The honest answer: after years in fintech, including a stretch leading engineering at one of the largest payment companies in the Baltics, we had strong opinions about how financial software should be built. The kind of opinions you form after debugging a production incident at 2am because an account balance went negative and nobody could figure out when it happened or why.

Event sourcing was the answer. Instead of storing just the current state of an account (balance: 150.00), you store every event that led to that state: AccountOpened, FundsDeposited(200), FundsWithdrawn(50). The current balance is a projection, a number calculated by replaying those events.

More work? Yes. But it gives you a complete audit trail, temporal queries ("what was the balance at 3pm on Tuesday?"), and the ability to replay the exact sequence of events that caused a bug. For financial software, where regulators and auditors will eventually ask these questions, having this built in from day one is worth the upfront complexity.
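The core idea can be shown in a few lines. This is a minimal, language-agnostic sketch in Python, not the FinAegis implementation: the balance is a pure fold over the event stream, using the event names from the example above.

```python
# Sketch only: current state as a projection over an event stream.
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class AccountOpened:
    pass

@dataclass
class FundsDeposited:
    amount: Decimal

@dataclass
class FundsWithdrawn:
    amount: Decimal

def project_balance(events):
    """Replay events in order; the balance is derived state, never stored as truth."""
    balance = Decimal("0")
    for event in events:
        if isinstance(event, FundsDeposited):
            balance += event.amount
        elif isinstance(event, FundsWithdrawn):
            balance -= event.amount
    return balance

events = [AccountOpened(), FundsDeposited(Decimal("200")), FundsWithdrawn(Decimal("50"))]
print(project_balance(events))  # 150
```

Temporal queries fall out for free: replaying only the events before a given timestamp yields the balance at that moment.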

So we built FinAegis: an open-source core banking platform released under Apache 2.0. What started as a prototype has grown into a system with 48 domain modules, a GraphQL API, and over 6,500 automated tests.

CQRS: separating reads from writes

Event sourcing pairs naturally with CQRS (Command Query Responsibility Segregation). The idea is simple: the model you use to write data is different from the model you use to read it.

Commands (OpenAccount, PostTransaction, SuspendAccount) go through aggregate roots that enforce business rules and emit events. Those events are processed by projectors that materialize read-optimized views: denormalized tables, API responses, dashboard summaries. The write model cares about correctness. The read model cares about speed.

We built a custom command/query bus in our infrastructure layer rather than using a generic package, because banking operations have specific requirements around idempotency, retry behavior, and distributed transactions that generic solutions don't handle well. Each command passes through validation, authorization, and execution stages, all with separate logging.
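The staged pipeline might look like the following sketch. Everything here is illustrative — the class names, the handler registration, and the simple `OpenAccount` command are assumptions for the example, not the actual FinAegis infrastructure layer.

```python
# Hypothetical staged command bus: validation, authorization, execution,
# each logged separately. Not the real FinAegis API.
import logging
from dataclasses import dataclass

logger = logging.getLogger("command_bus")

@dataclass
class OpenAccount:
    owner: str

class ValidationError(Exception):
    pass

def validate_open_account(cmd):
    if not cmd.owner:
        raise ValidationError("owner is required")

class CommandBus:
    def __init__(self):
        self._handlers = {}

    def register(self, command_type, validator, authorizer, handler):
        self._handlers[command_type] = (validator, authorizer, handler)

    def dispatch(self, command):
        validator, authorizer, handler = self._handlers[type(command)]
        name = type(command).__name__
        # Stage 1: validation -- reject malformed commands before touching state.
        validator(command)
        logger.info("%s validated", name)
        # Stage 2: authorization -- a separate, separately logged concern.
        authorizer(command)
        logger.info("%s authorized", name)
        # Stage 3: execution -- the aggregate enforces rules and emits events.
        events = handler(command)
        logger.info("%s executed: %d event(s)", name, len(events))
        return events

bus = CommandBus()
bus.register(
    OpenAccount,
    validator=validate_open_account,
    authorizer=lambda cmd: None,  # stand-in policy check
    handler=lambda cmd: [{"type": "AccountOpened", "owner": cmd.owner}],
)
events = bus.dispatch(OpenAccount(owner="alice"))
```

Keeping the stages explicit is what makes idempotency and retry behavior tractable: a command that fails authorization never reaches execution, and each stage's failures are logged under its own name.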

Where this really pays off is reporting. A query like "show all transactions above €10,000 in the last quarter, grouped by currency and counterparty" doesn't touch the event store at all. It reads from a pre-computed projection that's updated in near real-time as events flow through the system.
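A projector for that reporting query could be sketched like this — again an illustration with assumed names and event shapes, with an in-memory dict standing in for the denormalized table:

```python
# Sketch of a read-model projector: as events flow through, it maintains a
# denormalized view so the reporting query never replays the event store.
from collections import defaultdict
from decimal import Decimal

LARGE_TX_THRESHOLD = Decimal("10000")  # threshold from the reporting query

class LargeTransactionProjection:
    def __init__(self):
        # Stand-in for a denormalized table keyed by (currency, counterparty).
        self.rows = defaultdict(list)

    def apply(self, event):
        """Called for each published event; irrelevant types are ignored."""
        if event["type"] != "TransactionPosted":
            return
        if event["amount"] <= LARGE_TX_THRESHOLD:
            return
        self.rows[(event["currency"], event["counterparty"])].append(event["amount"])

projection = LargeTransactionProjection()
projection.apply({"type": "TransactionPosted", "amount": Decimal("25000"),
                  "currency": "EUR", "counterparty": "ACME"})
projection.apply({"type": "TransactionPosted", "amount": Decimal("500"),
                  "currency": "EUR", "counterparty": "ACME"})  # below threshold, skipped
```

The query then becomes a cheap read of `projection.rows` (or its database equivalent), already grouped the way the report needs it.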

48 bounded contexts at scale

The platform is organized into 48 domain modules, each representing a bounded context in DDD terms. We wrote a separate article about our DDD approach, but the scale is worth discussing here.

The domains cover the full spectrum of banking operations:

  • Core financial: Account, Exchange, Lending, Treasury, Wallet, Payment, Banking, Stablecoin
  • Digital assets: CardIssuance, CrossChain, Custodian, DeFi, Asset
  • AI and automation: AI, AgentProtocol, Governance, VirtualsAgent
  • Regulatory: RegTech, Regulatory, Compliance
  • Mobile: Mobile, MobilePayment, KeyManagement, TrustCert
  • Infrastructure: Monitoring, Fraud, Performance, Batch, Webhook, Security

Each domain has its own manifest file defining dependencies, interfaces, events, and commands. Domains can be enabled or disabled dynamically without losing data. php artisan module:disable exchange turns off the exchange functionality while preserving its event stream for potential re-enabling later. This plugin-style architecture means deployments can include only the domains they actually need.


One decision that mattered more than we expected: domain-specific event tables. Instead of one massive stored_events table, each domain stores its events in its own table (exchange_events, lending_events, wallet_events). This improves query performance, simplifies backups, and lets us apply different retention policies per domain.

The Global Currency Unit: building a basket currency

The most ambitious feature in FinAegis is the Global Currency Unit (GCU) — a synthetic basket currency modeled on the IMF's Special Drawing Rights (SDR). Where SDR is a weighted basket of five currencies managed by the IMF, GCU is a basket managed through transparent governance rules encoded in the domain model.

The current composition: 40% USD, 30% EUR, 15% GBP, 10% CHF, 3% JPY, 2% XAU (gold). The weights are maintained through democratic governance: stakeholders vote on rebalancing proposals, and the system executes approved changes automatically.

Automatic rebalancing triggers when any constituent drifts more than 5% from its target weight. The rebalancing algorithm calculates the minimum trades needed to restore target weights, generates the appropriate exchange events, and updates the NAV (Net Asset Value) with decimal precision that would make an accountant comfortable.
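The drift check itself is simple. One caveat in the sketch below: we read "drifts more than 5% from its target weight" as relative deviation from the target, which is an assumption for illustration; the weights come from the composition above.

```python
# Illustrative drift check for the GCU basket. Assumes "5% drift" means
# relative deviation from the target weight.
from decimal import Decimal

TARGET_WEIGHTS = {
    "USD": Decimal("0.40"), "EUR": Decimal("0.30"), "GBP": Decimal("0.15"),
    "CHF": Decimal("0.10"), "JPY": Decimal("0.03"), "XAU": Decimal("0.02"),
}
DRIFT_LIMIT = Decimal("0.05")

def needs_rebalancing(actual_weights):
    """True if any constituent drifts more than 5% (relative) from its target."""
    return any(
        abs(actual_weights[asset] - target) / target > DRIFT_LIMIT
        for asset, target in TARGET_WEIGHTS.items()
    )

on_target = dict(TARGET_WEIGHTS)
drifted = {**TARGET_WEIGHTS, "USD": Decimal("0.44"), "EUR": Decimal("0.26")}
print(needs_rebalancing(on_target))  # False
print(needs_rebalancing(drifted))    # True
```

Note the Decimal arithmetic throughout: a 10% relative drift on USD (0.44 against a 0.40 target) trips the check, while exact target weights never do.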

Is GCU going to replace the dollar? No. But building it taught us more about multi-currency systems, governance patterns, and precision arithmetic than any client project ever could. And the basket currency implementation is a reference that anyone can study, fork, or adapt under the Apache 2.0 license.

Tech stack: Laravel, GraphQL, and post-quantum cryptography

We built FinAegis on Laravel — not because it's the obvious choice for banking (Java or .NET would be conventional), but because it's what we ship fastest with. When you're validating an architecture, development speed matters more than having the "right" enterprise language on your stack.

The API layer uses GraphQL via Lighthouse PHP. Schema-first, with coverage across 36 domains, real-time subscriptions via WebSocket, and DataLoaders to prevent N+1 queries. Security includes query cost analysis (preventing expensive recursive queries) and introspection controls for production.

Event streaming runs on Redis Streams with 15 dedicated streams, consumer groups with acknowledgement and dead-letter handling, and a live monitoring dashboard that tracks projector lag, throughput, and domain health across five REST endpoints.

The most forward-looking addition is post-quantum cryptography. We implemented ML-KEM-768 (key encapsulation) and ML-DSA-65 (digital signatures) in a hybrid encryption mode, running the post-quantum algorithms alongside classical algorithms so the system is secure even if only one of them holds. Key rotation is supported without downtime. The quantum threat to financial cryptography isn't here today, but migration is expensive and we'd rather be early than scrambling.

Multi-tenancy uses stancl/tenancy for team-based isolation, with tenant-aware event sourcing that keeps each tenant's event streams physically separated.

What we got wrong

Projection rebuilds. Our initial approach was to replay all events sequentially, which works fine for small datasets. Then we hit tens of thousands of events in testing and rebuild times went from seconds to minutes. We solved it with snapshotting: store a checkpoint every N events, start rebuilds from the last checkpoint. Should have planned for this from day one.
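The snapshotting fix amounts to this (a minimal sketch with an in-memory checkpoint list standing in for a snapshot store, and a toy integer balance as the projected state):

```python
# Minimal snapshotting sketch: checkpoint every N events so a rebuild
# replays only the tail of the stream, not all of it.
SNAPSHOT_EVERY = 1000  # checkpoint interval; tune per projection

class BalanceProjection:
    def __init__(self):
        self.balance = 0
        self.version = 0      # number of events applied so far
        self.snapshots = []   # (version, balance) checkpoints

    def apply(self, amount):
        self.balance += amount
        self.version += 1
        if self.version % SNAPSHOT_EVERY == 0:
            self.snapshots.append((self.version, self.balance))

    def rebuild(self, events):
        """Restore the last checkpoint, then replay only the events after it."""
        self.version, self.balance = self.snapshots[-1] if self.snapshots else (0, 0)
        for amount in events[self.version:]:
            self.apply(amount)

events = [1] * 2500
p = BalanceProjection()
for e in events:
    p.apply(e)
# A rebuild now replays 500 events (from the checkpoint at 2000), not 2500.
p.rebuild(events)
print(p.balance)  # 2500
```

The rebuild cost drops from O(total events) to O(events since last checkpoint), which is what turned our minutes back into seconds.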

Multi-currency complexity was brutal. It's not just exchange rates. It's fee denomination, rounding in different currency pairs, conversion timing in the transaction lifecycle, per-jurisdiction regulatory requirements. This one context probably has more edge cases than all the others combined. If we started over, currencies would be a first-class domain from the beginning, not retrofitted into an account model that assumed a single currency.
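Rounding alone illustrates the problem. Minor-unit exponents differ per ISO 4217 — yen amounts have no decimal places, Bahraini dinar has three — so "round to cents" is not a universal rule. A small sketch (the table and function names are ours, not FinAegis code):

```python
# Illustrative per-currency rounding using ISO 4217 minor-unit exponents.
from decimal import Decimal, ROUND_HALF_EVEN

MINOR_UNITS = {"USD": 2, "EUR": 2, "JPY": 0, "BHD": 3}  # ISO 4217 exponents

def round_amount(amount, currency):
    exponent = Decimal(10) ** -MINOR_UNITS[currency]
    # Banker's rounding avoids systematic bias across many transactions.
    return amount.quantize(exponent, rounding=ROUND_HALF_EVEN)

print(round_amount(Decimal("10.005"), "USD"))  # 10.00 (half rounds to even)
print(round_amount(Decimal("1234.5"), "JPY"))  # 1234
```

And that is before fee denomination, conversion timing, and per-jurisdiction rules enter the picture.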

Event versioning. When you change an event's structure (new field, renamed property), all the old events in the store still have the old structure. We use upcasters, small functions that transform old shapes into new ones during replay. Works fine, but the collection of upcasters only grows. Every schema change adds a new one. Forever. That's a form of technical debt the event sourcing blog posts don't warn you about.
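The pattern looks like this sketch (the event shapes and the two schema changes are invented for illustration): every bump registers one function that lifts the previous shape, and replay chains them in sequence.

```python
# Sketch of upcasting: old events are lifted to the current schema on replay.
UPCASTERS = {
    # v1 -> v2: a currency field was added; old events default to EUR.
    1: lambda e: {**e, "currency": "EUR", "schema": 2},
    # v2 -> v3: 'sum' was renamed to 'amount'.
    2: lambda e: {**{k: v for k, v in e.items() if k != "sum"},
                  "amount": e["sum"], "schema": 3},
}
CURRENT_SCHEMA = 3

def upcast(event):
    """Apply upcasters in sequence until the event reaches the current schema."""
    while event["schema"] < CURRENT_SCHEMA:
        event = UPCASTERS[event["schema"]](event)
    return event

v1_event = {"schema": 1, "sum": 100}
v3_event = upcast(v1_event)
```

The dictionary is exactly the debt described above: every schema change adds one more entry, and none of them can ever be deleted while v1 events exist in the store.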

And not everything needs event sourcing. We learned this one the expensive way. Updating a user's display name doesn't need an immutable event and a projector. A simple Eloquent update is fine. Now we reserve events for operations where audit trails and temporal queries provide real business value.

Why open source under Apache 2.0

The reasoning is straightforward: if we claim to build transparent financial infrastructure, the code should be transparent too. Open-sourcing the domain model forces us to keep it clean — no hardcoded secrets, no environment-specific hacks, no shortcuts we'd be embarrassed by.

We don't expect FinAegis to power a production bank. That's not the point. The point is demonstrating that auditable, event-sourced financial infrastructure can be built with familiar tools and reasonable effort. The 6,500+ tests across 925 test files aren't there to impress — they're there because financial software without comprehensive testing is a liability.

The full source code is on GitHub. We applied the same event sourcing patterns to a solar panel recycling marketplace, which proved the architecture transfers well to non-financial domains. If you're considering event sourcing for a compliance-heavy application, that's another data point.

Questions about the architecture, war stories about event sourcing, or ideas for the GCU — we're always up for that conversation.

Working on something similar?

We bring the same engineering approach to client projects. Tell us about yours.