I guess it depends on what that payment application includes…
I’ve had some discussions over time with people interested in building a banking core system based on Moqui, but to date the main production uses I’m aware of for FinancialAccount are for managing customer, employee, and other third-party liabilities for a company. That includes scenarios like customer credits, employee loans, customer billing (i.e. managing customer books separately from invoices), and one that is a bit more like a bank: co-op member billing and distribution management, with loan, revenue, and fee transactions posted to the member’s financial account, along with periodic withdrawal payments from the account.
In other words, the current functionality is designed for corporate rather than bank uses, but from the bit I’m familiar with in the world of banking a lot of the functionality would overlap. Others far more familiar with banking have said similar things, which informs my opinion, though I’ve also spent about 10% of my career in the banking and FinTech industry.
By loan management I’m guessing you mean interest-bearing loans like credit cards or home loans. There are a few things missing for that, such as scheduled jobs to calculate and post interest to loan accounts. To date I’m aware of companies using Moqui to track loan balances, but these are zero-interest pay-advance loans to employees and things like that.
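To make that concrete, the heart of such a scheduled job is just an accrual calculation run per account. Here is a minimal sketch in Java using BigDecimal for money math; the class and method names are illustrative, not part of Moqui’s API.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical sketch of the per-account accrual step a scheduled interest
// job would run; names are illustrative, not Moqui services or entities.
public class InterestAccrual {

    // Monthly accrual: balance * (annualRate / 12), rounded to cents.
    static BigDecimal monthlyInterest(BigDecimal balance, BigDecimal annualRate) {
        return balance.multiply(annualRate)
                .divide(BigDecimal.valueOf(12), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        BigDecimal balance = new BigDecimal("10000.00");
        BigDecimal annualRate = new BigDecimal("0.06"); // 6% APR
        // 10000.00 * 0.06 / 12 = 50.00
        System.out.println(monthlyInterest(balance, annualRate));
    }
}
```

The real job would iterate interest-bearing accounts and post the result as a FinancialAccountTrans, but the calculation itself is this simple.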
FWIW, that sort of thing is fairly easy to implement. My experience may be biased by the sort of FinTech work I’ve done, but my guess is that building a banking core system wouldn’t be too difficult… but integrating it with the impressive variety of internal and third party software used in banks would be a significant effort. In other words, building the thing wouldn’t be too bad, but integrations in banking are generally complex and it is common to use many different systems for different parts of a banking operation.
That is partly a perspective from considering a commercial offering, where day 1 sales will require a significant number of integrations even if only the most popular auxiliary systems are selected for an initial product offering or minimum viable product. The common pattern with Moqui integrations is that while there are a few in the Moqui repos, nearly all users have custom integrations as well. It’s great when these are shared as open source, but there are so many things to integrate with that overlaps are not super common.
For example I’m aware of two Shopify integrations, and chances are there are others who would like to use that, so one will probably be contributed sooner or later. Other integrations aren’t so common, or involve a lot of custom business logic to supplement what the other system can do. Anyway, for open source we get away with partial solutions more than commercial software typically does, so we could build a banking core system without all the integrations needed and chances are it would still be picked up and used, with integrations built as needed.
One thing to note is that scalability is a concern with Moqui’s architecture. This would not be the sort of system that processes batches overnight, though some of the heavy lifting could be written that way, and of course overnight clearing and such would be batched because the banking system as a whole is built on batch-oriented specifications inherited from old batch-based systems (i.e. using batching for scalability, at the cost of VERY high transaction latency). With Moqui we’d want to build a real-time transaction system, at least for internal transactions and future real-time financial infrastructure (like crypto-style settlement). That makes scalability more difficult because all processing happens on the fly, without the luxury of overnight batches where scalability and database transaction management are very different.
What that means is that Moqui is limited in scale to what the relational database can handle. A database like Postgres or a MySQL variant can scale incredibly well on modern commodity hardware, especially when running on dedicated servers. These days with big AMD Epyc servers and such, or the new data center ARM chips with hundreds of cores, it’s amazing what a single box can handle… and a single box is the limiting factor for strict transaction management. You’ll want one or more backup boxes that can be used for reporting queries and such, but a single concurrency management node for transaction isolation is necessary for things like posting to the bank’s ledger for every customer transaction (the bank’s ledger being the high-conflict records involved; customer ledgers are more separated, with relatively few potential conflicts).
On a side note, there are lots of ways to make things more scalable even with a real-time transactional database. For example, Moqui uses summary tables for GL account reporting and has an option to update these in real time or to just let a scheduled job pick up changes. Real-time updates to these summary records are a problem for very high volume systems: the summary records end up getting locked for every GL post for the query + sum + update operation. Doing the sums in a separate transaction in a scheduled job removes the need to lock those high-conflict records. The logic could be improved to send a sum update (i.e. total = total + TX amount in the SQL) to the database instead of query + sum + update, but that still locks the record on the update and the transaction would have to wait in line.
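To illustrate the difference between the two update strategies, here is a toy in-memory version in Java, with a ConcurrentHashMap standing in for the GL summary table (nothing here is Moqui code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy contrast of the two summary-update strategies; a ConcurrentHashMap
// plays the role of the GL summary table, amounts are in cents.
public class SummaryUpdate {
    static final Map<String, Long> summary = new ConcurrentHashMap<>();

    // Pattern 1: query + sum + update. In SQL this is a read-modify-write
    // (SELECT ... FOR UPDATE, compute, UPDATE) that holds the row lock for
    // the whole round trip.
    static void readModifyWrite(String glAccountId, long txAmountCents) {
        long current = summary.getOrDefault(glAccountId, 0L);
        summary.put(glAccountId, current + txAmountCents);
    }

    // Pattern 2: push the delta to the store, as in
    //   UPDATE gl_summary SET total = total + ? WHERE gl_account_id = ?
    // Here merge() plays the role of the single atomic increment; the lock
    // is held only for the increment itself, not a full round trip.
    static void incrementInPlace(String glAccountId, long txAmountCents) {
        summary.merge(glAccountId, txAmountCents, Long::sum);
    }
}
```

As the text notes, pattern 2 shortens the lock window but the record is still a serialization point; only moving the sums out of the posting transaction entirely (the scheduled job approach) removes the contention.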
On another side note, there are some cool database alternatives that handle this better, but building with such would be an adventure. The general idea is to use non-transactional atomic operations. These are great for summary data such as an account balance because the design of these sorts of databases (they have a name, but I don’t remember it now) works well for incremental/sum sorts of operations with non-blocking concurrency handling, but many other things still need standard locking and blocking for reliable results.
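The non-blocking idea can be shown in-process with Java’s atomic classes. This is only an analogy for what those databases do internally, and the class below is hypothetical, not from any library:

```java
import java.util.concurrent.atomic.AtomicLong;

// In-process analogy for non-blocking atomic operations on summary data:
// an account balance kept as an atomic counter updated via compare-and-set
// (CAS) rather than a blocking row lock.
public class AtomicBalance {
    private final AtomicLong balanceCents = new AtomicLong();

    // addAndGet loops on CAS internally: no thread ever blocks, a losing
    // thread simply retries against the fresh value.
    long post(long amountCents) {
        return balanceCents.addAndGet(amountCents);
    }

    // A guarded withdrawal needs an explicit CAS retry loop, because the
    // check-then-act (enough funds? then subtract) must be atomic.
    boolean withdraw(long amountCents) {
        while (true) {
            long current = balanceCents.get();
            if (current < amountCents) return false; // insufficient funds
            if (balanceCents.compareAndSet(current, current - amountCents))
                return true;
        }
    }
}
```

This works beautifully for increment/sum style data; as the text says, anything with multi-record invariants still needs standard locking and blocking for reliable results.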
Anyway, some thoughts on the topic… probably in different directions from what you were wondering about. Most banks will never scale beyond what something like Postgres on modern hardware can handle, but I’d have doubts about this architecture for a bank the scale of Chase or BofA… but who knows.
Visa claims to be able to process around 24,000 transactions per second but normally handles more like 2,000/sec. I don’t know, but that doesn’t sound too bad. I think we could pump that many transactions into a big relational database just fine, as long as we avoid high-conflict records the way Moqui already does for the GL account totals. From some quick searching there are pgbench results with mixed reads/writes (it looks like they use 5 SELECT, INSERT, and UPDATE operations per TX) that get over 26,000 transactions per second on recent server chips with a 1,000 client/thread simulation. Maybe hardware is good enough these days for high volume real-time transaction processing… even for large banks.
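A quick back-of-envelope check of those numbers, using only the figures quoted above (not fresh benchmarks):

```java
// Back-of-envelope comparison of the quoted figures: Visa's claimed peak
// and typical rates vs the pgbench result at ~5 SQL operations per TX.
public class ThroughputCheck {
    public static void main(String[] args) {
        int visaPeakTps = 24_000;    // claimed peak
        int visaTypicalTps = 2_000;  // typical load
        int pgbenchTps = 26_000;     // quoted mixed read/write result
        int opsPerTx = 5;            // SELECT/INSERT/UPDATE ops per pgbench TX

        System.out.println("pgbench SQL ops/sec ~ " + (pgbenchTps * opsPerTx));
        System.out.println("headroom over Visa peak: " + (pgbenchTps - visaPeakTps) + " TPS");
        System.out.println("multiple of typical Visa load: " + (pgbenchTps / visaTypicalTps) + "x");
    }
}
```

So even at Visa’s claimed peak a single modern box is in the right ballpark, and at typical load there is an order of magnitude of headroom, assuming the workload stays free of high-conflict records.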