Yes, one of the main internal uses for caches is artifact definitions or object representations, including Moqui artifacts like entities, services, and screens, and also more general artifacts like compiled Groovy scripts and parsed/compiled FreeMarker templates. The other main cache use is database data, for entities that are always cached (entity.@cache=true) or for finds that are cached.
For both of these what we want is the fastest possible access to frequently used data. Java’s Cache API (JSR 107) is really just an API for compatibility. It’s generally a good API but unfortunately doesn’t include everything one might want in a cache… including metadata like when the resource the cached item came from was last updated, so that per-entry invalidation when an underlying resource changes (by last updated timestamp) is possible.
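To illustrate the per-entry invalidation that the JSR 107 API doesn’t cover, here is a minimal sketch of a cache whose entries remember when their source resource was last modified; a get() with a newer source timestamp drops the stale entry. The class and method names are illustrative, not Moqui’s actual API:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: each entry carries the last-modified timestamp of its source
// resource (e.g. a .groovy or .ftl file), so a read can detect staleness.
public class SourceAwareCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long sourceLastModified; // e.g. File.lastModified() at cache time
        Entry(V value, long sourceLastModified) {
            this.value = value;
            this.sourceLastModified = sourceLastModified;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value, long sourceLastModified) {
        map.put(key, new Entry<>(value, sourceLastModified));
    }

    // Returns null (a miss) when the source changed after the entry was cached.
    public V get(K key, long currentSourceLastModified) {
        Entry<V> entry = map.get(key);
        if (entry == null) return null;
        if (currentSourceLastModified > entry.sourceLastModified) {
            map.remove(key, entry); // stale: underlying resource was updated
            return null;
        }
        return entry.value;
    }
}
```

On a miss the caller would re-parse/re-compile the resource and put() it back with the new timestamp.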
Using anything external to the Java VM process that Moqui is running in adds a lot of overhead, i.e. anything other than quick memory access. The Moqui cache implementation is fairly well optimized for a high read rate with a decent feature set (idle and live expire, size limit, etc), and the reason for using it is that after trying various others (and originally using ehCache) none performed all that well or supported the extended features that are useful here.
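As a rough sketch of that feature set (expire by idle time, expire by live time, size limit), here is a minimal in-memory cache built on stdlib classes; it is illustrative only and not how Moqui’s actual cache implementation is written:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal cache sketch: idle expire, live expire, and a size limit
// (LRU eviction via LinkedHashMap's access order). Not Moqui's real code.
public class ExpiringLruCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long createdTime;
        volatile long lastAccessTime;
        Entry(V value, long now) { this.value = value; this.createdTime = now; this.lastAccessTime = now; }
    }

    private final long expireIdleMillis, expireLiveMillis; // 0 = disabled
    private final Map<K, Entry<V>> map;

    public ExpiringLruCache(long expireIdleMillis, long expireLiveMillis, final int maxEntries) {
        this.expireIdleMillis = expireIdleMillis;
        this.expireLiveMillis = expireLiveMillis;
        // accessOrder=true: the eldest entry is the least recently used one
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    public synchronized V get(K key) {
        Entry<V> entry = map.get(key);
        if (entry == null) return null;
        long now = System.currentTimeMillis();
        boolean idleExpired = expireIdleMillis > 0 && now - entry.lastAccessTime > expireIdleMillis;
        boolean liveExpired = expireLiveMillis > 0 && now - entry.createdTime > expireLiveMillis;
        if (idleExpired || liveExpired) { map.remove(key); return null; }
        entry.lastAccessTime = now;
        return entry.value;
    }
}
```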
Something like Redis or memcached would require going over the network… which is slow… and serialization and deserialization, which is also slow, or at least slow compared to a local in-memory cache… and the difference is something like 2-3 orders of magnitude in speed.
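To make the serialization part of that cost concrete, here is an illustrative sketch (not a benchmark): a local in-memory cache hands back the same object reference, while a remote cache must serialize on put and deserialize on get, producing a new copy every time, before any network latency is even counted:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the serialization tax a networked cache pays on every get.
public class SerializationTax {
    static final Map<String, Object> localCache = new ConcurrentHashMap<>();

    // What a local in-memory cache does: return the cached reference directly.
    static Object localGet(String key) { return localCache.get(key); }

    // What a remote cache must do (minus the network hop): a full
    // serialize/deserialize round trip that allocates a new copy each time.
    static Object remoteStyleGet(Object value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) { out.writeObject(value); }
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return in.readObject();
            }
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```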
The real question for shared network accessible caches is what benefit do they add? For very large scale they might be useful for caching records from a database, but only at high scale where you need to take some load off the relational database itself… because relational databases can cache too, and they are quite good at caching and can respond nearly as fast as redis/memcached.
In one funny story, for a system designed for ~100k concurrent users doing fairly complex things there was an attempt to use Redis for shared DB record caching. The implementation was buggy and incomplete (it didn’t handle invalidation and cache reloading adequately), but even worse, when we turned off entity data caching completely the servers ran FASTER… just using the DB cache (in that case MySQL).
IMO it would never make sense to use an over-the-network cache like redis or memcached for artifact definitions and compiled/etc object representations. We want those to be super fast to reduce the overhead of entity operations, service and script calls, screen and template renders, etc. If we had to hop over the network to get the definition each time it would be slow, maybe not slower than reading from the local disk plus re-parsing/compiling… but for some things local disk would be faster (by a fair bit).
For these reasons Moqui doesn’t use shared caching right now, even in deployments with multiple app servers in a cluster. If a shared cache was used it would probably make more sense to use Hazelcast directly (in the moqui-hazelcast component) which is used for various distributed things, including DCI (distributed cache invalidate), session replication, distributed async service calls, etc. For caching DCI is important, and local cache + DCI is the solution used for rapid access to frequently used DB data (like Enumeration, StatusItem, etc) rather than a shared network cache.
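Here is a sketch of the local cache + DCI pattern described above: each node keeps its own in-memory cache and, on a data update, broadcasts an evict message so other nodes drop their stale copy. The in-process “topic” below is a stand-in for what moqui-hazelcast does with a real Hazelcast topic across the cluster; it is not Moqui’s actual API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Local cache + distributed cache invalidate (DCI) sketch. The static list
// of nodes stands in for a cluster-wide messaging topic (e.g. Hazelcast).
public class DciNode {
    static final List<DciNode> topic = new CopyOnWriteArrayList<>();

    final Map<String, String> localCache = new ConcurrentHashMap<>();

    DciNode() { topic.add(this); }

    String get(String key) { return localCache.get(key); }
    void cacheLocally(String key, String value) { localCache.put(key, value); }

    // Called by the node that writes the update: evict locally, then notify the others.
    void updateRecord(String key, String newValue) {
        // (the real write goes to the shared database here)
        localCache.remove(key);
        for (DciNode node : topic) if (node != this) node.onInvalidate(key);
    }

    // Invalidate message from another node: drop the local entry; it gets
    // re-read from the DB and re-cached on next access.
    void onInvalidate(String key) { localCache.remove(key); }
}
```

The key point is that reads stay local (fast memory access on every node) and only writes pay the cost of a cluster message.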
In general the architecture of Moqui Framework is designed for medium scale transactional (as in ACID) data processing. It is not designed for massive scale applications like Facebook or Twitter. If you’re building at that scale you don’t want clustered stateful app servers and relational data stores like Moqui uses, you don’t need nearly as many screens or database tables for the main application, and the data doesn’t need to be strictly transactional with record locking and transaction isolation for immediate consistency… eventual consistency of data is fine (and inconsistency is often not a critical issue).