According to Hazelcast’s GitHub repo:
Hazelcast is a distributed computation and storage platform for consistently low-latency querying, aggregation and stateful computation against event streams and traditional data sources. A cluster of Hazelcast nodes share both the data storage and computational load which can dynamically scale up and down.
In moqui-hazelcast it looks like AWS and Kubernetes discovery integrations are included. From the config it appears to cache entity names and entity records, and to define topics for notifications and entity cache invalidation.
Based on the moqui-hazelcast github repo, moqui uses hazelcast to allow more than one moqui instance to work together to store sessions, use a distributed cache, and have distributed async service calls.
Additionally, in the moqui-framework Hazelcast docker compose example, two different moqui instances use Hazelcast multicast (Hazelcast’s network discovery mechanism) at the group 224.2.2.3 on port 54327 (see question 1).
I have a couple of questions about this:
1. Does the hazelcast_multicast_group refer to an IP address of a server?
1a. If so, is that server a moqui instance or a Hazelcast-specific instance?
2. For the distributed async service call functionality, does Hazelcast work as a load balancer of sorts?
2a. If so, how does it know which moqui instance to send traffic to? Is it based on RAM / CPU usage?
Any other details about Hazelcast or how Moqui uses it would be much appreciated. Also feel free to add questions to be answered later.
With Hazelcast, you generally select how the instances of the Hazelcast cluster (in this case each moqui instance) discover and connect to each other. Multicast is one option: it uses broadcasting within a local network, and broadcasting plus routing rules for nodes on different networks. You would never use an IP address that is assigned to a single device, but rather a multicast IP address between 224.0.0.0 and 239.255.255.255, with some ranges having different meanings (e.g. local vs. global scope). The group address is just so that an instance can filter out irrelevant packets and look only at the interesting ones (there is no spoofing protection or other security at this level). This works well and practically out of the box on local networks or in docker environments (provided the containers can reach each other with their broadcasts, so network config is relevant).
Within a cloud environment (AWS, Azure, etc.), the provider enables other modes to discover peers; when configured to use those discovery modes, Hazelcast establishes point-to-point connections to synchronize, so you would not use the multicast address.
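For reference, multicast discovery is configured in the network/join section of Hazelcast's XML config. A minimal sketch, assuming Hazelcast's default group and port (224.2.2.3 / 54327) — the exact values moqui ships may differ:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <!-- Peers on the same network segment discover each other
           by listening on this multicast group/port. -->
      <multicast enabled="true">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
      </multicast>
      <!-- Disable static member lists when multicast is used. -->
      <tcp-ip enabled="false"/>
    </join>
  </network>
</hazelcast>
```

In a cloud environment you would instead disable multicast and enable the provider-specific discovery element (e.g. the aws or kubernetes join config).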
Regarding question 2:
I am not sure there is any load balancing; I would think there isn’t. What we have been doing instead is determining which member(s) to use for running jobs. In our case, we needed some specific Job Runs to execute only on a moqui instance with enough memory (the job requires a ridiculous amount of memory to generate some huge XLSX files), so we could have one normal moqui instance, really only needed when restarting or deploying a new version with no downtime, and the big-memory instance.
This can be done by adding an ExecutorService ToolFactory with a Hazelcast MemberSelector class that determines which member(s) can be used for executing a call.
In our case, when the big memory instance is not available for any reason, the Job Run simply fails, which is the desired outcome.
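To illustrate, a MemberSelector is just a predicate over cluster members. Below is a minimal sketch, assuming Hazelcast 4.x on the classpath; the `bigMemory` member attribute and the `job-executor` name are made-up examples for illustration, not anything moqui defines:

```java
import java.io.Serializable;
import java.util.concurrent.Callable;

import com.hazelcast.cluster.Member;
import com.hazelcast.cluster.MemberSelector;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

public class BigMemoryJobs {

    // Selects only members started with the (hypothetical) member
    // attribute bigMemory=true, e.g. set on the big-memory instance via
    // config.getMemberAttributeConfig().setAttribute("bigMemory", "true")
    static final MemberSelector BIG_MEMORY = new MemberSelector() {
        @Override
        public boolean select(Member member) {
            return "true".equals(member.getAttribute("bigMemory"));
        }
    };

    // Tasks submitted to a distributed executor must be serializable.
    static class HugeXlsxJob implements Callable<String>, Serializable {
        @Override
        public String call() {
            return "report generated";
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("job-executor");

        // Runs the task on some member matching the selector. If no
        // matching member is available the submit fails, which matches
        // the "Job Run simply fails" behavior described above.
        executor.submit(new HugeXlsxJob(), BIG_MEMORY);
    }
}
```

The selector runs on the submitting side, so it decides placement up front rather than balancing load; any smarter placement (by RAM/CPU) would have to be encoded in the selector yourself.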