LLMs have made significant strides in bridging the gap between linguistic communication and business logic. It may be possible to deploy models like Alpaca within the Moqui environment so that users can describe the actions they want and have an automated agent carry them out.
How can we integrate Moqui’s business “vocabulary” into an existing LLM knowledge base? We need to convert a natural language sentence into a series of service invocations that achieve the described task. If you dig into LangChain’s concepts you can imagine how they might relate to various Moqui or JVM-based facilities. There will very likely need to be a number of different agents working in tandem.
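As a rough sketch of that conversion step, here is one way the dispatch side might look, assuming a local Moqui instance with its service REST API rooted at /rest/s1/. The base URL, credentials, service path, and helper names are all placeholders, and the agent framework that would actually produce the JSON "action" is left out entirely.

```python
# Hypothetical sketch: expose a Moqui REST service call as a "tool" an LLM agent
# could invoke. The host, credentials, and service path are placeholders; Moqui's
# service REST API is conventionally rooted at /rest/s1/.
import json
import requests

MOQUI_BASE = "http://localhost:8080/rest/s1"   # assumed local Moqui instance

def call_moqui_service(service_path: str, params: dict) -> dict:
    """Invoke a Moqui service over its REST API and return the JSON result."""
    resp = requests.post(
        f"{MOQUI_BASE}/{service_path}",
        json=params,
        auth=("john.doe", "moqui"),            # placeholder demo credentials
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def dispatch_agent_action(action_json: str) -> dict:
    """Take a structured 'action' emitted by an LLM agent, e.g.
    {"service": "...", "parameters": {...}}, and turn it into a service call."""
    action = json.loads(action_json)
    return call_moqui_service(action["service"], action.get("parameters", {}))

if __name__ == "__main__":
    # An agent might emit something like this after reading
    # "create a sales order for customer CustJqp" (service path is made up):
    example = '{"service": "example/order/createOrder", "parameters": {"customerPartyId": "CustJqp"}}'
    print(dispatch_agent_action(example))
```

The point being that the Moqui side only needs a thin HTTP dispatcher; the agents, memory, and prompt machinery (LangChain or otherwise) would live on the LLM side and emit structured actions like the example above.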
Dolly 2 and the related tooling look really cool. From a brief read of the code there it looks like there are lots of touch points to work with. I guess one of these days I might have to learn me some Python and PyTorch and such.
Theoretically DJL can run PyTorch models on the JVM, which Jython code could drive. LangChain already has important concepts around agents and memory well underway. If we could integrate Moqui entities and services into its nomenclature (maybe by feeding in the Swagger documentation) then we might be able to do GPT-4-ish things in a standalone configuration.
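As a hedged sketch of the Swagger idea: Moqui can generate Swagger/OpenAPI definitions for its service REST API, and those could be flattened into short tool descriptions that get pushed into an agent's prompt rather than literally retraining a model on them. The file name, field access, and output format below assume a standard Swagger 2.0 JSON document and are illustrative only.

```python
# Hypothetical sketch: flatten a Moqui-generated Swagger/OpenAPI document into
# plain-text tool descriptions an LLM agent could be prompted with.
# Assumes a standard Swagger 2.0 JSON file exported from Moqui; the file name
# and description format are placeholders.
import json

HTTP_METHODS = {"get", "post", "put", "delete", "patch"}

def swagger_to_tool_descriptions(swagger_path: str) -> list[str]:
    """Produce one short line per (path, method) so an agent can decide which
    service endpoint matches a natural-language request."""
    with open(swagger_path, encoding="utf-8") as f:
        spec = json.load(f)

    descriptions = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if method.lower() not in HTTP_METHODS:
                continue  # skip path-level keys like "parameters"
            params = ", ".join(p.get("name", "?") for p in op.get("parameters", []))
            summary = op.get("summary") or op.get("operationId") or ""
            descriptions.append(f"{method.upper()} {path} ({params}): {summary}")
    return descriptions

if __name__ == "__main__":
    # "moqui-rest.swagger.json" is a placeholder for an exported Moqui Swagger file.
    for line in swagger_to_tool_descriptions("moqui-rest.swagger.json"):
        print(line)
```

Whether those descriptions go into a prompt, a fine-tuning set, or a retrieval index is an open question, but either way the Swagger output gives us the entity and service vocabulary in a machine-readable form to start from.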