Could users write services in English?

LLMs have made significant strides in bridging the gap between linguistic communication and business logic. It may be possible to implement models like Alpaca in the Moqui environment in a way that allows users to describe actions that should occur and have them performed by an automated agent.

Java deep learning:
PyTorch on the JVM:
Stanford Alpaca:



Alpaca may not be the best choice: Stanford takes costly, risky Alpaca AI model offline • The Register. Instead, maybe use something like GitHub - databrickslabs/dolly: Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform, or another solution.

You can kind of already do this with GitHub Copilot.

What would it look like to run this locally / in the cloud?

How can we integrate Moqui’s business “vocabulary” into an existing LLM knowledge base? We need to convert a natural language sentence into a series of service invocations that achieve the described task. If you dig into LangChain’s concepts, you can imagine how they might relate to various Moqui or JVM-based facilities. There would very likely need to be a number of different agents working in tandem.
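To make the "agents working in tandem" idea concrete, here is a minimal sketch of a planner/executor split: a planner maps a request to a list of service invocations, and an executor runs them in order. The keyword rules and the `get#PartyContent` service name are hypothetical stand-ins; in practice the planner would be an LLM.

```python
# Sketch: planner produces service invocations, executor performs them.
# The planning rules and service names are illustrative, not real Moqui.
def plan(request: str) -> list[dict]:
    # stand-in for an LLM planner: keyword rules instead of a model
    steps = []
    if "party" in request and "content" in request:
        steps.append({"service": "get#PartyContent",
                      "in": {"partyId": request.split()[-1]}})
    return steps

def execute(steps: list[dict]) -> list[str]:
    # stand-in executor; a real one would call Moqui's service facade
    return [f"called {s['service']} with {s['in']}" for s in steps]

print(execute(plan("get party content with party id 100000")))
```

The useful property is the boundary: the LLM only ever emits structured invocation requests, and everything that actually touches data stays behind the executor.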

You could just start with something like a vector-based search for the nearest neighbor of the service names and input parameters:

`get party content with party id 100000`
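A toy version of that nearest-neighbor matching, using plain bag-of-words cosine similarity instead of learned embeddings. The service names and the camel-case tokenization are illustrative, not a real Moqui service catalog.

```python
import math
import re
from collections import Counter

# hypothetical Moqui-style service names (verb#Noun convention)
SERVICES = [
    "get#PartyContent",
    "create#PartyContact",
    "update#PartyDetail",
    "get#OrderItems",
]

def tokens(text: str) -> Counter:
    # split camelCase words and numbers into lowercase token counts
    words = re.findall(r"[A-Z]?[a-z]+|\d+", text)
    return Counter(w.lower() for w in words)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_service(query: str) -> str:
    q = tokens(query)
    return max(SERVICES, key=lambda s: cosine(q, tokens(s)))

print(nearest_service("get party content with party id 100000"))
# -> get#PartyContent
```

A real version would embed the service names plus their input-parameter descriptions with a sentence-embedding model, but the lookup structure is the same.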

LangChain provides facilities for connecting external resources. We would need to interface with that: SQL Chain example — 🦜🔗 LangChain 0.0.143

We could start with just SQL, but there’s a reason we don’t encourage that in Moqui: it is very easy to break things if the services’ validation logic isn’t used.
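To illustrate why routing LLM output through services matters: here is a minimal sketch of a validating service facade. The service definition and parameter rules are hypothetical stand-ins for Moqui's service-level validation; raw SQL generated by an LLM would bypass all of this.

```python
# Hypothetical service definitions with per-parameter rules.
SERVICE_DEFS = {
    "update#PartyDetail": {
        "required": {"partyId"},
        "allowed": {"partyId", "firstName", "lastName"},
    },
}

def call_service(name: str, params: dict) -> dict:
    # reject unknown services and malformed parameter sets before
    # anything touches the database
    sdef = SERVICE_DEFS.get(name)
    if sdef is None:
        raise ValueError(f"unknown service: {name}")
    missing = sdef["required"] - params.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    unknown = params.keys() - sdef["allowed"]
    if unknown:
        raise ValueError(f"unexpected parameters: {sorted(unknown)}")
    # ... perform the actual entity update here ...
    return {"service": name, "params": params}

call_service("update#PartyDetail", {"partyId": "100000", "firstName": "Jo"})
# whereas this raises ValueError instead of silently corrupting data:
# call_service("update#PartyDetail", {"firstName": "Jo"})
```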

@schue @jonesde Check this out: Google Colab. It’s the Dolly 2 8 GB model. It runs on a 15 GB GPU, is an open model for business use, and answers questions about Moqui comprehensively.

Dolly 2 and the related stuff look really cool. From a brief read of the code there, it looks like there are lots of touch points to work with. I guess one of these days I might have to learn me some Python, and PyTorch and such.


I would train in Python and deploy in Java if possible.

Theoretically, DJL can run PyTorch under Jython. LangChain has important concepts around agents and memory already well underway. If we could integrate entities and services into its nomenclature (maybe by training on Swagger documentation), then we might be able to do GPT-4-ish things in a standalone configuration.
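One way to picture the "training on Swagger documentation" idea: turn an OpenAPI fragment (like the one Moqui can generate for its REST API) into the short tool-description strings a LangChain-style agent expects. The paths and summaries below are made up for illustration.

```python
# A made-up OpenAPI fragment standing in for Moqui's generated spec.
SPEC = {
    "paths": {
        "/parties/{partyId}/content": {
            "get": {"summary": "Get content records attached to a party"},
        },
        "/orders": {
            "post": {"summary": "Create a new order header"},
        },
    }
}

def tool_descriptions(spec: dict) -> list[str]:
    # flatten each path/method pair into a one-line tool description
    out = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            out.append(f"{method.upper()} {path}: {op['summary']}")
    return out

for line in tool_descriptions(SPEC):
    print(line)
# GET /parties/{partyId}/content: Get content records attached to a party
# POST /orders: Create a new order header
```

Feeding descriptions like these to an agent as its tool inventory is the usual LangChain pattern, and it sidesteps any actual model retraining.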

Could Moqui embed a Python interpreter the way it embeds a Groovy interpreter?

I think Jython translates Python to Java and then compiles it, so it is faster than an interpreter. Unfortunately, Jython only supports Python 2.7, not 3.

@schue Check out this paper: It’s a different perspective on small LLMs.