Strategies/Rules/Prompts to make LLMs usable with Moqui

Good Day Folks,

Does anyone have any recommended Strategies/Rules/Prompts to make using LLMs for Moqui development usable?

I have spent a little over 2 months attempting to evaluate Moqui as the foundation of an ERP/CRM solution for a customer. I currently have reliably bad results/slow progress in using AI code generation with Moqui. I am a routine user of Cursor, Claude Code, Copilot, ChatGPT, and DeepWiki, and I am comfortable crafting prompts and rules for Java development.

Some of the things I see:

  • Routine hallucinations, such as alternately extending entities with <entity extends=".."> or <extend-entity name=".."> when nothing but the name of the Mantle entity changes in the prompt
  • Referencing services by name when the location is wanted, and vice versa
  • Tests attempting to access screen REST APIs rather than service APIs, even though the service APIs are the only REST endpoints
  • And of course my favorite: examples of code for which, when asked, the LLM can find no evidence other than its own interpretation of comments in moqui-framework
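For reference, to the best of my knowledge the only supported way to extend an existing entity in Moqui is <extend-entity>. A minimal sketch of the correct form (the added field name is illustrative, not a real Mantle field, and the attribute names should be double-checked against the entity-definition XSD for your framework version):

```xml
<!-- Sketch only: adding a custom field to a Mantle entity via extend-entity.
     "equipmentClass" is an illustrative field, not part of mantle-udm. -->
<entities xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://moqui.org/xsd/entity-definition-3.xsd">
    <extend-entity entity-name="Product" package="mantle.product">
        <field name="equipmentClass" type="text-short"/>
    </extend-entity>
</entities>
```

The hallucinated `<entity extends="..">` form does not appear in the framework XSDs at all, which is why it fails at load time.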

I have created specialized document digests and RAG systems, as well as using the DeepWiki MCP, and I have no real results to report.

Are people using AI for doing reliable development with Moqui and, if so, how?

Thank you in advance

Jim Mochel

I have been using an approach based on Agent-OS with Claude Code (GitHub - buildermethods/agent-os: Agent OS is a system for injecting your codebase standards and writing better specs for spec-driven development.). The latest release is much more lightweight (it throws out around 70% of the codebase), relying more on Claude Code skills and focusing on documenting the standards.
It requires a real effort to set up, but in my experience it is less effort than training a person who is new to Moqui. You basically end up with several specialized agents, standards, skills, and cross-references between them, which works surprisingly well and even gets to follow your preferences in coding style and other areas.
Yes, in the beginning you get all of the hallucinations you mention. But if you refrain from fixing the code and make the LLM fix it for you, adding the necessary changes to your setup so it will not make that mistake in the future, it begins to work quite well after some effort.
Currently, I am using 15 Moqui-specific agents (it could probably be fewer; some of them were over-specialized to reduce the necessary context before the incorporation of skills and standards), about 48 standards files, 3 to 5 general guidelines, etc. In total, a bit over 140 files form my configuration of how Claude should help me code for Moqui projects. The files themselves are written by Claude, under very strict direction and supervision; the main secret is to detect when the LLM loses track and act in time, otherwise it can get really messy quite fast.
I have been thinking about sharing the files in some way, but they contain lots of internal information and are changing a lot, so I have not yet been able to make the effort of extracting the general aspects.

Thank you, that is helpful! After a ton of research last night, that was the direction I was heading, and it is great to know that someone has made it work.

Here are my suggestions, ‘enhanced’ with AI of course:

Mastering Moqui ERP with AI: A Developer’s Guide

Developing in Moqui often feels like “assembling” a system rather than just coding it. By using the right AI workflows, you can turn that assembly process into a high-speed pipeline.

1. Establish the “Brain” of Your Project: CLAUDE.md and GEMINI.md

Standard LLMs often hallucinate Moqui syntax because they mix it up with older frameworks like OFBiz or generic Java patterns. To prevent this, you must anchor your AI with Project Guidelines.

Create CLAUDE.md and GEMINI.md files in your project root. These serve as the “System Prompt” that the AI reads before every session.

  • Coding Standards: Specify that services should use Groovy and entities must follow the moqui-framework XSDs.
  • Folder Structure: Tell the AI where your components live (e.g., runtime/component/my-app).
  • No-Go Zones: Instruct the AI never to use invented patterns like <entity extends="..."> (this form does not exist in Moqui; use <extend-entity> instead).

Pro Tip: Include a list of frequently used shell commands (like ./gradlew load or java -jar moqui.war) in these files so the AI can help you run and test your changes immediately.
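As a rough sketch (every path and command here is a placeholder to adapt to your own project), such a guidelines file might start like:

```markdown
# CLAUDE.md — project guidelines (illustrative sketch)

## Coding standards
- Services are XML definitions with inline Groovy actions; entities must
  validate against the moqui-framework XSDs.
- Never use `<entity extends="...">`; the only extension mechanism is
  `<extend-entity>`.

## Layout
- Custom code lives in `runtime/component/my-app` (entity/, service/,
  screen/, data/).

## Commands
- `./gradlew load` — load seed/demo data
- `java -jar moqui.war` — run the server locally
```

The point is not the exact contents but that the file is short, concrete, and states prohibitions explicitly, since those are exactly the spots where the model otherwise falls back to generic patterns.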


2. The “Pattern Reference” Prompting Strategy

Moqui is built on the Data Model Resource Book patterns. The most efficient way to build new features is to point the AI to a “Gold Standard” already in the system.

Instead of saying “Create a service to track equipment,” use a Pattern Reference Prompt:

“Create a new entity Equipment in my component. Follow the same pattern and use the same standard fields (like description, lastUpdatedStamp) as the Product entity in mantle-udm.”

Why this works:

  • It ensures naming consistency (e.g., using description vs name).
  • It automatically includes Mantle integration logic that the AI might otherwise overlook.
  • It forces the AI to “look” at your existing codebase to maintain architectural harmony.
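For instance, that prompt might reasonably yield something along these lines. This is a sketch only: the package name and field types are assumptions to be checked against mantle-udm, and note that Moqui maintains lastUpdatedStamp automatically, so it need not be declared:

```xml
<!-- Illustrative entity following the Product naming conventions. -->
<entity entity-name="Equipment" package="my.app.asset">
    <field name="equipmentId" type="id" is-pk="true"/>
    <field name="description" type="text-medium"/>
    <field name="statusId" type="id"/>
    <!-- mirror the Product pattern: status via moqui.basic.StatusItem -->
    <relationship type="one" related="moqui.basic.StatusItem" short-alias="status"/>
</entity>
```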

3. Cloning Services with High Fidelity

Moqui services are the engine of your ERP. When creating a new service, refer to an existing service that has the same “flavor” (e.g., a CRUD service, a process service, or an integration service).

Prompt Example:

“Create a service called updateEquipmentStatus. Use the same pattern as update#Product from the Mantle Product services. Specifically, ensure it includes an ec.message check and follows the same transaction attributes.”

This approach prevents the AI from writing generic Groovy scripts and ensures it uses the Moqui Execution Context (ec) correctly.
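As a sketch of the kind of output such a prompt should produce (all names here are illustrative; the authoritative pattern is update#Product in mantle-usl, and the exact action elements should be verified against the service-definition XSD):

```xml
<!-- Illustrative CRUD-style service using XML actions and the ec context. -->
<service verb="update" noun="EquipmentStatus">
    <in-parameters>
        <parameter name="equipmentId" required="true"/>
        <parameter name="statusId" required="true"/>
    </in-parameters>
    <actions>
        <entity-find-one entity-name="my.app.asset.Equipment" value-field="equipment"/>
        <if condition="equipment == null">
            <return error="true" message="No Equipment found with ID ${equipmentId}"/>
        </if>
        <set field="equipment.statusId" from="statusId"/>
        <entity-update value-field="equipment"/>
    </actions>
</service>
```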


4. Post-Change Documentation: The “Closing Loop”

Large changes in an ERP—like adding new multi-tenant billing logic—can quickly become “black boxes.” Once the AI has finished a significant task, task it with documenting its own work.

The Workflow:

  1. Complete the code changes.
  2. Prompt: “Review the changes made in the last 3 files. Create a technical summary and save it as docs/features/billing-v2.md.”
  3. Ensure the document covers:
  • Data Model Changes: New entities or fields.
  • Service API: Input/output parameters.
  • Side Effects: Any EECAs (Entity-Event-Condition-Actions) or SECAs triggered.

Having a docs/ directory filled with AI-generated explainers makes it significantly easier to onboard other developers (or even a different AI agent) later.
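A minimal skeleton for such a feature document, following the checklist above (headings are a suggestion, not a Moqui convention), might be:

```markdown
# Billing v2 — technical summary (AI-generated, human-reviewed)

## Data model changes
- New entities and extended fields, with their packages.

## Service API
- Service names with input/output parameters.

## Side effects
- EECAs/SECAs triggered, scheduled jobs, messages emitted.
```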


5. Automated Testing Integration

Don’t let the AI stop at the code. Moqui’s XML-based testing is perfect for AI generation.

The Next Step:

“Now that you’ve created the Equipment entity and services, generate a MoquiTest XML file that creates a test record, updates it, and verifies the status change. Refer to ExampleTests.xml for the structure.”


Why are you coding? Is your Customer a brand-new, never-before-seen company that came out of nowhere? IMO, you need to understand Moqui’s process flow. I suppose you already do. If there’s no way to emulate the ERP/CRM flows through configuration, then you should write code. Just saying…

Until we can fine-tune a model on Moqui XML, you may get better results generating Groovy or Java code and mapping those methods into the service engine. There are reports that using AGENTS.md and friends can actually diminish performance, because you are inserting that prompt content all the time, even when it isn’t that relevant.

https://www.reddit.com/r/ClaudeAI/comments/1r7mvja/new_research_agentsmd_files_reduce_coding_agent/
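To illustrate the mapping described above: as I understand the service engine, a service definition can delegate its implementation to an external Groovy script instead of inline XML actions, roughly like this (the path is a placeholder, and the type/location attributes should be verified against the service-definition XSD):

```xml
<!-- Sketch: service whose implementation is generated Groovy rather than XML actions. -->
<service verb="update" noun="EquipmentStatus" type="script"
        location="component://my-app/script/UpdateEquipmentStatus.groovy">
    <in-parameters>
        <parameter name="equipmentId" required="true"/>
        <parameter name="statusId" required="true"/>
    </in-parameters>
</service>
```

As I understand it, the script receives the input parameters and the execution context (ec) in its binding, so most of the logic can be generated as plain Groovy, which current models handle far better than Moqui XML.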

Good Question jcigala.

There are several drivers for the custom coding work we’re doing. We’re a company with 40 years of history (aka legacy processes) that vary worldwide, so we are simultaneously trying to reconcile/simplify/clean up these processes while performing a mandatory rewrite of our existing ERP/CRM solution.

We are extending Mantle UDM models and using USL processes and services wherever possible (which it usually is).

We are not planning to use the Moqui screen functionality for the primary internal user-facing UI. We are only using it for the administrative portion of the UI (i.e. an industrial MDD UI).

The UI is being done in Angular talking to Moqui served REST APIs.

This is where a lot of trimming and fitting appears to be required for us. We are figuring out how to specify and implement REST APIs, and the underlying services, that conform to our requirements for tracing headers, error reporting, pagination, versioning, etc.
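For what it is worth, as I understand Moqui's REST support, custom endpoints are declared in a *.rest.xml file in a component's service/ directory and mapped onto services or entity operations. A sketch (all names are illustrative; mantle.rest.xml in mantle-usl has real examples, and the XSD reference should be checked for your version):

```xml
<!-- Sketch: REST resource mapping paths to an entity operation and a service. -->
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://moqui.org/xsd/rest-api-2.xsd"
        name="equipment" description="Equipment endpoints">
    <id name="equipmentId">
        <method type="get"><entity name="my.app.asset.Equipment" operation="one"/></method>
        <method type="put"><service name="my.app.EquipmentServices.update#EquipmentStatus"/></method>
    </id>
</resource>
```

Cross-cutting concerns like tracing headers and error envelopes would still need to be handled in the services themselves or in a servlet filter, which is where most of our trimming and fitting sits.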


hansbak: did the comment “Moqui’s XML-based testing is perfect for AI generation” come from an AI? I have looked and can find no examples of XML-based tests…

look at: (list also generated with AI)
In mantle-usl:

Framework/config level:

Other (screen/email):

The first four are entity or service definitions, not tests.
SpeedTest.xml is a screen that can execute an internal service for performance testing.
WelcomeTest.xml does not exist in any of the following repos that I have pulled down locally:

AuthorizeDotNet
example
HiveMind
mantle-braintree
mantle-edi
mantle-oagis
mantle-paytrace
mantle-rsis
mantle-shippo
mantle-ubpl
mantle-udm
mantle-usl
mantle-yotpo
MarbleERP
moqui-atomikos
moqui-aws
moqui-camel
moqui-cups
moqui-demo
moqui-docker
moqui-elasticsearch
moqui-fop
moqui-framework
moqui-hazelcast
moqui-image
moqui-kie
moqui-mjml
moqui-org
moqui-orientdb
moqui-poi
moqui-quasar
moqui-runtime
moqui-sftp
moqui-sso
moqui-wikitext
PopCommerce
PopRestStore
SimpleScreens
WeCreate

I am sorry, but AFAICT Moqui has no XML-based tests.

Jim

@jmochel

Thanks for raising this. I completely relate to the challenges you described. Using LLMs with Moqui can be frustrating when the model starts hallucinating patterns that don’t really match Moqui’s architecture.

From my experience, the core issue is not just prompting, but lack of structured context. Moqui has its own strong conventions around entities, services, XML structure, screen definitions, and framework patterns. General LLMs are not trained deeply on those specifics, so they tend to fall back to generic Java or Spring-style assumptions.

This is exactly why I recently submitted a PR to the framework adding initial AI guiding files such as AGENTS.md and related instruction files. The idea is to give AI tools a clear, structured understanding of:

  • Moqui conventions
  • Expected XML and service patterns
  • Architectural boundaries
  • Things that must not be invented

Instead of relying only on clever prompts, we give the model grounded guidance inside the project itself.

Along the same direction, I have pushed a separate moqui-ai-skill component. The goal there is to define reusable “skills” for Moqui projects so AI agents can operate with clearer constraints and structured knowledge. Rather than free-form prompting, the idea is:

  • Provide explicit rules for entity and service generation
  • Describe valid patterns and anti-patterns
  • Reduce hallucination by defining what is allowed

I see this as moving from “prompt engineering” toward “project-aware AI integration.” If we standardize how a Moqui project describes itself to AI, we can make LLM usage much more reliable and less trial-and-error.

I believe combining structured guiding files in the framework with reusable AI skills at the component level can significantly improve usability of LLMs with Moqui.

@schue I think I can back you up on this observation. I have been heavily using Antigravity, which essentially operates in RAG mode (with the added capacity to access Moqui files thanks to your moqui-mcp component), but I just had it work for a couple of hours to fix a problem that it had already fixed just last week. When I “confronted” it, said I was disappointed, and asked whether it would have done better with a local LLM trained on Moqui, it responded something like “I feel really bad” (just kidding) and “of course”.

I want to get involved with this, and I would like to set up a local machine with a low-cost GPU board. Gemini recommended a refurbished HP Z4 series or Dell Precision series with an RTX 3060. Does that sound reasonable? I’ve heard you mention Qwen. Do you recommend that for the model? What about the engine? Any other recommendations?

@nirendra I think that I am following much of your approach. I will try to study up on your work. As @schue and I posted (Strategies/Rules/Prompts to make LLMs usable with Moqui - #6 by schue), there seem to be limits on how far you can go by giving it things like GEMINI.md. In the Antigravity environment, that corresponds to the “.agent” folder with “rules” and “skills” subfolders. I have given it many “.md” files and it tends to ignore them.

My thoughts are that if we don’t work together then there will be a lot of wasted effort. Ean seems to be the leader on a lot of this development. I am going out on a limb and saying that we should settle on a model and work together on training it on Moqui XML.

I may be wrong about this, but there has to be some effort to work together on the right path. At least we need to be posting here.

I just wanted to throw in that I think one of the areas in which we need to do more work to make Moqui AI-ready is to expand the library of screen tags that Moqui can handle. Right now we have things like “form-list”, “subscreens-tabs”, etc. I think that we need to look at Quasar components and add ones that we think we might need. I have started doing this in my GitHub - byersa/moqui-ai: Implements Moqui generation by bridging the use of Antigravity with moqui-mcp component. Look at the MoquiAiScreenMacros.qvt2.ftl file.

I have been working on separating the organization-specific aspects from the more general in our LLM setup. We have been working with Claude, but tried to keep it extendable to other LLMs with minimal effort. To include the organization-specific aspects we defined an overlay logic that appears to be working well.
You are invited to check it out at https://github.com/moitcl/moqui-agent-os ; there is a README.md that should explain how it works. This repository includes knowledge about Moqui as well as process, based on the spec-driven development approach proposed in Agent-OS. The way of dividing the knowledge into skills and standards also follows the ideas from Agent-OS.
Hope you find it useful.

Thanks for this, Jens. I just cloned it into my project and asked Antigravity to merge it into my “moqui-ai” component, changing the name, “claude”, where it seemed appropriate. I will let you know how it goes.

I realize that the AI coding world is very Claude-centric these days, but it seems like AGY is going to be a real competitor. This Google-produced piece is interesting: