Strategies/Rules/Prompts to make LLMs usable with Moqui

Until we can fine-tune a model on Moqui XML, you may get better results generating Groovy or Java code and mapping those methods into the service engine. There are also reports that AGENTS.md and similar files can actually diminish performance, because their content is injected into the prompt on every request, even when it isn't relevant to the task at hand.
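The mapping described above can be sketched as a Moqui service definition that delegates to a plain Java method, so the LLM only has to generate conventional Java rather than Moqui's XML DSL. This is a hypothetical sketch: the component, class, method, and parameter names below are made up, and the `type="java"`/`location`/`method` attributes follow Moqui's service-definition conventions, so verify them against the current service-definition XSD before relying on them.

```xml
<!-- Hypothetical service file, e.g. component/example/service/ExampleServices.xml -->
<services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://moqui.org/xsd/service-definition-3.xsd">
    <!-- Thin XML wrapper: all real logic lives in the generated Java class -->
    <service verb="calculate" noun="OrderTotal" type="java"
            location="com.example.OrderServices" method="calculateOrderTotal">
        <in-parameters>
            <parameter name="orderId" required="true"/>
        </in-parameters>
        <out-parameters>
            <parameter name="orderTotal" type="BigDecimal"/>
        </out-parameters>
    </service>
</services>
```

The Java side would then be an ordinary static method (for example `public static Map<String, Object> calculateOrderTotal(ExecutionContext ec)`, following Moqui's usual Java-service convention), which is the kind of code current models produce far more reliably than Moqui XML actions.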

https://www.reddit.com/r/ClaudeAI/comments/1r7mvja/new_research_agentsmd_files_reduce_coding_agent/