I apologize for injecting a lot of uneducated noise into this discussion of Strategies/Rules/Prompts to make LLMs usable with Moqui. I am eager to consolidate our work so that Moqui can make some real progress in the use of AI. I know @jmochel wanted to start such a thread, so maybe we can use this one for a while.
I took a look at the RAG material that @jenshp shared, and it looks like a very valuable resource. I restructured my Antigravity `.agent` folder to use the same structure and to reference the moqui-agent-os component. I feel that the work I have been doing deals more with the UI aspects of AI: the use of MCP tools (@schue's moqui-mcp and webmcp.dev) and the use of blueprints to guide app generation.
I think it will take a lot of work to consolidate our efforts, but I think it is necessary. I asked Gemini to look at the work I have done with it, Jens's moqui-agent-os component, and @hansbak's comments, and to analyze where we agree and where we diverge. I also asked it to integrate @schue's work on moqui-mcp. It gave me three files, and I am just going to paste them in here:
Blueprint: Moqui AI Community Alignment Strategy
This document synthesizes the strategic work of the MoquiAi project with the foundational patterns of Jens’s moqui-agent-os and the practical prompting workflows suggested by Hans. It serves as a blueprint for unified AI-driven development in the Moqui ecosystem.
1. Executive Summary: The Four Pillars of Alignment
To achieve high-fidelity AI collaboration across the Moqui community, we propose a four-pillar approach:
- **Foundation (The OS):** Standardize on `moqui-agent-os` (Jens) for core framework patterns and XML/Groovy syntax rules.
- **Interactive Bridge (The UI):** Standardize on MoquiAi Macro Extensions (`form-query`, `bp-parameter`) and Blueprints for metadata-driven frontends.
- **Connectivity & Semantics (The Protocol):** Standardize on `moqui-mcp` (Ean) and MARIA identifiers to treat AI as a first-class, "accessibility-aware" user.
- **Workflow (The Loop):** Adopt Pattern Reference Prompting (Hans) and Closing-the-Loop Documentation as the interaction standard.
2. Comparative Analysis
| Feature | Jens (moqui-agent-os) | Hans (Suggestions) | MoquiAi (Blueprints/WebMCP) |
|---|---|---|---|
| Anchoring | Overlay system & symlinks to `CLAUDE.md`. | Root `CLAUDE.md`/`GEMINI.md` as "Brain". | Internal `.agent` directory with shadowing protocol. |
| Patterns | Domain-specific `references/` guides. | "Pattern Reference Prompting" (Mantle UDM). | JSON-LD "Blueprints" for UI consistency. |
| Iteration | Universal Task Execution Protocol. | "Closing the Loop" via post-task docs. | "Shadowing" for local vs global logic. |
| Interaction | Command-based (slash commands). | High-fidelity CRUD and logic cloning. | WebMCP interactive browser bridge. |
Points of Convergence
- **Context Management:** All parties emphasize that AI must be "anchored" with project guidelines (`CLAUDE.md`, `README.md`) to prevent hallucinations.
- **Pattern-First Logic:** Standardizing on existing "Gold Standards" (like `mantle-udm` or the `Example` apps) rather than writing from scratch.
- **Tiered Knowledge:** Recognizing that foundational framework knowledge should be separated from specific business domain logic.
3. The Unified Community Blueprint
A. The Directory Taxonomy (Mirroring Jens’s OS)
All community-aligned Moqui components should adopt a standardized .agent (or .agent-os) directory structure:
- `guidelines/`: Architectural strategy (e.g., Blueprint definitions).
- `instructions/`: Workflow "how-tos" (e.g., WebMCP setup).
- `standards/`: Declarative rules (e.g., Groovy usage, security).
- `templates/`: XML/Groovy snippets (e.g., Entity/Service CRUD patterns).
- `references/`: Domain-specific pattern guides.
B. Standard Prompting Workflows (Integrating Hans’s Strategy)
- **The Pattern Reference:** Always instruct the AI to "look at" a specific Mantle or Framework file before generating new code.
- **The CRUD Clone:** When building services, explicitly refer to high-fidelity service patterns (e.g., `update#Product`).
- **The Closing Loop:** Every significant task should end with the AI generating a technical summary in `docs/features/` or a specialized KI.
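A single task prompt that combines all three workflows might read something like this (the file path, entity name, and summary location are purely illustrative):

```text
1. Look at the existing update#Product service pattern in Mantle
   before generating any code.
2. Following that pattern, create CRUD services for the WorkSite
   entity in this component.
3. When done, write a short technical summary of what you built and
   why to docs/features/work-site-services.md.
```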
C. The Interactive Layer (Integrating MoquiAi Strategy)
- **Extensible Macro DSL:** Use custom tags like `<form-query>`, `<menu-dropdown>`, and `<bp-parameter>` (defined in `moqui-ai-screen.xsd`) to bridge Moqui logic with reactive state. See `moqui-ai-macro-extensions.md` in the component's `.agent/references/` directory for a deep dive.
- **Blueprints as Source of Truth:** Move away from raw HTML/CSS generation. The AI should generate Blueprints (JSON-LD), which are rendered by the `DeterministicVueRenderer`.
- **WebMCP for Verification:** Use the WebMCP bridge to allow the AI to "see" the rendered output, take screenshots, and interact with the DOM during the VALIDATE phase of the task.
4. Implementation Guidelines for Developers
- **Bootstrap:** Install `moqui-agent-os` as a foundational component in your Moqui runtime.
- **Overlay:** Create your local project `.agent` folder. Add your unique business rules in `guidelines/` and `standards/`.
- **Anchor:** Use a root `CLAUDE.md` or `GEMINI.md` that directs the AI to prioritize the local `.agent` folder over the global `moqui-agent-os` instructions.
- **Notify on Conflict:** If a global standard (from Jens) conflicts with a local requirement, explicitly document it in the project's `standards/` folder so the AI knows which "branch" to follow.
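A minimal root anchor file covering the Anchor and Notify-on-Conflict steps might read something like this (a sketch, not a prescribed wording):

```markdown
# CLAUDE.md

- Read the local `.agent/` directory FIRST; rules in `.agent/standards/`
  and `.agent/guidelines/` take priority.
- Fall back to the global `moqui-agent-os` component for framework
  patterns and XML/Groovy syntax rules.
- Where a local standard conflicts with a global one, follow the local
  standard; each conflict is documented in `.agent/standards/`.
```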
5. Conclusion: A Shared Vision
By meshing Jens’s structural foundation, Hans’s practical prompting workflows, and the MoquiAi interactive bridge, we move from “AI as a code generator” to “AI as a collaborative system architect.” This unified approach ensures that code remains consistent, documentation stays current, and the UI is natively agent-ready from day zero.
Deep Dive: MoquiAi Screen Macro Extensions
The MoquiAi project extends the standard Moqui XML Screen DSL (via moqui-ai-screen.xsd) to create an Interactive Bridge between Moqui’s server-side logic and modern reactive frontends. This strategy is critical for making Moqui applications “Agent-Ready.”
1. The Strategy: Instructions-as-UI
Standard Moqui HTML rendering produces complex DOM trees that are difficult for AI agents to reason about. The MoquiAi macro extensions solve this by:
- **Declarative Intent:** Using semantic tags (e.g., `<screen-header>`, `<form-query>`) instead of generic `<div>` or `<span>` blocks.
- **Blueprint Emission:** The `DeterministicVueRenderer` transforms these macros into JSON-LD Blueprints, which provide a clean, structured representation of the UI for both the Vue client and the AI agent.
2. Key Macro Patterns
A. The <form-query> Pattern
In traditional Moqui, search forms are often tightly coupled to the table rendering. The <form-query> macro creates a standalone, client-side filtering container.
- **Functionality:** Defines a set of search fields (`<form-query-field>`) that sync with a `form-list`.
- **Agent Benefit:** When an AI sees a `<form-query>`, it immediately knows exactly which parameters can be filtered and which transitions (`options-url`) yield valid search criteria.
- **Implementation:** It supports `enum-type-id` and `status-type-id` directly, allowing the AI to populate dropdowns without complex entity-find boilerplate.
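Since the XSD itself is not shown here, the snippet below is only a sketch of how such a macro might look; `form-query-field`, `enum-type-id`, `status-type-id`, and `options-url` come from the description above, while the other attribute names and values are assumptions:

```xml
<!-- Illustrative sketch: a standalone filter container synced to a
     form-list. Names beyond those described in the text are
     assumptions, not the actual moqui-ai-screen.xsd. -->
<form-query name="FindWorkEffort" form-list="WorkEffortList">
    <form-query-field name="workEffortName"/>
    <!-- status-type-id / enum-type-id let the renderer populate
         dropdowns without entity-find boilerplate -->
    <form-query-field name="statusId" status-type-id="WorkEffortStatus"/>
    <form-query-field name="purposeEnumId" enum-type-id="WorkEffortPurpose"/>
    <!-- options-url names a transition that yields valid search criteria -->
    <form-query-field name="facilityId" options-url="getFacilityOptions"/>
</form-query>
```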
B. The State Bridge: <bp-parameter>
The biggest hurdle in building SPAs with Moqui is syncing the server-side `ec.context` with the client-side Pinia store.
- **The Bridge:** `<bp-parameter>` takes a server-side value (e.g., `${agendaContainerId}`) and maps it directly to a named field in a specific Pinia store (e.g., `useMeetingsStore.activeContainerId`).
- **Interactive Benefit:** This allows the AI to "know" the state of the application in the browser and manipulate it by updating the store, which in turn triggers reactive UI updates.
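Using the server-side value expression and store/field names from the description above (the attribute names themselves are assumptions), the bridge might be declared like this:

```xml
<!-- Illustrative: push a server-side context value into a named field
     of a Pinia store; attribute names are assumptions. -->
<bp-parameter value="${agendaContainerId}"
              store="useMeetingsStore"
              field="activeContainerId"/>
```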
C. Layout & Responsive Grid
- `<container-row>` and `<row-col>`: These map Moqui's logical structure directly to Quasar's flex grid.
- `<screen-split>`: A resizable splitter that supports dynamic component loading. This is ideal for "Master-Detail" views where the AI needs to swap out content on the right panel based on a selection on the left.
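The Master-Detail case might be sketched like this; `<screen-split>` comes from the text, but the child element and attribute names here are assumptions for illustration only:

```xml
<!-- Illustrative Master-Detail sketch: selecting a row on the left
     swaps the component loaded in the right panel. Child element and
     attribute names are assumptions, not the actual XSD. -->
<screen-split>
    <split-panel size="30">
        <!-- master: the selection list -->
        <form-list name="PatientList" list="patientList"/>
    </split-panel>
    <split-panel>
        <!-- detail: dynamically loaded content -->
        <dynamic-container id="patientDetail"/>
    </split-panel>
</screen-split>
```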
3. Semantic Mapping with semantic-handle
Nearly every custom macro supports a `semantic-handle` attribute.
- **Concept:** Links a UI element to a known Mantle entity, an AI intent, or a "domain concept" from the `moqui-agent-os` references.
- **Example:** `<screen-content semantic-handle="PatientClinicalDashboard">`.
- **Outcome:** The AI agent doesn't just see a "page container"; it sees a Clinical Dashboard with specific expected behaviors and data patterns.
4. Integration with Community Strategy
While Jens’s moqui-agent-os provides the Foundational OS (patterns for entities/services) and Hans provides Prompting Workflows (how to talk to the AI), these macro extensions provide the Implementation DSL.
| Component | Role |
|---|---|
| `moqui-agent-os` | The "Grammar" (Standards and Guides). |
| Hans's Work | The "Communication Style" (Pattern Reference Prompting). |
| MoquiAi Macros | The "Vocab" (Declarative UI tags and State Bridging). |
5. Conclusion
Extending the Moqui Screen DSL is not an aesthetic choice; it is a structural necessity for AI collaboration. By using these macros, developers can build UIs that are natively “Agent-Readable.” The AI no longer has to guess where a button is or how a search works—it interacts with the DSL, and the DeterministicVueRenderer handles the complex bridging to the browser.
Deep Dive: Moqui MCP and the MARIA Semantic Layer
Ean's `moqui-mcp` component provides the core infrastructure for connecting AI agents directly to the Moqui ecosystem. The breakthrough insight in this component is the MARIA format, which fundamentally changes how we think about "AI-Ready" user interfaces.
1. The Vision: AI as an Accessibility-Challenged User
Traditional AI-UI interaction (like Playwright or Vision-based models) treats the AI as a “Human Mimic” that scrapes the DOM or looks at screenshots. This is high-latency, expensive, and fragile.
The MARIA (MCP Accessible Rich Internet Applications) philosophy flips this:
- **Insight:** AI agents cannot "see" pixels, interpret CSS layouts, or understand visual hierarchy. They are effectively "accessibility-challenged."
- **Solution:** Just as we use ARIA for screen readers, we use MARIA to provide a structured Accessibility Tree in JSON format.
- **Benefit:** The agent gets pure semantics (Roles, Names, States, Actions) without the "noise" of HTML/CSS.
2. The MARIA Identifier: Why it is Mandatory
For an agent to navigate a Moqui screen via moqui-mcp, every UI artifact MUST have high-fidelity identifiers.
Why IDs and Names Matter:
- **Navigation:** Without a clear `name` or `id`, the agent cannot distinguish between multiple instances of a component (e.g., three different "Submit" buttons).
- **Action Binding:** The MCP tool `moqui_browse_screens` relies on MARIA roles (e.g., `button`, `textbox`, `grid`) to understand what actions are possible.
- **Stability:** While CSS classes and HTML structure may change, the MARIA identifier acts as a Stable Semantic Contract.
Developer Requirement:
When building screens or Blueprints in the MoquiAi ecosystem, you MUST decorate your XML and JSON-LD with:
- `role`: What is this? (e.g., `form`, `grid`, `heading`).
- `name`: A unique, human-readable label (e.g., `CreatePersonForm`).
- `id`: A unique machine-readable key (e.g., `partyIdField`).
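Taken together, a MARIA node for the examples above might look roughly like this; the overall JSON shape and the `children`/`actions` keys are assumptions, since only the `role`/`name`/`id` triple is specified here:

```json
{
  "role": "form",
  "name": "CreatePersonForm",
  "children": [
    {
      "role": "textbox",
      "name": "Party ID",
      "id": "partyIdField",
      "actions": ["setValue"]
    },
    {
      "role": "button",
      "name": "CreatePersonSubmit",
      "id": "createPersonSubmit",
      "actions": ["click"]
    }
  ]
}
```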
3. moqui-mcp Architecture
The moqui-mcp component is more than just a data exporter; it’s an Agent Runtime:
- **JSON-RPC Bridge:** Provides a standardized endpoint for any model (Claude, GPT, Ollama) to interact with Moqui.
- **Secure Impersonation:** Agents execute tools by impersonating a Moqui user, ensuring that the AI never bypasses existing Permission logic (RBAC).
- **Self-Guided Narratives:** Screens include `uiNarrative` blocks: natural-language descriptions that guide the AI on how to use the screen.
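As a rough sketch, a tool invocation over the bridge could look like this; the `tools/call` envelope follows the standard MCP JSON-RPC convention, while the argument name is an assumption (the actual `moqui-mcp` schema is not shown here):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "moqui_browse_screens",
    "arguments": {
      "screenPath": "apps/example/Example"
    }
  }
}
```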
4. The Unified Alignment (The Three-Tier Model)
With the addition of moqui-mcp, we now have a complete, cohesive community strategy:
| Tier | Component | Analogy | Responsibility |
|:---|:---|:---|:---|
| Logic/Grammar | moqui-agent-os (Jens) | The Brain | Foundation, Entity patterns, technical standards. |
| Interface/DSL | moqui-ai (Us) | The Body | Blueprints, Macro extensions, WebMCP bridge. |
| Protocol/Senses | moqui-mcp (Ean) | The Voice | Connectivity, MARIA semantics, Agent impersonation. |
5. Conclusion: “The Identifier is the Map”
In a MARIA-powered world, the identifier is the agent’s map. If our UI artifacts don’t have clear, semantic identifiers, the agent is effectively “blind.” By standardizing on the MARIA format across all Moqui components, we ensure that AI agents can perform “real jobs in real business systems” with the same precision—and security—as a human user.