Further discussion on AI cooperation

I apologize for injecting a lot of uneducated noise into the discussion "Strategies/Rules/Prompts to make LLMs usable with Moqui." I am eager to consolidate our work so that Moqui can make some real progress in the use of AI. I know that @jmochel wanted to start such a thread, so maybe we can use this one for a while.

I took a look at the RAG material that @jenshp shared and it looks like a very valuable resource. I have restructured my Antigravity .agent folder to use the same structure and to reference the moqui-agent-os component. The work I have been doing deals more with the UI aspects of AI: the use of MCP tools (@schue's moqui-mcp and webmcp.dev) and the use of blueprints to guide app generation.

I think it will take a lot of work to consolidate our efforts, but I believe it is necessary. I asked Gemini to look at the work I have done with it, Jens’s moqui-agent-os component, and @hansbak's comments, and to analyze where we agree and where we diverge. I also asked it to integrate @schue's work on moqui-mcp. It gave me three files, which I will paste here:


Blueprint: Moqui AI Community Alignment Strategy

This document synthesizes the strategic work of the MoquiAi project with the foundational patterns of Jens’s moqui-agent-os and the practical prompting workflows suggested by Hans. It serves as a blueprint for unified AI-driven development in the Moqui ecosystem.

1. Executive Summary: The Four Pillars of Alignment

To achieve high-fidelity AI collaboration across the Moqui community, we propose a four-pillar approach:

  1. Foundation (The OS): Standardize on moqui-agent-os (Jens) for core framework patterns and XML/Groovy syntax rules.

  2. Interactive Bridge (The UI): Standardize on MoquiAi Macro Extensions (form-query, bp-parameter) and Blueprints for metadata-driven frontends.

  3. Connectivity & Semantics (The Protocol): Standardize on moqui-mcp (Ean) and MARIA identifiers to treat AI as a first-class, “accessibility-aware” user.

  4. Workflow (The Loop): Adopt Pattern Reference Prompting (Hans) and Closing-the-Loop Documentation as the interaction standard.


2. Comparative Analysis

| Feature | Jens (moqui-agent-os) | Hans (Suggestions) | MoquiAi (Blueprints/WebMCP) |
| :--- | :--- | :--- | :--- |
| Anchoring | Overlay system & symlinks to CLAUDE.md | Root CLAUDE.md/GEMINI.md as “Brain” | Internal .agent directory with shadowing protocol |
| Patterns | Domain-specific references/ guides | “Pattern Reference Prompting” (Mantle UDM) | JSON-LD “Blueprints” for UI consistency |
| Iteration | Universal Task Execution Protocol | “Closing the Loop” via post-task docs | “Shadowing” for local vs. global logic |
| Interaction | Command-based (slash commands) | High-fidelity CRUD and logic cloning | WebMCP interactive browser bridge |

Points of Convergence

  • Context Management: All parties emphasize that AI must be “anchored” with project guidelines (CLAUDE.md, README.md) to prevent hallucinations.
  • Pattern-First Logic: Standardizing on existing “Gold Standards” (like mantle-udm or Example apps) rather than writing from scratch.
  • Tiered Knowledge: Recognizing that foundational framework knowledge should be separated from specific business domain logic.

3. The Unified Community Blueprint

A. The Directory Taxonomy (Mirroring Jens’s OS)

All community-aligned Moqui components should adopt a standardized .agent (or .agent-os) directory structure:

  • guidelines/: Architectural strategy (e.g., Blueprint definitions).
  • instructions/: Workflow “how-tos” (e.g., WebMCP setup).
  • standards/: Declarative rules (e.g., Groovy usage, Security).
  • templates/: XML/Groovy snippets (e.g., Entity/Service CRUD patterns).
  • references/: Domain-specific pattern guides.
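
The taxonomy above might look like this in practice (the individual file names are illustrative, not prescriptive):

```
.agent/
├── guidelines/
│   └── blueprint-strategy.md      # architectural strategy
├── instructions/
│   └── webmcp-setup.md            # workflow how-tos
├── standards/
│   └── groovy-usage.md            # declarative rules
├── templates/
│   └── entity-crud.xml            # XML/Groovy snippets
└── references/
    └── order-domain-patterns.md   # domain-specific pattern guides
```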

B. Standard Prompting Workflows (Integrating Hans’s Strategy)

  • The Pattern Reference: Always instruct the AI to “look at” a specific Mantle or Framework file before generating new code.
  • The CRUD Clone: When building services, explicitly refer to high-fidelity service patterns (e.g., update#Product).
  • The Closing Loop: Every significant task should end with the AI generating a technical summary in docs/features/ or a specialized KI.
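
As an illustration, a single prompt could combine all three workflows (the file paths and service names below are hypothetical examples, not established conventions):

```
Look at mantle-udm/entity/ProductEntities.xml and the update#Product service
before generating anything (Pattern Reference).
Then generate create/update/delete services for WorkEffortNote following the
same pattern (CRUD Clone).
When done, write a short technical summary to docs/features/work-effort-notes.md
(Closing the Loop).
```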

C. The Interactive Layer (Integrating MoquiAi Strategy)

  • Extensible Macro DSL: Use custom tags like <form-query>, <menu-dropdown>, and <bp-parameter> (defined in moqui-ai-screen.xsd) to bridge Moqui logic with reactive state. See [moqui-ai-macro-extensions.md](file:///home/byersa/IdeaProjects/aitree-project/runtime/component/moqui-ai/.agent/references/moqui-ai-macro-extensions.md) for a deep dive.
  • Blueprints as Source of Truth: Move away from raw HTML/CSS generation. The AI should generate Blueprints (JSON-LD) which are rendered by the DeterministicVueRenderer.
  • WebMCP for Verification: Use the WebMCP bridge to allow the AI to “see” the rendered output, take screenshots, and interact with the DOM during the VALIDATE phase of the task.

4. Implementation Guidelines for Developers

  1. Bootstrap: Install moqui-agent-os as a foundational component in your Moqui runtime.
  2. Overlay: Create your local project .agent folder. Add your unique business rules in guidelines/ and standards/.
  3. Anchor: Use a root CLAUDE.md or GEMINI.md that directs the AI to prioritize the local .agent folder over the global moqui-agent-os instructions.
  4. Notify on Conflict: If a global standard (from Jens) conflicts with a local requirement, explicitly document it in the project’s standards/ folder so the AI knows which “branch” to follow.

5. Conclusion: A Shared Vision

By meshing Jens’s structural foundation, Hans’s practical prompting workflows, and the MoquiAi interactive bridge, we move from “AI as a code generator” to “AI as a collaborative system architect.” This unified approach ensures that code remains consistent, documentation stays current, and the UI is natively agent-ready from day zero.

Deep Dive: MoquiAi Screen Macro Extensions

The MoquiAi project extends the standard Moqui XML Screen DSL (via moqui-ai-screen.xsd) to create an Interactive Bridge between Moqui’s server-side logic and modern reactive frontends. This strategy is critical for making Moqui applications “Agent-Ready.”

1. The Strategy: Instructions-as-UI

Standard Moqui HTML rendering produces complex DOM trees that are difficult for AI agents to reason about. The MoquiAi macro extensions solve this by:

  • Declarative Intent: Using semantic tags (e.g., <screen-header>, <form-query>) instead of generic <div> or <span> blocks.
  • Blueprint Emission: The DeterministicVueRenderer transforms these macros into JSON-LD Blueprints, which provide a clean, structured representation of the UI for both the Vue client and the AI agent.

2. Key Macro Patterns

A. The <form-query> Pattern

In traditional Moqui, search forms are often tightly coupled to the table rendering. The <form-query> macro creates a standalone, client-side filtering container.

  • Functionality: Defines a set of search fields (<form-query-field>) that sync with a form-list.
  • Agent Benefit: When an AI sees a <form-query>, it immediately knows exactly which parameters can be filtered and which transitions (options-url) yield valid search criteria.
  • Implementation: It supports enum-type-id and status-type-id directly, allowing the AI to populate dropdowns without complex entity-find boilerplate.
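
A sketch of what a <form-query> declaration might look like (the exact attribute names would come from moqui-ai-screen.xsd, which I have not verified; treat this as illustrative):

```xml
<form-query name="FindParties" form-list="PartyList">
    <!-- free-text filter field synced with the PartyList form-list -->
    <form-query-field field="organizationName"/>
    <!-- dropdown populated from a StatusItem type, no entity-find boilerplate -->
    <form-query-field field="statusId" status-type-id="PartyStatus"/>
    <!-- dropdown populated from an Enumeration type -->
    <form-query-field field="partyTypeEnumId" enum-type-id="PartyType"/>
</form-query>
```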

B. The State Bridge: <bp-parameter>

The biggest hurdle in building SPAs with Moqui is syncing the server-side ec.context with the client-side Pinia store.

  • The Bridge: <bp-parameter> takes a server-side value (e.g., ${agendaContainerId}) and maps it directly to a named field in a specific Pinia store (e.g., useMeetingsStore.activeContainerId).
  • Interactive Benefit: This allows the AI to “know” the state of the application in the browser and manipulate it by updating the store, which in turn triggers reactive UI updates.
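
A minimal sketch of the state bridge, assuming the attribute names shown here (unverified against the actual XSD):

```xml
<!-- maps the server-side context value into a named field of a Pinia store -->
<bp-parameter name="activeContainerId"
              value="${agendaContainerId}"
              store="useMeetingsStore"/>
```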

C. Layout & Responsive Grid

  • <container-row> and <row-col>: These map Moqui’s logical structure directly to Quasar’s flex grid.
  • <screen-split>: A resizable splitter that supports dynamic component loading. This is ideal for “Master-Detail” views where the AI needs to swap out content on the right panel based on a selection on the left.
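
A hypothetical master-detail layout combining these macros (the child element and attribute names are assumptions for illustration):

```xml
<screen-split semantic-handle="MeetingMasterDetail">
    <!-- left: master list; selection drives the right panel -->
    <split-panel size="30">
        <form-list name="MeetingList" list="meetingList">
            <field name="meetingName"><default-field><display/></default-field></field>
        </form-list>
    </split-panel>
    <!-- right: detail component loaded dynamically per selection -->
    <split-panel>
        <dynamic-container id="meetingDetail" transition="getMeetingDetail"/>
    </split-panel>
</screen-split>
```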

3. Semantic Mapping with semantic-handle

Nearly every custom macro supports a semantic-handle attribute.

  • Concept: Links a UI element to a known Mantle entity, an AI intent, or a “domain concept” from the moqui-agent-os references.
  • Example: <screen-content semantic-handle="PatientClinicalDashboard">.
  • Outcome: The AI agent doesn’t just see a “page container”; it sees a Clinical Dashboard with specific expected behaviors and data patterns.

4. Integration with Community Strategy

While Jens’s moqui-agent-os provides the Foundational OS (patterns for entities/services) and Hans provides Prompting Workflows (how to talk to the AI), these macro extensions provide the Implementation DSL.

| Component | Role |
| :--- | :--- |
| moqui-agent-os | The “Grammar” (Standards and Guides) |
| Hans’s Work | The “Communication Style” (Pattern Reference Prompting) |
| MoquiAi Macros | The “Vocab” (Declarative UI tags and State Bridging) |

5. Conclusion

Extending the Moqui Screen DSL is not an aesthetic choice; it is a structural necessity for AI collaboration. By using these macros, developers can build UIs that are natively “Agent-Readable.” The AI no longer has to guess where a button is or how a search works—it interacts with the DSL, and the DeterministicVueRenderer handles the complex bridging to the browser.

Deep Dive: Moqui MCP and the MARIA Semantic Layer

Ean’s moqui-mcp component provides the core infrastructure for connecting AI agents directly to the Moqui ecosystem. The breakthrough insight in this component is the MARIA format, which fundamentally changes how we think about “AI-Ready” user interfaces.


1. The Vision: AI as an Accessibility-Challenged User

Traditional AI-UI interaction (like Playwright or Vision-based models) treats the AI as a “Human Mimic” that scrapes the DOM or looks at screenshots. This is high-latency, expensive, and fragile.

The MARIA (MCP Accessible Rich Internet Applications) philosophy flips this:

  • Insight: AI agents cannot “see” pixels, interpret CSS layouts, or understand visual hierarchy. They are effectively “accessibility-challenged.”

  • Solution: Just as we use ARIA for screen readers, we use MARIA to provide a structured Accessibility Tree in JSON format.

  • Benefit: The agent gets pure semantics (Roles, Names, States, Actions) without the “noise” of HTML/CSS.
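
A sketch of what a MARIA-style accessibility tree for a simple search screen might look like (the field names here are illustrative; I have not checked the actual moqui-mcp output format):

```json
{
  "role": "form",
  "name": "FindPartyForm",
  "children": [
    { "role": "textbox", "name": "Organization Name", "id": "organizationName",
      "state": { "value": "" } },
    { "role": "button", "name": "Find", "id": "findSubmit",
      "actions": ["click"] }
  ]
}
```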


2. The MARIA Identifier: Why it is Mandatory

For an agent to navigate a Moqui screen via moqui-mcp, every UI artifact MUST have high-fidelity identifiers.

Why IDs and Names Matter:

  • Navigation: Without a clear name or id, the agent cannot distinguish between multiple instances of a component (e.g., three different “Submit” buttons).

  • Action Binding: The MCP tool moqui_browse_screens relies on MARIA roles (e.g., button, textbox, grid) to understand what actions are possible.

  • Stability: While CSS classes and HTML structure may change, the MARIA identifier acts as a Stable Semantic Contract.

Developer Requirement:

When building screens or Blueprints in the MoquiAi ecosystem, you MUST decorate your XML and JSON-LD with:

  1. role: What is this? (e.g., form, grid, heading).

  2. name: A unique, human-readable label (e.g., CreatePersonForm).

  3. id: A unique machine-readable key (e.g., partyIdField).
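
Applied to a screen definition, the three identifiers might be decorated like this (the attribute placement and `maria-*` naming are purely illustrative assumptions):

```xml
<form-single name="CreatePersonForm" maria-role="form" maria-id="createPersonForm">
    <!-- each field carries its own role/name/id so the agent can bind actions -->
    <field name="partyId" maria-role="textbox" maria-id="partyIdField">
        <default-field title="Party ID"><text-line/></default-field>
    </field>
</form-single>
```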


3. moqui-mcp Architecture

The moqui-mcp component is more than just a data exporter; it’s an Agent Runtime:

  • JSON-RPC Bridge: Provides a standardized endpoint for any model (Claude, GPT, Ollama) to interact with Moqui.

  • Secure Impersonation: Agents execute tools by impersonating a Moqui user, ensuring that the AI never bypasses existing Permission logic (RBAC).

  • Self-Guided Narratives: Screens include uiNarrative blocks—natural language descriptions that guide the AI on how to use the screen.
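
A tool call over the JSON-RPC bridge might look like the following; the `tools/call` method comes from the standard MCP protocol, while the argument names are assumptions (only the tool name moqui_browse_screens is taken from the text above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "moqui_browse_screens",
    "arguments": { "screenPath": "apps/PopRestStore/Product" }
  }
}
```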


4. The Unified Alignment (The Three-Tier Model)

With the addition of moqui-mcp, we now have a complete, cohesive community strategy:

| Tier | Component | Analogy | Responsibility |
| :--- | :--- | :--- | :--- |
| Logic/Grammar | moqui-agent-os (Jens) | The Brain | Foundation, Entity patterns, technical standards. |
| Interface/DSL | moqui-ai (Us) | The Body | Blueprints, Macro extensions, WebMCP bridge. |
| Protocol/Senses | moqui-mcp (Ean) | The Voice | Connectivity, MARIA semantics, Agent impersonation. |


5. Conclusion: “The Identifier is the Map”

In a MARIA-powered world, the identifier is the agent’s map. If our UI artifacts don’t have clear, semantic identifiers, the agent is effectively “blind.” By standardizing on the MARIA format across all Moqui components, we ensure that AI agents can perform “real jobs in real business systems” with the same precision—and security—as a human user.

As if that wasn’t enough, I failed to mention the need for a common place to put a “blueprint” for app generation. In “solution” components, I propose to include a blueprints folder. The structure of the folder will follow the Moqui component layout (i.e., data, entity, screen, service, etc.), but the contents will be .md files describing what should be done to generate the component or subcomponent’s code. I asked Gemini to generate another file:
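
Under that proposal, a solution component’s blueprints folder would mirror the component layout, something like the following (file names are illustrative):

```
my-solution-component/
├── blueprints/
│   ├── entity/MeetingEntities.md     # spec for entity definitions
│   ├── service/MeetingServices.md    # spec for service logic
│   ├── screen/MeetingDashboard.md    # spec for screens
│   └── data/MeetingSeedData.md       # spec for seed/demo data
├── entity/
├── service/
├── screen/
└── data/
```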

Deep Dive: Blueprint-Driven Development (BDD)

In the MoquiAi project, we commonly use a Spec-First strategy known as Blueprint-Driven Development (BDD). This approach centers on the use of the blueprints/ directory within solution components to drive high-fidelity AI code generation.


1. Distinguishing the Two “Blueprints”

To avoid architectural confusion, we must distinguish between the two types of blueprints used in this ecosystem:

| Feature | Development Blueprints (Spec) | Runtime Blueprints (Render) |
| :--- | :--- | :--- |
| Location | runtime/component/[name]/blueprints/ | Emitted by DeterministicVueRenderer |
| Format | Markdown (.md) | JSON-LD |
| Purpose | Code Generation: Instructions for the AI to build the .xml and .groovy files. | UI Rendering: Instructions for the Vue client to draw the reactive screen. |
| Audience | The AI Assistant (Authoring phase). | The Browser & Agent (Runtime phase). |


2. The Structure of a Development Blueprint

A development blueprint is a structured Markdown file that acts as a “Requirement Specification” for an AI agent. By providing this file, a human developer can ensure the AI generates code that follows the exact architecture and naming conventions of the project.

Key Sections:

  1. Instructions for AI: Global directives (e.g., “Prefer <form-single>”, “Use <form-query>”).

  2. Architecture Pattern: Defines where this artifact fits in the hierarchy (e.g., PatientMedicalRoot -> ClinicalDashboard).

  3. Declarative Moqui Context:

    • parameter: Input variables.

    • actions: Data fetching logic (Entity-Finds).

    • widgets: A high-level DSL layout using MoquiAi macro extensions.

  4. Vue/Pinia Integration: Explicitly names the Pinia stores (e.g., useMeetingsStore) the screen must interact with.

  5. Quality Assurance: A test-plan and Implementation Checklist to verify the generated code.
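
Pulling those sections together, a development blueprint skeleton might look like this (the headings are my guess at a reasonable layout, not an established template):

```markdown
# Blueprint: MeetingDashboard Screen

## Instructions for AI
- Prefer <form-single>; use <form-query> for list filtering.

## Architecture Pattern
MeetingRoot -> MeetingDashboard (master-detail).

## Declarative Moqui Context
- parameter: agendaContainerId
- actions: entity-find on mantle.work.effort.WorkEffort
- widgets: <screen-split> with <form-query> on the left panel

## Vue/Pinia Integration
- Store: useMeetingsStore (activeContainerId)

## Quality Assurance
- test-plan: load screen, filter list, select an item, verify the detail panel updates
```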


3. The BDD Workflow: From Spec to Solution

The workflow for adding a new feature follows this pipeline:

  1. Blueprint Creation: The developer (or AI) drafts a .md file in the blueprints/ folder.

  2. AI Generation: An AI agent reads the blueprint and generates:

    • The Moqui XML Screen.

    • Any necessary Groovy Services.

    • Any custom CSS or JS fragments.

  3. Verification: The generated code is verified against the test-plan section of the blueprint.

  4. Maintenance: When a screen needs to change, the Blueprint is updated first, and the code is re-generated or patched accordingly. This ensures the documentation never drifts from the implementation.


4. Why BDD is Critical for Community Alignment

  • Instructional Anchoring: Blueprints provide the “Final Word” on how a component should be built, overriding generic AI assumptions.

  • Taxonomy Mirroring: By mirroring the blueprints/ folder with the implementation folder (e.g., blueprints/screen/ maps to screen/), the project becomes self-documenting.

  • Knowledge Transfer: A developer joining the Moqui community can read the blueprints/ folder to understand the intent and logic of a complex screen without parsing 500 lines of XML.


5. Conclusion: The Source of Truth

In the MoquiAi paradigm, the implementation file (.xml) is a derivative artifact. The source of truth is the Development Blueprint in the blueprints/ folder. This ensures that the AI assistant remains a coordinated collaborator rather than a disconnected code generator.