Designing for end-to-end testing

My whole interest in Moqui at this point in my life is to see if AI can be applied to making the generation of solutions easier and more cost effective for novice users. I am mostly observing the work of others, but I am also doing my own research. One area that would be necessary in order to have a complete AI solution is auto generation of test code - not just unit tests, but end-to-end tests. In light of that I gave Gemini the following prompt:

While it is not critical for AI automation at this point, I am wondering whether, with the effort to migrate the screen UI to a Vue 3 / Quasar 2 environment, it would be possible to add tags or other features to generated screen artifacts to make automated client-side testing possible. I have used Selenium in the past, but that was at least 10 years ago and I have no idea how functional testing has evolved since. I have not seen any specific technology applied to Moqui, so if you can identify possible candidates that would be a good start.

I am including the response with no intention other than to share information and, possibly, to start a discussion.

Since you are moving to a Vue 3 / Quasar 2 frontend (which is heavily client-rendered and asynchronous), legacy Selenium will likely be brittle due to timing issues with animations and DOM updates.

Here is a breakdown of modern candidates and the specific architectural changes you can make to Moqui to support them.

1. The Tool Candidates
For a Vue 3/Quasar stack, there are two primary industry leaders that have largely superseded Selenium for this use case:

A. Playwright (Recommended)

Why: It allows "Auto-waiting." It waits for elements to be actionable (visible, not animating, not covered) before clicking, which solves the #1 issue with testing Quasar apps.   

Language: It has native bindings for Java, meaning you can write your E2E tests in the same language as your Moqui services, or use TypeScript/JavaScript.   

AI Compatibility: Its "Codegen" tool generates clean selector-based code that can be fed to Gemini as a "few-shot" example.   

B. Cypress

Why: Built specifically for modern frontends. It runs inside the browser loop, giving it direct access to the Vue application state (Vuex/Pinia), allowing you to assert data states, not just UI pixels.   

Constraint: strictly JavaScript/TypeScript.

2. Strategy for "Tags" in Generated Artifacts
To make automated testing (and AI generation of tests) efficient, you need stable selectors. Quasar generates dynamic classes (e.g., q-field__native q-placeholder) that change between versions. You should not rely on them.

You need to inject data-testid attributes (or similar) into the HTML.

The "Macro Injection" Strategy
In Moqui, XML Screens are rendered into HTML/Vue templates using FreeMarker macros (typically found in runtime/template/screen-macro).

Instead of manually adding tags to every single XML screen (which is tedious), you can modify the underlying macros to automatically generate test IDs based on the field names you already defined.

Current Moqui Behavior (Simplified):

```xml
<field name="patientName"><text-line/></field>
```

Renders to:

```html
<input name="patientName" id="FormName_patientName" ... />
```
Proposed Change (The "Test Hook"): You can modify the text-line macro in your Theme component to inject a data attribute:

```html
<input
    name="${fieldName}"
    id="${fieldId}"
    data-test="${fieldName}" ...
/>
```

Result: Every input field in your entire application now has a stable selector [data-test="patientName"].
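To make the convention concrete, here is a minimal sketch of how test code (or an AI generating it) could derive the stable selector purely from a field name it already knows from the form definition. The helper class and method names are my own invention, not part of Moqui or Playwright; Playwright itself only needs the resulting selector string.

```java
// Sketch: building the stable selector from a Moqui field name,
// assuming the screen macros inject data-test="${fieldName}" as described above.
public class TestSelectors {
    // Hypothetical helper: the selector is fully determined by the field name,
    // so no DOM inspection is needed to write a test.
    static String byDataTest(String fieldName) {
        return "[data-test='" + fieldName + "']";
    }

    public static void main(String[] args) {
        System.out.println(byDataTest("patientName")); // prints [data-test='patientName']
    }
}
```

The payoff is that the selector convention is a pure function of names already present in the XML screen definitions, which is exactly what makes it predictable for an AI.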

3. How this enables AI
If you implement the "Macro Injection" strategy, you can give the AI a very simple rule in your "Personal Context":

"When generating Playwright tests, assume every input field can be selected using page.locator("[data-test='fieldName']"). Do not use CSS classes or XPaths."

Example of AI-Generated Test with this setup: Because the AI knows the Entity definition (e.g., Patient has a firstName field), and it knows your macro rule, it can hallucinate correct test code without seeing the screen:

```java
// AI can predict this accurately because of the convention
page.locator("[data-test='firstName']").fill("John");
page.locator("[data-test='lastName']").fill("Doe");
// assumes the submit button also carries a data-test id (hypothetical name)
page.locator("[data-test='submitButton']").click();
```

Summary Recommendation
Adopt Playwright (Java binding) to keep your stack unified.

Modify your Screen Macros (in the Vue 3 / Quasar 2 theme component) to automatically append data-test="${name}" to all form fields and buttons.

Prompt the AI to use these data-test attributes for all selectors.

It is obviously too early to do much, but maybe not too early to start thinking about adding “stable selectors”.

I am not quite sure of this, but I suspect that AI would be more useful in higher level tests than lower level ones.

What I mean by that is that lower level tests (very technical things like core logic, UI code, etc.) are still best reviewed and confirmed by humans, as absolute accuracy is needed and mistakes can easily creep in, not to mention false positives.

But higher level tests are more conceptual and easier to generate with AI. For example: if I add an item to the shopping cart, then the quantity should increase. This kind of test is higher level and easier to reason about, and might be better suited for the AI to spot and test. I imagine that at this level of abstraction we can catch many more mistakes that are easy for AI to spot, whereas for low level stuff you might get messier and less reliable reports.

Imagine an AI test layer that just tests our services. It would act as a collection of integration tests, really testing at a high level, and it might report a lot of useful information: unexpected exceptions, invalid behavior, unexpected results given the input parameters, and so on.
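As a thought experiment, the shopping-cart example above can be sketched as a plain Java test. The `CartService` here is a hypothetical in-memory stand-in of my own invention, not a Moqui API; in a real setup the call would go through the Moqui service facade, but the shape of the behavioral assertion would stay the same.

```java
import java.util.HashMap;
import java.util.Map;

// High-level behavioral test sketch: "adding an item increases the quantity".
// CartService is a stand-in so the example is self-contained; the point is the
// kind of assertion an AI could generate from the stated behavior alone.
public class CartBehaviorTest {
    static class CartService {
        private final Map<String, Integer> quantities = new HashMap<>();
        void addToCart(String productId, int qty) {
            quantities.merge(productId, qty, Integer::sum);
        }
        int getQuantity(String productId) {
            return quantities.getOrDefault(productId, 0);
        }
    }

    public static void main(String[] args) {
        CartService cart = new CartService();
        int before = cart.getQuantity("DEMO_1");
        cart.addToCart("DEMO_1", 2);
        int after = cart.getQuantity("DEMO_1");
        // The high-level rule: quantity must increase by the amount added.
        if (after != before + 2) {
            throw new AssertionError("quantity did not increase as expected");
        }
        System.out.println("ok: quantity went from " + before + " to " + after);
    }
}
```

Notice that the test knows nothing about the UI, the database, or the implementation; it only encodes the conceptual rule, which is exactly the level where AI-generated tests seem most trustworthy.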