This is distinct from, but related to, the concept of exposing UI widgets directly via MCP. The technology isn't meant for headless scenarios; it's more a way to give agents a leg up when using browser control, but it still seems very much related.
Will you be investigating this for use with moqui-mcp?
I haven't been able to dive too deeply into this yet, but I asked Gemini if and how it would benefit the work I have been doing (code-named "blueprints"), and here is its conclusion:
Integrating WebMCP with MoquiAi Blueprints would bridge the gap between human-first UIs and agent-first APIs without duplicating effort. By adding WebMCP hook generation into the BlueprintClient.js renderer, every Moqui screen built with Blueprints would automatically become an interactive, agent-ready playground. The AI interacts safely with the frontend architecture, while the user maintains final visual verification and control over the app state.
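To make the idea concrete, here is a rough sketch of what "WebMCP hook generation" in the renderer might look like. Everything here is hypothetical: the screen-definition shape, the `blueprintToToolDescriptor` helper, and the `navigator.modelContext.registerTool` call (assumed from the WebMCP explainer's general direction, not a confirmed API) are all illustrative, not actual Blueprints or WebMCP code.

```javascript
// Hypothetical: derive a WebMCP-style tool descriptor from a Blueprints
// screen definition, so each rendered screen doubles as an agent tool.
// The field/screen shapes below are invented for illustration.
function blueprintToToolDescriptor(screen) {
  const properties = {};
  for (const field of screen.fields) {
    properties[field.name] = {
      type: field.type === 'number' ? 'number' : 'string',
      description: field.label,
    };
  }
  return {
    name: `moqui.${screen.screenName}`,
    description: `Fill and submit the ${screen.title} screen`,
    inputSchema: {
      type: 'object',
      properties,
      required: screen.fields.filter((f) => f.required).map((f) => f.name),
    },
  };
}

// In BlueprintClient.js this could run after a screen renders, guarded so
// browsers without WebMCP support are unaffected. The registerTool call
// is an assumed API shape, not something WebMCP is confirmed to expose.
function registerScreenTool(screen, execute) {
  const descriptor = blueprintToToolDescriptor(screen);
  if (typeof navigator !== 'undefined' && navigator.modelContext?.registerTool) {
    navigator.modelContext.registerTool({ ...descriptor, execute });
  }
  return descriptor;
}

// Example: a simple "find party" screen becomes the tool "moqui.FindParty".
const descriptor = registerScreenTool(
  {
    screenName: 'FindParty',
    title: 'Find Party',
    fields: [
      { name: 'partyName', type: 'text', label: 'Party name', required: true },
    ],
  },
  async (args) => ({ status: 'submitted', args })
);
console.log(descriptor.name); // "moqui.FindParty"
```

The appeal is that the human still sees and verifies the same rendered screen; the descriptor just gives the agent a structured, safe way to drive it instead of guessing at DOM selectors.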