For Chinese text, Elasticsearch/OpenSearch requires a word-segmentation (analysis) plugin; the IK analyzer is currently the most widely used one (GitHub - infinilabs/analysis-ik: "The IK Analysis plugin integrates Lucene IK analyzer into Elasticsearch and OpenSearch, support customized dictionary."). Could some configuration be added to MoquiDevConf.xml so that, when generating the index schema (mapping), Moqui applies the configured analyzer to all fields of type text?
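To illustrate the request, here is a sketch of what such a configuration might look like. Note that the `<elasticsearch-index-settings>` element and its attributes are hypothetical, proposed names, not part of the current Moqui configuration schema; the `ik_max_word` and `ik_smart` analyzers are the ones the analysis-ik plugin actually provides.

```xml
<!-- Hypothetical MoquiDevConf.xml fragment (proposed, not existing Moqui config):
     apply the given analyzer to every text-type field in generated mappings -->
<elasticsearch-index-settings>
    <text-field-analyzer analyzer="ik_max_word" search-analyzer="ik_smart"/>
</elasticsearch-index-settings>
```

With a configuration like the above, the schema Moqui generates would then include the analyzer on each text field, e.g. for a hypothetical `description` field:

```json
{
  "mappings": {
    "properties": {
      "description": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      }
    }
  }
}
```

Using `ik_max_word` at index time and `ik_smart` at search time is the combination recommended in the analysis-ik README: the former produces the finest-grained segmentation for indexing, while the latter gives coarser, more precise tokens for queries.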