The standard tokenizer provides grammar-based tokenization (based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29) and works well for most languages.
```
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The sentence above produces the following terms:
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
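Beyond the term text, the _analyze API also reports each token's character offsets, type, and position. A sketch of the response for the request above, abridged to the first three tokens, looks roughly like this (the remaining terms follow the same shape):

```
{
  "tokens": [
    { "token": "The",   "start_offset": 0, "end_offset": 3,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "2",     "start_offset": 4, "end_offset": 5,  "type": "<NUM>",      "position": 1 },
    { "token": "QUICK", "start_offset": 6, "end_offset": 11, "type": "<ALPHANUM>", "position": 2 }
  ]
}
```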
The standard tokenizer accepts the following parameters:
max_token_length: The maximum length of a single token. If a token exceeds this length, it is split at max_token_length intervals. Defaults to 255.
In the example below, we configure the standard tokenizer with a _**max_token_length**_ of 5 (for demonstration purposes):
```
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}
```
```
POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above example produces the following terms:
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
Note that jumped is six characters long, which exceeds the five-character limit, so it is split into jumpe and d.
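To put the custom analyzer to use, it can be assigned to a text field in the index mapping. The sketch below assumes a recent Elasticsearch version (typeless mappings) and uses a hypothetical field name, my_text, which is not part of the original example:

```
PUT my_index/_mapping
{
  "properties": {
    "my_text": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}
```

Documents indexed into my_text would then be tokenized by my_analyzer, and the same analyzer is applied to query text at search time unless a separate search_analyzer is configured.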