ClassicTokenizer interface

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.
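As a minimal sketch of how this interface is typically used (assuming the @azure/search-documents package; the tokenizer name here is hypothetical), a ClassicTokenizer can be declared as a plain object literal and included in an index definition's tokenizers list:

import { ClassicTokenizer } from "@azure/search-documents";

// Hypothetical tokenizer definition for a search index's tokenizers list.
const classicTokenizer: ClassicTokenizer = {
  odatatype: "#Microsoft.Azure.Search.ClassicTokenizer",
  name: "my-classic-tokenizer", // hypothetical name
  maxTokenLength: 255, // optional; defaults to 255, capped at 300
};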

Extends

BaseLexicalTokenizer

Properties

maxTokenLength

The maximum token length. The default is 255, and the largest value that can be used is 300 characters. Tokens longer than the maximum length are split.

odatatype

A polymorphic discriminator that identifies the concrete type of this object.

Inherited Properties

name

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it must start and end with an alphanumeric character; and it is limited to 128 characters.

Property Details

maxTokenLength

The maximum token length. The default is 255, and the largest value that can be used is 300 characters. Tokens longer than the maximum length are split.

maxTokenLength?: number

Property Value

number
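The 255/300 bounds described above can be made concrete with a small helper. This is a hypothetical sketch, not part of @azure/search-documents:

// Hypothetical helper: clamps a requested maxTokenLength to the
// documented upper bound of 300, falling back to the default of 255.
function clampMaxTokenLength(requested?: number): number {
  const DEFAULT_MAX = 255; // service default per the docs
  const UPPER_BOUND = 300; // largest value the service accepts
  if (requested === undefined) return DEFAULT_MAX;
  return Math.min(requested, UPPER_BOUND);
}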

odatatype

A polymorphic discriminator that identifies the concrete type of this object. For ClassicTokenizer it is always "#Microsoft.Azure.Search.ClassicTokenizer".

odatatype: "#Microsoft.Azure.Search.ClassicTokenizer"

Property Value

"#Microsoft.Azure.Search.ClassicTokenizer"

Inherited Property Details

name

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it must start and end with an alphanumeric character; and it is limited to 128 characters.

name: string

Property Value

string

Inherited From BaseLexicalTokenizer.name
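The naming rule above can be expressed as a simple check. This validator is a hypothetical sketch written against the stated constraints, not part of the library, and assumes ASCII letters and digits for simplicity:

// Hypothetical validator for the naming rule quoted above: letters,
// digits, spaces, dashes, or underscores; alphanumeric first and last
// characters; at most 128 characters.
function isValidTokenizerName(name: string): boolean {
  if (name.length === 0 || name.length > 128) return false;
  return /^[a-zA-Z0-9]([a-zA-Z0-9 _-]*[a-zA-Z0-9])?$/.test(name);
}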