NGramTokenizer interface

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.
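As a sketch of how this interface is typically used, the snippet below builds an NGramTokenizer definition from the properties documented here. The interface and the TokenCharacterKind union are paraphrased locally so the example runs without the SDK package installed; the property names, the discriminator string, and the documented defaults come from this page, while the exact set of TokenCharacterKind values is an assumption.

```typescript
// Local paraphrase of the documented shapes (assumed, not imported
// from @azure/search-documents, so this snippet is self-contained).
type TokenCharacterKind =
  | "letter"
  | "digit"
  | "whitespace"
  | "punctuation"
  | "symbol";

interface NGramTokenizer {
  // Polymorphic discriminator; fixed for this tokenizer type.
  odatatype: "#Microsoft.Azure.Search.NGramTokenizer";
  // Letters, digits, spaces, dashes, or underscores; alphanumeric
  // first and last character; at most 128 characters.
  name: string;
  // Default 1, maximum 300; must be less than maxGram.
  minGram?: number;
  // Default 2, maximum 300.
  maxGram?: number;
  // Character classes to keep in the tokens.
  tokenChars?: TokenCharacterKind[];
}

// Example: emit 2- and 3-character grams built only from letters and digits.
const ngramTokenizer: NGramTokenizer = {
  odatatype: "#Microsoft.Azure.Search.NGramTokenizer",
  name: "my-ngram-tokenizer",
  minGram: 2,
  maxGram: 3,
  tokenChars: ["letter", "digit"],
};
```

Note that minGram must stay strictly below maxGram; a definition with, say, minGram 3 and maxGram 2 would be rejected by the service.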

Extends

BaseLexicalTokenizer

Properties

maxGram

The maximum n-gram length. Default is 2. Maximum is 300.

minGram

The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.

odatatype

A polymorphic discriminator that specifies the concrete type of this object.

tokenChars

Character classes to keep in the tokens.

Inherited Properties

name

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; start and end with an alphanumeric character; and be at most 128 characters long.

Property Details

maxGram

The maximum n-gram length. Default is 2. Maximum is 300.

maxGram?: number

Property Value

number

minGram

The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.

minGram?: number

Property Value

number

odatatype

A polymorphic discriminator that specifies the concrete type of this object.

odatatype: "#Microsoft.Azure.Search.NGramTokenizer"

Property Value

"#Microsoft.Azure.Search.NGramTokenizer"

tokenChars

Character classes to keep in the tokens.

tokenChars?: TokenCharacterKind[]

Property Value

TokenCharacterKind[]
Inherited Property Details

name

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; start and end with an alphanumeric character; and be at most 128 characters long.

name: string

Property Value

string

Inherited From BaseLexicalTokenizer.name