@azure/search-documents package
Classes
AzureKeyCredential |
A static-key-based credential that supports updating the underlying key value. |
GeographyPoint |
Represents a geographic point in global coordinates. |
IndexDocumentsBatch |
Class used to perform batch operations with multiple documents to the index. |
SearchClient |
Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them. (A usage sketch follows this class list.) |
SearchIndexClient |
Class to perform operations to manage (create, update, list, delete) indexes and synonym maps. |
SearchIndexerClient |
Class to perform operations to manage (create, update, list, delete) indexers, data sources, and skillsets. |
SearchIndexingBufferedSender |
Class used to perform buffered operations against a search index, including adding, updating, and removing documents. |
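To see how these classes fit together, here is a minimal sketch that authenticates with AzureKeyCredential, uploads a document, and runs a query with SearchClient (referenced from the SearchClient entry above). The endpoint, index name, API key, and Hotel shape are placeholder assumptions, not values from this reference.

import { SearchClient, AzureKeyCredential, odata } from "@azure/search-documents";

// Hypothetical document shape; substitute the schema of your own index.
interface Hotel {
  hotelId: string;
  hotelName: string;
  rating: number;
}

const client = new SearchClient<Hotel>(
  "https://<service-name>.search.windows.net", // placeholder endpoint
  "<index-name>",                              // placeholder index name
  new AzureKeyCredential("<api-key>")          // placeholder key
);

async function main(): Promise<void> {
  // Add or update a document in the index.
  await client.mergeOrUploadDocuments([
    { hotelId: "1", hotelName: "Example Hotel", rating: 4.5 },
  ]);

  // Query documents; the odata helper escapes interpolated filter values.
  const results = await client.search("hotel", {
    filter: odata`rating ge ${4}`,
    select: ["hotelId", "hotelName"],
    top: 5,
  });
  for await (const result of results.results) {
    console.log(result.document.hotelName);
  }
}

main().catch(console.error);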
Interfaces
AnalyzeRequest |
Specifies some text and analysis components used to break that text into tokens. |
AnalyzeResult |
The result of testing an analyzer on text. |
AnalyzedTokenInfo |
Information about a token returned by an analyzer. |
AsciiFoldingTokenFilter |
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene. |
AutocompleteItem |
The result of Autocomplete requests. |
AutocompleteRequest |
Parameters for fuzzy matching, and other autocomplete query behaviors. |
AutocompleteResult |
The result of an Autocomplete query. |
AzureActiveDirectoryApplicationCredentials |
Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault. |
AzureOpenAIEmbeddingSkill |
Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource. |
AzureOpenAIParameters |
Contains the parameters specific to using an Azure OpenAI service for vectorization at query time. |
AzureOpenAIVectorizer |
Contains the parameters specific to using an Azure OpenAI service for vectorization at query time. |
BM25Similarity |
Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter). |
BaseCharFilter |
Base type for character filters. |
BaseCognitiveServicesAccount |
Base type for describing any Azure AI service resource attached to a skillset. |
BaseDataChangeDetectionPolicy |
Base type for data change detection policies. |
BaseDataDeletionDetectionPolicy |
Base type for data deletion detection policies. |
BaseLexicalAnalyzer |
Base type for analyzers. |
BaseLexicalTokenizer |
Base type for tokenizers. |
BaseScoringFunction |
Base type for functions that can modify document scores during ranking. |
BaseSearchIndexerDataIdentity |
Abstract base type for data identities. |
BaseSearchIndexerSkill |
Base type for skills. |
BaseSearchRequestOptions |
Parameters for filtering, sorting, faceting, paging, and other search query behaviors. |
BaseTokenFilter |
Base type for token filters. |
BaseVectorQuery |
The query parameters for vector and hybrid search queries. |
BaseVectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing and/or querying. |
BaseVectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. |
BaseVectorSearchVectorizer |
Contains specific details for a vectorization method to be used during query time. |
BinaryQuantizationCompression |
Contains configuration options specific to the binary quantization compression method used during indexing and querying. |
CjkBigramTokenFilter |
Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene. |
ClassicSimilarity |
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries. |
ClassicTokenizer |
Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene. |
CognitiveServicesAccountKey |
The multi-region account key of an Azure AI service resource that's attached to a skillset. |
CommonGramTokenFilter |
Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene. |
ComplexField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
ConditionalSkill |
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
CorsOptions |
Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
CreateOrUpdateIndexOptions |
Options for create/update index operation. |
CreateOrUpdateSkillsetOptions |
Options for create/update skillset operation. |
CreateOrUpdateSynonymMapOptions |
Options for create/update synonymmap operation. |
CreateorUpdateDataSourceConnectionOptions |
Options for create/update datasource operation. |
CreateorUpdateIndexerOptions |
Options for create/update indexer operation. |
CustomAnalyzer |
Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer. |
CustomEntity |
An object that contains information about the matches that were found, and related metadata. |
CustomEntityAlias |
A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
CustomEntityLookupSkill |
A skill that looks for text from a custom, user-defined list of words and phrases. |
DefaultCognitiveServicesAccount |
An empty object that represents the default Azure AI service resource for a skillset. |
DeleteDataSourceConnectionOptions |
Options for delete datasource operation. |
DeleteIndexOptions |
Options for delete index operation. |
DeleteIndexerOptions |
Options for delete indexer operation. |
DeleteSkillsetOptions |
Options for delete skillset operation. |
DeleteSynonymMapOptions |
Options for delete synonymmap operation. |
DictionaryDecompounderTokenFilter |
Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene. |
DistanceScoringFunction |
Defines a function that boosts scores based on distance from a geographic location. |
DistanceScoringParameters |
Provides parameter values to a distance scoring function. |
DocumentExtractionSkill |
A skill that extracts content from a file within the enrichment pipeline. |
EdgeNGramTokenFilter |
Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. |
EdgeNGramTokenizer |
Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
ElisionTokenFilter |
Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene. |
EntityLinkingSkill |
Using the Text Analytics API, extracts linked entities from text. |
EntityRecognitionSkill |
Text analytics entity recognition. |
EntityRecognitionSkillV3 |
Using the Text Analytics API, extracts entities of different types from text. |
ExhaustiveKnnParameters |
Contains the parameters specific to exhaustive KNN algorithm. |
ExtractiveQueryAnswer |
Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language. |
ExtractiveQueryCaption |
Extracts captions from the matching documents that contain passages relevant to the search query. |
FacetResult |
A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval. |
FieldMapping |
Defines a mapping between a field in a data source and a target field in an index. |
FieldMappingFunction |
Represents a function that transforms a value from a data source before indexing. |
FreshnessScoringFunction |
Defines a function that boosts scores based on the value of a date-time field. |
FreshnessScoringParameters |
Provides parameter values to a freshness scoring function. |
GetDocumentOptions |
Options for retrieving a single document. |
HighWaterMarkChangeDetectionPolicy |
Defines a data change detection policy that captures changes based on the value of a high water mark column. |
HnswParameters |
Contains the parameters specific to hnsw algorithm. |
ImageAnalysisSkill |
A skill that analyzes image files. It extracts a rich set of visual features based on the image content. |
IndexDocumentsClient |
Index Documents Client |
IndexDocumentsOptions |
Options for the modify index batch operation. |
IndexDocumentsResult |
Response containing the status of operations for all documents in the indexing request. |
IndexerExecutionResult |
Represents the result of an individual indexer execution. |
IndexingParameters |
Represents parameters for indexer execution. |
IndexingParametersConfiguration |
A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
IndexingResult |
Status of an indexing operation for a single document. |
IndexingSchedule |
Represents a schedule for indexer execution. |
InputFieldMappingEntry |
Input field mapping for a skill. |
KeepTokenFilter |
A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene. |
KeyPhraseExtractionSkill |
A skill that uses text analytics for key phrase extraction. |
KeywordMarkerTokenFilter |
Marks terms as keywords. This token filter is implemented using Apache Lucene. |
KeywordTokenizer |
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. |
LanguageDetectionSkill |
A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis. |
LengthTokenFilter |
Removes words that are too long or too short. This token filter is implemented using Apache Lucene. |
LimitTokenFilter |
Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene. |
ListSearchResultsPageSettings |
Arguments for retrieving the next page of search results. |
LuceneStandardAnalyzer |
Standard Apache Lucene analyzer; Composed of the standard tokenizer, lowercase filter and stop filter. |
LuceneStandardTokenizer |
Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. |
MagnitudeScoringFunction |
Defines a function that boosts scores based on the magnitude of a numeric field. |
MagnitudeScoringParameters |
Provides parameter values to a magnitude scoring function. |
MappingCharFilter |
A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene. |
MergeSkill |
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
MicrosoftLanguageStemmingTokenizer |
Divides text using language-specific rules and reduces words to their base forms. |
MicrosoftLanguageTokenizer |
Divides text using language-specific rules. |
NGramTokenFilter |
Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene. |
NGramTokenizer |
Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
OcrSkill |
A skill that extracts text from image files. |
OutputFieldMappingEntry |
Output field mapping for a skill. |
PIIDetectionSkill |
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
PathHierarchyTokenizer |
Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene. |
PatternAnalyzer |
Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene. |
PatternCaptureTokenFilter |
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene. |
PatternReplaceCharFilter |
A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene. |
PatternReplaceTokenFilter |
A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene. |
PatternTokenizer |
Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene. |
PhoneticTokenFilter |
Create tokens for phonetic matches. This token filter is implemented using Apache Lucene. |
QueryAnswerResult |
An answer is a text passage extracted from the contents of the most relevant documents that matched the query. Answers are extracted from the top search results. Answer candidates are scored and the top answers are selected. |
QueryCaptionResult |
Captions are the most representative passages from the document relative to the search query. They are often used as a document summary. Captions are only returned for queries of type 'semantic'. |
ResourceCounter |
Represents a resource's usage and quota. |
ScalarQuantizationCompression |
Contains configuration options specific to the scalar quantization compression method used during indexing and querying. |
ScalarQuantizationParameters |
Contains the parameters specific to Scalar Quantization. |
ScoringProfile |
Defines parameters for a search index that influence scoring in search queries. |
SearchClientOptions |
Client options used to configure Cognitive Search API requests. |
SearchDocumentsPageResult |
Response containing search page results from an index. |
SearchDocumentsResult |
Response containing search results from an index. |
SearchDocumentsResultBase |
Response containing search results from an index. |
SearchIndex |
Represents a search index definition, which describes the fields and search behavior of an index. (See the index-creation sketch after this interface list.) |
SearchIndexClientOptions |
Client options used to configure Cognitive Search API requests. |
SearchIndexStatistics |
Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date. |
SearchIndexer |
Represents an indexer. |
SearchIndexerClientOptions |
Client options used to configure Cognitive Search API requests. |
SearchIndexerDataContainer |
Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed. |
SearchIndexerDataNoneIdentity |
Clears the identity property of a datasource. |
SearchIndexerDataSourceConnection |
Represents a datasource definition, which can be used to configure an indexer. |
SearchIndexerDataUserAssignedIdentity |
Specifies the identity for a datasource to use. |
SearchIndexerError |
Represents an item- or document-level indexing error. |
SearchIndexerIndexProjection |
Definition of additional projections to secondary search indexes. |
SearchIndexerIndexProjectionParameters |
A dictionary of index projection-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
SearchIndexerIndexProjectionSelector |
Description for what data to store in the designated search index. |
SearchIndexerKnowledgeStore |
Definition of additional projections of enriched data to Azure Blob storage, tables, or files. |
SearchIndexerKnowledgeStoreBlobProjectionSelector |
Abstract class to share properties between concrete selectors. |
SearchIndexerKnowledgeStoreFileProjectionSelector |
Projection definition for what data to store in Azure Files. |
SearchIndexerKnowledgeStoreObjectProjectionSelector |
Projection definition for what data to store in Azure Blob. |
SearchIndexerKnowledgeStoreParameters |
A dictionary of knowledge store-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
SearchIndexerKnowledgeStoreProjection |
Container object for various projection selectors. |
SearchIndexerKnowledgeStoreProjectionSelector |
Abstract class to share properties between concrete selectors. |
SearchIndexerKnowledgeStoreTableProjectionSelector |
Description for what data to store in Azure Tables. |
SearchIndexerLimits | |
SearchIndexerSkillset |
A list of skills. |
SearchIndexerStatus |
Represents the current status and execution history of an indexer. |
SearchIndexerWarning |
Represents an item-level warning. |
SearchIndexingBufferedSenderOptions |
Options for SearchIndexingBufferedSender. |
SearchResourceEncryptionKey |
A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure Cognitive Search, such as indexes and synonym maps. |
SearchServiceStatistics |
Response from a get service statistics request. If successful, it includes service level counters and limits. |
SearchSuggester |
Defines how the Suggest API should apply to a group of fields in the index. |
SemanticConfiguration |
Defines a specific configuration to be used in the context of semantic capabilities. |
SemanticField |
A field that is used as part of the semantic configuration. |
SemanticPrioritizedFields |
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers. |
SemanticSearch |
Defines parameters for a search index that influence semantic capabilities. |
SemanticSearchOptions |
Defines options for semantic search queries. |
SentimentSkill |
Text analytics positive-negative sentiment analysis, scored as a floating point value in a range of zero to 1. |
SentimentSkillV3 |
Using the Text Analytics API, evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level. |
ServiceCounters |
Represents service-level resource counters and quotas. |
ServiceLimits |
Represents various service level limits. |
ShaperSkill |
A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields). |
ShingleTokenFilter |
Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene. |
Similarity |
Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results. |
SimpleField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
SnowballTokenFilter |
A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene. |
SoftDeleteColumnDeletionDetectionPolicy |
Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column. |
SplitSkill |
A skill to split a string into chunks of text. |
SqlIntegratedChangeTrackingPolicy |
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
StemmerOverrideTokenFilter |
Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene. |
StemmerTokenFilter |
Language specific stemming filter. This token filter is implemented using Apache Lucene. |
StopAnalyzer |
Divides text at non-letters; Applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene. |
StopwordsTokenFilter |
Removes stop words from a token stream. This token filter is implemented using Apache Lucene. |
SuggestDocumentsResult |
Response containing suggestion query results from an index. |
SuggestRequest |
Parameters for filtering, sorting, fuzzy matching, and other suggestion query behaviors. |
SynonymMap |
Represents a synonym map definition. |
SynonymTokenFilter |
Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene. |
TagScoringFunction |
Defines a function that boosts scores of documents with string values matching a given list of tags. |
TagScoringParameters |
Provides parameter values to a tag scoring function. |
TextTranslationSkill |
A skill to translate text from one language to another. |
TextWeights |
Defines weights on index fields for which matches should boost scoring in search queries. |
TruncateTokenFilter |
Truncates the terms to a specific length. This token filter is implemented using Apache Lucene. |
UaxUrlEmailTokenizer |
Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene. |
UniqueTokenFilter |
Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene. |
VectorSearch |
Contains configuration options related to vector search. |
VectorSearchOptions |
Defines options for vector search queries. |
VectorSearchProfile |
Defines a combination of configurations to use with vector search. |
VectorizableTextQuery |
The query parameters to use for vector search when a text value that needs to be vectorized is provided. |
VectorizedQuery |
The query parameters to use for vector search when a raw vector value is provided. (See the vector query sketch after this list.) |
WebApiParameters |
Specifies the properties for connecting to a user-defined vectorizer. |
WebApiSkill |
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
WebApiVectorizer |
Specifies a user-defined vectorizer for generating the vector embedding of a query string. Integration of an external vectorizer is achieved using the custom Web API interface of a skillset. |
WordDelimiterTokenFilter |
Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene. |
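As a concrete illustration of the index-definition interfaces above (notably SearchIndex and the field and suggester shapes; see the forward reference at the SearchIndex entry), here is a hedged sketch that creates a small index with SearchIndexClient. All names and service details are placeholders.

import { SearchIndexClient, AzureKeyCredential, SearchIndex } from "@azure/search-documents";

const indexClient = new SearchIndexClient(
  "https://<service-name>.search.windows.net", // placeholder endpoint
  new AzureKeyCredential("<api-key>")          // placeholder key
);

// A minimal index definition: a key field, searchable fields, and a suggester.
const index: SearchIndex = {
  name: "hotels-sample", // hypothetical index name
  fields: [
    { type: "Edm.String", name: "hotelId", key: true, filterable: true },
    { type: "Edm.String", name: "hotelName", searchable: true, sortable: true },
    { type: "Edm.Double", name: "rating", filterable: true, sortable: true },
  ],
  suggesters: [
    { name: "sg", searchMode: "analyzingInfixMatching", sourceFields: ["hotelName"] },
  ],
};

async function main(): Promise<void> {
  const created = await indexClient.createIndex(index);
  console.log(`Created index: ${created.name}`);
}

main().catch(console.error);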
Type Aliases
AnalyzeTextOptions |
Options for analyze text operation. |
AutocompleteMode |
Defines values for AutocompleteMode. |
AutocompleteOptions |
Options for retrieving completion text for a partial searchText. |
AzureOpenAIModelName |
Defines values for AzureOpenAIModelName. Known values supported by the service: text-embedding-ada-002 |
BlobIndexerDataToExtract | |
BlobIndexerImageAction | |
BlobIndexerPDFTextRotationAlgorithm | |
BlobIndexerParsingMode | |
CharFilter |
Contains the possible cases for CharFilter. |
CharFilterName |
Defines values for CharFilterName. Known values supported by the service: html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html |
CjkBigramTokenFilterScripts |
Defines values for CjkBigramTokenFilterScripts. |
CognitiveServicesAccount |
Contains the possible cases for CognitiveServicesAccount. |
ComplexDataType |
Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)' |
CountDocumentsOptions |
Options for performing the count operation on the index. |
CreateDataSourceConnectionOptions |
Options for create datasource operation. |
CreateIndexOptions |
Options for create index operation. |
CreateIndexerOptions |
Options for create indexer operation. |
CreateSkillsetOptions |
Options for create skillset operation. |
CreateSynonymMapOptions |
Options for create synonymmap operation. |
CustomEntityLookupSkillLanguage | |
DataChangeDetectionPolicy |
Contains the possible cases for DataChangeDetectionPolicy. |
DataDeletionDetectionPolicy |
Contains the possible cases for DataDeletionDetectionPolicy. |
DeleteDocumentsOptions |
Options for the delete documents operation. |
EdgeNGramTokenFilterSide |
Defines values for EdgeNGramTokenFilterSide. |
EntityCategory | |
EntityRecognitionSkillLanguage | |
ExcludedODataTypes | |
ExhaustiveKnnAlgorithmConfiguration |
Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index. |
ExtractDocumentKey | |
GetDataSourceConnectionOptions |
Options for get datasource operation. |
GetIndexOptions |
Options for get index operation. |
GetIndexStatisticsOptions |
Options for get index statistics operation. |
GetIndexerOptions |
Options for get indexer operation. |
GetIndexerStatusOptions |
Options for get indexer status operation. |
GetServiceStatisticsOptions |
Options for get service statistics operation. |
GetSkillSetOptions |
Options for get skillset operation. |
GetSynonymMapsOptions |
Options for get synonymmaps operation. |
HnswAlgorithmConfiguration |
Contains configuration options specific to the hnsw approximate nearest neighbors algorithm used during indexing time. |
ImageAnalysisSkillLanguage | |
ImageDetail | |
IndexActionType |
Defines values for IndexActionType. |
IndexDocumentsAction |
Represents an index action that operates on a document. |
IndexIterator |
An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
IndexNameIterator |
An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
IndexProjectionMode |
Defines values for IndexProjectionMode. Known values supported by the service: skipIndexingParentDocuments: The source document will be skipped from writing into the indexer's target index. |
IndexerExecutionEnvironment | |
IndexerExecutionStatus |
Defines values for IndexerExecutionStatus. |
IndexerStatus |
Defines values for IndexerStatus. |
KeyPhraseExtractionSkillLanguage | |
LexicalAnalyzer |
Contains the possible cases for Analyzer. |
LexicalAnalyzerName |
Defines values for LexicalAnalyzerName. Known values supported by the service: ar.microsoft: Microsoft analyzer for Arabic. |
LexicalTokenizer |
Contains the possible cases for Tokenizer. |
LexicalTokenizerName |
Defines values for LexicalTokenizerName. Known values supported by the service: classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html |
ListDataSourceConnectionsOptions |
Options for a list data sources operation. |
ListIndexersOptions |
Options for a list indexers operation. |
ListIndexesOptions |
Options for a list indexes operation. |
ListSkillsetsOptions |
Options for a list skillsets operation. |
ListSynonymMapsOptions |
Options for a list synonymMaps operation. |
MergeDocumentsOptions |
Options for the merge documents operation. |
MergeOrUploadDocumentsOptions |
Options for the merge or upload documents operation. |
MicrosoftStemmingTokenizerLanguage |
Defines values for MicrosoftStemmingTokenizerLanguage. |
MicrosoftTokenizerLanguage |
Defines values for MicrosoftTokenizerLanguage. |
NarrowedModel |
Narrows the Model type to include only the selected Fields. |
OcrLineEnding |
Defines values for OcrLineEnding. Known values supported by the service: space: Lines are separated by a single space character. |
OcrSkillLanguage | |
PIIDetectionSkillMaskingMode | |
PhoneticEncoder |
Defines values for PhoneticEncoder. |
QueryAnswer |
A value that specifies whether answers should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set to 'extractive', the query returns answers extracted from key passages in the highest ranked documents. |
QueryCaption |
A value that specifies whether captions should be returned as part of the search response. This parameter is only valid if the query type is 'semantic'. If set, the query returns captions extracted from key passages in the highest ranked documents. When Captions is 'extractive', highlighting is enabled by default. Defaults to 'none'. |
QueryType |
Defines values for QueryType. |
RegexFlags | |
ResetIndexerOptions |
Options for reset indexer operation. |
RunIndexerOptions |
Options for run indexer operation. |
ScoringFunction |
Contains the possible cases for ScoringFunction. |
ScoringFunctionAggregation |
Defines values for ScoringFunctionAggregation. |
ScoringFunctionInterpolation |
Defines values for ScoringFunctionInterpolation. |
ScoringStatistics |
Defines values for ScoringStatistics. |
SearchField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
SearchFieldArray |
If |
SearchFieldDataType |
Defines values for SearchFieldDataType. Known values supported by the service:
- Edm.String: Indicates that a field contains a string.
- Edm.Int32: Indicates that a field contains a 32-bit signed integer.
- Edm.Int64: Indicates that a field contains a 64-bit signed integer.
- Edm.Double: Indicates that a field contains an IEEE double-precision floating point number.
- Edm.Boolean: Indicates that a field contains a Boolean value (true or false).
- Edm.DateTimeOffset: Indicates that a field contains a date/time value, including timezone information.
- Edm.GeographyPoint: Indicates that a field contains a geo-location in terms of longitude and latitude.
- Edm.ComplexType: Indicates that a field contains one or more complex objects that in turn have sub-fields of other types.
- Edm.Single: Indicates that a field contains a single-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Single).
- Edm.Half: Indicates that a field contains a half-precision floating point number. This is only valid when used as part of a collection type, i.e. Collection(Edm.Half).
- Edm.Int16: Indicates that a field contains a 16-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Int16).
- Edm.SByte: Indicates that a field contains an 8-bit signed integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.SByte).
- Edm.Byte: Indicates that a field contains an 8-bit unsigned integer. This is only valid when used as part of a collection type, i.e. Collection(Edm.Byte). |
SearchIndexerDataIdentity |
Contains the possible cases for SearchIndexerDataIdentity. |
SearchIndexerDataSourceType | |
SearchIndexerSkill |
Contains the possible cases for Skill. |
SearchIndexingBufferedSenderDeleteDocumentsOptions |
Options for SearchIndexingBufferedSenderDeleteDocuments. |
SearchIndexingBufferedSenderFlushDocumentsOptions |
Options for SearchIndexingBufferedSenderFlushDocuments. |
SearchIndexingBufferedSenderMergeDocumentsOptions |
Options for SearchIndexingBufferedSenderMergeDocuments. |
SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions |
Options for SearchIndexingBufferedSenderMergeOrUploadDocuments. |
SearchIndexingBufferedSenderUploadDocumentsOptions |
Options for SearchIndexingBufferedSenderUploadDocuments. |
SearchIterator |
An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration. |
SearchMode |
Defines values for SearchMode. |
SearchOptions |
Options for committing a full search request. |
SearchPick |
Deeply pick fields of T using valid Cognitive Search OData $select paths. |
SearchRequestOptions |
Parameters for filtering, sorting, faceting, paging, and other search query behaviors. |
SearchRequestQueryTypeOptions | |
SearchResult |
Contains a document found by a search query, plus associated metadata. |
SelectArray |
If |
SelectFields |
Produces a union of valid Cognitive Search OData $select paths for T using a post-order traversal of the field tree rooted at T. |
SemanticErrorMode | |
SemanticErrorReason | |
SemanticSearchResultsType | |
SentimentSkillLanguage | |
SimilarityAlgorithm |
Contains the possible cases for Similarity. |
SnowballTokenFilterLanguage |
Defines values for SnowballTokenFilterLanguage. |
SplitSkillLanguage | |
StemmerTokenFilterLanguage |
Defines values for StemmerTokenFilterLanguage. |
StopwordsList |
Defines values for StopwordsList. |
SuggestNarrowedModel | |
SuggestOptions |
Options for retrieving suggestions based on the searchText. |
SuggestResult |
A result containing a document found by a suggestion query, plus associated metadata. |
TextSplitMode | |
TextTranslationSkillLanguage | |
TokenCharacterKind |
Defines values for TokenCharacterKind. |
TokenFilter |
Contains the possible cases for TokenFilter. |
TokenFilterName |
Defines values for TokenFilterName. Known values supported by the service: arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html |
UnionToIntersection | |
UploadDocumentsOptions |
Options for the upload documents operation. |
VectorEncodingFormat |
Defines values for VectorEncodingFormat. Known values supported by the service: packedBit: Encoding format representing bits packed into a wider data type. |
VectorFilterMode | |
VectorQuery |
The query parameters for vector and hybrid search queries. |
VectorQueryKind | |
VectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing and/or querying. |
VectorSearchAlgorithmKind | |
VectorSearchAlgorithmMetric | |
VectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. |
VectorSearchCompressionKind |
Defines values for VectorSearchCompressionKind. Known values supported by the service: scalarQuantization: Scalar Quantization, a type of compression method. In scalar quantization, the original vector values are compressed to a narrower type by discretizing and representing each component of a vector using a reduced set of quantized values, thereby reducing the overall data size. |
VectorSearchCompressionTarget |
Defines values for VectorSearchCompressionTarget. Known values supported by the service: int8 |
VectorSearchVectorizer |
Contains configuration options on how to vectorize text vector queries. |
VectorSearchVectorizerKind |
Defines values for VectorSearchVectorizerKind. Known values supported by the service: azureOpenAI: Generate embeddings using an Azure OpenAI resource at query time. |
VisualFeature |
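Several of the vector-related aliases above (VectorQuery, VectorizedQuery, VectorSearchOptions; see the forward reference at the VectorizedQuery entry) come together at query time. The sketch below issues a vector query using a precomputed embedding; the index shape, vector field name, and embedding are assumptions for illustration.

import { SearchClient, AzureKeyCredential } from "@azure/search-documents";

// Hypothetical index schema that includes a vector field.
interface Doc {
  id: string;
  content: string;
  contentVector: number[];
}

const client = new SearchClient<Doc>(
  "https://<service-name>.search.windows.net", // placeholder endpoint
  "<index-name>",                              // placeholder index name
  new AzureKeyCredential("<api-key>")          // placeholder key
);

async function vectorSearch(embedding: number[]): Promise<void> {
  const results = await client.search("*", {
    vectorSearchOptions: {
      queries: [
        {
          kind: "vector",            // a VectorizedQuery: the raw vector is supplied directly
          vector: embedding,         // precomputed embedding (assumed)
          fields: ["contentVector"], // hypothetical vector field on the index
          kNearestNeighborsCount: 3,
        },
      ],
    },
  });
  for await (const result of results.results) {
    console.log(result.document.id, result.score);
  }
}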
Enums
Functions
createSynonymMapFromFile(string, string) |
Helper method to create a SynonymMap object. This is a NodeJS-only method. |
odata(TemplateStringsArray, unknown[]) |
Escapes an odata filter expression to avoid errors with quoting string literals. For more information on supported syntax see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter |
Function Details
createSynonymMapFromFile(string, string)
Helper method to create a SynonymMap object. This is a NodeJS-only method.
function createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>
Parameters
- name
-
string
Name of the SynonymMap.
- filePath
-
string
Path of the file that contains the synonyms (separated by new lines)
Returns
Promise<SynonymMap>
SynonymMap object
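A hedged usage sketch for this helper: build a SynonymMap from a newline-separated file and register it with a SearchIndexClient. The map name, file path, and service details are placeholders.

import {
  createSynonymMapFromFile,
  SearchIndexClient,
  AzureKeyCredential,
} from "@azure/search-documents";

async function main(): Promise<void> {
  // Read the synonym rules from disk (NodeJS only) and build the SynonymMap.
  const synonymMap = await createSynonymMapFromFile(
    "my-synonym-map", // placeholder map name
    "./synonyms.txt"  // placeholder path; one synonym rule per line
  );

  // Register the map with the service so index fields can reference it.
  const indexClient = new SearchIndexClient(
    "https://<service-name>.search.windows.net", // placeholder endpoint
    new AzureKeyCredential("<api-key>")          // placeholder key
  );
  await indexClient.createSynonymMap(synonymMap);
}

main().catch(console.error);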
odata(TemplateStringsArray, unknown[])
Escapes an odata filter expression to avoid errors with quoting string literals. Example usage:
const baseRateMax = 200;
const ratingMin = 4;
const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;
For more information on supported syntax see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter
function odata(strings: TemplateStringsArray, ...values: unknown[]): string
Parameters
- strings
-
TemplateStringsArray
Array of strings for the expression
- values
-
unknown[]
Array of values for the expression
Returns
string