OpenAIModelOptions interface
Options for configuring an OpenAIModel to call an OpenAI hosted model.
Extends: BaseOpenAIModelOptions
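A minimal sketch of constructing an OpenAIModel with these options. The import path, model name, and property values below are assumptions for illustration and may differ in your setup.

// Sketch only: the package name and model name are assumed, not confirmed by this reference.
import { OpenAIModel } from '@microsoft/teams-ai';

const model = new OpenAIModel({
    apiKey: process.env.OPENAI_API_KEY ?? '',   // API key used when calling the OpenAI API
    defaultModel: 'gpt-4o',                     // default model to use for completions (illustrative name)
    logRequests: true,                          // optional: log requests to the console while debugging prompts
    maxRetries: 3                               // optional: the default is to retry twice
});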
Properties
apiKey | API key to use when calling the OpenAI API. |
defaultModel | Default model to use for completions. |
endpoint | Optional. Endpoint to use when calling the OpenAI API. |
organization | Optional. Organization to use when calling the OpenAI API. |
project | Optional. Project to use when calling the OpenAI API. |
Inherited Properties
clientOptions | Optional. Custom client options to use when calling the OpenAI API. |
logRequests | Optional. Whether to log requests to the console. |
maxRetries | Optional. Maximum number of retries to use when calling the OpenAI API. |
requestConfig | Deprecated. Optional. Request options to use when calling the OpenAI API. |
responseFormat | Optional. Forces the model to return a specific response format. |
retryPolicy | Deprecated. Optional. Retry policy to use when calling the OpenAI API. |
seed | Optional. A static seed to use when making model calls. |
stream | Optional. Whether the model's responses should be streamed back using Server Sent Events (SSE). |
useSystemMessages | Optional. Whether to use system messages when calling the OpenAI API. |
Property Details
apiKey
API key to use when calling the OpenAI API.
apiKey: string
Property Value
string
Remarks
A new API key can be created at https://platform.openai.com/account/api-keys.
defaultModel
Default model to use for completions.
defaultModel: string
Property Value
string
endpoint
Optional. Endpoint to use when calling the OpenAI API.
endpoint?: string
Property Value
string
Remarks
For Azure OpenAI this is the deployment endpoint.
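For example, a sketch of pointing calls at a custom endpoint such as an Azure OpenAI deployment. The URL and model name are purely illustrative, and the type annotation assumes OpenAIModelOptions is imported.

const customEndpointOptions: OpenAIModelOptions = {
    apiKey: process.env.OPENAI_API_KEY ?? '',
    defaultModel: 'gpt-4o',                              // illustrative model name
    endpoint: 'https://my-resource.openai.azure.com'     // hypothetical deployment endpoint
};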
organization
Optional. Organization to use when calling the OpenAI API.
organization?: string
Property Value
string
project
Optional. Project to use when calling the OpenAI API.
project?: string
Property Value
string
Inherited Property Details
clientOptions
Optional. Custom client options to use when calling the OpenAI API.
clientOptions?: ClientOptions
Property Value
ClientOptions
Inherited From BaseOpenAIModelOptions.clientOptions
logRequests
Optional. Whether to log requests to the console.
logRequests?: boolean
Property Value
boolean
Remarks
This is useful for debugging prompts and defaults to false.
Inherited From BaseOpenAIModelOptions.logRequests
maxRetries
Optional. Maximum number of retries to use when calling the OpenAI API.
maxRetries?: number
Property Value
number
Remarks
The default is to retry twice.
Inherited From BaseOpenAIModelOptions.maxRetries
requestConfig
Warning
This API is now deprecated.
Optional. Request options to use when calling the OpenAI API.
requestConfig?: AxiosRequestConfig<any>
Property Value
AxiosRequestConfig<any>
Inherited From BaseOpenAIModelOptions.requestConfig
responseFormat
Optional. Forces the model to return a specific response format.
responseFormat?: { type: "json_object" }
Property Value
{ type: "json_object" }
Remarks
This can be used to force the model to always return a valid JSON object.
Inherited From BaseOpenAIModelOptions.responseFormat
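A brief sketch of enabling JSON-only output; the model name is illustrative and the type annotation assumes OpenAIModelOptions is imported.

const jsonOptions: OpenAIModelOptions = {
    apiKey: process.env.OPENAI_API_KEY ?? '',
    defaultModel: 'gpt-4o',
    responseFormat: { type: 'json_object' }   // model is forced to return a valid JSON object
};

Note that the OpenAI API generally also expects the prompt itself to ask for JSON when this response format is enabled.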
retryPolicy
Warning
This API is now deprecated.
Optional. Retry policy to use when calling the OpenAI API.
retryPolicy?: number[]
Property Value
number[]
Remarks
Use maxRetries instead.
Inherited From BaseOpenAIModelOptions.retryPolicy
seed
Optional. A static seed to use when making model calls.
seed?: number
Property Value
number
Remarks
The default is to use a random seed. Specifying a seed will make the model deterministic.
Inherited From BaseOpenAIModelOptions.seed
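A short sketch of pinning a seed for repeatable calls; the values are illustrative and the type annotation assumes OpenAIModelOptions is imported.

const deterministicOptions: OpenAIModelOptions = {
    apiKey: process.env.OPENAI_API_KEY ?? '',
    defaultModel: 'gpt-4o',
    seed: 42   // same seed + same prompt should yield repeatable output
};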
stream
Optional. Whether the model's responses should be streamed back using Server Sent Events (SSE).
stream?: boolean
Property Value
boolean
Remarks
Defaults to false.
Inherited From BaseOpenAIModelOptions.stream
useSystemMessages
Optional. Whether to use system messages when calling the OpenAI API.
useSystemMessages?: boolean
Property Value
boolean
Remarks
The current generation of models tends to follow instructions from user messages better than system messages, so the default is false, which causes any system message in the prompt to be sent as a user message instead.
Inherited From BaseOpenAIModelOptions.useSystemMessages