Interface ChatOptions

Describes the request format for sending a chat or chatStream request to an assistant.
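For orientation, here is a minimal sketch of passing these options to a chat request, assuming the Pinecone TypeScript client; the API key, assistant name, and question are placeholders:

```typescript
import { Pinecone } from '@pinecone-database/pinecone';

const pc = new Pinecone({ apiKey: 'your-api-key' }); // placeholder key
const assistant = pc.Assistant('example-assistant'); // placeholder assistant name

// The same ChatOptions shape is accepted by both chat() and chatStream().
const response = await assistant.chat({
  messages: [{ role: 'user', content: 'What is our refund policy?' }],
  model: 'gpt-4o',
});
console.log(response);
```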

Hierarchy

  • ChatOptions

Properties

contextOptions?: ChatContextOptions

Controls the context snippets sent to the LLM.
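A sketch of the shape: topK is confirmed by this page (see topK below), while snippetSize is an assumption and should be verified against the ChatContextOptions interface:

```typescript
const options: ChatOptions = {
  messages: ['Summarize the onboarding guide'], // placeholder question
  contextOptions: {
    topK: 8, // use at most 8 context snippets (see topK below)
    // snippetSize: 2048, // assumed field; verify against ChatContextOptions
  },
};
```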

filter?: object

A metadata filter that limits which documents can be retrieved and used as context for the response.
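For example, a filter might look like the following sketch; the metadata fields (documentType, year) are hypothetical, not part of the API:

```typescript
const options: ChatOptions = {
  messages: ['What changed in the latest release?'], // placeholder question
  // Only documents whose metadata matches this filter are retrieved;
  // documentType and year are hypothetical metadata fields.
  filter: { documentType: 'release-notes', year: 2024 },
};
```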

includeHighlights?: boolean

If true, the assistant will be instructed to return highlights from the referenced documents that support its response.

jsonResponse?: boolean

If true, the assistant will be instructed to return a JSON response.
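A sketch combining the two response flags, with a placeholder prompt:

```typescript
const options: ChatOptions = {
  messages: ['List the key dates from the contract as JSON'],
  includeHighlights: true, // request supporting highlights from referenced documents
  jsonResponse: true,      // request a JSON-formatted answer
};
```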

messages: MessagesModel

The MessagesModel to send to the assistant. Can be a list of strings or a list of objects. If sent as a list of objects, each object must have exactly two keys: role and content. The role key can only be one of user or assistant.
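Both accepted shapes, sketched with placeholder content:

```typescript
// As a list of strings:
const simple: ChatOptions = {
  messages: ['What does the warranty cover?'],
};

// As a list of objects, each with exactly the keys role and content:
const threaded: ChatOptions = {
  messages: [
    { role: 'user', content: 'What does the warranty cover?' },
    { role: 'assistant', content: 'Parts and labor for two years.' },
    { role: 'user', content: 'Does that include shipping?' },
  ],
};
```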

model?: string

The large language model to use for answer generation. Must be one of the models defined in ChatModelEnum. If empty, the assistant defaults to the 'gpt-4o' model.

temperature?: number

Controls the randomness of the model's output: lower values make responses more deterministic, while higher values increase creativity and variability. If the model does not support a temperature parameter, the parameter will be ignored.

topK?: number

The maximum number of context snippets to use. Default is 16. Maximum is 64. topK can also be passed through contextOptions. If both are provided, contextOptions.topK takes precedence.

Deprecated

Use contextOptions.topK instead.
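A sketch of the migration, with a placeholder question:

```typescript
// Deprecated: top-level topK
// const options: ChatOptions = { messages: ['...'], topK: 8 };

// Preferred: pass topK through contextOptions
const options: ChatOptions = {
  messages: ['Which documents mention the 2024 roadmap?'],
  contextOptions: { topK: 8 },
};
```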