context (Optional)
Controls the context snippets sent to the LLM.
filter (Optional)
A filter against which documents can be retrieved.
include (Optional)
If true, the assistant will be instructed to return highlights from the referenced documents that support its response.
json (Optional)
If true, the assistant will be instructed to return a JSON response.
messages
The MessagesModel to send to the Assistant. Can be a list of strings or a list of objects. If sent as a list of objects, each object must have exactly two keys: role and content. The role key can only be one of user or assistant.
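The two accepted shapes of messages can be sketched as follows. This is a minimal illustration, not the official SDK; the helper name validate_messages is hypothetical.

```python
# Hypothetical helper (not part of any SDK): checks the two documented
# shapes of `messages` -- a list of strings, or a list of objects with
# exactly the keys 'role' and 'content'.

def validate_messages(messages):
    for msg in messages:
        if isinstance(msg, str):
            continue  # plain-string messages are allowed as-is
        if set(msg.keys()) != {"role", "content"}:
            raise ValueError("each message object needs exactly 'role' and 'content'")
        if msg["role"] not in ("user", "assistant"):
            raise ValueError("role must be 'user' or 'assistant'")
    return messages

validate_messages(["What does the report conclude?"])        # list of strings: OK
validate_messages([{"role": "user", "content": "Hi"}])       # list of objects: OK
```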
model (Optional)
The large language model to use for answer generation. Must be one of the models defined in ChatModelEnum. If empty, the assistant defaults to the 'gpt-4o' model.
temperature (Optional)
Controls the randomness of the model's output: lower values make responses more deterministic, while higher values increase creativity and variability. If the model does not support a temperature parameter, the parameter will be ignored.
topK (Optional)
The maximum number of context snippets to use. Default is 16. Maximum is 64. topK can also be passed through contextOptions; if both are passed, contextOptions.topK will be used. Use contextOptions.topK instead.
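The documented precedence between the two topK fields can be sketched as below. This is an illustration of the stated rule, not the service's actual implementation; resolve_top_k is a hypothetical name.

```python
# Hypothetical sketch of the documented precedence: contextOptions.topK,
# when present, overrides a top-level topK; otherwise the default of 16 applies.

def resolve_top_k(top_k=None, context_options=None):
    context_options = context_options or {}
    if "topK" in context_options:
        return context_options["topK"]  # contextOptions.topK wins when both are passed
    if top_k is not None:
        return top_k                    # top-level topK used only on its own
    return 16                           # documented default (maximum is 64)

resolve_top_k(top_k=8)                                # -> 8
resolve_top_k(top_k=8, context_options={"topK": 32})  # -> 32
resolve_top_k()                                       # -> 16
```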
Describes the request format for sending a chat or chatStream request to an assistant.
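Putting the fields above together, a request body could look like the sketch below. Field names follow this page; the specific values (and the filter contents) are illustrative assumptions, and the endpoint/transport details are out of scope here.

```python
# Hedged sketch: assembling a chat request body from the fields described
# on this page. Values are examples, not defaults of the service.

request_body = {
    "messages": [
        {"role": "user", "content": "Summarize the attached report."},
    ],
    "model": "gpt-4o",                  # optional; default per this page
    "temperature": 0.2,                 # optional; lower = more deterministic
    "filter": {"genre": "report"},      # optional; example filter values
    "contextOptions": {"topK": 16},     # optional; preferred over top-level topK
    "include": True,                    # optional; request supporting highlights
    "json": False,                      # optional; plain-text response
}
```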