Class ChatCompletionRequest
java.lang.Object
nl.dannyj.mistral.models.completion.ChatCompletionRequest
All Implemented Interfaces:
Request
The ChatCompletionRequest class represents a request to create a chat completion (an assistant reply to the conversation).
Most of the field descriptions are taken from the Mistral API documentation.
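As a quick orientation, here is a minimal sketch of constructing a request via builder(). The individual builder setter names are assumed to mirror the constructor parameters, per the usual Lombok convention; only builder() itself, listed under Method Summary below, is confirmed on this page.

import java.util.List;

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;
import nl.dannyj.mistral.models.completion.Message;

public class BuildRequestExample {
    public static ChatCompletionRequest buildRequest(List<Message> messages) {
        // Builder setter names assumed from the constructor parameters
        // (String model, List<Message> messages, Double temperature, ...).
        return ChatCompletionRequest.builder()
                .model("mistral-small-latest") // any ID from the List Available Models API
                .messages(messages)            // at least one message; first role must be user or system
                .temperature(0.7)              // the documented default
                .build();
    }
}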
Nested Class Summary
Constructor Summary

ChatCompletionRequest()

ChatCompletionRequest(String model, List<Message> messages, Double temperature, Double topP, Integer maxTokens, Boolean stream, boolean safePrompt, Long randomSeed, ResponseFormat responseFormat)
Method Summary
Modifier and TypeMethodDescriptionbuilder()
protected boolean
boolean
The maximum number of tokens to generate in the completion.The prompt(s) to generate completions for, encoded as a list of dict with role and content.getModel()
ID of the model to use.The seed to use for random sampling.The response format of the completion request.Whether to stream back partial progress.What sampling temperature to use, between 0.0 and 1.0.getTopP()
Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.int
hashCode()
boolean
Whether to inject a safety prompt before all conversations.void
setMaxTokens
(Integer maxTokens) The maximum number of tokens to generate in the completion.void
setMessages
(List<Message> messages) The prompt(s) to generate completions for, encoded as a list of dict with role and content.void
ID of the model to use.void
setRandomSeed
(Long randomSeed) The seed to use for random sampling.void
setResponseFormat
(ResponseFormat responseFormat) The response format of the completion request.void
setSafePrompt
(boolean safePrompt) Whether to inject a safety prompt before all conversations.void
Whether to stream back partial progress.void
setTemperature
(Double temperature) What sampling temperature to use, between 0.0 and 1.0.void
Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.toString()
Constructor Details

ChatCompletionRequest
public ChatCompletionRequest(String model, List<Message> messages, Double temperature, Double topP, Integer maxTokens, Boolean stream, boolean safePrompt, Long randomSeed, ResponseFormat responseFormat)

ChatCompletionRequest
public ChatCompletionRequest()
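A request can also be populated through the no-argument constructor and the setters documented below; a minimal sketch using only methods confirmed on this page (the model ID is a placeholder):

import java.util.List;

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;
import nl.dannyj.mistral.models.completion.Message;

public class NoArgConstructorExample {
    public static ChatCompletionRequest buildRequest(List<Message> messages) {
        ChatCompletionRequest request = new ChatCompletionRequest();
        request.setModel("mistral-small-latest"); // can't be null or empty
        request.setMessages(messages);            // can't be null or empty
        request.setMaxTokens(256);                // has to be positive or zero
        return request;
    }
}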
Method Details

builder
Creates a new builder for constructing a ChatCompletionRequest.
getModel
public String getModel()
ID of the model to use. You can use the List Available Models API to see all of your available models.
Returns:
The model's ID.
getMessages
public List<Message> getMessages()
The prompt(s) to generate completions for, encoded as a list of dicts with role and content. Must contain at least one message, and the first message's role should be user or system.
Returns:
The messages/conversation to generate completions for.
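To make the shape of the messages list concrete, a hedged sketch follows; the two-argument Message constructor and the string role representation are assumptions, as the Message API is not documented on this page and may use an enum or setters instead.

import java.util.List;

import nl.dannyj.mistral.models.completion.Message;

public class MessagesExample {
    public static List<Message> conversation() {
        // Assumed (role, content) constructor; adjust to the actual Message API.
        // The first message's role must be user or system.
        Message system = new Message("system", "You are a helpful assistant.");
        Message user = new Message("user", "What is the capital of France?");
        return List.of(system, user);
    }
}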
getTemperature
public Double getTemperature()
What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. Defaults to 0.7.
Returns:
The sampling temperature to use.
getTopP
public Double getTopP()
Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Defaults to 1.0 (i.e., no nucleus sampling).
Returns:
The top p value to use.
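Since the documentation recommends tuning temperature or top_p but not both, here is a sketch of the two alternatives using the setters confirmed below (the values are illustrative):

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;

public class SamplingExample {
    // Option A: tune temperature, leave top_p at its default of 1.0.
    static void focusedByTemperature(ChatCompletionRequest request) {
        request.setTemperature(0.2); // low temperature: more focused and deterministic
    }

    // Option B: tune top_p, leave temperature at its default of 0.7.
    static void focusedByTopP(ChatCompletionRequest request) {
        request.setTopP(0.1); // only tokens in the top 10% probability mass are considered
    }
}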
getMaxTokens
public Integer getMaxTokens()
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Defaults to 32000, which is the maximum value for all currently available models.
Returns:
The maximum number of tokens to generate in the completion.
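To illustrate the context-length constraint, a small sketch with placeholder numbers; a real prompt token count would come from a tokenizer, which this class does not provide.

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;

public class MaxTokensExample {
    static void capMaxTokens(ChatCompletionRequest request) {
        int contextLength = 32000; // documented maximum for currently available models
        int promptTokens = 1500;   // placeholder; obtain from a tokenizer in practice
        // prompt tokens + max_tokens must not exceed the model's context length
        request.setMaxTokens(Math.min(512, contextLength - promptTokens));
    }
}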
getStream
public Boolean getStream()
Whether to stream back partial progress. When set to true, the MistralClient.createChatCompletionStream(ChatCompletionRequest, ChatCompletionChunkCallback) method has to be used.
Returns:
Whether to stream back partial progress.
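A hedged sketch of the streaming path: only the createChatCompletionStream method name and its parameter types are confirmed above, while the MistralClient import path and the exact shape of ChatCompletionChunkCallback are assumptions.

import nl.dannyj.mistral.MistralClient; // import path assumed from the package root

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;

public class StreamingExample {
    static void streamCompletion(MistralClient client, ChatCompletionRequest request) {
        request.setStream(true); // streaming requests must go through createChatCompletionStream
        // The callback is sketched as a lambda receiving each partial chunk; if
        // ChatCompletionChunkCallback declares multiple methods (e.g. completion
        // or error hooks), implement it as a class instead.
        client.createChatCompletionStream(request, chunk -> {
            System.out.println(chunk); // handle partial progress as it arrives
        });
    }
}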
isSafePrompt
public boolean isSafePrompt()
Whether to inject a safety prompt before all conversations. Toggling the safe prompt will prepend your messages with the following system prompt: "Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity."
Returns:
Whether to inject a safety prompt before all conversations.
getRandomSeed
public Long getRandomSeed()
The seed to use for random sampling. If set, repeated calls with the same seed and inputs will produce deterministic results.
Returns:
The seed to use for random sampling.
getResponseFormat
public ResponseFormat getResponseFormat()
The response format of the completion request. Defaults to "text". Currently only available when using the mistral small and mistral large models; for other models, this MUST be set to null, otherwise you may get a 422 Unprocessable Content error.
Returns:
The response format of the completion request.
setModel
public void setModel(String model)
ID of the model to use. You can use the List Available Models API to see all of your available models.
Parameters:
model - The model's ID. Can't be null or empty.
setMessages
public void setMessages(List<Message> messages)
The prompt(s) to generate completions for, encoded as a list of dicts with role and content. Must contain at least one message, and the first message's role should be user or system.
Parameters:
messages - The messages/conversation to generate completions for. Can't be null or empty.
setTemperature
public void setTemperature(Double temperature)
What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. Defaults to 0.7.
Parameters:
temperature - The sampling temperature to use. Has to be between 0.0 and 1.0.
setTopP
public void setTopP(Double topP)
Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Defaults to 1.0 (i.e., no nucleus sampling).
Parameters:
topP - The top p value to use. Has to be between 0.0 and 1.0.
setMaxTokens
public void setMaxTokens(Integer maxTokens)
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Defaults to 32000, which is the maximum value for all currently available models.
Parameters:
maxTokens - The maximum number of tokens to generate in the completion. Has to be positive or zero.
setStream
public void setStream(Boolean stream)
Whether to stream back partial progress. When set to true, the MistralClient.createChatCompletionStream(ChatCompletionRequest, ChatCompletionChunkCallback) method has to be used.
Parameters:
stream - Whether to stream back partial progress. Setting this to null will default to false.
setSafePrompt
public void setSafePrompt(boolean safePrompt)
Whether to inject a safety prompt before all conversations. Toggling the safe prompt will prepend your messages with the following system prompt: "Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity."
Parameters:
safePrompt - Whether to inject a safety prompt before all conversations.
setRandomSeed
public void setRandomSeed(Long randomSeed)
The seed to use for random sampling. If set, repeated calls with the same seed and inputs will produce deterministic results.
Parameters:
randomSeed - The seed to use for random sampling. Set to null for a random seed.
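For reproducible generations, fix the seed; a minimal sketch using only the setter confirmed above:

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;

public class RandomSeedExample {
    static void configureSeed(ChatCompletionRequest request, boolean reproducible) {
        if (reproducible) {
            request.setRandomSeed(42L); // same seed + same request => deterministic results
        } else {
            request.setRandomSeed(null); // null means a random seed is used
        }
    }
}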
setResponseFormat
public void setResponseFormat(ResponseFormat responseFormat)
The response format of the completion request. Defaults to "text". Currently only available when using the mistral small and mistral large models; for other models, this MUST be set to null, otherwise you may get a 422 Unprocessable Content error.
Parameters:
responseFormat - The response format of the completion request. Currently only available when using the mistral small and mistral large models. For other models, this MUST be set to null.
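A sketch of the model-dependent handling described above; how a non-null ResponseFormat is constructed is not covered on this page, so only the null case is shown as confirmed, and the model-name check is a placeholder heuristic.

import nl.dannyj.mistral.models.completion.ChatCompletionRequest;

public class ResponseFormatExample {
    static void configureFormat(ChatCompletionRequest request, String modelId) {
        if (modelId.contains("small") || modelId.contains("large")) {
            // A ResponseFormat instance could be set here for mistral small/large
            // models; its construction is not documented on this page.
        } else {
            request.setResponseFormat(null); // MUST be null for other models (avoids 422 errors)
        }
    }
}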
equals
public boolean equals(Object o)

canEqual
protected boolean canEqual(Object other)

hashCode
public int hashCode()

toString
public String toString()