Generate a chat completion using the Mistral Large 3 model
Bearer authentication using API key
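A minimal sketch of the bearer authentication header, assuming the API key is read from an environment variable (the variable name is an assumption, not taken from this reference):

```python
import os

# Sketch only: the environment variable name is an assumption.
headers = {
    "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
    "Content-Type": "application/json",
}
```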
Request payload
OpenAI-compatible chat completions API request format
messages: Array of conversation messages with roles (minimum 1 message)
model: Model name to use for the request, e.g. mistral/mistral-large-3
frequency_penalty: Penalty for frequent tokens (-2 <= x <= 2)
max_tokens: Maximum number of tokens to generate (1 <= x <= 262144)
presence_penalty: Penalty for new topics (-2 <= x <= 2)
stream: Enable streaming response
temperature: Controls randomness in output; higher values are more random (0 <= x <= 2)
top_p: Nucleus sampling parameter that controls diversity (0 <= x <= 1)
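A hedged example of a request body built from the parameters above and posted with the `requests` library; the endpoint URL is an assumption and should be replaced with the actual base URL for your deployment:

```python
import os
import requests

# Assumed endpoint path; substitute the real base URL for your deployment.
URL = "https://api.example.com/v1/chat/completions"

payload = {
    "model": "mistral/mistral-large-3",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize bearer authentication in one sentence."},
    ],
    "max_tokens": 256,          # 1 <= x <= 262144
    "temperature": 0.7,         # 0 <= x <= 2
    "top_p": 0.95,              # 0 <= x <= 1
    "frequency_penalty": 0.0,   # -2 <= x <= 2
    "presence_penalty": 0.0,    # -2 <= x <= 2
    "stream": False,            # set True for a streaming response
}

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())
```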
Successful response
Chat completion response
created: Unix timestamp of when the completion was created
id: Unique identifier for the chat completion
model: The model used for the completion
object: Object type, typically 'chat.completion.chunk'
system_fingerprint: System fingerprint for the completion
usage: Token usage information
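A short sketch of reading the documented response fields from the JSON body returned by the request example above; the `choices` field is assumed from the OpenAI-compatible schema and is not described on this page:

```python
def summarize_completion(data: dict) -> None:
    """Print the documented top-level fields of a chat completion response."""
    print(data["id"])                      # unique identifier for the completion
    print(data["created"])                 # Unix timestamp of creation
    print(data["model"])                   # model used for the completion
    print(data["object"])                  # object type
    print(data.get("system_fingerprint"))  # system fingerprint for the completion
    print(data["usage"])                   # token usage information
    # `choices` is an assumption based on the OpenAI-compatible schema;
    # it is not listed among the fields documented above.
    print(data["choices"][0]["message"]["content"])

summarize_completion(response.json())  # `response` from the request example above
```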