# Requests

These Pydantic models represent the configuration for a request to a specific OpenAI API endpoint. They contain all the parameters you can set, such as `model`, `temperature`, `max_tokens`, etc.

You use these models when defining a `common_request` for the `BatchJobManager` or when creating a request via the `BatchCollector`.
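As a quick orientation, here is a minimal sketch of defining a shared request. The `openbatch.model` import path is taken from the source-code references below; the `model_dump` call assumes Pydantic v2 (use `.dict()` on v1), and the exact `BatchJobManager`/`BatchCollector` signatures are covered in their own docs.

```python
# Minimal sketch: build a request model to reuse as a common_request.
# Assumes the models live in openbatch.model (per the source references below)
# and Pydantic v2 semantics (model_dump); adjust for your installed version.
from openbatch.model import ChatCompletionsRequest

common_request = ChatCompletionsRequest(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.2,
)

# Unset optional parameters are omitted, so only what you configured is sent.
print(common_request.model_dump(exclude_none=True))
```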
## ChatCompletionsRequest

Bases: `TextGenerationRequest`

Configuration for a `/v1/chat/completions` API request.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Model ID used to generate the response, like `"gpt-4.1"`. Defaults to `"gpt-4.1"`. |
| `messages` | `List[Dict[str, str]]` | A list of messages in the conversation. |
| `frequency_penalty` | `Optional[float]` | Penalizes new tokens based on frequency (-2.0 to 2.0). |
| `logit_bias` | `Optional[Dict]` | Modifies the likelihood of specified tokens. |
| `logprobs` | `Optional[bool]` | Whether to return log probabilities. |
| `max_completion_tokens` | `Optional[int]` | Upper bound for generated completion tokens. |
| `modalities` | `Optional[List[str]]` | Output types the model should generate. |
| `n` | `Optional[int]` | How many chat completion choices to generate. |
| `prediction` | `Optional[object]` | Configuration for a Predicted Output. |
| `presence_penalty` | `Optional[float]` | Penalizes new tokens based on presence (-2.0 to 2.0). |
| `reasoning_effort` | `Optional[Literal['minimal', 'low', 'medium', 'high']]` | Constrains reasoning effort. |
| `response_format` | `Optional[Dict]` | Specifies the format that the model must output (e.g., a JSON schema). |
| `verbosity` | `Optional[Literal['low', 'medium', 'high']]` | Constrains the response verbosity. |
| `web_search_options` | `Optional[object]` | Configuration for the web search tool. |
| `tools` | `Optional[List[object]]` | An array of tools the model may call. |
| `top_p` | `Optional[float]` | An alternative to sampling with temperature (nucleus sampling). |
| `parallel_tool_calls` | `Optional[bool]` | Whether to allow parallel tool calls. |
| `prompt_cache_key` | `Optional[str]` | Used by OpenAI to cache responses. |
| `safety_identifier` | `Optional[str]` | A stable identifier for policy monitoring. |
| `service_tier` | `Optional[Literal['auto', 'default', 'flex', 'priority']]` | Specifies the processing type. |
| `store` | `Optional[bool]` | Whether to store the generated model response. |
| `temperature` | `Optional[float]` | Sampling temperature to use (0 to 2). |
| `tool_choice` | `Optional[str \| object]` | How the model should select which tool to use. |
| `top_logprobs` | `Optional[int]` | Number of most likely tokens to return at each position (0 to 20). |
Source code in `openbatch/model.py`.
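For example, a structured-output request might look like the sketch below. The `response_format` payload follows OpenAI's JSON-schema convention for `/v1/chat/completions`; the import path is an assumption.

```python
from openbatch.model import ChatCompletionsRequest  # import path assumed

request = ChatCompletionsRequest(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "Extract the requested fields as JSON."},
        {"role": "user", "content": "Jane Doe, 34, Berlin"},
    ],
    # response_format follows the chat completions structured-outputs shape.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "city": {"type": "string"},
                },
                "required": ["name", "age", "city"],
                "additionalProperties": False,
            },
        },
    },
    temperature=0.0,
)
```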
## ResponsesRequest

Bases: `TextGenerationRequest`

Configuration for a `/v1/responses` API request.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Model ID used to generate the response, like `"gpt-4.1"`. Defaults to `"gpt-4.1"`. |
| `conversation` | `Optional[str]` | The conversation this response belongs to. |
| `include` | `Optional[List[Literal[...]]]` | Specify additional output data to include. |
| `input` | `Optional[str \| List[Dict[str, str]]]` | Text, image, or file inputs to the model. |
| `instructions` | `Optional[str]` | A system or developer message. |
| `max_output_tokens` | `Optional[int]` | Upper bound for generated tokens. |
| `max_tool_calls` | `Optional[int]` | Maximum number of tool calls allowed. |
| `previous_response_id` | `Optional[str]` | ID of the previous response for multi-turn. |
| `prompt` | `Optional[ReusablePrompt]` | Reference to a prompt template and its variables. |
| `reasoning` | `Optional[ReasoningConfig]` | Configuration for reasoning models. |
| `text` | `Optional[object]` | Configuration options for a text response from the model (e.g., a JSON schema). |
| `truncation` | `Optional[Literal['auto', 'disabled']]` | The truncation strategy to use. |
| `tools` | `Optional[List[object]]` | An array of tools the model may call. |
| `top_p` | `Optional[float]` | An alternative to sampling with temperature (nucleus sampling). |
| `parallel_tool_calls` | `Optional[bool]` | Whether to allow parallel tool calls. |
| `prompt_cache_key` | `Optional[str]` | Used by OpenAI to cache responses. |
| `safety_identifier` | `Optional[str]` | A stable identifier for policy monitoring. |
| `service_tier` | `Optional[Literal['auto', 'default', 'flex', 'priority']]` | Specifies the processing type. |
| `store` | `Optional[bool]` | Whether to store the generated model response. |
| `temperature` | `Optional[float]` | Sampling temperature to use (0 to 2). |
| `tool_choice` | `Optional[str \| object]` | How the model should select which tool to use. |
| `top_logprobs` | `Optional[int]` | Number of most likely tokens to return at each position (0 to 20). |
Source code in `openbatch/model.py`.
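A sketch of a single-turn Responses request, using only fields from the table above (import path assumed):

```python
from openbatch.model import ResponsesRequest  # import path assumed

request = ResponsesRequest(
    model="gpt-4.1",
    instructions="You are a terse literary assistant.",     # system/developer message
    input="Summarize the plot of Hamlet in one sentence.",  # plain-text input
    max_output_tokens=128,
    temperature=0.3,
    # For multi-turn use, set previous_response_id to chain onto an earlier
    # response instead of resending the whole conversation.
)
```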
## EmbeddingsRequest

Bases: `BaseRequest`

Configuration for a `/v1/embeddings` API request.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Model ID used to generate the embeddings, like `"text-embedding-3-small"`. |
| `input` | `Union[str, List[str]]` | Input text or array of tokens to embed. |
| `dimensions` | `Optional[int]` | The desired number of dimensions for the resulting embeddings. |
| `encoding_format` | `Optional[Literal['base64', 'float']]` | The format to return the embeddings in. |
| `user` | `Optional[str]` | A unique identifier representing the end-user. |
Source code in `openbatch/model.py`.
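A sketch of an embeddings request (import path assumed); note that `dimensions` is only honored by models that support shortened embeddings, such as the text-embedding-3 family:

```python
from openbatch.model import EmbeddingsRequest  # import path assumed

request = EmbeddingsRequest(
    model="text-embedding-3-small",
    input=["first passage to embed", "second passage to embed"],
    dimensions=256,           # shorten the vectors (text-embedding-3 models)
    encoding_format="float",
)
```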