# Requests

These Pydantic models represent the configuration for a request to a specific OpenAI API endpoint. They contain all the parameters you can set, such as `model`, `temperature`, and `max_tokens`.

You use these models when defining a `common_request` for the `BatchJobManager` or when creating a request via the `BatchCollector`.
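For example, a minimal sketch of a shared request template (the import path follows the "Source code" notes below; the field values are illustrative assumptions):

```python
from openbatch.model import ChatCompletionsRequest

# A template for the parameters shared by every request in a batch;
# this object would then serve as the common_request for a BatchJobManager.
common_request = ChatCompletionsRequest(
    model="gpt-4.1",
    temperature=0.2,            # illustrative value
    max_completion_tokens=512,  # illustrative value
)
```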
## ChatCompletionsRequest

Bases: `TextGenerationRequest`

Configuration for a `/v1/chat/completions` API request.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Model ID used to generate the response, like `"gpt-4.1"`. Defaults to `"gpt-4.1"`. |
| `messages` | `List[Dict[str, str]]` | A list of messages in the conversation. |
| `frequency_penalty` | `Optional[float]` | Penalizes new tokens based on frequency (-2.0 to 2.0). |
| `logit_bias` | `Optional[Dict]` | Modifies the likelihood of specified tokens. |
| `logprobs` | `Optional[bool]` | Whether to return log probabilities. |
| `max_completion_tokens` | `Optional[int]` | Upper bound for generated completion tokens. |
| `modalities` | `Optional[List[str]]` | Output types the model should generate. |
| `n` | `Optional[int]` | How many chat completion choices to generate. |
| `prediction` | `Optional[object]` | Configuration for a Predicted Output. |
| `presence_penalty` | `Optional[float]` | Penalizes new tokens based on presence (-2.0 to 2.0). |
| `reasoning_effort` | `Optional[Literal['minimal', 'low', 'medium', 'high']]` | Constrains reasoning effort. |
| `response_format` | `Optional[Dict]` | Specifies the format that the model must output (e.g., a JSON schema). |
| `verbosity` | `Optional[Literal['low', 'medium', 'high']]` | Constrains the response verbosity. |
| `web_search_options` | `Optional[object]` | Configuration for the web search tool. |
| `tools` | `Optional[List[object]]` | An array of tools the model may call. |
| `top_p` | `Optional[float]` | An alternative to sampling with temperature (nucleus sampling). |
| `parallel_tool_calls` | `Optional[bool]` | Whether to allow parallel tool calls. |
| `prompt_cache_key` | `Optional[str]` | Used by OpenAI to cache responses. |
| `safety_identifier` | `Optional[str]` | A stable identifier for policy monitoring. |
| `service_tier` | `Optional[Literal['auto', 'default', 'flex', 'priority']]` | Specifies the processing type. |
| `store` | `Optional[bool]` | Whether to store the generated model response. |
| `temperature` | `Optional[float]` | Sampling temperature to use (0 to 2). |
| `tool_choice` | `Optional[str \| object]` | How the model should select which tool to use. |
| `top_logprobs` | `Optional[int]` | Number of most likely tokens to return at each position (0 to 20). |
Source code in `openbatch/model.py`, lines 398–490.
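A short construction sketch using the attributes above (values are illustrative; `model_dump` assumes Pydantic v2):

```python
from openbatch.model import ChatCompletionsRequest

request = ChatCompletionsRequest(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize nucleus sampling in one sentence."},
    ],
    temperature=0.7,
    max_completion_tokens=256,
    response_format={"type": "json_object"},  # ask for a JSON object back
)

# Pydantic v2 serialization; drop unset optionals so the request body stays small.
print(request.model_dump(exclude_none=True))
```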
## ResponsesRequest

Bases: `TextGenerationRequest`

Configuration for a `/v1/responses` API request.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Model ID used to generate the response, like `"gpt-4.1"`. Defaults to `"gpt-4.1"`. |
| `conversation` | `Optional[str]` | The conversation this response belongs to. |
| `include` | `Optional[List[Literal[...]]]` | Specify additional output data to include. |
| `input` | `Optional[str \| List[Dict[str, str]]]` | Text, image, or file inputs to the model. |
| `instructions` | `Optional[str]` | A system or developer message. |
| `max_output_tokens` | `Optional[int]` | Upper bound for generated tokens. |
| `max_tool_calls` | `Optional[int]` | Maximum number of tool calls allowed. |
| `previous_response_id` | `Optional[str]` | ID of the previous response for multi-turn conversations. |
| `prompt` | `Optional[ReusablePrompt]` | Reference to a prompt template and its variables. |
| `reasoning` | `Optional[ReasoningConfig]` | Configuration for reasoning models. |
| `text` | `Optional[object]` | Configuration options for a text response from the model (e.g., a JSON schema). |
| `truncation` | `Optional[Literal['auto', 'disabled']]` | The truncation strategy to use. |
| `tools` | `Optional[List[object]]` | An array of tools the model may call. |
| `top_p` | `Optional[float]` | An alternative to sampling with temperature (nucleus sampling). |
| `parallel_tool_calls` | `Optional[bool]` | Whether to allow parallel tool calls. |
| `prompt_cache_key` | `Optional[str]` | Used by OpenAI to cache responses. |
| `safety_identifier` | `Optional[str]` | A stable identifier for policy monitoring. |
| `service_tier` | `Optional[Literal['auto', 'default', 'flex', 'priority']]` | Specifies the processing type. |
| `store` | `Optional[bool]` | Whether to store the generated model response. |
| `temperature` | `Optional[float]` | Sampling temperature to use (0 to 2). |
| `tool_choice` | `Optional[str \| object]` | How the model should select which tool to use. |
| `top_logprobs` | `Optional[int]` | Number of most likely tokens to return at each position (0 to 20). |
Source code in `openbatch/model.py`, lines 305–395.
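A comparable sketch for the Responses endpoint (values are illustrative; the response ID is a placeholder, not a real value):

```python
from openbatch.model import ResponsesRequest

request = ResponsesRequest(
    model="gpt-4.1",
    instructions="Answer as a senior code reviewer.",  # system/developer message
    input=[{"role": "user", "content": "Is `getData2` a good function name?"}],
    max_output_tokens=300,
    previous_response_id="resp_123",  # placeholder ID for multi-turn chaining
)
```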
## EmbeddingsRequest

Bases: `BaseRequest`

Configuration for a `/v1/embeddings` API request.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `str` | Model ID used to generate the embeddings, like `"text-embedding-3-small"`. |
| `input` | `Union[str, List[str]]` | Input text or array of tokens to embed. |
| `dimensions` | `Optional[int]` | The desired number of dimensions for the resulting embeddings. |
| `encoding_format` | `Optional[Literal['base64', 'float']]` | The format to return the embeddings in. |
| `user` | `Optional[str]` | A unique identifier representing the end-user. |
Source code in `openbatch/model.py`, lines 493–522.
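Finally, a construction sketch for embeddings (values are illustrative; `dimensions` must be supported by the chosen model):

```python
from openbatch.model import EmbeddingsRequest

request = EmbeddingsRequest(
    model="text-embedding-3-small",
    input=["first text to embed", "second text to embed"],
    dimensions=256,           # shrink the output vectors (illustrative)
    encoding_format="float",
)
```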