parallel_chat {ellmer} | R Documentation
Submit multiple chats in parallel
Description
If you have multiple prompts, you can submit them in parallel. This is typically considerably faster than submitting them in sequence, especially with Gemini and OpenAI.
If you're using chat_openai() or chat_anthropic() and you're willing to wait longer, you might want to use batch_chat() instead: it comes with a 50% discount in return for taking up to 24 hours.
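For comparison, a batch submission might look something like the sketch below. This assumes batch_chat()'s chat/prompts/path arguments; path names a file where intermediate state is stored so the batch can be resumed across R sessions (check your installed ellmer version for the exact signature):

```r
library(ellmer)

chat <- chat_openai()
prompts <- list("Tell me a joke", "Tell me a riddle")

# Submits the prompts as one batch; progress is saved to the path,
# so re-running the same call resumes rather than restarts.
chats <- batch_chat(chat, prompts, path = "jokes.json")
```

The trade-off is latency for cost: the batch may take up to 24 hours to complete, but is billed at half price.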
Usage
parallel_chat(chat, prompts, max_active = 10, rpm = 500)
parallel_chat_structured(
  chat,
  prompts,
  type,
  convert = TRUE,
  include_tokens = FALSE,
  include_cost = FALSE,
  max_active = 10,
  rpm = 500
)
Arguments
chat
    A base chat object.
prompts
    A vector of prompts, such as one created by interpolate().
max_active
    The maximum number of simultaneous requests to send.
rpm
    Maximum number of requests per minute.
type
    A type specification for the extracted data. Should be created with one of the type_*() helpers (e.g. type_object(), as in the examples below).
convert
    If TRUE (the default), automatically convert the extracted data to simpler R data structures.
include_tokens
    If TRUE, include columns recording the tokens used by each prompt.
include_cost
    If TRUE, include a column recording the estimated cost of each prompt.
Value
For parallel_chat(), a list of Chat objects, one for each prompt.
For parallel_chat_structured(), a single structured data object with one element for each prompt. Typically, when type is an object, this will be a data frame with one row for each prompt and one column for each property.
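Because parallel_chat() returns ordinary Chat objects, you can pull the reply text out of each one afterwards. A minimal sketch, assuming ellmer's $last_turn() method and contents_text() helper behave as in current releases:

```r
library(ellmer)

chat <- chat_openai()
country <- c("Canada", "New Zealand")
prompts <- interpolate("What's the capital of {{country}}?")

chats <- parallel_chat(chat, prompts)

# Extract the text of each assistant reply into a character vector,
# in the same order as the input prompts.
answers <- vapply(
  chats,
  function(c) contents_text(c$last_turn()),
  character(1)
)
```

Each element of the returned list is a full Chat, so you can also continue any individual conversation with further calls to its $chat() method.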
Examples
chat <- chat_openai()
# Chat ----------------------------------------------------------------------
country <- c("Canada", "New Zealand", "Jamaica", "United States")
prompts <- interpolate("What's the capital of {{country}}?")
parallel_chat(chat, prompts)
# Structured data -----------------------------------------------------------
prompts <- list(
"I go by Alex. 42 years on this planet and counting.",
"Pleased to meet you! I'm Jamal, age 27.",
"They call me Li Wei. Nineteen years young.",
"Fatima here. Just celebrated my 35th birthday last week.",
"The name's Robert - 51 years old and proud of it.",
"Kwame here - just hit the big 5-0 this year."
)
type_person <- type_object(name = type_string(), age = type_number())
parallel_chat_structured(chat, prompts, type_person)