send_prompt {tidyprompt}    R Documentation

Send a prompt to an LLM provider

Description

This function sends prompts to an LLM provider for evaluation. It will interact with the LLM provider until a successful response is received or the maximum number of interactions is reached. Extraction and validation functions are applied to the LLM response, as specified in the prompt wraps (see prompt_wrap()). If the maximum number of interactions is reached without a successful response, 'NULL' is returned as the response (see 'Value').

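For example, a prompt wrap such as answer_as_integer() (shown in the Examples below) adds both instructions to the prompt and an extraction/validation step. When extraction or validation fails, feedback is sent to the LLM and a new completion is requested:

  "What is 5 + 5?" |>
    answer_as_integer() |>
    send_prompt(llm_provider_ollama())
  # If the LLM replies with, e.g., "ten", extraction fails; feedback is
  # then sent and a new interaction is attempted (up to 'max_interactions')
  # [1] 10
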
Usage

send_prompt(
  prompt,
  llm_provider = llm_provider_ollama(),
  max_interactions = 10,
  clean_chat_history = TRUE,
  verbose = NULL,
  stream = NULL,
  return_mode = c("only_response", "full")
)

Arguments

prompt

A string or a tidyprompt object

llm_provider

llm_provider object (default is llm_provider_ollama()). This object and its settings will be used to evaluate the prompt. Note that the 'verbose' and 'stream' settings in the LLM provider will be overruled by the 'verbose' and 'stream' arguments of this function when those are not NULL. Furthermore, advanced tidyprompt objects may carry '$parameter_fn' functions which can set parameters in the llm_provider object (see prompt_wrap() and llm_provider for more information).
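
For example (a minimal sketch; the 'parameters' argument of llm_provider_ollama() is an assumption here, see llm_provider for the provider's actual settings):

  # Hypothetical configuration; send_prompt()'s 'verbose' overrules the provider's
  provider <- llm_provider_ollama(parameters = list(model = "llama3.1:8b"))
  "Hi!" |>
    send_prompt(provider, verbose = FALSE)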

max_interactions

Maximum number of interactions allowed with the LLM provider. Default is 10. If the maximum number of interactions is reached without a successful response, 'NULL' is returned as the response (see 'Value'). The first interaction is the initial chat completion.
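
For example, to cap the number of interactions and handle a failed evaluation:

  response <- "What is 5 + 5?" |>
    answer_as_integer() |>
    send_prompt(llm_provider_ollama(), max_interactions = 3)
  if (is.null(response)) message("No valid response within 3 interactions")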

clean_chat_history

Whether the chat history should be cleaned after each interaction. Cleaning the chat history means that only the first and last message from the user, the last message from the assistant, all messages from the system, and all tool results are kept in a 'clean' chat history. This clean chat history is used when requesting a new chat completion (i.e., if an LLM repeatedly fails to provide a correct response, only its last failed response will be included in the context window). This may improve LLM performance on the next interaction.
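
The cleaned history can be inspected by evaluating with return_mode = "full":

  result <- "What is 5 + 5?" |>
    answer_as_integer() |>
    send_prompt(llm_provider_ollama(), return_mode = "full")
  result$chat_history_clean  # the history used when requesting new completions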

verbose

Whether the interaction with the LLM provider should be printed to the console. This will overrule the 'verbose' setting in the LLM provider.

stream

Whether the interaction with the LLM provider should be streamed. This setting will only be used if the LLM provider has a 'stream' parameter (which indicates there is support for streaming). Note that when 'verbose' is set to FALSE, the 'stream' setting will be ignored.
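
For example:

  "Hi!" |>
    send_prompt(llm_provider_ollama(), verbose = TRUE, stream = FALSE)
  # Prints the interaction, but does not stream tokens as they arrive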

return_mode

One of 'only_response' (the default) or 'full'. See 'Value' below.

Value

If return_mode is "only_response": the response from the LLM provider as a character string, or 'NULL' if the maximum number of interactions was reached without a successful response. If return_mode is "full": a list with the elements shown in the Examples below, including '$response', the raw and cleaned chat histories ('$chat_history', '$chat_history_clean'), timing information ('$start_time', '$end_time', '$duration_seconds'), and the HTTP responses of the interactions ('$http_list').

See Also

tidyprompt, prompt_wrap(), llm_provider, llm_provider_ollama(), llm_provider_openai()

Other prompt_evaluation: llm_break(), llm_feedback()

Examples

## Not run: 
  "Hi!" |>
    send_prompt(llm_provider_ollama())
  # --- Sending request to LLM provider (llama3.1:8b): ---
  #   Hi!
  # --- Receiving response from LLM provider: ---
  #   It's nice to meet you. Is there something I can help you with, or would you like to chat?
  # [1] "It's nice to meet you. Is there something I can help you with, or would you like to chat?"

  "Hi!" |>
    send_prompt(llm_provider_ollama(), return_mode = "full")
  # --- Sending request to LLM provider (llama3.1:8b): ---
  #   Hi!
  # --- Receiving response from LLM provider: ---
  #   It's nice to meet you. Is there something I can help you with, or would you like to chat?
  # $response
  # [1] "It's nice to meet you. Is there something I can help you with, or would you like to chat?"
  #
  # $chat_history
  # ...
  #
  # $chat_history_clean
  # ...
  #
  # $start_time
  # [1] "2024-11-18 15:43:12 CET"
  #
  # $end_time
  # [1] "2024-11-18 15:43:13 CET"
  #
  # $duration_seconds
  # [1] 1.13276
  #
  # $http_list
  # $http_list[[1]]
  # Response [http://localhost:11434/api/chat]
  #   Date: 2024-11-18 14:43
  #   Status: 200
  #   Content-Type: application/x-ndjson
  # <EMPTY BODY>

  "Hi!" |>
    add_text("What is 5 + 5?") |>
    answer_as_integer() |>
    send_prompt(llm_provider_ollama(), verbose = FALSE)
  # [1] 10
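
  # A sketch of custom validation with llm_feedback(): a validation function
  # that returns llm_feedback() sends its message back to the LLM and
  # triggers a retry (the 'validation_fn' argument of prompt_wrap() is
  # assumed here; see prompt_wrap())
  "What is 5 + 5?" |>
    prompt_wrap(
      validation_fn = function(response) {
        if (response != "10")
          return(llm_feedback("Respond with only the number 10."))
        TRUE
      }
    ) |>
    send_prompt(llm_provider_ollama(), verbose = FALSE)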

## End(Not run)

[Package tidyprompt version 0.0.1 Index]