quit_if {tidyprompt} | R Documentation
Make evaluation of a prompt stop if LLM gives a specific response
Description
This function wraps a tidyprompt()
object and ensures that
evaluation will stop if the LLM says it cannot answer the prompt. This is
useful in scenarios where the LLM may be unable to provide a valid
response to a prompt.
Usage
quit_if(
  prompt,
  quit_detect_regex = "NO ANSWER",
  instruction = paste0(
    "If you think that you cannot provide a valid answer, you must type:\n",
    "'NO ANSWER' (use no other characters)"
  ),
  success = TRUE,
  response_result = c("null", "llm_response", "regex_match")
)
Arguments
prompt
    A single string or a tidyprompt() object.

quit_detect_regex
    A regular expression to detect in the LLM's response which will cause
    the evaluation to stop. The default will detect the string "NO ANSWER"
    in the response.

instruction
    A string to be added to the prompt to instruct the LLM how to respond
    if it cannot answer the prompt. The default is
    "If you think that you cannot provide a valid answer, you must type:
    'NO ANSWER' (use no other characters)".
    This parameter can be set to NULL to add no instruction to the prompt.

success
    A logical indicating whether the evaluation should be considered
    successful when the quit_detect_regex is detected in the LLM's
    response.

response_result
    A character string indicating what should be returned when the
    quit_detect_regex is detected in the LLM's response. The default is
    "null", which will return NULL as the response result of send_prompt();
    "llm_response" will return the full LLM response; "regex_match" will
    return the part of the response matched by the quit_detect_regex.
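The sketch below illustrates how response_result changes what send_prompt()
returns when the quit sequence is detected; the prompt text is illustrative
and the exact output depends on the LLM provider used.

## Not run:
# With response_result = "regex_match", the matched text ("NO ANSWER")
# should be returned instead of NULL when the LLM cannot answer
"What did I dream about last night?" |>
  quit_if(response_result = "regex_match") |>
  send_prompt(llm_provider_ollama())
## End(Not run)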
Value
A tidyprompt()
with an added prompt_wrap()
which ensures
that the evaluation stops upon detection of the quit_detect_regex in the
LLM's response
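Because the return value is itself a tidyprompt(), quit_if() can be combined
with other prompt wraps; a minimal sketch (the wrap combination here is
illustrative):

# quit_if() returns a tidyprompt, so further wraps can be chained onto it
prompt <- "What is the capital of France?" |>
  quit_if() |>
  answer_as_text()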
See Also
Other pre_built_prompt_wraps: add_text(), answer_as_boolean(),
answer_as_integer(), answer_as_json(), answer_as_list(),
answer_as_named_list(), answer_as_regex_match(), answer_as_text(),
answer_by_chain_of_thought(), answer_by_react(), answer_using_r(),
answer_using_sql(), answer_using_tools(), prompt_wrap(),
set_system_prompt()
Other miscellaneous_prompt_wraps: add_text(), set_system_prompt()
Examples
## Not run:
"What the favourite food of my cat on Thursday mornings?" |>
quit_if() |>
send_prompt(llm_provider_ollama())
# --- Sending request to LLM provider (llama3.1:8b): ---
# What is the favourite food of my cat on Thursday mornings?
#
# If you think that you cannot provide a valid answer, you must type:
# 'NO ANSWER' (use no other characters)
# --- Receiving response from LLM provider: ---
# NO ANSWER
# NULL
## End(Not run)
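A further sketch with a custom quit sequence; the marker "CANNOT COMPLY" and
the instruction text below are illustrative choices, not package defaults.

## Not run:
"What will be tomorrow's winning lottery numbers?" |>
  quit_if(
    quit_detect_regex = "CANNOT COMPLY",
    instruction = paste0(
      "If you think that you cannot provide a valid answer, you must type:\n",
      "'CANNOT COMPLY' (use no other characters)"
    ),
    success = TRUE,
    # return the full LLM response instead of NULL upon detection
    response_result = "llm_response"
  ) |>
  send_prompt(llm_provider_ollama())
## End(Not run)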