chat_vllm {ellmer} R Documentation
Chat with a model hosted by vLLM
Description
vLLM is an open-source library that
provides an efficient and convenient server for large language models (LLMs). You can use
chat_vllm()
to connect to endpoints powered by vLLM.
Usage
chat_vllm(
base_url,
system_prompt = NULL,
model,
seed = NULL,
api_args = list(),
api_key = vllm_key(),
echo = NULL
)
models_vllm(base_url, api_key = vllm_key())
Arguments
base_url
The base URL of the vLLM endpoint.

system_prompt
A system prompt to set the behavior of the assistant.

model
The model to use for the chat. Use models_vllm() to see the models the endpoint offers.

seed
Optional integer seed that the model uses to try to make output more reproducible.

api_args
Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

api_key
API key to use for authentication. You generally should not supply this directly, but instead set the VLLM_API_KEY environment variable.

echo
Controls how much of the conversation is echoed to the console as it happens. Note this only affects the chat() method.
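The arguments above can be combined as in the following sketch, which assumes a vLLM server at the hypothetical URL "http://my-vllm.com" serving a hypothetical model name; adjust both for your deployment. It requires a running server, so it is not executable as-is.

```r
library(ellmer)

# vllm_key() reads the key from the VLLM_API_KEY environment variable,
# so set it once rather than passing api_key directly.
Sys.setenv(VLLM_API_KEY = "my-secret-key")

chat <- chat_vllm(
  base_url = "http://my-vllm.com",                  # hypothetical endpoint
  model = "meta-llama/Llama-3.1-8B-Instruct",       # hypothetical model name
  system_prompt = "You are a terse assistant.",
  seed = 42,                                        # best-effort reproducibility
  api_args = list(temperature = 0.2)                # merged into every request body
)
```

Extra entries in api_args are merged into the request body, so any generation parameter the vLLM server accepts (such as temperature) can be passed this way without ellmer needing to know about it.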
Value
A Chat object.
Examples
## Not run:
chat <- chat_vllm("http://my-vllm.com")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
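To discover which models an endpoint offers before starting a chat, models_vllm() can be queried with the same base URL (hypothetical here) and credentials; this also requires a running server.

```r
library(ellmer)

# List the models exposed by the vLLM server (hypothetical URL).
# Authentication uses vllm_key(), i.e. the VLLM_API_KEY environment variable.
models_vllm("http://my-vllm.com")
```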