chat_ollama {ellmer}    R Documentation
Chat with a local Ollama model
Description
To use chat_ollama(), first download and install Ollama. Then install some models, either from the command line (e.g. with ollama pull llama3.1) or within R using ollamar (e.g. ollamar::pull("llama3.1")).

This function is a lightweight wrapper around chat_openai() with the defaults tweaked for Ollama.
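For orientation, the whole flow looks roughly like this (a sketch, assuming Ollama itself is already installed and its local server is running; the model name is illustrative):

# Download a model from within R via the ollamar package.
ollamar::pull("llama3.1")

# Then start a chat against the local server.
chat <- chat_ollama(model = "llama3.1")
chat$chat("Hello!")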
Known limitations
- Tool calling is not supported with streaming (i.e. when echo is "text" or "all").

- Models can only use 2048 input tokens, and there's no way to get them to use more, except by creating a custom model with a different default (see the sketch after this list).

- Tool calling generally seems quite weak, at least with the models I have tried it with.
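The custom-model workaround for the 2048-token default can be scripted from R. This is only a sketch: it assumes the ollama CLI is on your PATH, and the "llama3.1-8k" name and 8192 context size are illustrative.

# Write a Modelfile that raises the default context window.
# num_ctx is an Ollama Modelfile parameter.
writeLines(
  c("FROM llama3.1",
    "PARAMETER num_ctx 8192"),
  "Modelfile"
)

# Register the custom model with the ollama CLI, then chat with it.
system2("ollama", c("create", "llama3.1-8k", "-f", "Modelfile"))
chat <- chat_ollama(model = "llama3.1-8k")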
Usage
chat_ollama(
  system_prompt = NULL,
  base_url = "http://localhost:11434",
  model,
  seed = NULL,
  api_args = list(),
  echo = NULL,
  api_key = NULL
)
models_ollama(base_url = "http://localhost:11434")
Arguments
system_prompt: A system prompt to set the behavior of the assistant.
base_url: The base URL to the endpoint; the default points to a local Ollama server (http://localhost:11434).
model: The model to use for the chat. Use models_ollama() to list the models you have installed.
seed: Optional integer seed that the model uses to try and make output more reproducible.
api_args: Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList() (see the sketch after this table).
echo: One of "none", "text", or "all", controlling how much output is echoed as it streams in. Note this only affects the chat() method.
api_key: Ollama doesn't require an API key for local usage, and in most cases you do not need to provide one. However, if you're accessing an Ollama instance hosted behind a reverse proxy or secured endpoint that enforces bearer-token authentication, you can set api_key to supply that token.
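To see how seed and api_args fit together, here is a sketch (the temperature field is illustrative; extra arguments are merged into the request body with modifyList()):

chat <- chat_ollama(
  model = "llama3.1",               # any model you have pulled
  seed = 42,                        # nudge output toward reproducibility
  api_args = list(temperature = 0)  # appended to the request body
)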
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_cortex_analyst(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_openai(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
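To discover which models you can pass to chat_ollama(), models_ollama() (see Usage) queries the local server. A sketch, assuming the server is running at the default base_url:

## Not run:
models_ollama()  # list models available on the local Ollama server
## End(Not run)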