To use chat_ollama(), first download and install Ollama. Then install some models, either from the command line (e.g. with ollama pull llama3.1) or within R using the ollamar package (e.g. ollamar::pull("llama3.1")).
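The setup steps above can be sketched in R. This is a minimal sketch, assuming Ollama is installed and its server is running locally on the default port; it uses models_ollama() (documented below) and ollamar::pull() (from the ollamar package mentioned above).

```r
library(ellmer)

# List the models already pulled on this machine
models_ollama()

# Pull a model from within R via the ollamar package
ollamar::pull("llama3.1")
```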
This function is a lightweight wrapper around chat_openai(), with the defaults tweaked for Ollama.
Known limitations:

- Tool calling is not supported with streaming (i.e. when echo is "text" or "all").
- Models default to a 2048-token input context, and there's no way to increase it except by creating a custom model with a different default.
- Tool calling generally seems quite weak, at least with the models I have tried it with.
Usage:

chat_ollama(
  system_prompt = NULL,
  base_url = "http://localhost:11434",
  model,
  seed = NULL,
  api_args = list(),
  echo = NULL,
  api_key = NULL
)

models_ollama(base_url = "http://localhost:11434")
Arguments:

system_prompt: A system prompt to set the behavior of the assistant.

base_url: The base URL to the endpoint; the default points to a local Ollama server (http://localhost:11434).

model: The model to use for the chat. Use models_ollama() to see all options.

seed: Optional integer seed that the model uses to try and make output more reproducible.

api_args: Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer using modifyList().
echo: One of the following options:

- "none": don't emit any output (default when running in a function).
- "text": echo text output as it streams in (default when running at the console).
- "all": echo all input and output.

Note this only affects the chat() method.
api_key: Ollama doesn't require an API key for local usage, so in most cases you do not need to provide one. However, if you're accessing an Ollama instance hosted behind a reverse proxy or secured endpoint that enforces bearer-token authentication, you can set api_key (or the OLLAMA_API_KEY environment variable).
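To illustrate how api_args is combined with the request body, here is a hedged sketch: modifyList() replaces entries whose names match and appends the rest, so user-supplied arguments override ellmer's defaults. The body and argument names below are illustrative, not ellmer's actual internals, and the proxy URL in the second snippet is hypothetical.

```r
# Illustrative request body (not ellmer's actual internal body)
body  <- list(model = "llama3.1", stream = TRUE)
# User-supplied api_args
extra <- list(stream = FALSE, temperature = 0)

# stream is overridden; temperature is appended
merged <- modifyList(body, extra)
str(merged)

# Connecting through a secured proxy (hypothetical URL):
# Sys.setenv(OLLAMA_API_KEY = "...")
chat <- chat_ollama(
  model = "llama3.1",
  base_url = "https://ollama.example.com",
  api_args = list(temperature = 0)
)
```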
Value:

A Chat object.
See also:

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_cortex_analyst(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_openai(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples:

chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")