A Chat is a sequence of user and assistant Turns sent
to a specific Provider. A Chat is a mutable R6 object that takes care of
managing the state associated with the chat; i.e. it records the messages
that you send to the server, and the messages that you receive back.
If you register a tool (i.e. an R function that the assistant can call on
your behalf), it also takes care of the tool loop.
You should generally not create this object yourself; instead, call chat_openai() or friends.
Value: A Chat object.

Methods:
new()
Usage: Chat$new(provider, system_prompt = NULL, echo = "none")
Arguments:
  provider: A provider object.
  system_prompt: System prompt to start the conversation with.
  echo: One of the following options:
    "none": don't emit any output (default when running in a function).
    "output": echo text and tool-calling output as it streams in (default when running at the console).
    "all": echo all input and output.
    Note that this only affects the chat() method. You can override the default by setting the ellmer_echo option.
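In practice you rarely call Chat$new() yourself; a provider helper builds the provider and forwards these arguments for you. A minimal sketch (the system prompt is illustrative, and an OpenAI API key is assumed to be available in your environment):

library(ellmer)

# chat_openai() constructs the provider and returns a Chat object,
# passing system_prompt and echo through to Chat$new().
chat <- chat_openai(
  system_prompt = "You are a terse assistant who answers in one sentence.",
  echo = "none"
)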
get_turns(): Retrieve the turns that have been sent and received so far (optionally starting with the system prompt, if any).
get_tokens(): A data frame with a tokens column that provides the number of input tokens used by user turns and the number of output tokens used by assistant turns.
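A brief sketch of inspecting that state after an exchange (the include_system_prompt argument is an assumption; check the method's signature in your installed version):

chat$chat("What's the capital of France?")

# All user and assistant turns recorded so far.
turns <- chat$get_turns(include_system_prompt = FALSE)
length(turns)

# Per-turn token usage; actual counts depend on the provider and model.
chat$get_tokens()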
last_turn(): The last turn returned by the assistant.
Usage: Chat$last_turn(role = c("assistant", "user", "system"))

chat(): Submit input to the chatbot, and return the response as a simple string (probably Markdown).
Arguments:
  ...: The input to send to the chatbot. Can be strings or images (see content_image_file() and content_image_url()).
  echo: Whether to emit the response to stdout as it is received. If NULL, the value of echo set when the chat object was created will be used.
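As a sketch, text and images can be mixed in a single call (the file path here is purely illustrative):

# Ask about a local image; content_image_url() works the same way for
# images hosted on the web.
chat$chat(
  "What is shown in this picture?",
  content_image_file("path/to/picture.png")
)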
chat_structured(): Extract structured data.
Arguments:
  ...: The input to send to the chatbot. This is typically the text you want to extract data from, but it can be omitted if the data is obvious from the existing conversation.
  type: A type specification for the extracted data. Should be created with a type_() function.
  echo: Whether to emit the response to stdout as it is received. Set to "text" to stream JSON data as it's generated (not supported by all providers).
  convert: Automatically convert from JSON lists to R data types using the schema. For example, this will turn arrays of objects into data frames and arrays of strings into a character vector.
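A minimal sketch of structured extraction, assuming a schema built from ellmer's type_*() helpers:

# Describe the shape of the data you want back.
person <- type_object(
  name = type_string(),
  age = type_integer()
)

# With convert = TRUE (the default), the JSON result is converted to
# ordinary R data structures, e.g. a named list here.
chat$chat_structured(
  "My name is Hadley and I'm 38 years old.",
  type = person
)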
chat_structured_async(): Extract structured data, asynchronously. Returns a promise that resolves to an object matching the type specification.
Arguments:
  ...: The input to send to the chatbot. This is typically the text you want to extract data from, but it can be omitted if the data is obvious from the existing conversation.
  type: A type specification for the extracted data. Should be created with a type_() function.
  echo: Whether to emit the response to stdout as it is received. Set to "text" to stream JSON data as it's generated (not supported by all providers).
  convert: Automatically convert from JSON lists to R data types using the schema. For example, this will turn arrays of objects into data frames and arrays of strings into a character vector.
chat_async(): Submit input to the chatbot and receive a promise that resolves with the response all at once. Returns a promise that resolves to a string (probably Markdown).
Usage: Chat$chat_async(..., tool_mode = c("concurrent", "sequential"))
Arguments:
  ...: The input to send to the chatbot. Can be strings or images.
  tool_mode: Whether tools should be invoked one at a time ("sequential") or concurrently ("concurrent"). Sequential mode is best for interactive applications, especially when a tool may involve an interactive user interface. Concurrent mode is the default and is best suited for automated scripts or non-interactive applications.
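A sketch of consuming the promise with the promises package (most useful inside Shiny or other async-aware code):

library(promises)

p <- chat$chat_async("Write a haiku about recursion")

# Runs once the provider's response has arrived.
then(p, onFulfilled = function(text) cat(text, "\n"))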
stream(): Submit input to the chatbot, returning streaming results. Returns a coro generator that yields strings. While iterating, the generator will block while waiting for more content from the chatbot.
Usage: Chat$stream(..., stream = c("text", "content"))
Arguments:
  ...: The input to send to the chatbot. Can be strings or images.
  stream: Whether the stream should yield only "text" or ellmer's rich content types. When stream = "content", stream() yields Content objects.
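A sketch of iterating the generator with coro, printing each chunk as it arrives:

library(coro)

# loop() drives the generator; each iteration blocks until the next
# chunk of text is available.
loop(for (chunk in chat$stream("Tell me a short story about a robot")) {
  cat(chunk)
})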
stream_async(): Submit input to the chatbot, returning asynchronously streaming results. Returns a coro async generator that yields string promises.
Arguments:
  ...: The input to send to the chatbot. Can be strings or images.
  tool_mode: Whether tools should be invoked one at a time ("sequential") or concurrently ("concurrent"). Sequential mode is best for interactive applications, especially when a tool may involve an interactive user interface. Concurrent mode is the default and is best suited for automated scripts or non-interactive applications.
  stream: Whether the stream should yield only "text" or ellmer's rich content types. When stream = "content", stream_async() yields Content objects.
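A sketch of consuming the async generator, assuming coro's async() and await_each() helpers:

library(coro)

stream_reply <- async(function() {
  # await_each() waits for each chunk's promise and yields its value.
  for (chunk in await_each(chat$stream_async("Explain recursion briefly"))) {
    cat(chunk)
  }
})

stream_reply()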
register_tool(): Register a tool (an R function) that the chatbot can use. Learn more in vignette("tool-calling").
Arguments:
  tool: A tool definition created by tool().
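A minimal sketch of defining and registering a zero-argument tool; the exact tool() signature has changed across ellmer versions, so check ?tool (here the function comes first, followed by a description):

# A tool the assistant can call to look up the current time.
tool_current_time <- tool(
  function() format(Sys.time(), tz = "UTC"),
  "Gets the current time in UTC."
)

chat$register_tool(tool_current_time)

# The Chat object now runs the tool loop on the assistant's behalf.
chat$chat("What time is it right now, in UTC?")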
register_tools(): Register a list of tools. Learn more in vignette("tool-calling").
Arguments:
  tools: A list of tool definitions created by tool().
set_tools(): Sets the available tools. For expert use only; most users should use register_tool().
Arguments:
  tools: A list of tool definitions created with tool().
on_tool_request(): Register a callback for a tool request event.
on_tool_result(): Register a callback for a tool result event.
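A sketch of attaching callbacks, assuming each callback receives the corresponding request or result object as its only argument (see the tool-calling vignette for the exact signatures):

# Log every tool invocation and its outcome.
chat$on_tool_request(function(request) {
  cat("Tool requested:\n")
  print(request)
})
chat$on_tool_result(function(result) {
  cat("Tool finished:\n")
  print(result)
})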
Examples:
chat <- chat_openai()
#> Using model = "gpt-4.1".
chat$chat("Tell me a funny joke")
#> Why did the scarecrow win an award?
#> Because he was outstanding in his field! 🌾😄