GitHub (via Azure) hosts a number of open source and OpenAI models. To access the GitHub model marketplace, you will need to apply for and be accepted into the beta access program. See https://github.com/marketplace/models for details.
This function is a lightweight wrapper around chat_openai() with the defaults tweaked for the GitHub model marketplace.
chat_github(
  system_prompt = NULL,
  base_url = "https://models.inference.ai.azure.com/",
  api_key = github_key(),
  model = NULL,
  seed = NULL,
  api_args = list(),
  echo = NULL
)
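For example, a minimal call might pin a model and set a system prompt (a sketch; the model name "gpt-4o-mini" is illustrative, so check the marketplace for the models available to your account):

chat <- chat_github(
  system_prompt = "You are a terse assistant.",
  model = "gpt-4o-mini"  # illustrative model name
)
chat$chat("What is the capital of France?")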
system_prompt: A system prompt to set the behavior of the assistant.
base_url: The base URL to the endpoint; the default uses the GitHub model marketplace endpoint hosted on Azure.
api_key: The API key to use for authentication. You generally should not supply this directly, but instead manage your GitHub credentials as described in https://usethis.r-lib.org/articles/git-credentials.html. For headless environments, this will also look in the GITHUB_PAT environment variable.
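In a headless environment, one option is to expose the token via the GITHUB_PAT environment variable before the first call. This is a sketch; the token value below is a placeholder, and storing the token in ~/.Renviron or a credential store is preferable to hard-coding it in a script:

# Placeholder token, shown only for illustration; prefer ~/.Renviron
Sys.setenv(GITHUB_PAT = "ghp_xxxxxxxxxxxxxxxx")
chat <- chat_github()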
model: The model to use for the chat (defaults to "gpt-4o"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.
seed: Optional integer seed that the model uses to try and make output more reproducible.
api_args: Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().
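For example, a sketch of passing an extra sampling parameter through api_args (the parameter name follows the OpenAI-compatible chat API and is illustrative):

chat <- chat_github(
  api_args = list(temperature = 0.2)  # merged into each request body via modifyList()
)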
echo: One of the following options:
"none": don't emit any output (default when running in a function).
"text": echo text output as it streams in (default when running at the console).
"all": echo all input and output.
Note this only affects the chat() method.
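For example, to see the full exchange while experimenting interactively (a sketch; the prompt is illustrative):

chat <- chat_github(echo = "all")
chat$chat("Explain what a p-value is in one sentence.")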
Returns a Chat object.
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_cortex_analyst(), chat_databricks(), chat_deepseek(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openrouter(), chat_perplexity(), chat_portkey()
# Not run:
chat <- chat_github()
chat$chat("Tell me three jokes about statisticians")