Interact with Claude AI models via the Anthropic API

Usage
claude_chat(
.llm,
.model = "claude-sonnet-4-20250514",
.max_tokens = 2048,
.temperature = NULL,
.top_k = NULL,
.top_p = NULL,
.metadata = NULL,
.stop_sequences = NULL,
.tools = NULL,
.json_schema = NULL,
.file_ids = NULL,
.api_url = "https://api.anthropic.com/",
.verbose = FALSE,
.max_tries = 3,
.timeout = 60,
.stream = FALSE,
.dry_run = FALSE,
.thinking = FALSE,
.thinking_budget = 1024
)

Arguments

.llm
An LLMMessage object containing the conversation history and system prompt.

.model
Character string specifying the Claude model version (default: "claude-sonnet-4-20250514").

.max_tokens
Integer specifying the maximum number of tokens in the response (default: 2048).

.temperature
Numeric between 0 and 1 controlling response randomness.

.top_k
Integer controlling diversity by limiting sampling to the top K tokens.

.top_p
Numeric between 0 and 1 for nucleus sampling.

.metadata
List of additional metadata to include with the request.

.stop_sequences
Character vector of sequences that will halt response generation.

.tools
List of additional tools or functions the model can use.

.json_schema
A schema to enforce an output structure.

.file_ids
Character vector of file IDs for files that were uploaded to Anthropic's servers.

.api_url
Base URL for the Anthropic API (default: "https://api.anthropic.com/").

.verbose
Logical; if TRUE, displays additional information about the API call (default: FALSE).

.max_tries
Integer specifying the maximum number of retries for the request (default: 3).

.timeout
Integer specifying the request timeout in seconds (default: 60).

.stream
Logical; if TRUE, streams the response piece by piece (default: FALSE).

.dry_run
Logical; if TRUE, returns the prepared request object without executing it (default: FALSE).

.thinking
Logical; if TRUE, enables Claude's thinking mode for complex reasoning tasks (default: FALSE).

.thinking_budget
Integer specifying the maximum number of tokens Claude can spend on thinking (default: 1024). Must be at least 1024.

Value

A new LLMMessage object containing the original messages plus Claude's response.
Examples

if (FALSE) { # \dontrun{
# Basic usage
msg <- llm_message("What is R programming?")
result <- claude_chat(msg)

# With custom parameters
result2 <- claude_chat(msg,
                       .temperature = 0.7,
                       .max_tokens = 1000)
} # }
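The .dry_run and .thinking arguments follow the same calling pattern. A sketch, assuming the llm_message() helper shown above and (for the live call) a valid Anthropic API key in the environment; exact output objects depend on the package version:

```r
# Inspect the prepared request object without sending it -- useful for
# debugging headers and the request body; no API key is contacted.
req <- claude_chat(llm_message("Summarise the mtcars dataset."),
                   .dry_run = TRUE)

# Enable extended thinking for a harder reasoning task. The thinking
# budget must be at least 1024 tokens, and .max_tokens should be set
# comfortably above it so the visible response is not starved of tokens.
msg <- llm_message("Prove that the sum of two odd numbers is even.")
result <- claude_chat(msg,
                      .thinking = TRUE,
                      .thinking_budget = 2048,
                      .max_tokens = 4096)
```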