R/api_gemini.R

send_gemini_batch.Rd

send_gemini_batch: Send a batch of messages to the Gemini API. Returns a named list (same as the input) with batch_id and json attributes.
send_gemini_batch(
.llms,
.model = "gemini-2.5-flash",
.temperature = NULL,
.max_output_tokens = NULL,
.top_p = NULL,
.top_k = NULL,
.presence_penalty = NULL,
.frequency_penalty = NULL,
.stop_sequences = NULL,
.safety_settings = NULL,
.json_schema = NULL,
.grounding_threshold = NULL,
.timeout = 120,
.dry_run = FALSE,
.max_tries = 3,
.display = "tidyllm_batch",
.id_prefix = "tidyllm_gemini_req_"
)

Arguments:

.llms                 List of LLMMessage objects (named or unnamed).
.model                The model identifier (default: "gemini-2.5-flash").
.temperature          Controls randomness (default: NULL, range: 0-2).
.max_output_tokens    Maximum tokens in the response (default: NULL).
.top_p                Nucleus sampling (default: NULL, range: 0-1).
.top_k                Diversity in token selection (default: NULL).
.presence_penalty     Penalizes new tokens (default: NULL, range: -2 to 2).
.frequency_penalty    Penalizes frequent tokens (default: NULL, range: -2 to 2).
.stop_sequences       Character vector of up to 5 stop sequences, or NULL (default: NULL).
.safety_settings      Optional list of safety settings (default: NULL).
.json_schema          Optional schema to enforce output structure (default: NULL).
.grounding_threshold  Optional grounding threshold (0-1) to enable Google Search grounding (default: NULL).
.timeout              Timeout in seconds (default: 120).
.dry_run              If TRUE, returns the constructed request instead of sending it (default: FALSE).
.max_tries            Maximum retry attempts (default: 3).
.display              Display name for this batch (default: "tidyllm_batch").
.id_prefix            Prefix for message IDs (default: "tidyllm_gemini_req_").

Value:

A named list of LLMMessage objects (same as the input) with attributes batch_id and json.
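Examples:

A minimal usage sketch. It assumes a valid Google API key is configured in the environment, that messages are built with tidyllm's llm_message() constructor, and that companion check_gemini_batch() / fetch_gemini_batch() helpers exist by analogy with tidyllm's batch functions for other providers; the .dry_run inspection step uses only parameters documented above.

    library(tidyllm)

    # Build a named list of LLMMessage objects; the names are preserved
    # in the result, so each reply can be matched back to its prompt.
    msgs <- list(
      capital = llm_message("What is the capital of France?"),
      planet  = llm_message("Which planet is closest to the sun?")
    )

    # Inspect the constructed request without sending anything:
    req <- send_gemini_batch(msgs, .dry_run = TRUE)

    # Send the batch; the returned list carries batch_id and json attributes.
    batch <- send_gemini_batch(msgs, .model = "gemini-2.5-flash")
    attr(batch, "batch_id")

    # Later: poll for completion and fetch results (helper names assumed
    # by analogy with tidyllm's other batch APIs).
    check_gemini_batch(batch)
    results <- fetch_gemini_batch(batch)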