
Gemini structured JSON envelope with -o json

Gemini CLI's `-o json` wraps the response in a metadata envelope. Pair with jq for clean machine-readable extraction.

Setup
  • npm install -g @google/gemini-cli (or: brew install gemini-cli)
  • GEMINI_API_KEY set in your environment
Cost per run: <$0.01
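Before spending a request, the setup above can be sanity-checked. A minimal sketch — the `preflight` helper name is mine, not part of the Gemini CLI:

```shell
# Hypothetical helper: verify required commands exist and the API key is set.
preflight() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd"; return 1; }
  done
  [ -n "${GEMINI_API_KEY:-}" ] || { echo "GEMINI_API_KEY not set"; return 1; }
  echo "ok"
}

# preflight curl jq gemini   # prints "ok" when everything is in place
```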
The one-liner
$ curl -s "https://en.wikipedia.org/api/rest_v1/page/summary/Transformer_(deep_learning_architecture)" \
  | jq -r '.extract' \
  | gemini -m gemini-3.1-pro-preview \
           -o json \
           -p 'Return JSON: {definition_for_kid: string, one_use_case: string, surprising_fact: string}' \
  | jq -r '.response'
What each stage does
  1. [01] curl … wikipedia.org/api/rest_v1/page/summary/Transformer_…
    Wikipedia REST summary — free, public, no key.
  2. [02] jq -r '.extract'
    Pluck the lead paragraph as raw text — gemini's stdin context.
  3. [03] gemini -m gemini-3.1-pro-preview
    Explicit model selection. The CLI defaults change over time; pin the model in scripts. Also valid: `gemini-3-flash`, `gemini-3.1-flash-preview`.
  4. [04] gemini -o json
    Wrap response in a JSON envelope: {response, stats: {tokens, latency_ms, model}, errors}. Without this, you get raw text on stdout.
  5. [05] gemini -p 'Return JSON: {…}'
    Headless prompt. Asking for JSON inside the prompt makes the model output JSON; -o json wraps THAT in another JSON layer with metadata.
  6. [06] jq -r '.response'
    Extract just the model's response from the envelope.
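To see the two JSON layers concretely, here is a hand-built envelope in the {response, stats, errors} shape described above (stand-in values; real contents will differ, and if your model wraps its reply in markdown fences you would strip those before parsing):

```shell
# Stand-in envelope mimicking the documented {response, stats, errors} shape.
envelope='{"response":"{\"summary\":\"attention mechanism\"}",
           "stats":{"tokens":42,"latency_ms":810,"model":"gemini-3.1-pro-preview"},
           "errors":[]}'

echo "$envelope" | jq -r '.stats.latency_ms'               # envelope metadata: 810
echo "$envelope" | jq -r '.response'                       # inner text, itself JSON
echo "$envelope" | jq -r '.response | fromjson | .summary' # parse the inner layer
```

jq's built-in `fromjson` parses the inner string in one pass, so you rarely need a second jq invocation.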
Expected output (sample)
{
  "definition_for_kid": "A transformer is a kind of computer brain that's really good at predicting the next word in a sentence.",
  "one_use_case": "ChatGPT and Claude use them to write text.",
  "surprising_fact": "They were invented for translating languages, not for writing — that came later by accident."
}
Caveats & tips
  • Stream mode: swap `-o json` for `-o stream-json` to get token-by-token JSONL.
  • Free tier limit: ~1500 req/day on the free Gemini API key.
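The envelope's errors field makes the pipeline easy to harden. A sketch of a defensive wrapper — the `ask` name and the exact shape of errors are assumptions; check your CLI version:

```shell
# Hypothetical wrapper: run a prompt, surface envelope errors, emit .response.
ask() {
  local out
  out=$(gemini -m gemini-3.1-pro-preview -o json -p "$1") || return 1
  if [ "$(jq '.errors | length' <<<"$out")" -gt 0 ]; then
    jq '.errors' <<<"$out" >&2   # show what went wrong
    return 1
  fi
  jq -r '.response' <<<"$out"
}
```

Downstream scripts can then rely on a nonzero exit status instead of grepping stderr.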