#006 · HN front page → Gemini 3.1 Pro analysis

Combines recipe #1 (HN top stories) with an LLM CLI to identify the under-discussed story of the day.

Setup
  • npm install -g @google/gemini-cli (or: brew install gemini-cli)
  • GEMINI_API_KEY in env (free tier: ~1500 req/day)
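Before the first run, a quick preflight saves a wasted invocation. A minimal sketch; the `check` helper is mine, not part of the recipe:

```shell
# Preflight sketch: confirm each required tool is on PATH and the key is
# exported. "check" is a throwaway helper, not part of the one-liner.
check() { command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "missing: $1"; }
check curl
check jq
check gemini
[ -n "${GEMINI_API_KEY:-}" ] && echo "ok: GEMINI_API_KEY" || echo "missing: GEMINI_API_KEY"
```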
Cost per run
<$0.01
The one-liner
$ curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" \
  | jq -r '.[0:30][]' \
  | xargs -P 10 -I {} curl -s "https://hacker-news.firebaseio.com/v0/item/{}.json" \
  | jq -s -r 'sort_by(-.score) | .[0:10] | .[] | "- [\(.score)] \(.title) — \(.url // "discussion")"' \
  | gemini -m gemini-3.1-pro-preview -p \
      "These are today's top HN stories. Identify the one that's quietly important but under-discussed. Justify in 5 sentences an outsider understands."
What each stage does
  1. curl … topstories.json
     Same as recipe #1 — fetch the HN top story IDs.
  2. jq -r '.[0:30][]'
     Slice to the first 30 and emit the IDs one per line.
  3. xargs -P 10 … item/{}.json
     Fetch each story's JSON in parallel (up to 10 at a time).
  4. jq -s -r 'sort_by(-.score) | .[0:10] | .[] | …'
     Slurp, sort by score descending, format the top 10 as a bullet list — this is the LLM's input.
  5. gemini -m gemini-3.1-pro-preview -p "…"
     -p = headless mode (no interactive REPL). The prompt is the flag value; stdin (the bullet list) is appended automatically.
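Stages 3 and 4 can be dry-run offline with hypothetical data (fake IDs and hand-written items, no network), which is handy for checking the jq format string before spending a request:

```shell
# Stage 3 shape: one command per input line, up to 10 in parallel.
# (echo stands in for the real curl; parallel output order may vary, so sort.)
stage3=$(printf '%s\n' 101 102 103 \
  | xargs -P 10 -I {} echo "GET item/{}.json" | sort)

# Stage 4: slurp items, sort by score descending, format the top bullets.
# The two JSON objects here are made up, not real HN stories.
stage4=$(printf '%s\n' \
  '{"score":120,"title":"A","url":"https://a.example"}' \
  '{"score":388,"title":"B"}' \
  | jq -s -r 'sort_by(-.score) | .[0:10] | .[] | "- [\(.score)] \(.title) — \(.url // "discussion")"')

printf '%s\n' "$stage3" "$stage4"
```

Note how the item without a url falls back to "discussion", exactly what the real pipeline does for Ask HN posts.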
Expected output (sample)
The under-discussed story is "The hidden cost of LLM batching" (score 388). While the M5 Pro benchmarks and the Rust vector DB are getting the upvotes, this one quietly answers a question every team running production LLMs hits within their first month...
Caveats & tips
  • Swap `gemini -m gemini-3.1-pro-preview` for `claude -p` (subscription auth) or `llm -m groq-llama-3.3-70b-versatile` (faster, via `llm install llm-groq`).
  • If you don't have ANTHROPIC_API_KEY exported, don't add --bare to the claude invocation — it will demand the key.
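  • Whichever model you swap in, it helps to audit the exact prompt the CLI received: tee the formatted list to a file on its way in. A sketch with a stand-in sink instead of a real model call:

```shell
# Sketch: tee passes stdin through unchanged while also writing it to a
# file, so the model still gets the list and you keep a copy to inspect.
# "cat" stands in here for the gemini/claude/llm invocation.
printf '%s\n' "- [388] example story — discussion" \
  | tee /tmp/hn-top10.txt \
  | cat >/dev/null
cat /tmp/hn-top10.txt
```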