LLM persona via --append-system-prompt
Inject style/role/safety rules at the system level — keeps your user prompt focused on the data, not on persona-prefixing.
Setup
- `claude /login` OR `export ANTHROPIC_API_KEY=sk-…`
Cost per run
<$0.01
The one-liner
$ curl -s -A "oneliner101/1.0" \
"https://www.reddit.com/r/singularity/top.json?limit=15&t=week" \
| jq -r '.data.children[].data | "[\(.score)] \(.title)"' \
| claude -p \
--append-system-prompt "You are a senior analyst at trends101. You reject hype. Cite specific titles. Output ≤120 words." \
  "Weekly r/singularity pulse."
What each stage does
- [01] curl: `curl … reddit.com/r/singularity/top.json?…&t=week` fetches Reddit's JSON listing for the weekly window.
- [02] jq: `jq -r '.data.children[].data | "[\(.score)] \(.title)"'` emits compact `[score] title` lines.
- [03] claude: `--append-system-prompt "You are a senior analyst …"` adds to the default system prompt without replacing it. Use this 90% of the time. `--system-prompt` replaces it entirely (risky: you lose Claude's safety defaults).
- [04] claude: `"Weekly r/singularity pulse."` keeps the user prompt minimal; it just states the task. Persona, style, and length rules live in the system prompt, not muddled into the user message.
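The jq stage can be checked offline against a canned payload. The field paths (`.data.children[].data.score` / `.title`) match Reddit's listing schema; the scores and titles below are invented for illustration:

```shell
# Hypothetical sample mirroring Reddit's listing shape (values made up).
sample='{"data":{"children":[
  {"data":{"score":612,"title":"M5 Pro vs H100"}},
  {"data":{"score":481,"title":"Vector search is dead"}}]}}'

# Same filter as the one-liner: one "[score] title" line per post.
printf '%s' "$sample" | jq -r '.data.children[].data | "[\(.score)] \(.title)"'
# [612] M5 Pro vs H100
# [481] Vector search is dead
```

Piping the real `curl` output through the same filter produces the lines `claude` sees.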
Expected output (sample)
Three threads dominate: (1) the M5 Pro benchmarks are getting heat for cherry-picked workloads — "M5 Pro vs H100" is anchored at score 612 but has 230 dissenting comments. (2) The agentic-RAG vs vanilla-RAG debate spilled out of r/MachineLearning, with "Vector search is dead" at 481. (3) Quiet enthusiasm for local-first ML — "Llama 4 on a $400 mini-PC" at 312 is the most-saved post of the week. Skepticism rising overall.
Caveats & tips
- Use `--append-system-prompt-file ./prompts/persona.md` for longer system prompts.
- System-prompt content is cached server-side, so repeated runs with the same persona get a better cache hit rate.
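A minimal sketch of the persona-file tip, using the `--append-system-prompt-file` flag named above. The file path and persona text are the ones from this page; the `claude` invocation is shown as a comment since it needs credentials to run:

```shell
# Keep the persona in a versioned file instead of inlining it on every run.
mkdir -p prompts
cat > prompts/persona.md <<'EOF'
You are a senior analyst at trends101.
You reject hype. Cite specific titles. Output ≤120 words.
EOF

# Same pipeline as the one-liner, persona now loaded from disk:
#   … | claude -p --append-system-prompt-file ./prompts/persona.md \
#         "Weekly r/singularity pulse."
```

Keeping the persona stable across runs also plays to the server-side caching noted above.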