Great to see you back and blogging. It's always a pleasure to read your thoughts; they're deep and insightful.
I especially love the way you write about your personal motivators:
- Make changes more intentional, because you want to own them
- The vibe of creating something from scratch, which reminds you of the early DevOps days.
Thanks for the list of MCPs, you inspired me to try them out! :)
Regarding AI slop: regardless of the tools, you still capture requirements, right? Do you still review requirements to validate the AI's output, do peer review, and share knowledge ... right?
Finally, hope the question is not too sensitive, just out of curiosity: how much does this setup cost? I mean, at least a ballpark. Perhaps it deserves a separate post on ROI.
Keep having fun!
Went and read up a little on this because I'm pretty interested for my own work: https://docs.anthropic.com/en/docs/claude-code/sub-agents
I think I'll explore creating a subagent: I'd give it a prompt that points it at my application structure overview doc (plus others if/when I get around to creating them), auto-grant it some non-mutating git commands like `git diff`, and set it up to take a branch name that it diffs against `main` to see if docs need to be updated. A rough sketch of what I'm imagining is below.
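From what I read in those docs, sub-agents are markdown files in `~/.claude/agents/` with YAML frontmatter. The agent name and doc path here are made up, and I haven't verified whether the `Bash(...)` scoping syntax that slash commands use also works in the agent `tools` field, so treat this as unchecked:
```
---
name: doc-sync
description: Given a branch name, diff it against main and flag context docs that need updating
tools: Read, Grep, Bash
---
You will be given a branch name. Run `git diff main...<branch>` to see what
changed on the branch. Read docs/application-structure-overview.md and decide
whether any part of it is now stale. Report the sections that need updating
as a numbered list; do not modify anything yourself.
```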
Went with a slash command. This has been working for me recently. 'Another colleague' ends up being me.
```
---
argument-hint: [PR-number]
description: Perform a code review of a GitHub pull request
allowed-tools: Bash(gh pr:*), Bash(backlog:*)
model: claude-opus-4-1
---
Use the GitHub CLI (`gh`) to view the GitHub PR #$1. Your goal is to critique how changes in this pull request fit into the larger project.
Review all additions or changes both for code quality as well as adherence to existing project conventions and patterns. Think deeply through things like:
- Does a new method belong at the level of the application where it was added?
- Is an addition at the correct level of abstraction?
- Is a new parameter name consistent with naming elsewhere in the project?
Prepare an assessment for discussion with another colleague. Use numbering for critiques so conversation about specific points can be referred to by number for faster and more exact referencing.
```
I invoke that with `/pr-review <pr number>` and off to the races. Telling CC to identify things in a numbered list has been a long-time helpful prompting trick for me. I can reply with minimal typing and full specificity that way: 'yeah 8 and 4 make sense. do just those'.
Another workflow thing with this: I often find that during implementation, I change my mind slightly about the approach. So a common refrain after I finish a backlog ticket is: 'think hard about task <next task id> and update it with the current changes from the completed ticket in mind. <next task id> should have sufficient context to be picked up by a different developer.'
Then `/clear` to reset the context and `implement <next task id>`.
I've been thinking about making a canned prompt in some way that I can do an `@` command with for this sort of cleanup.
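A sketch of how that cleanup prompt could look as a slash command, wired the same way as my `/pr-review` one (everything here is hypothetical, not something I've built yet):
```
---
argument-hint: [completed-task-id] [next-task-id]
description: Refresh the next backlog task after finishing the current one
allowed-tools: Bash(backlog:*), Bash(git diff:*)
---
Think hard about task $2 and update it with the changes from completed task
$1 in mind. $2 should have sufficient context to be picked up by a different
developer.
```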
Oh, also, another handy thing I've added recently: a statusline at the bottom of my CC session that shows the current model plus the cost I've racked up. You can use the `@agent-statusline-setup` agent that ships with new claude versions to help you add it, but this is what I use:
```
#!/bin/bash
# Read JSON input from stdin
input=$(cat)
# Debug: Log the input to a file for inspection (uncomment to debug)
# echo "$input" > /tmp/statusline_debug.json
# Example shape as of Claude 1.0.96 and 2025/08/28:
# {
# "session_id": "2b6b04b2-2a9a-482b-aea2-9e0e7d8b125f",
# "transcript_path": "/Users/lirum/.claude/projects/-Users-lirum-projects-vcto-asl-qb-api/2b6b04b2-2a9a-482b-aea2-9e0e7d8b125f.jsonl",
# "cwd": "/Users/lirum/projects/vcto/asl/qb-api",
# "model": {
# "id": "claude-sonnet-4-20250514",
# "display_name": "Sonnet 4"
# },
# "workspace": {
# "current_dir": "/Users/lirum/projects/vcto/asl/qb-api",
# "project_dir": "/Users/lirum/projects/vcto/asl/qb-api"
# },
# "version": "1.0.96",
# "output_style": {
# "name": "default"
# },
# "cost": {
# "total_cost_usd": 0.03645105,
# "total_duration_ms": 8464,
# "total_api_duration_ms": 6453,
# "total_lines_added": 0,
# "total_lines_removed": 0
# },
# "exceeds_200k_tokens": false
# }
# Extract values using jq
MODEL_DISPLAY=$(echo "$input" | jq -r '.model.display_name')
TOTAL_COST=$(echo "$input" | jq -r '.cost.total_cost_usd')
echo "[$MODEL_DISPLAY] || \$$(printf "%.5f" "$TOTAL_COST")"
```
Gives me a little readout like:
```
[Sonnet 4] || $10.48454
```
That sits at the bottom of all my CC sessions. I keep that example-shape comment in the script because I couldn't find the actual shape sent to this hook documented in Anthropic's docs, and the statusline agent was hallucinating attributes that weren't actually present on the object passed to jq.
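For anyone wiring this up by hand instead of via the agent: the script gets registered in `~/.claude/settings.json`. This is the shape that works for me (the script path is just wherever you saved it):
```
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline.sh"
  }
}
```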
A tool I've been using for about 2 weeks now (in addition to my usual task management) is `backlog`: https://github.com/MrLesk/Backlog.md . A CLI-operated kanban board is nothing new -- this is entirely about a CLI kanban board _that is intended to be used by CC_. I hardly ever interact with the `backlog` CLI itself, other than running `backlog browser` to glance at individual tasks in the web-based GUI.
I've often found that JIRA-type tickets written for humans are just too broad for these quick-hit tasks, and if I try to make more granular, small feature tickets, it ends up noisy for the other devs.
But I want _something_ that can track acceptance criteria, descriptions, ticket context, etc. So I want a ticketing system that's invisible to the rest of my team (at least at the level of granularity I feel like I need).
A workflow I've been doing a fair bit of: take a JIRA/human-level feature ticket, then locally decompose it into multiple `backlog` tickets with Opus. Then I'll go after each ticket with Sonnet, review the work, and make a commit if I'm happy with it. Then on to the next ticket. A human/JIRA ticket often ends up being 2-4 little backlog tickets and thus 2-4 git commits.
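The decomposition step boils down to Opus emitting a handful of commands like these (the task titles here are invented, and I'm going from memory on the `-d` flag -- check `backlog task create --help`, since I mostly let CC drive this part):
```
# one JIRA-level ticket becomes a few small, CC-sized tasks
backlog task create "Add pagination params to orders endpoint" -d "AC: page/size query params, default size 25"
backlog task create "Push pagination down into the orders repository" -d "AC: offset/limit applied in the query, not in memory"
# sanity-check the board in the web GUI
backlog browser
```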
Once I'm done I'll rebase those intermediate commits together to be a more cohesive addition and push a PR.
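The rebase step is nothing fancy; assuming the branch was cut from `main`, either of these gets me there:
```
# interactively squash the 2-4 ticket commits into one cohesive commit
git rebase -i main
# or collapse everything since the branch point into a single new commit
git reset --soft $(git merge-base main HEAD)
git commit
```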
I've been using this workflow for ~2 weeks now. It's been allowing cheaper/faster/more recoverable CC sessions since the backlog board serves as a focused 'memory' for CC.
I always read your posts as soon as they come out. Glad to see another one after a while.
I also use a similar workflow. Something I've been considering recently is setting up a CC agent whose job is entirely to update the context docs I keep after each new PR. The way I've been thinking about doing this is to give the agent a canned prompt that has it run git diff between my current branch state and main, then consider whether my application overview/repo organization/tool listing doc/whatever needs to be updated. Not sure yet; I may set it up as a CI step instead.
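If I go the canned-prompt route rather than CI, I'd expect it to look roughly like my `/pr-review` command (this is a sketch only; the doc filenames are made up):
```
---
argument-hint: [branch-name]
description: Check whether context docs are stale after changes on a branch
allowed-tools: Bash(git diff:*)
---
Run `git diff main...$1` and review the changes. Then read
docs/application-overview.md and docs/repo-organization.md and report, as a
numbered list, any sections those changes have made stale.
```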
Wanted to update -- Claude Code shipped a `/review` slash command into their mainline tool about three weeks ago. It's a bit noisier than my own, but I've already dropped mine and I'm using theirs at this point.
Found it. Feels like the idiomatic Claude Code way to have canned prompts is via slash commands more than sub-agents.
Anyways, this is a `~/.claude/commands/pr-review.md` slash command I have now. I invoke it with `/pr-review <pr number>`:
```
---
argument-hint: [PR-number]
description: Perform a code review of a GitHub pull request
allowed-tools: Bash(gh pr:*), Bash(backlog:*)
model: claude-opus-4-1
---
Use the GitHub CLI (`gh`) to view the GitHub PR #$1. Your goal is to critique how changes in this pull request fit into the larger project.
Review all additions or changes both for code quality as well as adherence to existing project conventions and patterns. Think deeply through things like:
- Does a new method belong at the level of the application where it was added?
- Is an addition at the correct level of abstraction?
- Is a new parameter name consistent with naming elsewhere in the project?
Prepare an assessment for discussion with another colleague. Use numbering for critiques so conversation about specific points can be referred to by number for faster and more exact referencing.
```
^ 'Another colleague' is me. I've been using this as a 'pre-review' before I push a PR to real people. Most of the time I ignore what it brings up, but the numbered list of items lets me respond with something like, 'yeah 6 and 9 make sense. do that.'
lol nice. OpenAI's `codex` CLI added this as a feature in the last day: https://github.com/openai/codex/pull/3961
It's exactly how I use the `/pr-review` slash command, except it can more flexibly take a branch name or git commit or whatnot. Nice.
I continue to use that slash command a few times per day so it definitely makes sense to me.
Steve, so lovely to see this post! Interestingly, my processes for using AI in non-technical, education-focused work mirror some of your own, albeit with very different implementation tools. Thank you for sharing! ~ Meaghan D.