Chamber 🏰 of Tech Secrets #55: Issue-Driven Development
From Spec-Driven to Issue-Driven Development
The State of AI Coding in the Chamber 🏰 of Tech Secrets
Over the past 4+ months, I have continued to leverage AI coding tools at an often feverish pace, and I continue to learn and refine my process for maximizing productivity and minimizing tail-chasing and frustration. While frustrations do occur (see my scornful LinkedIn posts), I still find these tools to provide a meaningful productivity boost. 🚀
Claude Code remains an indispensable part of my personal process for…
Building REST APIs (including database SQL, DDL)
Designing and developing front-end web apps
Synthetic Systems (like a synthetic PLC I made recently for testing)
Code Reviews
Managing non-prod infrastructure deployments (iteration through errors)
Doing 60-80% of the scaffolding for edge projects
Changes to my process
Back in July, I wrote about my spec-driven development process for AI coding. That process was designed to maximize productivity and minimize the tail-chasing that comes from one-shotting apps or features. You can read that here.
Since my last post, the amount of AI code projects I am working on has 5x’d, resulting in some new learnings and meaningful refinement of my processes.
In my previous post, I wrote about how I used a specification document (architecture.md) and a list of tasks (tasks.md) that needed to be read into context and executed by an AI coding agent.
While this worked out great for a lot of projects, there were also some issues I bumped into…
Design drift: No matter how thorough a design, there will always be some changes as you iterate through execution. Practically, remembering to keep architecture.md up to date through these iterations requires extraordinary discipline. Just one miss and your design will start to drift over time, which poses problems when the old approaches continue to be read in for planning and executing future tasks.

Attention Skew: LLMs process all tokens in a sequence (the context window) on every request, calculate the best responses, and return them to the user. A heavy and broad specification has the potential to skew the model's quality of output on niche task execution, resulting in subpar intent following or poor design outputs. Reading the entire architecture document every time is not ideal. Reading all tasks in a tasks.md is not ideal either. Which leads us to…
Context Window Management: Quality in → quality out. “In” is everything in your context window, so managing it to include only the essential is very important. It is also essential for personal sanity. AI tools like Claude Code perform auto-compaction of the context window as it nears its limit, which is a pain to wait for when you are iterating through an issue. Furthermore, during compaction, most of your “conversation history” is lost and the agent will have to rediscover basic things like where your local database is (again!). I have found that focusing on just enough detail in the context window (with frequent `/clear` commands) works best. Using the minimal MCP tools (not just servers…tools) can also save meaningful space in the context window (every tool comes with some token overhead).

What can be done about these issues?
Issue-Driven Development
To address these challenges, I have shifted to what I call “issue-driven development”. The guts of the process are similar but in practice I find this approach to yield better results and to be cognitively easier to manage.
In an issue-driven development approach, we shift from managing architecture.md and tasks.md files to managing only rich tasks which are stored in an “issue”. In my case, these are GitHub issues, but you can use any issue tracking system you wish.
Rules Files
The first thing we need to ensure is that our agent remembers what we are doing in our project and what the basic structures of our project are. Per the context window management idea above, we want to keep this minimal.
I have found rules files (CLAUDE.md, .cursor/rules, etc.) to be a good way to inject minimal "memories" into the context window so the agent does not waste tokens and time re-understanding the project after a /clear or /compact occurs.
I like to keep the rules files minimal and focus on…
what is the project doing
what is the tech stack
where are the basic things that need to be known / reused (database, env vars, etc)
what are the basic guidelines for development
As your project develops, you will likely have additional things you want to add to your rules file to help the coding agent efficiently navigate the project. That’s great, just keep it to the essential.
Here is a sample from a recent project.
# CLAUDE.md
## Project Overview
What We’re Building...
A simple relationship management system for managing farmer relationships and measuring community impact over time.
## Project Structure
/crm-api - contains the Golang REST code
/crm-ui - contains the frontend React/Next.js code
## Database
Database: `oicrmdb` running locally in Docker. Find ports and credentials in /crm-api/.env.example.
All migrations are in /crm-api/migrations and should follow the convention 001_overview_of_change.sql. Sample data scripts must follow the same convention and be stored in /crm-api/migrations/sample_data.
Ports:
* REST API runs on 8099
* UI runs on 3039
## Tech Stack
* Frontend: Next.js with TypeScript, Tailwind CSS + shadcn/ui components
* Backend: Golang REST API using standard HTTP library (no frameworks)
* Database: PostgreSQL with direct SQL queries (no ORM)
* Auth: OIDC via Keycloak for authentication at https://...
## Development Philosophy
* Verbosity and cleverness is bad, simplicity is good
* Idiomatic, simple Go code with minimal dependencies
* Behavior-focused testing over code coverage metrics
* Minimize mocks in tests - prefer integration tests where practical
* API abstracts storage implementation from UI
* Small, incremental changes that can be developed and tested independently
* Reuse UI components (data tables, etc)
* Never disable auth, TLS, or do other similar things to "get it working". Always work the problem thoroughly until completion

I always hand-roll this file to keep it minimal and specific, but may use an AI agent to update it later.
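The migration naming convention described in the sample rules file (`001_overview_of_change.sql`) is the kind of rule an agent will occasionally drift from, so it can be worth enforcing mechanically. A minimal sketch in Go; the exact regular expression and the example filenames are my assumptions, not from the original project:

```go
package main

import (
	"fmt"
	"regexp"
)

// migrationName encodes the convention from the rules file:
// a zero-padded sequence number, an underscore, and a short
// snake_case description, ending in .sql.
var migrationName = regexp.MustCompile(`^\d{3}_[a-z0-9]+(_[a-z0-9]+)*\.sql$`)

// validMigration reports whether a filename follows the convention.
func validMigration(name string) bool {
	return migrationName.MatchString(name)
}

func main() {
	for _, n := range []string{"001_create_farmers.sql", "create_farmers.sql"} {
		fmt.Printf("%s valid=%v\n", n, validMigration(n))
	}
}
```

A check like this could run in CI or as a pre-commit hook so convention violations surface before review.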
Requirements Development
How do issues get developed and stored?
As in my prior approach, I usually start by thinking through a design with a model… typically Opus 4.1 via Claude Desktop for me, but use whatever you want. There is no reason not to do this with Claude Code either (it can write files more easily and reference other local projects for patterns).
I still like to create an architecture.md file as an output but I like to keep it minimal.
In light of that architecture context (in the 1M token window w/ Opus), I create a list of tasks or issues. I have the GitHub MCP Server installed with Claude Desktop and Claude Code (using a PAT scoped down to only what I need). Once I am happy with the design and list of tasks, I ask Claude to load these as new GitHub issues.
Prompt: I like this design. Please take each of these tasks and create a GitHub issue for them one-by-one. Add them to the project named "<project name>".

Note: I say "one-by-one" because batch loads never seem to work and I want to avoid wasting my time and tokens.
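The MCP server handles this for the author, but the underlying "one-by-one" loop is just sequential calls to the GitHub REST API's create-an-issue endpoint. A sketch in Go under those assumptions; the owner, repo, and task titles are placeholders, and the token comes from a GITHUB_TOKEN environment variable:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// newIssueRequest builds a POST to the GitHub REST API endpoint
// for creating an issue: /repos/{owner}/{repo}/issues.
func newIssueRequest(owner, repo, title, body string) (*http.Request, error) {
	payload, err := json.Marshal(map[string]string{"title": title, "body": body})
	if err != nil {
		return nil, err
	}
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/issues", owner, repo)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GITHUB_TOKEN"))
	return req, nil
}

func main() {
	// One request per task, sent sequentially ("one-by-one").
	tasks := []string{"Add farmers table migration", "Wire up /farmers endpoint"}
	for _, t := range tasks {
		req, err := newIssueRequest("my-org", "crm-api", t, "See architecture.md for context.")
		if err != nil {
			panic(err)
		}
		fmt.Println(req.Method, req.URL.String())
		// http.DefaultClient.Do(req) would actually create the issue.
	}
}
```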
Now we have a set of baseline issues we can work with and a visual on what we are building and where we are (again, this would also work with Jira, Asana, Trello, etc., provided they have an MCP server).
In all of this, what we are trying to achieve is the composition of a perfect task that carries all necessary architectural context, all necessary outcome context, and all necessary “how we work” context without being overly complicated or verbose, such that the task can be executed successfully from a single issue. This is hard to get right across a large number of tasks, so in practice, I have another step I typically do.
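To make the idea of a "perfect task" concrete, here is a hypothetical example of what such a rich issue body might look like; the feature, endpoint, and details are invented for illustration, but the sections mirror the architectural, outcome, and "how we work" context described above:

```markdown
## Task: Add farmer contact history endpoint (hypothetical example)

### Outcome
GET /farmers/{id}/contacts returns that farmer's contact history, newest first.

### Architectural context
- Handler in /crm-api following the existing net/http patterns (no frameworks)
- Direct SQL against the database; add a new migration if schema changes are needed
- The API keeps storage details hidden from the UI

### How we work
- Behavioral tests first; minimize mocks
- New branch + Pull Request; move the issue to "In Review" when done
```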
Prompt in Plan Mode: Pull issue #107, read it thoroughly, and make a plan for how to implement it in light of our architecture (see architecture.md).
<Review the Plan>
Prompt: Please update issue #107 with this plan and move it to a state of "Ready".

The most important thing in this phase is knowing what you want to build and how you want to build it. Precision really matters.
Task Execution
Assuming we get the requirements stage right and have a robust task, execution becomes very straightforward.
Prompt: Pull issue #107 and read it thoroughly. Write behavioral tests first and write code until they pass. Do the work in a new branch, execute the task completely, and then push the changes and create a Pull Request. Move the task to "In Review" when complete.

If you wish, you can avoid the dual issue pull here and just implement the plan directly. I like to log the plan back to the issue for posterity (why did I make that decision again?).
Code Reviews + Additional Issues
We must, of course, review the code generated by our agents.
There are multiple options for Code Reviews. For a live production system, you likely want to review every one of these features with both an AI code review and a human review. If you are building a new system from scratch and are working through a lot of issues in a short time, you might be okay with batch code reviewing (AI + human again). For enterprises, go with the former.
For AI code reviews, I prompt a few times (in parallel) with different perspectives.
Simple, clean code
Architecture
Security
I execute these reviews with standard prompts I have created using Claude Code Subagents. Outputs are grouped by priority.
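Claude Code subagents are defined as markdown files with YAML frontmatter under `.claude/agents/`. A hedged sketch of what the security-perspective reviewer might look like; the name, tool list, and prompt text here are illustrative, not the author's actual subagent:

```markdown
---
name: security-reviewer
description: Reviews a branch's changes for security issues. Use for code review passes.
tools: Read, Grep, Glob
---

You are a security-focused code reviewer. Review the changes on the
current branch against main. Look for: disabled auth or TLS, injection
risks, secrets committed to code, and missing input validation.
Group findings by priority (high/medium/low) and cite the file and
line for each finding.
```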
For each, I review the results manually, and then ask the agent to open GitHub issues for the ones I deem legitimate (it is never all of them).
Another thing I like about this issue-driven approach is dealing with bugs and defects I find while reviewing code or the functional system's behavior. I often find a slew of semi-related or unrelated defects while working an issue. Personally, that is a big cognitive tax… I know there are many things wrong and I feel like I am quickly losing track or getting distracted chasing them down. With an issue-driven approach, we can easily open a new issue either 1) manually in GitHub or 2) via Claude Code through MCP. I do both depending on the nature of what I find. Using the agent approach can inject useful context from the code into the issue, which is helpful when you interpret it later.
Cloud Deployments
Most of the work I am doing right now is for a non-profit where I serve as a Volunteer CIO/CTO/Software Engineer. Given that they have a footprint in Google Workspace, I decided to use Google Cloud for all of my cloud deployments (Google Cloud friends, email me with non-profit credits! 😂).
Claude Code has been really helpful in working through live development environment configuration issues with Cloud Build, Cloud Run, and other services using the gcloud utility locally. I set up secrets and build triggers and such myself, but when build triggers were failing for various reasons early on, I was able to have Claude tail the logs and keep iterating on small changes to my cloudbuild.yaml until everything was right.
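For readers unfamiliar with Cloud Build, the file being iterated on looks roughly like this: a minimal cloudbuild.yaml sketch that builds a container, pushes it, and deploys to Cloud Run. The image name, service name, and region are placeholders, not the author's actual configuration:

```yaml
steps:
  # Build and push the API image. $PROJECT_ID and $SHORT_SHA are
  # substitutions provided by Cloud Build at trigger time.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/crm-api:$SHORT_SHA', './crm-api']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/crm-api:$SHORT_SHA']
  # Deploy the freshly built image to Cloud Run.
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'crm-api',
           '--image', 'gcr.io/$PROJECT_ID/crm-api:$SHORT_SHA',
           '--region', 'us-central1']
images:
  - 'gcr.io/$PROJECT_ID/crm-api:$SHORT_SHA'
```

When a trigger fails, `gcloud builds log <build-id>` (or tailing logs, as described above) shows which step broke, which is what makes the tight iterate-on-small-changes loop possible.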
Final Thoughts
A few final thoughts…
Obviously, review the outputs at some point. These tools are great but not perfect. They are great at getting things working but don't always follow your design parameters. They will take shortcuts like disabling authentication completely "because that will be a simpler approach". They will create some slop, which you can have them fix, but you have to be aware of it.
For lower-level systems work where precision matters, you might get some benefit from scaffolding but may be better off hand-rolling the more important elements of the code (maybe a 60/40 split).
There is no magic in the prompts I shared. I often change them slightly.
I haven’t fully automated my workflow yet but still think that would be awesome.
No approach is perfect. Mine is not. If you know what you want to build and how, it can work quite well and scale quite nicely.
I hope it helps you on your journey. I remain slightly averse to other people’s frameworks (Kiro, etc.) that force me to stop working and learn a new tool… but I think I am just busy and therefore lazy.
What is working for you? I’d love to hear in the comments.

