Am I trying to make a glider gun with my prompts and context?

05.07.2025

I'm still searching for a mental model of what I'm doing when I'm tackling a medium-to-large problem with LLM coding tools (Cursor, Claude Code).

I'm doing my best to make one prompt be enough for the entire process, since the more follow-up instructions I need to give it, the worse the outcome will be (because I didn't think it through enough up front). A kind of perfectionist's dream I suppose - or nightmare. Should water cooler conversations be about what your 1-shot percentage is for LLM prompts? Do senior engineers have a lower handicap? I'm rambling...

But I'm also trying to assemble the right files for it to look at initially, so it doesn't have to feel its way through the codebase like a newborn, as if it hadn't seen every single file a thousand times in the last 72 hours, having authored half of them.

Now I'm writing slash commands for Claude Code, a collection of heuristics and algorithms that are trying to say: "It's this kind of situation, and this kind of situation is resolved this way." As I get better at writing them, I presume my slash commands will solve their respective situations reliably, automatically. How should I think of this?
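For the curious: Claude Code picks these up from markdown files in the project's .claude/commands/ directory; the file name becomes the command, and $ARGUMENTS stands in for whatever you type after it. Here's a hypothetical sketch of the shape I mean (the command name and the steps are made up for illustration):

```markdown
<!-- .claude/commands/fix-flaky-test.md (hypothetical) -->
The test named in $ARGUMENTS is flaky.

1. Run it in isolation several times and record how it fails.
2. Read the test and the code under test before proposing a change.
3. If the cause is timing, remove the race instead of adding sleeps.
4. Run the full suite before declaring it fixed.
```

So invoking the command with a test name expands into that checklist: the "this kind of situation is resolved this way" part, written down once instead of re-prompted every time.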

I am reminded of the glider gun in Conway's Game of Life:
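A glider gun is a small stationary pattern that fires off an endless stream of gliders - five-cell patterns that crawl diagonally across the grid forever - and all of it falls out of one local rule. Here's a minimal sketch of that rule in Python, with a single glider standing in for the full 36-cell Gosper gun (the coordinates and step count are just for illustration):

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance one generation: birth on 3 neighbors, survival on 2 or 3."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The classic glider; after 4 steps it reappears shifted one cell
# down and one cell right, and keeps crawling forever.
cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]
```

Four steps later the same five cells reappear, shifted by one row and one column; the gun is just machinery for stamping those five cells out on a schedule.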

Is that me and Claude? Are the gliders product features? The moment when the two halves of the mechanism collide certainly looks the way it sometimes feels to prompt and give context to these tools. It's chaotic for a moment, but then structure comes out, sometimes to my surprise.

I think the glider gun is what I'm striving for, maybe. To have a system I can reliably use to generate new pieces of the products I'm working on, one that works mostly automatically.

And in Conway, there's always a meta-bigger gun to be constructed. I hope it keeps being fun.

Game of Life courtesy of https://github.com/Jedidiah/game-of-life-wc