How I’m re-writing 100+ Substack posts by teaching Claude Code to think like me
The system I built the day I stopped writing and started teaching
It was just yesterday, Thursday, April 9, 2026.
I’m sitting in a live Zoom workshop, watching the host explain his AI writing framework for producing consistently engaging newsletter articles.
I was supposed to be paying attention to the demo. But somewhere between the fourth and fifth slide I stopped watching the screen and started thinking about my own articles.
I have to apply this framework to all my posts going forward!
No-Code Exits hit its 100th post on March 7. I wrote a milestone post about it and moved on.
100+ posts just sitting there in the archive collecting dust.
Then it hit me… even better: what if I used it to rewrite all my old posts too?

The key insight: some of them don’t need new ideas. They just need to be re-written.
So I found myself with another question. The same question I think every writer with more than a year of publishing eventually has, even if they don’t say it out loud:
What do you actually do with 100+ articles you published before you knew what you were doing?
I’ll tell you what I did, but first I need to tell you what I tried before, and why it didn’t work.
100 posts, zero compounding
Here’s the part nobody says about a Substack archive: it’s the single most undervalued asset you own as a writer.
You spent a year, maybe two, building it. Hundreds of hours. And once each post publishes, it just sits there.
New subscribers don’t read it because nothing surfaces it. You don’t link to it because you can’t remember exactly which post said the thing you’d want to reference.
The compounding effect that should be the entire point of writing in public, where every post makes every other post more valuable, doesn’t happen, because the archive doesn’t actually compound. It accumulates.
This is the difference between an asset that grows in value over time and content churned out for content’s sake.
I’m calling this the undervalued archive. And I think it’s the biggest blind spot writers have about their own businesses.
There’s a real cost too. Every week that goes by where my archive sits unchanged is a week my best ideas are stuck at the quality I had when I first wrote them.
Not the quality I have now. Not the quality I’ll have in six months. Frozen at the version of me who shipped the post one Friday afternoon and never looked back.
Why the obvious fix is worse than doing nothing
So here’s where most writers reach for AI.
I did too.
The thinking goes: I have 100 posts, I have an LLM, the LLM can write, let me just feed it my old posts and ask it to make them better.
This is the worst possible move.
The reason it fails isn’t technical. It’s that you’ve handed your archive to a system that has no idea what you’d write today.
AI used as a faster typist will give you 100 posts that are smoother, more polished, and stripped of the specific voice that made them yours in the first place. They’ll read like every other AI-generated newsletter on Substack. Which is to say, readers will scroll past without knowing it was you at all.
The trap is in how you frame the task. “Make my old post better” sounds like a writing problem. It isn’t. It’s a judgment problem. The reason your old post isn’t as good as you’d write it today is that you didn’t have the judgment then that you have now. And no amount of AI typing speed substitutes for judgment.
This is the second mistake most writers make about AI, and it’s worse than the first one: not “I won’t use AI at all” (which at least preserves your voice) but “I’ll use AI to write for me” (which destroys it).
The thing that sat in front of me at the workshop was a third option. Not “AI writes for me.” Not “AI doesn’t touch my writing.” Something else.
Teaching vs. Prompting
Watching Taylin’s workshop, I noticed something he probably didn’t even mean to demonstrate. The Claude skills weren’t writing his newsletter. The skills were applying his judgment to whatever raw material he gave them.
The idea-finder didn’t generate ideas. It surfaced tensions from material the writer brought in.
The title-generator didn’t invent titles from scratch. It applied a scoring framework about what makes titles work to phrases the writer fed in.
The outline-creator didn’t outline. It took a chosen tension, a chosen title, and source material, and built scaffolding using rules about how good newsletters get structured.
In every case, the AI wasn’t replacing writing. It was executing writing judgment on Taylin’s behalf. Fast, consistently, and without him having to be in the loop for the mechanical parts.
This is a different thing than prompting. Prompting is when you ask an AI to write something and then spend longer editing the output than you would have spent writing it from scratch.
Teaching is when you take the time to externalize the rules you use in your own head and turn them into something the AI can apply on its own.
Prompting scales linearly with your time. Teaching compounds.
I’m calling this teaching vs. prompting, and it’s the move that collapses both of the earlier mistakes. When you teach, you’re not handing your archive to a system that doesn’t know your voice. You’re teaching a system to apply your voice to your archive, one judgment at a time.
I sat in that workshop and the implications started lining up. If Taylin had built three skills that each teach AI a piece of his writing judgment, I could use those same skills to bring my own judgment to my own archive. I could rewrite 100 posts not by handing them to an AI, but by teaching an AI to think the way I think and then letting it work.
This is the moment the article I’m writing right now started forming in my head.
The missing half
I want to be honest about something Taylin’s demo didn’t show, because it took me about an hour after the workshop to notice it.
The three skills he shipped — idea-finder, title-generator, outline-creator — cover the first half of writing a newsletter. They handle the work that happens before you start drafting: extracting the angle, picking the title, building the structure. That’s everything you need if you’re writing a brand new post from scratch.
But that’s not the problem I was trying to solve. I wasn’t writing new posts. I was rewriting old ones.
For a rewrite, you don’t need to invent an angle from nothing, you need to find the one already buried in the original. You don’t need to pick a title from scratch, you need to surface a phrase the writer already wrote and recognize it as the headline. You don’t need to outline from raw material, you need to restructure existing content around a thesis that’s been there all along.
That’s a different goal, and the skills cover only part of it. The second half is missing: drafting from a rewrite outline, removing the generic sentences anyone could have written, and doing the voice check that catches what the AI didn’t.
The day I stopped writing
Here’s what I’m building: a six-stage pipeline (plus a Stage 0 intake step) that runs locally in Claude Code. I’m calling it the rewriter. Totally original, don’t steal the name! 😉
Stage 0 — Source intake (parse the original article)
Stage 1 — Idea extraction (Taylin's idea-finder, run in extraction mode)
Stage 2 — Title generation (Taylin's title-generator)
Stage 3 — Outline (Taylin's outline-creator, run in rewrite mode)
Stage 4 — Draft (new — produces the rewritten post from outline + source)
Stage 5 — De-IKEA pass (new — flags generic sentences for human review)
Stage 6 — Voice check (human-in-the-loop final pass)

Stages 1, 2, and 3 are Taylin’s skills used drop-in. Stages 0, 4, 5, and 6 are the new orchestration that turns three skills designed for new-post writing into a pipeline that can handle rewrites at archive scale 🤞
Every stage ends with an explicit stop-and-wait checkpoint where the writer reviews the output before the next stage runs. The whole pipeline lives in a single markdown file called CLAUDE.md that Claude Code reads at session start.
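To make the shape of this concrete, here is a minimal sketch of what a staged pipeline with stop-and-wait checkpoints could look like. This is not Claude Code’s actual mechanics — the stage names mirror the list above, but `run_stage` is a hypothetical placeholder for where a skill would do its work, and the file layout is an assumption:

```python
# Hypothetical sketch: each stage writes its output to a markdown file,
# then pauses so the writer can review before the next stage runs.
from pathlib import Path

STAGES = [
    "0-source-intake",
    "1-idea-extraction",
    "2-title-generation",
    "3-outline",
    "4-draft",
    "5-de-ikea",
    "6-voice-check",
]

def run_stage(name: str, source_text: str) -> str:
    # Placeholder for the real work (in practice, a Claude Code skill
    # guided by the rules in CLAUDE.md).
    return f"[{name}] output for {len(source_text)} chars of source"

def run_pipeline(source_path: str, workdir: str = "rewrite") -> list[str]:
    out_dir = Path(workdir)
    out_dir.mkdir(exist_ok=True)
    source = Path(source_path).read_text()
    outputs = []
    for stage in STAGES:
        result = run_stage(stage, source)
        (out_dir / f"{stage}.md").write_text(result)
        outputs.append(result)
        # Stop-and-wait checkpoint: in interactive use, this is where the
        # pipeline blocks until the writer approves the stage's output file.
    return outputs
```

The design choice that matters is the one-file-per-stage output: it makes every checkpoint something you can read, diff, and reject before the next stage ever runs.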
Then I had to find out if it actually worked.
The first article I ran it on was a recent No-Code Exits founder post by Karen Spinner about launching her SaaS, CarouselBot. I picked it deliberately: it’s already a strong post (the highest-engagement piece in our recent archive), it’s a founder post, and because Karen co-authors with us, the relationship made it a safe demonstration.
The point of the test wasn’t to fix a bad post. It was to find out what the system could do to a post that was already working.
What it produced was interesting.
The strongest move was structural. Karen’s stated thesis — “the code is often the easy part” — was already in her piece. It was sitting in her closing section, where most readers never reach. The pipeline pulled that exact line and promoted it to the title.
Then it restructured her four loose topic sections into three tightly committed arcs, cut a roadmap section that was weakening the close, and surfaced three other lines Karen had already written (“vibes is not a pricing strategy,” “the math kept bugging me,” “more frustrating than any coding challenge”) as anchor lines for individual sections.
The rewrite didn’t add new stories. It didn’t fabricate numbers. It didn’t imitate her voice. It took Karen’s existing content, found the angle she already had, and rebuilt the scaffolding so the angle could carry the weight of the whole post. Same words, different load-bearing.
That’s the lesson the system taught me about my own archive. The reason my early posts aren’t as good isn’t that I didn’t have the right ideas (I still may not), it’s that the right ideas are buried in section three of post 47, where nobody will ever read them. They need to be re-surfaced, not re-invented.
Then the pipeline taught me a different lesson, one I wasn’t expecting.
In the de-IKEA stage, the system flagged three sentences in Karen’s draft that felt generic. For each one, it proposed a replacement.
Two of the three replacements were worse than the originals. One used a metaphor that broke halfway through (“a workflow gap doesn’t close” — what does close mean applied to a workflow gap?).
One misdiagnosed Karen’s assumptions. The system suggested she had “treated her excitement as data,” but Karen’s real error was projection (assuming other writers would feel the same enthusiasm she felt), not data confusion. You can’t treat one person’s internal emotion as an external data point.
I caught both. Not because I’m better than the system. Because the system is doing pattern matching at the surface level and I was doing semantic judgment about what the original sentence was actually trying to do.
The system can flag suspicious sentences reliably. It cannot reliably propose better ones, because proposing a better sentence requires understanding the failure mode of the current sentence, and that requires the kind of judgment the system was never going to have.
This is the most important thing the rewrite pipeline taught me. The de-IKEA stage isn’t a generator. It’s a dialogue. The system’s job is to point at sentences that feel off. The writer’s job is to decide whether the system has a real catch or whether it just reached for something clever and missed the point.
When that division of labor is clear, the system gets faster every time you use it. When it isn’t clear, the system slowly replaces your voice with its own and you don’t notice until your post sounds like every other AI-generated newsletter on the platform.
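That division of labor — flag, don’t rewrite — can be sketched in a few lines. This is an illustration, not the actual de-IKEA stage: the phrase list is a made-up stand-in for the writer’s own taste rules (in practice an LLM pass applies them), but the shape is the point — the output is a review queue for the human, never a replacement:

```python
# Hypothetical sketch of a flag-only "de-IKEA" pass: the system points at
# sentences that match generic patterns but proposes no replacements.
# Rewriting stays with the writer.
import re

# Assumed phrase list for illustration only; a real pass would encode the
# writer's own judgment rules rather than a hardcoded cliche list.
GENERIC_PATTERNS = [
    r"\bin today's fast-paced world\b",
    r"\bgame.changer\b",
    r"\bat the end of the day\b",
    r"\bunlock (your|the) potential\b",
]

def flag_generic_sentences(text: str) -> list[str]:
    """Return sentences that trip a generic-phrase pattern.

    The output is a review queue for the writer, not a rewrite.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text)
    flagged = []
    for sentence in sentences:
        if any(re.search(p, sentence, re.IGNORECASE) for p in GENERIC_PATTERNS):
            flagged.append(sentence)
    return flagged
```

Keeping the function’s return type to a list of suspect sentences, with no suggested fixes, is what enforces the dialogue: the system raises its hand, and the writer decides whether it has a real catch.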
The writer you will become
I didn’t expect this part.
When I sat down at the workshop, I thought I was going to learn how to write faster with AI. Instead, I learned how to teach AI to write better, and like me.
By the end of that day, I’d built a six-stage pipeline, run it on a real article, caught two of its mistakes, and started thinking about what it would mean to run it on the other 99 posts in my archive.
But the bigger shift wasn’t the pipeline. It was the insights I am discovering as I build it.
The version of me that wrote those 100+ posts thought of writing as something I did alone on Fridays, with a deadline, staring at a blinking cursor. Now I see myself as a teacher and AI as a student.
The version of me building the rewrite pipeline thinks about writing differently. The writing isn’t the thing being scaled. The judgment behind the writing is the thing being scaled.
Every rule I set, every taste decision I can put into words, becomes a rule the system can apply to my next rewrite without me in the loop.
This is uncomfortable to type, because it sounds like I’m describing a writer who has stopped writing. I haven’t. I’m writing this post right now.
I reviewed every line and still made a lot of edits.
What’s changed is how I approach writing with AI.
Writing the article in front of you isn’t the only output of my work this week. The pipeline is also an output. The lessons logged from running it on Karen’s post are also an output.
The decision calls I had to make about what counts as good, what to keep, and what to disregard are outputs that didn’t exist before this week, and they’ll keep paying off every time I run the pipeline on another article.
The 100+ posts aren’t a problem I have to solve once. I now have a mechanism to review my writing at scale and iterate faster over a larger body of work.
That feels like a much better place to be standing than the one I was in last Friday when I missed yet another publishing date 😓
What’s next
Here’s what I’m doing next week.
I’m running the pipeline on more posts to refine it. I’m logging every time the system breaks and every place it surprises me. Those logs become the second version of the pipeline.
But the bigger thing I want you to take from this isn’t the pipeline.
It’s the thinking behind it. You don’t use AI, you teach AI.
If you have an archive sitting on Substack right now, accumulating instead of compounding, the move isn’t to feed it to an AI and hope.
The move is to get one rule right at a time until the system can apply your thinking faster than you can apply it yourself. Then you point that system at your archive and let it work.
You’ll still be in the loop on every rewrite.
You should be.
But the loop you’re in is a different loop than the one you’ve been in. You’re not generating sentences. You’re designing the system that generates them, and you’re catching the system’s mistakes when it gets them wrong.
That’s what happens when you stop writing and start teaching.
Everything in this post was part of, and inspired by, a Cozora workshop with Taylin John Simmonds on April 9, 2026. The Claude skills and framework are his work. The rewrite-mode adaptation and the lessons from running it on a real article are all me.
Want Taylin’s Claude skills and frameworks, live trainings, and resources from 30+ other AI creators?
Join me in Cozora — a live learning community where creators show how they actually build with AI, every week.
Paid Subscribers to No-Code Exits get a bigger Cozora discount of 50% waiting for you below the paywall.