Breaking Old Patterns
Over the last year, I’ve been forcing myself out of old habits. In startups, we say: what got us here won’t get us there. I’ve started to feel that in how I build software. The patterns that served me for 20 years (DRY code, API-first design, and so on) still work, but they’re also limiting, especially with AI.
Seasoned developers get to a kind of “steady state”. We know what works for our brains, our teams, and our preferred tools. We’ve figured out how to deliver. That confidence is earned, but it can also lock us in. I’ve noticed I default to the same patterns, the same IDE, the same plugins. If I’m not careful, I’m coding like it’s five years ago. With AI in the mix, that’s a real risk. The hard part hasn’t been learning new tools—it’s been making space to question the old ones.
What works can still be improved with a pattern shift.
I’ve also seen this play out with teams I’ve worked with. Recently, I was helping a group of developers at a company in distress. In situations like that, I try to be careful—like a doctor, I aim to do no harm. I’m not there to overhaul everything. So I asked a few simple, low-friction questions: “Hey, have you heard of vibe coding?” and “How much AI do you have in your IDE?” These questions don’t provoke stress because they’re safe and easy to use in any coaching conversation.
Behavior change in teams requires a supportive rather than dictatorial approach.
The responses felt familiar. “I use what I know.” “I’ve got enough AI in my process.” What I heard was: I’m good…let’s move on. That was me, too, not long ago.
It took me a while to process both my own habits and what I was hearing from others. I think it comes down to the cost of investment. We’ve spent years getting good. We’ve worked hard to build reliable processes. And investment, a real investment, takes time, focus, and risk. We all have deliverables. Changing how we work is a risk in itself.
I recently read Thinking, Fast and Slow, and it helped something click. I’ve spent years shaping my process into a kind of rapid-response mode. I need my patterns hardwired. I need repeatability, so I can move fast under pressure. If I have to rewire every decision path, I lose speed.
That might be why I still reach for older libraries. Why I default to Axios instead of using native Node.js fetch. It’s not that the old way is better; it’s that I’ve already paid the cost to get fast with it. Re-investing takes more than just time; it takes breaking working habits.
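For what it’s worth, the swap I keep deferring is small. Here’s a minimal sketch of native fetch (built into Node 18+) standing in for a basic Axios GET; the URL and error handling are illustrative placeholders, not from any real project:

```javascript
// Minimal sketch: native fetch (Node 18+) in place of a basic axios GET.
async function getJson(url) {
  const res = await fetch(url);
  if (!res.ok) {
    // axios rejects on non-2xx status automatically; with fetch you check yourself
    throw new Error(`HTTP ${res.status} for ${url}`);
  }
  return res.json();
}

// Usage (any JSON endpoint works):
// getJson("https://example.com/api/items").then(console.log);
```

The per-call cost of the change is trivial; the real cost is re-hardwiring the habit.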
My final thought was this: ideas have a cost. We all have personal budgets, limited attention spans, finite energy, and too little time. We’re all resource-limited in some way. So we pick and choose where to invest. What to double down on. What to leave alone. It’s not laziness. It’s become a strategy, even if it’s unconscious.
I keep going back to the idea that ignoring a productivity gain is not a strength. I am amazed at how often seasoned coders and builders discount or ignore new tools or approaches.
How I Actually Work with AI
All of this led to a question I got from a few developers on LinkedIn:
“What’s your personal tooling approach to AI? How are you getting these personal gains? They sound like LinkedIn AI hype.”
They were hitting a ceiling. Most of them were using Claude or OpenAI. And like me, they’d made early gains, with better autocomplete and faster prototyping, but it had plateaued. Some weren’t sure there was more to be gained.
Before I answered, I had to admit something: I hate tooling. None of what I do now came from chasing tools. Every part of my setup came from someone else—conversations, open source projects, the occasional rabbit hole. Over time, I adapted what worked to my style.
Still, it’s made a real difference. So here’s what I shared.
1. Pseudocode Is an AI Prompt
I’ve stopped jumping straight into code. My first move is pseudocode—or even just narrating the solution like I’m pitching it to a stranger. Not a developer. Someone who needs to get it without knowing the stack. That shift alone improves the prompt quality and forces me to externalize the problem.
Once I have the idea's shape, I pick the right voice for the AI. This is where the gains started stacking. I’ll tell Cursor or GPT, “You’re a senior architect at Google. You manage the Calendar team. You understand the scale, the failure patterns, the system constraints.” I did this recently with a brittle calendar integration, and what came back was architecture I wouldn’t have thought to build. Not because I’m not good, but because I haven’t been in those rooms.
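To make that concrete, here’s a toy sketch of what I mean by stacking plain-language pseudocode under a role preamble. The function name, role text, and pseudocode are all invented for illustration; this isn’t any real tool’s API:

```javascript
// Hypothetical sketch: compose a role-primed prompt from plain-language pseudocode.
function buildPrompt(role, pseudocode) {
  return [
    `You are ${role}.`,
    "Given the plan below, propose an architecture and point out failure modes:",
    "",
    pseudocode,
  ].join("\n");
}

const prompt = buildPrompt(
  "a senior architect at Google on the Calendar team",
  [
    "1. Poll the external calendar API.",
    "2. Diff against cached events.",
    "3. Emit change events to subscribers.",
  ].join("\n")
);
console.log(prompt);
```

The point isn’t the helper; it’s that the role and the narrated plan travel together, so the model answers from the right vantage point.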
2. Capture Pre-Sets at Each Step
I’ve also started to capture the decisions I make in the moment. Cursor lets me save presets: code style, approaches, design choices, edge-case preferences. Every time I lock one in, the next output gets closer to what I actually want. I spend less time fixing AI code and more time accepting it.
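Cursor reads project-level rules from a rules file. The entries below are invented examples of the kind of preferences I mean, not my actual preset:

```text
# Example project rules for Cursor (illustrative, not a real preset)
- Use TypeScript strict mode; no `any`.
- Prefer native fetch over axios for HTTP calls.
- Every exported function gets at least one Jest test, colocated in __tests__/.
- On date/time edge cases, handle DST transitions explicitly and say so in a comment.
```

Each line is a decision I never have to re-litigate in a prompt.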
3. Prompt to a Role
More and more, I prompt toward outcomes, not tasks. I think of the AI as either a junior, a peer, or a systems architect, depending on the situation. And I watch for when I’m one-shotting things. I wouldn’t accept a PR that changes a dozen files with no narrative. So I don’t let the AI do that either. Tight, intentional work only.
4. Control Context with different tools
The big unlock for me was context control. I use different tools for different roles: Cursor locally when I’m looking at code, Lovable or Base44 for broader prompts with looser control of context, and Warp.dev when I’m working on the command line. Cursor lets me feed in just a function, a file, or a folder. That boundary-setting matters. The AI is more responsive and predictable when I define the problem’s shape.
I’ve also started adding visuals. Screenshots in prompts, rough wireframes when I can. If I can’t sketch a system, I probably don’t understand it yet. Text + image has been a surprisingly strong combo. The outcomes are clearer, the trade-offs sharper.
5. AI Comments = Human-Readable Comments
One small thing with a big payoff: I write human-readable comments. Not for other people—for future me. Especially for edge cases. I want to know why a function exists when I come back six months later.
Testing has shifted, too. I try to have at least one test per function, minimum. Jest in watch mode helps a lot—only re-runs what’s touched. It keeps my local loop fast, even with heavy coverage.
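A minimal way to wire that up (the script name is my choice; `--watch` is a standard Jest flag that re-runs only tests related to files changed since the last commit):

```json
{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch"
  }
}
```

With `npm run test:watch` running in a terminal, editing one function re-runs only its related tests, which keeps the loop fast even as coverage grows.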
I no longer use the JSDoc format. Defining a function’s arguments can be helpful, but when I describe the business case for a function, the AI has much more agency to help me match code to outcomes.
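As an invented illustration of the difference, compare a type-focused JSDoc header with a business-case comment. The scenario, function, and numbers are made up:

```javascript
// Instead of type-only documentation like:
//   /** @param {number} basePrice @param {number} fxRate @returns {number} */
// I write the business reason the function exists:

// Why this exists: quotes were failing overnight when the FX rate sync lagged,
// so we fall back to the last known rate instead of rejecting the quote.
function quotePrice(basePrice, fxRate, fallbackRate) {
  const rate = Number.isFinite(fxRate) ? fxRate : fallbackRate;
  return Math.round(basePrice * rate * 100) / 100; // round to cents
}

console.log(quotePrice(100, 1.1, 1.0)); // 110
console.log(quotePrice(100, NaN, 1.0)); // fallback path: 100
```

The second comment tells both future-me and the AI what outcome the code protects, not just what shapes go in and out.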
6. Just-in-Time Testing
Lately, I’ve been trying to break out of single-repo work. I want my AI context to span across repos—especially for shared infra or template stacks. I’ve been toying with a techstack.json concept: a file that describes the whole environment, so the AI can better reason about where new code fits.
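I don’t have a settled schema yet; the invented sketch below is only the direction, a single file the AI can read to see the whole environment:

```json
{
  "repos": {
    "api": { "language": "typescript", "framework": "express", "tests": "jest" },
    "web": { "language": "typescript", "framework": "react" }
  },
  "shared": ["auth-lib", "event-schemas"],
  "conventions": { "http": "native fetch", "style": "eslint + prettier" }
}
```

Fed into the prompt context, even a rough map like this lets the AI reason about where new code belongs instead of assuming everything lives in the open repo.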
And then there’s this odd experiment I’ve been doing with virtual focus groups. I’ll simulate users in GPT and run product tickets through them. Each “voice” brings different pushback. Sometimes I have them debate. It’s early, but the ROI insight it surfaces is real.
My big win was setting a preset in Cursor: “When I add a new feature, [AI] reviews the tests and keeps them up to date.” I use a Jest command to run the resulting test suite in watch mode, so if I edit a function or set of functions in the src folder, only the corresponding tests re-run. I also use the Cursor feature that links console responses back to the IDE, which lets agent mode gather more information about tests and errors.
Over the holidays, I plan to binge-watch every Cursor and tooling video I can find.
7. Tell the AI tooling about the ‘Why’
This was one of the approaches I found most surprising. I found I got an improvement in outcomes when I outlined the background reasoning, the business need. Just asking the AI to help me adapt a solution or an approach was fine. But when I explained why I thought a feature would help the business, I got much more detailed and focused results. I have been using the ‘plan’ feature in both Base44 and Cursor to ensure I start with the why.
Would Simon be proud? Surprised?
What’s Next
Even with all of this, I think I’m only tapping into 30% of what’s possible. I’m still figuring it out. Still adapting.
But the most significant shift? It’s not the tools. It’s the posture. I stopped assuming that what I already know is all I need to know.
One more thing, if you’re curious about the virtual focus group idea, you should check out Ian Venskus. He did a session on Launch by Lunch that dug into this exact topic. Honestly, I think he even surprised himself with the results. The way he used simulated users to test ideas—real pushback, absolute clarity—it was one of the best sessions I’ve seen on how to stretch this tooling into actual product thinking.
My own process isn’t “done” either. This Monday, we’re hosting an in-person session with Dave Strickler, a seasoned CTO who’s gone all-in on AI. He’s built some incredible things, and he’s not just talking theory; he’s showing the tooling and approaches that increase his throughput. If you're curious, here’s the link to join us:
If this resonates and you're trying to bring this thinking to your team, we offer coaching and advisory services on AI-native dev and founder thinking. No playbooks. Just helping teams evolve in ways that actually stick. Feel free to reach out.
One Last Thing: The Tool That Forced All of This
Many of these processes started as side projects. When I was working full-time as a fractional CTO, I needed better tooling, fast. So I vibe-coded a bridge between Google Calendar and an API. That bridge turned into something bigger.
Now it’s a tool designed for fractional CTOs, or anyone who sells time by the hour. The core mission was simple: I didn’t have time to track hours or send invoices. So I built the tool I needed.
I’m now running a small private beta to see if it works for others.

