From Agile to AI: How the Idea-to-Code Pipeline is Being Rewritten
For years, we’ve had a standardized way of converting ideas into software: Agile. The process was designed to take a user’s need—something often fuzzy and high-level—and distill it into clear, buildable pieces of work. Tickets were the currency of translation. "As a dispatcher at a trucking company, I need to update my delivery logs so I can track delays." Clean, structured, and tied to value.
Then the product owner would step in to break that idea down further. Maybe it was too big. Perhaps it touched too many systems. Engineers would weigh in, debate edge cases, and settle on implementation details. Eventually, code was written. It would then flow back up the chain: the developers demoed the work, the product owner reviewed it, and the loop closed with a yes or a no. Language moved down and back up again.
This process worked, but it wasn’t fast. Translating an idea took time, meetings, documentation, and assumptions. Even in a clean system, a lot of nuance lived in the heads of product managers and developers.
Then came LLMs.
Now, the founder doesn’t need to write a perfect Agile ticket. They can converse with an AI and begin breaking down the idea in real-time. They can explore variations, ask about edge cases, and iterate on the architecture to refine it. The act of breaking down a story isn’t just an engineering task anymore—it’s a prompt engineering task.
In some ways, this makes the process more transparent. Before, I’d write a few bullet points and hope my devs understood the vision. Now, those bullets can turn into 5–10 specific prompts, each one testing a part of the idea. What used to live only in the product owner’s head—or worse, hidden in a dev’s assumptions—is now visible and debuggable.
But this shift brings new challenges. Bad requests create bad prompts. Shallow thinking leads to superficial answers. Experience still matters—a lot. Senior developers have a sixth sense for what features will cost, what might break, and what is likely to explode the scope of a ticket. LLMs can help surface the unknowns, but someone still has to know which unknowns matter.
Here’s an example: I wanted to use Beehiiv as a lightweight CRM, but the default profile editor wasn’t cutting it. Instead of just filing a vague request—"Make the profile page better"—I worked with ChatGPT to break it down. What authentication system would it use? What API calls would I need? What custom fields should I expose? Years of experience helped me ask better questions, but the AI helped turn them into a coherent, testable plan.
The wildcard in most tickets isn’t the obvious stuff—it’s the assumed-easy piece, the one-liner that hides a week of work. AI can smoke those out. It turns the invisible parts of planning into something tangible.
And yet, we’re still early. Can the language of complex ideas really be mapped cleanly into promptable questions? Can LLMs replace the slow, hard-won instincts of seasoned devs? Hard to say.
But one thing is clear: the developer’s role is shifting. It’s no longer just about code. It’s about shaping ideas, fitting them into architecture, and guiding them through ambiguity. That’s the real skill—and it might be the new measure of a developer’s value.
So, where does that leave junior devs? If AI raises the productivity ceiling, will companies still invest in developing their talent? When a senior can deliver 2–5x output, do we skip the messy middle?
We’re not sure yet. But we’re rewriting the playbook.
A One-Shot Prompt Example
Here is the simple, non-technical prompt I started with. The working pieces assume familiarity with Beehiiv's API. For the non-coders: Beehiiv is where I write my newsletter, and a little bit of vibe code gives it a process that lets me query my database of subscribers.
One-Shot Prompt: “I want users to be able to update their profile info through a clean, standalone form on my site. The data should sync with the custom fields I’ve set up in Beehiiv, so their subscriber info stays up to date automatically.”
Let’s Deconstruct the Prompt
This is where I decided to break it apart. I know how to do each of these steps, and I am aware of the hidden risks and issues that can arise. I wanted to see how well I could break a task with some real technical challenges into a shopping-list format that I could knock out.
If the AI produces the same breakdown I would have made myself, then an experienced developer paired with an LLM takes on a new role in the Agile sprint planning process.
Step 1: Create the Landing Page
Prompt: Create a public-facing React web app for the subdomain profile.launchbylunch.co. Add a hero section with the text: “Update your Launch by Lunch newsletter preferences and profile.” Add an email input and a button labeled “Log In to Update Profile.”
What to Expect: A clean landing page appears with a title and a form asking for the user’s email. Nothing about profiles or settings is shown until login.
Step 2: Handle Email Login with Beehiiv Verification
Prompt: When the user submits their email, POST it to /api/check-and-login. If successful, show “Check your email to log in.” If not found, show: “This email isn’t subscribed.” If error, show: “Something went wrong.”
What to Expect: When a subscribed email is entered, a login link is sent. Invalid or unrecognized emails show a helpful message. This verifies the user exists in Beehiiv before proceeding.
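The prompt above boils down to a small mapping from backend outcome to UI message. A minimal sketch: the status codes are my assumption about how a `/api/check-and-login` endpoint might respond, while the messages come straight from the prompt.

```typescript
type LoginOutcome = "sent" | "not_subscribed" | "error";

// Assumed status mapping: 200 = subscriber found, magic link sent;
// 404 = email not in the Beehiiv audience; anything else = generic failure.
function outcomeFromStatus(status: number): LoginOutcome {
  if (status === 200) return "sent";
  if (status === 404) return "not_subscribed";
  return "error";
}

// Messages copied verbatim from the prompt.
function loginMessage(outcome: LoginOutcome): string {
  switch (outcome) {
    case "sent":
      return "Check your email to log in.";
    case "not_subscribed":
      return "This email isn’t subscribed.";
    case "error":
      return "Something went wrong.";
  }
}
```

The frontend would POST the email and render `loginMessage(outcomeFromStatus(res.status))`, keeping the Beehiiv lookup itself on the server.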
Step 3: Detect Login State and Switch Screens
Prompt: Detect if the user is logged in with Supabase. If not, show the landing page. If logged in, show a dashboard.
What to Expect: After clicking the magic link, users are taken to a dashboard screen. Visitors who are not logged in still see the landing/login page.
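The screen switch is one line of logic. This sketch assumes a Supabase-like session object; in a real app, `supabase.auth.getSession()` or an `onAuthStateChange` listener would supply it.

```typescript
// Stand-in for the session Supabase returns; null means not logged in.
type Session = { user: { email: string } } | null;
type Screen = "landing" | "dashboard";

// Logged-in users see the dashboard; everyone else sees the landing/login page.
function screenFor(session: Session): Screen {
  return session ? "dashboard" : "landing";
}
```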
Step 4: Fetch the Beehiiv Profile on Login
Prompt: When user is logged in, call /api/get-profile. Show a spinner until it loads. Then display the field data in a form.
What to Expect: A loading animation appears briefly, then the user’s Beehiiv profile fields load and are shown in a form.
Step 5: Handle Errors Gracefully
Prompt: If the profile fetch fails, show: “Could not load profile. Try again later.”
What to Expect: If something breaks in Beehiiv or Supabase, users see a friendly error message instead of a blank or broken page.
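Steps 4 and 5 together describe a three-state screen: spinner, loaded form, or error message. A minimal sketch of that transition, with placeholder field names; the response shape is an assumption about what `/api/get-profile` might return.

```typescript
type ProfileFields = Record<string, string>;

// The profile screen is always in exactly one of these states.
type ProfileState =
  | { kind: "loading" }                          // spinner while the fetch is in flight
  | { kind: "loaded"; fields: ProfileFields }    // form prefilled with Beehiiv data
  | { kind: "error"; message: string };          // friendly fallback from step 5

// Called when the /api/get-profile fetch settles, successfully or not.
function onFetchSettled(result: { ok: boolean; fields?: ProfileFields }): ProfileState {
  if (result.ok && result.fields) return { kind: "loaded", fields: result.fields };
  return { kind: "error", message: "Could not load profile. Try again later." };
}
```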
Step 6: Show Editable Form with Profile Fields
Prompt: Render a form with input fields for each custom field. Prefill them using the data returned from /api/get-profile.
What to Expect: The user sees a form with their current profile information (e.g., name, preferences) already filled in.
Step 7: Use Proper Input Types
Prompt: Use input types that match the field types (text, select, checkbox). Label each input.
What to Expect: The form is easier to use — it features dropdowns where appropriate, checkboxes for true/false options, and all inputs are clearly labeled.
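Steps 6 and 7 can share one data structure: a field descriptor that drives both the prefilled value and the HTML input type. The field kinds here are my assumption about how Beehiiv custom fields might be typed (free text, booleans, and fixed option lists).

```typescript
type FieldKind = "string" | "boolean" | "list";

// One entry per custom field returned by /api/get-profile (assumed shape).
interface CustomField {
  name: string;
  kind: FieldKind;
  value: string;
  options?: string[]; // only present for "list" fields
}

// Step 7: pick the matching input widget for each field kind.
function inputTypeFor(kind: FieldKind): "text" | "checkbox" | "select" {
  if (kind === "boolean") return "checkbox";
  if (kind === "list") return "select";
  return "text";
}

// Step 6: turn the profile response into initial form state for prefilling.
function initialFormState(fields: CustomField[]): Record<string, string> {
  const state: Record<string, string> = {};
  for (const f of fields) state[f.name] = f.value;
  return state;
}
```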
Step 8: Save Updated Profile Data
Prompt: On submit, call /api/update-profile with updated field values. Show a success or error message.
What to Expect: When the user updates their info and clicks “Save,” their data is updated in Beehiiv, and a success message is shown. Errors are communicated if something goes wrong.
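The save step splits into building the request and reporting the result. In this sketch the payload shape (`custom_fields`) and the two messages are assumptions; the endpoint name comes from the prompt.

```typescript
// Build the fetch options for POST /api/update-profile.
// Wrapping the fields under "custom_fields" is an assumed payload shape.
function buildUpdateRequest(fields: Record<string, string>) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ custom_fields: fields }),
  };
}

// Map the response outcome to a user-facing message (wording is illustrative).
function saveMessage(ok: boolean): string {
  return ok ? "Profile saved." : "Could not save your changes. Please try again.";
}
```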
Step 9: Add Logout Functionality
Prompt: Add a logout button that signs the user out of Supabase and shows the landing screen again.
What to Expect: Clicking the logout button ends the session and shows the email login screen again.
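Logout is the simplest step. This sketch defines an `AuthClient` interface that mirrors the shape of `supabase.auth.signOut()` without pulling in the real SDK, so the handler logic stands alone.

```typescript
// Minimal stand-in for supabase.auth: signOut resolves with an optional error.
interface AuthClient {
  signOut(): Promise<{ error: Error | null }>;
}

// End the session, then hand the app back to the landing/login screen.
async function handleLogout(auth: AuthClient): Promise<"landing"> {
  await auth.signOut(); // ends the Supabase session
  return "landing";     // the app re-renders the email login screen
}
```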
Step 10: Secure the Beehiiv API
Prompt: All calls to Beehiiv should go through /api/... endpoints. Never connect directly to the Beehiiv API from the frontend.
What to Expect: Beehiiv API keys are stored only in the backend (Supabase Edge Functions). They are never visible in the browser or exposed to users.
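Step 10 is mostly about where the secret lives. A sketch of the server-side helper: the base URL and bearer-token header are assumptions modeled on a typical REST API (check Beehiiv's API docs for the real shape), and in a Supabase Edge Function the key would come from an environment secret, never the frontend.

```typescript
// Build the auth headers on the server; the key never reaches the browser.
function beehiivHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`, // bearer-token auth is an assumption
    "Content-Type": "application/json",
  };
}

// The frontend only ever calls /api/... routes; this helper runs server-side
// and turns a relative path into a full, authenticated Beehiiv request.
function proxyRequest(path: string, apiKey: string) {
  return {
    url: `https://api.beehiiv.com${path}`, // base URL is an assumption
    headers: beehiivHeaders(apiKey),
  };
}
```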
The exciting outcome of this deconstruction is that I discovered features in Supabase that I knew existed but had never used in live code. The process was educational: I knew I could pair a magic link with an external API call, but it wasn't top of mind.
No matter how many apps I've built, I still find edge cases and features hidden in tooling I thought I already fully understood.
It’s an open question: How is ‘experience’ going to get priced in the post-LLM AI coding world? I'm not sure I want to leave an undergraduate computer science program right now and try to enter this labor market.
If you read the posts on LinkedIn, for and against the use of AI and vibe coding, the emotions are hard to miss.