Revolutionizing Development: Harper Reed's Three-Step LLM Workflow for Building Smart Products

02:24

In this article, Harper Reed walks through his three-step LLM code-generation workflow for building small products. It starts with a brainstorming phase in a conversational LLM such as ChatGPT, honing the idea through iterative, one-question-at-a-time exchanges until a developer-ready specification emerges. The spec is then broken down into small, test-driven steps with a reasoning model, and finally executed in discrete loops with tools like Claude and Aider. Reed is enthusiastic about the efficiency of the process—"it is pretty quick. Wild tbh"—and highlights repomix for packaging legacy codebases into LLM context. The method not only boosts his productivity but also lowers the barrier to trying new programming languages and coding practices, turning otherwise idle waiting periods into productive brainstorming sessions.
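The idea-honing step can be sketched as a loop that asks one clarifying question at a time and accumulates the answers until the model signals it has enough for a spec. This is only an illustration, not Reed's actual tooling: `ask_llm` and `answer_fn` are hypothetical callables standing in for the conversational model and the human answering it.

```python
def hone_idea(idea, ask_llm, answer_fn, max_rounds=20):
    """Refine an idea into a spec via single-question exchanges.

    ask_llm: hypothetical callable; given the transcript so far, returns
             either the next clarifying question or "SPEC: ..." when done.
    answer_fn: hypothetical callable that supplies the human's answer.
    """
    transcript = [f"Idea: {idea}"]
    for _ in range(max_rounds):
        reply = ask_llm("\n".join(transcript))
        if reply.startswith("SPEC:"):             # model has enough detail
            return reply[len("SPEC:"):].strip()   # developer-ready spec
        transcript.append(f"Q: {reply}")          # one question...
        transcript.append(f"A: {answer_fn(reply)}")  # ...one answer
    return "\n".join(transcript)                  # fall back to raw transcript
```

The loop structure mirrors the "ask me one question at a time" prompt style described in the article; the `SPEC:` sentinel is just one way to detect that the exchange has converged.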

Key Points:

  • Three-Step Workflow:
    • Idea Honing: Engage an LLM (e.g., ChatGPT) in iterative Q&A to build a clear, detailed specification.
    • Planning: Use reasoning models to create a step-by-step blueprint and break it into incremental, testable units.
    • Execution: Implement the plan using tools like Claude and Aider, ensuring smooth integration and regular testing.
  • Tool Recommendations:
    • ChatGPT for brainstorming,
    • Claude.ai for iterative code generation,
    • Aider for automating tests and debugging,
    • Repomix for efficiently managing codebase context, especially in legacy systems.
  • Documentation Practices: Reed suggests saving outputs as spec.md, prompt_plan.md, and todo.md to keep a clear, auditable record of progress.
  • Iterative Improvement: For legacy code, use targeted tasks like LLM:generate_missing_tests and LLM:generate_readme via repomix to incrementally improve and debug.
  • Real-World Productivity: Reed reflects on the increased coding throughput and the ability to “play cookie clicker” during LLM processing downtime—an approach that keeps the creative juices flowing.
  • Community Engagement: Despite acknowledging skepticism from some peers, he invites others to explore the potential of LLMs and even offers to collaborate, reinforcing that “the code must flow.”
Link to Article
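The planning and execution steps above can be sketched as a small driver that walks prompt_plan.md one step at a time and checks off todo.md as each step lands. This is a hypothetical sketch of the pattern, not Reed's code: `run_codegen_step` stands in for whatever tool (Claude, Aider, etc.) actually generates and tests the code, and steps are assumed to be separated by blank lines in the plan file.

```python
from pathlib import Path

def execute_plan(plan_path, todo_path, run_codegen_step):
    """Feed prompt_plan.md to a codegen tool step by step, updating todo.md.

    run_codegen_step: hypothetical callable; takes one prompt and is
    responsible for generating, integrating, and testing the code for it.
    """
    # Assumption for this sketch: steps are blank-line-separated blocks.
    steps = [s.strip() for s in Path(plan_path).read_text().split("\n\n") if s.strip()]
    done = []
    for i, step in enumerate(steps, 1):
        run_codegen_step(step)  # discrete loop: generate, integrate, test
        done.append(f"- [x] step {i}: {step.splitlines()[0]}")
        # Rewrite todo.md after every step so progress stays auditable.
        Path(todo_path).write_text("\n".join(done) + "\n")
    return done
```

Keeping spec.md and prompt_plan.md as plain files, and rewriting todo.md after every step, is what makes the run resumable and reviewable after the fact.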

Subscribe

Listen to jawbreaker.io using one of many popular podcasting apps or directories.

Apple Podcasts Spotify Overcast Pocket Casts Amazon Music