Welcome to another episode where we break down the latest trends in software development! Today, we’re diving into Martin Fowler's latest memo on the role of developer skills in agentic coding. As generative AI tools like Cursor, Windsurf, and Cline become more powerful, some claim developers will soon be obsolete. But Fowler argues that while these tools can speed up certain tasks, they still require careful oversight from experienced developers to avoid costly mistakes. He categorizes AI missteps into three impact levels—issues that slow down individual coding, disrupt team workflows, or create long-term maintainability problems. His conclusion? AI coding assistants are helpful, but they still need human guidance. Let’s get into the key points!
Key Takeaways
AI Coding Assistants: Impressive but Far from Autonomous
- AI tools can run tests, fix linting issues, conduct web research, and even preview changes in real-time.
- Despite their capabilities, they frequently make mistakes that require developer intervention.
- In Fowler’s experience, AI is genuinely helpful about 80% of the time, but he always has to review and adjust its output.
Three Levels of AI Missteps
1️⃣ Slowing Down Development (Time to Commit Delays)
- AI sometimes produces non-working code, requiring developers to step in and fix issues (see the sketch after this list).
- It can misdiagnose problems, such as blaming a Docker build error on architecture settings when the real culprit was an incorrect `node_modules` folder.
- Sometimes, the AI goes down useless rabbit holes, misapplying fixes or making unnecessary changes.
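To make that first failure mode concrete, here is a hypothetical TypeScript sketch (all names are invented for illustration, not taken from Fowler's memo) of the kind of plausible-looking but broken code an assistant can hand you, alongside the fix a reviewing developer has to make:

```typescript
// A hypothetical assistant draft (names invented for illustration):
//
//   const prices = ids.map(async (id) => fetchPrice(id));
//   return prices.reduce((sum, p) => sum + p, 0);
//
// It looks plausible, but map with an async callback yields Promise<number>[],
// so the reduce sums promises instead of prices. The reviewed fix awaits first:
async function totalPrice(ids: string[]): Promise<number> {
  const prices = await Promise.all(ids.map((id) => fetchPrice(id)));
  return prices.reduce((sum, price) => sum + price, 0);
}

// Stand-in so the sketch is self-contained; a real version would call an API.
async function fetchPrice(id: string): Promise<number> {
  return id.length;
}
```

Bugs like this are exactly why the review step never goes away: the draft compiles in loosely typed code and only fails at runtime.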
2️⃣ Disrupting Team Workflow
- AI tends to over-engineer solutions, addressing too many components at once rather than progressing incrementally.
- It often applies brute-force fixes instead of diagnosing root causes, deferring problems rather than solving them (a sketch follows this list).
- AI-generated changes can complicate the developer experience, introducing unnecessary workflow modifications and making debugging harder.
- Without careful prompting, AI frequently misunderstands requirements and needs extensive correction.
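As a hedged illustration of the brute-force pattern, consider this invented TypeScript scenario (the cache and profile names are assumptions, not from the memo): the assistant retries around a flaky read, while the actual defect is a missing await on the write.

```typescript
interface Profile { id: string; name: string }

// Invented async key-value store so the sketch is self-contained (think Redis).
const store = new Map<string, Profile>();
const cache = {
  get: async (id: string) => store.get(id),
  set: async (id: string, profile: Profile) => { store.set(id, profile); },
};

// Brute-force fix an assistant might propose: retry until the symptom goes away.
// The race is still there; it is just hidden behind extra attempts and delays.
async function loadProfileWithRetries(id: string): Promise<Profile> {
  for (let attempt = 1; attempt <= 5; attempt++) {
    const profile = await cache.get(id);
    if (profile) return profile;
    await new Promise((resolve) => setTimeout(resolve, 100 * attempt));
  }
  throw new Error(`profile ${id} still missing after retries`);
}

// Root-cause fix a reviewer would push for: the writer never awaited the store
// write, so readers raced it. Awaiting the write removes the flakiness outright.
async function saveProfile(profile: Profile): Promise<void> {
  await cache.set(profile.id, profile); // the unreviewed draft dropped this await
}
```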
3️⃣ Compromising Long-Term Maintainability
- AI-generated tests can be redundant or overly verbose, leading to fragile test suites (a before-and-after sketch follows this list).
- It often fails to reuse existing code, duplicating components instead.
- AI outputs can be overly complex or bloated, requiring manual cleanup.
- Example: AI-generated CSS changes often contain large amounts of redundant styles that need to be stripped back.
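Here is a hypothetical before-and-after in TypeScript (the function and test cases are invented for illustration) showing how a verbose AI-generated suite can be trimmed to one case per distinct behavior:

```typescript
import { describe, expect, it } from "vitest"; // Vitest shown; Jest exposes these as globals

// Invented function under test, just to make the sketch self-contained.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// The verbose style assistants often generate: many near-identical cases that
// all exercise the same branch, making the suite noisy and fragile to change.
describe("formatPrice (AI draft)", () => {
  it("formats 100", () => expect(formatPrice(100)).toBe("$1.00"));
  it("formats 200", () => expect(formatPrice(200)).toBe("$2.00"));
  it("formats 300", () => expect(formatPrice(300)).toBe("$3.00"));
  it("formats 400", () => expect(formatPrice(400)).toBe("$4.00"));
});

// A trimmed version that keeps one case per behavior that actually differs.
describe("formatPrice", () => {
  it("formats whole dollars", () => expect(formatPrice(200)).toBe("$2.00"));
  it("keeps sub-dollar cents", () => expect(formatPrice(5)).toBe("$0.05"));
  it("formats zero", () => expect(formatPrice(0)).toBe("$0.00"));
});
```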
Best Practices for Using AI Coding Assistants
For Individual Developers:
- Always review AI-generated code—an unexamined AI contribution is a ticking time bomb.
- Know when to stop AI sessions if the output becomes too confusing or incorrect.
- Avoid “good enough” solutions that may introduce long-term maintenance costs.
- Pair programming helps—having a human reviewer alongside AI makes a big difference.
For Teams and Organizations:
- Use code quality monitoring tools like SonarQube or CodeScene to detect AI-related pitfalls.
- Leverage pre-commit hooks and automated code reviews to catch errors early (see the sketch after this list).
- Document AI-related mistakes in a “Go-Wrong” journal and review them regularly.
- Set up custom AI rules and prompts to guide the coding assistant's behavior.
- Foster a culture of trust—teams pressured to deliver faster due to AI are at greater risk of quality issues.
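As one possible shape for that pre-commit hook, here is a minimal TypeScript sketch; the `lint` and `test` npm scripts are assumptions about the project's setup, and wiring the script into Git (for example with Husky) is left to the team.

```typescript
// pre-commit.ts - a minimal sketch of an automated gate for AI-assisted commits.
// Assumes the project defines "lint" and "test" npm scripts; run it from Git's
// pre-commit hook, for example via Husky or a script in .git/hooks/pre-commit.
import { execSync } from "node:child_process";

const checks = ["npm run lint", "npm test"];

for (const command of checks) {
  try {
    console.log(`pre-commit: running ${command}`);
    execSync(command, { stdio: "inherit" });
  } catch {
    // A non-zero exit from any check blocks the commit, so AI-introduced
    // lint violations or test failures are caught before they reach the team.
    console.error(`pre-commit: "${command}" failed; aborting commit.`);
    process.exit(1);
  }
}
```

Because the script exits non-zero on the first failing check, Git aborts the commit, catching AI-introduced breakage before it lands in the shared branch.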
Final Thoughts
Fowler’s conclusion? AI won’t replace developers anytime soon. While it can assist with 90% of coding tasks for some teams, true expertise is still necessary to steer, correct, and refine. AI coding assistants speed up repetitive work, but human thinking—especially critical thinking—is still irreplaceable.
That’s it for today! If you enjoyed this breakdown, make sure to subscribe and share this episode. See you next time! 🚀