The past week has seen a flurry of developer-authored content on both sides of this debate, offering a window into how the industry is adapting to — and sometimes struggling with — a new generation of AI coding assistants.
The Optimisers: Building the Perfect AI Workspace
Avinash Seethalam, a GenAI Practice Lead, has published a six-part tutorial series on DEV Community detailing how to construct a comprehensive terminal setup specifically designed to work alongside Claude Code, Anthropic's AI coding assistant.
The series covers an elaborate stack: iTerm2 as a terminal replacement, tmux for persistent multi-pane sessions, the Starship prompt, and a curated list of nine Claude Code plugins selected after extensive evaluation. Seethalam describes the guiding philosophy as friction reduction — the less effort developers spend on tooling, the more cognitive energy they can direct at actual problem-solving.
The plugin selection is notably deliberate. Seethalam warns against installing too many plugins, noting that each one injects instructions into a session's context window at startup. "A plugin with 38 agents and 156 skills adds a meaningful token overhead to every single session, whether you use those skills or not," he writes. On paid plans, he argues, a bloated plugin stack becomes "a recurring tax on every conversation."
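Some back-of-envelope arithmetic illustrates the tax Seethalam describes. The per-item token figures and usage level below are illustrative assumptions, not measurements from his series; only the 38-agent, 156-skill plugin size comes from his example:

```python
# Hypothetical estimate of the per-session context overhead from one
# large plugin. Token costs and session counts are assumed figures,
# not numbers from Seethalam's series.
AGENTS = 38
SKILLS = 156
TOKENS_PER_AGENT = 150   # assumed: tokens to describe one agent at startup
TOKENS_PER_SKILL = 80    # assumed: tokens to describe one skill at startup
SESSIONS_PER_MONTH = 100  # assumed usage level

overhead_per_session = AGENTS * TOKENS_PER_AGENT + SKILLS * TOKENS_PER_SKILL
monthly_overhead = overhead_per_session * SESSIONS_PER_MONTH

print(f"per-session overhead: {overhead_per_session:,} tokens")
print(f"monthly overhead:     {monthly_overhead:,} tokens")
```

Even under these modest assumptions, the overhead lands in the tens of thousands of tokens per session before a single question is asked — which is the shape of the "recurring tax" argument, whatever the exact figures.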
His curated stack of nine plugins includes tools for persistent memory across sessions, automated code review using parallel AI agents, and a plugin that strips filler language from responses to reduce output tokens by an estimated 65% on coding tasks. The final instalment of the series covers packaging the entire setup into a version-controlled dotfiles repository — ensuring the environment can be reproduced on a new machine in under an hour.
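The dotfiles approach the series ends on can be sketched as a small bootstrap script that symlinks tracked configs into the home directory. This is a generic illustration, not the actual script from the series, and the tracked file names are assumptions:

```python
# Minimal dotfiles bootstrap sketch: symlink tracked config files into
# the home directory. File names are illustrative examples only; a real
# repo would track its own set (and some tools read from ~/.config/).
from pathlib import Path

DOTFILES = ["tmux.conf", "zshrc", "gitconfig"]  # assumed examples

def link_dotfiles(repo: Path, home: Path) -> list[Path]:
    """Symlink each tracked file from the repo into home as a dotfile."""
    created = []
    for name in DOTFILES:
        target = home / f".{name}"
        if target.is_symlink() or target.exists():
            continue  # never clobber an existing config
        target.symlink_to(repo / name)
        created.append(target)
    return created
```

Running a script like this from a freshly cloned repo is what makes the "reproducible in under an hour" claim mechanical rather than heroic: the machine-specific state lives in one version-controlled place.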
The Cautionary Tale: When AI Becomes a Crutch
In sharp contrast, a separate post published the same day by Manas Kolaskar describes a six-month period of heavy AI dependence that he now views as professionally damaging.
Kolaskar, a student working on his final-year bachelor's project, describes delegating virtually every architectural decision to Claude — including database selection, real-time communication protocols, and system design. The code worked. The apps ran. But during a project presentation, his supervisor asked why he had chosen WebSockets over Server-Sent Events for a particular feature.
"I didn't know," Kolaskar writes. "Claude chose WebSockets. I just accepted it."
Following the experience, Kolaskar paused coding for two weeks and returned to fundamentals: system design principles, CAP theorem, caching strategies, and load balancing. He then redesigned his project from scratch — not because the AI-generated code was broken, but because he had come to understand why certain architectural choices were wrong for his specific context.
His conclusion draws a distinction that has resonated widely in developer communities: "AI is not your senior engineer. AI is your rubber duck that can type really fast."
A Shared Undercurrent
Despite their different orientations, both perspectives share a common thread: the value of intentionality. Seethalam's meticulous plugin selection is itself a form of critical evaluation — he explicitly rejects the impulse to install every available tool. Kolaskar's pivot involves not abandoning AI, but using it with defined scope: "I tell Claude: 'I'm using PostgreSQL because I need strong consistency for transactions. Help me optimise this specific query.' Not: 'Build me a social media app.'"
The debate reflects a broader transition period in software development, where the tools are maturing rapidly but professional norms around their use are still being established.