Change Is Inevitable. Growth Is Intentional, Not Optional Anymore

In the middle of the AI revolution in software development, you can find millions of articles, posts, and videos claiming that AI is replacing everything, that developers must adapt, each surfacing its own use cases and references. This post belongs to that same conversation, but it is grounded in what actually happened with my development team and the organizations I work with.


I have been building software professionally for about fifteen years. I am not here to sell you a tool, a course, or a framework. I am here to tell you what I have seen in the last few years—on real client projects, inside real companies, with real budgets and real politics—because the internet is full of hot takes and very short on honest field reports.

If you are new, you might feel pressure to “know AI” overnight. You might feel late. You are not late. You are arriving exactly when the ground is still moving, which is uncomfortable but also an opportunity. What follows is how I have been experimenting, where I got pushback, and what I would want someone to tell me if I were sitting in your chair today.


From Copilot to a full toolbox

When you are junior, every new tool feels like a verdict on your future. If I do not master X immediately, I will fall behind. After a decade and a half, I can tell you something steadier: the pattern repeats. Something ships, it is weak, people dismiss it, it improves, the narrative flips, another thing ships. Your job is not to chase every release. Your job is to develop judgment: where does this actually save time, where does it create risk, and where does it change power dynamics on the team?

I remember that around mid-2022 we started adopting GitHub Copilot at work. When it first launched, it could not do much—it might write a few lines of code. I mostly used it to draft test cases. That was already valuable: tests are repetitive, and getting a first draft meant I could focus on edge cases and naming. Gradually it began to suggest implementation code, which I would always verify and align with business logic. That habit—verify, do not blindly trust—is the single most important thing I can pass on. The tool does not own the outcome; the responsibility is ours.

Since then I have tried most of the familiar editor- and terminal-based coding assistants. I started with Windsurf, which was strong when it came out. Then Cursor—which was not great at the beginning but improved rapidly. Kiro arrived; after a few attempts I was frustrated enough to uninstall it. None of that makes me “right” about Kiro forever; it means I sampled, I set a bar, I moved on.

I also used Claude Code. At the time it had no direct code editor or GUI; everything went through the terminal. The results were often better than other tools, but I am someone who cares deeply about UI and UX—not only in products we ship, but in how I work. I want to see the code as it is produced. I do not want to parse terminal logs to understand what changed. That preference is not vanity; it is auditability. When you are junior, getting comfortable reading diffs and understanding every line someone (or something) added is how you level up. I pushed toward Cursor as my primary environment for that reason. I still use Claude Code for experiments when the workflow fits.

Why does any of this matter? Even when my day job does not require it, I stay curious about the latest tools so I can apply them whenever the need appears.


Ralph-Wiggum loops and BMAD-style teams: concepts worth understanding

Along the way I followed trending techniques. Two in particular stuck with me: the Ralph-Wiggum method and the BMAD method. They showed up months apart, from different people, and they represent two different intuitions about how to use agents—iteration versus role specialization.

Ralph-Wiggum method

Instead of asking an AI to complete a task once and hoping for perfection, you put the agent in a loop—often a simple shell script—that keeps running until the code passes all tests. The system fails, reads the failure, adjusts, and retries until it succeeds (or until you hit a sane stopping point).
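The loop itself is tiny. Here is a minimal Python sketch of the idea; in practice it is often literally a shell script around an agent CLI, so treat `run_tests` and `agent_fix` as stand-ins for whatever test command and assistant you actually use:

```python
from typing import Callable, Tuple

def ralph_wiggum_loop(
    run_tests: Callable[[], Tuple[bool, str]],
    agent_fix: Callable[[str], None],
    max_attempts: int = 10,
) -> bool:
    """Keep handing failures back to the agent until the suite is green,
    or until we hit a sane stopping point."""
    for _attempt in range(max_attempts):
        passed, failure_log = run_tests()
        if passed:
            return True  # green: stop looping
        agent_fix(failure_log)  # the agent reads the failure, adjusts, we retry
    return False  # stopping point reached without a green run
```

In a real setup, `run_tests` would shell out to something like `pytest` or `npm test` and capture the output, and `agent_fix` would invoke your assistant with the failure log as context. The `max_attempts` cap is the "sane stopping point" from above: without it, a confused agent can burn tokens forever.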

BMAD method

This approach mimics a full agile development team using many specialized AI agents—think Product Manager, Architect, Developer, QA, Scrum Master; often on the order of twelve to twenty-one roles—collaborating on a project.
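To make the role split concrete, here is a hedged Python sketch of a BMAD-style pipeline. The role names, prompt templates, and the `call_llm` callable are all illustrative assumptions, not the actual BMAD implementation; the essential idea is simply that each "agent" is a role-scoped prompt whose output becomes the next role's input:

```python
from typing import Callable, Dict

# Illustrative subset of roles; a full BMAD-style setup defines many more.
ROLES = [
    ("Product Manager", "Turn this idea into a product brief: {input}"),
    ("Architect", "Design a system for this brief: {input}"),
    ("Developer", "Implement the design as code: {input}"),
    ("QA", "Write a test plan for this implementation: {input}"),
]

def run_bmad(idea: str, call_llm: Callable[[str, str], str]) -> Dict[str, str]:
    """Run each role in sequence; each role consumes the previous artifact."""
    artifacts: Dict[str, str] = {}
    current = idea
    for role, template in ROLES:
        current = call_llm(role, template.format(input=current))
        artifacts[role] = current  # this role's output feeds the next role
    return artifacts
```

Real frameworks add review loops, shared context, and human checkpoints between roles, but the chained hand-off above is the core intuition that distinguishes BMAD-style teams from a single do-everything prompt.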

I wanted to try both in earnest.


Local models, cost, and OpenCode + Ollama

I was concerned about cost if I ran Ralph-Wiggum-style iteration only through cloud assistants, so I looked for alternatives and discovered OpenCode and Ollama. Ollama lets me download and run models locally; I paired it with OpenCode and started with a small messaging system application, then expanded into a machine learning experiment I wrote about in an earlier post.

A WhatsApp-like system on AWS: proof of concept

I applied Ralph-Wiggum iteration together with a BMAD-style agent setup. I defined agents such as Product Manager, Analyst, Architect, Developer, QA, Scrum Master, and UX Designer.

The goal was ambitious: design something like a WhatsApp-style messaging system using AWS.

  1. Analyst — scope and requirements
  2. Architect — system design and plans
  3. Scrum Master — detailed stories and scope
  4. UX designer — layouts (not perfect, but usable)
  5. Developer — implementation with Ralph-Wiggum-style iteration for accuracy

I did not finish the project to a releasable state. I lost interest in polishing it further and never published the code. Even so, I verified that both ideas—iterative agent loops and multi-agent “team” workflows—can work in practice.

When innovation meets organizational fear

I spoke with leadership at my company about using a BMAD-like approach on client work. The reaction was defensive. A full agile-style agentic team at near-zero marginal cost can look like a threat to a traditional IT services model—fewer bodies billed, shorter timelines, uncomfortable questions about value. I was effectively asked to stay quiet about it.

In parallel I pushed many experiments with Cursor at work. Some landed immediately; others sit in backlogs. What I could change under my own control moved quickly. The rest often stalled on budget—and on the mindset of people who control priorities.

The pitch I kept making was not about removing humans from decisions, but about reducing repetitive human effort in problem-solving. Framing matters. If people hear “replace me,” they shut down. If they hear “remove toil so we can focus on judgment,” some will listen.

I do not blame anyone without context; people have histories—layoffs, reorganizations, bad projects—that shape how they react.


The two cats: incentives you will see in the wild

There is an old story that captures part of what I have seen:

Two cats were hired to catch rats.

The white cat killed every rat. The problem was solved. The owner sold the cat.

The black cat brought one rat every few days. The owner kept feeding it.

I am not calling anyone a cat. I am pointing at incentives. When job security feels scarce, some rational people optimize for ongoing indispensability rather than permanent fixes. That can show up as resistance to documentation, resistance to automation, or enthusiasm only for innovations that do not reduce reliance on a specific person or team.

Some of the technically strong people I work with on the client side—especially in the years after COVID and repeated layoffs—are survivors. They may welcome innovation that does not make the system or the organization less dependent on them. I will not detail specific incidents; some of those people might read this. The pattern, though, is something you should recognize without becoming cynical: not every “no” to a tool is about technology.


Hackathon contrast: two teams, two trajectories

While I was trying to introduce AI-assisted workflows, someone on the client side invited me to help with an internal hackathon focused on agents on Databricks. I am not a data engineer, data architect, or ML specialist. I joined anyway to apply generative AI to an automation problem that had consumed manual effort for more than eight years. I wanted to see what was possible and what peers would propose.

I did not win. For two months afterward I followed the winners and runners-up to see what they actually shipped—not the slide deck, the work.

The winning proposal used machine learning on historical product data, Databricks, and AI agents to suggest lower-cost materials and improve margins. That was months ago. I watched their GitHub and Confluence activity, assuming they were building production software straight from the hackathon story.

About ten days before I wrote this, it became clear they were on a different track: a project with HR, using years of employee and project data. In plain terms, the application asks:

Do we have enough people for the work we have committed to—and if not, what do we cut or defer?

That is not a toy. That is capacity and commitment intelligence sitting next to people’s careers and project portfolios.

My takeaway for you:

  • My team had as much raw opportunity as theirs in that hackathon. Talent was not the differentiator in the room; engagement and follow-through were.
  • The people caught in the “black cat” dynamic on my client side did not engage seriously with the same possibility.
  • The other team did more than win a contest—they are building an AI/ML application that can influence planning and, indirectly, the future of teams like my client’s.

Hackathons are not just prizes. They are signals about which groups are allowed to experiment after the pizza is gone. Watch who gets budget afterward.


Why Ralph-Wiggum and BMAD still matter: plans, skills, agents, rules, commands

Here is where the story loops back to your day-to-day.

Cursor and Claude Code have shipped their own abstractions: plans, skills, agents, rules, commands, and similar features. Our client engineering manager noticed industry momentum and asked us to explore and implement them. Because I had already been following Ralph-Wiggum-style iteration and BMAD-style role splits, I could propose something concrete instead of something abstract.

An example to make this concrete:

Traditionally, every new developer reads separate documentation—style guides, developer guides, dos and don’ts—from day one through day thirty. Some teams enforce it in review; some teams hope for the best. With Cursor on the team, we encoded the same expectations into plans, skills, agents, rules, and commands, so the tooling preloads our standards. A new teammate does not have to memorize the wiki before being productive; the tooling nudges toward consistency.

Concrete nudges matter more than vague “write clean code” posters:

  • Which button colors and actions belong in our UI surfaces
  • Line-count or complexity limits for components, methods, or HOCs
  • Which functional and automation scenarios must be covered
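As a sketch, a project rule covering conventions like these might look like the following. The file path, frontmatter fields, and the specific limits are illustrative assumptions; check the current Cursor documentation for the exact format your version expects. Something like `.cursor/rules/ui-conventions.mdc`:

```markdown
---
description: UI and component conventions for this repo
globs: ["src/components/**/*.tsx"]
alwaysApply: false
---

- Primary actions use the `primary` button variant; destructive actions use `danger`.
- Keep components and hooks under 150 lines; extract children or helpers beyond that.
- Every user-facing flow needs an automated end-to-end scenario before merge.
```

Because the rule is scoped by glob, it loads only when someone touches matching files, which keeps the assistant's context focused instead of flooding every session with the whole style guide.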

Senior perspective: this is not about replacing judgment. It is about lowering the activation energy for doing the right thing. The best teams I have seen spend less energy debating settled questions and more energy on product and edge cases.

My company leadership once tried to keep certain approaches from the client’s attention. Reality caught up; you cannot hide something like that for long. In the same way, client-side team members cannot indefinitely block innovation that the market and the tools are making obvious.


Closing

I remember a quote from John C. Maxwell: “Change is inevitable. Growth is optional.” But even that has changed.

Change is inevitable. But growth is now intentional, not merely optional. That much is guaranteed.

We can let circumstances push us around, or we can choose to grow through them—embracing difficulty, asking hard questions, and refusing to stay stuck.

Take on the challenges. Make growth the goal. Bring others along when you can. The choice is ours.

Change Is Inevitable. Growth Is Intentional, Not Optional Anymore | Ganesan Karuppaiya