
Why Your Engineers Are Grieving and What Comes Next

AI adoption is often emotional before it becomes practical. Here’s how engineering teams move from fear to fluency, and how leaders can help.

AI adoption inside engineering teams is usually framed as a tooling decision.

In practice, it is often an identity transition.

That is why so many teams feel stuck. Leaders think they are introducing a productivity tool. Engineers experience it as a challenge to the way they have learned to measure their own value.

For years, the craft of software engineering has been tightly linked to writing code by hand: knowing the syntax, remembering the patterns, and turning abstract requirements into working systems. When AI starts doing part of that work, the response is not purely rational. It is emotional.

A useful way to understand that response is grief.

Not because engineers are overreacting, but because something real is changing. A familiar version of the role is fading, and teams need time to redefine what great engineering looks like on the other side.

The five stages teams often move through

The pattern is surprisingly consistent. Not every engineer moves at the same pace, but the progression is familiar.

1. Fear

The first reaction is often existential.

If AI can generate code, what exactly am I here to do? If the most visible part of my job becomes easier to automate, where does my value come from?

That fear is easy to dismiss from the outside, but it is real inside the team. Engineers are not just reacting to a new interface. They are reacting to the possibility that the job they mastered may no longer be the job that matters most.

2. Skepticism

Then comes the pushback.

Engineers test the tools, find obvious mistakes, and use those failures as proof that the whole category is overhyped. The output is shallow. The code is brittle. The suggestions need cleanup.

Often, those criticisms are correct.

But skepticism is not only technical judgment. It is also a defense mechanism. If the tool is unreliable, then it is safe to keep it at a distance.

3. Controlled experimentation

Eventually, most engineers start using AI in narrow, low-risk ways.

They use it for boilerplate, documentation drafts, test generation, or quick scaffolding. They keep the work fenced in. Useful, but limited.

This stage matters because it creates direct experience without requiring full trust. Teams start to see where the tools help, where they fail, and what kinds of review are still essential.

4. Acceptance

Over time, the tradeoff becomes harder to ignore.

Even when the outputs are imperfect, the time saved can be significant. A draft generated in seconds is often easier to refine than starting from a blank file. A rough explanation can be enough to unblock progress. A first pass on repetitive work can free up time for more important decisions.

This is the point where the conversation shifts.

The question stops being, “Is this good enough to replace engineers?” and becomes, “How should engineers use this well?”

5. Excitement

The final shift happens when AI stops feeling like a threat and starts feeling like leverage.

Engineers spend less energy on mechanical work and more on judgment. More time goes to architecture, product tradeoffs, system behavior, edge cases, and decision quality. The role becomes less about producing syntax and more about steering outcomes.

That is where the upside becomes obvious.

AI does not reduce the need for strong engineers. It changes what strong engineers spend their time on.

Why the grief metaphor helps leaders

The metaphor matters because it changes the response.

If you see resistance as stubbornness, you will push harder and create more friction. If you see it as a normal reaction to a shifting role, you lead differently.

You stop trying to win an argument and start helping people adapt.

That means a few practical things:

  • Don’t mock fear. Name it.
  • Don’t force enthusiasm. Build familiarity.
  • Don’t sell AI as magic. Show where it helps and where it still needs human judgment.
  • Don’t frame the transition as a referendum on who is still valuable.

People move faster when they feel safe enough to learn.

What engineering leaders should do now

The goal is not blind adoption. It is steady capability building.

Normalize the reaction

Some engineers will be energized immediately. Others will be wary. Both responses are normal.

Make that explicit. Teams handle change better when they know they are not failing some hidden test.

Start with bounded experiments

Pick work that is useful but low-risk:

  • generating tests
  • drafting internal docs
  • scaffolding repetitive code
  • exploring unfamiliar APIs
  • creating first-pass implementation plans

Small wins matter more than sweeping mandates.

Redefine what “good engineering” means

If your culture still rewards volume of handwritten code above all else, AI will feel threatening.

Update the definition.

Great engineers are the people who make strong decisions, design resilient systems, ask the right questions, and use tools effectively to deliver better outcomes. That was always true. AI just makes it more visible.

Teach review, not just prompting

The real skill is not getting an answer from a model. It is evaluating whether the answer is sound.

Teams need standards for verification, testing, security, architecture, and maintainability. AI accelerates output, but review is still where engineering judgment earns its keep.

Reward learning in public

The fastest way to spread useful habits is to make experimentation visible.

Encourage engineers to share what worked, what failed, and what kinds of tasks are worth handing to AI. Treat the learning process as part of the work, not as extracurricular exploration.

What comes next

The biggest mistake leaders can make is assuming this transition is only about efficiency.

It is also about professional identity.

Engineers who built their confidence around code production may need time to rebuild that confidence around system thinking, product judgment, and orchestration. That shift can feel uncomfortable before it feels empowering.

But teams that make it through the transition usually discover something important: the core of engineering value was never raw syntax production.

It was problem solving.

It was understanding systems.

It was making good decisions under uncertainty.

Those skills matter even more now.

The teams that thrive will not be the ones that deny the disruption. They will be the ones that move through it deliberately, build new habits early, and help their engineers see a bigger role on the other side.

AI may change how software gets built.

It does not eliminate the need for thoughtful engineers. It raises the bar for what thoughtful engineering looks like.