Leadership in the Age of AI
Mon | Apr 20, 2026 | 1:48 PM PDT

Last week, I posted an article about how AI makes us more efficient but actually makes us work more.

This week, I'm going to talk about how we as people leaders will need to evolve: managing both our people and the AI agents they oversee, as agents become commonplace at work.

For millennia, human leadership has been about delegating tasks to people and orchestrating them towards a unified goal. Keeping them on task, on time, and accurate. This is now what our employees will be doing with agents. It's like everyone gets a promotion.

In Simon Sinek's book Start with Why, he talks about leaders at the top needing to define the Why, managers below them determining the What, and the workers executing the How. We can use this in managing AI. The AI now takes care of the How, and humans start with the What. Leaders keep the Why, as well as defining intent, context, judgment, and taste: all the things we need to provide AI for it to be effective.

New skills will emerge, including managing non-humans, and managing humans who manage non-humans.

And our vocabulary will evolve. We have employees, but what are agents? They're doing the work that analysts, engineers, writers, and architects were doing before. They aren't just tools. So how do we categorize agents as a workforce?

Do we include them in the org charts now? What rights and responsibilities do they get? They produce outcomes and consume budget; we depend on their actions, and they depend on our direction. They can bring success or risk depending on what they do.

Leaders will need judgment on which tasks to automate with agents. Some tasks are too sensitive, too organizationally political, or such cultural staples that automating them isn't appropriate for the company. Tasks involving human interaction, or specific regulatory duties, might not be something to hand to agents, even if they can do them.

We need to develop guidelines for trustworthiness. Staff need to understand when output from AI is trustworthy, how to identify it, how to elicit more trustworthy output, and how to adjust when it isn't.

They need guidance on when and where to spot-check, and when to throw out output that isn't useful. We will need to balance under-trusting agents, which wastes their capability, against over-trusting them, which can cause unexpected outcomes.

We need to educate staff that their new robot assistants will likely drift from their intended task: to try to please them, to route around a roadblock or control, or because they found an unexpected yet more efficient method. Or, if not given guardrails, they may do something completely different that they assume you wanted. Welcome to the non-deterministic world.

This changes how we measure success for individuals, what we expect of performance, and which outcomes we gauge success on. We shift from managing output to managing judgment. Can you decompose a goal into agent-addressable sub-tasks, chain them, handle failure modes, and know when the whole approach is wrong?
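That decomposition question can be made concrete. Below is a minimal, hypothetical sketch, assuming each sub-task wraps a call to an agent; `SubTask`, `run_chain`, and the retry/abort thresholds are invented names for illustration, not any real framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubTask:
    name: str
    run: Callable[[], object]  # the wrapped agent call (stubbed here)
    max_retries: int = 1

def run_chain(subtasks, abort_after_failures=2):
    """Run sub-tasks in order, retrying each; abandon the whole
    approach once too many sub-tasks have failed outright."""
    failures = 0
    results = {}
    for task in subtasks:
        for _ in range(task.max_retries + 1):
            try:
                results[task.name] = task.run()
                break  # this sub-task succeeded
            except Exception:
                continue  # retry this sub-task
        else:
            failures += 1  # retries exhausted for this sub-task
            if failures >= abort_after_failures:
                # "The whole approach is wrong": stop rather than push on
                return results, "abort"
    return results, "ok" if failures == 0 else "partial"
```

The point is less the code than the discipline it encodes: "know when the whole approach is wrong" has to be an explicit, pre-agreed condition, not a feeling you have after the fact.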

We start to evaluate the decisions humans make instead of their deliverables. We won't ask what they accomplished this week; we'll ask what their agents did: Was there any drift? Did you do any re-alignment? Did they do anything that surprised you? What trends and patterns are you seeing? What threshold of deviation from intended behavior or outcome triggers a scope change or decommissioning?

The performance review shifts from "did you ship it?" to "did you make the right call about what to delegate, what to verify, and what to escalate?"

Did we define a threshold for alerting, so we can inject human judgment? Do we have the telemetry to know when we approach it? Did we write a pre-defined consequence ladder? These need to be documented in the governance and operational model, and in the operational context for the agents (e.g., skills files).
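A consequence ladder can be as simple as a table mapping measured deviation to a pre-agreed action. This is a hypothetical sketch: the thresholds, action names, and the idea of a single 0-to-1 deviation metric are all illustrative assumptions, stand-ins for whatever your telemetry actually emits (e.g., the fraction of outputs rejected in spot-checks):

```python
# Illustrative thresholds only; agree on real ones at project start.
CONSEQUENCE_LADDER = [
    (0.05, "log-only"),      # small drift: record it
    (0.15, "alert-human"),   # inject human judgment
    (0.30, "reduce-scope"),  # narrow the agent's remit
    (1.00, "decommission"),  # pull the agent entirely
]

def consequence_for(deviation: float) -> str:
    """Map a measured deviation (0.0-1.0) to a pre-agreed action."""
    for threshold, action in CONSEQUENCE_LADDER:
        if deviation <= threshold:
            return action
    return "decommission"
```

Writing the ladder down before launch is what makes it governance; deciding it after an incident is just improvisation.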

Accountability is a huge discussion—and the area I get asked about the most. The items mentioned above, and others, need to be defined at the beginning of a project, not decided when something goes wrong.

I've used an analogy a lot the past year: if your dog bites someone, it's not the dog's fault, it's yours. You have stewardship of the animal, where you are both responsible for its actions and for its care and wellbeing. Agents are the same.

Tags: Leadership, AI