A key question is: “Where do machines end, and where does human work begin?”
Budgets are often tight, and the myth is spreading that AI can “do change” cheaply by drafting the communication messages, designing the change plan, and nudging people in the right direction. If we let that narrative stand, our role as change professionals gets framed as content production and orchestration, exactly what AI automates best. Our countermove is not defensiveness; it is clarity about what AI is good at and what only human judgment can do.
If we step back and evaluate “machine + human” integration, AI should amplify our practice as a copilot but never impersonate our judgment. In this arrangement, the future-relevant change professional designs conditions for adaptation, uses AI to increase learning velocity, and takes responsibility for trade-offs, trust, and irreversible decisions. That role is non-delegable.
Where AI fits (use it to complement our work)
- Drafting & synthesis: first-pass briefs, employee messaging, meeting notes, and option summaries.
- Signal processing: clustering feedback, spotting duplicates, and surfacing trend lines from sensemaking data.
- Option generation: brainstorming scenario lists, experiment menus, and risks, all to be screened by humans.
- Challenging & debiasing: pressure-testing assumptions, options, and plans.
- Workflow friction removal: templates, checklists, scheduling, meeting artifacts, and follow-up reminders.
Where AI does not fit (don’t outsource; do this ourselves)
- Purpose & guardrails: setting the non-negotiables and the “do no harm” guidelines.
- Trade-offs & equity: choosing who bears the costs and who receives the benefits; weighing fairness and long-term trust.
- Irreversible (“one-way door”) decisions: final calls on reorganizations, layoffs, policy shifts, and ethics breaches.
- Context & politics: understanding the organization’s history and power dynamics, and ensuring psychological safety for those involved.
- Accountability: owning the consequences when experiments affect real people.
In summary: if a decision touches values or personal dignity, or could cause irreversible harm, it is human-led, with AI as an input, never as the decider.
To get our discussion started –
- Where have you seen AI raise decision quality, and where did it tempt you toward false certainty?
- What’s a recent call where only human judgment could balance speed, fairness, and trust? How could that situation help you establish a guideline for where the line between AI and human judgment sits?
- What’s your minimum sign-off standard (who reviewed, what changed, and why) for AI-assisted artifacts?
- If you had to explain our non-automatable value in one sentence to a CFO, what would you say?
To participate in a working group and explore this topic further with others –