AI Intensifies Work. Why?
For this edition of Marketing for You, we’re taking a more investigative look at a question that’s been surfacing repeatedly in recent weeks.
A Harvard Business Review article that’s been making the rounds — “AI Doesn’t Reduce Work—It Intensifies It” by Aruna Ranganathan and Xingqi Maggie Ye — gave us a strong reason to examine what’s actually happening inside organisations.
Their core takeaway is blunt: AI can increase work because it introduces additional layers around the work — coordination, checking, revising, and managing AI-assisted outputs — rather than simply removing tasks.
If you want to read the article first, it’s here.
Why does it happen?
If you’ve spent time around transformations, this pattern is familiar.
New capability arrives. The organisation keeps the old operating model. So the capability gets inserted into existing steps — and the steps multiply.
This is where the “less workload” promise breaks down in real life: AI rarely replaces a workflow by default. It usually lands inside one.
That creates predictable friction:
- You still have the original task, plus prompt management.
- You still need sign-off, plus verification of AI output.
- You still need alignment, plus more iteration cycles.
- You still carry risk, plus a new category of risk (quality, hallucinations, data exposure, compliance).
In change management terms, this is the difference between tool adoption and operating model change. Most teams are doing the first — and expecting results that only the second can deliver.
So that brings us to the core point: AI intensifies work when it’s layered onto an unchanged workflow.
What tends to reduce work is less glamorous, but more reliable: rebuilding the workflow from zero — clarifying outcomes, decision ownership, standards, and what gets removed — and only then integrating AI where it replaces real steps.
Choosing tools when the market is crowded
Another constraint the HBR piece implicitly points to: even if you agree with the diagnosis, execution is hard.
There are hundreds of AI vendors now. Many demos look convincing. Many tools overlap. So selection isn’t about “best AI.” It’s about fit and operability.
A practical filter we use (and recommend) is to ask:
- Where exactly does this tool sit in the workflow?
- Which step does it eliminate — not just accelerate?
- Who owns the output, and who validates it?
- What is the failure mode — and what’s the cost when it’s wrong?
- Can we measure impact (time, cost, error rate) within 2–3 weeks?
- What changes in roles, approvals, and standards are required for adoption?
If a tool can’t answer these questions clearly, it’s usually not ready for the workflow you’re running — no matter how impressive the model is.
Want us to help you make this practical?
If you’d like to go deeper, our Bulbul AI Integration Pack is built to do the unglamorous work that makes AI actually work: redesigning workflows, selecting tools that fit, and integrating them without creating operational weight.
Want to explore it? Click here.
👉️ If you want additional context from what we observed on the ground — where the conversation is shifting from “possibility” to “deployment” — our World AI Cannes Festival recap is here.