Blaming AI for mistakes in your work? You're on the wrong side of the great divide

Here’s a simple rule of thumb: if I blame AI for mistakes in my work, I’m on the wrong side of a new great divide. So is the CEO who feeds ChatGPT the company’s latest dilemma and forwards the output “as is,” perhaps without even reading it.

The number of managers who don’t use AI at all is shrinking. Some of them have simply migrated into a new category: uncritical users of AI. The New Great Divide is not between people who use AI and those who don’t. It’s between those who understand the limits of LLMs—and those who don’t. Uncritical users tend to share a few traits:

1. They don’t have to implement what’s being suggested.

2. They are not the end users of the service.

3. They see the main output of their work as strategy, or “high-level thinking.”

As a result, they miss some basic realities:

1. No, the model did not actually read your attached spreadsheet.

2. Yes, it produced numbers—but they may have nothing to do with your data.

3. Yes, vibe coding can create an interface that looks right, but it may not be doing what you think it’s doing in the background (like pulling emails).

4. Yes, the output is neatly ordered—but much of it may not reflect your priorities or your constraints.

5. No, you can’t expect an AI to “find the right answers in your data.” It often doesn’t even know what the real question is.

People who understand what LLMs can and cannot do are sceptical by default. They test. They verify. They check basic claims to see whether the output is helpful—or actively misleading.

A simple example: they ask the model to retrieve a single datum from a chart.

Because they understand the technology’s limits, these users are far better at making it work. They know that simply asking an LLM to generate a solution is not enough.

This mindset aligns perfectly with what hands-on middle and junior managers have done for years: define tasks clearly, make them actionable, track execution, give feedback, iterate. Editors excel at this. So do product managers.

It aligns less well with the habits of thinkers, strategists, or managers who see their role as handing the work off to others.

The implication is clear: some managers will need to change how they operate. That’s entirely possible—and may not even be hard.

A few simple rules help:

1. Read before sending 😊

2. Run basic sanity checks.

3. Ask yourself: If I had to follow up on this, what would I actually do?

If you don’t have a clear answer, scrap it. If you do, send that—not the raw LLM output.

** This article was first published on LinkedIn.