Back-office use case primer

Remember that infamous MIT report showing that 95% of internal AI initiatives fail? One interesting observation it made: companies chase flashy AI projects in marketing and neglect the much more promising opportunities in the “back office”. That’s a shame, because there is plenty of low-hanging fruit ripe for automation. It doesn’t even have to be the latest and fanciest agentic AI (on the blockchain and in VR, maybe? Just kidding).

So, how would you know if your company has a prime AI use case lurking in the back office? Here’s my handy checklist. More details below.

  • It’s a tedious job that nobody would do if they had a choice

  • It has to be done by a skilled worker who’d otherwise have more productive things to do (“I’d rather they do product research, but someone has to categorize these contract items”)

  • Checking that the job was done properly is easier than doing the job itself

Let’s dig in.

Don’t automate the fun stuff

I mean, if your business could make lots of extra money by automating away the fun stuff, by all means, go for it. But this rule is really a quick trick to rule out use cases that are unlikely to work well with today’s AI. Chances are, the fun stuff is fun because it involves the elements that make us feel alive, the opposite of tedious grunt work. And the reason we feel alive doing these sorts of jobs is that they draw on our whole humanity, which an AI system can’t match. This rule is intentionally a bit vague and not meant to be followed to the letter, but as a first pass when evaluating different use cases, it can work surprisingly well.

Look for tasks with a skill mismatch

Any job that has to be done by a skilled worker who doesn’t need their full expertise while doing it is a good candidate for an AI use case. The stakes are high enough that automation will pay off, yet the task itself plays to the strengths of today’s AI. It’s probably easier, for example, to automate away a doctor’s administrative overhead than to build an AI that correctly diagnoses an illness and prescribes the right treatment.

Avoid the review trap

I talked about this in an earlier post: For some tasks, checking that they were done correctly is just as much work as doing them in the first place. It’s much more productive to focus on tasks where a quick check by a human can confirm whether the AI did it right. Bonus points if any mistakes are easily fixed manually.

Conclusion

With those three points, you’ll have a good chance of building an AI tool that’s effective at its task. More importantly, your team will welcome having the bulk of that work handled for them: they just need to kick off the tool and give the end result a quick final check, instead of wading through the whole task themselves.

If that sounds like something you want for your company, let’s talk.
