
Clemens Adolphs

Claude Code’s Profanity Scanner

The internet has had a field day since Anthropic accidentally leaked the source code to its coding agent, Claude Code. One bit that stood out: to detect negative user sentiment, Claude Code runs messages through a simple filter that looks for profanity ("wtf", "crap", and the like).

Folks on social media were quick to point out the irony that a tool that's supposed to be intelligent relies on simple text-matching logic to detect user sentiment. Why don't they just run the user input through their own large language model to detect whether the user is frustrated?

The answer is simple: Because they're not trying to collect perfect data to use in a peer-reviewed sociological study. They're just trying to roughly and cheaply track overall sentiment. In that context, trading off accuracy for efficiency is the right build decision. The simple text match runs near instantaneously. Sending the whole thing to an LLM is massive overkill.
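For illustration, a filter like that can be a few lines of ordinary code. This is a hypothetical sketch, not Anthropic's actual implementation; the word list and function name are made up:

```python
import re

# Illustrative word list; the real one in Claude Code's leaked source differs.
FRUSTRATION_MARKERS = {"wtf", "crap", "ugh", "argh"}

# Word-boundary regex so e.g. "crap" doesn't match inside "scrapbook".
_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in FRUSTRATION_MARKERS) + r")\b",
    re.IGNORECASE,
)

def seems_frustrated(message: str) -> bool:
    """Cheap sentiment check: flag messages containing known frustration words."""
    return _PATTERN.search(message) is not None
```

A call like `seems_frustrated("WTF is going on?!")` returns `True` in microseconds, with no network round-trip and no tokens spent, which is exactly the trade-off being made.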

Lesson for builders and founders: Don't get obsessed with finding "the best" solution according to some metric (like accuracy). Find the best solution for the given context. In this case, simple and cheap was way better than complex and expensive.


What Even Counts as MVP?

Between POC (Proof of Concept) and MVP (Minimum Viable Product), the latter is more often misunderstood. Let's find out why and what we can do about it.

The misunderstood idea of an MVP is: "Some crappy version of the product you're aiming for, cranked out quickly so we can put it out into the world to see what happens." While this definition contains kernels of truth, it also misses the mark:

Minimum Viable For What?

You can't look at something in isolation and tell whether that's an MVP or not. Without knowing what business goal you're shooting for, it's impossible to say whether what you have is both minimal and viable.

  • Is the goal to get early-stage venture capital?

  • Is the goal to land a paid pilot project with a single client (who'd become a strategic partner)?

  • Is the goal to launch to the public and immediately land paying customers?

  • Or, even earlier, is the goal to suss out whether there's even demand?

What counts as minimal and viable for those goals varies wildly. It can go from a vibe-coded user interface mock-up (with no real underlying functionality) all the way to something that's fully decked out with cloud hosting, user accounts, and credit card payments.

The same product could be vastly overbuilt (viable, but not minimal) for one business goal and woefully under-built (minimal, but not viable) for another.

So what?

Isn't this all just semantic nitpicking? Well, consider a non-technical founder with lots of domain expertise and an idea for a great product. They engage a software development team and ask them to build an MVP, without ever clarifying what their next business milestone would be. There's real risk that the dev shop takes the running, vague definition of MVP and builds that "crappy version of the product". But without the business context, it's not clear what can be cut without sacrificing viability and what absolutely needs to be there from day one. The product ends up in a weird state where it's overbuilt in one dimension and underbuilt in another (see the illustration).

Illustration of an MVP (hits the right level of values in dimensions x and y), underbuilt (too small in each dimension), overbuilt (too large in each dimension) and simultaneously over- and underbuilt product (too large in one dimension, too small in the other).

The one takeaway: The same thing can be an MVP or not, depending on the business context. Therefore, without that context, it's likely that the wrong thing will be built, the wrong way. The question that cuts through this: "Minimum viable for what?"


Do You Need a POC?

I've been talking to a few folks recently, and it seems there's still a lot of confusion around terminology when it comes to "that very early initial version of a product." So let me give my personal explanation. Today: the Proof of Concept (POC).

POC: Can It Be Done?

A proof of concept has one job: determine whether the core piece of your idea is feasible. To do this at the lowest cost, it strips away everything else. For a concrete idea, let's say you want to make an app that lets people virtually try on clothes. The core piece in that example is the image manipulation AI that would take an outfit and apply it to someone's image (or live video from their phone's camera).

Assuming we don't know for sure that this can even be done, a POC would not concern itself with building a mobile app, integrating with the store's product catalogue, having cloud architecture ready to handle real customer data, etc.

Instead, it'd be as simple as having an app running locally, with static input data, figuring out if that one crucial piece can be done.

The outcome of a POC is not a product that you can ship to users. It's an answer to the question: does it work at all, and if so, how? Its job is to de-risk your overall investment of time and money.

When Do You Need One?

Not every idea needs a proof of concept. If the biggest question isn't whether it can be done but whether it should be done, you should start with validating the market (again as cheaply as possible). So, what are some signs that your idea might benefit from an initial POC?

  • If it relies on new technology where the lack of maturity means nobody has a clear idea of where exactly its limitations are

  • If, at a technical level, nothing like this has been done before

  • If it involves using AI or machine learning beyond standard techniques on standard datasets

Why not just jump straight to coding?

I like to eliminate the biggest risk first. In situations where I advise starting with a POC, the biggest risk is not knowing whether the core piece is feasible. So any work that's not directed toward addressing that risk is potentially wasted:

  • Either it turns out the idea doesn't work, in which case the unrelated work was 100% wasted

  • Or the idea is feasible, but it took us longer to figure out the what and the how because our focus was diluted

(If this sounds like it goes counter to the advice to not delay integration for too long, i.e., not build everything in silos to only connect the pieces at the end, that's a topic for another email.)

For now: Figure out what the biggest risk with your product idea is, and seriously consider the simplest possible proof of concept before spending time and money on anything else.


Complaints Are Not Enough Signal

Say you want to make a software product to sell. You do the smart thing and don't rush into building. Instead, you look around the internet to see what sort of things people complain about. Maybe you even use one of those AI market research tools that check popular discussion forums, such as Reddit, to see what real people are complaining about.

All well and good (and better than the alternative of building without any validation). But: Complaints are not enough of a signal. You can't assume that someone who complains about X will look for and purchase a solution to X.

The great book "The Mom Test" observes that market validation conversations often go like this:

Complainer: "I really wish there was a tool that did X."
Interviewer: "Have you looked for a tool that does X?"
Complainer: "No, why?"

🤷‍♂️

Startup advice has this trope: Don't sell vitamins; sell painkillers. But many people would rather put up with a vague, dull ache than seek medical advice. Part of customer interviews and market validation needs to answer the question: Does their desire for a solution to problem X stop at the complaining phase, or are they actively looking for solutions?


But Does It Move The Needle?

Had this conversation with a friend recently: Where he works, everyone, from the developers to the product managers, is using AI, and he complained that his mind is fried at the end of the day, from all the rapid context switching that ensues.

Then I asked: Okay, so you're all really embracing AI. That's great. But does it move the needle?

He knew exactly what I meant and was very direct in his answer: Nope. It doesn't. Not yet, anyway. Why is that? Because their product relies very much on human connection in the sales process, and that part of the organization hasn't quite caught up yet to their increased pace. It's one thing to produce more features at a rapid pace. It's another thing for those features to translate into real additional sales.

And there's a real possibility that it will never translate into more sales, because every competitor will also have AI accelerating their feature delivery. Running just to stay in place, basically.

The point here is: maybe they could take their foot off the gas a bit; clearly, at their current stage, developer speed is not the bottleneck in their value stream. It's not worth frying your team's brains making something faster that doesn't move the needle. Find the real bottleneck, solve that, and keep your sanity.


Pseudo-work

Today I came across a social media post poking fun at the folks setting up their complex AI note-taking and note-analyzing workflows. Hook up Email to Obsidian to OpenClawd, that sort of thing. The post asked: are you really so important and busy that you can't go through your own emails and need a super-complex agent setup to do that for you?

It reminded me of Cal Newport's "pseudo-productivity" idea, where visible activity is mistaken for productivity even though real outcomes are lacking. In a similar vein, setting up a Rube Goldberg machine is fun. It feels like you're productive. You're building something real, maybe even putting some decent engineering effort into it. But at the end of the day, the effort is misplaced:

  • For simple tasks, pseudo-work is overkill. It's procrastination disguised as building stuff.

  • For hard tasks, pseudo-work distracts. Hard tasks stay hard.

It comes back to identifying the true bottlenecks in any value stream. Sure, along that stream there might be steps that are more annoying than they need to be, but the real effort should be spent on the bottleneck, not on whatever can be automated in the most complicated way.


YAGNI vs Planning Ahead

YAGNI is a well-known acronym in software engineering. It stands for: "You Ain't Gonna Need It!"

It cautions against overbuilding and overengineering something for an anticipated future that may never arrive. The most concrete example: Your app does not need to be able to handle millions of concurrent users until it turns into the next TikTok, so don't worry about all the crazy complex engineering that would let your app handle millions of users. If you are on the path to becoming the next TikTok, you'll find out soon enough, and then you'll have ample time to ramp up the engineering.

Yet on the other hand, there is value in planning ahead. It's so easy to maneuver yourself into a dead end or suffer unnecessary delays from overlooking simple preventive steps. Deferring some decisions until the need arises is fine. Hitting roadblocks due to failing to anticipate expectations is poor judgment. A concrete example here: If you want to build software for large enterprises, you need to sort out your security certifications, and you don't want to close your eyes to that fact until the moment a potential client walks away over your lack of SOC2 compliance.

So, how to resolve this? When do we plan ahead, and when is YAGNI spot on?

Here's a simple reality check I picked up from a decluttering tip. Specifically, what to do with all those items that you keep around "because you might need them one day." The tip asks to distinguish between two different scenarios:

  • Scenario 1: If life goes according to plan, I will need this in the foreseeable future. Think winter clothes for your kid that are just a bit too big. You'll need them next year.

  • Scenario 2: There's a distant dream, a hypothetical scenario, in which I might need this. Think of supplies for a hobby you'd like your kid to pick up one day.

For scenario-1 items, you don't need them right now, but you need them soon enough that actively planning for them makes good sense. For scenario-2 items, YAGNI applies.

In the end, it's about making smart tradeoffs that preserve your optionality, and that means having a bias toward things you need right now, followed by things you will most definitely need soon, and not chasing things you probably won't need for a very long time.


Turning Ugh Into Wheee

Here's an underappreciated aspect of using AI for (certain) tasks: it can turn dreadful drudgery into something fun. Never mind concrete calculations of your return on investment, or time spent. If it makes it more delightful to do some parts of your job, it's a win in my book.

Concrete example from my work: yet another migration and rejigging of code on a client project, where we had to remove one dependency in favour of another. Before AI, this would have involved lots of easy but also easy-to-get-wrong steps, paired with a constant worry of having missed something. With Claude Code, I decided on the phases, had Claude come up with a plan, and then executed it. And because my mind wasn't focused on the nitty-gritty, I actually enjoyed the experience and reaped the rewards.

And because of that experience, I see myself more inclined to embark on other "well, someone has to do it eventually" tasks that, before, weren't particularly exciting precisely because the ratio of high-level smart thinking to low-level execution was out of whack. But if the AI takes care of the low-level execution, it's back to pure fun.

So when you're thinking about where you might want to try AI in your life, look for areas with that out-of-whack fun/drudgery ratio. Could they be made to be pure fun?


The AI Business Case Scorecard

I promised a wrap of our mini-series on making AI work for you (and your business), and I think here's a neat way to do it: an interactive scorecard that rates your AI initiative's merits on a number of dimensions.

It's likely a bit rough around the edges, and I very much invite your feedback.

Take a look at the quiz here

And if you missed the series, find the first post here


Will Anyone Use It? (Make AI Work - Part 5)

Last part of the mini-series. We've covered problem selection, business impact, organizational readiness, technical fit, and scope. All the "hard" stuff. Today: the human side, which is where plenty of otherwise solid AI initiatives quietly go to die.

How Do People Feel About This?

You can have the perfect problem, the right data, a well-scoped pilot, and a clear owner. If the people who are supposed to use the thing don't want it, none of that matters.

There's a spectrum here. On the good end: people are actively requesting or championing the initiative. They feel the pain of the current process and they're eager for help. You'll know this when you see it, because they'll be the ones asking you when the solution is ready.

On the bad end: the affected teams don't even know it's happening. Surprise AI rollouts rarely go well. People who weren't consulted about a change to their workflow tend to resist it, even if the change is objectively beneficial. Especially if it's being done to them rather than with them.

The middle ground, "mixed or skeptical reactions," is actually workable. Skepticism can be healthy. It means people are paying attention and thinking critically. The question is whether the skepticism comes from informed concern ("I don't think AI can handle the edge cases in our process") or from fear ("Are they trying to replace me?"). The first kind is useful. The second kind needs to be addressed head-on.

The Framing Trap

How is AI being talked about in your organization? This matters more than you might think.

If the message, stated or implied, is "AI will help us do more with fewer people," you're going to have a rough time getting buy-in from the very people whose expertise you need to make the initiative work. Remember the earlier email about who owns the initiative: you need senior operators who live the problem every day. If they think they're building the tool that replaces them, they're not going to give you their best ideas.

"AI as a productivity tool" is better, but still a bit vague. The framing that works best in practice: AI frees people for higher-value work. Not because it's a clever spin, but because, when done right, that's what actually happens. The specialist who used to spend half their day on manual data entry can now spend that time on the analysis work they were actually trained for. That's not a threat. That's a promotion in disguise.

Getting the framing right isn't about marketing. It's about telling the truth in a way that makes people want to be part of the change rather than fight it.

AI Theatre

I wrote about this one back in June, but it bears repeating in this context. There's a specific risk where the pressure to "show AI adoption" creates initiatives that look impressive in a demo but never deliver production value. Press-release-driven development. Flashy proofs of concept that were never meant to survive contact with real users and real data.

If the main motivation for your AI initiative is that someone important wants to see AI on a slide deck, that's a problem. Not because visibility is bad, but because optics as the primary driver leads to optimizing for the wrong thing. You end up with something that demos well at the quarterly review but doesn't actually help anyone do their job.

The antidote is simple: focus on outcomes. Does the thing get the job done? Does it save time, reduce errors, free up capacity? If yes, the demos will take care of themselves. If no, no amount of polish will save it.

So, that's a wrap for the mini-series. Stay tuned for a recap next week.


Can AI Even Do This? (Make AI Work - Part 4)

Part 4. We've covered picking the right problem, making sure it's worth solving, and checking that your organization is ready. Today: can AI actually do the thing you want it to do, and can you scope it so you find out fast?

The Right Task for the Right Tool

Not everything that sounds like an AI use case is one. And not everything that is one is equally suited.

Some tasks are a natural fit: processing, extracting, or summarizing information. Repetitive decisions with clear (if complex) rules. Tedious manual work following known patterns. These are AI's sweet spots, because they involve pattern recognition over large volumes, which is exactly what the technology is good at.

Other tasks are a stretch: complex judgment calls that even experts struggle with, or creative and strategic decisions where the "right answer" depends on context that's hard to capture. AI can sometimes assist here, but it's a different game. You're no longer automating; you're augmenting. And augmentation is trickier to get right, because the human-AI handoff becomes the design challenge.

A useful litmus test: Is someone skilled currently spending significant time on work that's below their capability? A senior engineer reviewing every document. A specialist manually checking compliance on every transaction. A doctor doing paperwork instead of seeing patients. If the answer is yes, you've likely found a place where AI can free experts to do expert-level work. That's where the leverage is.

The Verification Question

Here's one that doesn't show up on enough checklists: How easy is it to verify if the AI's output is correct?

This matters more than most people think. If verification takes the same effort as doing the task manually, you haven't saved any time. You've just shifted the work from "doing" to "checking," and you've added a new failure mode: rubber-stamping AI output because checking it is tedious.

The ideal: verification is trivially easy. A quick spot-check, a glance at a dashboard, a simple comparison. The further you get from that, the more carefully you need to think about whether AI actually helps or whether it just creates a more convoluted version of the same workload.

I wrote about this before in the context of AI coding: letting AI generate things where reviewing them takes as long as an expert would need to create them saves no time. It just leads to exasperated experts. The same principle applies to any AI use case.

Start Small or Don't Start

You've heard me say this before and I'll say it again. If you can't identify a small, self-contained first version of your AI initiative, that's a red flag.

"All or nothing" projects are where budgets go to die. The beauty of starting small is that you learn fast and cheaply. Can you scope a meaningful pilot that covers just one part of the process, one document type, one team, one region? If not, ask yourself why. Often the reason is that the problem is poorly understood, which brings us right back to the first email in this series.

A related question: Could you see value from solving just 20% of the problem? If partial progress is meaningless, you're looking at a very risky bet. If even a partial solution would be meaningful, because you've identified the 20% that delivers 80% of the value, you've got something workable.

And finally, consider the blast radius. If this project fails, what breaks? If the answer is "critical operations across many teams," you probably want to pick a different starting point. Start with something isolated, something you can roll back without drama. Build confidence and evidence before you tackle the high-stakes stuff.

The best AI projects I've seen didn't start with a grand vision. They started with a single painful task and proved that it could be done better. Everything else grew from there.


Are You Ready? (Make AI Work - Part 3)

Part 3 of the mini-series. We've covered picking the right problem and making sure it's worth solving. Today: the unglamorous stuff that determines whether your AI initiative has a foundation to stand on.

Tribal Knowledge Is Not a Foundation

Here's the scenario. You've identified a process you want to improve with AI. You know it's important, you know where the freed-up time would go, you've got metrics in mind. Great. Now, quick question: how does that process actually work?

"Oh, ask Linda. She's been doing it for twelve years."

That's tribal knowledge. It lives in Linda's head, maybe in a few emails, a half-finished wiki page from 2019, and a spreadsheet that only makes sense if you squint. This isn't unique to AI. Any attempt to improve a process that isn't documented is going to struggle. You can't improve what you can't describe.

The good news: you don't need perfect process maps. "Reasonably documented with known variations" is a great place to be. You're not producing a 200-page operations manual before you start. You're getting enough clarity to point at specific steps and say, "This is the part we want to improve, and here's what the inputs and outputs look like."

If your process documentation is severely outdated or nonexistent, that's a valuable discovery. Fix it first. Documenting what people actually do (as opposed to what a process diagram from five years ago says they do) will surface inefficiencies you didn't know existed. Sometimes the best outcome of an AI exploration is that you fixed the process before the AI was even involved.

The Data Question

I've written about the five tiers of data readiness before, from paper documents all the way up to structured databases. Where your data sits on that spectrum tells you a lot about how much work lies between you and a working AI solution.

But format isn't the whole story. The more fundamental questions:

  • Does the data exist at all?

  • Can you get to it, or is it locked in a system that doesn't talk to anything?

  • Is it reasonably clean, or full of inconsistencies, duplicates, and gaps?

"Data exists but is scattered across systems" is a surprisingly common answer, and not a deal-breaker. It means there's integration work ahead, and you should budget for it. What is a deal-breaker is pretending the data situation is better than it is. I've seen AI projects kick off with a cheerful "we have tons of data!" only to discover, three months in, that most of it is unusable.

Be honest about where you stand. If the data needs cleanup, fine. If it doesn't exist yet, fine too, but that changes the scope and timeline significantly.

Who Owns This Thing?

Every successful AI initiative I've seen has one thing in common: clear ownership by someone who understands the business problem, not the technology.

"IT owns it" sounds reasonable, and IT needs to be involved. But if the initiative is driven purely by the technical team, without a business stakeholder who feels the pain of the current process and can make decisions about trade-offs, you end up building technically impressive solutions to problems nobody has.

The strongest setup: a business stakeholder who owns the problem and the outcomes, with technical support for implementation. Even better if there's executive sponsorship to remove roadblocks. There will be roadblocks.

No clear owner means no one to make the tough calls when priorities conflict. And they will. "Should we optimize for accuracy or speed?" "Should we handle the edge cases now or later?" "This integration is harder than expected; do we simplify or push through?" These aren't technical decisions. They're business decisions that need someone with context and authority.

If your AI initiative is an orphan bouncing between departments with no single person accountable for its success, that's your most important problem to solve. Fix that, and everything else follows.


Is It Worth It? (Make AI Work - Part 2)

Part 2 of the mini-series on making AI work for your business. Last time, we covered problem selection: pick the right problem, define what a solution looks like, and avoid vague mandates. Today: once you have a problem, how do you know it's worth solving?

The Cost of Doing Nothing

Here's a question that doesn't get asked enough: What happens if this problem remains unsolved for another year or two?

If the answer is "minor inconvenience, we'd adapt," that's a signal. Not necessarily to stop, but to be honest about priorities. Plenty of AI projects get greenlit because they sound impressive, not because the underlying problem is urgent. And urgency matters. The organization's willingness to push through the inevitable rough patches of an AI initiative is directly proportional to how much the problem actually hurts.

On the other end, if the problem is blocking a strategic priority, you already know that. The tricky cases are in the middle: ongoing operational friction that's become so normal, nobody questions it anymore. "That's just how things work here." These are the problems worth digging into. They've been quietly draining resources for so long that people have stopped noticing.

One signal to look for: Has anything been tried before to solve this? If not, that tells you a lot about urgency. If yes, great. What was tried, and why did it fail?

Measuring Success (Before You Build Anything)

I've written before about how even soft outcomes can be measured. If something bothers your organization enough to warrant action, it must create observable consequences. Higher turnover, more complaints, slower delivery, missed deadlines. Find the observable effect and measure that.

The worst time to figure out your success metrics is after you've built the thing. "Did it work?" shouldn't be a philosophical question. Agree on what the numbers need to look like before you start, and a lot of the ambiguity around AI ROI disappears.

No metrics, no baseline, no business case.

Where Does the Freed-Up Time Go?

This is the one that trips people up the most. Say your AI initiative succeeds beyond your wildest dreams. It saves your team ten hours a week. Fantastic. Now what?

If nobody has a good answer, you don't have a value story. You have a cost story. And cost stories are weak, because the math never looks as impressive as you'd like. I've shown this before: if someone earns $100k and you free up 25% of their time, the naive calculation says that's worth $25k. But if that person generates $200k in economic value at full capacity, the freed-up time is worth $50k. You're unlocking their ability to do the high-value work they were hired for.
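The arithmetic from that example, spelled out as a quick sketch (the numbers are the illustrative ones from the paragraph above):

```python
salary = 100_000            # what the person costs per year
value_at_capacity = 200_000 # economic value they generate at full capacity
freed = 0.25                # fraction of their time the AI frees up

naive_saving = freed * salary              # cost story: $25,000
unlocked_value = freed * value_at_capacity # value story: $50,000
```

The gap between the two numbers is the difference between a cost story and a value story, and it only materializes if the freed-up time flows into that higher-value work.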

That only works if there is high-value work waiting for them. "General productivity improvements" is code for "we haven't thought about it." Specific, identified, revenue-generating activities that are currently neglected because everyone's drowning in busywork? That's a real answer.

And don't forget the Theory of Constraints angle. Freeing up time in a part of the workflow that isn't the bottleneck doesn't speed up the overall system. It gives someone more idle time. Make sure the problem you're solving sits on or near the constraint.

Before you invest in making things faster, know where the time goes and whether the system can absorb it.


Seeing Results with AI: Start Here

Welcome to part 1 in this (maybe) mini-series, inspired by last week's post on lacklustre experiences with AI. Over the next few posts, we'll dig into what it takes to make AI work for you, or to verify, with confidence, that it doesn't.

Problem Selection

This is the very first step where things go off the rails. The most common ways:

  • Picking no problem. A vague blanket mandate to "do AI" from the board, the CEO, or plain fear of missing out.

  • Picking a vague problem. You need to be able to explain what success looks like. Not necessarily with hard numbers; qualitative goals are fine. As long as you can articulate a difference between the status quo and the desired end state, we've got something to work with.

  • Picking the wrong problem.

That last one deserves unpacking. What makes something "the wrong problem"? If we assume you need to find something to do with AI, then a wrong problem is one where AI can't help. But even among things AI can do, some are pointless:

  • In a workflow with a bottleneck (and they all have one!), speeding up anything other than the bottlenecked part is pointless, AI or not.

  • Letting AI generate things where reviewing them takes as long as it would take an expert to create them saves no time. It just leads to exasperated experts.

  • And don't forget: maybe the process you're looking to automate shouldn't exist at all.

(If the main bottleneck can't be fixed with AI, that's fine. It just means you fix that one first before looking for AI solutions elsewhere.)

Where to Look for Good Problems

A few keywords to get your creative juices flowing. Chances are, you already have a good intuition for which parts of your organization fit these. If not, a value stream mapping exercise can surface them more rigorously.

  • Repetitive, frequent, high volume.

  • Requires skilled workers but not all their mental faculties. ("Senior engineer must review every document...")

  • Narrow context. The input itself, some internal documentation, maybe a few specified sources; that's enough to perform the task.

  • Clear downstream impact. Speeding up the task, or freeing the people who do it for higher-value work, has a demonstrable positive effect on the business.

Beyond existing workflows, dig deeper. In any scenario involving the intake and processing of information, imagine what you could do if AI handled much higher volumes. Any task that involves "scouting," scanning sources and surfacing relevant finds for human follow-up, can benefit massively from AI doing the legwork at scale.
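A minimal sketch of that "scouting" shape: a cheap scoring pass over many items surfaces a short list for human follow-up. The keyword-count scorer and the sample data here are stand-ins; in practice, the scoring step is where an LLM or embedding model would do the legwork.

```python
# Crude relevance score: how often do the watch-keywords appear in an item?
# (A stand-in for whatever model would actually do the scoring.)
def score(item, keywords):
    text = item.lower()
    return sum(text.count(k) for k in keywords)

# Rank all items, surface only the top few for a human to follow up on.
def scout(items, keywords, top_n=3):
    ranked = sorted(items, key=lambda it: score(it, keywords), reverse=True)
    return ranked[:top_n]

sources = [
    "Quarterly report mentions supply chain risk twice: risk, risk.",
    "Unrelated press release about a company picnic.",
    "Analyst note flags emerging regulatory risk in two markets.",
]
print(scout(sources, ["risk", "regulatory"], top_n=2))
```

The value isn't in the scoring logic itself but in the pattern: the machine scans everything, the human only sees what's worth their time.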

Armed with these pointers, you can come up with AI use cases that hit harder than "I dunno, maybe write emails faster and summarize them?"

Read More
Clemens Adolphs

“I Tried AI And I Didn’t Like It”

Fair.

But did you…

  • use the paid/pro version, or just the free one?

  • provide enough context to the model?

  • pick the right task to try it out on?

Despite the hype, AI is neither a silver bullet nor a magic wand, and it certainly isn't a mind reader. It has real power and promise, but only if it's wielded well.

Now, for personal fun-based exploration, it's fine to putter around a bit. Try out Sora, Suno, Nano Banana. Have fun with it, but don't overthink it.

But if you want to bring AI into your organization, a quick spot check by a voluntold individual isn't going to cut it: "Hey Sarah, have a quick check whether Gemini is good for making marketing copy."

"Well, boss, I asked it to make marketing copy and it's bland, generic, doesn't have our voice and—worst of all—uses the em-dash where a comma would do just fine."

No, if you're serious about exploring if and how AI can be useful to your business, you need a thorough and honest evaluation.

I plan to share an in-depth breakdown of what's required here in the next couple of days. Stay tuned!

Read More
Clemens Adolphs

AI the Unblocker, AI the Gatekeeper

For the next issue in my "how do you use AI at work" series, I spoke with a skilled technician who installs and maintains a specific type of high-end scientific equipment. Two examples came up, one for a great use case, one for an antipattern.

The good: AI that helps experts unblock themselves

The company for which my friend works has a ton of documentation on lots of rare and specialized equipment. But finding just what you need, in the moment, while installing or repairing such a piece, is hard. Here, the company has built an AI-enabled search system. Now, instead of sifting through countless scanned PDFs, questions can be answered right in a chat interface.

The bad: AI that sits between you and what you need to get ahead

For particularly tricky cases, where the technician in the field might be stumped, or requires approval to order a particularly expensive part, the company maintains a panel of experts. In the past, a technician would reach out to them directly (through a ticketing system).

These days, the company has added an extra step in between: Before the request gets routed to the expert panel, an AI reads the ticket and comes back with "helpful" suggestions. You can see where this is going: According to my friend, in more than 90% of the cases, the suggestion is something the technician has already tried. They've exhausted the obvious fixes and don't need an AI asking whether they've considered turning the thing off and on again.

I can guess at the company's rationale here: protect the valuable expert time by handling routine requests before they reach them. But that's looking for a technological solution to a cultural problem, especially when, in the vast majority of cases, the requests still need to be routed to the expert panel. I bet that a much cheaper solution would be to identify the few individuals sending the most "frivolous" requests and invest in upskilling and educating them.

Not everything needs an AI, and some things are made actively worse by it. Make sure yours is an enabler, not a blocker.

Read More
Clemens Adolphs

AI Native

Remember when "cloud native" was the hot term and nobody could quite pin down what it meant? The same thing is happening with "AI native." It's one of those phrases that's easy to let slip by and unconsciously substitute with something vague: "something something AI." But precise language matters, especially when you're deciding what to build.

The cloud era offers a useful parallel. There's a clear distinction between software that can run on the cloud and software that takes full advantage of its abstractions. One is cloud-enabled. The other is cloud-native. Databases make a good example. You can grab PostgreSQL and install it on your own computer. Presto, a database. Or you can rent a server in the cloud and install it there. But it's still a single piece of software running on a single machine. In contrast, sign up for AWS Aurora and all the server stuff is abstracted away. That's the cloud-native version.

For AI, the distinction is more subtle than the vast architectural choices of cloud services. It comes down to where AI shows up in the product, how deeply it shapes the workflow, and how essential it is to delivering value. The clearest sign a product is AI-native: without the AI, it doesn't just become less useful. It stops making any sense at all.

Take Todoist. I love the AI features in the task manager, but the core workflow of creating and managing tasks doesn't need them. Remove the AI and the product still works fine.

Decidedly AI-native in an obvious (and therefore uninteresting) way are apps that are a user interface on top of a model. ChatGPT makes no sense without the underlying AI, but it doesn't add any sophisticated orchestration to it either.

Where it gets interesting: AI-native products that bring workflow orchestration and thoughtful user experience design to the table. Think Claude Code. It's far more than an interface to the underlying model. But without that model, it's nothing.

One is not inherently better than the other. For certain tasks, users prefer a mostly traditional workflow with the occasional AI assist. For others, they'll love the hands-off way an AI-native product performs work on their behalf.

What matters: pick the one that's right for your problem and apply it intentionally. Don't build an AI-native solution where AI-enabled would do fine. And don't slap a chatbot into your app and call it AI-native.

Read More
Clemens Adolphs

Warren Buffett’s Textile Woes

There's a story related by legendary investors Warren Buffett and Charlie Munger. At a time when they owned a textile company, an inventor approached them with a loom that could do twice as much work as the old one. Buffett replied that he hoped it didn't work. Because if it did, he'd have to close the mill.

Sounds strange, but it's an important lesson in microeconomics. The problem with Buffett's textile business was that it was a commodity business. Buyers of textiles really don't care which mill it comes from. If your textile mill buys machinery that cuts your costs in half, every competitor will do the same, and all the alleged savings will get competed away and passed on to the customers. All that heavy capital investment for zero improvement in margins. Great for the customers, terrible for the business owners.

If you are in a structurally bad business, i.e., one where you're selling an undifferentiated product into a price-competitive market with low barriers to entry, innovation is bad for you. You have to buy it just to stay in the game, but you can never use it to get ahead.

Given that innovation is all around us, what does that mean for your business? It means all the promised efficiency gains will evaporate and get competed away if they are not serving a differentiated business, where you sell something people want from you, and there are at least some barriers to entry. Sound strategy is becoming more important than ever: What is the unique value your company brings into the world? If you have an answer to that, you can profitably bring all sorts of innovation (including AI) to bear.

Innovation without differentiation is just a more expensive way to break even.

Read More
Clemens Adolphs

Forklift To The Gym

Writer Cal Newport made this analogy a while ago: Using AI to do your thinking for you is like taking a forklift to the gym.

It doesn't matter that a forklift is better than us at lifting heavy things. The entire point of lifting heavy things is to make us stronger. It's true that when we lift things in a work setting, we absolutely should use machines for help. The problem is just that we're losing something important in the process. Prior to the rise of the office job, nobody needed to designate special time to exercise. The job was the exercise. Then, in the course of the 20th century, doctors noticed that a lot of people were dying of heart attacks and realized that it was due to the new sedentary lifestyle. So now, people have to make time to run, bike, swim, or lift.

Will something similar happen with cognitive work? If we allow AI to do all the thinking for us—write that email, summarize that request, implement that bug fix—do we need to schedule gym time for the brain so it remains functional? And given how we're experts at skipping our workouts when we don't feel like doing them, will the same happen with our brain workouts?

That's one reason I don't write this newsletter with AI. It would mean skipping an important brain workout that helps me stay focused and in the thick of things. How easy it would have been to prompt Claude: "Hey, I need another newsletter article. Make it about taking a forklift to the gym. No em-dashes!"

And then I wouldn't have internalized what I'm trying to express.

Assistance is fine. Grammarly catches (most of the) funny mistakes that happen when I slice and dice and rearrange the sentences. Spell-check is great, too. Applied to other cognitive areas, let AI handle the mundane by all means, just make sure to stay in control of the high-level cognition.

Read More
Clemens Adolphs

Nice Camera!

Have a friend who's into photography and want to make them mad? Next time they show you a great picture, tell them: "Wow, you must have a nice camera!"

They'll be quick to point out, rightly so, that taking great pictures requires tremendous skill in addition to the gear. After all, is the camera selecting the subject for you? Does it tell you what composition and framing would work well here? Does it tell you whether this shot would be better made with a deep or shallow depth of field?

Technical progress raises the floor in certain regards. Better sensors mean photos are less grainy when shot in low light, for example. Great, but more often than not it's the pros who will really take advantage of such advances.

So that's my thinking about AI's impact on jobs: At the low end, it will raise what's possible at the consumer level. Throwaway apps, throwaway songs, throwaway images will be cheaper to produce, to the point that it even makes sense to make them in the first place. But wherever it counts, you'll still want to hand it to the professionals.

After all, you don't let your cousin with an iPhone shoot your wedding.

(Funny enough, my friend Alex Jukes touched on the same topic, using photography as an example, in his last post, just as I was drafting this one. Check out his newsletter!)

Read More