There’s a lot of noise right now around AI replacing experts.
Depending on who you ask, it’s either overhyped or underestimated. But talk to anyone who’s spent a decade or more in their domain (tax, law, medicine, marketing, software) and you’ll hear something surprisingly consistent:
“It’s about as good as a 2-3 year associate. Sounds confident. Often useful. But still needs checking.”
This may sound underwhelming, but here’s where it gets interesting.
Imagine this:
- A marketing expert trying to summarize the key clauses of a legal contract
- A tax consultant trying to extract dietary advice from a lipid profile
- A software engineer trying to write high-converting copy for a landing page
In the real world, each of these attempts would probably end in frustration or a half-baked result. Now throw a modern AI model into the mix. All three experts are suddenly able to get to a usable draft in a domain they don’t fully understand.
Not necessarily perfect or publish-ready, but enough to move forward. Enough to avoid context switching, and often enough to ship.
Where AI truly shines today
We often measure AI’s progress by comparing it to experts in specific fields, but that may not be the most useful lens. Its real leverage is showing up elsewhere.
Here’s what I think AI is actually good at today:
- Helping non-experts get unblocked in unfamiliar domains
- Turning a vague idea into a structured first draft
- Bridging the gap between “I don’t know how to do this” and “I know enough to move forward”
- Giving individuals and small teams the ability to operate across more surfaces
The most measurable outcome: cross-functional velocity. No need to wait for someone else’s bandwidth and you can get started, test, and learn faster.
What’s still missing
This is not to say AI is flawless or ready to replace expertise.
There’s a real gap between “good” and “great.” And while that difference may not always show up on model leaderboards, it matters deeply in real-world tasks:
- When compliance is non-negotiable
- When human trust is at stake
- When decisions are irreversible
- When taste and nuance define quality
These are the moments when real expertise still wins.
How I’m thinking about it
I don’t have to be the smartest expert in the room anymore. I’m more interested in asking the right questions, in exploring the range of possibilities, and in speed, which is where the big leverage is.
And for that, “good enough” AI is actually a winning tool.
So for now:
- I’ll use AI to go further outside my domain
- I’ll trust experts when it really counts
- And I’ll try not to overrate either
Curious to hear from others:
How does this look in your own field of expertise? Where does AI feel “good enough” and where does it still fall short?