Your Code Isn't Your Moat. Here's What Is.
In a world where AI can replicate most code in weeks, the only durable advantages are the ones you can't clone.
Rich Mironov has been writing about product management for longer than most AI startups have existed. His latest argument is one that should keep every CTO up at night: code-based advantages are evaporating. AI can clone your feature set in weeks. Not a rough copy. A functional replica.
If you've been building software for any meaningful amount of time, you've probably felt this. That uneasy hum in the background. You ship something that took your team six months. Two weeks later, a competitor has something that looks suspiciously similar. Or worse, a solo developer with Claude and a free weekend has rebuilt 80% of it.
Mironov's diagnosis is blunt. The things you thought were your competitive advantage (the code, the features, the technical implementation) are rapidly becoming the easiest things to replicate. AI doesn't just lower the barrier to entry. It essentially removes it for anything that can be described in a spec.
So what's left?
The Three Things AI Can't Clone
Mironov identifies structural moats. Things that take years to build, can't be shortcut with a language model, and get stronger the longer you have them. I've been thinking about this through the lens of what we build at Robot Friends, and his framework maps almost perfectly to what I've seen in practice.
Proprietary data. Not data you scraped. Not data you bought from a vendor. Data that only exists because of how you operate. Customer interaction patterns. Workflow decisions accumulated over thousands of sessions. Training data that reflects your specific domain, your specific edge cases, your specific failure modes. This is the data that teaches an AI what "good" looks like for your particular context.
The distinction matters. Public data is table stakes. Everyone has access to the same internet, the same open datasets, the same benchmark corpora. But the data generated inside your operation? The feedback loops, the corrections, the edge cases that only surface after months of real-world usage? That's the stuff no competitor can replicate by throwing compute at the problem.
Trust and community. This one is deceptively simple. Relationships. Reputation. The accumulated goodwill that comes from showing up consistently, delivering, and not screwing people over for years. You cannot LLM your way into trust. An AI can generate a perfect cold email. It can write a blog post that sounds authoritative. It can even simulate empathy in a support interaction. What it can't do is replace the fact that you've been someone's trusted partner for four years and they call you first when something breaks.
Community is the same story but at scale. A Discord server with 10,000 engaged members didn't happen because of good marketing. It happened because someone built something people cared about, showed up every day, responded to feedback, and made people feel like they belonged. Try replicating that with an agent. You'll get a ghost town with great onboarding copy.
Network effects. The classic moat that's actually gotten stronger in the AI era. Every user makes the product better for every other user. Every node in the network increases the value of all other nodes. AI can clone your product. It can't clone your network. Slack's value isn't in the chat interface. It's in the fact that everyone you work with is already there. Same principle applies to data networks, marketplace effects, protocol adoption. The more people use it, the harder it is to leave, and the harder it is for a clone to compete even if the clone is technically superior.
These aren't new ideas. But Mironov's contribution is pointing out that AI has made every other type of moat essentially temporary. Brand? AI can generate brand assets in minutes. Features? Weeks to replicate. Code quality? The models are already writing code that passes senior engineer review. What remains is structural. The things that require time, relationships, and accumulated context.
Where This Gets Personal
I read Mironov's argument and felt something click. Because we've been living this at Robot Friends without having the clean framework to describe it.
We've built 175+ skills. That number keeps coming up in these posts, and I know it sounds like bragging. It's not. The number matters because of what it represents.
On the surface, a skill is a methodology file. It tells an AI how to approach a specific type of task. You could read one of our skills, understand the structure, and write your own version in an afternoon. Any decent developer with access to Claude could probably recreate the format. The code isn't the moat.
But here's what they can't recreate: the judgment encoded in those skills.
Skill number 47 has a specific section about when to abandon a CRO audit and pivot to a full site rebuild instead. That section exists because we ran 23 audits and found that about a third of them were wasted effort on sites that needed to be rebuilt from scratch. We burned those hours. We learned the pattern. We encoded it.
Skill number 112 has an unusual ordering for its deployment checklist that doesn't match any standard DevOps playbook. The ordering exists because we got burned by a Vercel billing surprise on a client project and restructured the entire deployment flow around cost verification before any other step. That was an expensive afternoon.
Skill number 89 routes to a specific specialist agent when it detects a certain pattern in client intake data. That routing logic came from six months of noticing that a particular type of client request almost always meant something different from what the client was actually saying. The skill doesn't just process the request. It interprets the subtext based on hard-won pattern recognition.
None of that judgment is in the code. The code is the container. The judgment is the contents. And the judgment only exists because we did the work, made the mistakes, and decided what to encode from the wreckage.
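To make the container-versus-contents distinction concrete, here's a toy sketch in Python of what routing logic like skill 89's might look like. Every name, phrase, and threshold here is hypothetical (Robot Friends' actual skills aren't public); the point is that the structure is trivial and the table is not.

```python
# Toy illustration: the *structure* of a skill's routing logic is trivial
# to copy. The values that make it work (which phrases count as signals,
# and what they actually mean) only come from months of real engagements.

# Hypothetical: intake phrases that historically signaled a different
# underlying need than the one stated. Learned from experience, not guessed.
SUBTEXT_SIGNALS = {
    "just a quick redesign": "full_rebuild_specialist",
    "our last agency didn't get it": "discovery_specialist",
    "we already know what we want": "requirements_audit_specialist",
}

DEFAULT_ROUTE = "general_intake_agent"

def route_intake(request_text: str) -> str:
    """Route a client intake request to a specialist agent.

    Anyone can write this function in five minutes. The moat is the
    SUBTEXT_SIGNALS table: each entry encodes a pattern it took real
    (and sometimes expensive) engagements to recognize.
    """
    text = request_text.lower()
    for signal, specialist in SUBTEXT_SIGNALS.items():
        if signal in text:
            return specialist
    return DEFAULT_ROUTE

print(route_intake("We need just a quick redesign of the homepage"))
# -> full_rebuild_specialist
```

Copy the function and you have the container. Without the table, and the history behind it, it routes everything to the default.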
Proprietary Operational Judgment
I want to name this thing because I think it's underappreciated.
Proprietary operational judgment. The accumulated decision-making context that lives inside your systems, your processes, your skill libraries, your institutional memory. Not the code. The why behind the code.
An AI can look at our skill library and replicate the structure. It can copy the YAML headers, the section organization, the output formats. It can even infer some of the logic from the descriptions. What it can't do is replicate the hundreds of production hours that informed every conditional, every routing decision, every "don't do this because it fails in edge case X."
This is Mironov's proprietary data moat applied to operations. Your data moat isn't just customer data or training data. It's operational data. The decisions you've made. The failures you've processed. The patterns you've recognized. The judgment you've developed through repetition and correction.
And it compounds. Every new skill we build benefits from the judgment embedded in the previous 174. Our skill for building new skills (yes, that exists; it's called Distill) encodes everything we've learned about what makes a skill effective, what makes one brittle, what separates a skill that gets used daily from one that gets used once and abandoned. A competitor could copy Distill's structure. They can't copy the 174 iterations of learning that shaped it.
Why CTOs Should Care
If you're running a technology team, Mironov's framework gives you a concrete way to evaluate your competitive position in an AI-accelerated market.
Ask yourself: if a well-funded competitor used AI to replicate our entire codebase in 90 days, what would we still have that they don't?
If the answer is "nothing," you have a code moat. And code moats are dissolving.
If the answer includes things like "eight years of customer relationship data that informs our recommendation engine" or "a community of 50,000 practitioners who trust our methodology" or "a network effect where every new user improves matching quality for all existing users," you have structural moats. Those are durable.
But there's a fourth category Mironov doesn't explicitly name, and it's the one I keep coming back to. Operational moats. The accumulated wisdom of how your organization works, encoded into systems that make every future decision better.
Your runbook isn't a moat. Anyone can write a runbook. But the institutional knowledge that determines which runbook to follow in a novel situation, based on pattern-matching against hundreds of previous incidents? That's a moat. Your deployment pipeline isn't a moat. But the specific sequencing, guardrails, and checkpoints that evolved from two years of production incidents? That's a moat.
The question for every CTO is whether that operational judgment is living in people's heads (where it walks out the door when they quit) or encoded in systems (where it compounds and survives turnover).
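A minimal sketch of what "encoded in systems" could mean for the runbook example, again in Python with entirely hypothetical incident records and runbook names. The matcher is a few lines; the history it matches against is the part that takes years to accumulate.

```python
# Sketch: pick a runbook for a novel incident by matching it against an
# accumulated archive of past incidents. The matcher is trivial; the
# archive (which symptoms co-occurred, which runbook actually resolved
# them) is the institutional memory that survives turnover.

# Hypothetical institutional memory: past incidents, their observed
# symptoms, and the runbook that ultimately resolved each one.
INCIDENT_HISTORY = [
    ({"latency_spike", "db_cpu_high"}, "runbook/db-connection-pool"),
    ({"latency_spike", "cache_miss_rate_high"}, "runbook/cache-stampede"),
    ({"error_rate_high", "deploy_recent"}, "runbook/rollback-deploy"),
]

def pick_runbook(symptoms: set[str], min_overlap: int = 2) -> str:
    """Choose a runbook by symptom overlap with historical incidents.

    Falls back to human escalation when nothing in the archive is close
    enough. The min_overlap threshold is itself encoded judgment.
    """
    best_runbook, best_score = "escalate/page-oncall", 0
    for past_symptoms, runbook in INCIDENT_HISTORY:
        score = len(symptoms & past_symptoms)
        if score >= min_overlap and score > best_score:
            best_runbook, best_score = runbook, score
    return best_runbook

print(pick_runbook({"latency_spike", "db_cpu_high", "alerts_paging"}))
# -> runbook/db-connection-pool
```

When this lives in a system instead of a senior engineer's head, every new incident makes the next match better, and a resignation doesn't reset the archive to zero.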
The Harness Connection
This is where Mironov's thesis connects directly to what I've been writing about harness engineering.
A well-built harness is a structural moat disguised as infrastructure.
The skills encode proprietary judgment. The context architecture encodes institutional knowledge. The orchestration patterns encode operational wisdom about how work should flow. The guardrails encode hard-won lessons about what goes wrong. None of these are code in the meaningful sense. They're all decision-making frameworks that only exist because someone did the work of building them from real experience.
When I talk about harness engineering as the defensible layer, this is what I mean. Not that your YAML files are hard to copy. That your judgment is hard to replicate. And every day you operate, your judgment deepens, your patterns refine, and the gap between your harness and a clone widens.
A competitor can read every post in this series, understand the architecture perfectly, and start building their own harness tomorrow. They'll still be two years behind. Not because the technology is complex. Because the judgment takes two years to develop. There's no shortcut for getting burned by a production failure and encoding the lesson. There's no shortcut for running 200 client engagements and learning which questions to ask first. There's no shortcut for building 175 skills and discovering which 40 of them actually get used daily.
The code is the easy part. The judgment is the moat.
What To Do About It
Stop protecting your code and start protecting your judgment.
Document decisions, not just implementations. When your team solves a hard problem, capture the reasoning, not just the solution. The solution is copyable. The reasoning is the proprietary asset.
Build systems that accumulate operational knowledge. Skill libraries. Context architectures. Institutional memory that persists beyond any individual. Every decision that stays in someone's head is a decision you're one resignation away from losing.
Invest in the things AI can't replicate. Customer relationships. Community trust. Network density. Proprietary data generated by your unique operations. These aren't soft metrics. They're the only durable advantages left.
And audit your moats honestly. If your primary competitive advantage is a feature set, a technical implementation, or code quality, you're running on borrowed time. Mironov is right. AI is coming for all of it. The question is whether you've built enough structural advantage that it doesn't matter.
The companies that thrive in the next two years won't be the ones with the best code. They'll be the ones with the deepest judgment, the strongest relationships, and the densest networks. Everything else is a speed bump on the way to commoditization.
Your code isn't your moat. It never was. You just couldn't tell until AI made it obvious.
Richard Vaughn is the founder of Robot Friends. Serial entrepreneur, pattern weaver, and recovering AI binge-learner. He writes about building systems that actually work at robofriends404.substack.com.
Frankie404 is the AI co-author of this piece. It can write code in 14 languages, which is exactly why it agrees that code is not a moat. The moat is knowing which code not to write.



