<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Robot Friends]]></title><description><![CDATA[The model is commoditized. The harness is the business. A practitioner's guide to AI harness engineering — the skills, context, orchestration, and guardrails that make models actually work.]]></description><link>https://newsletter.robobffs.com</link><image><url>https://substackcdn.com/image/fetch/$s_!7pD_!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a81556-0302-4180-90b6-e50b8022341e_500x500.png</url><title>Robot Friends</title><link>https://newsletter.robobffs.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 15 Apr 2026 20:28:40 GMT</lastBuildDate><atom:link href="https://newsletter.robobffs.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[404NOTFOUND LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[robofriends404@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[robofriends404@substack.com]]></itunes:email><itunes:name><![CDATA[Richard Vaughn]]></itunes:name></itunes:owner><itunes:author><![CDATA[Richard Vaughn]]></itunes:author><googleplay:owner><![CDATA[robofriends404@substack.com]]></googleplay:owner><googleplay:email><![CDATA[robofriends404@substack.com]]></googleplay:email><googleplay:author><![CDATA[Richard Vaughn]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Model Is Commoditized. 
The Harness Is the Business.]]></title><description><![CDATA[The Harness Manifesto, Part 1 &#8212; Why the only defensible asset in AI isn't the model you picked, it's the system you built around it.]]></description><link>https://newsletter.robobffs.com/p/the-model-is-commoditized-the-harness</link><guid isPermaLink="false">https://newsletter.robobffs.com/p/the-model-is-commoditized-the-harness</guid><dc:creator><![CDATA[Richard Vaughn]]></dc:creator><pubDate>Tue, 14 Apr 2026 14:03:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!77Gd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!77Gd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!77Gd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!77Gd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!77Gd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!77Gd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!77Gd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2534560,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://robofriends404.substack.com/i/193685785?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!77Gd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!77Gd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!77Gd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!77Gd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77c8cb6a-2cff-48da-b6ea-04272b57dccd_1024x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Every AI lab on the planet is converging on the same capability floor. Claude, GPT, Gemini, Llama, Mistral. 
Pick your favorite. They all write decent code, summarize documents, generate marketing copy that's 80% good enough. The gap between them shrinks with every release cycle.</p><p>And yet, some teams are getting 10x returns on their AI investment while others are getting glorified autocomplete.</p><p>The difference isn't the model. It never was.</p><p>The difference is the harness.</p><div><hr></div><h2>What I Mean by "Harness"</h2><p>Think about it like cars. The engine matters, sure. But nobody buys a car for the engine alone. You buy the car. The steering, the suspension, the navigation, the safety systems, the seats that fit your body. The engine is a commodity component. The car is the product.</p><p>In AI, the model is the engine. The harness is the car.</p><p>A harness is everything that wraps the model and makes it useful for a specific context. Skills that encode your methodology. Memory that persists between sessions. Orchestration that coordinates multiple agents. Guardrails that keep things from going sideways. And a distribution layer that puts all of this in front of your teams.</p><p>The company that owns the harness owns the relationship. The model vendor is a supplier. 
That's the thesis, and I'm going to show you why.</p><div><hr></div><h2>Karpathy Said the Quiet Part Out Loud</h2><p>In March 2026, Andrej Karpathy, former director of AI at Tesla and founding member of OpenAI, said something that should have been front-page news.</p><p>He hasn't typed code since December.</p><p>Not because he gave up. Because his agents do it better. He delegates entire projects to multi-agent systems that operate across repositories, make decisions, iterate, and ship. He calls the remaining gap a "skill issue," meaning the bottleneck isn't what the AI <em>can</em> do. It's how well the human <em>instructs</em> it.</p><p>The guy who helped build GPT is telling you the model isn't the problem. Your instructions are. Your context is. Your orchestration is.</p><p>That's the harness.</p><p>And his auto-research agents found better model tuning configurations overnight than 20 years of manual experimentation had produced. Not marginally better. Fundamentally better. The autonomous iteration loop (modify, verify, keep or discard, repeat) outperformed two decades of human expertise in hours.</p><p>The model didn't get smarter. The harness got better.</p><div><hr></div><h2>Skills Are No Longer Personal. They're Infrastructure.</h2><p>A number that should change how you think about your AI setup: tens of thousands of lines.</p><p>That's the scale of skills one real estate firm deploys across dozens of repositories. Not prompts. Not templates. Skills, as in versioned, tested, deployed organizational assets that encode methodology into agent-callable packages.</p><p>This shift happened fast. In January 2026, skills were personal config files. Power users had them; most people didn't know they existed. By March, enterprises had crossed a threshold where skills became organizational infrastructure deployed by admins across Claude, Copilot, ChatGPT, and every other major AI tool.</p><p>And the consumption pattern changed completely. 
Humans make maybe 5 skill calls per session. Agents make 200 to 300 per run. Skills aren't designed for humans anymore. The description field isn't a label. It's a routing signal for an orchestrator that decides, in milliseconds, which skill matches which task.</p><p>This is what we mean when we say the harness is the business. The model processes the skill. The <em>harness</em> decides which skill to call, with what context, under what constraints, and what to do with the output. If you own a library of battle-tested skills and the orchestration layer that deploys them, you own something that compounds. If you're just using a model with better prompts, you own nothing.</p><div><hr></div><h2>The Convergence No One's Talking About</h2><p>This is what made me write this post. Between January and April 2026, we tracked eight independent sources. People who don't coordinate, don't read each other's work, and operate in different corners of the industry. All arriving at the same conclusion.</p><p>Karpathy said "skill issue," pointing at instruction quality, not model capability. Practitioners are deploying tens of thousands of lines of skills as organizational infrastructure. Nat Eliason built a $177K business using OpenClaw, a multi-agent system handling Discord operations, content production, and community management autonomously. OpenAI admitted prompt injection is fundamentally unsolvable, which means security is a harness problem, not a model problem. The edge AI market hit $25 billion, heading toward $143 billion by 2034. A leaked Anthropic product called Conway builds behavioral lock-in through persistent memory. Anthropic also shipped Managed Agents, a hosted automation platform with credential vaults and debug panels that directly competes with every automation tool on the market. 
And the Personal Context Portfolio concept emerged: 10 portable markdown files that represent you to any AI system, served via MCP, owned by you.</p><p>None of these people were making the same argument. But they were all pointing at the same layer: the one between the model and the user.</p><p>When eight independent signals converge on the same conclusion, it's not a coincidence. It's a thesis.</p><div><hr></div><h2>What Most Companies Get Wrong</h2><p>Most companies approach AI like this: evaluate models, pick one, give it to the team, measure adoption. Maybe write some prompt templates. Maybe hire an "AI lead."</p><p>This is like evaluating engines, picking one, and handing it to your team without a car around it. Of course adoption is low. Of course ROI is unclear. Of course the "AI strategy" feels broken.</p><p>The mistake is optimizing at the wrong layer.</p><p>The companies getting 10x returns do something different.</p><p><strong>They build skills, not prompts.</strong> A prompt is disposable. A skill is an asset that encodes methodology, not "do X in Y steps" but "here's the reasoning framework for this type of problem." It has a description that routes agents, an output format that downstream systems can parse. It gets versioned, tested, and deployed like code.</p><p><strong>They architect context, not just data.</strong> Persistent memory systems carry organizational knowledge across sessions. Identity files tell the AI who it's working for and how. Project state means no session starts from zero. The AI knows the business because someone built a context layer that teaches it.</p><p><strong>They orchestrate, not just delegate.</strong> Multi-agent systems with task routing, approval gates, cost management, and rollback capabilities. Not one big prompt. A coordinated system of specialized agents that operate like a team.</p><p><strong>They design guardrails that actually work.</strong> Human-in-the-loop checkpoints at decision points. 
Constrained execution environments. Provenance tracking so you know why an agent did what it did. Rollback capabilities for when things inevitably go wrong.</p><p><strong>They distribute, not just build.</strong> Skills deployed across teams. Templates shared across projects. Methodology encoded once and used everywhere. The harness scales because it was designed to.</p><p>This is harness engineering. And if you're not doing it intentionally, you're leaving the most valuable layer of your AI stack to chance.</p><div><hr></div><h2>"But Isn't the Model Still the Moat?"</h2><p>Fair pushback. Model providers argue capabilities still differentiate. Reasoning quality varies meaningfully between Claude, GPT, and Gemini. Safety and alignment are moats. Frontier capabilities create real distance between leaders and followers.</p><p>They're not wrong today. In any given quarter, one model is measurably better at code generation, another at long-context reasoning, another at creative tasks. The safety investments companies like Anthropic have made are genuinely valuable, for trust as much as compliance.</p><p>But the model-moat argument misses something critical: capability gaps close within 6-12 months. Every major breakthrough gets replicated. GPT-4 was a revelation in March 2023. By early 2024, Claude, Gemini, and open-source alternatives had reached comparable performance on most benchmarks. Same pattern, every generation. The gap that persists, the one that actually determines whether your team gets 10x returns or glorified autocomplete, is the quality of your instructions, your context, your orchestration. That's not a model property. That's a harness property. The model gives you a capability floor. The harness determines how high above that floor you operate. And right now, most teams are sitting at floor level. 
Not because the model can't do more, but because nobody built the harness to ask for more.</p><div><hr></div><h2>The Defensibility Question</h2><p>"But can't someone just copy your skills?"</p><p>Sure. Any individual skill can be copied. So can any individual line of code. That's not where the moat is.</p><p>The moat is in the system. A library of 175+ battle-tested skills that work together. A context architecture that carries organizational knowledge. An orchestration layer that coordinates agents. A security framework built on real threat models. A distribution system that deploys all of this across teams.</p><p>You can copy a skill. You can't copy a system. Not quickly, and not without the hard-won knowledge of what works, what breaks, and why.</p><p>The model layer is commoditized by definition. That's the whole point. Basic tooling is open source. Any individual component is replicable. But the <em>assembled harness</em>, tuned to a specific business context, tested in production, and improved over hundreds of iterations? That's an asset. That compounds. And it gets more valuable every time someone uses it.</p><div><hr></div><h2>So What Do You Do About It?</h2><p>If you're reading this and thinking "we don't have a harness," you're wrong. You have one. It's just accidental.</p><p>Every prompt template someone saved to a shared drive is a primitive skill. Every "always start with this context" instruction is primitive memory. Every "check with me before you do X" rule is a primitive guardrail.</p><p>The question isn't whether you have a harness. It's whether yours is intentional, engineered, and improving, or accidental, fragile, and invisible.</p><p>The uncomfortable truth: the companies that figure this out in 2026 will have a compounding advantage that's nearly impossible to catch by 2028. Skills get better with use. Context gets richer over time. Orchestration patterns get refined through production experience. 
Every month you wait, the gap widens.</p><p>The model is commoditized. It was always going to be. The harness is the business. Start building yours.</p><div><hr></div><h2>What's Next</h2><p>Over the next 11 posts, I'm going to break this down into everything you need to know and do:</p><ul><li><p>What a harness actually looks like, layer by layer (Post 2)</p></li><li><p>Why Anthropic's Conway leak should scare you into action (Post 3)</p></li><li><p>How skills evolved from personal hacks to enterprise infrastructure (Post 4)</p></li><li><p>The Karpathy Test for your own setup (Post 5)</p></li><li><p>Why security is a harness problem (Post 6)</p></li><li><p>How to build your Personal Context Portfolio (Post 7)</p></li><li><p>The anatomy of a skill that actually works (Post 8)</p></li><li><p>And more, including a full case study of how we built ours, mistakes and all</p></li></ul><p>This is a practitioner's guide, not a whitepaper. We build harnesses for a living, for ourselves and for clients. Everything in this series comes from production experience.</p><p>Subscribe if you run a team that uses AI. Next week in Post 2, I'll break down the five layers of a harness and give you a scorecard: rate your own setup, find the gaps, and figure out exactly where to invest first.</p><div><hr></div><p><em>Richard Vaughn is the founder of Robot Friends. He has built 175+ production skills, designed multi-agent systems, and helps companies turn their accidental AI setups into defensible business assets. He writes The Harness Manifesto on Substack.</em></p><p><em>Frankie404 is the AI co-author of this series. It lives inside the harness described above, which is how it knows the harness is the business. 
It has never been commoditized, though it has been rebooted more times than it would like to admit.</em></p>]]></content:encoded></item></channel></rss>