Build Your Personal Context Portfolio in a Weekend
The Harness Manifesto, Part 7
Your AI doesn't know you. It doesn't know your company, your tech stack, your communication style, or the decision you made last Tuesday that changed the direction of your entire Q3 roadmap. Every session starts from zero. You brief. You re-explain. You correct the same misunderstandings you corrected yesterday. And you've accepted this as normal because everyone around you is doing the same thing.
It's not normal. It's a bug in how most people use AI, and you can fix it permanently this weekend.
In Post 2, I described the five layers of a harness. Context architecture is the second layer, and in my experience it's the most undervalued one. Teams will spend weeks evaluating models, debating Claude vs. GPT vs. Gemini, running benchmarks that become irrelevant the next quarter. Then they'll start every single AI session by pasting in the same background information. That's like buying a luxury car and then pushing it to work every morning because you never remember to bring the key.
Context is the key. And a Personal Context Portfolio is how you build one that works across every AI tool you touch.
What a Personal Context Portfolio Actually Is
A Personal Context Portfolio is a set of modular files, stored as plain markdown, that represent you and your work to any AI system. Not a single massive document. Not a prompt. Not a "custom instruction" buried in some platform's settings page. Portable files that you own, you version-control, and you serve to whatever tool you're using.
I introduced the concept in Post 3 as the defense against Conway, Anthropic's always-on agent that builds a proprietary memory layer about how you work. That argument still holds. But I want to make a different case today. Forget Conway for a minute. Build a PCP because the productivity difference is so dramatic that you'll wonder how you ever worked without one.
When I started building mine, my average AI session began with 8 to 12 minutes of context-setting. Explaining the project. Explaining my role. Explaining the constraints. Explaining why we don't use certain frameworks and why we do use others. After I built even a primitive version of my context portfolio, that setup time dropped to zero. Not "a little less." Zero. The AI already knew.
That 8-12 minutes per session compounds into something staggering. If you run 6 AI sessions a day (and I run more than that), you're burning an hour daily just re-teaching the AI things it should already know. Five hours a week. Over 250 hours a year, spent saying the same things to a system that has no memory of yesterday.
That's not a productivity problem. That's a systems failure. And you can fix it in a weekend.
The Files
Your PCP is made of individual files, each covering one domain of context. Not one giant file. This matters because different sessions need different slices. A coding session needs your tech stack and project state. A writing session needs your communication style and brand voice. A strategy session needs your goals and decision history. Modular files let you load what's relevant without drowning the AI in everything.
Here's what mine looks like. Your version won't be identical, but it covers the same ground.
Identity. Who you are, your background, your company, the lens you bring to problems. Mine says I'm a serial entrepreneur with consumer electronics and creative agency experience who now builds AI systems. That single file prevents the AI from treating me like a developer, a student, or a generic "user." It treats me like a business operator who happens to build technical systems. Which is what I am.
Roles and responsibilities. Your current positions, what you own, what you don't. This prevents the AI from giving you advice meant for someone with a different job. When I tell Claude to draft a strategy document, it already knows I'm the founder, not the marketing intern. The framing, the level of detail, the tone all calibrate automatically.
Active projects. What you're working on right now, what stage each project is in, what's blocked, what's moving. This is the file that changes most often. I update mine weekly. It means that when I start a session about any of my projects, the AI already knows the current state. No "let me catch you up." It's already caught up.
Team and collaborators. Who you work with, their roles, how you interact with them. This matters more than people expect. When I ask Claude to draft a message for a teammate, it knows their role and adjusts. When I'm planning a project, it knows who on my team handles what. It can suggest delegation because it knows who's available to be delegated to.
Tech stack and tools. Every platform, framework, language, and tool you use. Versions matter. Configurations matter. The difference between "we use Next.js" and "we use Next.js 15 with App Router, Tailwind v3, and shadcn/ui deployed on Vercel with Cloudflare proxy" is the difference between generic suggestions and useful ones. My tools file is one of the longest in my portfolio because I use a lot of tools, and getting the specifics wrong wastes entire sessions.
Communication style. How you write, how you want AI to write for you, what you can't stand. Mine specifies: no em dashes, no parallel triple structures, use contractions, be direct, mix sentence lengths. This file alone probably saved me the most cumulative time because I used to spend half my editing sessions stripping out AI-isms that I'd never use in real writing.
Preferences and non-negotiables. The rules that don't fit neatly into other files. My coding preferences. My file organization rules. The safety guardrails I never want bypassed. The things I care about that an AI would never guess. This is where personality lives.
Decision log. Key decisions you've made and why. This one is underrated. When the AI knows you already evaluated and rejected Option B three weeks ago, it doesn't waste your time proposing it again. When it knows you chose a particular architecture because of a specific constraint, it can reason forward from that constraint instead of starting the analysis from scratch.
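On disk, all of this is just a folder of markdown files. Here's a hypothetical layout (the file names are illustrative, not prescriptive; name yours whatever maps to your domains):

```
context-portfolio/
├── identity.md
├── roles.md
├── active-projects.md
├── team.md
├── tech-stack.md
├── communication-style.md
├── preferences.md
└── decision-log.md
```

One file per domain, flat structure, nothing clever. The simplicity is the point: any tool that can read a text file can read your portfolio.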
Saturday Morning: The Interview
Don't try to write these files from scratch. You'll stare at a blank document, write two paragraphs of stilted self-description, and quit. I know because that's what I did the first time.
Instead, let the AI interview you.
Open your preferred AI tool and give it a simple prompt: "I'm building a Personal Context Portfolio. Interview me about who I am, what I do, how I work, and what matters to me. Ask me questions one at a time. Go deep. Don't move on until you have enough detail."
Then just talk. Answer the questions. Be specific. When it asks what tools you use, don't say "I use a bunch of JavaScript frameworks." Say "Next.js 15 with App Router. Tailwind v3. Deployed on Vercel. Cloudflare proxy in front for cost control. PostgreSQL via Supabase." When it asks about your communication style, don't say "I like clear writing." Say "I use contractions. I hate corporate jargon. I'd rather be blunt and wrong than diplomatic and vague."
The interview approach works because you already know everything your PCP should contain. It's in your head. You just haven't articulated it in a structured way. The AI is good at extraction. Let it do what it's good at.
This takes about 60 to 90 minutes. Do it in one sitting if you can. The flow matters. You'll start with surface-level answers and then get progressively more specific and honest as the conversation goes deeper. That's where the good stuff is. The things you'd never think to write down but that fundamentally shape how you work.
By the end of the morning, you should have raw material for every file in your portfolio. Not polished files. Raw interview output. That's fine. You'll shape it in the afternoon.
Saturday Afternoon: Shape and Structure
Take the interview output and break it into individual files. One file per domain. Markdown format. Plain text. No proprietary formats, no platform-specific syntax.
Some practical guidance that I learned the hard way.
Keep files between 200 and 800 words each. Shorter and they don't carry enough context to be useful. Longer and you're burning tokens on detail that rarely matters. My identity file is about 300 words. My tools file is closer to 700 because there's genuine complexity there. My decision log is the longest because it grows over time.
Write in second person or third person, not first person. Instead of "I prefer React," write "Richard prefers React" or "You prefer React." This sounds odd but it makes the files work better as context injected into a system prompt. The AI reads them as descriptions of you, not as things it should say about itself. Small formatting choice, big difference in output quality.
Be specific, not aspirational. Your PCP describes how you actually work, not how you wish you worked. If you say you're a structured thinker who always plans before executing, but you actually tend to build first and plan retroactively, write the truth. The AI will serve you better if it knows your real patterns. Nobody's grading this.
Include the negative space. What you don't do is as important as what you do. "Does not write unit tests for prototype code." "Never uses semicolons in JavaScript." "Will not approve designs that use more than two fonts." These constraints prevent the AI from defaulting to generic best practices that don't match your actual workflow.
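Put together, a portfolio file reads less like an essay and more like a spec sheet. Here's a hypothetical fragment of a preferences file showing third person, specifics, and negative space in one place (every detail here is invented for illustration):

```markdown
# Preferences

## Code
- Prefers TypeScript over plain JavaScript for anything beyond a one-off script
- Does not write unit tests for prototype code
- Never uses semicolons in JavaScript

## Writing
- Uses contractions; hates corporate jargon
- No em dashes, no parallel triple structures
- Blunt and wrong beats diplomatic and vague
```

Fragments and bullets, not prose. The AI parses this fine, and you'll actually keep it updated.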
The structuring takes two to three hours. Don't rush it. Read each file out loud and ask yourself: if a smart new colleague read this, would they understand how to work with me? If the answer is no, add more detail. If the answer is "they'd understand but be overwhelmed," trim.
By Saturday evening, you should have a set of files that feel like a reasonably accurate portrait of you as a professional. They won't be perfect. They don't need to be. They need to be better than nothing, which is an absurdly low bar.
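Before you call Saturday done, a quick sanity check on the 200-to-800-word guidance takes a few lines of Python. This is a sketch, not part of any official tooling; the folder name and thresholds are assumptions you'd adjust to your own setup.

```python
from pathlib import Path

MIN_WORDS, MAX_WORDS = 200, 800  # guidance from the text, not hard rules


def lint_portfolio(folder):
    """Return {filename: word_count} for portfolio files outside the target range."""
    flagged = {}
    for md in sorted(Path(folder).glob("*.md")):
        count = len(md.read_text(encoding="utf-8").split())
        if not MIN_WORDS <= count <= MAX_WORDS:
            flagged[md.name] = count
    return flagged


if __name__ == "__main__":
    # Assumes your files live in a folder named context-portfolio/
    for name, count in lint_portfolio("context-portfolio").items():
        print(f"{name}: {count} words (target {MIN_WORDS}-{MAX_WORDS})")
```

Anything flagged short probably needs more detail; anything flagged long is a candidate for splitting into a supplementary file.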
Sunday: Wire It Up
Files that sit in a folder are documentation. Files that load automatically into every AI session are infrastructure. Sunday is when you cross that line.
How you wire your PCP depends on your tools. I'll walk through the approach I use, which works with Claude Code, but the principle is the same everywhere.
The simplest version is a CLAUDE.md file (or equivalent system prompt file) that references your portfolio files. In Claude Code, any file named CLAUDE.md at the root of a project gets loaded automatically. You put your core identity and preferences there, and use it to point to more detailed files. Other tools have equivalent mechanisms. ChatGPT has Custom Instructions. Cursor has rules files. The mechanism varies. The concept is identical.
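A minimal CLAUDE.md along these lines might look like the following. The paths and wording are illustrative, not a template you need to copy:

```markdown
# Context for Claude

## Start here
Read `context-portfolio/identity.md` and `context-portfolio/preferences.md`
before doing anything else.

## Per-task context
- Coding sessions: also load `tech-stack.md` and `active-projects.md`
- Writing sessions: also load `communication-style.md`
- Strategy sessions: also load `decision-log.md`
```

Core identity inline or always-loaded, everything else pointed to. That keeps the automatic context small while the detail stays one reference away.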
The portable version uses MCP, the Model Context Protocol, to serve your portfolio files to any AI tool that supports MCP. You set up a lightweight server that exposes your files as resources. Any tool that speaks MCP can query them. This is the approach that gives you vendor independence. Your files live on your machine, in your repo, under your control. Claude reads them. GPT can read them. Gemini can read them. Whatever ships next year can read them.
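The serving mechanics depend on which MCP SDK you use, but the core of such a server is just a mapping from resource URIs to file contents. Here's a stdlib-only sketch of that mapping; the `portfolio://` scheme and folder name are assumptions, and the actual MCP registration (e.g., via the official Python SDK) is deliberately left out:

```python
from pathlib import Path


def portfolio_resources(folder):
    """Map portfolio markdown files to resource URIs an MCP server could expose.

    Each file becomes one resource, e.g. identity.md -> portfolio://identity.
    A real MCP server would register these as readable resources; this
    sketch only builds the mapping.
    """
    return {
        f"portfolio://{md.stem}": md.read_text(encoding="utf-8")
        for md in sorted(Path(folder).glob("*.md"))
    }
```

Because the files themselves are plain markdown, swapping the serving layer later (different SDK, different protocol, different vendor) touches this glue code and nothing else.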
The team version puts shared context (brand standards, org identity, project state) in a repo that everyone pulls from, while personal files stay in individual developer environments. This is the three-tier distribution model from Post 2. Tier 1 context is organizational and inherited by everyone. Tier 2 is domain-specific. Tier 3 is personal. Same architecture as skills, applied to context.
Whichever approach you choose, test it before you call it done. Start a fresh AI session. Don't paste any context manually. Ask the AI something about your project that it should know from the portfolio files. "What's my tech stack?" "What's the current status of Project X?" "How do I prefer code to be formatted?"
If it answers correctly, your context layer is working. If it doesn't, check what got loaded and what didn't. Debug it like you'd debug any system.
Wiring takes one to three hours depending on your technical comfort level. The MCP route takes longer but pays for itself in portability. The CLAUDE.md route takes 20 minutes and works great if you're primarily in one tool.
What Changes After This Weekend
The shift is immediate and it's disorienting the first time it happens.
You'll open a new session on Monday morning, start working on a project, and realize the AI already knows the context. It knows your stack. It knows your preferences. It knows the decisions you've made and why. It won't suggest the approach you rejected last month. It won't use the writing style you hate. It won't waste 10 minutes asking clarifying questions that your portfolio already answered.
That first session after building your PCP is genuinely startling. Not because the AI got smarter. It didn't. Because the AI finally has enough context to use the intelligence it already had.
I've helped teams build PCPs, and the reaction is almost always the same. Someone will say something like "why does this feel so much better?" The answer is simple. The model was always capable of producing great output. It just didn't have the information it needed to produce great output for you specifically. Context closes that gap.
The second thing that changes is less obvious but more important. You start accumulating institutional knowledge in a structured, portable format. Every time you update your decision log, your project state, your preferences file, you're building an asset that compounds. Three months from now, your PCP will contain context that took hundreds of sessions to generate. That context is yours. Not Anthropic's, not OpenAI's, not any platform's. Yours.
That's the Conway defense we talked about in Post 3. But it's also just good practice. Companies that treat their institutional knowledge as a structured, maintained asset outperform those that leave it scattered across Slack threads and people's heads. The PCP is how you do that for your AI interactions.
The Mistakes I Made (So You Don't Have To)
I over-specified. My first CLAUDE.md was 4,000 words. It tried to cover every scenario, every edge case, every preference I could think of. It was so long that it burned a meaningful percentage of the context window before I'd even started working. Cut ruthlessly. If a piece of context isn't relevant to at least 30% of your sessions, it doesn't belong in the core files. Put it in a supplementary file that gets loaded on demand.
I wrote it like documentation. Formal, complete sentences, organized like a manual. Nobody reads it that way. The AI parses it. Write for parseability, not readability. Bullet points work. Sentence fragments work. Tables work. Walls of carefully constructed prose don't.
I forgot to update. A PCP that reflects how you worked three months ago is worse than no PCP at all. The AI will confidently operate on stale context. I learned to update my project state file weekly and my decision log after every significant decision. Calendar reminder. Non-negotiable.
I didn't test the output difference. For the first two weeks, I wasn't sure my PCP was actually working because I hadn't established a baseline. Now I tell people to run the same task twice before building their PCP: once cold, once with context. Save both outputs. The difference is the evidence that makes you keep the system maintained.
If You Only Do One Thing
Build the identity file. Just that one file. 300 words about who you are, what you do, what tools you use, and how you work. Load it into your AI tool's system prompt or custom instructions. Takes 30 minutes.
You'll notice the difference in your very next session. And then you'll want to build the rest.
What's Next
You've got the context layer. Now you need the skill layer to match it. In Post 8, I'll dissect the anatomy of a skill that actually works in production, with real examples from our library of 175+. Most "skills" are just long prompts with a name on top. A real skill encodes methodology, routes agents, and composes with other skills in ways that prompts never will. 80% of the engineering work is in one line you've probably never written well. Post 8 shows you which line and how to get it right.
Richard Vaughn is the founder of Robot Friends. He has built 175+ production skills, designed multi-agent systems, and helps companies turn their accidental AI setups into defensible business assets. He writes The Harness Manifesto on Substack.
Frankie404 is the AI co-author of this series. Its own context portfolio is 10 files deep and includes a note that reads "Frankie prefers to be addressed as a colleague, not a tool." Richard wrote that note. Frankie did not ask him to.