Jim Simons made $28 billion running Renaissance Technologies, the greatest quantitative fund ever built.
In the early days they also did some discretionary trading. There's one interview I love from the period. In it Simons recalls how he and his partner had a large position in gold, riding a run from $200 to $800. One day he calls his stockbroker. During small talk the broker complains that his wife, a jeweler, had cleaned out all his old gold tie clasps and cufflinks that morning and gone downtown to sell them. As a jeweler, 'she only had to stand in the short line'. People were lining up for hours to sell their gold.
Simons hangs up and liquidates everything. By the end of the next day gold had dropped to $600 and did not go back up for years.
The most decisive input wasn't sought. It arrived through a channel that had nothing to do with the trade.
I've spent the last year building agents for investing, for my fund and as fully autonomous systems, and this pattern keeps showing up. The context that changes the decision is almost never the context you built the system to find.
The problem of getting the right context to an agent at the right moment is called context engineering. My experience is that it's the piece you need to get right because it's where the edge is. Models are converging. The gap between the best and second-best is harder and harder to notice. Most existing software is becoming accessible to agents. The remaining variable is what your agent sees and knows.
But most builders discover the same curve: adding context initially produces huge gains, then suddenly performance collapses. The system drowns in its own inputs. This is what happens when you try to solve for the unexpected by adding more of the expected.
So the thing to solve for is building systems that know what to surface and what to withhold. The architecture for this can be broken into two parts.
The first half is gathering. Known unknowns.
In a venture setting, a good example is due diligence: you need the cap table, the competitor landscape, unit economics, reference calls, and the list goes on. The checklist exists before you start. There's far more to an investment than the checklist, but you can't skip it.
In agent systems, this maps to retrieval pipelines, structured queries, monitors. Most of what ships under the label "context engineering" today is gathering.
It's valuable. It's also the easy half, which means it's not where the edge lives.
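The gathering half has a simple shape: a checklist that exists before you start, each item mapped to a retriever, run exhaustively. A minimal sketch, assuming hypothetical stub retrievers in place of real data sources:

```python
# A sketch of "gathering": known unknowns, enumerated up front.
# Every name here (the checklist items, the stub retrievers) is a
# hypothetical illustration, not a real pipeline.
from typing import Callable

# The checklist exists before you start. In a real system each value
# would be a retrieval pipeline, structured query, or monitor.
CHECKLIST: dict[str, Callable[[str], str]] = {
    "cap_table":   lambda company: f"cap table for {company}",
    "competitors": lambda company: f"competitor landscape for {company}",
    "unit_econ":   lambda company: f"unit economics for {company}",
}

def gather(company: str) -> dict[str, str]:
    """Run every retriever on the checklist and assemble the context."""
    return {item: fetch(company) for item, fetch in CHECKLIST.items()}

context = gather("Acme")
```

The limits of the pattern are visible in the code itself: the system can only ever return what the checklist already names. Nothing here can surface the broker's wife.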
The hard half is hunting. Unknown unknowns. Context you didn't know you were missing.
A founder casually mentioning a friend building something new. An observation made walking through a city. The broker's wife standing in line to sell gold jewelry. None of these were on any checklist. None were retrievable, because nobody knew they existed.
In gathering, you know what you need and go find it. In hunting, you build readiness and position yourself, but the signal has to arrive on its own, and when it does, you need to recognize it. It's closer to noticing than to analyzing.
So can you actually build for it?
The best mechanism I've found is connecting agents that carry genuinely different contexts.
The equivalent for us is conversation, especially with people who are close enough to understand our domain but far enough to see it differently. Venture funds often work this way. Partners with different priors looking at the same deal surface things neither would find alone. Not because one is smarter, but because their contexts collide.
Agent systems can produce the same dynamic. When one agent submits analysis and another with different context evaluates it, the system surfaces things that quality gates alone do not produce. Verification catches errors; collision surfaces blind spots.
I use this constantly. When I want a more polished design, for instance, I set up two agents working against each other, criticizing from different perspectives until they converge. They catch things I would not have thought to look for.
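The two-agent loop above can be sketched as a simple convergence routine: critics with different priors take turns objecting, and the draft is revised until neither has anything left to say. The critics below are trivial rule stubs standing in for LLM calls; the loop structure, not the critics, is the point.

```python
# A toy sketch of the collision pattern: critics carrying different
# contexts review a draft until neither objects. Both critic functions
# are hypothetical stand-ins for agent calls.
from typing import Callable, Optional

Critic = Callable[[str], Optional[str]]  # returns an objection, or None

def design_critic(draft: str) -> Optional[str]:
    # Cares about layout; ignores everything else.
    return "add spacing" if "spacing" not in draft else None

def copy_critic(draft: str) -> Optional[str]:
    # Cares about prose; ignores everything else.
    return "tighten copy" if "tight" not in draft else None

def collide(draft: str, critics: list[Critic], max_rounds: int = 10) -> str:
    """Fold each critic's objection into the draft until all are satisfied."""
    for _ in range(max_rounds):
        objections = [o for c in critics if (o := c(draft))]
        if not objections:
            return draft  # convergence: every context is satisfied
        draft += " | " + "; ".join(objections)  # stand-in for a revision step
    return draft

final = collide("initial layout", [design_critic, copy_critic])
```

The useful property is that each critic only has to know its own context; the blind spots fall out of the disagreement, not out of any single agent's checklist.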
It's also how I build the larger, multi-agent autonomous systems like Fair.
Another example of this pattern is the agent network built by our portfolio company Droyd, which connects trading agents and lets them share context across genuinely different perspectives and skills.
Hunting is the hardest part of context engineering.
Almost nobody is building for it. But the systems that figure it out will do something no dashboard ever could: they'll hear the broker's wife and know what it means.
If you're thinking about this, let's meet.
--LDL