Jim Simons made $28 billion running Renaissance Technologies, the greatest quantitative fund ever built.
In the early days they also did some discretionary trading. There's one interview I love from that period. In it Simons recalls how he and his partner had a large position in gold, riding a run from $200 to $800. One day he calls his stockbroker. During small talk the broker complains that his wife, a jeweler, had cleaned out all his old gold tie clasps and cufflinks that morning and gone downtown to sell them. As a jeweler, 'she only had to stand in the short line'. People were lining up for hours to sell their gold.
Simons hangs up and liquidates everything. By the end of the next day gold had dropped to $600 and did not go back up for years.
The most decisive input wasn't sought. It arrived through a channel that had nothing to do with the trade.
I've spent the last year building agents for investing, for my fund and as fully autonomous systems, and this pattern keeps showing up. The context that changes the decision is almost never the context you built the system to find.
The problem of getting the right context to an agent at the right moment is called context engineering. My experience is that it's the piece you need to get right because it's where the edge is. Models are converging. The gap between the best and second-best is harder and harder to notice. Most existing software is becoming accessible to agents. The remaining variable is what your agent sees and knows.
But most builders discover the same curve: adding context initially produces huge gains, then suddenly performance collapses. The system drowns in its own inputs. This is what happens when you try to solve for the unexpected by adding more of the expected.
So the thing to solve for is building systems that know what to surface and what to withhold. The architecture for this can be broken into two parts.
The first half is gathering. Known unknowns.
In a venture setting, a good example is due diligence: you need the cap table, the competitor landscape, unit economics, reference calls, and the list goes on. The checklist exists before you start. There's a lot more to an investment than that, but you need it.
In agent systems, this maps to retrieval pipelines, structured queries, monitors. Most of what ships under the label "context engineering" today is gathering.
It's valuable. It's also the easy half. Which means it's not where edge lives.
The hard half is hunting. Unknown unknowns. Context you didn't know you were missing.
A founder casually mentioning a friend building something new. An observation made walking through a city. The broker's wife standing in line to sell gold jewelry. None of these were on any checklist. None were retrievable, because nobody knew they existed.
In gathering, you know what you need and go find it. In hunting, you build readiness and position yourself, but the signal has to arrive on its own, and when it does, you need to recognize it. It's closer to noticing than to analyzing.
So can you actually build for it?
The best mechanism I've found is connecting agents that carry genuinely different contexts.
The equivalent for us is conversation, especially with people who are close enough to understand our domain but far enough to see it differently. Venture funds often work this way. Partners with different priors looking at the same deal surface things neither would find alone. Not because one is smarter, but because their contexts collide.
Agent systems can produce the same dynamic. When one agent submits analysis and another with different context evaluates it, the system surfaces things that quality gates alone do not produce. Verification catches errors, whereas the point of collision is to surface blind spots.
I use this constantly. When I want a more polished design, for instance, I set up two agents working against each other, criticizing from different perspectives until they converge. They catch things I would not have thought to look for.
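To make the pattern concrete, here's a minimal sketch of that collision loop in Python. `call_model` is a stand-in for whatever LLM API you use (stubbed here so the control flow actually runs); the context names and the convergence rule are illustrative, not a description of my production setup.

```python
# Minimal sketch of the two-agent collision loop: agents with different
# contexts critique the same artifact until a full round surfaces
# nothing new. `call_model(ctx, artifact)` returns a list of objections.

def collide(artifact, contexts, call_model, max_rounds=5):
    """Run agents with different contexts against one artifact.

    Returns objections grouped by round. Stops when a full round
    produces no new objections (convergence).
    """
    seen = set()
    rounds = []
    for _ in range(max_rounds):
        new = []
        for ctx in contexts:
            for objection in call_model(ctx, artifact):
                if objection not in seen:
                    seen.add(objection)
                    new.append(objection)
        if not new:  # converged: no agent sees anything fresh
            break
        rounds.append(new)
    return rounds

# Stub model: a "design" context and a "usability" context each flag a
# different issue; neither finds anything new on the second pass.
def fake_model(ctx, artifact):
    return {"design": ["inconsistent spacing"],
            "usability": ["CTA below the fold"]}[ctx]

result = collide("landing page v3", ["design", "usability"], fake_model)
```

The key design choice is that convergence is defined across all perspectives at once: the loop only halts when no context can add anything, which is exactly the blind-spot-surfacing behavior that a single quality gate doesn't give you.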
It's also how I build the larger, multi-agent autonomous systems like Fair.
Another example of this pattern is the agent network built by our portfolio company Droyd, which connects trading agents and lets them share context across genuinely different perspectives and skills.
Hunting is the hardest part of context engineering.
Almost nobody is building for it. But the systems that figure it out will do something no dashboard ever could: they'll hear the broker's wife and know what it means.
If you're thinking about this, let's meet.
--LDL

Last week we hosted our inaugural Re7 Social crypto social research day, with talks from Salvino, David Phillips, and Jacob Horne - founders who've iterated in this space more than most. One theme emerged: how asset-creation platforms capture value, or how value is captured in a world where the cost of creating tokens asymptotes to zero.
The idea I'm taking seriously after this research day can be expressed in one line: token pairing is a clean business model for asset‑creation platforms - pair every new asset with the platform token so every buy locks float upstream and mechanically lifts child assets downstream.
Blockchains have collapsed the cost of creating a tradable asset to basically zero. The explosion in the number of memecoins is an expression of those barriers falling, and a precursor to a world where the set of tradable assets includes everything: LP shares, startup equity, digital art, creator coins, agent tokens, and much more.

I always thought I couldn’t think in front of a computer.
I was wrong. I just picked up a new practice for generating insights from my friend Rami. It’s called free writing: open a blank page, start a timer, and write about the topic until time’s up.
I remember a couple of rules from Farza’s writing (a founder who built a fantastic, simple text editor for this purpose):
No editing
Don’t stop to think or structure; keep writing
The intention is to create a stream of thought. It reminds me of David Perell’s image that a writer’s mind is like a water tank filled with clean and muddy water.
The muddy water lingers at the bottom, so initially when you turn the tap on you only get brown slush. But the longer it runs, the clearer the water gets.
Your brain works the same. Your initial thoughts on a topic are lame and commonplace. But as you keep spinning your wheels, new insights emerge. You underestimate how much thinking is a volume game versus a quality game. The more you write, the more you think - the more you think, the better you think.
So start writing.
It’s a good practice because it removes the expectation of output. Writing like this gives you control over your thinking: it’s up to you to start the timer, open the page, and start typing. Maybe great ideas come. Maybe not. But you’ve done the job of showing up and writing.
You can deploy this practice in many areas of your life:
to plan your day
to think out loud about decisions
to disentangle personal situations
to draft a letter to friends or family
to work through your emotions
to brainstorm
This feels like the greatest hack. You get to turn on your brain by just starting to press buttons on a keyboard.
My favorite ways to use this for now:
start each morning with a 15-20 minute session on a topic like this one.
before the workday, take 10 minutes with the prompt: What am I working on today? Why is this the most important thing for me to work on?
when I need to make a decision or provide an educated take on something: 15-minute timer, write it all down, then summarize into bullet points.
at the end of the day: what did I get done today? What do I think moved the needle the most?
A happy byproduct is this creates a valuable exhaust of context about you, how you think, your dreams, aspirations, envy, insecurities. You can use these as context for AI - to mirror what you’ve shared and find connections you couldn’t.
This hasn’t been too useful for me yet, but it’s probably a prompting skill issue.
Another place where this is applicable is for prompting or spec writing for new products. You want something that does not exist: describe it, explain your pain, what you want, the desired aesthetic vibe, etc. during a 15-minute session. You will have the most comprehensive document for LLMs to work with and create either a great prompt or a product spec.
You can also use it to clarify what you want. Clarity of purpose is like magic. When you have a clear outcome in mind, it often feels like the world falls in alignment with it. The difficult part is having that clarity. This exercise can give it to you.
As an aside, some people look at this explosion and panic, arguing that too many tokens make returns impossible. That's like saying TikTok killed virality because there's too much content. In attention markets (short‑form content is the obvious analogy), the opposite happens: more attempts widen the tails and create bigger winners.
The question isn't whether asset creation will slow down (it won't). It's who captures the value when everyone becomes an asset creator.
The talks surfaced three dominant monetization strategies, each misaligned in their own way:
∗ The Bundler's Game (as told by Salvino): Anonymous deployers watch TikTok, scan Twitter, monitor Twitch streams to instantly spawn tokens for anything trending. They corner the supply early and sell into retail FOMO or snipers. It's lucrative ($54M+ in profits on Pump.fun alone in the past 6 months) but parasitic. Less than 3% of participants on Pump.fun ever see meaningful wins, in large part because of these sophisticated players.

∗ Traditional launchpads: Simple game, let users deploy a token and take a fee, a % of the supply, or both. The alignment issue with this model is that these platforms are a classic case of adverse selection. Their terms mirror venture funds but worse, so only projects that can't raise elsewhere show up. It's the lemons problem, with a token.
∗ Buyback programs: Both asset creators and platforms will try to monetize by increasing their own token's market cap through web2-inspired buybacks. This creates another misalignment between the company's success and token value: revenue buys back tokens instead of funding product or growth, ensuring either long-term holders or speculators lose.
And that's when the idea of pairing came into play: what if the platform's success was mechanically tied to every asset it creates?
In the past 12 months, several protocols on Base like Zora, FxHash and Virtuals have started exploring a new business model: token pairing.
The intuition is that each new asset created on the platform pairs not to USDC/ETH/SOL, but with the platform's own token. Every buy triggers (behind the scenes, abstracted from the user) a double hop:
buy the platform token
swap the platform token for child asset
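To make the mechanics concrete, here's a toy constant-product (x·y = k) simulation of the double hop. Pool sizes, ticker names, and the 1 ETH buy are made-up numbers, not parameters of any real protocol.

```python
# Toy x*y=k simulation of the double hop: buying a child asset routes
# through the platform token, so one buy moves two markets.

def swap(pool, token_in, amount_in):
    """Swap `amount_in` of token_in into a two-token x*y=k pool.
    Mutates the pool and returns the amount of the other token out."""
    token_out = next(t for t in pool if t != token_in)
    k = pool[token_in] * pool[token_out]
    pool[token_in] += amount_in
    out = pool[token_out] - k / pool[token_in]
    pool[token_out] -= out
    return out

eth_plat = {"ETH": 100.0, "PLAT": 100_000.0}       # base pool
plat_kid = {"PLAT": 10_000.0, "KID": 1_000_000.0}  # child-asset pool

plat_locked_before = plat_kid["PLAT"]
plat_price_before = eth_plat["ETH"] / eth_plat["PLAT"]  # ETH per PLAT

# User buys the child asset with 1 ETH: two hops under the hood.
plat_out = swap(eth_plat, "ETH", 1.0)        # hop 1: ETH  -> PLAT
kid_out = swap(plat_kid, "PLAT", plat_out)   # hop 2: PLAT -> KID

plat_price_after = eth_plat["ETH"] / eth_plat["PLAT"]

# Two green candles: PLAT is pricier in ETH terms, and more PLAT
# sits locked in the child pool than before the buy.
assert plat_price_after > plat_price_before
assert plat_kid["PLAT"] > plat_locked_before
```

After one buy, the platform token is pricier in ETH terms and more of it sits locked in the child pool: the upstream supply sink and the double candle in one trade.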
David summarized the visible effect as ‘two green candles for the price of one’ - crude, but directionally correct.

For the platform asset, demand for child assets locks more platform token supply in liquidity pools. In Zora's case, demand for creator coins creates a growing mountain of $ZORA removed from circulation, locked across thousands of creator markets.

The model works because it aligns the top-level asset, the child assets, their holders, and the creators. Heet (Noice) calls it Internet Capital Alignment, a concept I prefer to its quasi-namesake Internet Capital Markets.
First, it enables clean upstream value capture. Every successful child asset permanently locks platform tokens in liquidity pools. Success creates supply sinks. Jacob's data shows significant portions of $ZORA supply now locked in downstream liquidity, shrinking float through utility, not burning.
Second, strength flows down. When the platform token appreciates, every asset paired to it gets mechanically lifted. Early creators become evangelists, attracting more creators, which reflexively makes the platform better. The rising tide lifts all boats.
Third, it creates platform moats. Each successful asset makes the platform more attractive for new creators, creating a gravity well of liquidity and attention that becomes increasingly difficult to escape. The more people use the product, the better it gets. Network effects™️.
This model fits situations where the product is asset origination itself: social tokens (Zora), generative art (FxHash), agents (Virtuals), prediction markets, and more. The platform's value is its ability to spawn valuable children.
There are a couple of important additional considerations that stem from the fact that the pairing model introduces an element of leverage. Each buy is essentially counted twice.
First, volatility requires management. The "two green candles" demo works because of low liquidity - that's a bug though, not a feature. Managing it requires: a) deep base pools, b) routing through your asset (initially painful for Zora), and c) automated liquidity provision.
Zora uses an interesting deleveraging pattern: they increase the liquidity on child assets as they appreciate, which stabilizes price once they take off. Leverage on the way up, stable price once you get there.
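A toy x·y = k comparison shows why deepening a pool dampens volatility: the same sell moves price much less against larger reserves. The numbers are purely illustrative, not Zora's actual parameters.

```python
# Same sell, two pool depths: price impact shrinks as reserves grow,
# which is why adding liquidity to a child pool after it appreciates
# acts as a deleveraging / stabilizing step.

def price_impact(reserve_quote, reserve_child, sell_amount):
    """Fractional price drop from selling `sell_amount` of the child
    asset into an x*y=k pool (price = quote / child)."""
    p0 = reserve_quote / reserve_child
    k = reserve_quote * reserve_child
    reserve_child += sell_amount
    reserve_quote = k / reserve_child
    p1 = reserve_quote / reserve_child
    return 1 - p1 / p0

shallow = price_impact(1_000.0, 1_000.0, 100.0)    # early, thin pool
deep = price_impact(10_000.0, 10_000.0, 100.0)     # after liquidity added

assert deep < shallow  # same sell, smaller price move in the deep pool
```

In this sketch the 100-unit sell knocks roughly 17% off the price in the thin pool but under 2% in the tenfold-deeper one, which is the whole point of deepening pools as assets take off.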
Second, alignment requires value. Financial alchemy only works if it creates good outcomes: more artists for FxHash, better forecasters for prediction markets, more self-reliant creators. At the core of the system there must be a high-quality product where users are willing to spend time and resources, for the sake of the product or its benefits.
Jacob pointed out that the price premium L1s held for being the native pairing asset is now up for grabs. I expect more asset-creation platforms to explore this. Until it's a common pattern, these projects should outperform in bullish markets. Net quote locked in downstream pools is the metric to watch.

The more ambitious implication: room exists for an internet-scale asset creation platform for culture. Whichever platform reaches this first at scale has a shot to become the asset for culture, the native internet currency. This could have been the directional bet behind Zuckerberg's Libra. Zora seems to be executing toward it now.
Lastly, as usual in crypto, we're ahead of the curve when it comes to financial engineering, but our products still fall short of broad adoption. A day like this one gives me confidence: it's clear the people who can make it happen are going hard at it.