This research was originally presented at EthCC 2026 and published as an academic preprint (arXiv: 2405.18728). The strategy described in this post has been retired and is no longer actively traded by the fund.
Of the many ways to generate yield on the Ethereum blockchain, automated market makers offer an alternative to traditional order books with faster processing times and 24/7 availability. Uniswap v3 is an automated market maker popular among liquidity providers (LPs) on the Ethereum blockchain because it lets LPs choose where along the price curve to provision capital. Despite the maturity of liquidity pool usage on Uniswap v3, LPs' approach to yield generation remains simplistic. The dominant strategy for concentrated liquidity provisioning on Uniswap v3 is to select a price range, which distributes capital evenly across the ticks within that range. We crafted a new approach that treats each tick as an independent allocation decision. The findings suggest that uniformly concentrating capital around the current price is suboptimal and does not maximize ROI for liquidity providers.
We are publishing this research now because we have retired this particular strategy in its original form. The underlying framework, however, illustrates how we think about capital allocation problems in DeFi, and we believe sharing it openly serves the ecosystem and the fund’s investors. This article assumes an understanding of decentralized finance (see A primer on AMMs and Uniswap v3 below).
A Primer on AMMs and Uniswap v3
In traditional financial markets, trading is facilitated through order books, which are centralized ledgers that match buyers and sellers at specified prices. Market makers post bids and asks, earning the spread between them in exchange for providing liquidity to the venue. For the system to run, market makers must continually update their orders in response to price movements, so execution quality is only as good as the depth of the book at any given moment.
Automated market makers (AMMs) replace this entire coordination layer with code. Rather than matching individual orders, an AMM holds pooled reserves of two tokens and prices trades algorithmically based on the ratio between them. Liquidity providers deposit capital into these pools and, in return, earn a share of the trading fees generated each time a user swaps one token for another as showcased in Figure 1. The automation allows traders to use the pools 24/7 and offers faster settlement.
Uniswap v2 introduced the constant product formula that became the foundation for decentralized exchange design. A liquidity pool holds reserves of two tokens and maintains the invariant that the product of those reserves must remain constant after every trade. Expressed as x × y = k, where x and y represent the quantity of each token in the pool and k is a fixed constant, the formula ensures that as one token is removed through a swap, the other must increase proportionally to preserve the balance. Price is therefore a property of the ratio between reserves. As a trader buys ETH from the pool, for example, the ETH reserve shrinks, the USDC reserve grows, and the implied price of ETH rises accordingly. The larger the trade relative to the pool's depth, the more the price moves; this mechanic, known as slippage, functions as the protocol's built-in protection against liquidity depletion.
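As a toy illustration of the invariant and slippage mechanics, here is a minimal constant-product swap sketch (illustrative only: the on-chain implementation uses integer math and handles token ordering and fee accounting differently):

```python
def swap_exact_in(x, y, dy, fee=0.003):
    """Constant-product swap sketch: trader deposits dy of token Y and
    receives dx of token X. The invariant x * y = k is preserved on the
    post-fee input amount."""
    dy_after_fee = dy * (1 - fee)
    k = x * y
    new_x = k / (y + dy_after_fee)
    return x - new_x  # dx paid out to the trader
```

With a pool of 100 ETH and 200,000 USDC (spot price 2,000), a small swap executes near spot, while a swap that is large relative to the pool's depth receives a markedly worse effective price: that gap is the slippage.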
In Uniswap v3, the continuous price curve is discretized into a series of granular price points called ticks. Each tick corresponds to a specific price, and the space between any two adjacent ticks defines the smallest unit at which liquidity can be added or removed. Figure 2 displays the difference in capital allocation between the two versions of Uniswap.
Ticks are not spaced linearly. Uniswap v3 uses a geometric progression, where each tick represents a 0.01% (one basis point) change in price relative to its neighbor. This design ensures that the relative granularity remains consistent regardless of the absolute price level; the distance between ticks at $1 carries the same proportional precision as the distance between ticks at $10,000 as shown in Figure 3. In practice, pools operate at different tick spacings depending on the fee tier selected. Figure 4 visualizes how a pool with a 0.3% fee, for example, groups ticks into intervals of 60, meaning liquidity can only be placed at every 60th tick rather than every individual one. Lower fee tiers offer finer granularity; higher fee tiers offer coarser spacing. The tick spacing determines the minimum resolution at which a provider can express a view on where trading activity will occur.
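The tick arithmetic above can be sketched in a few lines (a sketch, assuming the standard 1.0001 base; the on-chain contracts use fixed-point integer math rather than floats):

```python
import math

TICK_BASE = 1.0001  # each tick is a one-basis-point step in price

def tick_to_price(tick: int) -> float:
    """Price at a given tick index: p(i) = 1.0001 ** i."""
    return TICK_BASE ** tick

def price_to_tick(price: float) -> int:
    """Largest tick whose price does not exceed `price`."""
    return math.floor(math.log(price) / math.log(TICK_BASE))

def nearest_usable_tick(tick: int, spacing: int) -> int:
    """Round down to the pool's tick spacing (e.g. 60 in a 0.3% fee tier)."""
    return (tick // spacing) * spacing
```

Because the progression is geometric, the ratio between adjacent tick prices is the same everywhere on the curve, which is exactly the "consistent relative granularity" property described above.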
Provisioning liquidity locks in opportunity cost relative to simply holding the assets outside the pool, creating impermanent loss. The magnitude of this loss varies dramatically by tick as shown in Figure 5. Liquidity provisioned at the current price faces the maximum exposure to price movement in either direction, while liquidity provisioned far from current levels remains in its original token composition until the price reaches that range. A uniform range allocation treats all ticks as equally susceptible to impermanent loss, which systematically overallocates capital to the highest-risk positions.
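For intuition about the magnitude involved, the classic full-range (v2-style) impermanent loss curve can be computed directly; a concentrated v3 position behaves like an amplified version of this within its range (a sketch, not the per-tick model used later):

```python
import math

def impermanent_loss(price_ratio: float) -> float:
    """Relative value of a full-range constant-product LP position versus
    simply holding the deposited assets, where price_ratio = p_now / p_at_deposit.
    The result is always <= 0 and is symmetric in r and 1/r."""
    r = price_ratio
    return 2 * math.sqrt(r) / (1 + r) - 1
```

A 2x move in either direction costs a full-range position roughly 5.7% versus holding; concentrated positions compress the same loss profile into a narrower price band, which is why near-price ticks carry the most risk.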
Uniform capital distribution within a range is inefficient
When a liquidity provider opens a position on a concentrated liquidity AMM, they select a price range and deposit a combination of tokens that depends on both the current price and the chosen range. The interface primes users into this range-based approach, limiting the strategies they consider.
Rather than determining the optimal range in which to allocate capital, we propose calculating how much capital to allocate to each individual tick. A uniform distribution across a range is suboptimal for a strategy that accounts for three parameters that differ meaningfully from tick to tick: (1) the amount of existing liquidity already competing for fees, (2) the expected swap volume at each price point, and (3) the expected depreciation of reserves if the price moves adversely (impermanent loss). Figure 6 showcases how a tick near the current price with heavy existing liquidity will deliver a lower return on investment than a less crowded adjacent tick with similar swap volume. A range-based approach allocates the same capital to both.
To determine the optimal tick allocation, we formulated a convex optimization problem (see Framing the Optimization) with total revenue as the objective. The exact characterization of the optimal solution depends on predicted swap volume and price volatility, which LPs can determine through independent research.
Framing the Optimization
We modeled the problem using parameters and variables that are vectors indexed by individual ticks. We then split the objective into two parts: swap fees and capital reserves. Each tick earns swap fees proportional to its share of the total liquidity at that price level. Each tick also carries a reserve value that changes as the asset price moves. The LP's net return is the sum of earned fees plus the change in reserve value across all provisioned ticks.
The swap fee component follows a fractional allocation structure where your share of fees equals your liquidity divided by total liquidity at that tick. Adding more capital to an already saturated tick yields diminishing returns. The reserve depreciation component is linear in the amount provisioned. Together, these produce a concave objective function over a convex constraint set (non-negative allocations that sum to total capital), which means the problem is a convex optimization.
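The diminishing-returns behavior of the fee term is easy to verify numerically (a minimal sketch; names are ours):

```python
def fee_share(x: float, L: float) -> float:
    """Pro-rata fee share at a tick: your liquidity x over the total
    liquidity x + L, where L is everyone else's liquidity at that tick."""
    return x / (x + L)
```

Each additional unit of capital buys a smaller increment of fee share than the one before it. This concavity is precisely what makes over-funding a crowded tick unattractive and what keeps the overall optimization convex.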
Convex optimization problems have a critical practical advantage: any locally optimal solution is guaranteed to be globally optimal. This means standard solvers can quickly re-solve the problem as market conditions change.
The formulation requires four sets of inputs. Current liquidity by tick is pulled directly from on-chain state. Total capital to provision is chosen by the provider. Expected swap fees by tick are estimated from historical swap volume data scaled by the pool's fee tier. And the expected return of reserves by tick is derived from a price volatility model applied to the CFMM's reserve curve. In our implementation, we used geometric Brownian motion with implied volatility sourced from options markets as the price model, and the standard Uniswap v3 reserve functions to map price distributions into expected reserve values at each tick.
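A minimal Monte-Carlo sketch of the price-model input, assuming zero drift and stopping short of the mapping into reserve values (the function name and parameters are ours for illustration, not the production model's):

```python
import math
import random

def terminal_price_prob(p_lo, p_hi, p0, sigma, horizon_days,
                        n=50_000, seed=7):
    """Estimate the probability that a zero-drift geometric Brownian motion
    price, started at p0 with annualized volatility sigma, ends inside
    [p_lo, p_hi) after the horizon. A proxy for where reserves and swap
    activity are likely to sit."""
    rng = random.Random(seed)
    t = horizon_days / 365.0
    hits = 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        p_t = p0 * math.exp(-0.5 * sigma ** 2 * t + sigma * math.sqrt(t) * z)
        if p_lo <= p_t < p_hi:
            hits += 1
    return hits / n
```

Ranges near the current price carry far more probability mass than distant ones over short horizons, and widening the implied volatility input spreads that mass outward, which is what later drives the width of the optimal allocation.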
The fee term gives the LP's proportional claim on fees at each tick under pro-rata distribution: the LP's provisioned liquidity divided by the total liquidity at that tick. The reserve term captures the expected change in reserve value (positive or negative) as prices move. Because the fee term is concave and the reserve term is linear, the full objective is concave over a convex constraint set, guaranteeing a unique global optimum. For a complete derivation, see Powers (2024).
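A self-contained sketch of the optimization under the simplified per-tick objective described above (variable names are illustrative, not the paper's notation). Rather than calling a general-purpose solver, it exploits the KKT conditions directly: at the optimum, every funded tick has the same marginal return, so we solve for the allocation in closed form at a trial multiplier and bisect on that multiplier until the budget is met.

```python
import math

def optimal_tick_allocation(fees, liq, res, capital, iters=200):
    """Sketch: maximize  sum_i fees[i] * x[i] / (x[i] + liq[i]) + res[i] * x[i]
    subject to  sum(x) == capital,  x >= 0.
    Stationarity gives  fees[i]*liq[i] / (x[i]+liq[i])**2 + res[i] = lam
    for funded ticks, which yields x_i(lam) in closed form."""
    def alloc(lam):
        return [max(0.0, math.sqrt(f * L / (lam - r)) - L)
                for f, L, r in zip(fees, liq, res)]

    lo = max(res) + 1e-12                                   # demand blows up below this
    hi = max(f / L + r for f, L, r in zip(fees, liq, res))  # all x_i(hi) = 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(alloc(mid)) > capital:
            lo = mid   # allocated too much: raise the required marginal return
        else:
            hi = mid
    return alloc((lo + hi) / 2)
```

Equalizing marginal returns is what produces the result discussed below: as the budget grows, ticks with heavy expected depreciation near the current price saturate first, and capital spills outward into the flanking ticks.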
Optimal capital allocation is bimodal, straddling the current price
The optimal allocation did not concentrate liquidity at the current price itself. Instead, the solver directed nearly all capital to the ticks surrounding the current price, with extra capital deployed to ticks that were less crowded.
This result reflects two competing forces that the optimization balances simultaneously. The first is the fee-chasing effect: ticks closer to the current price see more swap activity, so the solver wants to push capital toward them to capture a larger share of fees. The second is the loss-avoiding effect: reserves provisioned at or near the current tick face the greatest expected depreciation under price movement. Among ticks with the most capital at stake (those at or above the current price), the current tick also has the least upside potential. Figure 8 highlights how the solver navigates this tension by shifting capital slightly away from the zone of maximum depreciation risk while remaining close enough to capture meaningful fee revenue.
As we allow the model to allocate more capital to the pool, a bimodal shape appears. This arises because the loss-avoiding effect scales linearly with capital while the fee-chasing effect exhibits diminishing returns due to the pro-rata fee distribution. At scale, the marginal fee earned by adding another dollar to a near-price tick no longer compensates for the expected reserve depreciation that dollar faces. The width of the bimodal shape depends on the estimate of price volatility: higher volatility pushes the two peaks farther apart, while lower volatility draws them closer together.
This bimodal allocation shape mirrors the bid-ask structure of a traditional order book. In both cases, liquidity is positioned slightly above and below the trading price, though for different reasons. A traditional order book is composed of discrete, cancellable orders that a market maker can update continuously in response to new information; the bid-ask spread is a real-time expression of inventory risk, adverse selection, and short-term price expectations. An AMM position, by contrast, is passive once deployed.
Trading volume on AMMs has declined
We retired this strategy because swap volume on Uniswap V3 declined as sophisticated traders migrated to other venues. Professional trading desks increasingly routed flow through private pools and Request for Quote systems where market makers operate with informational advantages similar to those enjoyed in traditional order books. These venues could offer tighter spreads and reduced slippage by internalizing flow and avoiding the transparent order book dynamics of public AMMs. Lower swap volume directly compressed the fee revenue component of our optimization, making even perfectly allocated liquidity positions less attractive in absolute terms.
Our decision to publish this research reflects a belief that current market dynamics may prove temporary. If volume migrates back to public AMMs, frameworks for optimal capital allocation will again become relevant, and we believe the principles demonstrated here will prove foundational to the next generation of liquidity provision strategies.
We take a different, first-principles driven approach to market inefficiencies in DeFi. This holds true for all of our strategies. We believe this process, repeated across asset classes and protocol architectures, is what will ultimately professionalize on-chain market making. The decentralized infrastructure where we operate is best supported by those who use it. If you are interested in learning more about our research, exploring opportunities to work with us, or growing your capital with us, we welcome the conversation.