This is a draft and a work in progress
> "Why do paper portfolios consistently turn in spectacular performances, even after adjusting for the visible costs of trading and after adjusting for risk - while actual portfolios strain to beat the market averages?"
Implementation shortfall results from the costs of trading as well as the opportunity costs of not executing all of the positions in the paper portfolio. The author's insights into the challenges that equity managers faced in translating investment ideas into realized portfolio performance illustrated the difference in performance between a paper and an actual portfolio. The concept had the great benefit of getting people to start thinking about all the different parts of the order implementation workflow where inefficiencies could generate additional costs, detracting from the overall performance of the fund.
Implementation shortfall was later redefined as the measurement of the difference between the execution price and a time-stamped benchmark price between any two points in the implementation process. The benchmark price represented an alternative decision price of another participant in the investment lifecycle, for example the trader.[^7] This re-definition was needed in order to shift the focus of implementation shortfall from explaining the differences between paper and actual returns to using it as a benchmark for measuring trading performance.
What happened was a linguistic confusion: fund performance (the change of price from one point in time to another) was equated with the quality of performance in accomplishing the activity and tasks of trading. Once this piece of reductionism was accomplished, the effectiveness of a trader (or a broker, or a strategy) was defined as, and reduced to, the ability to minimize a change in asset price.
[^2]: "The Use and Abuse of Implementation Shortfall: Examining the Dominance of Implementation Shortfall as a Trading Benchmark", Henry Yegerman, Markit Transaction Cost Analysis
Imagine you're building a simulation, and you want to compare results of a bunch of different strategies.
At any given time, a strategy has 4 key characteristics:
If you were simulating the strategy against historical data, it might be practical to just graph these values vs. time. But we don't want to know how the strategy would have performed; we want to know how it might perform in the messy, uncertain future. This means we simulate the strategy across hundreds of thousands of potential price trajectories. So unless you want hundreds of thousands of time-series graphs, there's gotta be another way of judging a strategy.
At the end of the day, what we care about is whether a given strategy will make money or lose money in the long run – essentially the third bullet point in the list above. And since we only care what happens in the long run, we can ignore time and just look at the strategy's dollar value at the very end of the sequence. This brings us to the asymptotic wealth growth rate, $G$:

$$
G = \lim_{t \to \infty} \frac{1}{t} \log \frac{V_t}{V_0}
$$

where $V_t$ is the value of the portfolio at time $t$.
We didn't invent $G$ (we found out about it from https://research.paradigm.xyz/uniswaps-alchemy, which you should def read), but it's a super useful tool. We can use it to see how strategies behave in various market conditions, broadly defined by the drift ($\mu$) and volatility ($\sigma$) parameters of Geometric Brownian Motion.
NOTE: When graphing these results, the surfaces actually intersect. Certain graphing libraries, such as matplotlib, can't handle occlusion and may misrepresent the results visually.
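As a rough illustration of how $G$ can be estimated numerically, here is a minimal sketch (all parameter values are assumptions, not figures from the article) that simulates buy-and-hold wealth across many GBM trajectories. For buy-and-hold under GBM, $G$ converges to $\mu - \sigma^2/2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed GBM parameters, for illustration only.
mu, sigma = 0.05, 0.8            # annual drift and volatility
dt, years, paths = 1 / 365, 20, 2000
steps = 365 * years

# Log-wealth increments of a buy-and-hold position across many trajectories.
shocks = rng.standard_normal((paths, steps))
log_wealth = ((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks).sum(axis=1)

# Estimate G = (1/T) * E[log(V_T / V_0)].
G_estimate = log_wealth.mean() / (steps * dt)

# For buy-and-hold under GBM this converges to mu - sigma**2 / 2 = -0.27 here:
# high volatility drags the growth rate negative despite positive drift.
```

Note how, with these assumed numbers, the drift is positive but $G$ is still negative; that gap is exactly what volatility-harvesting strategies try to exploit.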
A quote from the Paradigm article:

> This quantity [G] is important because strategies that optimize it perform better than strategies that don’t almost surely as time goes on
Let's say the Uniswap v2 ABC/XYZ trading pair holds 100 ABC and 300 XYZ. Assuming sufficient arbitrage takes place, the mechanics of constant product market making ensure that the market value of 100 ABC is equal to the market value of 300 XYZ. So if we look at LP's total holdings, 50% is denominated in ABC and 50% in XYZ. This is obvious when you deposit, but it's interesting that it remains true as prices shift.
This constant 50/50 inventory ratio is an example of portfolio rebalancing.
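The 50/50 property can be checked in a few lines. The reserves below are the 100 ABC / 300 XYZ from the example; the post-arbitrage price is a hypothetical value.

```python
import math

# Uniswap v2-style constant product pool: reserves of ABC and XYZ.
abc, xyz = 100.0, 300.0
k = abc * xyz

# The pool's implied price of ABC (in XYZ) is xyz / abc, so the ABC side is
# worth abc * (xyz / abc) = xyz -- exactly the value of the XYZ side: 50/50.
price = xyz / abc
assert math.isclose(abc * price, xyz)

# After arbitrageurs move the pool to a hypothetical new external price p,
# the reserves satisfy abc * xyz = k and xyz / abc = p simultaneously:
p = 12.0
abc_new, xyz_new = math.sqrt(k / p), math.sqrt(k * p)
assert math.isclose(abc_new * xyz_new, k)    # constant product preserved
assert math.isclose(abc_new * p, xyz_new)    # still a 50/50 value split
```

The last assertion is the interesting one: no matter where arbitrage pushes the price, the two sides of the pool end up with equal market value.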
These papers show how you can use rebalancing to harvest volatility and earn a premium over buy-and-hold
strategies:
https://www.gestaltu.com/2012/02/volatility-harvesting-and-the-importance-of-rebalancing.html/
We focused our research on strategies that maintain constant inventory ratios (50/50 or otherwise).
Some examples of how you might do this:
A. Rebalance your portfolio with manual trades every 24 hours. Between trades, your portfolio may drift away from 50/50. You're harvesting volatility directly.
B. Deposit to Uniswap v2 / Sushiswap. In these markets you always have a 50/50 ratio, and you harvest volatility from swap fees.
C. Deposit to Uniswap v3 over the full range (most similar to B).
D. Deposit to Uniswap v3 in a smaller range, but intelligently choose amounts (most similar to B).
E. Deposit to Uniswap v3 in a smaller range, and intelligently reinvest fees to target a 50/50 ratio (most similar to A).
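A quick sketch of option A vs. buy-and-hold on simulated GBM paths makes the rebalancing premium visible. All parameter values are assumptions, and "numeraire" stands in for the second token:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters: zero drift, high volatility -- for illustration only.
mu, sigma, dt = 0.0, 1.0, 1 / 365
steps, paths = 365 * 10, 2000

# Daily gross returns of the volatile token under GBM.
gross = np.exp((mu - 0.5 * sigma**2) * dt
               + sigma * np.sqrt(dt) * rng.standard_normal((paths, steps)))

# Buy-and-hold: 100% in the volatile token.
g_hold = np.log(gross.prod(axis=1)).mean() / (steps * dt)

# Option A: rebalance to 50% token / 50% numeraire every day.
g_rebal = np.log((0.5 * gross + 0.5).prod(axis=1)).mean() / (steps * dt)

# With mu = 0, buy-and-hold grows at about -sigma**2 / 2 = -0.5 per year,
# while the daily-rebalanced 50/50 portfolio only pays -sigma**2 / 8:
# the difference is the harvested volatility premium.
```

The same comparison with nonzero drift, or with swap fees added to the rebalanced leg, reproduces the surfaces discussed earlier.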
The thought behind this: first, we restrict the analysis to assets in Uniswap and ignore earned fees. The derivative of the inventory ratio ($R$) with respect to price ($P$) is always $\leq 0$. The only way to get $dR/dP = 0$ is to use an infinitely-wide range. The more concentrated the position, the more negative $dR/dP$, and the faster the impermanent loss (IL).
Fortunately, the more concentrated positions also earn fees faster. Fees are denominated in the token that's being sold (losing value) AND we're gaining more of that token because of IL.
To counteract this, we can take the fees and place them in a limit order such that any small price retracement will convert them to the token that we've been losing. This brings the ratio back towards 50/50. Note that doing this with principal rather than earned fees would constitute selling low + buying high.
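That decision rule can be sketched as follows. Every name and the retracement offset here are hypothetical; this is an illustration of the idea, not anyone's actual implementation.

```python
def fee_rebalance_order(fee_amount: float, current_price: float,
                        offset: float = 0.002) -> dict:
    """Hypothetical sketch: place accumulated fees (denominated in the token
    we keep gaining) into a limit order just behind the current price, so any
    small retracement fills it and converts the fees into the token we've been
    losing, nudging the inventory ratio back toward 50/50.

    'offset' controls how small a retracement suffices; its sign would flip
    depending on which direction price has been trending.
    """
    return {
        "side": "sell_fee_token",                   # sell the surplus token
        "size": fee_amount,                         # fees only -- using principal
                                                    # would be selling low, buying high
        "limit_price": current_price * (1 - offset),
    }

order = fee_rebalance_order(fee_amount=10.0, current_price=100.0)
```

The key constraint from the text is encoded in the `size` field: only earned fees go into the order, never principal.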
So the idea is to concentrate your liquidity and harvest extra vol by putting earned fees into limit orders. And every time you hit 50/50, recenter your liquidity around the current price. The tricky thing is that for a given concentration and price drift, there's a minimum amount of volatility required to reach 50/50 and be profitable:
This makes it impossible for an autonomous, permissionless system to run the strategy – it doesn't know price drift ($\mu$) or volatility ($\sigma$) ahead of time, so it can't figure out the optimal liquidity concentration.
This strategy also requires just-in-time (JIT)
limit order management by bots, and in a Black Swan event it gets wrecked.
NOTE: The comparison is less robust when made against existing liquidity infrastructure.
NOTE: Such just-in-time liquidity is first referenced in Possibility & Impossibility of Liquidity Adaptation in Prediction Markets, Rafael Frongillo, Harvard University.[^3]
That said, we may explore it in the future. And I'm 90% sure that Charm's strategy is an approximation of this. They just bake in a position width based on historical price action and avoid the JIT aspect in favor of 24-hour cycles.
A toy example of a typical yield opportunity: a certain amount of yield is distributed pro-rata to "stakers". There are some non-Yearn stakers, so Yearn will never have a 100% pro-rata share.
There are sliders to control performance fees, management fees, the emissions size of the yield source, and the non-Yearn TVL farming the source:
<iframe src="https://www.desmos.com/calculator/3wck8ny2xt?embed" width="500" height="500" style="border: 1px solid #ccc" frameborder=0></iframe>
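The same toy model can be written as a function. The parameter names and example numbers below are hypothetical, chosen only to mirror the sliders:

```python
def net_apy(emissions, yearn_tvl, other_tvl, perf_fee=0.20, mgmt_fee=0.02):
    """Toy yield model (all numbers hypothetical): emissions are split
    pro-rata between Yearn's TVL and everyone else's, then performance
    and management fees come out.
    """
    share = yearn_tvl / (yearn_tvl + other_tvl)   # pro-rata share, always < 100%
    gross = emissions * share / yearn_tvl         # gross yield on Yearn's TVL
    return gross * (1 - perf_fee) - mgmt_fee      # net yield to depositors

# e.g. 1M/year of emissions, 4M of Yearn TVL, 6M of non-Yearn TVL:
apy = net_apy(1_000_000, 4_000_000, 6_000_000)   # roughly 0.06, i.e. 6% net
```

Note that `gross` simplifies to `emissions / (yearn_tvl + other_tvl)`: growing Yearn's TVL doesn't raise the gross yield, it just captures a bigger slice of a fixed pie.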
The purpose of models is not to fit the data but to sharpen the questions. —Sam Karlin
Represent the crosstalk (coherence) penalty by the coefficient $k$. The system gets less work done as it gets more load.
Doesn't it seem odd to assume that crosstalk is a constant? A: It's not; the amount of crosstalk-related work is a function of $N$.
Load is concurrency. Concurrency is the number of requests in progress. It's surprisingly easy to measure: sum(latency)/interval (this is Little's law).
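For example, with a hypothetical set of request latencies collected over a one-second window:

```python
# Hypothetical request latencies (seconds) completed during the window.
latencies = [0.120, 0.480, 0.050, 0.350]
interval = 1.0  # measurement window length (seconds)

# Average concurrency = sum(latency) / interval: each request contributes
# its in-flight time to the window, so total in-flight time per unit of
# wall-clock time is the mean number of requests in progress.
concurrency = sum(latencies) / interval
```

One second of wall-clock time containing one second of total in-flight time means that, on average, one request was in progress.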
i.e., you can forecast the workload failure boundary.
The USL reveals the amount of serialization vs. crosstalk:

- Scalability is formally definable, and black-box observable.
- Scalability is nonlinear; this region is the failure boundary.
- Scalability is a function with parameters you can estimate.
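The USL fits in a few lines. The parameter values here are assumptions for illustration; `sigma` is the serialization fraction and `k` the crosstalk coefficient from above:

```python
import math

def usl_throughput(n, lam=1000.0, sigma=0.05, k=0.001):
    """Universal Scalability Law: throughput at concurrency n.

    lam   -- throughput of a single unit of concurrency (assumed value)
    sigma -- serialization (contention) fraction
    k     -- crosstalk (coherence) penalty coefficient
    """
    return lam * n / (1 + sigma * (n - 1) + k * n * (n - 1))

# Throughput peaks at n* = sqrt((1 - sigma) / k); past that point, adding
# load reduces total work done -- the failure boundary.
n_star = math.sqrt((1 - 0.05) / 0.001)   # about 31 for these parameters
```

With measured (concurrency, throughput) pairs you can estimate `lam`, `sigma`, and `k` by least squares, which is what makes the law black-box observable.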
$$
P_{ex} = P_{VWAP}^{0} \exp\left(\sigma_{VWAP}\, \varepsilon_a \sqrt{t}\right) + \sigma_{H}\, \varepsilon_b
$$

$P_{ex}$: expected execution price at the end of the holding period
$P_{VWAP}^{0}$: VWAP on the risk evaluation day (Sept. 1996)
$\sigma_{VWAP}$: historical volatility of VWAP over the observation period
$\sigma_{H}$: SE of the distribution of daily trade prices, accumulated over the observation period (Oct. 1995-Sept. 1996) and standardized by each day's VWAP
$t$: holding period (one day in this simulation)
$\varepsilon_a, \varepsilon_b$: standard normal random numbers
[^4]: "Measurement of liquidity risk in the context of market risk calculation", Jun Muranaga and Makoto Ohsawa, Institute for Monetary and Economic Studies, Bank of Japan
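A Monte Carlo sketch of the formula above. The parameter values are placeholders, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder inputs -- not the paper's actual estimates.
p_vwap0 = 100.0     # VWAP on the risk evaluation day
sigma_vwap = 0.02   # historical volatility of VWAP
sigma_h = 0.5       # SE of VWAP-standardized daily trade prices
t = 1.0             # holding period (days)

n = 100_000
eps_a = rng.standard_normal(n)  # market-risk shock (lognormal term)
eps_b = rng.standard_normal(n)  # additive execution-price shock

# P_ex = P_VWAP^0 * exp(sigma_VWAP * eps_a * sqrt(t)) + sigma_H * eps_b
p_ex = p_vwap0 * np.exp(sigma_vwap * eps_a * np.sqrt(t)) + sigma_h * eps_b
```

The spread of `p_ex` then combines market risk (the multiplicative exp term) with the execution-price noise captured by $\sigma_H$, which is the point of embedding liquidity risk in the market-risk calculation.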