Expected Partial Moments
If you haven't already, please view my recent posts for more on partial moments in a behavioral finance and statistics context:
- The Elements of Variance
- Nonlinear Nonparametric Statistics Using Partial Moments
- Behavioral Finance and Partial Moments
I often get questions regarding the stability of partial moments out of sample, subtly translated as, “How well can partial moments predict the future?” This is a misplaced question for many, many reasons. We cannot even reliably forecast means, let alone higher moments (and don’t even start on the weather!). So while partial moments exhibit greater stability than means or variances, they, along with every other historical metric, are useless for projecting future risks and rewards.
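For readers who skipped the earlier posts: a lower partial moment (LPM) averages the deviations below a target, and an upper partial moment (UPM) averages the deviations above it. At degree 2, taken about the mean, the two sum to the variance. Here is a minimal NumPy sketch with hypothetical return data (the function names are mine, not a library's):

```python
import numpy as np

def lpm(degree, target, x):
    """Lower partial moment: average deviation below target, raised to degree."""
    return np.mean(np.maximum(target - x, 0) ** degree)

def upm(degree, target, x):
    """Upper partial moment: average deviation above target, raised to degree."""
    return np.mean(np.maximum(x - target, 0) ** degree)

# Hypothetical daily returns; degree-2 partial moments about the mean
# decompose the variance into downside and upside components.
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, 1000)
t = returns.mean()
print(lpm(2, t, returns) + upm(2, t, returns))  # equals np.var(returns)
print(np.var(returns))
```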
POPULATIONS
The predominant purported solution borrows its line of reasoning from classical statistical inference and the physical sciences. Its proponents argue that we are estimating true population parameters, and that the samples we have observed need to be resampled (over & over & over again) in order to glean a more representative measure of this population parameter.
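For concreteness, that recipe is essentially the bootstrap. A minimal sketch of it, with hypothetical daily return data standing in for the observed sample:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(0.0005, 0.01, 250)  # one hypothetical year of daily returns

# Resample the observed data over & over to build a sampling
# distribution for the "population" mean...
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# ...and report a confidence interval around the estimated parameter.
print(np.percentile(boot_means, [2.5, 97.5]))
```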
“If electrons had feelings, imagine how much harder physics would be.” ~ Richard Feynman
You, I, and every other market participant are not the same individual across time. We are people, and people change, as reflected in our risk profiles (see “path-dependent utility”). This reality has disastrous consequences for the statistical notion of populations in finance. Even if we had the same physical population of market participants, these dynamic risk profiles make it very unlikely that two samples will ever represent the same population. Now if you remove the constraint of the same physical population and its size, we are left with unique results and no way to generate a reliable confidence interval or distribution around these estimated parameters. You are left with nothing resembling a stable population onto which to project these unique inferences, placing the onus of prediction squarely on unjustifiable assumptions.
But, nothing can be something…
When measuring darkness, we measure the amount of light present, however dim.
UNCERTAINTY FROM CERTAINTY
If we know the future is uncertain, then greater certainty today runs directly counter to that future state. Frank Knight famously distinguished this uncertain future state from the common notion of risk. There is a parallel line of reasoning from information theory and maximum entropy to Knight’s uncertainty. Open systems tend towards maximum entropy, where maximum entropy means total unpredictability of information content. This is represented by an equal probability of occurrence, as in a fair coin flip.

[Graph: entropy of a binary outcome versus its probability, peaking at the fair coin flip.]
Entropy (H) is defined as:

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x)$$

where the probability of each outcome is multiplied by the information content of that outcome, giving us the average amount of information per outcome. We can see that information content declines as the probability of an outcome increases, due to the definition of I(x):

$$I(x) = -\log_2 p(x)$$
So under this entropy definition, a probability of 0 has the same effect as a probability of 1: both contribute zero bits. They both represent certainty, which in our case is exactly where we do not wish to be!
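A quick numerical sketch of both definitions for the two-outcome, coin-flip case (base-2 logs, so the units are bits):

```python
import numpy as np

def information(p):
    """Information content I(x) = -log2 p(x): rarer outcomes carry more bits."""
    return -np.log2(p)

def entropy(p):
    """Shannon entropy H of a two-outcome variable with P(heads) = p."""
    probs = np.array([p, 1 - p])
    probs = probs[probs > 0]          # convention: 0 * log2(0) = 0
    return -np.sum(probs * np.log2(probs))

for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
    print(p, entropy(p))  # 0 bits at p = 0 or 1 (certainty); 1 bit at the fair coin
```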
By accounting for entropy, we demonstrated significant out-of-sample efficiency that other metrics could not match (Mean/Variance, Jensen's Alpha, Mean/Semivariance, Sharpe Ratio)...for all utility profiles across the risk-aversion spectrum.
It is imperative to note we are not offering a point prediction; rather, we are scrutinizing the relevance of the historical data set with its entropy proxy. Thus, we avoid any “estimation errors” associated with projecting a parameter (mean, standard deviation) out of sample when that parameter was derived from a sample of a “population” that cannot exist.
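To make the comparison concrete, here is a rough sketch contrasting the classical Sharpe ratio with a degree-1 UPM/LPM ratio on hypothetical fat-tailed returns. The entropy adjustment itself is developed in the paper referenced below; this sketch shows only the raw metrics:

```python
import numpy as np

def upm(n, t, x): return np.mean(np.maximum(x - t, 0) ** n)
def lpm(n, t, x): return np.mean(np.maximum(t - x, 0) ** n)

def sharpe(x, rf=0.0):
    """Classical Sharpe ratio: mean excess return over standard deviation."""
    return (x.mean() - rf) / x.std()

def upm_lpm_ratio(x, target=0.0):
    """Degree-1 UPM/LPM ratio: average gain above target over average loss below."""
    return upm(1, target, x) / lpm(1, target, x)

rng = np.random.default_rng(7)
returns = rng.standard_t(df=3, size=1000) * 0.01  # hypothetical fat-tailed returns
print(sharpe(returns), upm_lpm_ratio(returns))
```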
THE EFFECT OF TIME
There is yet another layer underlying this certainty argument: Brownian motion. When we annualize standard deviation, we typically do so by multiplying by the square root of time, which is only an approximation. However, from Brownian motion we do know that volatility should increase with the passage of time...by exactly what order isn't really of consequence. Historical metrics do not cater to this natural increase in volatility (uncertainty), and eventually fail out of sample.
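A quick arithmetic illustration of that square-root-of-time scaling:

```python
import numpy as np

daily_vol = 0.01  # a hypothetical 1% daily standard deviation
for days in [1, 21, 63, 252]:
    # Under Brownian motion, variance grows linearly with the horizon,
    # so volatility grows with its square root: uncertainty compounds with time.
    print(f"{days:>3} days: {daily_vol * np.sqrt(days):.4f}")
```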
BEHAVIORAL EXPLANATION
This is a very counter-intuitive exercise. People love certainty and go to great lengths to avoid ambiguity, which speaks to a further behavioral finance component of this argument. Maurice Allais famously demonstrated how powerful certainty is, and we recently replicated his findings in this paper. When certainty is prevalent, the risk-aversion of the constituents is higher, creating a fragile situation with less overall information content.
PROPOSED SOLUTION
By objectively penalizing this certainty in our method, we achieved our desired results: compensating for and upholding Knightian uncertainty (commonly characterized by a maximum entropy state) while offering a more accurate representation of the future state. We present “a”, not “the”, way to accomplish this in the following paper:
Predicting Risk/Return Performance Using Upper Partial Moment/Lower Partial Moment Metrics
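Purely as a hypothetical sketch of what “penalizing certainty” could look like (an illustration of the idea, not the construction in the paper): weight a historical UPM/LPM ratio toward an uninformative coin-flip baseline as the sample's above/below-target split drifts away from maximum entropy.

```python
import numpy as np

def upm(n, t, x): return np.mean(np.maximum(x - t, 0) ** n)
def lpm(n, t, x): return np.mean(np.maximum(t - x, 0) ** n)

def entropy_weight(x, target=0.0):
    """Bernoulli entropy (in bits) of the above/below-target split:
    1 at maximum entropy (a 50/50 split), 0 under full certainty."""
    p = np.mean(x > target)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def penalized_ratio(x, target=0.0):
    # Hypothetical: shrink the historical ratio toward 1 (a coin flip)
    # as certainty rises, i.e. trust history less when entropy is low.
    w = entropy_weight(x, target)
    if w == 0.0:
        return 1.0  # a fully one-sided sample: fall back to the coin-flip baseline
    return w * (upm(1, target, x) / lpm(1, target, x)) + (1 - w) * 1.0

rng = np.random.default_rng(3)
print(penalized_ratio(rng.normal(0.001, 0.01, 500)))
```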
"I skate to where the puck is going to be, not where it has been." ~Wayne Gretzky
If you’d like to learn more, feel free to reach out. Email is best, given the truncated comment format here and the technical nature of the underlying arguments.
COMMENTS

Thought-provoking.
From a scarcity perspective, a decision maker can handle only a finite amount of uncertainty. When the future one is deciding on (or the past one is explaining) has increasing uncertainty, the implication is that the decision maker will start ignoring (putting zero regard/weight on) whatever overwhelms their capacity. Whether an intuitive investor or the most advanced modeler, at some point we start ignoring more information and variables in order to make a choice in the present. There will be a subjective element to what we prefer to ignore. Maximum entropy is still based on an assumption of feasible alternatives. For something to be "totally unpredictable" goes beyond an equal probability for all outcomes; it is not even being able to imagine all the outcomes.
> Historical metrics do not cater to this natural increase in volatility (uncertainty), and eventually fail out of sample.

True. That's one of the reasons MPT is not only dull but also wrong.
I need to read more of what you reference here. I don't know if you gave me some of this already.