Nonlinear Nonparametric Statistics Using Partial Moments
I recently published the post, "The Elements of Variance". If you haven't already, please give it a read. It explains in general terms what partial moments are and why they are relevant for behavioral finance and statistics. This current post may also help place the prior post into better perspective.
I will explain the use of partial moments in each function while avoiding an overly technical discussion.
CORRELATION AND DEPENDENCE
The partial moment correlation coefficient is the sum of the co-movement quadrants (CUPM and CLPM) less the sum of the divergent quadrants (DUPM and DLPM), divided by the total of all four quadrants, measured either by frequency (degree 0) or by area (degree 1).
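As a minimal sketch of this calculation (written in Python rather than NNS's R implementation, with a hypothetical function name), the degree-1 quadrant correlation taken at each variable's mean can be computed as:

```python
import numpy as np

def pm_correlation(x, y):
    """Degree-1 partial moment correlation at the means:
    (co-movement quadrants - divergent quadrants) / all four quadrants."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    tx, ty = x.mean(), y.mean()
    up = lambda z, t: np.maximum(z - t, 0.0)   # upside deviations from target
    dn = lambda z, t: np.maximum(t - z, 0.0)   # downside deviations from target
    clpm = np.mean(dn(x, tx) * dn(y, ty))      # co-lower: both below their means
    cupm = np.mean(up(x, tx) * up(y, ty))      # co-upper: both above their means
    dlpm = np.mean(up(x, tx) * dn(y, ty))      # divergent: x above, y below
    dupm = np.mean(dn(x, tx) * up(y, ty))      # divergent: x below, y above
    return (cupm + clpm - dupm - dlpm) / (cupm + clpm + dupm + dlpm)
```

For perfectly co-moving data the divergent quadrants are empty and the result is 1; for perfectly inverse data the co-movement quadrants are empty and the result is -1.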
Dependence is measured by zooming in on each quadrant and generating a correlation coefficient for just those observations. When X and Y are perfectly correlated, they are dependent upon one another. However, when X and Y are perfectly inversely correlated, they are still dependent upon one another! By using the absolute value of each quadrant's correlation coefficient, we can determine the dependence between variables.
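One way to sketch this idea (a simplification of the NNS routine, with a hypothetical function name, using the sample means as quadrant targets) is to weight the absolute correlation within each quadrant by the number of observations it contains:

```python
import numpy as np

def pm_dependence(x, y):
    """Count-weighted average of |correlation| computed within each
    partial moment quadrant, so inverse relationships still register."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    tx, ty = x.mean(), y.mean()
    quadrants = [(x <= tx) & (y <= ty), (x > tx) & (y > ty),   # co-movement
                 (x <= tx) & (y > ty), (x > tx) & (y <= ty)]   # divergent
    weighted, n = 0.0, 0
    for m in quadrants:
        if m.sum() > 2:  # need a few points for a meaningful correlation
            r = np.corrcoef(x[m], y[m])[0, 1]
            weighted += abs(r) * m.sum()
            n += m.sum()
    return weighted / n if n else 0.0
```

Both a perfectly correlated and a perfectly inversely correlated pair score a dependence near 1, which is exactly the point: either way, one variable tells you everything about the other.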
NONLINEAR NONPARAMETRIC REGRESSION
Regression analysis is another method of determining the relationship between variables. Using the means of the partial moment quadrants described above (the clusters), we can approximate this relationship. Increasing the number of sub-quadrants yields more clusters with which to fit the observations.
The image above illustrates the progression of increasing the number of partial moment quadrant clusters in the regression on a nonlinear function.
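A toy sketch of this clustering idea (in Python, with hypothetical names; NNS.reg itself is more sophisticated, handling prediction, dimensionality, and order selection): recursively partition the observations at the conditional means and use each cluster's mean point as a regression point.

```python
import numpy as np

def quadrant_cluster_points(x, y, depth=3):
    """Recursively split observations at the mean of x, returning the
    (mean x, mean y) of each resulting cluster, sorted by x."""
    def split(xs, ys, d):
        if d == 0 or len(xs) < 2:
            return [(xs.mean(), ys.mean())]
        lo = xs <= xs.mean()
        out = []
        for mask in (lo, ~lo):
            if mask.any():
                out += split(xs[mask], ys[mask], d - 1)
        return out
    pts = sorted(split(np.asarray(x, float), np.asarray(y, float), depth))
    return np.array(pts)

# Fit a nonlinear function: each added depth level doubles the clusters,
# so the piecewise-linear fit tracks the curvature more closely.
x = np.linspace(-1.0, 1.0, 101)
pts = quadrant_cluster_points(x, x ** 2, depth=3)
y_hat = np.interp(0.5, pts[:, 0], pts[:, 1])  # prediction at x = 0.5
```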
ANALYSIS OF VARIANCE (ANOVA)
ANOVA is a statistical test that determines whether the means of multiple variables are equal. The lower partial moment (LPM) ratio has a special feature that makes it well suited to comparing means across distributions.
The degree-1 lower partial moment of X from its mean u is denoted LPM(1, u, X). Its ratio, LPM(1, u, X) / [LPM(1, u, X) + UPM(1, u, X)], always equals exactly 50% when the target is the mean, because the deviations below the mean sum to the same total as the deviations above it. No matter the shape of the distribution or the number of observations, the continuous cumulative distribution function (CDF) that the LPM(1, u, X) ratio represents retains this feature. Using this knowledge, we can compare any number of distributions and their means via LPMs from a shared target, as shown above.
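This property is easy to verify from scratch (a Python sketch with illustrative function names, not the NNS R API): even for heavily skewed data, the degree-1 ratio at the mean is 0.5.

```python
import numpy as np

def lpm(degree, target, x):
    """Lower partial moment: average downside deviation from `target`."""
    return np.mean(np.maximum(target - np.asarray(x, float), 0.0) ** degree)

def upm(degree, target, x):
    """Upper partial moment: average upside deviation from `target`."""
    return np.mean(np.maximum(np.asarray(x, float) - target, 0.0) ** degree)

def lpm_ratio(degree, target, x):
    """CDF-like position of `target` within x: LPM / (LPM + UPM)."""
    l, u = lpm(degree, target, x), upm(degree, target, x)
    return l / (l + u)

skewed = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
lpm_ratio(1, skewed.mean(), skewed)  # 0.5, despite the heavy right skew
```

Evaluating several samples' ratios from one shared target, and testing how far each sits from 0.5, is the basis of the means comparison described above.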
NONLINEAR SCALING NORMALIZATION
Normalization is a technique for aligning variables so they can be compared. This would be trivial if the variables in question were linearly related, but that is rarely, if ever, the case. By compensating for the amount of nonlinearity present between two variables, we can align variables that differ by orders of magnitude while retaining their original distributional characteristics. Comparing the divergent and co-partial moment matrices, as in the correlation coefficient above, provides this measure of nonlinearity.
BENEFITS & FEATURES
The purpose of this post was to demonstrate some of the capabilities partial moments offer for various statistical analyses, especially when dealing with nonlinearities. This ubiquity is not surprising when partial moments are viewed through the "Elements of Variance" lens. It should not be difficult to extrapolate the advantage of partial moments in a behavioral finance context, where upside and downside variance have distinct interpretations.
We offer these partial moment-based methods (correlation and dependence, regression, ANOVA, normalization, and more) in a convenient R package, "NNS".
If you'd like to learn more, feel free to reach out, roll up your sleeves with our NNS vignettes available on CRAN, or explore detailed examples with accompanying R code available here:
Please check out the following post on the relevance of partial moments to behavioral finance. Thank you for your interest in NNS!