Forecasters#

Work in progress.

This module contains classes that make forecasts.

It implements the models described in Chapter 7 of the book (Examples).

For example, historical means of market returns, and covariances, are forecasted here. These are used internally by cvxportfolio objects.

In addition, some of the classes defined here can cache the result of their computation online, so that if multiple copies of the same forecaster need access to the estimated value (as is the case in MultiPeriodOptimization policies) the expensive evaluation is only done once. The same cache is stored on disk when a back-test ends, so the next time the user runs a back-test with the same universe and market data, the forecasted values are retrieved automatically.
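
The online caching described above can be sketched as a simple per-timestamp memoization. The class and method names below are illustrative only, not cvxportfolio's internal API:

```python
import numpy as np
import pandas as pd

class CachedMeanForecaster:
    """Sketch of a forecaster that caches its estimate per timestamp."""

    def __init__(self):
        self._cache = {}  # timestamp -> cached forecast

    def value_at(self, t, past_returns):
        if t not in self._cache:
            # the expensive estimation runs only once per timestamp,
            # even if many consumers request the same value
            self._cache[t] = past_returns.iloc[:, :-1].mean()
        return self._cache[t]
```

A second call with the same timestamp returns the already-computed object, which is the behavior that makes sharing one forecaster across the stages of a multi-period policy cheap.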

class cvxportfolio.forecast.HistoricalMeanReturn#

Historical mean returns.

This ignores both the cash returns column and all missing values.
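
As a sketch of what this estimate looks like with plain pandas (the data below is made up, and the convention that cash is the last column is an assumption for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical past returns for two assets plus a cash column;
# NaN marks a period in which an asset was not yet traded.
past_returns = pd.DataFrame({
    "AAPL": [0.01, np.nan, 0.03],
    "GOOG": [0.02, 0.01, np.nan],
    "cash": [0.0001, 0.0001, 0.0001],
})

# Drop the cash column and take the mean over time;
# pandas skips missing values by default.
mean_forecast = past_returns.iloc[:, :-1].mean()
```

Here `mean_forecast["AAPL"]` averages only the two observed returns, 0.01 and 0.03.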

class cvxportfolio.forecast.HistoricalVariance(kelly: bool = True)#

Historical variances of non-cash returns.

Parameters:

kelly (bool) – if True compute \(\mathbf{E}[r^2]\), else \(\mathbf{E}[r^2] - {\mathbf{E}[r]}^2\). The second corresponds to the classic definition of variance, while the first is what is obtained by Taylor approximation of the Kelly gambling objective. (See page 28 of the book.)
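
The two definitions can be compared directly on a toy return series (the numbers are made up for illustration):

```python
import numpy as np

r = np.array([0.01, -0.02, 0.03, 0.00])

var_kelly = np.mean(r**2)                     # E[r^2]
var_classic = np.mean(r**2) - np.mean(r)**2   # E[r^2] - E[r]^2

# the second definition matches numpy's population variance
assert np.isclose(var_classic, np.var(r))
```

The two agree when the mean return is zero, and the kelly version is always at least as large.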

class cvxportfolio.forecast.HistoricalStandardDeviation(kelly: bool = True)#

Historical standard deviation.

class cvxportfolio.forecast.HistoricalMeanError#

Historical standard deviations of the mean of non-cash returns.

For a given time series of past returns \(r_{t-1}, r_{t-2}, \ldots, r_0\) this is \(\sqrt{\text{Var}[r]/t}\). When there are missing values we skip them, both in computing the variance and in counting the observations \(t\).
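
A sketch of this computation with pandas, on made-up data with one missing value (note how both the variance and the count skip the NaN):

```python
import numpy as np
import pandas as pd

past = pd.Series([0.01, np.nan, 0.03, -0.01, 0.02])

count = past.count()          # non-missing observations only
variance = past.var(ddof=0)   # population variance, NaNs skipped
mean_error = np.sqrt(variance / count)
```

Here `count` is 4, not 5, so the missing value inflates neither the variance nor the effective sample size.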

class cvxportfolio.forecast.HistoricalLowRankCovarianceSVD(num_factors: int, svd_iters: int = 10, svd: str = 'numpy')#

Build factor model covariance using truncated SVD.
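
A minimal sketch of the general idea, assuming the standard factor-model decomposition \(\Sigma \approx F F^T + D\) with \(F\) obtained from a truncated SVD of the (centered) returns matrix. This is illustrative numpy code, not cvxportfolio's implementation, and it uses synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 500, 10, 2  # periods, assets, number of factors

returns = rng.normal(size=(T, N)) * 0.01

# Center the returns and take the truncated SVD; the top right
# singular vectors give the factor exposures.
X = returns - returns.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Low-rank part: scaled loadings F, so F @ F.T approximates Sigma.
F = (Vt[:k].T * s[:k]) / np.sqrt(T)

# Diagonal part: residual idiosyncratic variances.
d = np.var(X, axis=0) - (F**2).sum(axis=1)

low_rank_cov = F @ F.T + np.diag(d)
```

By construction the diagonal of `low_rank_cov` matches the sample variances exactly, while the off-diagonal entries keep only the top-\(k\) factor structure.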

class cvxportfolio.forecast.HistoricalFactorizedCovariance(kelly: bool = True)#

Historical covariance matrix, sqrt factorized.

Parameters:

kelly (bool) – if True compute each \(\Sigma_{i,j} = \overline{r^{i} r^{j}}\), else \(\overline{r^{i} r^{j}} - \overline{r^{i}}\overline{r^{j}}\). The second case corresponds to the classic definition of covariance, while the first is what is obtained by Taylor approximation of the Kelly gambling objective. (See page 28 of the book.) In the second case, the estimated covariance is the same as what is returned by pandas.DataFrame.cov(ddof=0), i.e., we use the same logic to handle missing data.
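
On data with no missing values, the two definitions and the stated equivalence with pandas can be checked directly (synthetic returns, for illustration only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
returns = pd.DataFrame(rng.normal(size=(100, 3)) * 0.01,
                       columns=["A", "B", "C"])

# kelly=True: Sigma_ij is the mean of the products r_i * r_j
sigma_kelly = (returns.T @ returns) / len(returns)

# kelly=False: subtract the outer product of the means; with no
# missing data this equals pandas' population covariance
means = returns.mean()
sigma_classic = sigma_kelly - np.outer(means, means)

assert np.allclose(sigma_classic, returns.cov(ddof=0))
```

With missing data the estimator instead uses pairwise-complete observations, following the same logic as `pandas.DataFrame.cov`.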