# Large-Scale Portfolio Allocation Under Transaction Costs and Model Uncertainty

Nikolaus Hautsch, Department of Statistics and Operations Research, Faculty of Business, Economics and Statistics, University of Vienna, as well as Vienna Graduate School of Finance (VGSF) and Center for Financial Studies (CFS), Frankfurt. Stefan Voigt, Vienna Graduate School of Finance (VGSF).
We thank Gregor Kastner, Allan Timmermann and participants of the Vienna-Copenhagen Conference on Financial Econometrics, 2017, the 3rd Vienna Workshop on High-Dimensional Time Series in Macroeconomics and Finance, the Conference on Big Data in Predictive Dynamic Econometric Modeling, Pennsylvania, the Conference on Stochastic Dynamical Models in Mathematical Finance, Econometrics, and Actuarial Sciences, Lausanne, 2017, the 10th annual SoFiE conference, New York, the FMA European Conference, Lisbon, the 70th European Meeting of the Econometric Society, Lisbon, the 4th annual conference of the International Association for Applied Econometrics, Sapporo, and the Annual Conference 2017 of the German Economic Association for valuable feedback.
Moreover, we gratefully acknowledge the use of computing resources provided by the Vienna Scientific Cluster.

###### Abstract

We theoretically and empirically study large-scale portfolio allocation problems when transaction costs are taken into account in the optimization problem. We show that transaction costs act on the one hand as a turnover penalization and on the other hand as a regularization, which shrinks the covariance matrix. As an empirical framework, we propose a flexible econometric setting for portfolio optimization under transaction costs, which incorporates parameter uncertainty and combines predictive distributions of individual models using optimal prediction pooling. We consider predictive distributions resulting from high-frequency based covariance matrix estimates, daily stochastic volatility factor models and regularized rolling-window covariance estimates, among others. Using data capturing several hundred Nasdaq stocks over more than 10 years, we illustrate that transaction cost regularization (even to a small extent) is crucial in order to produce allocations with positive Sharpe ratios. We moreover show that performance differences between individual models decline when transaction costs are considered. Nevertheless, it turns out that adaptive mixtures based on high-frequency and low-frequency information yield the highest performance. A portfolio bootstrap reveals that naive allocations and global minimum variance allocations (with and without short-sale constraints) are significantly outperformed in terms of Sharpe ratios and utility gains.

JEL classification: C58, C52, C11, G11

Keywords: Portfolio choice, transaction costs, model uncertainty, regularization, high-frequency data

## 1 Introduction

Optimizing large-scale portfolio allocations remains a challenge for econometricians and practitioners due to (i) the noisiness of parameter estimates in large dimensions, (ii) model uncertainty and time variations in individual models’ forecasting performance, and (iii) the presence of transaction costs, making otherwise optimal turnover strategies costly and thus sub-optimal.

Though there is a huge literature on the statistics of portfolio allocation, the literature is widely fragmented and typically only focuses on partial aspects. For instance, a substantial part of the literature concentrates on the problem of estimating vast-dimensional covariance matrices by means of regularization techniques, see, e.g., Ledoit and Wolf (2003), Ledoit and Wolf (2004), Fan et al. (2008) and Ledoit and Wolf (2012), among others.
This literature has been boosted by the availability of high-frequency (HF) data, which opens an additional channel to increase the precision of covariance estimates and forecasts. (Footnote 1: The benefits of high-frequency data for estimating covariances have been documented by a wide range of studies, e.g., Andersen and Bollerslev (1998), Andersen et al. (2001), Barndorff-Nielsen (2002), Barndorff-Nielsen and Shephard (2004).)
Another segment of the literature studies the effects of ignoring parameter uncertainty and model uncertainty arising from changing market regimes and structural breaks. (Footnote 2: The effect of ignoring estimation uncertainty is considered, among others, by Jobson et al. (1979), Jorion (1986), Chopra and Ziemba (1993), Uppal and Wang (2003) and DeMiguel et al. (2009). Model uncertainty is investigated, for instance, by Wang (2005), Garlappi et al. (2007) and Pflug et al. (2012).)

Further literature is devoted to the role of transaction costs in portfolio allocation strategies. In fact, in the presence of transaction costs, the benefits of re-allocating wealth may be smaller than the costs associated with turnover. (Footnote 3: See Brandt et al. (2009) for an excellent review of common pitfalls in portfolio optimization.) While this aspect is studied theoretically in mathematical finance, see, e.g., Davis and Norman (1990), there are only very few empirical approaches that explicitly account for transaction costs, see, for instance, Acharya and Pedersen (2005) and Gârleanu (2009).

In fact, in most empirical studies, transaction costs are incorporated *ex post* by analyzing to what extent a certain portfolio strategy would have survived in the presence of transaction costs of a given size, see, e.g., Hautsch et al. (2015) or Bollerslev et al. (2016). In financial practice, however, the costs of portfolio re-balancing are taken into account *ex ante* and thus are part of the optimization problem.

The objective of this paper is to theoretically and empirically analyze large-dimensional portfolio allocation problems under transaction costs and model uncertainty. Our contribution is twofold: On the one hand, we show that the explicit inclusion of transaction costs in the optimization problem implies a regularization of the estimation problem. In particular, *quadratic* transaction costs can be interpreted as shrinkage of the covariance matrix towards a diagonal matrix, implying a trade-off between transaction costs and potential diversification benefits. Transaction costs *proportional* to the amount of re-balancing imply a regularization of the variance-covariance matrix, which acts similarly to the least absolute shrinkage and selection operator (Lasso) by Tibshirani (1996) in a regression problem and thus puts more weight on a buy-and-hold strategy.
The regularizing effect of transaction costs results in better-conditioned covariance estimates and moreover implies a turnover penalization, which significantly reduces the amount (and frequency) of re-balancing. These mechanisms yield strong improvements of portfolio allocations in terms of Sharpe ratios or utility-based measures compared to the case where transaction costs are neglected.

On the other hand, we perform a reality check by empirically analyzing the role of transaction costs in a high-dimensional and preferably realistic setting. We take the perspective of an investor who monitors the portfolio allocation on a daily basis while accounting for the (expected) costs of re-balancing. The underlying portfolio optimization setting accounts for parameter uncertainty and model uncertainty, while utilizing not only predictions of the covariance structure but also of higher-order moments of the asset return distribution. Model uncertainty is taken into account by considering time-varying combinations of predictive distributions resulting from competing models using optimal prediction pooling according to Geweke et al. (2011). In this way, we analyze to what extent the predictive ability of individual models changes over time and how suitable forecast combinations may result in better portfolio allocations.

We are particularly interested in the question whether the power of HF data for global minimum variance (GMV) allocations, as recently documented by Liu (2009), Hautsch et al. (2015) and Lunde et al. (2016), among others, still pays off in such a framework. The high responsiveness of HF-based predictions is particularly beneficial in high-volatility market regimes, but may create (too) high turnover. Our framework allows us to analyze to what extent HF data are still useful when transaction costs are explicitly taken into account.

The downside of such generality is that the underlying optimization problem cannot be solved in closed form and requires (high-dimensional) numerical integration. We therefore pose the econometric model in a Bayesian framework, which allows us to integrate out parameter uncertainty and to construct posterior predictive asset return distributions based on time-varying mixtures. Optimality of the portfolio weights is ensured with respect to the predicted out-of-sample utility net of transaction costs. The entire setup is complemented by a portfolio bootstrap, performing the analysis on randomized sub-samples drawn from the underlying asset universe. In this way, we gain insights into the statistical significance of various portfolio performance measures.

We analyze a large-scale setting based on all constituents of the S&P500 index, which are continuously traded on Nasdaq between 2007 and 2017, corresponding to 308 stocks. Forecasts of the daily asset return distribution are produced based on three major model classes. On the one hand, utilizing HF message data, we compute estimates of daily asset return covariance matrices using blocked realized kernels according to Hautsch et al. (2012). The kernel estimates are equipped with a Gaussian-Wishart mixture in the spirit of Jin and Maheu (2013) to capture the entire return distribution. Moreover, we compute predictive distributions resulting from a daily multivariate stochastic volatility factor model in the spirit of Chib et al. (2006). Due to recent advances in the development of numerically efficient simulation techniques by Kastner et al. (2017), it is possible to estimate such a model on high-dimensional data, making it one of the very few sufficiently flexible parametric models for return distributions covering several hundred assets. (Footnote 4: So far, stochastic volatility models have been shown to be beneficial in portfolio allocation by Aguilar and West (2000) and Han (2006) for just up to 20 assets.) As a third model class, representing traditional estimators based on rolling windows, we utilize the sample covariance and the (linear) shrinkage estimator proposed by Ledoit and Wolf (2004).

To the best of our knowledge, this paper provides the first study evaluating the predictive power of state-of-the-art high-frequency and "low-frequency" models in a large-scale portfolio framework under such generality, utilizing more than 73 billion high-frequency observations across the underlying trading days. Our approach brings together concepts from (i) Bayesian estimation for portfolio optimization, (ii) regularization and turnover penalization, (iii) predictive model combinations in high dimensions and (iv) HF-based covariance modeling and prediction. (Footnote 5: Bayesian estimation for portfolio optimization has been applied within a wide range of applications, starting with Brown (1976) and Jorion (1986). Imposing turnover penalties is related to the ideas of Brodie et al. (2009) and Gârleanu and Pedersen (2013). Tu and Zhou (2010), Tu and Zhou (2011) and Anderson and Cheng (2016) emphasize the benefits of model combination in portfolio decision theory. Sequential learning in a two-dimensional asset horizon is performed by Johannes et al. (2014). However, none of these approaches focuses on mixtures of HF and lower-frequency approaches or targets large-dimensional allocation problems.)

We can summarize the following findings: First, none of the underlying (state-of-the-art) predictive models is able to produce positive Sharpe ratios when transaction costs are *not* taken into account ex ante. This is mainly due to the high turnover implied by (too) frequent re-balancing.
Second, when incorporating transaction costs into the optimization problem, performance differences between competing predictive models for the return distribution become smaller than in the case without an explicit inclusion of transaction costs. None of the underlying approaches produces significant utility gains on top of the others. We thus conclude that the respective pros and cons of the individual models in terms of efficiency, predictive accuracy and stability of covariance estimates are leveled out under turnover regularization. We moreover conclude that the benefits of HF-based covariance predictions are smaller than in the case of daily GMV allocations.

Third, despite the similar performance of individual predictive models, mixing high-frequency and low-frequency information is beneficial and yields significantly higher Sharpe ratios. This is due to time variations in the individual models' predictive ability. Taking this into account when constructing (time-varying) combination weights, the relative contribution of HF-based return predictions is on average approximately 40%. The remaining 60% is split between the multivariate stochastic volatility model (approximately 30%) and predictions based on the shrunken sample covariances. HF-based predictions are particularly superior in volatile market periods, but are dominated by SV-based predictions in calmer periods. Fourth, naive strategies and GMV strategies are statistically and economically significantly outperformed.

The structure of this paper is as follows: Section 2 theoretically studies the effect of transaction costs on the optimal portfolio structure. Section 3 gives the econometric setup accounting for parameter and model uncertainty. Section 4 presents the underlying predictive models. In Section 5, we describe the data and present the empirical results. Finally, Section 6 concludes.

## 2 The Role of Transaction Costs

### 2.1 Decision Framework

We consider an investor equipped with a power utility function $U_\gamma(\cdot)$ depending on the portfolio return and the risk aversion parameter $\gamma$. At every period $t$, the investor allocates her wealth among $N$ distinct risky assets with the aim to maximize expected utility at $t+1$ by choosing the allocation vector $\omega_{t+1} \in \mathbb{R}^N$. We impose the constraint $\iota'\omega_{t+1} = 1$, where $\iota$ denotes an $N$-dimensional vector of ones.

The choice of $\omega_{t+1}$ is based on drawing inference from observed data. The information set at time $t$ consists of the time series of past returns $r_1, \dots, r_t$, where $r_t$ are the gross returns computed using end-of-day asset prices. The set of information $\mathcal{F}_t$ may contain additional variables, e.g., intra-day data.

We define an optimal portfolio as the allocation which maximizes the expected utility of the investor after subtracting transaction costs arising from re-balancing. We denote transaction costs by $\nu_t\left(\omega\right)$, depending on the desired portfolio weights and reflecting broker fees and implementation shortfall.

Transaction costs are a function of the distance between the new allocation $\omega_{t+1}$ and the allocation right before readjustment, $\omega_{t^+} := \frac{\omega_t \circ r_t}{\iota'\left(\omega_t \circ r_t\right)}$, where $\iota$ is a vector of ones and the operator $\circ$ denotes element-wise multiplication. The vector $\omega_{t^+}$ builds on the allocation $\omega_t$, which was considered optimal given expectations at $t-1$, but effectively changed due to returns between $t-1$ and $t$.

At time $t$, the investor monitors her portfolio and solves a static maximization problem conditional on the current beliefs $p\left(r_{t+1} \mid \mathcal{F}_t\right)$ on the distribution of returns of the next period and the current portfolio weights $\omega_{t^+}$:

$$\omega_{t+1}^* := \arg\max_{\omega \in \mathbb{R}^N,\ \iota'\omega = 1} \mathbb{E}\left[U_\gamma\left(\omega' r_{t+1} - \nu_t\left(\omega\right)\right) \mid \mathcal{F}_t\right] \tag{EU}$$

Note that optimization problem (EU) reflects the problem of an investor who constantly monitors her portfolio and exploits all available information, but re-balances only if the costs implied by deviations from the path of optimal allocations exceed the costs of re-balancing. This form of myopic portfolio optimization ensures optimality (after transaction costs) of allocations at each point in time. Accordingly, the optimal wealth allocation from representation (EU) is governed by (i) the structure of the turnover penalization $\nu_t\left(\omega\right)$, and (ii) the return forecasts $p\left(r_{t+1} \mid \mathcal{F}_t\right)$.

### 2.2 Transaction Costs in Case of Gaussian Returns

In general, the solution to optimization problem (EU) cannot be derived analytically but needs to be approximated using numerical methods. However, assuming that $p\left(r_{t+1} \mid \mathcal{F}_t\right)$ is a multivariate log-normal density with known mean $\mu$ and covariance matrix $\Sigma$, problem (EU) coincides with the initial Markowitz (1952) approach and yields an analytical solution, resulting from the maximization of the certainty equivalent (CE) after transaction costs,

$$\omega_{t+1}^* = \arg\max_{\omega \in \mathbb{R}^N,\ \iota'\omega = 1} \omega'\mu - \nu_t\left(\omega\right) - \frac{\gamma}{2}\,\omega'\Sigma\,\omega. \tag{1}$$

#### 2.2.1 Quadratic transaction costs

We model the transaction costs for shifting wealth from allocation $\omega_{t^+}$ to $\omega_{t+1}$ as a quadratic function given by

$$\nu_t\left(\omega\right) = \frac{\beta}{2}\left(\omega - \omega_{t^+}\right)'\left(\omega - \omega_{t^+}\right), \tag{2}$$

with cost parameter $\beta > 0$. The allocation according to (1) can then be restated as

$$\omega_{t+1}^* = \arg\max_{\iota'\omega = 1} \omega'\mu^* - \frac{\gamma}{2}\,\omega'\Sigma^*\,\omega, \tag{3}$$

with

$$\Sigma^* := \Sigma + \frac{\beta}{\gamma}\,I_N, \tag{4}$$

$$\mu^* := \mu + \beta\,\omega_{t^+}, \tag{5}$$

where $I_N$ denotes the $N \times N$ identity matrix.
The optimization problem with quadratic transaction costs can thus be interpreted as a classical mean-variance problem *without* transaction costs, where (i) the covariance matrix is regularized towards the identity matrix (with $\beta/\gamma$ serving as shrinkage parameter) and (ii) the mean is shifted by $\beta\,\omega_{t^+}$. Hence, if $\beta$ increases, $\Sigma^*$ is shifted towards a diagonal matrix representing the case of uncorrelated assets. Higher transaction costs are therefore equivalent to a setup with smaller diversification benefits.
The shift from $\mu$ to $\mu^*$ can alternatively be interpreted by exploiting $\iota'\omega = 1$ and reformulating the problem as

$$\omega_{t+1}^* = \arg\max_{\iota'\omega = 1} \omega'\left(\mu + \beta\left(\omega_{t^+} - \frac{1}{N}\iota\right)\right) - \frac{\gamma}{2}\,\omega'\Sigma^*\,\omega.$$

From this representation it becomes obvious that the shift of the mean vector is proportional to the deviations of the current allocation from the $1/N$ benchmark. This can be interpreted as putting more weight on assets with (already) high exposure. Proposition 1 shows the effect of rising transaction costs on optimal re-balancing.
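The equivalence between the penalized problem (1)-(2) and the shrinkage formulation (3)-(5) can be verified numerically. The following sketch uses hypothetical toy inputs (the dimension, risk aversion and cost parameter are illustrative assumptions, not values from the paper): it solves the penalized problem via its KKT system and compares the result with the closed-form mean-variance solution based on the shrunk inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma, beta = 10, 4.0, 0.01  # illustrative assumptions

# toy inputs: a well-conditioned covariance matrix and a small mean vector
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + np.eye(N)
mu = 0.001 * rng.standard_normal(N)
iota = np.ones(N)
w_plus = iota / N  # current allocation (naive portfolio)

# Route 1: KKT system of the penalized problem
#   max_w  w'mu - (gamma/2) w'Sigma w - (beta/2)||w - w_plus||^2  s.t. iota'w = 1
K = np.block([[gamma * Sigma + beta * np.eye(N), iota[:, None]],
              [iota[None, :], np.zeros((1, 1))]])
rhs = np.append(mu + beta * w_plus, 1.0)
w_kkt = np.linalg.solve(K, rhs)[:N]

# Route 2: unpenalized mean-variance problem with shrunk inputs (eqs. (3)-(5))
Sigma_star = Sigma + (beta / gamma) * np.eye(N)
mu_star = mu + beta * w_plus
S_inv = np.linalg.inv(Sigma_star)
lam = (iota @ S_inv @ mu_star - gamma) / (iota @ S_inv @ iota)
w_shrink = S_inv @ (mu_star - lam * iota) / gamma

assert np.allclose(w_kkt, w_shrink)  # both routes give the same allocation
```

Both routes reduce to the same linear system, which is the content of the shrinkage interpretation above.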

###### Proposition 1.

$$\lim_{\beta \to \infty} \omega_{t+1}^* = \omega_{t^+}. \tag{6}$$

###### Proof.

See Appendix 7. ∎

Hence, if the transaction costs are prohibitively large, the investor may not implement the efficient portfolio despite her knowledge of the true return parameters $\mu$ and $\Sigma$. The effect of transaction costs in the long run can be analyzed in more depth by considering the well-known representation of the mean-variance efficient portfolio,

$$\omega_{t+1}^* = \frac{\Sigma^{*-1}\iota}{\iota'\Sigma^{*-1}\iota} + \frac{1}{\gamma}\left(\Sigma^{*-1} - \frac{\Sigma^{*-1}\iota\iota'\Sigma^{*-1}}{\iota'\Sigma^{*-1}\iota}\right)\mu^*. \tag{7}$$

If $\omega_0$ denotes the initial allocation, sequential re-balancing allows us to study the long-run effect, given by

$$\omega_{T}^* = A^{T}\omega_0 + \sum_{i=0}^{T-1} A^{i} b, \qquad A := \frac{\beta}{\gamma}\,Q, \quad b := \frac{1}{\gamma}\,Q\,\mu + \frac{\Sigma^{*-1}\iota}{\iota'\Sigma^{*-1}\iota}, \quad Q := \Sigma^{*-1} - \frac{\Sigma^{*-1}\iota\iota'\Sigma^{*-1}}{\iota'\Sigma^{*-1}\iota}. \tag{8}$$

Hence, $\omega_T^*$ can be interpreted as a weighted average of the mean-variance efficient allocation and the initial allocation $\omega_0$, where the weights depend on the ratio $\beta/\gamma$. The following proposition shows, however, that a range for $\beta$ exists (with critical upper threshold), for which the initial allocation can be ignored in the long run.

###### Proposition 2.

For all $\beta$ with $\frac{\beta}{\gamma}\left\|Q\right\|_F < 1$, the limit $\lim_{T \to \infty} \omega_T^*$ exists and does not depend on the initial allocation $\omega_0$, where $\left\|\cdot\right\|_F$ denotes the Frobenius norm and $Q$ is defined as in (8).

###### Proof.

See Appendix 7. ∎

Using Proposition 2 for $\beta$ below the critical threshold, the series $A^T \omega_0$ converges to $0$ and $\sum_{i=0}^{T-1} A^i b$ converges to $\left(I_N - A\right)^{-1} b$. In the long run, we then obtain

$$\lim_{T \to \infty} \omega_T^* = \left(I_N - A\right)^{-1} b.$$

Note that the location of the initial portfolio $\omega_0$ itself plays no role for the upper threshold which ensures the long-run convergence towards $\left(I_N - A\right)^{-1} b$. Instead, the threshold is affected only by the risk aversion $\gamma$ and the eigenvalues of $\Sigma^*$.
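The independence of the long-run allocation from the initial portfolio can be illustrated by a small simulation. The sketch below iterates the re-balancing recursion of (8) under the simplifying assumptions that $\mu$ and $\Sigma$ stay fixed, that weights do not drift between re-balancing dates, and that $\beta/\gamma$ is small enough for the recursion to be a contraction; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N, gamma, beta = 8, 4.0, 0.5  # illustrative assumptions with beta/gamma small
A0 = rng.standard_normal((N, N))
Sigma = A0 @ A0.T / N + np.eye(N)
mu = 0.001 * rng.standard_normal(N)
iota = np.ones(N)

Sigma_star = Sigma + (beta / gamma) * np.eye(N)
S = np.linalg.inv(Sigma_star)
Q = S - np.outer(S @ iota, iota @ S) / (iota @ S @ iota)
A = (beta / gamma) * Q                            # contraction for small beta/gamma
b = Q @ mu / gamma + S @ iota / (iota @ S @ iota)

def iterate(w0, steps=200):
    # repeated re-balancing: w_{t+1} = A w_t + b
    w = w0
    for _ in range(steps):
        w = A @ w + b
    return w

w_from_naive = iterate(iota / N)          # start from the naive portfolio
w_from_corner = iterate(np.eye(N)[0])     # start with all wealth in asset 1
assert np.allclose(w_from_naive, w_from_corner)  # same long-run allocation
```

Since $\iota'Q = 0$, every iterate satisfies the budget constraint, and for small $\beta/\gamma$ the influence of the starting point dies out geometrically.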

#### 2.2.2 Proportional ($L_1$) transaction costs

Although attractive from an analytical perspective, transaction costs of quadratic form may represent an unrealistic proxy of the costs associated with trading in real financial markets. Instead, the literature makes widespread use of transaction cost measures proportional to the sum of absolute re-balancing (the $L_1$ norm of re-balancing), which impose a stronger penalization on turnover and arguably are more realistic, see, for instance, DeMiguel et al. (2009). Transaction costs proportional to the $L_1$ norm of $\omega_{t+1} - \omega_{t^+}$ yield the form

$$\nu_t\left(\omega\right) = \beta\left\|\omega - \omega_{t^+}\right\|_1 = \beta\sum_{i=1}^{N}\left|\omega_i - \omega_{i,t^+}\right|, \tag{9}$$

with cost parameter $\beta > 0$. Although the effect of transaction costs on the optimal portfolio cannot be derived in a closed form comparable to the quadratic ($L_2$) case, the impact of the turnover penalization can still be interpreted as a form of regularization. If we assume for simplicity of illustration $\mu = 0$, the optimization problem (1) corresponds to

$$\omega_{t+1}^* = \arg\min_{\omega}\ \frac{\gamma}{2}\,\omega'\Sigma\,\omega + \beta\left\|\omega - \omega_{t^+}\right\|_1 \tag{10}$$

$$\text{s.t.}\quad \iota'\omega = 1. \tag{11}$$

The first-order conditions for the constrained optimization are

$$\gamma\,\Sigma\,\omega + \beta\,g - \lambda\,\iota = 0, \tag{12}$$

$$\iota'\omega = 1, \tag{13}$$

where $\lambda$ is the Lagrange multiplier associated with (13) and $g$ is the vector of sub-derivatives of $\left\|\omega - \omega_{t^+}\right\|_1$, i.e., $g \in \partial\left\|\omega - \omega_{t^+}\right\|_1$, consisting of elements which are $1$ or $-1$ in case $\omega_i > \omega_{i,t^+}$ or $\omega_i < \omega_{i,t^+}$, respectively, or take values in $\left[-1, 1\right]$ in case $\omega_i = \omega_{i,t^+}$. Solving for $\omega$ yields

$$\omega_{t+1}^* = \omega_{\text{gmv}} - \frac{\beta}{\gamma}\left(\Sigma^{-1} - \frac{\Sigma^{-1}\iota\iota'\Sigma^{-1}}{\iota'\Sigma^{-1}\iota}\right) g, \tag{14}$$

where $\omega_{\text{gmv}} := \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota}$ corresponds to the weights of the GMV portfolio. Proposition 3 shows that this optimization problem can be formulated as a (regularized) minimum variance problem.

###### Proposition 3.

Portfolio optimization problem (10) is equivalent to the minimum variance problem

$$\omega_{t+1}^* = \arg\min_{\iota'\omega = 1}\ \omega'\tilde{\Sigma}\,\omega \tag{15}$$

with $\tilde{\Sigma} := \Sigma + \frac{\beta}{\gamma}\left(g\,\iota' + \iota\,g'\right)$, where $g$ is the subgradient of $\left\|\omega - \omega_{t^+}\right\|_1$ evaluated at the solution $\omega_{t+1}^*$.

###### Proof.

See Appendix 7. ∎

The interpretation of this result is straightforward: the effect of imposing transaction costs proportional to the $L_1$ norm of re-balancing corresponds to a standard GMV setup with a regularized version of the variance-covariance matrix. The form of the matrix $\tilde{\Sigma}$ implies that for high transaction costs $\beta$, more weight is put on those pairs of assets whose exposure is re-balanced in the same direction. The result is similar to Fan et al. (2012), who show that the risk minimization problem with constrained weights

$$\min_{\iota'\omega = 1,\ \left\|\omega\right\|_1 \le c}\ \omega'\hat{\Sigma}\,\omega \tag{16}$$

can be interpreted as the minimum variance problem

$$\min_{\iota'\omega = 1}\ \omega'\check{\Sigma}\,\omega \tag{17}$$

with $\check{\Sigma} = \hat{\Sigma} + \lambda\left(\tilde{g}\,\iota' + \iota\,\tilde{g}'\right)$, where $\lambda$ is a Lagrange multiplier and $\tilde{g}$ is the subgradient vector of the function $\left\|\omega\right\|_1$ evaluated at the solution of optimization problem (16). Note, however, that the transaction cost parameter $\beta$ is given to the investor, whereas the constraint level $c$ is an endogenously imposed restriction with the aim to decrease the impact of estimation error.

Investigating the long-run effect of the initial portfolio in the presence of $L_1$ transaction costs in the spirit of Proposition 1 is complex, as analytically tractable representations are not easily available. General insights from the benchmark case, however, can be transferred to this setup: First, a high cost parameter $\beta$ may prevent the investor from implementing the efficient portfolio. Second, as the $L_2$ norm of any vector is bounded from above by its $L_1$ norm, the penalization is stronger than in the case of quadratic transaction costs. Therefore, we expect that the convergence of portfolios from the initial allocation towards the efficient portfolio is generally slower, but qualitatively similar.
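The $L_1$-penalized problem (10)-(11) has no closed-form solution, but it can be solved numerically. One standard device, used in the sketch below, is variable splitting: writing $\omega = \omega_{t^+} + u - v$ with $u, v \ge 0$ turns the non-smooth penalty into the smooth term $\beta\,\iota'(u+v)$. All inputs are simulated and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, gamma = 5, 4.0  # illustrative assumptions
A = rng.standard_normal((N, N))
Sigma = A @ A.T / N + np.eye(N)
iota = np.ones(N)
w_plus = iota / N  # current holdings (naive portfolio)

def solve_l1_gmv(beta):
    # variable splitting: w = w_plus + u - v, u, v >= 0,
    # so that ||w - w_plus||_1 = sum(u + v)
    def objective(x):
        u, v = x[:N], x[N:]
        w = w_plus + u - v
        return 0.5 * gamma * w @ Sigma @ w + beta * np.sum(u + v)
    # iota'(u - v) = 0 preserves the budget constraint iota'w = 1
    cons = {"type": "eq", "fun": lambda x: iota @ (x[:N] - x[N:])}
    res = minimize(objective, np.zeros(2 * N), bounds=[(0, None)] * (2 * N),
                   constraints=[cons], method="SLSQP")
    return w_plus + res.x[:N] - res.x[N:]

turnover = lambda w: np.abs(w - w_plus).sum()
w_lo, w_hi = solve_l1_gmv(0.0001), solve_l1_gmv(0.05)
assert turnover(w_hi) <= turnover(w_lo) + 1e-6  # larger beta -> less re-balancing
```

Consistent with the discussion above, a larger cost parameter pulls the solution towards the buy-and-hold allocation $\omega_{t^+}$.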

#### 2.2.3 Empirical implications

To empirically illustrate the effects discussed above, we compute the performance of portfolios after transaction costs based on daily readjustments, using data ranging from June 2007 to March 2017. (Footnote 6: A description of the dataset and the underlying estimators is given in more detail in Section 5.)
The unknown variance-covariance matrix $\Sigma$ is estimated in two ways: We compute the sample variance-covariance estimator and the shrinkage estimator by Ledoit and Wolf (2004) on a rolling window. We refrain from estimating the mean and set $\mu = 0$. The initial portfolio weights are set to $\omega_0 = \frac{1}{N}\iota$, corresponding to the naive portfolio.
Then, for a fixed cost parameter $\beta$ and daily estimates of $\Sigma$, portfolio weights are re-balanced as solutions of optimization problem (3). This yields a time series of optimal portfolios $\omega_t^*$ and realized portfolio returns. Subtracting transaction costs then yields the realized portfolio returns net of transaction costs.
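Such a backtest can be sketched as a simple loop: estimate the covariance matrix on a rolling window, solve (3) with $\mu = 0$ via its KKT system, charge quadratic costs on the implied turnover, and let weights drift with realized returns. The sketch below uses simulated returns; the dimension, risk aversion, cost parameter and window length are illustrative assumptions, not the settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, gamma, beta, window = 400, 20, 4.0, 0.001, 250  # illustrative assumptions
returns = 0.0005 + 0.01 * rng.standard_normal((T, N))  # simulated daily net returns

iota = np.ones(N)
w = iota / N                     # start from the naive portfolio
net_returns = []
for t in range(window, T - 1):
    Sigma = np.cov(returns[t - window:t].T)           # rolling sample covariance
    # solve (3) with mu = 0 via the KKT system of the quadratic-cost problem
    K = np.block([[gamma * Sigma + beta * np.eye(N), iota[:, None]],
                  [iota[None, :], np.zeros((1, 1))]])
    w_new = np.linalg.solve(K, np.append(beta * w, 1.0))[:N]
    cost = beta / 2 * np.sum((w_new - w) ** 2)        # quadratic transaction costs
    net_returns.append(w_new @ returns[t + 1] - cost) # realized net portfolio return
    # weights drift with realized returns before the next readjustment
    grow = w_new * (1 + returns[t + 1])
    w = grow / grow.sum()

sharpe = np.mean(net_returns) / np.std(net_returns) * np.sqrt(252)
```

Repeating the loop over a grid of $\beta$ values reproduces the kind of comparison shown in Figure 1.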

Figure 1 displays annualized Sharpe ratios (Footnote 7: Computed as the ratio of the annualized sample mean and the annualized standard deviation of the net returns.) after subtracting transaction costs for both the sample covariance and the shrinkage estimator for a range of different values of $\beta$, measured in basis points. The purple line represents Sharpe ratios implied by the naive allocation.

We observe that the naive portfolio is clearly outperformed. This is remarkable, as it is well known that parameter uncertainty, especially in high dimensions, often leads to superior performance of the naive allocation (see, e.g., DeMiguel et al. (2009)). We moreover find that already very small positive values of $\beta$ have a strong regularizing effect on the covariance matrix. Recall that quadratic transaction costs directly affect the diagonal elements and thus the eigenvalues of $\Sigma^*$. Inflating each of the eigenvalues even by a small magnitude has a substantial effect on the conditioning of the covariance matrix. We observe that transaction costs of just 1 bp significantly improve the condition number and strongly stabilize $\Sigma^*$ and particularly its inverse. In fact, the optimal portfolio based on the sample variance-covariance matrix which adjusts ex ante for transaction costs of this size leads to a clearly positive net-of-transaction-costs Sharpe ratio, whereas neglecting transaction costs in the optimization yields a substantially lower one. This effect is to a large extent a pure regularization effect. For rising values of $\beta$, this effect marginally declines and leads to a declining performance over an intermediate range of $\beta$. We therefore observe a dual role of transaction costs. On the one hand, they improve the conditioning of the covariance matrix by inflating the eigenvalues. On the other hand, they reduce the mean portfolio return. Both effects influence the Sharpe ratio in opposite directions, causing the concavity of the graphs for small values of $\beta$.

For higher values of $\beta$ we observe a similar pattern: Here, a decline in re-balancing costs due to the implied turnover penalization kicks in and implies an increase of the Sharpe ratio. If the cost parameter $\beta$, however, gets too high, the adverse effect of transaction costs on the portfolio's expected returns dominates. Hence, as predicted by Proposition 1, the allocation is ultimately pushed back to the initial portfolio (the naive portfolio in this case). This is reflected by the lower panel of Figure 1, showing the average distance to the initial allocation.

Moreover, the described effects are much more pronounced for the sample covariance than for its shrunken counterpart. As the latter is already regularized, the additional regularization implied by turnover penalization obviously has a lower impact. Nevertheless, turnover regularization implies that forecasts even based on the sample covariance generate reasonable Sharpe ratios, which tend to perform as well as those implied by a (linear) shrinkage estimator. Hence, we observe that differences in the out-of-sample performance between the two approaches decline if the turnover regularization becomes stronger.

Figure 2 visualizes the effect of $L_1$ transaction costs. Note that these transaction costs imply a regularization which acts similarly to a Lasso penalization. Such a penalization is less smooth than that implied by quadratic transaction costs and implies a stronger dependence of the portfolio weights on the previous day's allocation. This strong path dependence implies that the plots in Figure 2 are less smooth than in the case of quadratic transaction costs. This affects our evaluation, as the paths of the portfolio weights may differ substantially over time if the cost parameter is slightly changed.

However, we find similar effects as for quadratic transaction costs. For low values of $\beta$, the Sharpe ratio increases in $\beta$. Here, the effects of covariance regularization and turnover reduction overcompensate the effect of declining portfolio returns (after transaction costs). For larger values of $\beta$, we observe the expected convergence towards the performance of the naive portfolio. For this range of $\beta$, the two underlying approaches perform rather on par. Hence, the additional regularization implied by the shrinkage estimator is not (very) effective and does not generate excess returns on top of the sample variance-covariance matrix.

## 3 Basic Econometric Setup

### 3.1 Parameter Uncertainty and Model Combination

The optimization problem (EU) poses the challenge of providing a sensible predictive density of future returns. The predictive density should reflect the dynamics of the return distribution in a suitable way, which opens many different dimensions along which to choose a model $\mathcal{M}_k$.
The model reflects assumptions regarding the return generating process in form of a likelihood function depending on unknown parameters $\theta_k$. Assuming that future returns are distributed as $p\left(r_{t+1} \mid \hat{\theta}_k, \mathcal{M}_k\right)$, where $\hat{\theta}_k$ is a point estimate of the parameters $\theta_k$, however, would imply that the uncertainty perceived by the investor ignores estimation error. (Footnote 8: See, e.g., Brown (1976), Kan and Zhou (2007) or Avramov and Zhou (2010) for detailed treatments of the impact of parameter uncertainty on optimal portfolios.)
Consequently, the resulting portfolio weights would be sub-optimal and the exposure of the investor to risky assets would be unreasonably high.

To accommodate such parameter uncertainty and to pose a setting where the optimization problem (EU) can be naturally addressed by numerical integration techniques, we employ a Bayesian approach. Hence, by defining a model $\mathcal{M}_k$ implying the likelihood $p\left(r_{1:t} \mid \theta_k, \mathcal{M}_k\right)$ and choosing a prior distribution $\pi\left(\theta_k \mid \mathcal{M}_k\right)$, the posterior distribution

$$\pi\left(\theta_k \mid \mathcal{F}_t, \mathcal{M}_k\right) \propto p\left(r_{1:t} \mid \theta_k, \mathcal{M}_k\right)\pi\left(\theta_k \mid \mathcal{M}_k\right) \tag{18}$$

reflects beliefs about the distribution of the parameters after observing the set of available information $\mathcal{F}_t$. The (posterior) predictive distribution of the returns is then given by

$$p\left(r_{t+1} \mid \mathcal{F}_t, \mathcal{M}_k\right) = \int p\left(r_{t+1} \mid \theta_k, \mathcal{M}_k\right)\pi\left(\theta_k \mid \mathcal{F}_t, \mathcal{M}_k\right)\mathrm{d}\theta_k. \tag{19}$$

If the parameters cannot be estimated with high precision, the posterior distribution yields a predictive return distribution with more mass in the tails than one based only on the point estimate $\hat{\theta}_k$. Therefore, parameter uncertainty automatically leads to a higher predicted probability of tail events, implying fat-tailedness of the posterior predictive asset return distribution.
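This mechanism can be illustrated in a one-dimensional toy example (all numbers are illustrative assumptions): for a normal model with a diffuse prior, integrating out the variance turns the plug-in normal predictive into a Student-t-type predictive with visibly more tail mass.

```python
import numpy as np

rng = np.random.default_rng(5)
T_obs = 30                                 # short sample -> sizeable estimation error
data = rng.normal(0.0, 0.02, size=T_obs)   # simulated daily returns

m, s = data.mean(), data.std(ddof=1)
J = 200_000

# plug-in predictive: normal at the point estimates, ignoring estimation error
plug_in = rng.normal(m, s, size=J)

# posterior predictive under a diffuse prior: integrate out sigma^2 by drawing it
# from its scaled inverse-chi-square posterior, then drawing returns given sigma^2
sig2 = (T_obs - 1) * s**2 / rng.chisquare(T_obs - 1, size=J)
pred = rng.normal(m, np.sqrt(sig2 * (1 + 1 / T_obs)))

# compare tail mass beyond three (estimated) standard deviations
tail_plug = np.mean(np.abs(plug_in - m) > 3 * s)
tail_pred = np.mean(np.abs(pred - m) > 3 * s)
```

The posterior predictive sample assigns clearly more probability to the 3-standard-deviation tail than the plug-in predictive, exactly the fat-tailedness described above.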

Moreover, potential structural changes in the return distribution and time-varying parameters make it hard to identify a single predictive model which consistently outperforms all other models. Therefore, an investor may instead combine the predictions of $K$ distinct predictive models $\mathcal{M}_1, \dots, \mathcal{M}_K$, reflecting either personal preferences, data availability or theoretical considerations. (Footnote 9: Model combination in the context of return predictions has a long tradition in econometrics, starting from Bates and Granger (1969). In finance, Avramov (2003), Uppal and Wang (2003), Garlappi et al. (2007), Johannes et al. (2014) and Anderson and Cheng (2016), among others, apply model combinations and investigate the effect of model uncertainty on financial decisions.)
Stacking the individual predictive distributions yields the vector

$$\left(p\left(r_{t+1} \mid \mathcal{F}_t, \mathcal{M}_1\right), \dots, p\left(r_{t+1} \mid \mathcal{F}_t, \mathcal{M}_K\right)\right)'. \tag{20}$$

The joint predictive distribution is computed conditional on combination weights $c_t = \left(c_{t,1}, \dots, c_{t,K}\right)'$, which can be interpreted as discrete probabilities over the set of models $\mathcal{M}_1, \dots, \mathcal{M}_K$. The probabilistic interpretation of the combination scheme is justified by enforcing that all weights take non-negative values and add up to one:

$$c_{t,k} \ge 0 \quad \forall k, \qquad \sum_{k=1}^{K} c_{t,k} = 1. \tag{21}$$

This yields the joint predictive distribution

$$p\left(r_{t+1} \mid \mathcal{F}_t\right) = \sum_{k=1}^{K} c_{t,k}\, p\left(r_{t+1} \mid \mathcal{F}_t, \mathcal{M}_k\right), \tag{22}$$

corresponding to a mixture distribution with time-varying weights.

Depending on the choice of the combination weights , the scheme balances how much the investment decision is driven by each of the individual models. Well-known approaches to combine different models are, among many others, Bayesian model averaging (Hoeting et al., 1999), optimal prediction pooling (Geweke et al., 2011) and decision-based model combinations (Billio et al., 2013).

We choose $c_t$ conditional on the past data as a solution to the maximization problem

$$c_t = \arg\max_{c_{t,k} \ge 0,\ \sum_{k} c_{t,k} = 1} \mathcal{S}_t\left(c_t\right). \tag{23}$$

In line with Geweke et al. (2011) and Durham and Geweke (2014), we focus on evaluating the goodness-of-fit of the predictive distributions as a measure of predictive accuracy, based on rolling-window maximization of the predictive log score

$$\mathcal{S}_t\left(c_t\right) = \sum_{s = t-h+1}^{t} \log\left(\sum_{k=1}^{K} c_{t,k}\, p\left(r_s \mid \mathcal{F}_{s-1}, \mathcal{M}_k\right)\right), \tag{24}$$

where $h$ is the window size and $p\left(r_s \mid \mathcal{F}_{s-1}, \mathcal{M}_k\right)$ is defined as in (19). (Footnote 10: In our empirical application, the window size is set to a fixed number of trading days.) If the predictive density concentrates around the observed return values, the predictive likelihood is higher.
Alternatively, we implemented utility-based model combination in the spirit of Billio et al. (2013) and Pettenuzzo and Ravazzolo (2016) by choosing $c_t$ as a function of past portfolio performances net of transaction costs.
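The pooling weights in (23)-(24) can be computed with a standard numerical optimizer; a softmax reparametrization is one convenient way to keep the weights on the simplex. The sketch below uses simulated predictive density values (the window length, number of models and density distribution are illustrative assumptions).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
h, K = 250, 3  # illustrative window length and number of models
# hypothetical predictive densities p(r_s | F_{s-1}, M_k) over an h-day window;
# the gamma shapes make the models differ in average goodness-of-fit
dens = rng.gamma(np.array([2.0, 1.0, 1.5]), 1.0, size=(h, K))

def neg_log_score(z):
    c = np.exp(z - z.max()); c /= c.sum()   # softmax keeps c on the simplex
    return -np.sum(np.log(dens @ c))        # negative predictive log score (24)

res = minimize(neg_log_score, np.zeros(K), method="Nelder-Mead")
c_opt = np.exp(res.x - res.x.max()); c_opt /= c_opt.sum()  # optimal pooling weights
```

By construction the optimal pool attains a log score at least as high as equal weighting, which is the sense in which the combination adapts to the models' relative predictive ability.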

### 3.2 MCMC-based inference

In general, the objective function of the portfolio optimization problem (2.1) is not available in closed form. Furthermore, the posterior predictive distribution may not arise from a well-known class of probability distributions. Therefore, the computation of portfolio weights relies on (Bayesian) computational methods.
First, we compute sample draws from the posterior distribution $p\left(\theta_k \mid \mathcal{F}_t\right)$ of the parameters $\theta_k$ of every model $M_k$ using Markov chain Monte Carlo (MCMC) algorithms.^{12}^{12}12See Hastings (1970) and Chib and Greenberg (1995).
Then, draws from the posterior predictive distribution are generated by sampling from $p\left(r_{t+1} \mid \theta_k, \mathcal{F}_t\right)$, conditional on the posterior draws.
Obtaining the combination weights $c_t$ based on past predictive performance requires computing the predictive score of every model as input to the optimization problem (24). This quantity can be approximated by

$$p_k\left(r_t \mid \mathcal{F}_{t-1}\right) \approx \frac{1}{S} \sum_{j=1}^{S} p_k\left(r_t \mid \theta_k^{(j)}, \mathcal{F}_{t-1}\right), \tag{25}$$

where $r_t$ are the observed returns at time $t$ and $\theta_k^{(j)}$, $j = 1, \ldots, S$, are the posterior draws for model $M_k$. After updating the combination weights $c_t$, samples from the joint predictive distribution are generated using the draws from the predictive distributions of the individual models. Samples from $p\left(r_{t+1} \mid \mathcal{F}_t\right)$ are obtained by computing

$$r_{t+1}^{(j)} \sim p_{z_j}\left(r_{t+1} \mid \mathcal{F}_t\right), \qquad \Pr\left(z_j = k\right) = c_{k,t}, \qquad j = 1, \ldots, S, \tag{26}$$

The integral in optimization problem (2.1) can be approximated for any given portfolio weight vector $\omega$ by Monte Carlo techniques using the draws $r_{t+1}^{(j)}$, i.e., by computing

$$\mathbb{E}\left[\left. U\left(\omega' r_{t+1} - \nu_t\left(\omega\right)\right) \right| \mathcal{F}_t\right] \approx \frac{1}{S} \sum_{j=1}^{S} U\left(\omega' r_{t+1}^{(j)} - \nu_t\left(\omega\right)\right). \tag{27}$$

The values $\omega' r_{t+1}^{(j)} - \nu_t\left(\omega\right)$, $j = 1, \ldots, S$, represent draws from the posterior predictive portfolio return distribution (after subtracting transaction costs) for allocation weight vector $\omega$. The numerical solution of the allocation problem (2.1) is then obtained by choosing $\omega$ such that the sum in (27) is maximized, i.e.,

$$\hat{\omega}_{t+1} = \operatorname*{arg\,max}_{\omega \in \mathbb{R}^N, \; \iota'\omega = 1} \; \frac{1}{S} \sum_{j=1}^{S} U\left(\omega' r_{t+1}^{(j)} - \nu_t\left(\omega\right)\right). \tag{28}$$
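The Monte Carlo step in (27) and the subsequent numerical maximization can be sketched as follows. This is a minimal example assuming CRRA utility and proportional transaction costs on the turnover; the utility specification, the cost parameter and the placeholder predictive draws are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, S = 10, 5000
# placeholder draws from the posterior predictive return distribution
r_draws = rng.multivariate_normal(np.zeros(N), 1e-4 * np.eye(N), size=S)

gamma = 4.0                  # assumed CRRA risk aversion
beta = 50e-4                 # assumed proportional cost parameter (50 bps)
w_now = np.full(N, 1.0 / N)  # allocation before rebalancing

def crra(x):
    # power utility of simple net returns
    return (1.0 + x) ** (1.0 - gamma) / (1.0 - gamma)

def neg_avg_utility(w):
    nu = beta * np.abs(w - w_now).sum()      # transaction costs nu_t(w)
    return -np.mean(crra(r_draws @ w - nu))  # minus the average in (27)

constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(neg_avg_utility, x0=w_now,
               constraints=constraints, method="SLSQP")
w_hat = res.x  # numerical solution corresponding to (28)
```

Averaging the utility over the predictive draws rather than plugging in a point forecast is what propagates parameter and model uncertainty into the allocation.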

## 4 Predictive Models

As predictive models we choose representatives of three major model classes. First, we include covariance forecasts based on *high-frequency* data utilizing blocked realized kernels as proposed by Hautsch et al. (2012). Second, we employ predictions based on parametric models for the return distribution using *daily* data. An approach which is sufficiently flexible, while guaranteeing well-conditioned covariance forecasts, is a stochastic volatility factor model according to Chib et al. (2006). Thanks to the development of numerically efficient simulation techniques by Kastner et al. (2017), (MCMC-based) estimation is tractable even in high dimensions. This makes the model one of the very few parametric models (with sufficient flexibility) which are feasible for data of this dimensionality. Third, as a candidate representing the class of shrinkage estimators, we employ the approach by Ledoit and Wolf (2004).

The choice of models is moreover driven by computational tractability in a large-dimensional setting requiring numerical integration via MCMC techniques and, in addition, a portfolio bootstrap procedure as illustrated in Section 5. We nevertheless believe that these models yield major empirical insights, which can be easily transferred to modified or extended approaches.

### 4.1 A Wishart Model for Blocked Realized Kernels

Realized measures of volatility based on HF data have been shown to provide accurate estimates of daily covariances.^{13}^{13}13See, e.g., Andersen and Bollerslev (1998), Andersen et al. (2003), Barndorff-Nielsen et al. (2009), Liu (2009) and Hautsch et al. (2015), among others.
To produce forecasts of covariances based on HF data, we employ blocked realized kernel (BRK) estimates as proposed by Hautsch et al. (2012) to estimate the quadratic variation of the price process based on irregularly spaced and noisy price observations.

The major idea is to estimate the covariance matrix block-wise. Accordingly, stocks are ordered according to their average number of daily mid-quote observations. By separating the stocks into 4 equal-sized groups, the resulting covariance matrix is decomposed into blocks representing pair-wise correlations within each group and across groups.^{14}^{14}14Hautsch et al. (2015) find that using 4 liquidity groups constitutes a reasonable (data-driven) choice for a similar data set. We implemented the setting for up to 10 groups and found similar results in the given framework.
We denote the set of indices of the assets associated with block $b$ by $I_b$. For each asset $i \in I_b$, $t_j^{(i)}$ denotes the time stamp of mid-quote $j$ on day $t$.
The so-called refresh time is the time it takes until all assets in one block have observed at least one mid-quote update; it is formally defined as

$$\tau_j^{(b)} = \max_{i \in I_b} \left\{ t^{(i)}_{N^{(i)}\left(\tau_{j-1}^{(b)}\right) + 1} \right\}, \tag{29}$$

where $N^{(i)}(s)$ denotes the number of mid-quote observations of asset $i$ before time $s$. Hence, refresh time sampling synchronizes the data in time, with $\tau_j^{(b)}$ denoting the time at which all assets belonging to group $b$ have been traded at least once since the last refresh time $\tau_{j-1}^{(b)}$. Synchronized returns are then obtained as $r_j^{(i)} = p_{\tau_j^{(b)}}^{(i)} - p_{\tau_{j-1}^{(b)}}^{(i)}$, with $p_{\tau}^{(i)}$ denoting the log mid-quote of asset $i$ at time $\tau$.
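Refresh-time synchronization can be illustrated with a small sketch. The quote times below are hypothetical; the routine simply advances through each asset's time stamps and records the instants at which the slowest asset of the block has caught up:

```python
def refresh_times(timestamps):
    """Compute refresh times for one block: each refresh time is the first
    instant at which every asset has posted at least one new mid-quote
    since the previous refresh time."""
    pointers = [0] * len(timestamps)
    taus = []
    while True:
        candidates = []
        for i, ts in enumerate(timestamps):
            if pointers[i] >= len(ts):
                return taus  # some asset has no further updates
            candidates.append(ts[pointers[i]])
        tau = max(candidates)  # the slowest asset determines the refresh time
        taus.append(tau)
        for i, ts in enumerate(timestamps):
            while pointers[i] < len(ts) and ts[pointers[i]] <= tau:
                pointers[i] += 1  # skip quotes up to the refresh time

# hypothetical quote times (in seconds) for three assets of one block
taus = refresh_times([[1.0, 2.5, 3.0, 7.0],
                      [0.5, 4.0, 6.5],
                      [2.0, 2.2, 5.0, 8.0]])
```

Blocking by liquidity keeps the refresh-time grid of each block dense: synchronizing only assets with similar quoting intensity discards far fewer observations than synchronizing all assets jointly.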

Refresh-time-synchronized returns build the basis for the multivariate realized kernel estimator by Barndorff-Nielsen et al. (2011), which (under a set of assumptions) consistently estimates the quadratic covariation of an underlying multivariate Brownian semi-martingale price process that is observed under noise. Applying the multivariate realized kernel to each block of the covariance matrix, we obtain

$$K^{(b)} = \sum_{h=-H}^{H} k\left(\frac{h}{H+1}\right) \Gamma_h^{(b)}, \tag{30}$$

where $k(\cdot)$ is the Parzen kernel, $\Gamma_h^{(b)}$ is the $h$-lag auto-covariance matrix of the synchronized returns of the assets belonging to block $b$, and $H$ is a bandwidth parameter, which is optimally chosen according to Barndorff-Nielsen et al. (2011).
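A minimal (and numerically naive) sketch of the kernel computation for one block, assuming already refresh-time-synchronized returns; the bandwidth and the simulated data are illustrative, and the optimal bandwidth selection of Barndorff-Nielsen et al. (2011) is not implemented here:

```python
import numpy as np

def parzen(x):
    # Parzen kernel weight k(x)
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x**2 + 6.0 * x**3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realized_kernel(returns, H):
    """K = sum_{h=-H}^{H} k(h / (H + 1)) * Gamma_h, where Gamma_h is the
    h-lag autocovariance matrix of the synchronized return vectors."""
    n, N = returns.shape
    K = np.zeros((N, N))
    for h in range(-H, H + 1):
        w = parzen(h / (H + 1))
        g = np.zeros((N, N))
        for j in range(abs(h), n):
            g += np.outer(returns[j], returns[j - abs(h)])
        if h < 0:
            g = g.T  # Gamma_{-h} = Gamma_h'
        K += w * g
    return K

rng = np.random.default_rng(2)
rets = rng.normal(scale=1e-3, size=(200, 3))  # stand-in synchronized returns
K_hat = realized_kernel(rets, H=10)
```

The Parzen weight function is one of the kernels guaranteeing a positive semi-definite estimate, which matters downstream when the blocks are recombined into a full correlation matrix.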

The estimates of the correlations between assets $l, m \in I_b$ in block $b$ take the form

$$\hat{\rho}_{lm}^{(b)} = \frac{K_{lm}^{(b)}}{\sqrt{K_{ll}^{(b)}\, K_{mm}^{(b)}}}. \tag{31}$$

The blocks are then stacked as described in Hautsch et al. (2012) to obtain the correlation matrix $\hat{R}_t$.

The diagonal elements of the covariance matrix, $\hat{\sigma}_{i,t}^2$, $i = 1, \ldots, N$, are estimated based on univariate realized kernels according to Barndorff-Nielsen et al. (2008). The resulting variance-covariance matrix is then given by

$$\hat{\Sigma}_t = \hat{V}_t^{1/2}\, \hat{R}_t\, \hat{V}_t^{1/2}, \qquad \hat{V}_t = \operatorname{diag}\left(\hat{\sigma}_{1,t}^2, \ldots, \hat{\sigma}_{N,t}^2\right). \tag{32}$$

We stabilize the covariance estimates by smoothing over time, computing simple averages over the last $m$ days, i.e., $\bar{\Sigma}_t = \frac{1}{m} \sum_{s=0}^{m-1} \hat{\Sigma}_{t-s}$. A (smoothed) correlation matrix is then obtained as

$$\bar{R}_t = \bar{V}_t^{-1/2}\, \bar{\Sigma}_t\, \bar{V}_t^{-1/2}, \tag{33}$$

with $\bar{V}_t = \operatorname{diag}\left(\bar{\Sigma}_t\right)$. In Section 5, we illustrate that such smoothing of estimates improves the predictive performance of the model, as the impact of extreme intra-daily activity is reduced.
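The smoothing and re-normalization steps can be sketched as follows, using simulated stand-ins for the daily BRK covariance estimates:

```python
import numpy as np

def smooth_and_correlate(cov_estimates):
    """Average daily covariance estimates and convert the smoothed
    matrix into a correlation matrix, R = V^{-1/2} S V^{-1/2}."""
    sigma_bar = np.mean(cov_estimates, axis=0)  # element-wise average over days
    d = np.sqrt(np.diag(sigma_bar))
    corr_bar = sigma_bar / np.outer(d, d)
    return sigma_bar, corr_bar

rng = np.random.default_rng(3)
m, N = 5, 4
# simulated stand-ins for m daily covariance estimates
covs = np.array([np.cov(rng.normal(size=(50, N)), rowvar=False)
                 for _ in range(m)])
sigma_bar, corr_bar = smooth_and_correlate(covs)
```

Since an average of positive semi-definite matrices is again positive semi-definite, the smoothed correlation matrix automatically has unit diagonal and entries bounded by one in absolute value.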

To produce not only forecasts of the asset return covariance, but of the entire return density, we parametrize a suitable return distribution driven by the dynamics of $\bar{\Sigma}_t$, in the spirit of Jin and Maheu (2013).
The dynamics of the predicted return process conditional on the latent covariance are modeled as multivariate Gaussian.
To capture parameter uncertainty, integrated volatility is modeled via a Wishart distribution.^{15}^{15}15This represents a multivariate extension of the normal-inverse-gamma approach, applied, for instance, by Barndorff-Nielsen (1997), Andersson (2001), Jensen and Lunde (2001) and Forsberg and Bollerslev (2002).
Thus, the model is defined by:

$$r_{t+1} = \Sigma_{t+1}^{1/2}\, \varepsilon_{t+1}, \tag{34}$$
$$\varepsilon_{t+1} \sim \mathrm{N}\left(0, I_N\right), \tag{35}$$
$$\Sigma_{t+1}^{-1} \mid \nu, S_t \sim \mathrm{W}\left(\nu, S_t\right), \tag{36}$$
$$S_t = \left(\nu\, \bar{\Sigma}_t\right)^{-1}, \tag{37}$$

such that $\mathbb{E}\left[\Sigma_{t+1}^{-1} \mid \nu, S_t\right] = \nu S_t = \bar{\Sigma}_t^{-1}$.

Although we impose a Gaussian likelihood conditional on $\Sigma_{t+1}$, the posterior predictive distribution of the returns exhibits fat tails after marginalizing out $\Sigma_{t+1}$, due to the choice of the prior.
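A sketch of how draws from such a predictive distribution can be generated, assuming an inverse-Wishart parametrization for the covariance centered at a placeholder smoothed estimate; the dimension, degrees of freedom and covariance target below are illustrative, not the paper's values:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(4)
N, nu = 3, 10                              # illustrative dimension / dof
sigma_bar = np.diag([1e-4, 2e-4, 1.5e-4])  # placeholder smoothed HF estimate

# scale chosen so that E[Sigma] = B / (nu - N - 1) = sigma_bar
B = (nu - N - 1) * sigma_bar

S = 2000
draws = np.empty((S, N))
for s in range(S):
    # draw a covariance matrix, then returns conditionally Gaussian
    sigma_s = invwishart.rvs(df=nu, scale=B, random_state=rng)
    draws[s] = rng.multivariate_normal(np.zeros(N), sigma_s)
# marginally, the draws are fatter-tailed than a Gaussian with the same variance
```

Mixing the Gaussian over random covariance draws is precisely what generates the fat-tailed predictive distribution described above, analogous to how a normal-inverse-gamma mixture yields a Student-t in the univariate case.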

### 4.2 Stochastic Volatility Factor Models

Parametric models for return distributions in very high dimensions accommodating time variation in the covariance structure are typically either highly restrictive or computationally (or numerically) infeasible.
Even dynamic conditional correlation (DCC) models as proposed by Engle (2002) are not feasible for processes including several hundred assets.^{16}^{16}16One notable exception is a shrinkage version of the DCC model as recently proposed by Engle et al. (2017).
Likewise, stochastic volatility models allow for flexible (factor) structures but have so far not been computationally feasible for high-dimensional processes either.
Recent advances in MCMC sampling techniques, however, make it possible to estimate stochastic volatility factor models even in very high dimensions while keeping the numerical burden moderate. Employing interweaving schemes to overcome well-known issues of slow convergence and high autocorrelations of MCMC samplers for SV models, Kastner et al. (2017) provide means to considerably boost computational speed.

We therefore assume a stochastic volatility factor model in the spirit of Shephard (1996), Jacquier et al. (2002) and Chib et al. (2006) as given by

$$r_t = \Lambda f_t + U_t^{1/2}\, \epsilon_t, \qquad f_t = V_t^{1/2}\, \zeta_t, \tag{38}$$

where $\Lambda$ is an $N \times j$ matrix of unknown factor loadings, $U_t$ is a diagonal matrix of latent idiosyncratic variances, and $V_t$ is a diagonal matrix of the variances of the $j$ common latent factors $f_t$. The innovations $\epsilon_t$ and $\zeta_t$ are assumed to follow independent standard normal distributions. The model thus implies that the covariance matrix of $r_t$ is driven by a factor structure

$$\operatorname{Cov}\left(r_t\right) = \Sigma_t = \Lambda V_t \Lambda' + U_t, \tag{39}$$

with $\Lambda V_t \Lambda'$ capturing common factors and $U_t$ capturing idiosyncratic components. The covariance elements are thus parametrized in terms of the unknown parameters, whose dynamics are triggered by the $j$ common factors. All latent log variances $h_{i,t}$, with $U_t = \operatorname{diag}\left(e^{h_{1,t}}, \ldots, e^{h_{N,t}}\right)$ and $V_t = \operatorname{diag}\left(e^{h_{N+1,t}}, \ldots, e^{h_{N+j,t}}\right)$, are assumed to follow AR(1) processes,

$$h_{i,t} = \mu_i + \phi_i \left(h_{i,t-1} - \mu_i\right) + \sigma_i\, \eta_{i,t}, \qquad i = 1, \ldots, N + j, \tag{40}$$

where the innovations $\eta_{i,t}$ follow independent standard normal distributions and $h_{i,0}$ is an unknown initial state. The AR(1) representation captures the persistence in idiosyncratic volatilities and correlations. The assumption that all elements of the covariance matrix are driven by identical dynamics is obviously restrictive; however, it yields parameter parsimony even in high dimensions. Estimation errors can therefore be strongly limited, and parameter uncertainty can be straightforwardly captured by choosing appropriate prior distributions for $\mu_i$, $\phi_i$ and $\sigma_i$. The approach can be seen as a strong parametric regularization of the covariance matrix which, however, still accommodates important empirical features. Furthermore, though the joint distribution of the data is conditionally Gaussian, the stationary distribution exhibits thicker tails.
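The factor structure can be illustrated by simulating from the model. The sizes and parameter values below are illustrative, and for brevity all log-variance processes share the same AR(1) parameters (the model allows them to differ):

```python
import numpy as np

rng = np.random.default_rng(5)
N, j, T = 6, 2, 500          # assets, common factors, days (illustrative)

Lambda = rng.normal(scale=0.3, size=(N, j))   # factor loadings
mu = np.full(N + j, -9.0)                     # log-variance levels
phi, sig = 0.95, 0.2                          # AR(1) parameters (shared here)

# latent log-variances: h_t = mu + phi * (h_{t-1} - mu) + sig * eta_t
h = np.empty((T, N + j))
h[0] = mu + sig / np.sqrt(1.0 - phi**2) * rng.normal(size=N + j)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal(size=N + j)

r = np.empty((T, N))
for t in range(T):
    U_t = np.exp(h[t, :N])          # idiosyncratic variances (diagonal)
    V_t = np.exp(h[t, N:])          # common factor variances (diagonal)
    f_t = np.sqrt(V_t) * rng.normal(size=j)
    eps_t = np.sqrt(U_t) * rng.normal(size=N)
    r[t] = Lambda @ f_t + eps_t

# implied conditional covariance at the last date:
Sigma_T = Lambda @ np.diag(np.exp(h[-1, N:])) @ Lambda.T \
          + np.diag(np.exp(h[-1, :N]))
```

Note the parsimony: the $N \times N$ covariance path is generated by only $N \times j$ loadings plus $N + j$ univariate log-variance processes, which is what keeps estimation feasible for several hundred assets.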

The priors for the parameters of the univariate stochastic volatility processes are chosen independently, in line with Aguilar and West (2000) and Pati et al. (2014), i.e.,

$$\mu_i \sim \mathrm{N}\left(b_\mu, B_\mu\right), \qquad \frac{\phi_i + 1}{2} \sim \mathcal{B}\left(a_0, b_0\right), \qquad \sigma_i^2 \sim \mathcal{G}\left(\tfrac{1}{2}, \tfrac{1}{2 B_\sigma}\right). \tag{41}$$

The level $\mu_i$ is equipped with a normal prior, the prior of the persistence parameter $\phi_i$ is chosen such that $\phi_i \in (-1, 1)$, which enforces stationarity, and for $\sigma_i^2$ we assume a Gamma prior.^{17}^{17}17In the empirical application we set the prior hyper-parameters as proposed by Kastner et al. (2017).
For each element of the factor loadings matrix, a hierarchical zero-mean Gaussian distribution is chosen.
Choosing hierarchical priors for the factor loadings allows us to enforce sparsity, reducing parameter uncertainty especially in very high dimensions; see, e.g., Griffin and Brown (2010) and Kastner (2016). Kastner (2016) proposes interweaving strategies in the spirit of Yu and Meng (2011) to reduce the enormous computational burden of high-dimensional SV estimation.

### 4.3 Covariance Shrinkage

The simplest and most natural covariance estimator is the rolling window sample covariance estimator,

$$\hat{\Sigma}_t = \frac{1}{h-1} \sum_{s=t-h}^{t-1} \left(r_s - \bar{r}_t\right) \left(r_s - \bar{r}_t\right)', \tag{42}$$

with $\bar{r}_t = \frac{1}{h} \sum_{s=t-h}^{t-1} r_s$ and an estimation window of length $h$.
It is well-known that $\hat{\Sigma}_t$ is highly inefficient and yields poor asset allocations as long as $h$ does not sufficiently exceed the number of assets $N$. To overcome this problem, Ledoit and Wolf (2004) propose shrinking $\hat{\Sigma}_t$ towards a more efficient (though biased) estimator of $\Sigma_t$.^{18}^{18}18Instead of shrinking the eigenvalues of $\hat{\Sigma}_t$ linearly, an alternative approach would be the implementation of non-parametric shrinkage in the spirit of Ledoit and Wolf (2012), which is left for future research. The classical linear shrinkage estimator is given by

$$\hat{\Sigma}_t^{\mathrm{LW}} = \hat{\alpha}\, F_t + \left(1 - \hat{\alpha}\right) \hat{\Sigma}_t, \tag{43}$$

where $F_t$ denotes the sample constant correlation matrix and $\hat{\alpha}$ minimizes the expected Frobenius norm between $\hat{\Sigma}_t^{\mathrm{LW}}$ and $\Sigma_t$. For more details, see Ledoit and Wolf (2004). $F_t$ is based on the sample correlations $\rho_{ij} = s_{ij} / \sqrt{s_{ii}\, s_{jj}}$, where $s_{ij}$ is the $i$-th element of the $j$-th column of the sample covariance matrix $\hat{\Sigma}_t$. The average sample correlation $\bar{\rho} = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \rho_{ij}$ then yields the $ij$-th element of $F_t$ as $\bar{\rho} \sqrt{s_{ii}\, s_{jj}}$, with diagonal elements $s_{ii}$.
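The shrinkage step can be sketched as follows; note that the shrinkage intensity is treated as given here, whereas Ledoit and Wolf (2004) estimate it from the data (the return window and the chosen intensity below are placeholders):

```python
import numpy as np

def constant_correlation_target(S):
    """Shrinkage target F: sample variances on the diagonal, off-diagonal
    elements implied by the average sample correlation."""
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)                 # sample correlation matrix
    N = S.shape[0]
    r_bar = (R.sum() - N) / (N * (N - 1))  # average off-diagonal correlation
    F = r_bar * np.outer(d, d)
    np.fill_diagonal(F, np.diag(S))
    return F

def shrink(S, alpha):
    # linear shrinkage: alpha * F + (1 - alpha) * S, cf. (43)
    return alpha * constant_correlation_target(S) + (1.0 - alpha) * S

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 8))              # placeholder return window
S = np.cov(X, rowvar=False)
Sigma_lw = shrink(S, alpha=0.3)            # assumed shrinkage intensity
```

Shrinking towards the constant correlation target leaves the sample variances untouched and only pulls the noisy pairwise correlations towards their cross-sectional average.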

Finally, the resulting predictive return distribution is obtained by assuming $r_{t+1} \sim \mathrm{N}\left(0, \hat{\Sigma}_t^{\mathrm{LW}}\right)$. Equivalently, a Gaussian framework is implemented for the sample variance-covariance matrix $\hat{\Sigma}_t$. Hence, parameter uncertainty is taken into account only through the imposed regularization of the sample variance-covariance matrix. We refrain from imposing additional prior distributions in order to study the effect of a pure covariance regularization and to facilitate comparisons with the sample covariance matrix.

## 5 Empirical Analysis

### 5.1 Data and General Setup

In order to obtain a representative sample of US stock-market listed firms, we select all constituents of the S&P 500 index which were traded during the complete time period starting in June 2007, the earliest date for which corresponding HF data from the LOBSTER database is available. This results in a total dataset of 308 stocks listed at Nasdaq.^{19}^{19}19Exclusively focusing on stocks which are continuously traded through the entire period is common practice in the literature and implies some survivorship bias and the neglect of younger companies with IPOs after 2007. In our allocation approach, this aspect could in principle be addressed by including *all* stocks from the outset and a priori imposing zero weights on stocks in periods when they are not (yet) traded. Such an approach, however, would further increase the dimension of the asset space, implying additional computational burden. Moreover, the conclusions drawn from our empirical study do not critically depend on this aspect. We therefore refrain from further extensions of the asset space.

The data covers the period from June 2007 to March 2017, corresponding to 2,409 trading days after excluding weekends and holidays. Daily returns are computed based on end-of-day prices. All the computations are performed after adjusting for stock splits and dividends. As of March 2017, the total market capitalization of the stocks included in our setup exceeds 17,000 Billion USD, which comes close to the S&P 500 capitalization of 21,800 Billion USD.
We extend our data set by HF data extracted from the LOBSTER database, which provides tick-level message data for every asset and trading day.^{20}^{20}20See https://lobsterdata.com. The resulting mid-quote data set comprises several billion observations.

Panel a) of Figure 3 visualizes the cross-sectional average of annualized realized volatilities estimated based on univariate realized kernels according to Barndorff-Nielsen et al. (2008) for each trading day. Panel b) of Figure 3 shows the average correlations computed using blocked realized kernel estimates based on 4 groups on a daily basis. We observe generally positive correlations revealing (partly) substantial and persistent fluctuations.

In order to investigate the prediction power and resulting portfolio performance of our models, we sequentially generate forecasts on a daily basis and compute the corresponding paths of portfolio weights.
Table 1 summarizes the distinct steps of the estimation and forecasting procedure. We implement the models introduced in Section 4. The HF approach is based on the BRK-Wishart model with 4 liquidity groups, and the covariance matrix is predicted as the average of the estimates over the previous 5 days. The SV model is based on a small number of common factors^{21}^{21}21The predictive accuracy is very similar across a range of values for the number of factors, but declines when including more factors., while forecasts based on the sample variance-covariance matrix and its shrunken version rely on a rolling window of 500 days.

Every model is based on a Gaussian likelihood, where parameter uncertainty, as taken into account in the HF and SV models, leads to fat-tailed predictive return distributions. Moreover, in line with a wide range of studies utilizing daily data, we refrain from predicting mean returns but assume $\mathbb{E}\left[r_{t+1}\right] = 0$ in order to avoid excessive estimation uncertainty. Therefore, we effectively perform global minimum variance optimization under transaction costs, parameter uncertainty and model uncertainty.

| Step | Description |
|---|---|
| Bootstrap iteration | Randomly choose the assets for the investment horizon out of all 308 assets |
| Initialization | Choose the predictive models |
| Initialize weights | |
| At each day | For every model: |
| | 1. MCMC sampling from the posterior distribution |
| | 2. Generate draws from the predictive distribution |
| | 3. Compute the optimal allocation depending on the current allocation |
| | 4. Evaluate the (predictive) performance |