Abstract
Investors in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios, when implemented with traditional sample estimates, converge to the worst-performing portfolio as the portfolio size becomes large. The crux of the problem is the estimation error that accumulates over the huge dimension of the stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ₁ minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting sparse portfolio selects stocks through a data-driven procedure and hence offers a stable mean-variance portfolio in practice. As the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even when the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. Copyright © 2017 Society for Risk Analysis.
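A constrained ℓ₁ minimization of the kind the abstract describes can be cast as a linear program. The sketch below is illustrative only, assuming a Dantzig-selector-style formulation min ‖w‖₁ subject to ‖Σ̂w − μ̂‖∞ ≤ λ, where Σ̂ and μ̂ are the sample covariance and mean; the function name `lpo_weights`, the tuning parameter `lam`, and the simulated data are all assumptions for illustration, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import linprog


def lpo_weights(Sigma_hat, mu_hat, lam):
    """Solve  min ||w||_1  s.t.  ||Sigma_hat @ w - mu_hat||_inf <= lam
    via the standard split w = u - v with u, v >= 0, so the objective
    becomes sum(u) + sum(v) and the problem is a linear program."""
    p = len(mu_hat)
    c = np.ones(2 * p)                       # objective: sum(u) + sum(v) = ||w||_1
    A = np.hstack([Sigma_hat, -Sigma_hat])   # Sigma_hat @ (u - v)
    A_ub = np.vstack([A, -A])                # encode both sides of the inf-norm bound
    b_ub = np.concatenate([mu_hat + lam, lam - mu_hat])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    if not res.success:
        raise RuntimeError(f"LP solve failed: {res.message}")
    u, v = res.x[:p], res.x[p:]
    return u - v                             # sparse estimate of the optimal control


if __name__ == "__main__":
    # Toy usage on simulated returns; lam would be tuned (e.g., by
    # cross-validation) in practice.
    rng = np.random.default_rng(0)
    n, p = 200, 50
    returns = rng.normal(0.001, 0.02, size=(n, p))
    mu_hat = returns.mean(axis=0)
    Sigma_hat = np.cov(returns, rowvar=False)
    w = lpo_weights(Sigma_hat, mu_hat, lam=1e-4)
    print(f"nonzero positions: {np.count_nonzero(np.abs(w) > 1e-8)} of {p}")
```

Larger values of `lam` relax the constraint and yield sparser portfolios, which is the mechanism by which this type of estimator controls the accumulation of estimation error in high dimensions.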
| Original language | English |
| --- | --- |
| Pages (from-to) | 1532-1549 |
| Journal | Risk Analysis |
| Volume | 37 |
| Issue number | 8 |
| Early online date | Mar 2017 |
| DOIs | |
| Publication status | Published - Aug 2017 |
Citation
Chiu, M. C., Pun, C. S., & Wong, H. Y. (2017). Big data challenges of high-dimensional continuous-time mean-variance portfolio selection and a remedy. Risk Analysis, 37(8), 1532-1549.

Keywords
- Constant-rebalancing portfolio
- Constrained ℓ₁ minimization
- Continuous-time mean-variance portfolio
- High-dimensional portfolio selection
- Machine learning
- Sparse portfolio