
In this paper, we propose a general framework for optimal investment and a collection of trading ideas, which combine probability and statistical theory with, potentially, machine learning techniques, e.g., machine learning regression, classification and reinforcement learning. The trading ideas are easy to implement and their validity is justified with full mathematical rigor. The framework is model-free and can, in principle, incorporate all categories of trading ideas. Simulation and backtesting studies show good performance of selected trading strategies under the proposed framework. Sharpe ratios are above 8.00 in the simulation study and Sortino ratios are above 4.00 in backtesting, with very limited drawdowns, using 20 years of monthly data on US equities (NASDAQ, NYSE and AMEX from 1999.1 to 2018.12) and 17 years of monthly data on China A-Share equities (Shanghai and Shenzhen Stock Exchanges from 2002.1 to 2018.8).

In this paper, we propose a general framework under which optimal portfolio construction and investment activities can be carried out, along with a number of trading strategies. This paper adds to the investment management literature through the following ideas. First, we recognize the time-varying property of asset return distributions^{1} and develop, and call for, new methods, potentially based on machine learning and panel regression, to conduct model inference and portfolio optimization. Second, in order to build the investment management framework, we propose to use machine learning methods to handle the big data input and the dynamic programming problems that may arise when we model market uncertainty and formulate a dynamic optimal investment problem. Third, and most importantly, we propose a completely different way to look at the randomness of the financial market. Previous work focuses on forecasting-type investment management techniques and tries to build various types of models to predict the uncertainties; technical analysis is inevitably used. The authors of this paper, however, do not rely on the serial correlations of asset returns or on analysis based on technical indicators. Instead, we effectively use the information in the cross-sectional data of asset returns and build various statistics, i.e., "crystal balls", that enable us to observe the future realizations of market uncertainty at an aggregate or portfolio level. Investment strategies that are accurate at certain confidence levels are proposed and tested via simulation and backtesting studies. The analysis in this paper, for the first time in the financial literature, utilizes the cross-sectional data of financial assets to infer their aggregate time-series behavior. The proposed theories prove to be effective both in simulations, in an artificial environment, and in backtesting studies with real market data.
Last, but not least, we propose to combine brute-force, model-free approaches, such as machine learning (reinforcement learning or Q-learning), with financial analysis, related work on which can be found in [^{2}.

Throughout the history of financial analysis, researchers and practitioners have striven to build investment or trading strategies, over short, medium or long horizons, to benefit from economic and market movements. Inevitably, however, the work in the literature focuses on predicting future market movements from the information available in the past. For example, popular methods include, but are not limited to, trend-following, mean-reversion and long-short strategies. Taking the class of trend-following strategies as an example, literature reviews can be found in [

Moreover, current practice estimates model parameters directly from historical time-series data, which potentially introduces some problems. First, the distributions of economic or financial time series might be time-varying, so a brute-force inference using historical data might result in significant estimation bias. Second, imposing additional model structure on the historical data series introduces further model risk.

All these facts encourage the authors of this paper to search for a new framework under which portfolio management and trading activities can be conducted. Ideally, first, this framework should be model-free, so that it is able to incorporate and accommodate all approaches, whether model-dependent or model-independent, static or dynamic. Second, this framework should allow efficient and accurate parameter inference that captures the time-dependent features of the coefficients involved.

In addition to proposing a general framework, we work with a large class of optimal investment strategies based on a rotation of the original asset space. For some of the strategies, we do not try to predict market movements from past data directly; past data are used only to obtain model parameter estimates, and all the parameters mentioned here are theoretically measurable at the current time and do not involve prediction. Instead, we try to identify, orthogonalize and isolate the random sources in the market and diversify away the randomness (asymptotically or exactly). Moreover, we do not try to model the serial correlation structure of asset returns and mainly focus on their cross-sectional properties. Simulation and backtesting studies show good applicability of the investment models that we have developed. To the best of our knowledge, this paper is the first in the investment and portfolio management literature to discuss such orthogonalization and diversification.

Finally, numerical experiments are carried out. We find consistently good performance in both simulation and backtesting studies, which coincides with the basic intuition that if we obtain good parameter estimates, performance is guaranteed by mathematical and probability laws. In spite of the good performance, limitations of the testing approach and possible remedies are discussed.

The organization of this paper is as follows. Section 2 describes the main investment framework. Section 3 introduces concrete investment strategies under the proposed framework. Section 4 performs simulation studies to test the proposed models, Section 5 backtests the models using equity data from the US and China markets, and Section 6 concludes. Readers who are only interested in the investment strategies can skip Section 2, which contains the rigorous mathematical derivations, and start directly from Section 3.

This section contains the mathematical description and construction of the optimal investment framework. We first introduce the probability space and tools required for further analysis. Next, we propose a rotated asset space, which we will work with instead of the original asset space. Afterwards, we write down the general formulation of the optimization problem and propose an investment framework to jointly solve the optimization problem and, in the meantime, perform parameter inference via machine learning (ML). Readers can start directly from Section 3 for concrete trading models, with the understanding that skipping this section does not prevent them from understanding the investment strategies.

Assume that the randomness in the financial economic system under consideration is modeled by a filtered probability space ( Ω , F , F ( ⋅ ) , ℙ ) , where Ω represents the sample space, modeling the entire collection of possible outcomes of the system, F represents all the information in the system and F ( ⋅ ) : = { F t } t ≥ 0 is the information filtration with F : = ∪ t ≥ 0 F t , satisfying the usual conditions, and ℙ is the historical probability measure on F . There are M financial assets in the economic system, whose rate of return processes are denoted by an M-vector { R t } t ≥ 0 . Suppose that R t ∈ L 2 ( F t ) , meaning that R t has finite variance-covariance structure for all t ≥ 0 . Obviously we have the following (trivial) decomposition

$$R_{t+h} = \underbrace{\mathbb{E}[R_{t+h} \mid \mathcal{F}_t]}_{\mu_t} + \underbrace{\left( R_{t+h} - \mathbb{E}[R_{t+h} \mid \mathcal{F}_t] \right)}_{\text{martingale difference sequence } \hat{U}_{t+h}} = \mu_t + \underbrace{\hat{U}_{t+h}}_{\sigma_t U_{t+h}} = \mu_t + \sigma_t U_{t+h} := \underbrace{\mu(t_0, t_0+h, \cdots, t, \omega, R_{t_0}, \cdots, R_t, U_{t_0}, \cdots, U_t)}_{\text{deterministic and predictable at time } t} + \underbrace{\sigma(t_0, t_0+h, \cdots, t, \omega, R_{t_0}, \cdots, R_t, U_{t_0}, \cdots, U_t)}_{\text{deterministic and predictable at time } t} \, \underbrace{U_{t+h}}_{\text{random at time } t}. \qquad (1)$$

Here process U ^ t ∈ ℝ M is an M-vector generating the randomness, all the quantities are defined with proper dimensions, and it is obvious that F t = σ ( U ^ 0 → t ) , i.e., U ^ generates all the information in the financial economic system. The last line of the above equation tries to impose some model structures on μ and σ , which can be of any functional form. U ^ can be modeled by, for example, a joint Lévy process, a system of stochastic differential equations, a linear or nonlinear time series or even a collection of artificial neural networks. A detailed explanation of Equation (1) is postponed to the next section.

Following Section 2.1, we assume that the source of randomness in the economy and the financial market can be represented by an N-dimensional ( 1 ≤ N ≤ ∞ ) jointly independent and identically distributed (i.i.d.)^{3} stochastic process

U t = { U t j } j = 1 N (conditional on F t − h ) with zero mean and COV t ( U t + h i , U t + h j ) = δ i , j ,

for any t ≥ 0 and h > 0 , where δ i , j is the Kronecker Delta and h is the smallest time increment under our consideration. Information filtration { F t } t ≥ 0 is generated by U. Recall that, there are M primary financial assets traded in the market, whose rate of returns are denoted by an M-vector R t = { R t m } m = 1 M . Suppose that we have the conditional decomposition^{4} (conditional on F t − h )

R t = μ t − h + σ t − h U t (2)

where μ t − h ∈ F t − h is an M × 1 vector and σ t − h ∈ F t − h is an M × N matrix, both of which can be estimated with some precision and accuracy at time t − h . Then, we know that E t − h [ R t ] = μ t − h ^{5} and COV t − h [ R t − μ t − h ] = σ t − h σ t − h ⊺ ^{6}.

Note that this setting is very general, as h ∈ ℝ + can be 1 second, 1 day, 1 week or 1 year, and it encompasses all possible time frequencies. ( μ t − h , σ t − h ) can be any stochastic process materialized at time t − h , for every ( t , h ) ∈ ℝ + 2 . ( μ , σ ) also helps to model the cross-sectional and time-series correlation structure of R. When h → 0 + , we can consider the limiting case of Equation (2) as a system of stochastic differential equations with jumps (SDEJ), when μ t − h = μ ( t − h , U t − h , R t − h ) and σ t − h = σ ( t − h , U t − h , R t − h ) , i.e., they are functions of the random sources U t − h and R t − h at time t − h . Equation (2) defines a general semi-martingale R when proper technical conditions are satisfied.
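As a quick numerical illustration of the decomposition in Equation (2), the following sketch (with illustrative sizes and hypothetical parameter values) simulates R = μ + σU for i.i.d. shocks with identity covariance and checks the implied conditional moment structure:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 4, 4                                          # assets and random sources (illustrative)
mu = rng.normal(0.01, 0.005, M)                      # conditional mean vector mu_{t-h}
sigma = rng.normal(0.0, 0.2, (M, N)) / np.sqrt(N)    # loading matrix sigma_{t-h}

# One draw of the i.i.d. shocks U_t (zero mean, identity covariance) gives R_t.
U = rng.standard_normal(N)
R = mu + sigma @ U

# Over many draws, E[R] = mu and Cov[R] = sigma sigma^T.
shocks = rng.standard_normal((N, 200_000))
samples = mu[:, None] + sigma @ shocks
emp_cov = np.cov(samples)
assert np.allclose(emp_cov, sigma @ sigma.T, atol=2e-3)
```

The covariance check makes concrete why only the product of σ with its transpose, not σ itself, is identified from return data alone.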

Remark 1 (On M, N and Factor Structure). If M ≥ N , i.e., the number of financial assets R is at least the number of random sources U, we are in an effectively complete market. As some ( M − N , to be accurate) assets are redundant, we can choose N linearly independent assets in this case. However, if M < N , the market is incomplete. For the sake of generality, we study the case where N = ∞ . Consider an orthonormal basis { e t n } n = 1 ∞ and decompose R t m as

R t m = ∑ n = 1 ∞ 〈 R m , e n 〉 t e t n (3)

where 〈 ⋅ , ⋅ 〉 t is the canonical inner product in L 2 ( F t ) , i.e., the Hilbert space of all stochastic processes that have finite second-order moments at time t, and m runs from 1 to M. Equation (3) defines an infinite series expansion of R t m ; we truncate it at the first M elements and write

R t m = ∑ n = 1 M 〈 R m , e n 〉 t e t n + φ t m , M . (4)

Here φ t m , M : = ∑ n = M + 1 ∞ 〈 R m , e n 〉 t e t n is considered to be the residual term. Then, if we denote Θ t : = ( θ t m , n ) m , n : = ( 〈 R m , e n 〉 t ) m , n , e t = ( e t 1 , ⋯ , e t M ) , likewise for R t and ϕ t M = ( φ t 1, M , ⋯ , φ t M , M ) , we will have

Θ t − 1 R t = e t + Θ t − 1 ϕ t M . (5)

Clearly, our analysis is asymptotic in nature under the assumption that

$$\frac{1}{M} \mathbf{1}_M \cdot \Theta_t^{-1} \phi_t^M \cong 0$$

where 1 M is an M-vector with entries all equal to 1. The above description justifies the analysis in this paper, the asymptotic investment framework and related strategies to be proposed in Section 3.
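A finite-dimensional sketch of the expansion (3) and its truncation (4) may help. Here, purely for illustration, the L² inner product is replaced by an empirical one over a long sample vector, and all sizes are made up; assets load mostly on the first M basis elements so the residual φ is small:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for L^2(F_t): each process is a length-T sample vector and
# <x, y> = x . y / T plays the role of the inner product.
T, M, K = 10_000, 5, 50      # sample length, assets, basis size (K > M)

# Orthonormal basis {e^n}: scaled orthonormal columns from a QR decomposition,
# so that <e^i, e^j> = delta_ij under the empirical inner product.
E = np.linalg.qr(rng.standard_normal((T, K)))[0].T * np.sqrt(T)   # K x T

# Assets loading mostly on the first M basis elements, plus a small tail.
coef = np.zeros((M, K))
coef[:, :M] = rng.standard_normal((M, M))
coef[:, M:] = 0.01 * rng.standard_normal((M, K - M))
R = coef @ E                                                       # M x T

# Truncated expansion (Eq. (4)): Theta = (<R^m, e^n>)_{n <= M}, residual phi.
Theta = (R @ E[:M].T) / T
R_trunc = Theta @ E[:M]
phi = R - R_trunc
assert np.max(np.abs(phi)) < 0.1 * np.max(np.abs(R))   # tail is comparatively small
```

The final assertion is the finite-sample analogue of the smallness condition on Θ⁻¹φ above.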

Remark 2 (More on Factor Structure). Mapping Equation (2) to the popular factor representation of asset returns R, we have

R t = α t − h + β t − h ⋅ F t − τ + σ t − h ε t (6)

where τ can be 0 or h. Equation (2) can be viewed as an equivalent form of Equation (6) after a proper Gram-Schmidt orthogonalization process on ( F , ε ) . The validity of a factor representation is justified in Remark 1. Moreover, the determination of the factor space F requires a thorough theoretical and empirical study. For example, one choice of factor is the VIX index, studied in [

We illustrate the ideas in the discrete-time setting, with the understanding that solving continuous-time models requires time discretization, which essentially reduces a continuous-time problem to a discrete-time one. We seek a rotation matrix λ t − h and a portfolio weight vector w t − h of appropriate dimensions, such that the w-weighted average of the rotated asset space

$$\underbrace{w_{t-h} \cdot \lambda_{t-h} \cdot R_t}_{\text{random at time } t-h} = \underbrace{w_{t-h} \cdot \lambda_{t-h} \cdot \mu_{t-h}}_{\text{deterministic at time } t-h} + \underbrace{w_{t-h} \cdot \lambda_{t-h} \cdot \sigma_{t-h} \cdot U_t}_{\cong \, 0} \qquad (7)$$

is what we want to work with.

An Example of the Rotation

Rewrite Equation (2) as

R ^ t = [ σ t − h ⊺ σ t − h ] − 1 σ t − h ⊺ R t = [ σ t − h ⊺ σ t − h ] − 1 σ t − h ⊺ μ t − h + U t = θ t − h + U t (8)

assuming that σ t − h ⊺ σ t − h is of full rank. Define the rotated assets as R ^ t = [ σ t − h ⊺ σ t − h ] − 1 σ t − h ⊺ R t , with the Moore-Penrose inverse λ t − h := [ σ t − h ⊺ σ t − h ] − 1 σ t − h ⊺ defining the rotation^{7}. Note that, conditional on the information filtration F t − h , i.e., all the public or private information available at time t − h , the components of R ^ t are mutually orthogonal. Our goal is to find optimal portfolio weights w t − h = ( w t − h 1 , w t − h 2 , ⋯ , w t − h N ) on the rotated asset space R ^ t , for all ( t , h ) ∈ ℝ + 2 with t ≥ h . The optimally realized return at time t is therefore

R ^ t w = w t − h ⋅ R ^ t (9)

= w t − h ⋅ θ t − h + w t − h ⋅ U t (10)

and

E t − h [ R ^ t w ] = w t − h ⋅ θ t − h . (11)

With the rotated assets R ^ t , we can carry out the following optimization, i.e., minimize the impact of the error term w t − h ⋅ U t on the investment strategies that we develop. In the sequel, we will always assume M = N , without loss of generality, unless we want to discuss the cases with an incomplete market.
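The rotation of Equation (8) can be sketched numerically. Assuming a square, full-rank σ (the M = N case adopted above) with hypothetical parameter values, the pseudo-inverse recovers the shocks exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
M = N = 6

mu = rng.normal(0.01, 0.02, M)                 # mu_{t-h}
sigma = 0.1 * rng.standard_normal((M, N))      # sigma_{t-h}, square, a.s. full rank
U = rng.standard_normal(N)                     # shocks U_t
R = mu + sigma @ U                             # Eq. (2)

# Rotation lambda_{t-h} = (sigma^T sigma)^{-1} sigma^T (Moore-Penrose inverse).
lam = np.linalg.solve(sigma.T @ sigma, sigma.T)
R_hat = lam @ R                                # rotated assets, Eq. (8)
theta = lam @ mu                               # theta_{t-h}

# In the rotated space the decomposition is exactly R_hat = theta + U.
assert np.allclose(R_hat, theta + U)
```

Solving the normal equations rather than forming the inverse explicitly is the standard numerically stable way to apply the pseudo-inverse.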

The Pricing Kernel

After the rotation stated in the previous section, we can define the stochastic discount factor in the rotated asset space, projected onto the space spanned by the assets R ^ , as M t = R t f ∑ n = 1 N U t n + ξ t , where R t f is the (locally) risk-free rate and ξ t ⊥ R ^ t , meaning that ξ t lies in the orthogonal space of R ^ t in L 2 ( F t ) . A linear transformation can bring the pricing kernel back to the original asset space.

The General No-Arbitrage Asset Pricing Formula. Under the no-arbitrage condition, the classic asset pricing relation reads

P t = E [ m t + h P t + h | F t ] (12)

where h is the smallest time increment, P t denotes the asset price at time t, m t is the stochastic discount factor evaluated at time t and F is the information filtration. Simple algebra transforms the above equation to

E [ m t + h ( R t + h − R t + h f ) | F t ] = 0 (13)

where R t + h f ∈ F t is the return of the locally risk-free asset. Of course, stochastic discount factor m depends on the information filtration F , which is assumed to be generated by an r-dimensional process X. Further assume that m t = g ( t , X 0 , X h , ⋯ , X t ) .

Suppose that the asset span is denoted by S and the corresponding return process is R t ∈ L 2 ( F t ) , i.e., R ∈ ℝ M is square-integrable. Equation (13) is equivalent to

cov t ( m t + h , R t + h − R t + h f ) + B t , t + h E t [ R t + h − R t + h f ] = 0. (14)

Here B t , t + h denotes the t-price of a zero-coupon bond with maturity time t + h . Because we have σ ( R t ) ⊆ F t and Span ( R ) ⊆ Span ( X ) ^{8}, we can, without loss of generality, assume that m t + h = 1 + ω t ⋅ R t + h . Then, we have

ω t Σ t + B t , t + h ⋅ E t [ R t + h − R t + h f ] = 0 (15)

where Σ is the conditional variance-covariance matrix and therefore

ω t = − B t , t + h ⋅ E t [ R t + h − R t + h f ] ⋅ Σ t − 1 . (16)

Once we obtain the weight process ω , we can price any asset in the span S . Therefore, the problem reduces to computing E t [ R t + h ] and E t [ R t + h i ⋅ R t + h j ] for any ( i , j ) pair. Alternative methods to compute the weight process can be found in [
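A minimal numerical sketch of Equations (15)-(16), using hypothetical conditional moments for the excess returns:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 3

# Hypothetical conditional moments of the excess returns R_{t+h} - R^f_{t+h}.
mean_ex = np.array([0.02, 0.01, 0.015])        # E_t[R_{t+h} - R^f_{t+h}]
A = 0.05 * rng.standard_normal((M, M))
Sigma = A @ A.T + 0.01 * np.eye(M)             # conditional covariance Sigma_t
B = 0.99                                       # zero-coupon bond price B_{t,t+h}

# Eq. (16): omega_t = -B_{t,t+h} * Sigma_t^{-1} E_t[R - R^f].
omega = -B * np.linalg.solve(Sigma, mean_ex)

# Eq. (15) is satisfied: Sigma_t omega_t + B_{t,t+h} E_t[R - R^f] = 0.
assert np.allclose(Sigma @ omega + B * mean_ex, 0.0)
```

The sign of ω shows the familiar intuition: assets with higher conditional risk premia receive more negative SDF loadings.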

Time-Series Regression. In most of the papers in the literature, a time-series regression is utilized to find the relation

R t + h = f ( t , X t , X t − h , ⋯ , X 0 ) + ϵ t , t + h . (17)

This, of course, can serve as an option in our analysis. The functional form f can be represented by a basis function expansion or a deep artificial neural network. To obtain the values of E t [ R t + h i ⋅ R t + h j ] , we can run a time-series regression of R t + h i ⋅ R t + h j on ( X t , X t − h , ⋯ ) . This means we can utilize regression techniques to compute the volatility estimates.

Panel Regression. In this paper, we emphasize the methodology using the panel regression technique. Following the notation above, we have common risk factors Y, asset specific risk factors Z = ( Z 1 , ⋯ , Z n ) and X = ( Y , Z ) . We can run a panel regression of the following form

( R t + h , X t + h ) = p ( t , R t , X t , ⋯ , R t − k h , X t − k h ) + ϵ t , t + h . (18)

^{8}σ ( ξ ) is the information sigma-algebra generated by ξ and Span ( R ) is the linear space spanned by R.

Then, we have

E t [ ( R t + h , X t + h ) ] = p ( t , R t , X t , ⋯ , R t − k h , X t − k h ) . (19)

The benefit of doing so is three-fold. First, it makes use of the entire cross-section of asset return data. Second, it gives estimates of future risk factor returns X. Third, it generates more observations and can reduce the reliance on past historical data series, i.e., k can be a small integer. To obtain the values of E t [ R t + h i ⋅ R t + h j ] , we can run a panel regression of R t + h i ⋅ R t + h j on X ^ t , where X ^ t is the new risk factor process adjusted for the dimensions of the variance-covariance matrix. In order to achieve better precision in function approximation via machine learning, we can interpolate the cross-sectional asset returns to formulate a cross-sectional curve, obtaining as many samples as desired at any granularity, and then perform the regression or functional approximation via machine learning. For a meaningful interpolation, we first sort the cross-sectional asset returns and then interpolate the resulting curve via any interpolation technique, for example, linear or polynomial interpolation or even an artificial neural network. The detailed interpolation proceeds as follows. First sort R t + h ; the sorted return series is ( R t + h [ 1 ] , ⋯ , R t + h [ M ] ) . Do the same for X and denote the sorted series by ( X t + h [ 1 ] , ⋯ , X t + h [ M ] ) . Suppose the regressor variable is ( 1, ⋯ , M ) . We interpolate p points between each pair ( R t + h [ i ] , R t + h [ i + 1 ] ) . Assume that the pair ( X t + h [ k i ] , X t + h [ k i + 1 ] ) corresponds to ( R t + h [ i ] , R t + h [ i + 1 ] ) . We then choose p equally spaced points from the interpolated sequence for ( X t + h [ 1 ] , ⋯ , X t + h [ M ] ) , and the goal is achieved.
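The sort-and-interpolate step can be sketched as follows. This is a simplification with hypothetical return and factor draws: linear interpolation on a common rank grid, with `p` interior points inserted per gap:

```python
import numpy as np

rng = np.random.default_rng(4)
M, p = 8, 3                                 # assets, interpolated points per gap

R = np.sort(rng.normal(0.0, 0.03, M))       # sorted cross-sectional returns
X = np.sort(rng.normal(0.0, 0.02, M))       # sorted factor values, matched by rank

# Regressor variable is the rank (1, ..., M); refine the grid so each gap
# between adjacent sorted returns receives p interior points.
rank = np.arange(1, M + 1)
fine = np.linspace(1, M, (M - 1) * (p + 1) + 1)

R_fine = np.interp(fine, rank, R)           # interpolated return curve
X_fine = np.interp(fine, rank, X)           # matching factor curve

# The enlarged sample can now feed a regression / ML function approximator.
assert len(R_fine) == (M - 1) * (p + 1) + 1
assert np.all(np.diff(R_fine) >= 0)         # sortedness is preserved
```

Any smoother interpolant (polynomial, spline, or a small neural network) can be swapped in for `np.interp` without changing the bookkeeping.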

General Discussions. Time-series regression seeks the same functional relation between the dependent and independent variables across time, whereas with panel regression the functional relation can be time-varying. On the other hand, in time-series regression the dependency on risk factors can differ for each asset in the universe, while in panel regression the functional dependency is the same across assets. Moreover, in time-series regression, we need more samples than the number of factors. In panel regression, however, we can incorporate asset-specific factors, for example, earnings per share or the book-to-market ratio, into the regression framework. If the asset span contains 7000 assets, we use 10 different asset-specific risk factors and look back 2 periods in time, then the panel regression has 7000 observations and 20 independent variables. As discussed previously, we can also interpolate the 7000 asset returns to obtain more samples, potentially infinitely many, for the regression. The reasons for preferring panel regression over time-series regression are as follows. First, the functional dependency of asset returns on state variables, i.e., factors, might be time-varying; therefore, using a long historical data series might be inappropriate. Second, according to derivatives pricing theory, the functional form of asset returns on the state variables is the same across different assets, which means a panel regression is suitable.

Consider the following dynamic portfolio optimization problem in a stochastically varying financial environment

sup ( w , τ ) ∈ W × S [ 0, T ] φ [ D τ G τ w ] (20)

where τ is a stopping time and S [ 0, T ] is a subspace of all stopping times in [ 0, T ] . D t is the discount factor under the physical measure and G t w is the cumulative payoff measure of the investment portfolio process w t , adjusted for risk, which can be path-dependent, where w t ∈ W , a proper space of portfolio weights. φ ( ⋅ ) is an appropriate measure of performance, which can be a conditional expectation, a conditional quantile or another metric. Note that Equation (20) relates the investment portfolio choice problem to a dynamic problem with optimal stopping; therefore, the optimal exercise boundary can be computed. Specifically, G can be the Sharpe ratio, the Sortino ratio, the maximum drawdown, or any other risk-adjusted performance measure. The function G can depend on realized values of state variables, or on conditional expected values that require ANNs to predict. The ultimate output of the above optimization problem is a set of portfolio weights at each time t. Note that w can also be modeled via ANNs, according to [

In [

Back to our investment framework, we propose to work in the rotated asset space R ^ and use Equation (20) as our main optimization problem. We call for a general methodology that combines the steps of model estimation and dynamic optimization. The reason is that, given the observation that the distributions of asset returns are time-varying, estimating the model parameters ( μ t , σ t ) at time t directly from the data prior to t introduces bias. Reinforcement learning (RL), however, makes it possible to optimize the portfolio weights in a model-free manner. Equation (20) is more general than mean-variance optimization because G can be a functional of past or future trajectories of ( μ , σ , U , R ) , which results in a dynamic stochastic optimal control problem. Dynamic portfolio optimization takes into account market regime changes. As [

To summarize, the authors of this paper propose a new way of practice in both pricing and trading, to eliminate model dependence^{9}, while maintaining the general economic or finance framework. This calls for the development of more advanced Artificial Neural Network (ANN) and RL techniques^{10}.

The first step in creating the general investment framework is data processing and factor construction. Basically, factors represent the risk decomposition of any asset return in the universe, and by bearing these risks investors get rewarded in the financial market. Factors, which can be both qualitative and quantitative, should fall into the following categories. First, the political environment and policies made by governments and other authorities should be included. Second, macroeconomic factors, such as the economic cycle, GDP, inflation, monetary policy and fiscal policy, should be included. Third, microeconomic and financial factors, such as market returns and yield to maturity for bonds, are helpful. Fourth, fundamental factors, such as earnings per share and book-to-market ratios, are crucial for equities. Fifth, technical indicator factors, such as the output from other predictive or trading models (namely, the resulting portfolio weights or predictions), stock market technical indicators and indicators from behavioral finance theories, i.e., market sentiment and real-time news from natural language processing, should also be included. Please note that there might be factors that do not fall into the above categories, such as weather conditions; as long as they are helpful in identifying the risk characteristics of an asset, we should include them. To summarize, this module processes raw market data and formulates different factors for a meaningful risk decomposition of asset returns. Moreover, different factors might have different observation frequencies; sometimes, interpolation in the time dimension is needed for granularity considerations.

^{9}Eliminating model dependence introduces data dependence.

^{10}One reference on this topic is [

After the factors are constructed, we can use the methodologies outlined in Section 2.3.2 to compute the conditional expected values of asset returns and factor returns. The predictions can be made at any time frequency, e.g., a second, an hour, a day, a week, or even a quarter. It is worth mentioning that the frequency of the factor input should match the frequency of the prediction. Instead of using the calendar as a measure of time flow, we can use trading volume or volatility as the metric; for example, we can divide the data into small blocks of equal trading volume or accumulated volatility. If the reinforcement learning technique is used, this step is merged into the next one: Portfolio Construction.

After we obtain the predicted asset and factor returns, we can compute the portfolio weights using them as inputs. The first candidates are the mean-variance frontier and the Black-Litterman model. As a second choice, we can utilize machine learning classification techniques, e.g., the k-means method, to categorize the asset universe into small groups based on the predicted and realized values and make long-short decisions accordingly. Moreover, Bayesian constrained deep reinforcement learning can be used to formulate the dynamic optimal portfolios. We emphasize that outputs from other investment models can be used as inputs to our framework, either by formulating them as factors or via ensemble methods, which are quite popular in the machine learning literature and practice.
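As a sketch of the classification-based construction, the following uses a minimal hand-rolled k-means (Lloyd's algorithm) on hypothetical (realized, predicted) return features, then longs the best-predicted cluster and shorts the worst:

```python
import numpy as np

rng = np.random.default_rng(5)
M, k = 300, 3

# Hypothetical features per asset: (realized return, predicted return).
feats = np.column_stack([rng.normal(0.0, 0.05, M), rng.normal(0.01, 0.03, M)])

# Minimal k-means (Lloyd's algorithm), initialized along the prediction axis.
order = np.argsort(feats[:, 1])
centers = feats[order[[M // 6, M // 2, 5 * M // 6]]]
for _ in range(50):
    d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([feats[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

# Long the cluster with the highest mean predicted return, short the lowest.
cluster_pred = np.array([feats[labels == j, 1].mean() for j in range(k)])
long_c, short_c = int(cluster_pred.argmax()), int(cluster_pred.argmin())
w = np.where(labels == long_c, 1.0, np.where(labels == short_c, -1.0, 0.0))
assert long_c != short_c and w.max() == 1.0 and w.min() == -1.0
```

In practice, a library implementation with multiple restarts would replace this toy loop, and the raw ±1 weights would be normalized before trading.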

There are many risk management techniques available in the literature; we mention two of them. First, dynamic portfolio insurance techniques can be used to reduce the maximum drawdown of the constructed portfolio. Second, we can set stop-gain or stop-loss thresholds to formulate a more prudent strategy. The last step is to perform a sensitivity and risk-attribution analysis to better understand the strategy performance.

^{11}According to [

^{12}Alternatively, the joint distribution of U can be recovered directly from historical data.

^{13}Otherwise empirical quantiles have to be used.

We refer the interested readers to [^{11}. All of the above analysis can be applied to the rotated asset space under the proposed framework.

With Equation (9), we can build a trading model that is profitable at some confidence level. To do this, we need to assume a joint distribution for U^{12} in order to compute its quantile values^{13}. Taking the joint Gaussian distribution as an example, denote the top α-quantile of w t − h ⋅ U t by q α + ( t ) and the bottom α-quantile of w t − h ⋅ U t by q α − ( t ) . Then, if w t − h ⋅ θ t − h > q α + ( t ) , we long w t − h ⋅ R ^ t , and if w t − h ⋅ θ t − h < q α − ( t ) , we short w t − h ⋅ R ^ t . This method applies at the portfolio level or at the individual asset level.
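A sketch of this quantile rule under the Gaussian assumption, where U has identity covariance so that w · U ~ N(0, ‖w‖²); the function name and inputs are illustrative:

```python
import numpy as np
from statistics import NormalDist

def trade_signal(w, theta, alpha=0.05):
    """Return +1 (long), -1 (short) or 0 for the rotated portfolio w . R_hat,
    assuming U ~ N(0, I) so that w . U ~ N(0, ||w||^2)."""
    z = NormalDist().inv_cdf(1 - alpha)      # upper alpha-quantile of N(0, 1)
    q_plus = z * np.linalg.norm(w)           # q_alpha^+(t)
    q_minus = -q_plus                        # q_alpha^-(t), by symmetry
    drift = float(w @ theta)                 # w_{t-h} . theta_{t-h}
    if drift > q_plus:
        return 1
    if drift < q_minus:
        return -1
    return 0

w = np.array([0.5, 0.3, 0.2])
assert trade_signal(w, np.array([3.0, 3.0, 3.0])) == 1     # strong positive drift
assert trade_signal(w, np.array([-3.0, -3.0, -3.0])) == -1
assert trade_signal(w, np.array([0.0, 0.0, 0.0])) == 0
```

For non-Gaussian U, `inv_cdf` would be replaced by the empirical quantiles mentioned in footnote 13.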

The long-short portfolio method means that we score each asset in the universe, e.g., based on the past values and predictions of its returns or other variables, and rank the cross-section. We long the top (bottom) and short the bottom (top) quantiles of the asset span to formulate a trend-following (contrarian) strategy. The classification of the different groups of assets can be done via machine learning classification methods based on the realized and predicted values, i.e., the computed conditional expectations. Please note that the financial market can be a perfect blend of momentum and mean-reversion effects. For example, the stocks that perform well might continue to perform well; however, the stocks with the worst past performance might tend to perform better in the next period. This leads us to ask whether, in the long run, the mean-reversion effect is significant while, in the short run, the momentum effect dominates, or the opposite. Careful empirical investigations should be carried out to answer this question.

As a third attempt, we try to utilize the strong law of large numbers (S-LLN hereafter) to eliminate the randomness represented by U t at time t − h , if M is large enough. Let w t − h j ≡ 1 / M and we have

$$\frac{\mathbf{1}_M \cdot \hat{R}_t}{M} \equiv \frac{\mathbf{1}_M \cdot \theta_{t-h}}{M} + \frac{\mathbf{1}_M \cdot U_t}{M}. \qquad (21)$$

Here $\mathbf{1}_M = (1, 1, \cdots, 1)$ is an M-vector. But note that, as U is jointly independent (or uncorrelated), when M (and therefore N) is large, we have $\frac{\mathbf{1}_M \cdot U_t}{M} \to 0$ in probability or almost surely, according to the (strong) LLN. Therefore, we have, approximately,

$$\frac{\mathbf{1}_M \cdot \hat{R}_t}{M} \cong \frac{\mathbf{1}_M \cdot \theta_{t-h}}{M}. \qquad (22)$$

Equation (22) is extremely powerful. It equates the realization of a future random variable at time t to a quantity that is known at the current time t − h . If $\frac{\mathbf{1}_M \cdot \theta_{t-h}}{M} > \lambda_+$, we long the asset $\frac{\mathbf{1}_M \cdot \hat{R}_t}{M}$, and when $\frac{\mathbf{1}_M \cdot \theta_{t-h}}{M} < \lambda_-$, we short it. Here $\lambda_+ \geq 0$ and $\lambda_- \leq 0$ are two threshold values that trigger the algorithm, which can be time-dependent or even stochastic.
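The diversification effect behind Equation (22) is easy to see numerically; with i.i.d. standard-normal shocks, the averaged noise term shrinks roughly like 1/√M:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1_M . U_t / M -> 0 as M grows (LLN), so the equally weighted rotated
# portfolio approaches its predictable part 1_M . theta_{t-h} / M.
gaps = {}
for M in (10, 1_000, 100_000):
    theta = rng.normal(0.01, 0.02, M)       # theta_{t-h}, known at time t - h
    U = rng.standard_normal(M)              # shocks, unknown at time t - h
    port = (theta + U).mean()               # 1_M . R_hat_t / M, Eq. (21)
    gaps[M] = abs(port - theta.mean())      # deviation from the predictable part

assert gaps[100_000] < 0.02                 # noise is diversified away, Eq. (22)
```

The sizes and parameter values here are illustrative; in a real universe M is bounded by the number of tradable assets, which is why the analysis is asymptotic.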

^{14}T is a g-vector.

For countries or regions where short-selling is not permitted or is costly, we can use futures contracts to continue our analysis. Consider the asset space R t and g futures contracts F t , T ∈ ℝ g , which start at time t with maturity time T^{14}. Consider the following M-factor regression

F t , T = a + b R t + γ ⋅ U t , T (23)

where U t , T := ( U t , T 1 , ⋯ , U t , T g ) is an i.i.d. sequence and γ describes the covariance structure. Because we have

R t : = μ t − h + σ t − h U t (24)

then

F t , T = a + b μ t − h + [ b σ t − h , γ ] [ U t , U t , T ] (25)

= θ t − h + υ t − h U ^ t , T . (26)

Here

U ^ t , T = G S [ U t , U t , T ]

where GS denotes Gram-Schmidt orthogonalization under the canonical L 2 ( F t ) -norm. Then, perform similar transformation

[ υ t − h ⊺ υ t − h ] − 1 υ t − h ⊺ ⋅ F t , T = [ υ t − h ⊺ υ t − h ] − 1 υ t − h ⊺ ⋅ θ t − h + U ^ t , T . (27)

The same analysis follows from Equation (27). As we can see from the above algorithm, regression (23) has a large number of factors, which is difficult to implement in practice with limited computation power. Therefore, we do not test this algorithm in this paper.

In this section, we study the impact of estimation errors on the optimal trading strategy proposed in Section 3.4. First, assume that μ t − h is estimated with a zero-mean error, μ ^ t − h = μ t − h + ι t − h , where ι t − h is the error term with E t − h [ ι t − h ] = 0 . Then, we have

R ^ t : = P t − h σ ( μ t − h + ι t − h ) + ( U t − P t − h σ ι t − h ) (28)

where P t − h σ : = [ σ t − h ⊺ σ t − h ] − 1 σ t − h ⊺ . It can be seen that when M = N is sufficiently large, we will have

$$\frac{\mathbf{1}_M}{M} \cdot P_{t-h}^{\sigma} ( \mu_{t-h} + \iota_{t-h} ) \qquad (29)$$

$$= \underbrace{\frac{\mathbf{1}_M}{M} \cdot \theta_{t-h}}_{\text{true value}} + \underbrace{\frac{\mathbf{1}_M}{M} \cdot P_{t-h}^{\sigma} \iota_{t-h}}_{\text{error term}}. \qquad (30)$$

Because P t − h σ ι t − h has zero mean, according to [

The next step is to estimate the impact of the estimation error of the volatility matrix on the algorithm, given that the drift term $\mu$ is estimated correctly. Suppose we have

$\hat{R}_t = (P^{\sigma}_{t-h} + \epsilon_{t-h})\mu_{t-h} + (I + \varepsilon_{t-h})U_t$ (31)

$= \underbrace{(P^{\sigma}_{t-h}\mu_{t-h} + U_t)}_{\text{true estimator}} + \underbrace{(\epsilon_{t-h}\mu_{t-h} + \varepsilon_{t-h}U_t)}_{\text{estimation error}}$. (32)

Here $(\epsilon, \varepsilon)$ are estimation error terms, uncorrelated with $U$, with $E_{t-h}[\epsilon_{t-h}] = 0$ and $E_{t-h}[\varepsilon_{t-h}] = 0$. A similar argument shows that the impact of the error terms vanishes if $M$ is sufficiently large. The discussion of the joint impact of estimation errors from both the drift $\mu$ and the volatility $\sigma$ is analogous, with only a more complicated formula.

Following Equations (6) and (7), we can write

$\lambda_{t-h} \cdot R_t = \lambda_{t-h} \cdot \alpha_{t-h} + \lambda_{t-h} \cdot \beta_{t-h} \cdot F_t + \lambda_{t-h} \cdot \sigma_{t-h} \cdot \varepsilon_t$ (33)

$= \lambda_{t-h} \cdot \alpha_{t-h} + \lambda_{t-h} \cdot \Phi_{t-h} \cdot \Upsilon_t$. (34)

Let us remind the readers that $R_t \in \mathbb{R}^M$, $\lambda_{t-h} \in \mathbb{R}^{J \times M}$, $\sigma_{t-h} \in \mathbb{R}^{M \times M}$ and $\varepsilon_t \in \mathbb{R}^M$, where $J$ is the number of rotated assets chosen to formulate the optimal portfolio. We denote $\Upsilon := [F, \varepsilon] \in \mathbb{R}^N$ and $\Phi := [\beta, \sigma] \in \mathbb{R}^{M \times N}$. In this section, we assume that the market is incomplete with $M < N$ and $J \le N - M$. The null space of the matrix $\Phi_{t-h}$ is denoted by $Ker(\Phi_{t-h})$ and its dimension is $N - M$. Further denote by $\{v^n_{t-h}\}_{n=1}^J$ a set of linearly independent vectors in $Ker(\Phi_{t-h})$. Taking $\{v^n_{t-h}\}_{n=1}^J$ as portfolio weights, we have

$\hat{R}^n_t = v^n_{t-h} \cdot R_t = v^n_{t-h} \cdot \alpha_{t-h}$. (35)

Therefore, we have $J$ rotated assets $\hat{R}_t := \{\hat{R}^n_t\}_{n=1}^J$, which are conditionally deterministic. We can perform mean-variance optimization or simply assign equal weights to $\hat{R}_t$, based on the signs^{15} of $\{v^n_{t-h} \cdot \alpha_{t-h}\}_{n=1}^J$.
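A minimal numerical sketch of this null-space rotation, with toy dimensions chosen so that the relevant null space is non-trivial (here the weight vectors are taken orthogonal to the loading matrix; all names and sizes are illustrative, not the paper's):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
M, K = 10, 4                          # toy sizes: more assets than risk sources
alpha = rng.standard_normal(M)        # conditional drifts alpha_{t-h}
Phi = rng.standard_normal((M, K))     # loadings on the random sources
V = null_space(Phi.T)                 # columns v satisfy v^T Phi = 0
Upsilon = rng.standard_normal(K)      # one realization of the risk sources
R = alpha + Phi @ Upsilon             # realized asset returns
R_hat = V.T @ R                       # rotated assets, as in Equation (35)
print(np.allclose(R_hat, V.T @ alpha))  # the randomness is eliminated
```

Since the rotated returns equal $v \cdot \alpha$ regardless of the realized shocks, going long the rotations with positive drift and short the rest locks in the conditionally deterministic part.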

From the decomposition Formula (2), we can derive the equations to construct the factor mimicking portfolios. Suppose we have

$R_t = \alpha + \beta F_t + \epsilon_t$ (36)

^{15}It is also interesting to find vectors $\{v^n_{t-h}\}_{n=1}^J \subset Ker(\Phi_{t-h})$ such that $\{|v^n_{t-h} \cdot \alpha_{t-h}|\}_{n=1}^J$ achieves its maximum.

where $R_t$ is $N \times 1$, $\alpha$ is $N \times 1$, $\beta$ is $N \times K$, $F_t$ is $K \times 1$ and $\epsilon_t$ is of $N \times 1$ dimension. A simple multiplication of both sides of Equation (36) by the Moore-Penrose inverse of $\gamma = (\beta, I)$ yields

$(F_t, \epsilon_t) \approx (\gamma^{\top}\gamma)^{-1}\gamma^{\top}(R_t - \alpha)$. (37)

We can construct indexes of F by constructing portfolios of R according to Equation (37).
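The construction in (37) can be sketched as follows. Note that $\gamma^{\top}\gamma$ is singular for $\gamma = (\beta, I)$, so the Moore-Penrose pseudoinverse is used; the recovered pair is then the minimum-norm decomposition consistent with the de-meaned returns (an assumption of this sketch, with all numerical values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 8, 2
alpha = rng.standard_normal(N)
beta = rng.standard_normal((N, K))
F = rng.standard_normal(K)                 # true factor realization
eps = 0.01 * rng.standard_normal(N)        # idiosyncratic term
R = alpha + beta @ F + eps                 # Equation (36)

gamma = np.hstack([beta, np.eye(N)])       # gamma = (beta, I), N x (K + N)
x = np.linalg.pinv(gamma) @ (R - alpha)    # Equation (37), Moore-Penrose
F_mimic, eps_mimic = x[:K], x[K:]
# the mimicking decomposition reproduces the de-meaned returns exactly
print(np.allclose(gamma @ x, R - alpha))
```

The rows of `np.linalg.pinv(gamma)` corresponding to the first $K$ coordinates are the weights of the factor mimicking portfolios: applying them to $R - \alpha$ produces an index for each factor in $F$.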

For the direct simulation approach, we assume that $(\mu_{t-h}, \sigma_{t-h})$ is estimated correctly and Equation (8) is already obtained. It is justifiable to use independent draws for the daily realizations of $\theta_{t-h}$ across time, as our methods do not try to forecast future values based on serial correlations; this is the main difference between the proposed trading models and almost all other methods in the existing literature. Simulating the market price of risk vector $\theta_{t-h}$ directly also avoids estimating the model parameters from historical data, which eliminates some estimation errors. Transaction costs are simplified to 5.00% per year and subtracted from the realized strategy return each month. The real backtesting study is postponed to Section 5. Note that, for risk management purposes, in real-world backtesting in the US equity market, we can mix our portfolio with some percentage of the VIX index or add risk management on top of the current methodologies.

Next, we work in a more realistic setting in order to illustrate how calibration impacts the model performance. The data generating process (DGP) of the asset returns $R$ is assumed to be a Gaussian process $R^m_t \sim N(\mu^m_t, \sigma^m_t)$, where $\mu^m_t = \hat{\mu}^m + \epsilon^m_t$ is a combination of a trend term $\hat{\mu}^m$ plus a noise $\epsilon^m_t$, which follows a joint normal distribution $N(0, \Sigma_\epsilon)$ at each time $t$. $\{\sigma^m_t\}_{m=1}^M$ is again sampled from a joint Gaussian distribution $N(\sigma, \Sigma_\sigma)$ at each time $t$. The covariance structure of the DGP is modeled through $(\Sigma_\epsilon, \Sigma_\sigma)$.

We first determine the number of assets $M$ and the time scope $T$, which are set to 200 and 240 months, respectively. Then, we simulate one trajectory of the $M$ assets $T$ months into the future and perform the model calibration for $(\mu_t, \sigma_t)$ in Equation (2), followed by the optimization analysis. For realistic values of $(\hat{\mu}, \sigma, \Sigma_\epsilon, \Sigma_\sigma)$, we record the Sharpe and Sortino ratios of the strategies under the equal weight, scaled weight and risk parity assumptions, which are documented in Section 4.1.1.
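The DGP above can be sketched as follows; the numerical values for $(\hat{\mu}, \sigma, \Sigma_\epsilon, \Sigma_\sigma)$ are assumed for illustration (diagonal toy covariance structures), not the calibrated values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
M, T = 200, 240                            # assets and months, as in the text
mu_bar = rng.normal(0.008, 0.002, M)       # per-asset trend terms (assumed)
sigma_bar = np.abs(rng.normal(0.05, 0.01, M))
Sigma_eps = 1e-5 * np.eye(M)               # toy diagonal covariance structures
Sigma_sig = 1e-6 * np.eye(M)

R = np.empty((T, M))
for t in range(T):
    # draw the time-varying drift and volatility, then the returns
    mu_t = mu_bar + rng.multivariate_normal(np.zeros(M), Sigma_eps)
    sigma_t = np.abs(rng.multivariate_normal(sigma_bar, Sigma_sig))
    R[t] = rng.normal(mu_t, sigma_t)       # R_t^m ~ N(mu_t^m, sigma_t^m)
print(R.shape)
```

Each row of `R` is one month's cross-section; every point in the time series is drawn from a different distribution, which is exactly the time-varying property the calibration step has to contend with.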

The simulation study in this section compares the simple NAV curve produced by a naive trend following model with that of our equal-weight model. The trend following model proceeds as follows. Compute the $T$-period moving average of returns for each stock, which gives a vector of moving averages $\bar{R}_t$ at time $t$. Long the cross section with equal weights if $\frac{1}{M}\mathbf{1}_M \cdot \bar{R}_t > 0$ and short it otherwise. The DGP of the simulation study in this section is an ARMA(1,1)-GARCH model with realistic coefficients, simulated using the ugarchpath function of the rugarch package in the R programming language.
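The naive trend-following benchmark can be sketched as below. The returns here are a Gaussian placeholder; the paper's study instead simulates the ARMA(1,1)-GARCH DGP via rugarch in R, which this Python sketch does not reproduce:

```python
import numpy as np

def trend_following_nav(R, T=12):
    """Naive benchmark: long the equal-weight cross-section when the mean of
    the T-period moving-average returns is positive, short it otherwise."""
    n_periods = R.shape[0]
    nav = [1.0]
    for t in range(T - 1, n_periods - 1):
        R_bar = R[t - T + 1 : t + 1].mean(axis=0)  # per-stock moving averages
        side = 1.0 if R_bar.mean() > 0 else -1.0   # sign of (1/M) 1 . R_bar
        nav.append(nav[-1] * (1.0 + side * R[t + 1].mean()))
    return np.array(nav)

# placeholder Gaussian monthly returns for 100 stocks over 240 months
rng = np.random.default_rng(5)
R = rng.normal(0.005, 0.04, (240, 100))
nav = trend_following_nav(R)
print(nav[-1])
```

The resulting NAV series is the baseline against which the equal-weight model's NAV curve is compared.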

In this section, we test a long only strategy based on the computed conditional expectations via a simulation study. The study proceeds as follows. First, simulate asset-specific factor processes $H \in \mathbb{R}^{M \times T}$ and $B \in \mathbb{R}^{M \times T}$ as independent draws from normal random variables, where $M$ is the number of assets in the cross-section and $T$ is the number of periods under consideration. Then transform the processes $H$ and $B$ by the following logic: replace the $i$-th column by the average of the $(i-1)$-th and $i$-th columns, for each of $H$ and $B$. This step adds some serial correlation across time to $(H, B)$. Suppose that the asset return process follows $R = 5H^2 + H - H \times B$ (element-wise). We compute $E_t[R_{t+h}] = \phi(t, H_t, B_t)$ via machine learning regression. Then, we sort the computed conditional expected returns, long the top 100 assets and record the performance. We set $M = 1000$ and $T = 500$ days in this simulation study. Another two possible strategies, which serve as extensions of the one introduced in Section 3.4, are as follows. First, compute the mean value of the conditional expected returns; if it is positive, long the asset universe, otherwise short it. The asset span is rotated as proposed in Section 2.3.1. Second, find the maximum expected rotated asset return at each time $t$; long this asset when the expected value is positive and hold to the next period, otherwise go short.
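The simulation and regression steps can be sketched as below, scaled down from $M = 1000$, $T = 500$ for speed, with a gradient-boosting regressor standing in for the (unspecified) machine learning method; the column-averaging step is vectorized, which is one reading of the transformation described above:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
M, T = 200, 100                           # scaled down for speed
H = rng.standard_normal((M, T))
B = rng.standard_normal((M, T))
H[:, 1:] = 0.5 * (H[:, :-1] + H[:, 1:])   # add serial correlation across time
B[:, 1:] = 0.5 * (B[:, :-1] + B[:, 1:])
R = 5 * H**2 + H - H * B                  # element-wise return process

# learn E_t[R_{t+1}] = phi(H_t, B_t): regress next-period returns on
# current factor values, pooled across assets and time
X = np.column_stack([H[:, :-1].ravel(), B[:, :-1].ravel()])
y = R[:, 1:].ravel()
model = GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X)
print(np.mean((pred - y) ** 2) < np.var(y))  # beats the unconditional mean
```

Sorting `pred` within each date and going long the top-ranked assets implements the long-only strategy described above.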

In this simulation study, we test the methods outlined in Sections 2.4, 3.4 and 4.1.1. Specifically, we generate 36 months of Sharpe ratios $\theta$ of the $M$ assets, perform the optimization at each month and record the net asset value curve (NAV curve). After obtaining the values of $\theta$, we generate $U$ and therefore $\hat{R}$. We generate $w_{t-h} \cdot U_t$ randomly at each month and add it to the realized returns to account for the remaining randomness of the market. We assume that $\theta$ is sampled from a Gaussian distribution with annualized mean 0.95 and standard deviation 1.40. These values are estimated from S&P500 historical data from 1990/01/01 to 2018/02/27. The data frequency is monthly. After obtaining the time series of $\hat{R}$, we estimate $\bar{\theta}_{t+T-1} := \frac{1}{T}\sum_{j=0}^{T-1}\hat{R}_{t+j}$.


Strategy Information | Equal Weights | Scaled Unconstrained Weights | Unit Weights
---|---|---|---
M = 50 | 3.05 | 4.20 | 2.28
M = 150 | 3.82 | 5.14 | 3.96
M = 200 | 5.08 | 6.12 | 4.56
M = 250 | 5.15 | 7.73 | 4.72
M = 300 | 6.41 | 8.38 | 5.94

Strategy Information | Equal Weights | Scaled Unconstrained Weights | Unit Weights
---|---|---|---
M = 50 | 2.45 | 3.49 | 1.36
M = 150 | 3.20 | 6.86 | 3.78
M = 200 | 4.32 | 7.32 | 5.01
M = 250 | 5.05 | 7.98 | 5.79
M = 300 | 6.09 | 8.49 | 6.84

Simulation Study 3 corresponds to the ideas presented in Sections 2.4, 3.4 and 4.1.2. The methodology is as follows. First, simulate the market rates of return $R$ using the DGP specified in Section 4.1.2. Second, with the realized time series $\{R_t\}_{t=1}^K$, use a moving window of $T$ periods to compute

$\hat{\mu}_{t+T-1} := \frac{1}{T}\sum_{j=0}^{T-1} R_{t+j}$ (38)

$\hat{\sigma}_{t+T-1}^{\top}\hat{\sigma}_{t+T-1} := COV(R_t, R_{t+1}, \cdots, R_{t+T-1})$. (39)

Third, compute $\hat{\theta}_{t+T-1} := [\hat{\sigma}_{t+T-1}^{\top}\hat{\sigma}_{t+T-1}]^{-1}\hat{\sigma}_{t+T-1}^{\top}\hat{\mu}_{t+T-1}$. Fourth, $U$ can be recovered by applying $\hat{U}_{t+T-1} := [\hat{\sigma}_{t+T-1}^{\top}\hat{\sigma}_{t+T-1}]^{-1}\hat{\sigma}_{t+T-1}^{\top} R_{t+T-1} - \hat{\theta}_{t+T-1}$. We report the Sharpe ratios in

Strategy Information | Equal Weights | Scaled Unconstrained Weights | Equal Risk Contribution
---|---|---|---
DGP joint Calibration | 4.29 | 6.02 | 5.19
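Equations (38)-(39) and the computation of $\hat{\theta}$ can be sketched as follows. The true $\sigma$ is taken diagonal in this sketch, so that the Cholesky square root of the sample covariance identifies it up to estimation noise; all numerical values are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
M, T = 4, 5000                      # long window so the estimates are tight
mu = np.array([0.02, 0.01, -0.01, 0.03])
sig = np.array([0.05, 0.04, 0.06, 0.05])     # diagonal sigma for clarity
R = mu + sig * rng.standard_normal((T, M))   # R_t = mu + sigma U_t

mu_hat = R.mean(axis=0)                      # Equation (38)
cov = np.cov(R, rowvar=False)                # sigma^T sigma, Equation (39)
sigma_hat = np.linalg.cholesky(cov).T        # one square root of cov
# theta_hat = [sigma^T sigma]^{-1} sigma^T mu_hat
theta_hat = np.linalg.solve(cov, sigma_hat.T @ mu_hat)
print(np.round(theta_hat, 2))                # approximately mu / sig
```

In this diagonal example the estimate is close to the true per-asset market price of risk $\mu^m / \sigma^m$; in the general case the covariance only pins $\hat{\sigma}$ down up to an orthogonal rotation, which is one source of the calibration sensitivity discussed below.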

eigenvectors of $\Sigma_t$ and $\lambda_t$ is a diagonal matrix collecting the square roots of the eigenvalues of $\Sigma_t$.

This experiment corresponds to Section 4.1.3. The NAV curves are displayed in

The simulation study in this section corresponds to the method documented in Section 4.1.4. The NAV plot for the long-only strategy is on the left, and the NAV for the law-of-large-numbers-based strategy is on the right (

We use monthly US stock return data downloaded from the CRSP database provided by WRDS. The data span from January of 1999 to December of 2018. After excluding missing return data, there are 1,521,909 security-month observations. The data set comprises various publicly traded securities, of which common stock is the major type, accounting for 81.00% of the sample. Returns are winsorized at the 1st and 99th percentiles within each share code-month group to mitigate the effect of outliers, or simply truncated at the $\pm\Upsilon\%$-level for some chosen $\Upsilon$.

For US equities, we apply the methodology introduced in Section 2.3.2 to estimate the conditional expected asset returns, with shrinkage for the variance-covariance matrix, using a $T$-day moving-average window. Two scenarios are considered. In the first scenario, we aim to test the claim that, as long as the conditional expected asset returns can be correctly estimated, the model performance is guaranteed by the S-LLN. Therefore, we assume an imperfect foresight of one period ahead: at each time $t$, we use the returns materialized at time $t+h$ to infer the conditional expectations of asset returns $E_t[R_{t+h}]$, where the estimation methodology and $R$ are described in Section 2.3.2. The factor process is also chosen to be $R$, i.e., we let the cross-sectional returns explain their own behavior. If the average of the cross-section of expected returns is positive, we go long the entire universe with equal weights; we short the equally-weighted universe if this quantity is negative. The second scenario infers the conditional expected asset returns in the same way and obtains the variance-covariance matrix through a shrinkage method, using the cov.shrink function in R. Under the rotated asset space $\hat{R}$, we compute the conditional expected returns and rank them, long the top $\theta$-percentage and short the bottom $\theta$-percentage. The performance is evaluated quarterly. We do not account for transaction costs in this backtesting study. A preliminary test shows that a 30 bps cost per unit of portfolio weight change does not affect the testing results, nor does restricting the asset space to domestic US common shares. Our tests are robust in this sense.
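The shrinkage step in the second scenario uses cov.shrink in R; a comparable sketch in Python substitutes scikit-learn's Ledoit-Wolf estimator, a different but related shrinkage method (this substitution, and the placeholder returns, are assumptions of the sketch):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(8)
T, M = 60, 100                      # fewer observations than assets
R = rng.normal(0.005, 0.04, (T, M))

sample_cov = np.cov(R, rowvar=False)
shrunk_cov = LedoitWolf().fit(R).covariance_

# the sample covariance is rank deficient when T < M; shrinkage fixes that
print(np.linalg.matrix_rank(sample_cov) < M,
      np.all(np.linalg.eigvalsh(shrunk_cov) > 0))
```

A well-conditioned, invertible covariance estimate is what makes the rotation $[\sigma^{\top}\sigma]^{-1}\sigma^{\top}$ numerically stable when the cross-section is larger than the estimation window.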

For the first scenario, we obtain 100% positive quarterly returns and a Sharpe ratio of 1.87; the Sortino ratio is therefore $\infty$. The result of the second scenario is summarized in

In addition to the US equity market, we test our algorithm with China A-shares. The monthly return data are downloaded from Wind terminal and span from

Year | Q1 | Q2 | Q3 | Q4 | Annual | SP500 | Excess Return
---|---|---|---|---|---|---|---
2000 | 22.55% | 4.85% | 7.58% | 8.00% | 42.98% | −9.32% | 52.30%
2001 | −3.72% | 11.52% | −5.82% | −6.86% | −4.88% | −12.07% | 7.19%
2002 | 22.13% | −1.81% | 8.41% | −6.79% | 21.94% | −24.35% | 46.29%
2003 | −0.43% | 2.12% | 26.63% | 17.14% | 45.46% | 24.22% | 21.24%
2004 | 6.12% | 0.54% | 0.79% | 3.39% | 10.83% | 8.88% | 1.95%
2005 | 5.00% | 4.45% | 1.94% | 3.02% | 14.41% | 3.24% | 11.17%
2006 | 8.73% | 0.99% | −2.12% | 2.99% | 10.60% | 12.98% | −2.38%
2007 | 6.77% | 4.45% | −1.43% | 5.32% | 15.01% | 3.90% | 11.11%
2008 | 2.99% | 2.99% | −2.97% | 4.39% | 7.39% | −45.45% | 52.84%
2009 | 10.32% | −9.44% | 14.46% | 9.60% | 24.93% | 23.58% | 1.35%
2010 | 8.38% | −7.04% | −2.57% | 4.36% | 3.13% | 13.80% | −10.67%
2011 | 11.13% | 1.87% | −12.02% | −3.72% | −2.76% | 1.15% | −3.91%
2012 | 23.32% | −5.11% | −11.99% | 1.32% | 7.54% | 13.16% | −6.62%
2013 | 13.39% | 3.06% | 3.09% | 8.55% | 28.09% | 26.54% | 1.55%
2014 | 2.51% | 3.35% | −3.64% | −5.27% | −3.04% | 11.14% | −14.18%
2015 | 4.66% | 1.56% | 3.60% | −1.50% | 8.31% | 0.11% | 8.20%
2016 | 1.97% | −2.96% | 6.64% | 6.90% | 12.55% | 9.62% | 2.39%
2017 | 4.70% | 2.97% | −2.37% | 4.40% | 9.70% | 17.96% | −8.26%
2018 | 4.27% | 1.77% | −0.08% | 5.77% | 11.70% | −5.32% | 17.02%

January of 2002 to August of 2018. We consider the Shanghai and Shenzhen A-share markets. HS300 index data are also downloaded from Wind. Returns are winsorized as follows: positive returns are capped at 99.00% and negative returns are floored at −99.00%.

We test the algorithm introduced in Section 3.4.1 with equally-weighted portfolios in the original asset space. The methodology is as follows. We compute the $T$-quarter moving average of past returns for each stock in the A-share universe, then apply equal weights to the vector of moving averages. If the resulting number is positive, we long the equal-weight portfolio; if it is negative, we short it.

The net asset value curve (NAV curve) is shown in

While theoretically sound, our general investment framework relies heavily on the quality of the parameter estimates for $(\mu, \sigma, \theta)$ and $[\sigma^{\top}\sigma]^{-1}$, as illustrated by both the simulation and backtesting studies. Better estimates result in better strategy performance. However, given the observation that the distributions of asset returns are time varying, it becomes very hard to estimate accurate values of the model parameters based merely on time-series data, since each point in the time series is sampled from a different distribution. Moreover, the time varying property may differ across sampling frequencies. Our general framework thus poses two challenges to the field of parameter estimation. First, find the right time frequency at which the model parameters can be estimated accurately. Second, find the correct estimation method to minimize the numerical errors.

In this paper, we outline a general framework of optimal investment and discuss several concrete investment strategies under this framework. The basic idea of the proposed investment methodologies is proper diversification and the elimination of the future market randomness. Simulation studies and backtesting show good performance of the proposed methods under this framework. Note that the same ideas apply to all categories of investment strategies, i.e., trend following, mean-reversion, long-short, etc. For example, the long-short strategy scores the assets and longs or shorts certain classes of assets whose scores fall in predetermined sets. We can apply this type of analysis to the rotated asset space. The integration of the proposed investment framework with other classes of strategies is also interesting; we leave it to future research.

Moreover, the real market environment is, of course, much more complicated than the data generating processes we consider in the simulation study. The research for more advanced model parameter estimation techniques, when there is model uncertainty, time changing parameters and measurement errors, is highly important and necessary. Examples can be found in [

Last, but not least, the simulation studies and backtesting in this paper focus on equity data. However, the methodologies can be applied to any asset class, for which the rate of return can be defined and computed.

The author declares no conflicts of interest regarding the publication of this paper.

Zhang, L.L. (2019) A General Framework of Optimal Investment. Journal of Mathematical Finance, 9, 535-560. https://doi.org/10.4236/jmf.2019.93028