Statistical Arbitrage in the U.S. Equities Market

Marco Avellaneda*† and Jeong-Hyun Lee*

First draft: July 11, 2008. This version: June 15, 2009.

Abstract

We study model-driven statistical arbitrage in U.S. equities. The trading signals are generated in two ways: using Principal Component Analysis and using sector ETFs. In both cases, we consider the residuals, or idiosyncratic components of stock returns, and model them as mean-reverting processes. This leads naturally to "contrarian" trading signals. The main contribution of the paper is the construction, back-testing and comparison of market-neutral PCA- and ETF-based strategies applied to the broad universe of U.S. stocks. Back-testing shows that, after accounting for transaction costs, PCA-based strategies have an average annual Sharpe ratio of 1.44 over the period 1997 to 2007, with much stronger performances prior to 2003. During 2003-2007, the average Sharpe ratio of PCA-based strategies was only 0.9. Strategies based on ETFs achieved a Sharpe ratio of 1.1 from 1997 to 2007, experiencing a similar degradation after 2002.

We also introduce a method to account for daily trading volume information in the signals (which is akin to using "trading time" as opposed to calendar time), and observe significant improvement in performance in the case of ETF-based signals. ETF strategies which use volume information achieve a Sharpe ratio of 1.51 from 2003 to 2007.

The paper also relates the performance of mean-reversion statistical arbitrage strategies with the stock market cycle. In particular, we study in detail the performance of the strategies during the liquidity crisis of the summer of 2007. We obtain results which are consistent with Khandani and Lo (2007) and validate their "unwinding" theory for the quant fund drawdown of August 2007.

* Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, N.Y. 10012, USA.
† Finance Concepts, 49-51 Avenue Victor-Hugo, 75116 Paris, France.
1 Introduction

The term statistical arbitrage encompasses a variety of strategies and investment programs. Their common features are: (i) trading signals are systematic, or rules-based, as opposed to driven by fundamentals; (ii) the trading book is market-neutral, in the sense that it has zero beta with the market; and (iii) the mechanism for generating excess returns is statistical. The idea is to make many bets with positive expected returns, taking advantage of diversification across stocks, to produce a low-volatility investment strategy which is uncorrelated with the market. Holding periods range from a few seconds to days, weeks or even longer.

Pairs-trading is widely assumed to be the "ancestor" of statistical arbitrage. If stocks P and Q are in the same industry or have similar characteristics (e.g. Exxon Mobil and ConocoPhillips), one expects the returns of the two stocks to track each other after controlling for beta. Accordingly, if P_t and Q_t denote the corresponding price time series, then we can model the system as

ln(P_t / P_{t_0}) = α (t − t_0) + β ln(Q_t / Q_{t_0}) + X_t,   (1)

or, in its differential version,

dP_t / P_t = α dt + β dQ_t / Q_t + dX_t,   (2)

where X_t is a stationary, or mean-reverting, process. This process will be referred to as the cointegration residual, or residual, for short, in the rest of the paper. In many cases of interest, the drift α is small compared to the fluctuations of X_t and can therefore be neglected. This means that, after controlling for beta, the long-short portfolio oscillates near some statistical equilibrium. The model suggests a contrarian investment strategy in which we go long 1 dollar of stock P and short β dollars of stock Q if X_t is small and, conversely, go short P and long Q if X_t is large. The portfolio is expected to produce a positive return as valuations converge (see Pole (2007) for a comprehensive review on statistical arbitrage and co-integration).
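The model (2) can be estimated by ordinary least squares on daily returns: regress the returns of P on those of Q, and accumulate the regression residuals to obtain X_t. The sketch below is illustrative only, not the authors' code; the function name and the synthetic price paths are assumptions.

```python
import numpy as np

def cointegration_residual(p, q):
    """Estimate drift alpha, beta and the cumulative residual X_t of
    eq. (2) by OLS on daily returns (illustrative helper, not from
    the paper)."""
    rp = np.diff(p) / p[:-1]               # daily returns of stock P
    rq = np.diff(q) / q[:-1]               # daily returns of stock Q
    A = np.column_stack([np.ones_like(rq), rq])
    (alpha, beta), *_ = np.linalg.lstsq(A, rp, rcond=None)
    x = np.cumsum(rp - alpha - beta * rq)  # cointegration residual X_t
    return alpha, beta, x

# synthetic pair: P's returns track Q's with beta = 1.5 plus small noise
rng = np.random.default_rng(0)
q = 100.0 * np.cumprod(1 + 0.01 * rng.standard_normal(250))
rq = np.diff(q) / q[:-1]
rp = 1.5 * rq + 0.005 * rng.standard_normal(rq.size)
p = 100.0 * np.concatenate([[1.0], np.cumprod(1 + rp)])
alpha, beta, x = cointegration_residual(p, q)
```

With the drift neglected, as the paper suggests, the contrarian rule reduces to trading the sign and size of x relative to its statistical equilibrium.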
The mean-reversion paradigm is typically associated with market over-reaction: assets are temporarily under- or over-priced with respect to one or several reference securities (Lo and MacKinlay (1990)). Another possibility is to consider scenarios in which one of the stocks is expected to out-perform the other over a significant period of time. In this case the co-integration residual should not be stationary. This paper is principally concerned with mean-reversion, so we do not consider such scenarios.

"Generalized pairs-trading", or trading groups of stocks against other groups of stocks, is a natural extension of pairs-trading. To explain the idea, we consider the sector of biotechnology stocks. We perform a regression/cointegration analysis, following (1) or (2), for each stock in the sector with respect to a benchmark sector index, e.g. the Biotechnology HOLDR (BBH). The role of the stock Q would be played by BBH, and P would be an arbitrary stock in the biotechnology sector. The analysis of the residuals, based on the magnitude of X_t, typically suggests that some stocks are cheap with respect to the sector, others expensive and others fairly priced. A generalized pairs-trading book, or statistical arbitrage book, consists of a collection of "pair trades" of stocks relative to the ETF (or, more generally, factors that explain the systematic stock returns). In some cases, an individual stock may be held long against a short position in the ETF, and in others we would short the stock and go long the ETF. Due to netting of long and short positions, we expect that the net position in ETFs will represent a small fraction of the total holdings. The trading book will therefore look like a long/short portfolio of single stocks. This paper is concerned with the design and performance-evaluation of such strategies. The analysis of residuals is our starting point.
Signals will be based on relative-value pricing within a sector or a group of peers, by decomposing stock returns into systematic and idiosyncratic components and statistically modeling the idiosyncratic part. The general decomposition may look like

dP_t / P_t = α dt + Σ_{j=1}^{n} β_j F_t^{(j)} + dX_t,   (3)

where the terms F_t^{(j)}, j = 1, ..., n, represent returns of risk-factors associated with the market under consideration. This leads to the interesting question of how to derive equation (3) in practice. The question also arises in classical portfolio theory, but in a slightly different way: there we ask what constitutes a "good" set of risk-factors from a risk-management point of view. Here, the emphasis is instead on the residual that remains after the decomposition is done. The main contribution of our paper will be to study how different sets of risk-factors lead to different residuals and hence to different profit-and-loss (PNL) profiles for statistical arbitrage strategies.

Previous studies on mean-reversion and contrarian strategies include Lehmann (1990), Lo and MacKinlay (1990) and Poterba and Summers (1988). In a recent paper, Khandani and Lo (2007) discuss the performance of the Lo-MacKinlay contrarian strategies in the context of the liquidity crisis of 2007 (see also references therein). The latter strategies have several common features with the ones developed in this paper. In Khandani and Lo (2007), market-neutrality is enforced by ranking stock returns by quantiles and trading "winners-versus-losers" in a dollar-neutral fashion. Here, we use risk-factors to extract trading signals, i.e. to detect over- and under-performers. Our trading frequency is variable, whereas Khandani-Lo trade at fixed time-intervals. On the parametric side, Poterba and Summers (1988) study mean-reversion using auto-regressive models in the context of international equity markets. The models of this paper differ from the latter mostly in that we immunize stocks against market factors, i.e.
we consider mean-reversion of residuals (relative prices) and not of the prices themselves.

The paper is organized as follows: in Section 2, we study market-neutrality using two different approaches. The first method consists in extracting risk-factors using Principal Component Analysis (Jolliffe (2002)). The second method uses industry-sector ETFs as proxies for risk factors. Following other authors, we show that PCA of the correlation matrix for the broad equity market in the U.S. gives rise to risk-factors that have economic significance because they can be interpreted as long-short portfolios of industry sectors. However, the stocks that contribute the most to a particular factor are not necessarily the largest capitalization stocks in a given sector. This suggests that PCA-based risk factors may not be as biased towards large-capitalization stocks as ETFs, as the latter are generally capitalization-weighted. We also observe that the variance explained by a fixed number of PCA eigenvectors varies significantly across time, which leads us to conjecture that the number of explanatory factors needed to describe stock returns (to separate systematic returns from residuals) is variable, and that this variability is linked with the investment cycle, or the changes in the risk-premium for investing in the equity market.¹ This might explain some of the differences that we found in performance between the PCA and ETF methods.

In Sections 3 and 4, we construct the trading signals. This involves the statistical estimation of the residual process for each stock at the close of each trading day, using 60 days of historical data prior to that date. Estimation is always done looking back 60 days from the trade date, thus simulating decisions which might take place in real trading. The trading signals correspond to significant deviations of the residual process from its estimated mean.
Using daily end-of-day (EOD) data, we perform a calculation of daily trading signals, going back to 1996 in the case of PCA strategies and to 2002 in the case of ETF strategies, across the universe of stocks with market-capitalization of more than 1 billion USD at the trade date. The condition that the company must have a given capitalization at the trade date (as opposed to at the time when the paper was written) avoids survivorship bias.

Estimation and trading rules are kept simple to avoid data-mining. For each stock, the estimation of the residual process is done using a 60-day trailing window, because this corresponds roughly to one earnings cycle. The length of the window is not changed from one stock to another. We select as entry point for trading any residual that deviates by 1.25 standard deviations from equilibrium, and we exit trades if the residual is less than 0.5 standard deviations from equilibrium, uniformly across all stocks.

In Section 5 we back-test several strategies which use different sets of factors to generate residuals, namely: (i) synthetic ETFs based on capitalization-weighted indices², (ii) actual ETFs, (iii) a fixed number of factors generated by PCA, (iv) a variable number of factors generated by PCA. Due to the mechanism described above used to generate trading signals, the simulation is always out-of-sample, in the sense that the estimation of the residual process at time t uses information available only for the 60 days prior to this time. In all trades, we assume a slippage/transaction cost of 0.05%, or 5 basis points, per trade (a round-trip transaction cost of 10 basis points).

¹ See Scherer and Avellaneda (2002) for similar observations for Latin American debt securities in the 1990s.
² Synthetic ETFs are capitalization-weighted sector indexes formed with the stocks of each industry that are present in the trading universe at the time the signal is calculated. We used synthetic ETFs because most sector ETFs were launched only after 2002.
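The entry/exit rule above amounts to a small state machine on the residual's z-score: open when the deviation exceeds 1.25 standard deviations, close when it falls back inside a 0.5-standard-deviation band. The sketch below uses the paper's thresholds; the function name and the sample z-score path are illustrative assumptions.

```python
import numpy as np

def positions_from_zscore(z, entry=1.25, exit_=0.5):
    """Contrarian state machine: open when the residual deviates by more
    than `entry` standard deviations from equilibrium, close when it
    re-enters the `exit_` band (illustrative, not the authors' code)."""
    pos, out = 0, []
    for zi in z:
        if pos == 0 and zi > entry:
            pos = -1          # residual rich: short stock, long factor
        elif pos == 0 and zi < -entry:
            pos = +1          # residual cheap: long stock, short factor
        elif pos != 0 and abs(zi) < exit_:
            pos = 0           # residual reverted: close the trade
        out.append(pos)
    return np.array(out)

z_path = np.array([0.0, 0.6, 1.3, 1.0, 0.4, -0.2, -1.4, -0.6, -0.3, 0.1])
pos = positions_from_zscore(z_path)
```

The gap between the entry and exit thresholds acts as a hysteresis band, which prevents the strategy from churning positions when the residual hovers near a single cutoff.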
In Section 6, we consider a modification of the strategy in which signals are estimated in "trading time" as opposed to calendar time. In the statistical analysis, using trading time on EOD signals is effectively equivalent to multiplying daily returns by a factor which is inversely proportional to the trading volume for the past day. This modification accentuates (i.e. tends to favor) contrarian price signals taking place on low volume and mitigates (i.e. tends not to favor) contrarian price signals which take place on high volume. It is as if we "believe more" a print that occurs on high volume and are less ready to bet against it. Back-testing the statistical arbitrage strategies using trading-time signals leads to improvements in most strategies, suggesting that volume information is valuable in the context of mean-reversion strategies, even at the EOD sampling frequency and not just for intra-day trading.

In Section 7, we discuss the performance of statistical arbitrage in 2007, and particularly around the inception of the liquidity crisis of August 2007. We compare the performances of the mean-reversion strategies with the ones studied in the recent work of Khandani and Lo (2007). Conclusions are presented in Section 8.

2 A quantitative view of risk-factors and market-neutrality

Let us denote by {R_i}_{i=1}^{N} the returns of the different stocks in the trading universe over an arbitrary one-day period (from close to close). Let F represent the return of the "market portfolio" over the same period (e.g. the return on a capitalization-weighted index, such as the S&P 500). We can write, for each stock in the universe,

R_i = β_i F + R̃_i,   (4)

which is a simple regression model decomposing stock returns into a systematic component β_i F and an (uncorrelated) idiosyncratic component R̃_i. Alternatively, we consider multi-factor models of the form

R_i = Σ_{j=1}^{m} β_{ij} F_j + R̃_i.   (5)

Here there are m factors, which can be thought of as the returns of "benchmark" portfolios representing systematic factors. A trading portfolio is said to be market-neutral if the dollar amounts {Q_i}_{i=1}^{N} invested in each of the stocks are such that

β̄_j = Σ_{i=1}^{N} β_{ij} Q_i = 0,   j = 1, 2, ..., m.   (6)

The coefficients β̄_j correspond to the portfolio betas, or projections of the portfolio returns on the different factors. A market-neutral portfolio has vanishing portfolio betas; it is uncorrelated with the market portfolio or factors that drive the market returns. It follows that the portfolio returns satisfy

Σ_{i=1}^{N} Q_i R_i = Σ_{i=1}^{N} Q_i ( Σ_{j=1}^{m} β_{ij} F_j ) + Σ_{i=1}^{N} Q_i R̃_i
                    = Σ_{j=1}^{m} [ Σ_{i=1}^{N} β_{ij} Q_i ] F_j + Σ_{i=1}^{N} Q_i R̃_i
                    = Σ_{i=1}^{N} Q_i R̃_i.   (7)

Thus, a market-neutral portfolio is affected only by idiosyncratic returns. We shall see below that, in G8 economies, stock returns are explained by approximately m = 15 factors (or between 10 and 20 factors), and that the systematic component of stock returns explains approximately 50% of the variance (see Plerou et al. (2002) and Laloux et al. (2000)). The question is how to define "factors".

2.1 The PCA approach

A first approach for extracting factors from data is to use Principal Components Analysis (Jolliffe (2002)). This approach uses historical share-price data on a cross-section of N stocks going back M days in history. For simplicity of exposition, the cross-section is assumed to be identical to the investment universe, although this need not be the case in practice.³ Let us represent the stocks' return data, on any given date t_0, going back M + 1 days, as a matrix

R_{ik} = ( S_{i(t_0 − (k−1)Δt)} − S_{i(t_0 − kΔt)} ) / S_{i(t_0 − kΔt)},   k = 1, ..., M,  i = 1, ..., N,

where S_{it} is the price of stock i at time t adjusted for dividends and Δt = 1/252. Since some stocks are more volatile than others, it is convenient to work with standardized returns,

Y_{ik} = ( R_{ik} − R̄_i ) / σ̄_i,

where

R̄_i = (1/M) Σ_{k=1}^{M} R_{ik}

and

σ̄_i² = (1/(M−1)) Σ_{k=1}^{M} ( R_{ik} − R̄_i )².

The empirical correlation matrix of the data is defined by

ρ_{ij} = (1/(M−1)) Σ_{k=1}^{M} Y_{ik} Y_{jk},   (8)

which is symmetric and non-negative definite. Notice that, for any index i, we have

ρ_{ii} = (1/(M−1)) Σ_{k=1}^{M} (Y_{ik})² = (1/(M−1)) Σ_{k=1}^{M} (R_{ik} − R̄_i)² / σ̄_i² = 1.

The dimensions of ρ are typically 500 by 500, or 1000 by 1000, but the amount of data is small relative to the number of parameters that need to be estimated. In fact, if we consider daily returns, we are faced with the problem that very long estimation windows M ≫ N do not make sense because they take into account the distant past, which is economically irrelevant. On the other hand, if we just consider the behavior of the market over the past year, for example, then we are faced with the fact that there are considerably more entries in the correlation matrix than data points. In this paper, we always use an estimation window for the correlation matrix of one year (252 trading days) prior to the trading date.

The commonly used solution to extract meaningful information from the data is to model the correlation matrix.⁴ We consider the eigenvectors and eigenvalues of the empirical correlation matrix and rank the eigenvalues in decreasing order:

N ≥ λ_1 > λ_2 ≥ λ_3 ≥ ... ≥ λ_N ≥ 0.

We denote the corresponding eigenvectors by

v^{(j)} = ( v_1^{(j)}, ..., v_N^{(j)} ),   j = 1, ..., N.

A cursory analysis of the eigenvalues shows that the spectrum contains a few large eigenvalues which are detached from the rest of the spectrum (see Figure 1). We can also look at the density of states

D(x, y) = {# of eigenvalues between x and y} / N

(see Figure 2).

³ For instance, the analysis can be restricted to the members of the S&P 500 index in the US, the Eurostoxx 350 in Europe, etc.
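The construction above — returns matrix, standardization, empirical correlation matrix (8), and eigenvalue spectrum — can be sketched in a few lines. The synthetic one-factor data below is purely illustrative (an assumption, not the paper's universe); with a strong common factor, one "detached" eigenvalue dominates the bulk of the spectrum, qualitatively as in Figure 1.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 20, 252                                  # universe size, 1-year window
common = rng.standard_normal(M)                 # a single "market" factor
R = 0.7 * common + 0.5 * rng.standard_normal((N, M))  # N x M returns matrix

# standardized returns Y_ik = (R_ik - Rbar_i) / sigma_i
Rbar = R.mean(axis=1, keepdims=True)
sigma = R.std(axis=1, ddof=1, keepdims=True)
Y = (R - Rbar) / sigma

rho = Y @ Y.T / (M - 1)                         # empirical correlation, eq. (8)

lam, V = np.linalg.eigh(rho)                    # eigh returns ascending order
lam, V = lam[::-1], V[:, ::-1]                  # rank eigenvalues decreasingly
explained = lam / N                             # fraction of total variance
```

Since trace(ρ) = N, the retained eigenvalues can be compared directly against a target fraction of total variance, which is exactly the truncation criterion (b) discussed below.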
For intervals (x, y) near zero, the function D(x, y) corresponds to the "bulk spectrum" or "noise spectrum" of the correlation matrix. The eigenvalues at the top of the spectrum which are isolated from the bulk spectrum are obviously significant. The problem that is immediately evident by looking at Figures 1 and 2 is that there are fewer "detached" eigenvalues than industry sectors. Therefore, we expect the boundary between "significant" and "noise" eigenvalues to be somewhat blurred and to correspond to the edge of the "bulk spectrum". This leads to two possibilities: (a) we take into account a fixed number of eigenvalues to extract the factors (assuming a number close to the number of industry sectors), or (b) we take a variable number of eigenvectors, depending on the estimation date, in such a way that the sum of the retained eigenvalues exceeds a given percentage of the trace of the correlation matrix. The latter condition is equivalent to saying that the truncation explains a given percentage of the total variance of the system.

Let λ_1, ..., λ_m, m < N, be the significant eigenvalues in the above sense. For each index j, we consider the corresponding "eigenportfolio", which is such that the respective amounts invested in each of the stocks are defined as

Q_i^{(j)} = v_i^{(j)} / σ̄_i.

The eigenportfolio returns are therefore

F_{jk} = Σ_{i=1}^{N} ( v_i^{(j)} / σ̄_i ) R_{ik},   j = 1, 2, ..., m.   (9)

It is easy for the reader to check that the eigenportfolio returns are uncorrelated, in the sense that the empirical correlation of F_j and F_{j′} vanishes for j ≠ j′. The factors in the PCA approach are the eigenportfolio returns.

Figure 1: Top 50 eigenvalues of the correlation matrix of market returns computed on May 1, 2007, estimated using a 1-year window and a universe of 1417 stocks (see also Table 3). (Eigenvalues are measured as percentage of explained variance.)

Figure 2: The density of states for May 1, 2007, estimated using a 1-year window, corresponding to the same data used to generate Figure 1. Notice that there are some "detached eigenvalues" and a "bulk spectrum". The relevant eigenvalues include the detached eigenvalues as well as eigenvalues at the edge of the bulk spectrum.

Figure 3: Comparative evolution of the principal eigenportfolio and the capitalization-weighted portfolio from May 2006 to April 2007. Both portfolios exhibit similar behavior.

Each stock return in the investment universe can be decomposed into its projection on the m factors and a residual, as in equation (4). Thus, the PCA approach delivers a natural set of risk-factors that can be used to decompose our returns. It is not difficult to verify that this approach corresponds to modeling the correlation matrix of stock returns as a sum of a rank-m matrix corresponding to the significant spectrum and a diagonal matrix of full rank,

ρ_{ij} = Σ_{k=1}^{m} λ_k v_i^{(k)} v_j^{(k)} + ε_{ii}² δ_{ij},

where δ_{ij} is the Kronecker delta and ε_{ii}² is given by

ε_{ii}² = 1 − Σ_{k=1}^{m} λ_k ( v_i^{(k)} )²,

so that ρ_{ii} = 1. This means that we keep only the significant eigenvalues/eigenvectors of the correlation matrix and add a diagonal "noise" matrix for the purposes of conserving the total variance of the system.

2.2 Interpretation of the eigenvectors/eigenportfolios

As pointed out by several authors (see, for instance, Laloux et al. (2000)), the dominant eigenvector is associated with the "market portfolio", in the sense that all the coefficients v_i^{(1)}, i = 1, 2, ..., N, are positive. Thus, the eigenportfolio has positive weights Q_i^{(1)} = v_i^{(1)} / σ̄_i. We notice that these weights are inversely proportional to the stock's volatility. This weighting is consistent with capitalization-weighting, since larger capitalization companies tend to have smaller volatilities. The two portfolios are not identical but are good proxies for each other,⁵ as shown in Figure 3.

⁴ We refer the reader to Laloux et al. (2000) and Plerou et al. (2002), who studied the correlation matrix of the top 500 stocks in the US in this context.
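The claim below eq. (9) — that the eigenportfolio returns are empirically uncorrelated — is easy to verify numerically. The sketch below builds the weights Q_i^{(j)} = v_i^{(j)}/σ̄_i, forms the factor returns F, and checks that their empirical correlation matrix is the identity; the synthetic data and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 10, 252
R = 0.6 * rng.standard_normal(M) + rng.standard_normal((N, M))  # toy returns

s = R.std(axis=1, ddof=1)                   # per-stock volatilities sigma_i
Y = (R - R.mean(axis=1, keepdims=True)) / s[:, None]
rho = Y @ Y.T / (M - 1)                     # empirical correlation, eq. (8)
lam, V = np.linalg.eigh(rho)
lam, V = lam[::-1], V[:, ::-1]              # eigenvalues in decreasing order

m = 3                                       # retain m "significant" factors
Q = V[:, :m] / s[:, None]                   # weights Q_i^(j) = v_i^(j)/sigma_i
F = Q.T @ R                                 # eigenportfolio returns, eq. (9)
corr_F = np.corrcoef(F)                     # empirical correlation of factors
```

The off-diagonal entries of corr_F vanish to machine precision because the sample covariance of F_j and F_{j′} equals v^{(j)T} ρ v^{(j′)} = λ_j δ_{jj′} by construction.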
To interpret the other eigenvectors, we observe that (i) the remaining eigenvectors must have components that are negative, in order to be orthogonal to v^{(1)}; (ii) given that there is no natural order in the stock universe, the "shape analysis" that is used to interpret the PCA of interest-rate curves (Litterman and Scheinkman (1991)) or equity volatility surfaces (Cont and Da Fonseca (2002)) does not apply here. The analysis that we use here is inspired by Scherer and Avellaneda (2002), who analyzed the correlation of sovereign bond yields across different Latin American issuers (see also Plerou et al. (2002), who made similar observations). We rank the coefficients of the eigenvectors in decreasing order:

v_{n_1}^{(2)} ≥ v_{n_2}^{(2)} ≥ ... ≥ v_{n_N}^{(2)},

the sequence n_i representing a re-labeling of the companies. In this new ordering, we notice that the "neighbors" of a particular company tend to be in the same industry group. This property, which we call coherence, holds true for v^{(2)} and for other high-ranking eigenvectors. As we descend in the spectrum towards the noise eigenvectors, the property that nearby coefficients correspond to firms in the same industry is less true, and coherence will not hold for eigenvectors of the noise spectrum (almost by definition!). The eigenportfolios can therefore be interpreted as long-short portfolios at the level of industries or sectors.

Figure 4: First eigenvector sorted by coefficient size. The x-axis shows the ETF corresponding to the industry sector of each stock.

⁵ The positivity of the coefficients of the first eigenvector of the correlation matrix in the case when all assets have non-negative correlation follows from Krein's Theorem. In practice, the presence of commodity stocks and mining companies implies that there are always a few negatively correlated stock pairs. In particular, this explains why there are a few negative weights in the principal eigenportfolio in Figure 4.
2.3 The ETF approach: using the industries

Another method for extracting residuals consists in using the returns of sector ETFs as factors. Table 3 shows a sample of industry sectors and the number of stocks of companies with capitalization of more than 1 billion USD at the beginning of January 2007, classified by sector. It gives an idea of the dimensions of the trading universe and the distribution of stocks corresponding to each industry sector. We also include, for each industry, the ETF that can be used as a risk-factor for the stocks in the sector in the simplified model (11).

Unlike the case of eigenportfolios, which are uncorrelated by construction, ETF returns are correlated. This can lead to redundancies in the factor decomposition: strongly correlated ETFs sometimes give rise to large factor loadings with opposing signs for stocks that belong to, or are strongly correlated with, different ETFs. There are several approaches that can be used to remedy this: one is a robust version of multiple regression aiming at "sparse" representations. For example, the matching pursuit algorithm (Davis, Mallat & Avellaneda (1997)), which favors sparse representations, is preferable to a full multiple regression. Another class of regression methods, known as ridge regression, would achieve a similar goal (see, for instance, Jolliffe (2002)). In this paper we use a simple approach. We associate to each stock a single sector ETF (following the partition of the market shown in Table 3) and perform a regression of the stock returns on the corresponding ETF returns, i.e.

R_i = β R_{ETF_i} + R̃_i,

where ETF_i is the ETF associated with stock i.

Figure 5: Second eigenvector sorted by coefficient size. Labels as in Figure 4.

Figure 6: Third eigenvector sorted by coefficient size. Labels as in Figure 4.

Table 1: The top 10 stocks and bottom 10 stocks in the second eigenvector.

Top 10 Stocks (Energy, oil and gas)   | Bottom 10 Stocks (Real estate, financials, airlines)
Suncor Energy Inc.                    | American Airlines
Quicksilver Res.                      | United Airlines
XTO Energy                            | Marshall & Ilsley
Unit Corp.                            | Fifth Third Bancorp
Range Resources                       | BB&T Corp.
Apache Corp.                          | Continental Airlines
Schlumberger                          | M & T Bank
Denbury Resources Inc.                | Colgate-Palmolive Company
Marathon Oil Corp.                    | Target Corporation
Cabot Oil & Gas Corporation           | Alaska Air Group, Inc.

Table 2: The top 10 stocks and bottom 10 stocks in the third eigenvector.

Top 10 Stocks (Utility)               | Bottom 10 Stocks (Semiconductor)
Energy Corp.                          | Arkansas Best Corp.
FPL Group, Inc.                       | National Semiconductor Corp.
DTE Energy Company                    | Lam Research Corp.
Pinnacle West Capital Corp.           | Cymer, Inc.
The Southern Company                  | Intersil Corp.
Consolidated Edison, Inc.             | KLA-Tencor Corp.
Allegheny Energy, Inc.                | Fairchild Semiconductor International
Progress Energy, Inc.                 | Broadcom Corp.
PG&E Corporation                      | Cellcom Israel Ltd.
FirstEnergy Corp.                     | Leggett & Platt, Inc.

3 A relative-value model for equity valuation

We propose a quantitative approach to stock valuation based on its relative performance within industry-sector ETFs or, alternatively, with respect to the constructed PCA factors. In Section 4, we present a modification of this approach which takes into account the trading volume in the stocks, within a similar framework. Our investment model is purely based on price and volume data, although in principle it could be extended to include fundamental factors, such as changes in analysts' recommendations, earnings momentum, and other quantifiable factors.

We shall use continuous-time notation and denote stock prices by S_1(t), ..., S_N(t), where t is time measured in years from some arbitrary starting date. Based on the multi-factor models introduced in the previous section, we assume that stock returns satisfy the system of stochastic differential equations

dS_i(t) / S_i(t) = α_i dt + Σ_{j=1}^{N} β_{ij} dI_j(t) / I_j(t) + dX_i(t),   (10)

where the term

Σ_{j=1}^{N} β_{ij} dI_j(t) / I_j(t)

represents the systematic component of returns (driven by the returns of the eigenportfolios or the ETFs). The coefficients β_{ij} are the corresponding factor loadings.
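The single-ETF regression R_i = β R_{ETF_i} + R̃_i can be sketched as follows. As in the text, no intercept is fitted here (the drift is handled separately in the model of Section 3); the synthetic return series and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
r_etf = 0.01 * rng.standard_normal(252)        # sector ETF daily returns
r_stock = 0.8 * r_etf + 0.004 * rng.standard_normal(252)

beta = r_stock @ r_etf / (r_etf @ r_etf)       # no-intercept least squares
resid = r_stock - beta * r_etf                 # idiosyncratic return R_i~
```

By construction, the residual is orthogonal to the ETF return series; `resid` is the raw material for the mean-reversion signals of Section 3.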
In the case of ETF factors, we work with the model

dS_i(t) / S_i(t) = α_i dt + β_i dI(t) / I(t) + dX_i(t),   (11)

where I(t) is the ETF corresponding to the stock under consideration.⁶ In both cases, the idiosyncratic component of the return is

dX̃_i(t) = α_i dt + dX_i(t).

⁶ In other words, we analyze a "pair-trade" between each stock and its assigned ETF.
