Basic trade stats done for Faber model, indices in isolation

In general, the 10-week SMA timing system improved risk-adjusted returns (measured by the Sharpe ratio) and reduced max drawdown relative to buying and holding the same index.

There were some discrepancies between the stats I produced with R and the ones Faber reports in his paper, especially in the trade stats for the US Govt 10 Year Bonds (ignore the gspc label in the figure below).

[Figure: trade stats for the 10-week timing system on US Govt 10 Year Bonds]

For the same period (1973 through 2008), Faber reports a buy-and-hold CAGR of 8.69% and a timing CAGR of 8.79%, whereas I obtained 8.3% for buy-and-hold and only 6.5% for timing. Faber reports a max drawdown of 18.79% for buy-and-hold and 11.2% for timing, but in R I obtained about a 16.5% max drawdown for both. These aren't huge discrepancies, and they seem less significant for the other four indices; quantifying the differences instead of eyeballing them might help uncover the cause. At this point, though, this is a proof of concept, not (yet) a rigorous trading system, so I think I'll move on to modeling Faber's portfolio.
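
For reference, here is a minimal sketch of how these stats can be computed from a weekly return series. The helper names (cagr, max_drawdown, sharpe) are my own, not Faber's code:

```r
# My own helper functions for the stats quoted above (not Faber's code).
# 'ret' is a numeric vector of weekly simple returns; 52 periods per year assumed.
cagr <- function(ret, periods_per_year = 52) {
  prod(1 + ret)^(periods_per_year / length(ret)) - 1
}

max_drawdown <- function(ret) {
  equity <- cumprod(1 + ret)        # growth of $1
  max(1 - equity / cummax(equity))  # largest peak-to-trough decline
}

sharpe <- function(ret, periods_per_year = 52) {
  mean(ret) / sd(ret) * sqrt(periods_per_year)  # annualized, risk-free rate ignored
}
```

One likely suspect for the gap: Faber's paper works with monthly data and a 10-month SMA, so a weekly implementation won't match his numbers exactly.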

My research “queue”

Here’s a rough list of what I want to look at in the near future:

  • replicating Faber’s model, and improving it
  • measuring idiosyncratic returns, i.e., the degree to which stock returns are driven by internal factors (company news, performance, etc.), isolated from the effect of external factors (macro, sector, and other systematic factors). The type of analysis in this article by Matthew Rothman, MD and Head of Quantitative Equity Strategies at Barclays (Zero Hedge – Alpha is dead), is what I'm talking about; a quick sketch of the idea follows this list.
  • improving and further analyzing the robustness of one of my ETF timing strategies. Currently tracking daily performance at covestor.com/troy-shu
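
On that second bullet, here is a toy version of the idea (my own illustration, not Rothman's methodology): regress a stock's weekly returns on the market's and treat the regression residuals as the idiosyncratic component. The tickers are arbitrary examples:

```r
# Toy illustration, not Rothman's methodology: regress a stock's weekly
# returns on the market's, and treat the residuals as the idiosyncratic
# component. R^2 is the share of variance the market explains.
library(quantmod)

stock  <- weeklyReturn(getSymbols("AAPL",  auto.assign = FALSE))  # arbitrary example
market <- weeklyReturn(getSymbols("^GSPC", auto.assign = FALSE))

df <- as.data.frame(na.omit(merge(stock, market)))
colnames(df) <- c("stock", "market")

fit  <- lm(stock ~ market, data = df)
idio <- residuals(fit)    # idiosyncratic return series
summary(fit)$r.squared    # fraction of return variance explained by the market
```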

Will post my progress, methods, code, etc. along the way.

ARTICLE: the future of quant finance

http://bit.ly/bUtLuG


I completely agree with the above article. Quant finance is dominated by high-frequency trading; in most people's minds, HFT is quant/computational finance. Everyone is using the same price and volume market data, trying to squeeze out profits by trading at the lowest latency possible, and HFT is being commoditized. So you look at other places in the value chain where performance isn't good enough yet: one example, as the article puts it, is using exogenous instead of endogenous market data. This is the kind of data that Bloomberg and Reuters don't provide, the kind of data that no one uses… yet. The first thing that comes to mind is http://www.thestocksonar.com/, which semantically analyzes news articles for positive/negative sentiment on stocks using machine learning/AI techniques.
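
To make the sentiment idea concrete, here is a toy word-list scorer in R. The word lists and the score_headline function are made up for illustration; a service like TheStockSonar uses far more sophisticated NLP/ML models:

```r
# Toy word-list sentiment scorer (made up for illustration):
# count positive vs. negative words in a headline.
score_headline <- function(text,
                           pos = c("beats", "upgrade", "growth", "record"),
                           neg = c("misses", "downgrade", "lawsuit", "recall")) {
  words <- strsplit(tolower(text), "\\W+")[[1]]
  sum(words %in% pos) - sum(words %in% neg)
}

score_headline("Acme beats estimates, raises growth outlook")  # returns 2
```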

Despite all the uses of algorithms and computers today, the human brain is still our most valuable asset. We, not computers, decide how to differentiate ourselves from our competitors—what new strategies to research and trade, what new kinds of data to use. The “human aspect” is still paramount in quant finance.

Faber’s market timing 2

R source for the graph and the calculation of equity for the 10-week trading system is below, using S&P 500 index data since 1973. Playing around with an example is a great way to learn.
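
A minimal sketch of that calculation, assuming weekly ^GSPC closes from Yahoo via quantmod (the long/flat rule below is my reading of the 10-week SMA system, not Faber's exact code):

```r
library(quantmod)  # also loads TTR and xts

# Weekly S&P 500 closes since 1973
gspc <- getSymbols("^GSPC", from = "1973-01-01", auto.assign = FALSE)
px   <- Cl(to.weekly(gspc))
ret  <- ROC(px, type = "discrete")  # weekly simple returns

# Long when last week's close was above its 10-week SMA, otherwise flat (cash)
signal <- Lag(ifelse(px > SMA(px, n = 10), 1, 0), 1)
timing <- na.omit(ret * signal)

equity <- cumprod(1 + timing)       # growth of $1
plot(equity, main = "10-week SMA timing system, S&P 500")
```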

Finishing up the Faber system for the five indices he uses (S&P 500, MSCI EAFE, GSCI, NAREIT, and US Govt 10 Year Bonds), trading each in isolation. Still waiting for S&P and GS to respond to my query about where to find the total return series for the GS Commodity Index. The next step is to position-size them at 20% each to generate one portfolio.
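
The combination step might look something like this sketch, where strat_returns is a hypothetical xts object holding each strategy's weekly returns (fixed weights applied every period imply rebalancing back to 20% each week):

```r
library(xts)

# Hypothetical combination step: 'strat_returns' is an xts object with one
# column of weekly timing returns per index (sp500, eafe, gsci, nareit,
# bonds10yr), each computed as in the single-index tests.
weights   <- rep(0.20, ncol(strat_returns))        # 20% per index
portfolio <- xts(as.matrix(strat_returns) %*% weights,
                 order.by = index(strat_returns))  # rebalanced weekly to 20%
equity    <- cumprod(1 + portfolio)                # combined equity curve
```

If the simple matrix product is too crude, the PerformanceAnalytics package's Return.portfolio function does the same accounting with more options.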

Faber’s Market Timing paper 1

[Figure: S&P 500 index since 1973 with the 10-week SMA timing system's equity curve]

Using the quantmod and TTR libraries in R. The graph shows the S&P 500 index from 1973 to present, with the equity curve of the simple 10-week SMA timing system used in Mebane Faber's paper. Faber's method of "tactical asset allocation" has produced great risk-adjusted returns for the past 40 years (backtested), which brings up some questions: is it robust? How can it be improved? I have several ideas for this… but first I want to see if I can replicate Faber's original system in R.
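
For the graph itself, quantmod's charting functions get most of the way there. A rough sketch, with the ticker and date range as my assumptions:

```r
library(quantmod)  # loads TTR as well

# Weekly S&P 500 bars since 1973, with the 10-week SMA overlaid
gspc <- to.weekly(getSymbols("^GSPC", from = "1973-01-01", auto.assign = FALSE))
chartSeries(gspc, theme = "white", name = "S&P 500, weekly")
addSMA(n = 10)
```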