## Measuring Forecast Accuracy: Comparing Professional Forecasters with Statistical Models

### Abstract

Forecast data are essential when constructing investment strategies. To evaluate and analyze portfolio performance, it is critical to measure forecast accuracy ex post; measuring the sensitivity of our actions to the forecasts ex ante is also important for making appropriate adjustments. This article focuses on the ex post part. We test the accuracy of forecasts using two methods: a distribution comparison and an odds ratio measure. We also explore applying the Information Coefficient (IC) to the forecast data. The results show that the mean of the forecast data matches the empirical data, while the variances behave differently.

### 1 Introduction

Forecasts of asset returns, volatility, and correlations play an important role in an investment strategy: they are inputs to the portfolio optimization process, so the accuracy of the forecast data is essential. In this report, we use three methods to measure forecast accuracy.

• Comparing the distribution of forecasts and empirical data
• Using odds ratio to measure the accuracy
• Using Information Coefficient (IC) to measure the accuracy

This report uses the asset price forecasts that JPM has published over several years.

Notes:

JPM Long Term Capital Market Assumptions (LTCMA) Release Dates

• 2016 - published 29 Oct 2015
• 2017 - published 26 Oct 2016
• 2018 - published 27 Oct 2017
• 2019 - published 29 Oct 2018

The ETF price data set is from Tiingo [https://www.tiingo.com/].

The data cover 2 Jan 2013 to 10 May 2019.

### 2 Related Research

#### 2.1 The Distribution Method

By comparing the empirical distribution of ETF returns with the simulated distribution implied by the forecast data, we can see the similarities and differences between the two data sets.

#### 2.2 The Odds Ratio Measure

The odds ratio is a statistic that quantifies the strength of the association between two events, A and B. It is defined as the ratio of the odds of A in the presence of B to the odds of A in the absence of B. Two events are independent if and only if the odds ratio equals 1.
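As a minimal numeric sketch of this definition (the counts below are hypothetical), the odds ratio can be read off a 2×2 contingency table:

```python
# Hypothetical 2x2 contingency table of event counts:
#              B present   B absent
# A present        20          10
# A absent          5          15
odds_A_given_B = 20 / 5         # odds of A in the presence of B
odds_A_given_not_B = 10 / 15    # odds of A in the absence of B

odds_ratio = odds_A_given_B / odds_A_given_not_B  # 4 / (2/3) = 6.0
```

A ratio of 6 indicates a strong positive association; independence would give exactly 1.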

In Bayesian model selection, the odds ratio is used to compare two competing models. In our report, we use this statistic to explore whether the assumptions behind the forecast data differ from the empirical data.

#### 2.3 The IC Measure

The Information Coefficient (IC) is a measure used to evaluate the performance of an active strategy; it shows how closely the analysts' financial forecasts match the results observed in the market. In this report, we apply the IC to the forecast data and the empirical data.

### 3 Methodology

#### 3.1 The Distribution Method

We analyze the distributions with the following steps:

• Simulate the distribution based on JPM forecasts of the returns' mean and volatility, under a normal distribution assumption
• Get the weighted average empirical return for shares and bonds
• Compare the distribution of forecasts data and empirical data
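The steps above can be sketched as follows, with purely hypothetical forecast numbers standing in for the JPM assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual forecast for one bucket (illustrative numbers only)
mu_annual, sigma_annual = 0.06, 0.15

# Step 1: simulate 250 daily returns under a normal assumption,
# de-annualizing the forecast mean and volatility
simulated = rng.normal(mu_annual / 250, sigma_annual / np.sqrt(250), size=250)

# Step 2: stand-in for the weighted-average empirical daily returns
empirical = rng.normal(0.0003, 0.008, size=250)

# Step 3: compare the first two moments of the two samples
mean_gap = simulated.mean() - empirical.mean()
vol_ratio = simulated.std() / empirical.std()
```

In the full analysis the empirical sample comes from the ETF data rather than a random draw, and the comparison is done visually via density plots.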

#### 3.2 The Odds Ratio Measure

First, we define two models:

• $$m_0$$ is the normal distribution with the historical mean and variance of returns
• $$m_1$$ is the normal distribution with the forecast mean and variance of returns

The mean of $$m_0$$ is the average return over the past three years of data, and its variance is calculated over the same window. For example, the mean and variance of $$m_0$$ in 2016 are derived from the returns of 2013-2015.
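The trailing-window fit for $$m_0$$ can be sketched with pandas (the return series below is synthetic; only the windowing logic matters):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical daily returns covering 2013-2018
idx = pd.bdate_range('2013-01-01', '2018-12-31')
ret = pd.Series(rng.normal(0.0002, 0.01, len(idx)), index=idx)

# m0 for the 2016 forecast year is fit on the three prior years, 2013-2015
window = ret.loc['2013':'2015']
mu0 = window.mean() * 250             # annualized mean
sigma0 = window.std() * np.sqrt(250)  # annualized volatility
```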

For the realized average return $$X$$, the odds ratio of the two models is:

\begin{equation*} \text{Odds Ratio} = \frac{P(X|m_1)}{P(X|m_0)} \end{equation*}

We calculate the odds ratio in two different ways:

1. Directly get the average of all returns in one year and compute the odds ratio
2. Compute the likelihood of every ETF, $$P(X_{it}|m_j)$$, where $$i = 1,\ldots,N$$, $$t = 2016, 2017, 2018, 2019$$, and $$j = 0, 1$$; average those conditional probabilities; and then compute the total odds ratio as in the following equation. This odds ratio can show the effect of correlations among the ETFs.

\begin{equation*} \text{Odds Ratio} = \frac{P(X|m_1)}{P(X|m_0)} = \frac{\sum_{i=1}^N P(X_{it}|m_1)}{\sum_{i=1}^N P(X_{it}|m_0)} \end{equation*}

If the odds ratio is greater than 1, the $$m_1$$ model is more accurate than the $$m_0$$ model, and vice versa.
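Both computations can be sketched in a few lines with `scipy.stats.norm`; all model parameters and returns below are hypothetical placeholders for the LTCMA and empirical inputs:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical annualized returns for N ETFs in one year
returns = rng.normal(0.05, 0.12, size=50)

# m0: historical model, m1: forecast model (illustrative parameters)
mu0, sigma0 = 0.03, 0.10
mu1, sigma1 = 0.06, 0.15

# Method 1: odds ratio evaluated at the cross-sectional average return
x_bar = returns.mean()
odds_avg = norm.pdf(x_bar, mu1, sigma1) / norm.pdf(x_bar, mu0, sigma0)

# Method 2: ratio of the averaged per-ETF likelihoods
odds_etf = norm.pdf(returns, mu1, sigma1).mean() / norm.pdf(returns, mu0, sigma0).mean()
```

Values near 1 mean the two models explain the realized returns about equally well.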

#### 3.3 The IC Measure

Before starting with the IC, we need to know what the Information Ratio (IR) is.

The IR assesses the returns of an alpha model conditioned on a dimension of risk. It measures the average active return (the deviation between the returns of the portfolio and its benchmark) relative to the volatility of that active return; this volatility is also known as the tracking error.

\begin{equation*} IR = \frac{\overline{\alpha}}{\sigma(\alpha)} \end{equation*}

The IC is a linear statistic that measures the cross-sectional correlation between the security return forecasts produced by a factor and the subsequent actual returns of those securities. In practice, the IC is essential because it is used to calculate the Information Ratio (IR) via the following equation:

\begin{equation*} IR = \frac{\overline{IC_t}}{std(IC_t)} \end{equation*}
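As a minimal sketch of this relationship (the IC series below is simulated, not taken from our data), the IR is just the mean of the IC time series scaled by its variability:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monthly IC observations over five years
ic = rng.normal(0.05, 0.10, size=60)

# IR is the average IC divided by its standard deviation over time
ir = ic.mean() / ic.std()
```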

In our research, we use ... what data?

##### 3.3.1 Raw IC

"Raw IC" is a simple one-period IC for total return correlation. The assumptions of "Raw IC" are:

• Portfolio weights are proportional to the forecasts
• The forecasts have zero cross-sectional mean
• No systematic risk in the market
• All stocks have the same specific risk

Denote active weights by $$\mathbf{w}=(w_1,\cdots, w_N)'$$ and subsequent returns by $$\mathbf{r}=(r_1,\cdots, r_N)'$$. The excess return for period $$t$$ is

\begin{equation*} \alpha_t = \sum_{i=1}^N w_i r_i=\mathbf{w}'\cdotp \mathbf{r} \end{equation*}

We have $$\mathbf{w}'\cdotp \mathbf{i}=0$$ for a dollar-neutral portfolio. We replace returns with relative returns against the cross-sectional average $$\overline{r}$$:

\begin{equation*} \alpha_t = \sum_{i=1}^N w_i (r_i-\overline{r}) \\ =\mathbf{w}'\cdotp (\mathbf{r}-\overline{r} \mathbf{i}) \\ = (N-1)corr(\mathbf{w},\mathbf{r})dis(\mathbf{w})dis(\mathbf{r}) \end{equation*}

where $$dis(\cdotp)$$ denotes the cross-sectional dispersion (standard deviation) of its argument.

According to the assumption, portfolio weights are proportional to the forecasts, so $$\mathbf{w} = c\mathbf{f}$$, or $$w_i=cf_i$$, for all $$i$$.

Since the forecasts have zero cross-sectional mean, we have

\begin{equation*} \alpha_t = \sum_{i=1}^N w_i (r_i - \overline{r})=c(N-1)IC \cdotp dis(\mathbf{f}) dis(\mathbf{r})\\ IC = corr(\mathbf{f},\mathbf{r}) \end{equation*}

Typically, a high positive IC is desired. An IC of 0.1 or higher on an annual basis is considered quite strong.
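Under these assumptions the raw IC is just a cross-sectional correlation, which can be sketched as follows (the forecasts and returns are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100

# Hypothetical forecasts, demeaned so the cross-sectional mean is zero
f = rng.normal(0.0, 1.0, N)
f -= f.mean()

# Subsequent returns weakly driven by the forecasts, plus noise
r = 0.1 * f + rng.normal(0.0, 1.0, N)

ic = np.corrcoef(f, r)[0, 1]  # raw IC = corr(f, r)
```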

##### 3.3.2 Risk-Adjusted IC

A risk-adjusted IC is more realistic than the raw IC because it eliminates the systematic bias in the factor.

First, we need to get the optimal weights of a market-neutral portfolio. The weights are not only dollar neutral but also neutral to all risk factors. We solve the following mean-variance optimization to obtain portfolio weights $$\mathbf{w}$$

\begin{equation*} \max_{\mathbf{w}} \quad \mathbf{f}'\cdotp\mathbf{w}-\frac{1}{2}\lambda\cdotp(\mathbf{w}'\cdotp\Sigma\cdotp\mathbf{w})\\ subject\ to \quad \mathbf{w}'\cdotp\mathbf{i}=0,\ and\ \mathbf{w}'\cdotp\mathbf{B}=0 \end{equation*}

where $$\lambda$$ refers to the risk-aversion parameter.

With the multifactor model, the covariance matrix becomes

\begin{equation*} \Sigma = \mathbf{B}\Sigma_I\mathbf{B}'+\mathbf{S}, \end{equation*}

where $$\mathbf{B}$$ is the matrix of factor exposures, $$\Sigma_I$$ is the factor covariance matrix, and $$\mathbf{S}$$ is the diagonal matrix of specific variances.

Solving the above optimization problem, we can obtain the weights

\begin{equation*} w_i=\lambda^{-1} \frac{f_i-l_0-l_1\beta_{1i}-\cdots-l_K\beta_{Ki}}{\sigma_i^2}. \end{equation*}

We can obtain the excess returns given the active weights

\begin{equation*} \alpha_t=\sum_{i=1}^N w_i r_i=\lambda^{-1} \sum_{i=1}^N \frac{f_i-l_0-l_1\beta_{1i}-\cdots-l_K\beta_{Ki}}{\sigma_i^2} r_i. \end{equation*}

The quantity in the numerator is the risk-adjusted forecast. We also replace the return $$r_i$$ in the excess return by the risk-adjusted return. We denote them as

\begin{equation*} F_i=\frac{f_i-l_0-l_1\beta_{1i}-\cdots-l_K\beta_{Ki}}{\sigma_i}\\ R_i=\frac{r_i-m_0-m_1\beta_{1i}-\cdots-m_K\beta_{Ki}}{\sigma_i}\\ \end{equation*}

Thus we have \begin{equation*} \alpha_t=\sum_{i=1}^N w_i r_i=\lambda^{-1} \sum_{i=1}^N F_i R_i\\ =(N-1)\lambda_t^{-1}corr(\mathbf{F}_t , \mathbf{R}_t )dis(\mathbf{F}_t)dis(\mathbf{R}_t). \end{equation*}

We choose $$m_0$$ such that $$avg(\mathbf{R}_t)=0$$.
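The risk adjustment above can be sketched as the residual of a cross-sectional regression on the factor exposures, scaled by specific risk. Everything below is simulated for illustration, and plain least squares stands in for whichever cross-sectional fit one prefers:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 100, 2

B = rng.normal(0.0, 1.0, (N, K))   # factor exposures (betas)
sigma = rng.uniform(0.1, 0.3, N)   # specific risks
f = rng.normal(0.0, 1.0, N)        # raw forecasts
r = rng.normal(0.0, 0.05, N)       # subsequent returns

X = np.column_stack([np.ones(N), B])  # intercept plus betas

def risk_adjust(y, X, sigma):
    # residual of a cross-sectional regression on (1, beta_1, ..., beta_K),
    # scaled by specific risk
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ coef) / sigma

F = risk_adjust(f, X, sigma)
R = risk_adjust(r, X, sigma)
ic_adj = np.corrcoef(F, R)[0, 1]   # risk-adjusted IC
```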

Let's see how to calculate the risk-aversion parameter. We derive it from the model tracking error, whose variance is the sum of the squared active weights multiplied by the specific variances. The model tracking error is the residual variance, which is

\begin{equation*} \sigma_{model}^2 = \sum_{i=1}^N w_i^2\sigma_i^2 = \lambda_t^{-2} \sum_{i=1}^N F_i^2\\ =\lambda_t^{-2}(N-1)([dis(\mathbf{F}_t)]^2+[avg(\mathbf{F}_t)]^2)\\ \approx \lambda_t^{-2}(N-1)[dis(\mathbf{F}_t)]^2. \end{equation*}

where we assume $$avg(\mathbf{F}_t)\approx 0$$.

Thus we have

\begin{equation*} \lambda_t=\frac{\sqrt{N-1}dis(\mathbf{F}_t)}{\sigma_{model}}. \end{equation*}

Substituting the risk-aversion parameter with the above equation in the excess return, we have

\begin{equation*} \alpha_t = IC_t \sqrt{N-1} \sigma_{model} dis(\mathbf{R}_t) \approx IC_t \sqrt{N} \sigma_{model} dis(\mathbf{R}_t). \end{equation*}

To get the IR, we need the expected excess return. Assuming $$dis(\mathbf{R}_t)$$ is constant and equal to its mean, we have

\begin{equation*} \overline{\alpha_t} = \overline{IC_t} \sqrt{N} \sigma_{model} \overline{dis(\mathbf{R}_t)}. \end{equation*}

The expected active risk is

\begin{equation*} \sigma = std(IC_t) \sqrt{N} \sigma_{model} \overline{dis(\mathbf{R}_t)}. \end{equation*}

And the IR is

\begin{equation*} IR = \frac{\overline{\alpha}}{\sigma(\alpha)} \end{equation*}

##### 3.3.3 A Better Estimate of IR

In reality, there is a correlation between $$IC_t$$ and $$dis(\mathbf{R}_t)$$. To achieve a high positive excess return, we want the dispersion to be high when the IC is positive and low when the IC is negative; that is, a positive correlation between the two. We denote by $$\rho$$ the correlation coefficient between $$IC_t$$ and $$dis(\mathbf{R}_t)$$. Thus, we have

\begin{equation*} \overline{\alpha_t} = \sqrt{N} \sigma_{model} \left[\overline{IC_t}\ \overline{dis(\mathbf{R}_t)}+\rho\cdotp std(IC_t)\, std[dis(\mathbf{R}_t)]\right]. \end{equation*}
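This adjusted estimate can be sketched numerically. The IC and dispersion series below are simulated with a built-in positive correlation, and $$\sigma_{model}$$ and $$N$$ are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(9)
T, N = 120, 500
sigma_model = 0.02

# Hypothetical correlated time series of IC_t and dis(R_t)
ic = rng.normal(0.05, 0.10, T)
dis = 1.0 + 0.5 * (ic - ic.mean()) + rng.normal(0.0, 0.05, T)

rho = np.corrcoef(ic, dis)[0, 1]

# Expected excess return including the IC-dispersion covariance term
alpha_bar = np.sqrt(N) * sigma_model * (
    ic.mean() * dis.mean() + rho * ic.std() * dis.std())
```

When $$\rho > 0$$, the covariance term raises the expected excess return relative to the naive product of means.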

### 4 Results

In :
import os
import pandas as pd
import numpy as np
import datetime
In :
buckets = pd.DataFrame([{
'asset_class': 'Fixed Income',
'bucket': 'U.S. Intermediate Treasuries'
}, {
'asset_class': 'Equities',
'bucket': 'U.S. Large Cap'
}])

years = pd.Series([2016, 2017, 2018, 2019], name='fcast_year')

fcasts = pd.concat([
buckets.copy().join(pd.Series([yr]*len(buckets), name='fcast_year')) for yr in years
])

data_dir = 'C:/Users/yyang/astrocyte/analysis/data'

fname=os.path.join(data_dir, 'assumption.csv')
assumption = pd.read_csv(fname, index_col=0)

fcasts.loc[:, 'mu'] = assumption.iloc[:,0].values # Annualized compound returns
fcasts.loc[:, 'sigma'] = assumption.iloc[:,1].values  # Annualized vol
fcasts

Out:
In :
def calc_cov(assumption):
    df = pd.DataFrame(np.zeros((len(assumption), len(assumption))))
    for i in np.arange(0, len(assumption), 2):
        year = assumption.iloc[i:i+2, :]
        correl = year.iloc[0, 2]
        covmat = pd.DataFrame(np.array([[1, correl], [correl, 1]]))
        df.iloc[i:i+2, i:i+2] = np.array(covmat)
    return df

idx = pd.MultiIndex.from_frame(fcasts[['fcast_year', 'bucket']])
fcast_correl = pd.DataFrame(np.array(calc_cov(assumption)), index=idx, columns=idx)
fcasts.set_index(['fcast_year','asset_class','bucket'], inplace=True)
fcasts.sort_index(inplace=True)

## TODO Fill in the information from the USD LTCMA
fcasts

Out:
In :
class DataLoader():

    def __init__(self, data_dir='~/', ticker_info_fname='ticker_info.json', os_date='2019-01-01'):
        import json
        import os
        with open(os.path.join(data_dir, ticker_info_fname), 'r') as fh:
            data = json.load(fh)
        self._data = data
        self._data_os_date = os_date

    def get_sibling_share_class_list(self, ticker):
        """returns the list of [ticker, sid, name for each ticker] from the graph"""
        return self._data['siblings'][ticker]

    def get_ticker_neighbors_by_score(self, ticker, as_of_date=None):
        """returns the map of {
            'src': [{'score': None, 'ticker': None, 'uuid': None, replication_id, strategy_id}],
            'dest': [{'score': None, 'ticker': None, 'uuid': None, replication_id, strategy_id}]
        } from the database"""
        return self._data['results'][self._data_os_date][ticker]['neighbors']

    def get_ticker_meta(self, ticker):
        """returns all of the meta_data for a given ticker from the graph"""
        return self._data['results'][self._data_os_date][ticker]['meta']

    def get_prices(self, ticker):
        """returns {
            'ticker': ticker,
            'close/splits/dividends': {
                'ref_period': [],
                'value': []
            }
        } from the database"""
        return self._data['results'][self._data_os_date][ticker]['prices']

    def get_all_adj_close(self):
        import pandas as pd
        import datetime
        df = pd.concat([
            pd.Series(d['adj_close']['value'], index=d['adj_close']['ref_period'], name=ticker)
            for ticker, d in self._data['results'][self._data_os_date].items()], axis=1, sort=False).sort_index()
        df.index = pd.Index([datetime.datetime.strptime(dt, '%Y-%m-%d') for dt in df.index])
        return df

    def get_fees(self, ticker):
        """returns a pandas 'records' like serialization with
        'ref_period' and the various fields
        """
        return self._data['results'][self._data_os_date][ticker]['fees']

    def get_buckets_for_tickers(self, ticker_map):
        """assigns each ticker (and each neighbor universe) to its most frequent bucket"""
        import pandas as pd
        from collections import defaultdict
        counts = defaultdict(lambda: {k: 0 for k in ticker_map.keys()})

        for bucket, tickers in ticker_map.items():
            for ticker in tickers:
                counts[ticker][bucket] += 1
                d = self._data['results'][self._data_os_date].get(ticker, None)
                if d is not None:
                    neighbors = d['neighbors']['src']
                    for n in neighbors:
                        for universe in n['universe'].split(','):
                            counts[universe][bucket] += 1

        results = {}
        for k, v in counts.items():
            max_count = -1
            for b, c in v.items():
                if c > max_count:
                    results[k] = b
                    max_count = c
        return pd.Series(results, name='bucket')

    def get_all_meta(self, ticker_map):
        import pandas as pd
        r = [d['meta'] for d in self._data['results'][self._data_os_date].values() if d['meta'] is not None]
        return pd.DataFrame(r).set_index('ticker').join(self.get_buckets_for_tickers(ticker_map))


In :
large_cap_tickers = ["AFDGX", "AFYDX", "ALSEX", "AUUKX", "AUUYX", "BTIRX", "CLCRX", "CLNCX", "CLXRX", "CWMFX", "DFELX", "DFEOX", "DFSIX", "DFUSX", "DTMEX", "EGORX", "FDESX", "FDTIX", "FLCEX", "FLCPX", "FSKAX", "FXAIX", "GSUTX", "GTCEX", "GTLOX", "HAITX", "HBGIX", "HGISX", "HGITX", "HIAGX", "HSTAX", "HSTIX", "IIRLX", "INGIX", "IPLIX", "IRLCX", "ISJBX", "JIARX", "JUEQX", "JUSRX", "KPLCX", "MFRHX", "MFRJX", "MFRKX", "MFVFX", "MFVSX", "MFVZX", "MIEYX", "MIEZX", "MITDX", "MITHX", "MITIX", "MITJX", "MLVHX", "MLVPX", "MLVRX", "MLVTX", "MMFVX", "MMFYX", "MMIEX", "MMIZX", "MRFIX", "MRGHX", "MRGJX", "MRGKX", "MRGRX", "MXKWX", "PCAPX", "PCAQX", "PDSIX", "PLFIX", "PLFPX", "PLFSX", "PLJMX", "PMYTX", "PMYYX", "PURYX", "PWCIX", "REDWX", "RESGX", "RFNGX", "RSIPX", "RWMEX", "RWMFX", "SBSDX", "SBSPX", "SEEHX", "SNXFX", "SSEYX", "SUWZX", "SWPPX", "SWTSX", "SXPRX", "TCEPX", "TEIHX", "TEQWX", "TIQRX", "TISAX", "TRSPX", "TSTFX", "VCSRX", "VDIGX", "VPSPX", "VSTIX", "WMFFX", "WSHFX"]
intermediate_bond_tickers = ["ABNFX", "AGMNX", "APBDX", "BBTBX", "BFAFX", "BIAZX", "BRAMX", "CFAFX", "CGVRX", "CMFFX", "CNFRX", "CRCSX", "CRCUX", "DFAPX", "DFIGX", "FBNDX", "FGBPX", "FGMNX", "FIKQX", "FSIGX", "FTHRX", "FUAMX", "FXNAX", "GAKPX", "GDFTX", "GGIRX", "GRRGX", "GSBPX", "GSCSX", "GSUPX", "JCBRX", "JIGBX", "JIGMX", "KCCSX", "LCRTX", "LCRVX", "LSSAX", "MCBDX", "MCBLX", "MCBYX", "MCZZX", "MFAEX", "MFAFX", "MXDQX", "MXGMX", "NRCRX", "OGGPX", "OGGQX", "PMRDX", "PMREX", "PMRIX", "RBFEX", "RBFFX", "RBFGX", "RBFHX", "RFATX", "RGVGX", "RIWTX", "RMAEX", "RMAFX", "RMAGX", "RMAHX", "SEGMX", "TMBFX", "TMBTX", "WAPIX", "WTRIX"]
len(large_cap_tickers), len(intermediate_bond_tickers)
Out:
(106, 67)

#### 4.1 The Distribution Method

In :
ret_list = []
for i in range(len(fcasts.index)):
    # simulate one year (250 days) of daily returns per forecast row
    ret_list.append(np.random.normal(fcasts['mu'].iloc[i]/250, fcasts['sigma'].iloc[i]/np.sqrt(250), 250))
In :
df_simu = pd.DataFrame(index = fcasts.index, columns = range(250), data = ret_list)
df_simu

Out:
In :
fcasts['mu'] = fcasts['mu']/250
fcasts['sigma'] = fcasts['sigma']/np.sqrt(250)
fcasts

Out:
In :
ticker_map = {
'U.S. Intermediate Treasuries': intermediate_bond_tickers,
'U.S. Large Cap': large_cap_tickers,
}

data_loader = DataLoader(data_dir=data_dir)
all_meta = data_loader.get_all_meta(ticker_map)

adj_close = data_loader.get_all_adj_close()

adj_close = adj_close.loc[:, all_meta.index]
adj_close_os = adj_close.loc['2016-01-01':,]

z = (adj_close_os / adj_close_os.fillna(method='bfill').iloc[0,:])

# Exclude outliers: flag normalized log-returns beyond twice the 0.1/99.9 percentiles
outliers = np.log(z).diff() / np.log(z).diff().abs().mean()
outliers_thresh = np.percentile((outliers).stack(), q=[.1,99.9])*2
outliers_mask = (outliers < outliers_thresh[0]) | (outliers > outliers_thresh[1])

outlier_tickers = sorted(outliers_mask.loc[:, (outliers_mask.sum(axis=0) > 0)].columns.tolist())

for c in outlier_tickers:
    all_meta.loc[c, :] = None
all_meta.dropna(how='all', inplace=True)

display(all_meta)


#### Average return of empirical data

In :
bucket_names = ['U.S. Intermediate Treasuries', 'U.S. Large Cap']
ret_avg = []
for b in bucket_names:
    temp_idx = all_meta.loc[all_meta.bucket == b, :].index
    ret_avg += [adj_close_os[temp_idx].pct_change().mean(axis = 1).dropna()]

df_avg = pd.DataFrame(np.transpose(ret_avg), columns = bucket_names, index = ret_avg[0].index)

mask = df_avg.abs() > df_avg.abs().median()*1.2*3
df_avg[mask] = None
df_avg.dropna(inplace = True)

time = ['2016', '2017', '2018', '2019']
df_avg_dist = pd.DataFrame(index = df_simu.index, columns = ['mu_empirical', 'sigma_empirical'])
i = 0
for t in time:
    # rows come in (Equities / U.S. Large Cap, Fixed Income / U.S. Intermediate Treasuries) pairs per year
    df_avg_dist['mu_empirical'].iloc[i] = df_avg.loc[t, 'U.S. Large Cap'].mean()
    df_avg_dist['mu_empirical'].iloc[i+1] = df_avg.loc[t, 'U.S. Intermediate Treasuries'].mean()
    df_avg_dist['sigma_empirical'].iloc[i] = df_avg.loc[t, 'U.S. Large Cap'].std()
    df_avg_dist['sigma_empirical'].iloc[i+1] = df_avg.loc[t, 'U.S. Intermediate Treasuries'].std()
    i = i + 2

df_avg_dist

Out:
In :
sigma_real = []
time = ['2016', '2017', '2018', '2019']
for t in time:
    sigma_real += [df_avg.loc[t].std(axis = 0)]

df_sigma = pd.DataFrame(sigma_real, index = time)
df_sigma

Out:

Equity Plots

In :
import matplotlib.pylab as plt
import seaborn as sns

fig1 = plt.figure(figsize=(16, 10))
ax1 = fig1.add_subplot(221)
ax2 = fig1.add_subplot(222)
ax3 = fig1.add_subplot(223)
ax4 = fig1.add_subplot(224)

b = 'U.S. Large Cap'
for ax, yr in zip([ax1, ax2, ax3, ax4], ['2016', '2017', '2018', '2019']):
    # simulated forecast distribution for this bucket and year
    sim = df_simu.xs(b, level='bucket').loc[int(yr)].squeeze()
    sns.distplot(sim, label="JPM Forecast", ax=ax)
    ax.axvline(x=fcasts['mu'].xs(b, level='bucket').loc[int(yr)].squeeze(), color='blue')
    sns.distplot(df_avg[b].loc[yr], label="Empirical Return", ax=ax)
    ax.axvline(x=df_avg[b].loc[yr].mean(), color='red')
    ax.legend()
    ax.set_title(yr + " " + b)
Out:
Text(0.5, 1.0, '2019 U.S. Large Cap')

For each year, the mean of the empirical returns is close to the mean of the forecast data, while the variance of the empirical returns is smaller than the variance of the forecast data.

Treasury Plot

In :
fig3 = plt.figure(figsize=(16, 10))
ax1 = fig3.add_subplot(221)
ax2 = fig3.add_subplot(222)
ax3 = fig3.add_subplot(223)
ax4 = fig3.add_subplot(224)

b = 'U.S. Intermediate Treasuries'
for ax, yr in zip([ax1, ax2, ax3, ax4], ['2016', '2017', '2018', '2019']):
    # simulated forecast distribution for this bucket and year
    sim = df_simu.xs(b, level='bucket').loc[int(yr)].squeeze()
    sns.distplot(sim, label="JPM Forecast", ax=ax)
    ax.axvline(x=fcasts['mu'].xs(b, level='bucket').loc[int(yr)].squeeze(), color='blue')
    sns.distplot(df_avg[b].loc[yr], label="Empirical Return", ax=ax)
    ax.axvline(x=df_avg[b].loc[yr].mean(), color='red')
    ax.legend()
    ax.set_title(yr + " " + b)
Out:
Text(0.5, 1.0, '2019 U.S. Intermediate Treasuries')

In 2016 and 2017, the empirical mean is close to the forecast mean, and the empirical variance is smaller than the forecast variance. In 2018 and 2019, the means of the two data sets remain quite close, and the empirical variance becomes much closer to the forecast variance.

#### 4.2 The Odds Ratio Measure

In :
import os
import pandas as pd
import numpy as np
import datetime
In :
buckets = pd.DataFrame([{
'asset_class': 'Fixed Income',
'bucket': 'U.S. Intermediate Treasuries'
}, {
'asset_class': 'Equities',
'bucket': 'U.S. Large Cap'
}])

years = pd.Series([2016, 2017, 2018, 2019], name='fcast_year')

fcasts = pd.concat([
buckets.copy().join(pd.Series([yr]*len(buckets), name='fcast_year')) for yr in years
])

data_dir = 'C:/Users/yyang/astrocyte/analysis/data'

fname=os.path.join(data_dir, 'assumption.csv')
assumption = pd.read_csv(fname, index_col=0)

fcasts.loc[:, 'mu'] = assumption.iloc[:,0].values # Annualized compound returns
fcasts.loc[:, 'sigma'] = assumption.iloc[:,1].values  # Annualized vol
fcasts

Out:
In :
import pandas as pd
import numpy as np
def calc_cov(assumption):
    df = pd.DataFrame(np.zeros((len(assumption), len(assumption))))
    for i in np.arange(0, len(assumption), 2):
        year = assumption.iloc[i:i+2, :]
        correl = year.iloc[0, 2]
        covmat = pd.DataFrame(np.array([[1, correl], [correl, 1]]))
        df.iloc[i:i+2, i:i+2] = np.array(covmat)
    return df

idx = pd.MultiIndex.from_frame(fcasts[['fcast_year', 'bucket']])
fcast_correl = pd.DataFrame(np.array(calc_cov(assumption)), index=idx, columns=idx)
fcasts.set_index(['fcast_year','asset_class','bucket'], inplace=True)
fcasts.sort_index(inplace=True)

## TODO Fill in the information from the USD LTCMA
fcasts

Out:
In :
large_cap_tickers = ["AFDGX", "AFYDX", "ALSEX", "AUUKX", "AUUYX", "BTIRX", "CLCRX", "CLNCX", "CLXRX", "CWMFX", "DFELX", "DFEOX", "DFSIX", "DFUSX", "DTMEX", "EGORX", "FDESX", "FDTIX", "FLCEX", "FLCPX", "FSKAX", "FXAIX", "GSUTX", "GTCEX", "GTLOX", "HAITX", "HBGIX", "HGISX", "HGITX", "HIAGX", "HSTAX", "HSTIX", "IIRLX", "INGIX", "IPLIX", "IRLCX", "ISJBX", "JIARX", "JUEQX", "JUSRX", "KPLCX", "MFRHX", "MFRJX", "MFRKX", "MFVFX", "MFVSX", "MFVZX", "MIEYX", "MIEZX", "MITDX", "MITHX", "MITIX", "MITJX", "MLVHX", "MLVPX", "MLVRX", "MLVTX", "MMFVX", "MMFYX", "MMIEX", "MMIZX", "MRFIX", "MRGHX", "MRGJX", "MRGKX", "MRGRX", "MXKWX", "PCAPX", "PCAQX", "PDSIX", "PLFIX", "PLFPX", "PLFSX", "PLJMX", "PMYTX", "PMYYX", "PURYX", "PWCIX", "REDWX", "RESGX", "RFNGX", "RSIPX", "RWMEX", "RWMFX", "SBSDX", "SBSPX", "SEEHX", "SNXFX", "SSEYX", "SUWZX", "SWPPX", "SWTSX", "SXPRX", "TCEPX", "TEIHX", "TEQWX", "TIQRX", "TISAX", "TRSPX", "TSTFX", "VCSRX", "VDIGX", "VPSPX", "VSTIX", "WMFFX", "WSHFX"]
intermediate_bond_tickers = ["ABNFX", "AGMNX", "APBDX", "BBTBX", "BFAFX", "BIAZX", "BRAMX", "CFAFX", "CGVRX", "CMFFX", "CNFRX", "CRCSX", "CRCUX", "DFAPX", "DFIGX", "FBNDX", "FGBPX", "FGMNX", "FIKQX", "FSIGX", "FTHRX", "FUAMX", "FXNAX", "GAKPX", "GDFTX", "GGIRX", "GRRGX", "GSBPX", "GSCSX", "GSUPX", "JCBRX", "JIGBX", "JIGMX", "KCCSX", "LCRTX", "LCRVX", "LSSAX", "MCBDX", "MCBLX", "MCBYX", "MCZZX", "MFAEX", "MFAFX", "MXDQX", "MXGMX", "NRCRX", "OGGPX", "OGGQX", "PMRDX", "PMREX", "PMRIX", "RBFEX", "RBFFX", "RBFGX", "RBFHX", "RFATX", "RGVGX", "RIWTX", "RMAEX", "RMAFX", "RMAGX", "RMAHX", "SEGMX", "TMBFX", "TMBTX", "WAPIX", "WTRIX"]
len(large_cap_tickers), len(intermediate_bond_tickers)
Out:
(106, 67)
In :
ticker_map = {
'U.S. Intermediate Treasuries': intermediate_bond_tickers,
'U.S. Large Cap': large_cap_tickers,
}

data_loader = DataLoader(data_dir=data_dir)
all_meta = data_loader.get_all_meta(ticker_map)

adj_close = data_loader.get_all_adj_close()

adj_close = adj_close.loc[:, all_meta.index]
adj_close_os = adj_close.loc['2013-01-01':,]

z = np.log(adj_close_os).diff()

# Exclude outliers: flag normalized log-returns beyond twice the 0.1/99.9 percentiles
outliers = z / z.abs().mean()
outliers_thresh = np.percentile((outliers).stack(), q=[.1,99.9])*2
outliers_mask = (outliers < outliers_thresh[0]) | (outliers > outliers_thresh[1])

outlier_tickers = sorted(outliers_mask.loc[:, (outliers_mask.sum(axis=0) > 0)].columns.tolist())

for c in outlier_tickers:
    all_meta.loc[c, :] = None
all_meta.dropna(how='all', inplace=True)

display(all_meta)

In :
ret = z[all_meta.index]
mean_stock = []
sigma_stock = []
mean_tres = []
sigma_tres = []

time = [2016, 2017, 2018, 2019]
bucket = ['U.S. Intermediate Treasuries', 'U.S. Large Cap']

ret = ret.T
all_ret = pd.concat([all_meta['bucket'], ret], axis = 1)

all_ret = all_ret.set_index(['bucket', all_ret.index]).T
all_ret.index = pd.Index([dt.to_datetime64() for dt in all_ret.index])

for t in time:
    # trailing three calendar years, e.g. 2013-2015 for the 2016 forecast
    mean_stock += [all_ret['U.S. Large Cap'].loc[str(t-3): str(t-1)].mean().mean() * 250]
    sigma_stock += [all_ret['U.S. Large Cap'].loc[str(t-3): str(t-1)].mean(axis = 1).std() * np.sqrt(250)]
    mean_tres += [all_ret['U.S. Intermediate Treasuries'].loc[str(t-3): str(t-1)].mean().mean() * 250]
    sigma_tres += [all_ret['U.S. Intermediate Treasuries'].loc[str(t-3): str(t-1)].mean(axis = 1).std() * np.sqrt(250)]


In :
all_ret
Out:
In :
forecast_emp = pd.DataFrame({
'equity fore mean': mean_stock,
'equity fore sigma': sigma_stock,
'bond fore mean': mean_tres,
'bond fore sigma': sigma_tres
}, index = time)

In :
forecast_emp
Out:
In :
mean_time_stock = []
mean_time_bond = []

for t in time:
    mean_time_stock += [all_ret['U.S. Large Cap'].loc[str(t)].mean().mean() * 250]
    mean_time_bond += [all_ret['U.S. Intermediate Treasuries'].loc[str(t)].mean().mean() * 250]

mean_emp = pd.DataFrame({
'equity emp mean': mean_time_stock,
'bond emp mean': mean_time_bond
}, index = time)

mean_emp

Out:
In :
from scipy.stats import norm

pdf_fore_stock = []
pdf_emp_stock = []
pdf_fore_bond = []
pdf_emp_bond = []

for t in time:
    # select the forecast parameters for each bucket from the MultiIndex
    mu_eq = fcasts.loc[(t, 'Equities', 'U.S. Large Cap'), 'mu']
    sigma_eq = fcasts.loc[(t, 'Equities', 'U.S. Large Cap'), 'sigma']
    mu_fi = fcasts.loc[(t, 'Fixed Income', 'U.S. Intermediate Treasuries'), 'mu']
    sigma_fi = fcasts.loc[(t, 'Fixed Income', 'U.S. Intermediate Treasuries'), 'sigma']
    pdf_fore_stock += [norm.pdf(mean_emp['equity emp mean'].loc[t], mu_eq, sigma_eq)]
    pdf_emp_stock += [norm.pdf(mean_emp['equity emp mean'].loc[t], forecast_emp['equity fore mean'].loc[t], forecast_emp['equity fore sigma'].loc[t])]
    pdf_fore_bond += [norm.pdf(mean_emp['bond emp mean'].loc[t], mu_fi, sigma_fi)]
    pdf_emp_bond += [norm.pdf(mean_emp['bond emp mean'].loc[t], forecast_emp['bond fore mean'].loc[t], forecast_emp['bond fore sigma'].loc[t])]

pdf_mean_all = pd.DataFrame({
'pdf_fore_stock': pdf_fore_stock,
'pdf_emp_stock': pdf_emp_stock,
'pdf_fore_bond': pdf_fore_bond,
'pdf_emp_bond': pdf_emp_bond
}, index = time)
pdf_mean_all

Out:

Odds ratio of average returns

In :
odds_ratio_stock = pdf_mean_all['pdf_fore_stock'] / pdf_mean_all['pdf_emp_stock']
odds_ratio_bond = pdf_mean_all['pdf_fore_bond'] / pdf_mean_all['pdf_emp_bond']

odds_ratio_all = pd.concat([odds_ratio_stock, odds_ratio_bond], axis = 1)
odds_ratio_all.loc['Average'] = odds_ratio_all.mean()
odds_ratio_all.columns = ['stock_etf', 'bond_etf']

odds_ratio_all

Out:

The average odds ratios of the stock ETFs and bond ETFs are close to 1, which suggests the historical model and the forecast model describe the realized returns about equally well.

In :
# mean return of every ETF
eve_mean = []

for t in time:
    eve_mean += [all_ret.loc[str(t)].mean() * 250]

eve_mean = pd.DataFrame(eve_mean, index = time).T
eve_mean.sort_index()

Out:
In :
eve_pdf_fore_stock = []
eve_pdf_fore_bond = []
eve_pdf_emp_stock = []
eve_pdf_emp_bond = []

for t in time:
    # select the forecast parameters for each bucket from the MultiIndex
    mu_eq = fcasts.loc[(t, 'Equities', 'U.S. Large Cap'), 'mu']
    sigma_eq = fcasts.loc[(t, 'Equities', 'U.S. Large Cap'), 'sigma']
    mu_fi = fcasts.loc[(t, 'Fixed Income', 'U.S. Intermediate Treasuries'), 'mu']
    sigma_fi = fcasts.loc[(t, 'Fixed Income', 'U.S. Intermediate Treasuries'), 'sigma']
    eve_pdf_fore_stock += [norm.pdf(eve_mean[t].loc[('U.S. Large Cap', slice(None))], mu_eq, sigma_eq)]
    eve_pdf_fore_bond += [norm.pdf(eve_mean[t].loc[('U.S. Intermediate Treasuries', slice(None))], mu_fi, sigma_fi)]
    eve_pdf_emp_stock += [norm.pdf(eve_mean[t].loc[('U.S. Large Cap', slice(None))], forecast_emp['equity fore mean'].loc[t], forecast_emp['equity fore sigma'].loc[t])]
    eve_pdf_emp_bond += [norm.pdf(eve_mean[t].loc[('U.S. Intermediate Treasuries', slice(None))], forecast_emp['bond fore mean'].loc[t], forecast_emp['bond fore sigma'].loc[t])]

Odds ratio of the average of the conditional probability of each ETF

In :
n = len(eve_pdf_fore_stock)

eve_pdf_fore_stock = [eve_pdf_fore_stock[t][~np.isnan(eve_pdf_fore_stock[t])] for t in range(n)]
eve_pdf_fore_bond = [eve_pdf_fore_bond[t][~np.isnan(eve_pdf_fore_bond[t])] for t in range(n)]
eve_pdf_emp_stock = [eve_pdf_emp_stock[t][~np.isnan(eve_pdf_emp_stock[t])] for t in range(n)]
eve_pdf_emp_bond = [eve_pdf_emp_bond[t][~np.isnan(eve_pdf_emp_bond[t])] for t in range(n)]

odds_eve_stock = [eve_pdf_fore_stock[t].sum() / eve_pdf_emp_stock[t].sum() for t in range(n)]
odds_eve_bond = [eve_pdf_fore_bond[t].sum() / eve_pdf_emp_bond[t].sum() for t in range(n)]

odds_eve_mean = [np.mean(odds_eve_stock), np.mean(odds_eve_bond)]

odds_ratio_eve = pd.DataFrame({
'stock_etf': odds_eve_stock,
'bond_etf': odds_eve_bond,
}, index = time)

odds_ratio_eve.loc['Average'] = odds_eve_mean
odds_ratio_eve

Out:

The result is the same as the result obtained from the average of total returns.

### 5 Conclusion

We can draw several conclusions from the analysis:

• For the stock ETFs, the distributions of the empirical returns are more concentrated than the normal distribution with the forecast mean and variance; the empirical mean is close to the forecast mean, but the empirical variance is smaller than the forecast variance
• For the bond ETFs, the empirical mean is close to the forecast mean in each year; in 2016 and 2017 the empirical variances are small, while in 2018 and 2019 they are close to the forecast variances
• Both kinds of odds ratios for the stock and bond ETFs are close to 1, which suggests the historical and forecast distributions fit the data about equally well, and the correlation among different assets has no effect on the result
