Model allocation
Applying portfolio theory to model ensembling.
- 1. Why ensembling works
- 2. Weights by residual variance
- 3. Portfolio theory for ensembling
- 4. Ensembling TPS Aug 2021
- 5. Results
- References
- Resources
This notebook was originally published on https://www.kaggle.com/joatom/model-allocation.
In this notebook I experiment with two ensembling strategies.
There are many ways to combine different models to improve predictions. A common technique for regression tasks is taking a weighted average of the model predictions (y_pred = m1(x)*w1 + ... + mn(x)*wn, with the weights summing to 1). Another common technique is building a meta model that is trained on the models' outputs.
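To make the two approaches concrete, here is a minimal sketch using scikit-learn (the estimators and the synthetic data are placeholders for illustration, not the models used later in this notebook):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

# weighted average of two fitted models (weights sum to 1)
m1 = LinearRegression().fit(X, y)
m2 = KNeighborsRegressor(10).fit(X, y)
y_pred = 0.7 * m1.predict(X) + 0.3 * m2.predict(X)

# meta model: a Ridge regression trained on the base models' outputs
stack = StackingRegressor(
    estimators=[('lin', LinearRegression()), ('knn', KNeighborsRegressor(10))],
    final_estimator=Ridge()
).fit(X, y)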
The first chapter starts with a simple linear combination of two models, and we explore with a simple example why ensembling actually works. These insights lead, in the second chapter, to the first technique for choosing the weights of a linear ensemble: minimizing the residual variance. In the third chapter an alternative weight selection is examined. This second technique is inspired by portfolio theory (a theory on how to combine financial assets). In the fourth chapter the two techniques are applied and compared on the Tabular Playground Series (TPS) - Aug 2021 competition. Finally, cross validation (CV) and leaderboard (LB) scores are listed in the fifth chapter.
1. Why ensembling works
We compare two simple models, Model 1 and Model 2, on a small toy dataset. To get a better intuition for how well the two models fit the ground truth, we plot the residuals y_true(x) - m(x).
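The toy data of the original notebook isn't included in this excerpt. The following stand-in arrays (hypothetical values chosen to mimic the behaviour described below; the exact variance figures quoted later in the text refer to the original data) make the snippets below runnable:

import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize

x = np.arange(1, 8)
y_true = np.array([3.0, 4.0, 5.0, 4.0, 3.0, 4.0, 5.0])  # hypothetical ground truth
m1 = np.array([2.5, 3.8, 4.5, 3.9, 3.2, 3.7, 5.9])      # hypothetical Model 1 predictions
m2 = np.array([2.8, 4.2, 5.0, 4.3, 1.8, 4.4, 5.5])      # hypothetical Model 2 predictions

# plot the residuals of both models
plt.plot(x, y_true - m1, 'o-', label='Model 1 residuals')
plt.plot(x, y_true - m2, 'o-', label='Model 2 residuals')
plt.axhline(0, color='grey', linewidth=1)
plt.legend()
plt.show()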
If we had to choose one of the models, which one would we prefer? Model 2 does better on the first data point and is perfect on the third, but it produces an outlier at the 5th data point.
Let's look at the mean and the variance of the residuals.
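With the stand-in arrays from above:

res1 = y_true - m1
res2 = y_true - m2
print(f'Model 1 residuals - mean: {res1.mean():.4f}, var: {res1.var():.4f}')
print(f'Model 2 residuals - mean: {res2.mean():.4f}, var: {res2.var():.4f}')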
In the long run Model 2 has an average residual of 0, while Model 1 carries along a residual of 0.0714. So on average Model 2 seems to do better.
But Model 2 also has a higher variance. That implies we have a good chance of a great prediction (e.g. x=3), but we also run a high risk of ruining the prediction (e.g. x=5).
Now we build a simple linear ensemble of the two models: ens = 0.5*m1 + 0.5*m2.
The ensemble line is closer to the true values. It also looks smoother than m1 and m2.
In the residual chart we can see that the ensemble does a bit worse for x=3 compared to Model 2. But it also decreases the residuals for the outliers (points 1, 5, 7).
Let's check the stats:
We dramatically reduced the variance, hence reduced the risk/chance. The mean value is now in between Model 1 and Model 2.
Finally let's play around with the model weights in the ensemble and check how mean and variance change.
weight_m1 = np.linspace(0, 1, 30)
ens_mean = np.zeros(30)
ens_var = np.zeros(30)

for i, w1 in enumerate(weight_m1):
    # build ensemble for different weights
    ens = m1 * w1 + m2 * (1 - w1)
    ens_res = y_true - ens
    # keep track of mean and var of the differently weighted ensembles
    ens_mean[i] = ens_res.mean()
    ens_var[i] = ens_res.var()
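Plotting the sweep makes the trade-off visible (a minimal matplotlib sketch):

plt.plot(weight_m1, ens_mean, label='ensemble residual mean')
plt.plot(weight_m1, ens_var, label='ensemble residual variance')
plt.xlabel('weight of Model 1')
plt.legend()
plt.show()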
With the previous 50:50 split the variance is almost at its lowest point. So we can only push the mean below 0.0357 if we allow the ensemble to have more variance, hence take more risk.
2. Weights by residual variance
Since Model 1 and Model 2 are well fitted, their average residuals are pretty close to 0. So let's focus on reducing the variance to avoid surprises on later predictions.
We now solve for the optimal weights that minimize the variance of our ensemble's residuals with this function:
fun = lambda w: (y_true-np.matmul(w, preds)).var()
We also define a constraint so that w.sum() == 1:
cons = ({'type': 'eq', 'fun': lambda w: w.sum()-1})
If you want, you can also set bounds so that the weights won't become negative.
I don't. I like the idea of going short with a model, and negative weights really improved the results of the TPS predictions in chapter 4.
# one (lower, upper) pair per model (two models in this example)
bnds = ((0, None),
        (0, None))
Now, we are all set to retrieve the optimal weights.
preds = np.array([m1, m2])
# init weights
w_init = np.ones(preds.shape[0])/preds.shape[0]
# run optimization
res = scipy.optimize.minimize(fun, w_init, method='SLSQP', constraints=cons) #,bounds=bnds
# get optimal weights
w_calc = res.x
print(f'Calculated weights: {w_calc}')
Let's see how the calculated weights perform.
ens_ex1 = np.matmul(w_calc, preds)
ens_ex1_res=y_true-ens_ex1
print(f'Ensemble Ex1. mean: {ens_ex1_res.mean(): .4f}, var: {ens_ex1_res.var(): .4f}')
We can compare the results with the first 50:50 ensemble. With the calculated weights we further reduced the variance of the model (0.2219 -> 0.2157), but unfortunately the mean increased a bit (0.0357 -> 0.0380).
We see the trade-off between mean and variance and have to decide whether we prefer a more stable model or take some risk for better results.
3. Portfolio theory for ensembling
In finance, different assets are often combined in a portfolio. There are many criteria for asset selection and allocation; one of them is choosing by risk strategy. In 1952 the economist Harry Markowitz defined a portfolio selection strategy which laid the foundation for many portfolio strategies to come. There is a great summary on Wikipedia, and the original paper can also be found with a Google search.
So, what is it all about? Let's assume we live in an easy, plain-vanilla world. We want to build a portfolio that yields a high return at low risk. That's not easy. If we only buy stocks of our favorite fruit grower, a rainy summer will result in a low return. Wouldn't it be smart to also buy stocks of a raincoat producer, just in case? But if the summer turns out sunny, we would rather have invested all the money in fruits instead of raincoats. It's clearly a trade-off. Either we lower the risk of losing money in a rainy summer and invest in both (fruits and raincoats), or we take the risk and invest all the money in fruits to maybe gain more. And if we lower the risk, in which raincoat producer should we invest? The one with the bumpy stock price, or the one with the steady but slowly growing stock price?
Now we already see the first similarities between our ensemble example above and portfolio theory: risk can be measured through variance, and a good "return" of our ensemble corresponds to a low expected residual.
But there is even more to portfolio theory. It also takes dependencies between assets into account. If the summer is sunny, the fruit price goes up and the raincoat price goes down; they are negatively correlated.
Since we expect the average residuals of our fitted models to be close to 0 and we build a linear ensemble, we can expect the ensemble's average residual to be close to 0 as well. Therefore we focus on optimizing the portfolio variance, which boils down to Var_p = w' * Cov * w. The covariance matrix captures both the dependencies between the combined models (off-diagonal entries) and their individual variances (diagonal entries).
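We can verify this identity numerically. A small sketch with random stand-in residuals (note that np.cov defaults to ddof=1 while np.var defaults to ddof=0, so the degrees of freedom must match):

rng = np.random.default_rng(0)
res = rng.normal(size=(2, 100))  # hypothetical residuals of two models
w = np.array([0.5, 0.5])

var_portfolio = np.matmul(np.matmul(w, np.cov(res, ddof=0)), w)  # w' * Cov * w
var_direct = np.matmul(w, res).var()                             # variance of the weighted residuals
print(np.isclose(var_portfolio, var_direct))  # True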
What data can we actually use? In the financial example, returns are the increase or decrease of an asset price (p_t / p_t-1), so we are looking at returns over a certain period of time. In ML we can take our out-of-fold (oof) predictions and calculate the residuals against the train targets to build such a dataset.
Can we do this even though the financial example deals with a time series? Yes. This basic portfolio theory doesn't take time dependencies into account. But it is important to keep the same order across the different asset returns when calculating correlation/covariance: we always want to compare the residuals of Model 1 and Model 2 on the same data item.
The optimization function for the second ensemble technique is:
# Predictions of Model 1 and Model 2
preds = np.array([m1, m2])
# Residuals of Model 1 and Model 2
m1_res = y_true - m1
m2_res = y_true - m2
preds_res = np.array([m1_res, m2_res])
# handle residuals like asset returns
R = np.array(preds_res.mean(axis=1))
# factor by which R is considered during optimization; turned off for our example
q = 0  # -1
# covariance matrix of the model residuals
CM = np.cov(preds_res)
# optimization function: portfolio variance minus weighted expected return
fun = lambda w: np.matmul(np.matmul(w.T, CM), w) - q * np.matmul(R, w)
# constraint: weights must sum up to 1.0
cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1})
Run the optimization.
w_init = np.ones(preds.shape[0])/preds.shape[0]
# run optimization
res = scipy.optimize.minimize(fun, w_init, method='SLSQP', constraints=cons) #,bounds=bnds
# get optimal weights
w_calc = res.x
print(f'Calculated weights: {w_calc}')
The weights are the same as in the first technique. That really surprised me, and I ran a couple of examples with different models, but the weights were only slightly different between the two techniques.
4. Ensembling TPS Aug 2021
Now that we have two techniques to ensemble, let's try them on the TPS August 2021 data.
We do a 7-fold split and calculate the residuals on the out-of-fold predictions that are used for validation. We train 7 regression models with different architectures so we get some diversity.
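The import cells of the original notebook are not part of this excerpt; the snippets below assume roughly the following imports:

import numpy as np
import pandas as pd
import scipy.optimize
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression, BayesianRidge, PoissonRegressor
# on scikit-learn < 1.0, HistGradientBoostingRegressor first needs:
# from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingRegressor, ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor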
N_SPLITS = 7
SEED = 2021
PATH_INPUT = '/home/kaggle/TPS-AUG-2021/input/'
test = pd.read_csv(PATH_INPUT + 'test.csv')
train = pd.read_csv(PATH_INPUT + 'train.csv').sample(frac=1.0, random_state = SEED).reset_index(drop=True)
train['fold_crit'] = train.loss
train.loc[train.loss>=39, 'fold_crit']=39
target = 'loss'
fold_crit = 'fold_crit'
features = list(set(train.columns)-set(['id','kfold','loss','fold_crit']+[target]))
skf = StratifiedKFold(n_splits = N_SPLITS, random_state = None, shuffle = False)
train['kfold'] = -1
for f, (train_idx, valid_idx) in enumerate(skf.split(X=train, y=train[fold_crit].values)):
    train.loc[valid_idx, 'kfold'] = f
train.groupby('kfold')[target].count()
models = {
'LinReg': LinearRegression(n_jobs=-1),
'HGB': HistGradientBoostingRegressor(),
'XGB': XGBRegressor(tree_method = 'gpu_hist', reg_lambda= 6, reg_alpha= 10, n_jobs=-1),
'KNN': KNeighborsRegressor(100, n_jobs=-1),
'BayesRidge': BayesianRidge(),
'ExtraTrees': ExtraTreesRegressor(max_depth=2, n_jobs=-1),
'Poisson': Pipeline(steps=[('scale', StandardScaler()),
('pois', PoissonRegressor(max_iter=100))])
}
Fit models and save oof predictions.
for (m_name, m) in models.items():
    print(f'# Model: {m_name}\n')
    train[m_name + '_oof'] = 0
    test[m_name] = 0
    y_oof = np.zeros(train.shape[0])

    for f in range(N_SPLITS):
        train_df = train[train['kfold'] != f]
        valid_df = train[train['kfold'] == f]

        m.fit(train_df[features], train_df[target])
        oof_preds = m.predict(valid_df[features])
        y_oof[valid_df.index] = oof_preds
        print(f'Fold {f} rmse: {mean_squared_error(valid_df[target], oof_preds, squared=False):0.5f}')

        # average the test predictions over the folds
        test[m_name] += m.predict(test[features]) / N_SPLITS

    train[m_name + '_oof'] = y_oof
    print(f"\nTotal rmse: {mean_squared_error(train[target], train[m_name + '_oof'], squared=False):0.5f}\n")
oof_cols = [m_name + '_oof' for m_name in models.keys()]
print(f"# ALL Mean ensemble rmse: {mean_squared_error(train[target], train[oof_cols].mean(axis=1), squared = False):0.5f}\n")
Let's take a look at the correlation heatmap of the residuals.
oof_cols = [m_name + '_oof' for m_name in models.keys()]
oofs = train[oof_cols]

# residuals of the oof predictions
oof_diffs = oofs.copy()
for c in oof_cols:
    oof_diffs[c] = oofs[c] - train[target]

sns.heatmap(oof_diffs.corr())
XGB and KNN are the most diverse, so I export a 50:50 ensemble of them. I'll also export an equally weighted ensemble of all models, and HGB on its own because it is the best single model.
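As a sketch of the export step (the id/loss submission format follows the competition's sample submission; the file names are my own):

sub = test[['id']].copy()

# 50:50 blend of the two most diverse models
sub['loss'] = 0.5 * test['XGB'] + 0.5 * test['KNN']
sub.to_csv('submission_xgb_knn.csv', index=False)

# equally weighted ensemble of all models
sub['loss'] = test[list(models)].mean(axis=1)
sub.to_csv('submission_mean_all.csv', index=False)

# best single model
sub['loss'] = test['HGB']
sub.to_csv('submission_hgb.csv', index=False)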
Next we inspect the variance and the mean of the residuals. The means are close to 0, as expected.
oof_diffs.var(), oof_diffs.mean()
These are the histograms of the residuals:
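The figures themselves are not part of this excerpt; a minimal sketch to reproduce them:

oof_diffs.hist(bins=50, figsize=(12, 8))
plt.tight_layout()
plt.show()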
Finally, we apply the two techniques to calculate the ensembling weights:
R = oof_diffs.mean().values
CM = oof_diffs.cov().values
q=0
# Var technique
fun_ex1 = lambda w: (train[target]-np.matmul(oofs.values, w)).var()
# Cov technique
fun_ex2 = lambda w: np.matmul(np.matmul(w.T,CM),w) - q * np.matmul(R,w)
cons = ({'type': 'eq', 'fun': lambda x: x.sum()-1})
# one (lower, upper) pair per model
bnds = tuple((0, None) for _ in models)
w_init = np.ones(len(models)) / len(models)

# Var technique
res = scipy.optimize.minimize(fun_ex1, w_init, method='SLSQP', constraints=cons) #,bounds=bnds
w_calc_ex1 = res.x
print(f'Var weights: {w_calc_ex1}')

# Cov technique
res = scipy.optimize.minimize(fun_ex2, w_init, method='SLSQP', constraints=cons) #,bounds=bnds
w_calc_ex2 = res.x
print(f'Cov weights: {w_calc_ex2}')
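To turn the calculated weights into a submission, the column order of the test predictions has to match the order of the oof columns (a hedged sketch; the file name is my own):

model_cols = list(models)  # same model order as oof_cols

sub = test[['id']].copy()
sub['loss'] = np.matmul(test[model_cols].values, w_calc_ex2)
sub.to_csv('submission_cov_weights.csv', index=False)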
References
- Modern Portfolio Theory: https://en.wikipedia.org/wiki/Modern_portfolio_theory
- TPS August 2021 Competition: https://www.kaggle.com/c/tabular-playground-series-aug-2021/overview
Resources
- Original notebook: https://www.kaggle.com/joatom/model-allocation
- TPS data: https://www.kaggle.com/c/tabular-playground-series-aug-2021/data