diff --git a/notebooks/python/Lab_PD_Calibration.ipynb b/notebooks/python/Lab_PD_Calibration.ipynb new file mode 100644 index 0000000..f2c032d --- /dev/null +++ b/notebooks/python/Lab_PD_Calibration.ipynb @@ -0,0 +1,761 @@ +{ + "nbformat": 4, + "nbformat_minor": 0, + "metadata": { + "colab": { + "provenance": [], + "include_colab_link": true + }, + "kernelspec": { + "name": "python3", + "display_name": "Python 3" + } + }, + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "view-in-github", + "colab_type": "text" + }, + "source": [ + "\"Open" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dcxUDe3dWTOx" + }, + "source": [ + "# PD Calibration\n", + "\n", + "In this lab we will learn how to estimate the long-run PD after a model has been trained. The PD calibration requires the score, the monthly portfolio each case belongs to (usually the behavioural scorecard is used), the labels (Default / Non-Default), and a set of economic factors. For this work we will use an exchange rate and a commodity price.\n", + "\n", + "First, we load the data. It is in Excel, so we use the appropriate function from Pandas. There are two worksheets: the first one contains the data for each borrower in each portfolio, and the second one contains the macro factors for each month." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "Ti0EPEhUaUBL" + }, + "source": [ + "# Package load\n", + "import pandas as pd\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "import seaborn as sns\n", + "%matplotlib inline\n", + "\n", + "# Bigger and prettier plots\n", + "plt.rcParams['figure.figsize'] = (10, 5)\n", + "plt.style.use('fivethirtyeight')" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "U4aDtYQiacvR" + }, + "source": [ + "# Download Excel file\n", + "!gdown 'https://drive.google.com/uc?id=1UYmgsu5gI5U_VbraKXHxWTXyZSbM6q5S'" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "zsBV0xEuaodx" + }, + "source": [ + "# Load the data. Two datasets are necessary.\n", + "loans = pd.read_excel('PDCalExample.xlsx', # Filename\n", + " sheet_name=0, # Worksheet name or index\n", + " )\n", + "\n", + "econ_factors = pd.read_excel('PDCalExample.xlsx', # Filename\n", + " sheet_name=1, # Worksheet name or index\n", + " )" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "econ_factors" + ], + "metadata": { + "id": "YN02Gw5nz3NZ" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "loans" + ], + "metadata": { + "id": "Mq1LUS9Ozpq1" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "loans.Portfolio.value_counts()" + ], + "metadata": { + "id": "9_fAxkK80AAi" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "WguMyC__bqJ6" + }, + "source": [ + "loans.describe()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "12vzlarlbtCp" + }, + "source": [ + "econ_factors.describe()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3qz3eQS9Ci1s" + }, + "source": [ + "Let's normalize the economic factors using a z-transform."
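, + "\n", + "\n", + "As a reminder, the z-transform rescales each factor as $z = (x - \\bar{x}) / s_x$, where $\\bar{x}$ is the sample mean and $s_x$ the sample standard deviation, so every normalized factor ends up with mean 0 and unit variance."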
+ ] + }, + { + "cell_type": "code", + "metadata": { + "id": "aNjL6BHPCmjO" + }, + "source": [ + "from scipy.stats import zscore\n", + "econ_factors=econ_factors.apply(zscore)\n", + "econ_factors.describe()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "N1Tz_odabxtx" + }, + "source": [ + "## Defining Ratings\n", + "\n", + "We have all the data we need. Let's start then by obtaining PD segments. Basel suggests building 7-15 segments. For this, we can use the excellent package [pwlf](https://pypi.org/project/pwlf/). It will allow segmenting a curve using a given number of cuts.\n", + "\n", + "Which curve do we need to segment? The ROC curve!" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "3wgbSkL5bugp" + }, + "source": [ + "from sklearn.metrics import roc_curve, roc_auc_score\n", + "\n", + "# Calculate the ROC curve points\n", + "fpr, tpr, thresholds = roc_curve(loans['Default'],\n", + " loans['Probs'])\n", + "\n", + "# Save the AUC in a variable to display it. Round it first\n", + "auc = np.round(roc_auc_score(y_true = loans['Default'],\n", + " y_score = loans['Probs']),\n", + " decimals = 3)\n", + "\n", + "# Create and show the plot\n", + "plt.plot(fpr,tpr,label=\"Scorecard, auc=\"+str(auc))\n", + "plt.legend(loc=4)\n", + "plt.show()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "r_EIjpaidCP9" + }, + "source": [ + "# Install package\n", + "!pip install pwlf" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Ash0iqXQdMzB" + }, + "source": [ + "Now we can segment the curve. The process takes a while to run, and sadly it is sequential, so go make yourself a coffee / tea while this runs." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "VjCu3fRUcr3i" + }, + "source": [ + "import pwlf\n", + "\n", + "# Define the curve with the ROC curve\n", + "piecewise_AUC = pwlf.PiecewiseLinFit(fpr, tpr)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "516fa5czzdsv" + }, + "source": [ + "# Calculate the best curve. Long!\n", + "res = piecewise_AUC.fit(10)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "bxSDJS5Jde1f" + }, + "source": [ + "As this is a long process, it is a good idea to save the results. The object res can be pickled, or simply the cuts can be saved in a commented line." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "Qws6gZyZc0Lb" + }, + "source": [ + "res" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "xZG91rd0dolJ" + }, + "source": [ + "# Use previous result\n", + "res = [0. 
, 0.01383389, 0.03920524, 0.07731607, 0.11285577,\n", + " 0.20653665, 0.32339978, 0.41818781, 0.57283343, 0.7999917 ,\n", + " 1.\n", + " ]" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "e05QF34ywlIg" + }, + "source": [ + "ROC_curve = pd.DataFrame({'fpr': fpr, 'threshold': thresholds})\n", + "ROC_curve" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-Q2M5LWmdyBW" + }, + "source": [ + "To apply the cuts, you can use the method ```fit_with_breaks``` that is available on the ```piecewise_AUC``` object.\n" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "k3oCTTJUd1BD" + }, + "source": [ + "# Apply cuts!\n", + "cuts = piecewise_AUC.fit_with_breaks(res)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nMUtYqPIeS7X" + }, + "source": [ + "We can now apply this to our dataset and see how the piecewise curve fits the ROC curve." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "uTVL2JHQeQ18" + }, + "source": [ + "# predict for the determined points\n", + "xHat = np.linspace(min(fpr), max(fpr), num=10000)\n", + "yHat = piecewise_AUC.predict(xHat)\n", + "\n", + "# plot the results\n", + "plt.figure()\n", + "plt.plot(fpr, tpr, 'o')\n", + "plt.plot(xHat, yHat, '-')\n", + "plt.show()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JaXnfklSefXv" + }, + "source": [ + "We have a pretty good fit! But we are not done yet: we now need to check whether the cuts lead to monotonic PDs. This is the final constraint. For this, we first calculate the PD for the whole dataset (we will adjust this later for each portfolio)." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "BMm6-9Sux9Eq" + }, + "source": [ + "# Find probability associated with every cut\n", + "pbb_cuts = np.zeros_like(res)\n", + "i = 0\n", + "\n", + "# Use a distinct loop variable so we do not overwrite the fpr array from roc_curve\n", + "for fpr_cut in res:\n", + " temp = ROC_curve.loc[np.round(ROC_curve.fpr, 2) == np.round(fpr_cut, 2), 'threshold']\n", + " pbb_cuts[i] = np.mean(temp)\n", + " i += 1\n", + "\n", + "pbb_cuts = np.flip(pbb_cuts)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "K85KdeOzzHTN" + }, + "source": [ + "pbb_cuts = np.append(pbb_cuts, 1)\n", + "pbb_cuts = np.insert(pbb_cuts, 0, 0)\n", + "pbb_cuts" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "JUSBgjTced-E" + }, + "source": [ + "pd_cut = pd.cut(loans['Probs'], pbb_cuts)\n", + "pd_cut" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fVScSagae_-F" + }, + "source": [ + "And we study the output with a crosstab." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "1c4PZHO7e6OM" + }, + "source": [ + "# Create table with case totals.\n", + "PDs_Tab = pd.crosstab(pd_cut,\n", + " loans['Default'],\n", + " normalize = False)\n", + "\n", + "# Calculate default rate.\n", + "print(PDs_Tab)\n", + "pd_final = PDs_Tab[1] / (PDs_Tab[0] + PDs_Tab[1])\n", + "pd_final" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "T31FLRergAro" + }, + "source": [ + "The PD is perfectly monotonic! We should, however, combine the first three cuts, as the first two contain no defaulters, and also merge the last one, which seems to have too few cases. After doing this, our cuts are reasonable and we can now proceed to calculate the PDs for every portfolio. For this, we need to calculate the average number of defaults for each portfolio, over the total number of cases that month." + ] + },
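+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before hard-coding the adjusted cuts below, here is an optional sketch of that merge (the bin indices dropped are illustrative and should match the bins you decide to combine): we check that the default rate is indeed monotonic and preview the merged binning." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "# Optional sketch: confirm monotonicity and preview the merged bins.\n", + "# The dropped indices (the two inner cuts of the first three bins and the\n", + "# second-to-last cut) are illustrative.\n", + "print(pd_final.is_monotonic_increasing)\n", + "\n", + "merged_cuts = np.delete(pbb_cuts, [1, 2, len(pbb_cuts) - 2])\n", + "merged_tab = pd.crosstab(pd.cut(loans['Probs'], merged_cuts), loans['Default'])\n", + "merged_tab[1] / merged_tab.sum(axis=1)" + ], + "execution_count": null, + "outputs": [] + },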
+ { + "cell_type": "code", + "metadata": { + "id": "zrzXIrMLhV6I" + }, + "source": [ + "# Adjusted cuts\n", + "pbb_cuts = [0.00000000e+00, 2.28978031e-01, # Delete first cut\n", + " 3.43684010e-01, 4.29913660e-01, 5.55186700e-01, 6.94758101e-01,\n", + " 7.51302438e-01, 8.26511005e-01, 9.11724503e-01, 9.82918518e-01,\n", + " 1.00000000e+00]\n", + "\n", + "# Add the PD_Cut variable to our dataframe\n", + "loans['PD_Cut'] = pd.cut(loans['Probs'], pbb_cuts)\n", + "\n", + "# Create pivot table\n", + "PD_monthly = pd.pivot_table(loans,\n", + " values = 'Default',\n", + " index = 'Portfolio',\n", + " columns = 'PD_Cut',\n", + " aggfunc = np.mean\n", + " )\n", + "\n", + "PD_monthly" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yO-jn7pzkixX" + }, + "source": [ + "Now we have calculated the PDs for all ratings! We ended up with one less than the Basel recommendation, but this is only because we have very few loans in this toy example. On larger portfolios you will easily reach the 7-15 range.\n", + "\n", + "Let's plot how our ratings look." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "tmZCROhHjzCZ" + }, + "source": [ + "PD_monthly.plot(subplots=True,\n", + " layout=(2, 5),\n", + " sharex=False,\n", + " sharey=False,\n", + " colormap='viridis',\n", + " fontsize=8,\n", + " legend=False,\n", + " linewidth=0.2);\n", + "plt.tight_layout();" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qCKEJ53r04T3" + }, + "source": [ + "We see that some months have no defaulters. With the very limited data we have, that is to be expected; in larger portfolios this will be smoother. In general, all time series look stationary.\n", + "\n", + "And that's almost it! Now we are ready to calibrate these PDs across the multiple cuts." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WF0XEfsK1fK0" + }, + "source": [ + "## Estimating Long-Term PD\n", + "\n", + "To estimate the long-term PD, we need to estimate the average PD given the last PD and the macroeconomic factors. For this, we can use the subpackage [Time Series Analysis](https://www.statsmodels.org/stable/tsa.html#module-statsmodels.tsa) (```tsa```) of the ```statsmodels``` package. We aim to run a SARIMAX model, so we should study stationarity, seasonality, etc." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "PKaz7tlY5mWG" + }, + "source": [ + "The first step is to turn the Portfolio index into a time series index. This way Python knows it is dealing with dates. Our data is monthly, so we arbitrarily set the first observation to the 1st of January 1999 and count up from there."
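, + "\n", + "\n", + "(As a side note, and only as a sketch: ```pd.date_range('1999-01-01', periods=len(PD_monthly), freq='MS')``` would build the same monthly index in one line and also attach an explicit frequency, which avoids the frequency warning statsmodels prints further below.)"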
+ ] + }, + { + "cell_type": "code", + "metadata": { + "id": "rtCoqv-z6Q9C" + }, + "source": [ + "from datetime import date\n", + "from dateutil.relativedelta import relativedelta\n", + "\n", + "start_date = date(1999, 1, 1)\n", + "PD_monthly.index = [pd.to_datetime(start_date + relativedelta(months=+portfolio_month-1)) for portfolio_month in PD_monthly.index]\n", + "econ_factors.index = [pd.to_datetime(start_date + relativedelta(months=+portfolio_month-1)) for portfolio_month in econ_factors.index]\n", + "econ_factors.drop(columns='Portfolio', inplace = True)" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "PD_monthly.iloc[:, 4]" + ], + "metadata": { + "id": "6hSb6m9DbxEM" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "n8Rp1sl09nfE" + }, + "source": [ + "Now we can decompose our time series to see what is happening.\n", + "\n" + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "bNL3_ZPblz6z" + }, + "source": [ + "from statsmodels.tsa.seasonal import seasonal_decompose\n", + "decomposition = seasonal_decompose(PD_monthly.iloc[:, 4], model='additive')\n", + "fig = decomposition.plot()\n", + "plt.show()" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "C3jsmbEh9-3O" + }, + "source": [ + "The time series look fairly stationary, with yearly seasonality. We are ready to estimate a model!\n", + "\n", + "Within the ```tsa``` package, the [SARIMAX model](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html#statsmodels.tsa.statespace.sarimax.SARIMAX), present in the subpackage ```statespace```, is the most general model available, allowing for Seasonality, Auto Regression, Integration, Moving Averages, and eXogenous factors. We will train a simpler ARX model (autoregressive with exogenous regressors), but you are welcome to experiment with further, more complex regressions.\n", + "\n", + "Let's search for the best model for one rating, trying between 1 and 5 autoregressive lags (plus differencing and moving-average terms), using the macroeconomic factors as the exogenous variables." + ] + }, + { + "cell_type": "code", + "metadata": { + "id": "puNin-aY4Mxy" + }, + "source": [ + "# Define the search space.\n", + "p = range(1, 6)\n", + "d = range(0, 2)\n", + "q = range(0, 2)\n", + "\n", + "# Create an iterable list of (p, d, q) combinations.\n", + "from itertools import product\n", + "pdq = list(product(p, d, q))\n", + "\n", + "# Seasonal parameters. 
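We only vary the seasonal AR order (up to 3) and keep the seasonal differencing and MA orders at zero, with a 12-month period, i.e. 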
one year back.\n", + "ps = range(0, 4)\n", + "ds = range(0, 1)\n", + "qs = range(0, 1)\n", + "seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(product(ps, ds, qs))]\n", + "\n", + "# Train the models for a series and test multiple values.\n", + "y = PD_monthly.iloc[:, 4] # Choose the fifth rating\n", + "\n", + "from statsmodels.tsa.statespace.sarimax import SARIMAX\n", + "\n", + "aic_out = []\n", + "\n", + "for param in pdq:\n", + " for param_seasonal in seasonal_pdq:\n", + " mod = SARIMAX(y,\n", + " exog=np.asarray(econ_factors),\n", + " order=param,\n", + " seasonal_order=param_seasonal,\n", + " enforce_stationarity=False,\n", + " enforce_invertibility=False\n", + " )\n", + " results = mod.fit()\n", + " aic_out.append([param, param_seasonal, results.aic])\n", + " print('ARIMA{}x{} - AIC:{}'.format(param, param_seasonal, results.aic))\n", + "\n", + "# Nicer formatting\n", + "aic_out = pd.DataFrame(aic_out,\n", + " columns = ['(p,d,q)', '(P,D,Q,S)', 'AIC'])" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "D5PYdY1qDS_U" + }, + "source": [ + "The warning is due to the frequency of the time series, which is not set on the Pandas index. You can safely ignore it.\n", + "\n", + "Let's see which model comes out best. The AIC should be minimized." + ] + }, + { + "cell_type": "code", + "source": [ + "aic_out.sort_values(by='AIC', ascending=True)" + ], + "metadata": { + "id": "kNIV7YBtZPCJ" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "\n", + "We can see from these values that the preferred model is an autoregressive model of order 1, with a 12-month seasonal period.\n", + "\n", + "Let's fit a few candidate models around these values." + ], + "metadata": { + "id": "QFaGRYqWZ16t" + } + }, + { + "cell_type": "code", + "metadata": { + "id": "HoXxxcPICHwW" + }, + "source": [ + "# ARIMA(1, 1, 1)x(0, 0, 0, 12)\n", + "mod_BB = SARIMAX(y,\n", + " exog=np.asarray(econ_factors),\n", + " order=(1,1,1),\n", + " seasonal_order=(0,0,0,12),\n", + " enforce_stationarity=False,\n", + " enforce_invertibility=False)\n", + "results_BB = mod_BB.fit()\n", + "\n", + "print(results_BB.summary().tables[1])" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "metadata": { + "id": "vBCdFWNX5Yt3" + }, + "source": [ + "# ARIMA(2, 0, 1)x(0, 0, 0, 12)\n", + "mod_BB = SARIMAX(y,\n", + " exog=np.asarray(econ_factors),\n", + " order=(2,0,1),\n", + " seasonal_order=(0,0,0,12),\n", + " enforce_stationarity=False,\n", + " enforce_invertibility=False)\n", + "results_BB = mod_BB.fit()\n", + "\n", + "print(results_BB.summary().tables[1])" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "We pick the one that **minimizes** the AIC, which is ARIMA(1, 0, 1)x(0, 0, 0, 12)." + ], + "metadata": { + "id": "fzpT8b7kgnlo" + } + }, + { + "cell_type": "code", + "metadata": { + "id": "aJTvbOXw50Ox" + }, + "source": [ + "# ARIMA(1, 0, 1)x(0, 0, 0, 12)\n", + "mod_BB = SARIMAX(y,\n", + " exog=np.asarray(econ_factors),\n", + " order=(1,0,0),\n", + " seasonal_order=(0,0,0,12),\n", + " enforce_stationarity=False,\n", + " enforce_invertibility=False)\n", + "results_BB = mod_BB.fit()\n", + "\n", + "print(results_BB.summary().tables[1])" + ], + "execution_count": null, + "outputs": [] + },
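+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before interpreting the coefficient tables, a quick residual check is worthwhile; ```plot_diagnostics``` is a standard method on statsmodels results objects (the figure size below is arbitrary)." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "# Optional: residual diagnostics for the last fitted model (sketch).\n", + "results_BB.plot_diagnostics(figsize=(12, 8))\n", + "plt.show()" + ], + "execution_count": null, + "outputs": [] + },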
+ { + "cell_type": "markdown", + "metadata": { + "id": "yog1AfBWEV-R" + }, + "source": [ + "The results show that, for this dataset, there is not much exogenous information that is useful. This would mean we need to go look for better economic factors! Another interesting result is that the L1 and L2 parameters are not useful either, which means that it is only the third lag that carries the information.\n", + "\n", + "The model is not perfect, so we should be a bit careful with it and possibly study more complex structures. I leave this as an exercise!\n", + "\n", + "And that's it! Repeating this process for every time series leads to our calibrated PDs. The next step would be to plug the long-run estimates of the economic factors into the model to reach a long-term PD.\n", + "\n", + "For LGD, the process is the same. The only difference is that you need to use the **downturn** economic factors instead of the long-run economic factors." + ] + },
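+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a closing sketch of that last step (the 12-month horizon and the neutral scenario below are illustrative assumptions, not part of the original lab): because the factors were z-scored, a simple long-run scenario is to hold them at their mean of zero and average the model's forecast for this rating." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "# Sketch: long-run PD for this rating under a neutral (zero) factor scenario.\n", + "horizon = 12 # illustrative forecast horizon\n", + "long_run_exog = np.zeros((horizon, econ_factors.shape[1])) # z-scored factors at their long-run mean\n", + "\n", + "forecast = results_BB.get_forecast(steps=horizon, exog=long_run_exog)\n", + "long_run_pd = forecast.predicted_mean.mean()\n", + "print('Long-run PD estimate for this rating: {:.4f}'.format(long_run_pd))" + ], + "execution_count": null, + "outputs": [] + } + ] +} \ No newline at end of file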