Article

Abstract

Typically, models of credit card default are built on static data, often collected at the time of application. We consider alternative models that also include behavioural data about credit card holders and macroeconomic conditions across the credit card lifetime, using a discrete survival analysis framework. We find that dynamic models that include these behavioural and macroeconomic variables give statistically significant improvements in model fit, which translate into better forecasts of default at both account and portfolio level when applied to an out-of-sample data set. Additionally, by simulating extreme economic conditions, we show how these models can be used to stress test credit card portfolios.
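The framework described in this abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of a discrete-time survival (hazard) model fit as a logistic regression on an account-month panel; the synthetic data, column names, and coefficient values are purely hypothetical.

```python
# A minimal sketch (not the authors' code) of a discrete-time survival model of
# credit card default: each account contributes one row per month at risk, and the
# hazard is fit as a logistic regression on duration terms, static application
# variables, behavioural variables, and macroeconomic conditions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical account-month panel: one row per open account per month.
n_accounts, horizon = 500, 24
rows = []
for i in range(n_accounts):
    income = rng.normal(30, 10)                     # static, from application
    for t in range(1, horizon + 1):
        util = rng.uniform(0, 1)                    # behavioural: utilisation (lagged in practice)
        unemp = 5 + 0.1 * t + rng.normal(0, 0.2)    # macroeconomic: unemployment rate
        p = 1 / (1 + np.exp(-(-6 + 2.5 * util + 0.15 * unemp - 0.02 * income)))
        default = rng.binomial(1, p)
        rows.append((i, t, income, util, unemp, default))
        if default:
            break                                   # account leaves the risk set after default
panel = pd.DataFrame(rows, columns=["account", "t", "income", "util", "unemp", "default"])

# Duration (baseline hazard) terms plus covariates; a logit link gives the discrete hazard.
panel["log_t"] = np.log(panel["t"])
X = sm.add_constant(panel[["t", "log_t", "income", "util", "unemp"]])
hazard_model = sm.Logit(panel["default"], X).fit(disp=False)
print(hazard_model.params)

# Portfolio-level forecast: predicted hazards can be aggregated, or stressed by
# replacing 'unemp' with an adverse macroeconomic scenario.
panel["h_hat"] = hazard_model.predict(X)
print("mean predicted monthly hazard:", panel["h_hat"].mean())
```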

... To capture the impact of economic factors on loan survival, we consider some macroeconomic variables in line with the literature on credit risk modelling (Bellotti and Crook, 2013;Calabrese et al., 2024). We incorporate the state-level unemployment rate, sourced with monthly frequency from the Bureau of Labor Statistics, as an explanatory variable that shows a sharp increase after Hurricane Katrina and the COVID-19 pandemic outbreak. ...
... where Y_it represents the mortgage default based on the 90+-day delinquency definition in equation (1). To predict not only if a mortgage will default but also when, we use a survival approach, which shows better predictive performance in the credit scoring literature (Bellotti and Crook, 2013;Calabrese et al., 2024;Medina-Olivares et al., 2023a). The latent variable Y*_it is defined as ...
... In line with other credit-scoring studies (Bellotti and Crook, 2013;Tian et al., 2016;Calabrese et al., 2024), our findings show that a higher unemployment rate corresponds to an increased probability of default. A higher unemployment rate serves as an adverse trigger for borrowers, negatively impacting household liquidity and, consequently, the ability to meet mortgage obligations. ...
Article
Full-text available
Extreme natural disasters, such as tropical cyclones, have a low probability of materialising, but a high social and economic impact, including spillover to financial institutions. We propose a framework for performing a climate-stress testing exercise for the default probability of mortgage loans. We estimated a dynamic credit scoring model based on survival analysis with a relative damage index built using the wind speed of tropical cyclones. We considered scenarios involving tropical cyclone wind speeds with different return periods. We analyse a portfolio of approximately 190,000 mortgage loans granted in Louisiana, one of the US states most affected by tropical cyclones. Our findings suggest that coastline areas are most exposed to severe damage from tropical cyclones. If the geographical area is exposed to an event with a very large return period of 1-in-1,000 years, the probability of default increases by approximately nine percentage points compared to a baseline scenario in the absence of tropical cyclones. However, this finding was mitigated by the insurance coverage. This percentage increases to almost 20 percent in the absence of insurance coverage.
... Calabrese et al. (2024) show a hazard function with a non-linear decreasing pattern as loan age d increases. To represent this behaviour, we define ϕ based on Bellotti and Crook's (2013) approximation ϕ(d) = (d, d², log d, (log d)²), where the log terms capture the skewed shape of the hazard structure. β₀ denotes the associated coefficient vector. ...
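For readers unfamiliar with the notation in these snippets, the discrete-time hazard they refer to can be written in generic, hedged form (illustrative notation, not a quotation from either paper) as:

```latex
% A generic statement of the discrete-time hazard: the baseline depends on loan age d
% through \varphi(d), x_i collects static application variables, and z_{it} the
% time-varying (behavioural and macroeconomic) covariates.
\begin{align}
  h_{it} &= \Pr\left(Y_{it} = 1 \mid Y_{i,t-1} = 0\right)
          = \frac{1}{1 + \exp\left[-\left(\beta_0^{\top}\varphi(d_{it})
            + \beta_1^{\top} x_i + \beta_2^{\top} z_{it}\right)\right]}, \\
  \varphi(d) &= \left(d,\ d^{2},\ \log d,\ (\log d)^{2}\right)^{\top}.
\end{align}
```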
... To capture the impact of economic factors on loan survival, we consider some macroeconomic variables, in line with the literature on credit risk modelling (Bellotti and Crook, 2013;Calabrese et al., 2024). We incorporate the state-level unemployment rate, sourced with monthly frequency from the Bureau of Labor Statistics, as an explanatory variable that shows a sharp increase after Hurricane Katrina and the outbreak of the COVID-19 pandemic. ...
... (1). To be able to predict not only if a mortgage will default but also when, we use a survival approach, which shows better predictive performance in the credit scoring literature (Bellotti and Crook, 2013;Calabrese et al., 2024;Medina-Olivares et al., 2023a). The latent variable Y*_it is defined as ...
... In line with other credit scoring studies (Bellotti and Crook, 2013;Tian et al., 2016;Calabrese et al., 2024), our findings show that a higher unemployment rate corresponds to an increased probability of default. A higher unemployment rate serves as an adverse trigger event for borrowers, negatively impacting household liquidity and, consequently, the ability to meet mortgage obligations. ...
Preprint
Extreme natural disasters like tropical cyclones have a low probability of materialising but a high social and economic impact, including spillover to financial institutions. We propose a framework to perform a climate stress testing exercise of the default probability for mortgage loans. We estimate a dynamic credit scoring model based on a survival analysis with a relative damage index built using the wind speed of tropical cyclones. We consider scenarios of tropical cyclone wind speed with different return periods. We analyse a portfolio of around 190,000 mortgage loans granted in Louisiana, one of the most affected US states by tropical cyclones. Our findings suggest that coastline areas are the most exposed to severe damage from tropical cyclones. If the geographical area is exposed to an event with a very large return period of 1-in-1,000 years, the probability of default increases by about nine percentage points compared to a baseline scenario in the absence of tropical cyclones. This finding is, however, mitigated by insurance coverage. This percentage increases to almost twenty percent if there is no insurance coverage.
... However, discrete-time survival models (DTSM) have received great attention in recent years. For credit risk, where data are collected at discrete time points, typically monthly or quarterly repayment periods, a discrete time approach matches the application problem better than continuous time modeling; furthermore, for prediction, using discrete time is computationally more efficient (Bellotti and Crook 2013). Gourieroux et al. (2006) pointed out that the continuous time affine model often has a poor model fit due to a lack of flexibility. ...
... The adapted model not only produces a sequence of each firm's hazard rates at discrete time points but also provides an improvement in default prediction accuracy. Bellotti and Crook (2013) used a DTSM framework to model default on a dataset of UK credit cards. They used credit card behavioral data and macroeconomic variables to improve model fit and better predict the time to default. ...
... Even though time-to-default events can be viewed as occurring in continuous time, credit portfolios are typically represented as panel data, which record account usage and repayment in discrete time (typically monthly or quarterly records). Therefore, it is more natural to treat time as discrete for a credit risk model (Bellotti and Crook 2013). If we use discrete time, the data are presented as panel data indexed on both account i and discrete time t. ...
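As an illustration of the panel structure described above, the following sketch (hypothetical column names, toy data) expands loan-level records into the account-month "person-period" form on which a discrete-time survival model is fit:

```python
# Illustrative only: expanding loan-level survival records into the account-month
# panel that a discrete-time survival model uses. Column names are hypothetical.
import pandas as pd

loans = pd.DataFrame({
    "account": [1, 2, 3],
    "months_observed": [5, 3, 4],   # number of monthly records for the account
    "defaulted": [0, 1, 0],         # 1 if the last observed month ends in default
})

rows = []
for _, loan in loans.iterrows():
    for t in range(1, int(loan["months_observed"]) + 1):
        is_last = t == loan["months_observed"]
        rows.append({
            "account": loan["account"],
            "t": t,                                           # discrete loan age in months
            "default": int(is_last and loan["defaulted"] == 1),
        })
panel = pd.DataFrame(rows)
print(panel)   # one row per account per month at risk; the binary target feeds a logit hazard
```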
Article
Full-text available
Survival models have become popular for credit risk estimation. Most current credit risk survival models use an underlying linear model. This is beneficial in terms of interpretability but is restrictive for real-life applications since it cannot discover hidden nonlinearities and interactions within the data. This study uses discrete-time survival models with embedded neural networks as estimators of time to default. This provides flexibility to express nonlinearities and interactions between variables and hence allows for models with better overall model fit. Additionally, the neural networks are used to estimate age–period–cohort (APC) models so that default risk can be decomposed into time components for loan age (maturity), origination (vintage), and environment (e.g., economic, operational, and social effects). These can be built as general models or as local APC models for specific customer segments. The local APC models reveal special conditions for different customer groups. The corresponding APC identification problem is solved by a combination of regularization and fitting the decomposed environment time risk component to macroeconomic data since the environmental risk is expected to have a strong relationship with macroeconomic conditions. Our approach is shown to be effective when tested on a large publicly available US mortgage dataset. This novel framework can be adapted by practitioners in the financial industry to improve modeling, estimation, and assessment of credit risk.
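In generic notation (a hedged sketch, not the authors' exact specification), the age-period-cohort decomposition described in this abstract can be written as follows; each component may be a neural-network output, and the identity a = p − c creates the identification problem resolved via regularization and by tying the period (environment) component to macroeconomic data:

```latex
% Loan age a (maturity), calendar period p (environment), cohort c (vintage/origination).
\begin{equation}
  \operatorname{logit} h(a, p, c) \;=\; F_{\mathrm{age}}(a) \;+\; F_{\mathrm{per}}(p)
  \;+\; F_{\mathrm{coh}}(c), \qquad a = p - c .
\end{equation}
```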
... However, such data can be forged easily. To alleviate this problem, different kinds of raw data such as telecommunication data [4], digital footprints [1], and transaction records [2,6,14,25] were used to increase the prediction accuracy. Several credit scoring methods utilized aggregated transaction data over a predefined time window [2,14], while few studies used non-aggregated data [6,25]. ...
... To alleviate this problem, different kinds of raw data such as telecommunication data [4], digital footprints [1], and transaction records [2,6,14,25] were used to increase the prediction accuracy. Several credit scoring methods utilized aggregated transaction data over a predefined time window [2,14], while few studies used non-aggregated data [6,25]. Since aggregating the data can often lead to information loss, using non-aggregated data generally results in better performance. ...
... By using (1), all x_j^u contained in X_i^u are scaled so that they are in the range [0, 1]. Next, for each record, we concatenate all x_j^c and vec(x_j^u) to generate the representation vector t of the record based on (2). ...
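A minimal illustration of the preprocessing this snippet describes, with hypothetical feature names and values: numeric fields are min-max scaled to [0, 1] and concatenated with encoded categorical fields into a record representation vector.

```python
# Purely illustrative; not the paper's code or variable names.
import numpy as np

numeric = np.array([1200.0, 35.0, 0.8])          # e.g. transaction amount, age, utilisation
num_min = np.array([0.0, 18.0, 0.0])
num_max = np.array([5000.0, 80.0, 1.0])
scaled = (numeric - num_min) / (num_max - num_min)   # each entry now lies in [0, 1]

categorical = np.array([0.0, 1.0, 0.0, 0.0])     # e.g. one-hot encoded merchant category
record_vector = np.concatenate([categorical, scaled])
print(record_vector)
```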
Article
Full-text available
Recognizing potential defaulters is a crucial problem for financial institutions. Therefore, many credit scoring methods have been proposed in the past to address this issue. However, these methods rarely consider the interaction among customers such as bank transfer and remittance. With rapid growth in the number of customers adopting online banking services, such interaction information plays a significant role in assessing their credit score. In this paper, we propose a novel scalable credit scoring approach called CDGAT (Graph attention network for credit card defaulters) for predicting potential credit card defaulters. In CDGAT, a customer’s credit score is calculated based on transaction embedding and neighborhood embedding. To obtain the neighborhood embedding, CDGAT first utilizes the Amount-bias Sampling (AbS) strategy to extract a subgraph for each customer. Next, CDGAT directly aggregates neighbors’ features according to their influence weights. The experimental results on the dataset from Industrial and Commercial Bank of China (Macau) Limited (ICBC (Macau)) show that CDGAT significantly outperforms the baseline methods. Furthermore, experimental results reveal that the proposed method is also superior to several state-of-the-art Graph Convolutional Neural Network models in terms of scalability and performance.
... According to Agarwal et al. (2011), the factors that significantly influence credit card quality are marital status, homeownership, and urbanization. Furthermore, Bellotti and Crook (2013) showed that these three factors had a significant effect on the credit card quality of cardholders through variables of occupation, age, income, the number of credit cards, the ratio of payment to bills, transaction frequency, and credit limit. Kiarie (2015) showed that homeownership and age significantly affected credit card quality. ...
... Age has a negative coefficient, meaning that young cardholders tend to have a higher probability of NPL. This condition is in line with the results from Kim (2000), Stavins (2000), Muthomi (2005), Nyamongo (2011), Bellotti and Crook (2013), Wang et al. (2014), Kiarie et al. (2015), Huang (2018), Kim et al. (2018), and Li et al. (2019). Based on Figure 3, the percentage of the number of NPL cardholders compared to the total cardholders continues to increase with the younger age group. ...
... This shows that self-employed individuals have a higher NPL probability than cardholders with non-self-employed status. This condition is in line with results from Muthomi (2005), Muchiru (2008), Bellotti and Crook (2013), Leow & Crook (2014), and Warnakulasuriya (2016). Living in Java has a positive coefficient value. ...
Article
Full-text available
The purpose of this paper is to analyze the demographic and behavioral factors that significantly affect the credit card Non-Performing Loan (NPL). This study is carried out to provide managerial recommendations for controlling credit card NPL. This study uses secondary data from Indonesia’s most significant private bank with 100,000 samples of cardholder data. Demographic factors and cardholder behavior that significantly influence credit card NPL can be used to improve the credit scoring system for new cardholders and as indicators for a behavior scoring system for existing cardholders. This research uses a probability stratified random sampling technique. Logistic regression was used to identify the demographic factors and cardholder behaviors that significantly affected credit card NPL. According to the logistic regression model, cardholder behavior was a stronger determinant of NPL than demographic characteristics. The number of credit cards showed the highest credit card NPL probability. How to Cite: Achsan, Wahid, Achsani, N. A, & Bandono, Bayu. (2022). The Demographic and Behavior Determinant of Credit Card Default in Indonesia. Signifikan: Jurnal Ilmu Ekonomi, 11(1), 43-56. https://doi.org/10.15408/sjie.v11i1.20215.
... Leow & Crook (2014) derived features like relationship duration with a bank, transitions between the states of delinquency, average payment amount, and average repayment amount to increase the model performance. Further, few studies have considered macro-economic variables that could potentially impact default prediction, such as bank interest rate, Consumer Price Index (CPI), Gross Domestic Product (GDP), and unemployment rate (Bellotti & Crook, 2013). ...
... Further, these datasets had several features that could be further classified into static and dynamic features. These factors contributed to the superior performance of time-series models, with accuracy close to 95% along with high precision and recall rates (Bellotti & Crook, 2013;Ho Ha & Krishnan, 2012;Leow & Crook, 2014). Machine learning models are typically preferred over time-series models when datasets are not very large and do not span a long period. ...
... A discrete-time survival model that included application variables, time-varying behavioral variables, and macro-economic variables was tested by Bellotti and Crook (2013), who presented a hazard probability graph showing payment defaults peaking at eight months from account opening and then gradually declining over time. Amount paid back or payment status was negatively correlated with defaults, as expected, since a greater ability to pay reduces the probability of default. ...
Chapter
The credit card has been one of the most successful and prevalent financial services, widely used across the globe. However, with the upsurge in credit card holders, banks face a challenge from an equally increasing number of payment default cases causing substantial financial damage. This underscores the importance of sound and effective credit risk management in the banking and financial services industry. Machine learning models are being employed by the industry at large scale to manage this credit risk effectively. This chapter presents the application of various machine learning methods, such as time series models and deep learning models, to predicting credit card payment defaults, along with identification of the significant features and the most effective evaluation criteria. This chapter also discusses the challenges and future considerations in predicting credit card payment defaults. The importance of factoring in a cost function associated with misclassification by the models is also discussed.
... Theoretical framework: Some studies highlight the importance of incorporating macroeconomic conditions for the estimation of borrowers' credit risk (Bellotti & Crook, 2013). However, IADB (2020) announced that after the start of the Covid-19 pandemic in the region, microfinance institutions had restricted liquidity, which affected the availability of credit. ...
... This article examines the factors that affect the repayment default of microfinance in Tunisia. Theoretical framework: Some studies highlight the importance of incorporating macroeconomic conditions for the estimation of borrowers' credit risk (Bellotti & Crook, 2013). However, IADB (2020) announced that after the start of the Covid-19 pandemic in the region, microfinance institutions had restricted liquidity, which affected the availability of credit. ...
Article
Full-text available
Purpose: Understanding the repayment behavior of the borrower is very important to the lending decisions of financial institutions and thus helps to promote the development of microfinance. This paper examines the factors that affect the repayment default of microfinance in Tunisia. Theoretical framework: Some studies highlight the importance of incorporating macroeconomic conditions for the estimation of borrowers’ credit risk (Bellotti & Crook, 2013). However, IADB (2020) announced that after the start of the Covid-19 pandemic in the region, microfinance institutions had restricted liquidity, which affected the availability of credit. Therefore, for the purpose of the present article, the literature on default in MFIs analyzes the credit risk and factors that influence granted loans. Design/Methodology/Approach: We carried out a survey, with a non-stratified sample of 320 microcredit beneficiaries, during 2021 in Enda-inter-Arab agencies located in the region of Sousse, Tunisia. We introduce a binary logistic regression model to predict the values taken by a discrete variable from a series of continuous or binary explanatory variables. Findings: We show that borrowers’ socioeconomic characteristics, total loan, repayment period and past participation in microcredit loans have significant impacts, as special features, on their default rates. Research, Practical & Social implications: This paper has been designed to provide valuable contributions to improving repayment performance and can have significant policy implications, derived from the strong relationship between the level of poverty and the success of loan repayments. Indeed, the level of borrowers’ poverty must be given considerable weight before the loan is disbursed. Originality/Value: The results indicate that the qualification and age of the workforce should be considered a basic requirement before the granting of a loan. In addition, the microfinance institution's credit policy should consider service projects a priority.
... Interestingly, modelling consumer credit risk has received increased attention due to the challenges it presents, with sophisticated mathematical and statistical models being used to assess the scoring of consumers and predict default events (see Hand, 2001;Thomas et al., 2001;Crook & Bellotti, 2010;Bellotti & Crook, 2013, for some interesting studies). From a modelling viewpoint, behaviour has been proxied through the use of factors such as, among others, the credit bureau score, outstanding account balance and repayments (Gross & Souleles, 2002a), payment amounts, annual percentage rate (APR), credit limit and number of transactions (Bellotti & Crook, 2013). ...
... The choice of independent features is relatively large according to the literature. Following Clark et al. (2021), we control for the natural logarithm of the outstanding balance and its squared term (in addition to the level, and as in Bellotti & Crook, 2013), the utilisation rate (%), and a dummy variable capturing whether the account holder has been unemployed over the last 12-month period. Similar to Bellotti and Crook (2013), we also include the time (in months) the individual is with the credit card provider, as well as dummies on whether the individual is part-time employed, a student or retired, and whether one has been in arrears. ...
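The controls listed in this snippet can be illustrated with a short feature-engineering sketch (hypothetical column names and toy values, not the authors' code):

```python
# Sketch of the behavioural controls mentioned above: log outstanding balance and its
# square, utilisation rate, months on book, and employment/arrears dummies.
import numpy as np
import pandas as pd

accounts = pd.DataFrame({
    "balance": [1500.0, 300.0, 4200.0],
    "credit_limit": [3000.0, 1000.0, 5000.0],
    "months_with_provider": [14, 3, 48],
    "employment_status": ["full_time", "student", "retired"],
    "ever_in_arrears": [0, 1, 0],
})

accounts["log_balance"] = np.log1p(accounts["balance"])
accounts["log_balance_sq"] = accounts["log_balance"] ** 2
accounts["utilisation_pct"] = 100 * accounts["balance"] / accounts["credit_limit"]
dummies = pd.get_dummies(accounts["employment_status"], prefix="emp")
features = pd.concat(
    [accounts[["log_balance", "log_balance_sq", "utilisation_pct",
               "months_with_provider", "ever_in_arrears"]], dummies],
    axis=1,
)
print(features)
```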
Article
Full-text available
Over the past years, studies shed light on how social norms and perceptions potentially affect loan repayments, with overtones for strategic default. Motivated by this strand of the literature, we incorporate collective social traits in predictive frameworks on credit card delinquencies. We propose the use of a two-stage framework. This allows us to segment a market into homogeneous sub-populations at the regional level in terms of social traits, which may proxy for perceptions and potentially unravelled behaviours. On these formed sub-populations, delinquency prediction models are fitted at a second stage. We apply this framework to a big dataset of 3.3 million credit card holders spread in 12 UK NUTS1 regions during the period 2015–2019. We find that segmentation based on social traits yields efficiency gains in terms of both computational and predictive performance compared to prediction in the overall population. This finding holds and is sustained in the long run for different sub-samples, lag counts, class imbalance correction or alternative clustering solutions based on individual and socio-economic attributes.
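A hedged sketch of the two-stage idea in this abstract, using synthetic data and stand-in estimators (k-means for segmentation, logistic regression for delinquency prediction), neither of which is claimed to be the authors' exact choice:

```python
# Stage 1: cluster on social-trait features; Stage 2: fit one delinquency model per segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 3000
region_traits = rng.normal(size=(n, 4))          # stage 1 inputs: regional social traits
borrower_features = rng.normal(size=(n, 6))      # stage 2 inputs: account-level features
delinquent = rng.binomial(1, 0.1, size=n)

# Stage 1: segment the population into homogeneous sub-populations.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(region_traits)

# Stage 2: fit a separate delinquency model within each segment.
models = {}
for s in np.unique(segments):
    mask = segments == s
    models[s] = LogisticRegression(max_iter=1000).fit(
        borrower_features[mask], delinquent[mask]
    )
    print(f"segment {s}: {mask.sum()} accounts, mean predicted PD "
          f"{models[s].predict_proba(borrower_features[mask])[:, 1].mean():.3f}")
```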
... Stress testing is becoming very important in the risk evaluation of banks and represents a key technique for risk management and capital decisions for financial institutions, as recognised by the Financial Services Authority (Financial Services Authority, 2008). Most of the approaches available in the literature for stress testing use dynamic models, such as survival models (Bellotti and Crook, 2013;Wang et al., 2020). ...
... Perturbing the data can help financial institutions in performing a stress scenario analysis, instead of stress testing (Bellotti and Crook, 2013), when data is collected over a short period of time and, therefore, macroeconomic variables cannot be used. The percentage of defaulted loans in the Nationwide dataset is 6.8%. ...
Article
Full-text available
To boost the application of machine learning (ML) techniques for credit scoring models, the blackbox problem should be addressed. The primary aim of this paper is to propose a measure based on counterfactuals to evaluate the interpretability of an ML credit scoring technique. Counterfactuals assist with understanding the model with regard to the classification decision boundaries and evaluate model robustness. The second contribution is the development of a data perturbation technique to generate a stress scenario. We apply these two proposals to a dataset on UK unsecured personal loans to compare logistic regression and stochastic gradient boosting (SGB). We show that training a blackbox model (SGB) conditioned on our data perturbation technique can provide insight into model performance under stressed scenarios. The empirical results show that our interpretability measure is able to capture the classification decision boundary, unlike AUC and the classification accuracy widely used in the banking sector.
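As a purely illustrative stand-in for the paper's perturbation technique, the following sketch applies an adverse shift to two synthetic features and compares a boosted model's average predicted default rate before and after the stress:

```python
# Illustrative only -- not the paper's method: a simple stress scenario obtained by
# adversely shifting features and comparing predicted default rates.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 5000
income = rng.normal(30, 8, n)
utilisation = rng.uniform(0, 1, n)
X = np.column_stack([income, utilisation])
p = 1 / (1 + np.exp(-(-3 - 0.05 * income + 3 * utilisation)))
y = rng.binomial(1, p)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

X_stressed = X.copy()
X_stressed[:, 0] *= 0.9                                     # incomes fall by 10%
X_stressed[:, 1] = np.clip(X_stressed[:, 1] + 0.2, 0, 1)    # utilisation rises

print("baseline mean PD:", model.predict_proba(X)[:, 1].mean())
print("stressed mean PD:", model.predict_proba(X_stressed)[:, 1].mean())
```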
... The findings from the study indicated that the Cox proportional hazards survival model performed relatively well in predicting defaults compared to the conventional static logistic regression. The study by Bellotti and Crook [18] improved the initial model's performance and supported the integration of discrete survival analysis modelling. The resultant model comprised behavioural variables (BVs), application variables, and macroeconomic variables (MVs), with MVs and BVs acting as the time-varying covariates (TVCs). ...
... They incorporated the card holder's marital status, age, birthplace, family situation, and the distance between the birthplace and current address to define cardholders' social capital. According to Bellotti and Crook [18], a customer's macro-environment variables, behaviour, and demographics are three critical groups of variables that may enhance the performance of the Cox model in predicting credit card defaults. On the other hand, Wang et al. [72] identified characteristics of the credit card, customers' attitude, demographics, and personality as the main determinants of credit card debt. ...
Article
Full-text available
The main aim of this paper is to help bank management in scoring credit card clients using machine learning by modelling and predicting the consumer behaviour concerning three aspects: the probability of single and consecutive missed payments for credit card customers, the purchasing behaviour of customers, and grouping customers based on a mathematical expectation of loss. Two models are developed: the first provides the probability of a missed payment during the next month for each customer, which is described as Missed payment prediction Long Short Term Memory model (MP-LSTM), whilst the second estimates the total monthly amount of purchases, which is defined as Purchase Estimation Prediction Long Short Term Memory model (PE-LSTM). Based on both models, a customer behavioural grouping is provided, which can be helpful for the bank’s decision-making. Both models are trained on real credit card transactional datasets. Customer behavioural scores are analysed using classical performance evaluation measures. Calibration analysis of MP-LSTM scores showed that they could be considered as probabilities of missed payments. Obtained purchase estimations were analysed using mean square error and absolute error. The MP-LSTM model was compared to four traditional well-known machine learning algorithms. Experimental results show that, compared with conventional methods based on feature extraction, the consumer credit scoring method based on the MP-LSTM neural network has significantly improved consumer credit scoring.
... Thus, the distinction between exogenous and endogenous TVCs is commonly overlooked. When including endogenous TVCs in a survival model for credit scoring, the standard method is to make predictions by lagging their values, thereby relating past information to future survival status (Bellotti & Crook, 2013, 2014; Djeundje & Crook, 2018; Leow & Crook, 2016). However, this method has some limitations. ...
Article
Full-text available
In this work, we introduce JointLIME, a novel interpretation method for explaining black‐box survival (BBS) models with endogenous time‐varying covariates (TVCs). Existing interpretation methods, like SurvLIME, are limited to BBS models only with time‐invariant covariates. To fill this gap, JointLIME leverages the Local Interpretable Model‐agnostic Explanations (LIME) framework to apply the joint model to approximate the survival functions predicted by the BBS model in a local area around a new individual. To achieve this, JointLIME minimizes the distances between survival functions predicted by the black‐box survival model and those derived from the joint model. The outputs of this minimization problem are the coefficient values of each covariate in the joint model, serving as explanations to quantify their impact on survival predictions. JointLIME uniquely incorporates endogenous TVCs using a spline‐based model coupled with the Monte Carlo method for precise estimations within any specified prediction period. These estimations are then integrated to formulate the joint model in the optimization problem. We illustrate the explanation results of JointLIME using a US mortgage data set and compare them with those of SurvLIME.
... Research by Aktan et al. [2009] revealed that neural networks performed better than certain conventional scoring models in terms of type II errors and prediction accuracy. Models of credit card borrower default were presented by Bellotti et al. [2013] and used both macroeconomic factors and social data on credit card customers. In general, it was discovered that models with macroeconomic and social data were statistically more significant than other models. ...
Article
With the quick growth of the credit card system, there is a rising number of misconduct rates on credit card loans, which creates a financial risk for commercial banks. Thus, successful resolutions of the risks are significant for the sound advancement of the industry in the long term. Numerous financial banks and organizations become more and more attentive to the issue of credit card default because it brings about a high probability of financial risks. Credit risk plays a significant part in the financial business. One of the main functions of a bank is to issue loans, credit cards, investment mortgages, and other credit. One of the most popular financial services offered by banks in recent years has been the credit card. With its constant rise in risk factors, the banking industry is perhaps the most fragile and volatile in the world. Credit risk remains a crucial element for financial institutions that have experienced losses amounting to hundreds of millions of dollars as a result of their incapacity to retrieve the funds disbursed to clients. In the banking industry, it is now vital to forecast whether a borrower will be able to repay the loan. In this paper, we applied different machine learning classifiers, including Random Forest, K Nearest Neighbor, Logistic Regression, Decision Tree, Decision Tree with AdaBoosting, and Random Forest with AdaBoosting, to build a credit default prediction model. The results show that the AdaBoosting model achieved better accuracy than the other machine learning algorithms. Our proposed technique can support financial organizations in controlling, identifying, and monitoring credit risk, and it can identify credit card clients who pay the loan in the next month.
... However, it is challenging to acquire precise loan default risk evaluation results due to the various risks and the complexities of dependencies between numerous influencing factors. Hence, the machine learning methods, e.g., tree-based classifiers [5,20], support vector machines (SVM) [7,21–24], neural networks (NN) [21,25,26], ensemble-based methods [6,7], hybrid approaches [24,25,27,28], and others [29–31], have been proposed for loan default risk prediction. ...
Article
Full-text available
As a significant application of machine learning in financial scenarios, loan default risk prediction aims to evaluate the client’s default probability. However, most existing deep learning solutions treat each application as an independent individual, neglecting the explicit connections among different application records. Besides, these attempts suffer from the problem of missing data and imbalanced distribution (i.e., the default records are small samples against all the applications). We believe similar records could provide some auxiliary signals, which are of critical importance to alleviate the data missing issue and facilitate data augmentation. To this end, we propose multi-view loan application graphs, dubbed MLAGs. By evaluating the similarity between the records, a loan application graph can be constructed. Furthermore, we arrange different similarity thresholds to organize various graph structures for multi-graph constructions; thus, a variety of representations can be generated via information propagation and aggregation for small sample augmentation. Consequently, the imbalanced data distribution and missing values issues can be alleviated effectively. We conduct experiments on three public datasets from real-world home credit and P2P lending platforms, which show that MGCN outperforms both conventional and deep learning models. Ablation studies also illustrated the validity of each module design.
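A minimal sketch of the multi-view graph construction described in this abstract, with synthetic records and assumed similarity thresholds; the actual MLAG construction may differ:

```python
# Pairwise similarities between application records are thresholded at several levels;
# each threshold yields one adjacency matrix (one "view"). Purely illustrative.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(3)
records = rng.normal(size=(200, 10))          # hypothetical application features

sim = cosine_similarity(records)
np.fill_diagonal(sim, 0.0)                    # no self-loops

views = {}
for tau in (0.3, 0.5, 0.7):                   # assumed similarity thresholds
    adj = (sim >= tau).astype(float)
    views[tau] = adj
    print(f"threshold {tau}: {int(adj.sum()) // 2} undirected edges")
# Each adjacency matrix can then drive graph-convolution-style aggregation of similar
# records, helping with missing values and small default samples.
```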
... Combining the research related to loan defaults by scholars at home and abroad, it was found that early scholars mainly used methods from the fields of statistics and econometrics to conduct research related to loan defaults. Bellotti and Crook (2013) developed a dynamic model to predict and stress test credit card defaults. They found that the dynamic model had good predictive ability and was able to predict the profitability of a given loan. ...
... The approach developed here is conceptually similar to these efforts, but with a specific focus on the structure of loan performance data. Vintage models such as age-period-cohort models have been effective for stress testing loan portfolios [41–43], because they explicitly recognize three primary dimensions along which performance must be measured: the age of the loan; the loan origination date, also called the vintage; and the calendar date. Simulation studies of Cox PH models have shown that they can be used effectively on problems with two dimensions, but develop instabilities when applied to the three dimensions of loan portfolios [44], in part because of the linear relationship age = time − vintage. ...
Article
Full-text available
Machine learning models have been used extensively for credit scoring, but the architectures employed suffer from a significant loss in accuracy out-of-sample and out-of-time. Further, the most common architectures do not effectively integrate economic scenarios to enable stress testing, cash flow, or yield estimation. The present research demonstrates that providing lifecycle and environment functions from Age-Period-Cohort analysis can significantly improve out-of-sample and out-of-time performance as well as enabling the model's use in both scoring and stress testing applications. This method is demonstrated for behavior scoring where account delinquency is one of the provided inputs, because behavior scoring has historically presented the most difficulties for combining credit scoring and stress testing. Our method works well in both origination and behavior scoring. The results are also compared to multihorizon survival models, which share the same architectural design with Age-Period-Cohort inputs and coefficients that vary with forecast horizon, but using a logistic regression estimation of the model. The analysis was performed on 30-year prime conforming US mortgage data. Nonlinear problems involving large amounts of alternate data are best at highlighting the advantages of machine learning. Data from Fannie Mae and Freddie Mac is not such a test case, but it serves the purpose of comparing these methods with and without Age-Period-Cohort inputs. In order to make a fair comparison, all models are given a panel structure where each account is observed monthly to determine default or non-default.
... The importance of macroeconomic factors in empirical testing and forecasting has been recognised and considered by many studies (Jiang and Dunn, 2013;Bellotti and Crook, 2013). However, there is scant research that specifies and examines the connections between macroeconomic factors and credit card debt. ...
Conference Paper
Full-text available
The objective of this paper is to investigate the influence of economic conditions and regulatory interventions as potential drivers of credit card delinquency in Malaysia. Using quarterly data from 1999 to 2021 and the autoregressive distributed lag model, we find that national income, wealth, expectations of future economic conditions, and the supply of credit determine credit card delinquency. We also find evidence in favour of regulatory interventions as a solution to curbing the deteriorating credit card debt situation in Malaysia.
... However, it is challenging to acquire precise loan default risk evaluation results due to the various risks and the complexities of dependencies between numerous influencing factors. Hence, the machine learning methods, e.g., tree-based classifiers [5,15], support vector machines (SVM) [7,16–19], neural networks (NN) [16,20,21], ensemble-based methods [6,7], hybrid approaches [19,20,22,23], and others [24–26], have been proposed for loan default risk prediction. ...
Preprint
Full-text available
Loan default risk prediction is a major application of machine learning for financial institutions to evaluate the client's default probability. Existing deep learning models rarely consider the connection among application records for loan default detection. We believe similar records, as auxiliary information, are also significant for loan default prediction, particularly for those records with many missing data. Additionally, in practical scenarios, the data distribution is imbalanced since the default records are small samples, which may also lead the model to achieve sub-optimal results. To this end, we propose multi-view loan application graphs, dubbed MLAGs, for small sample augmentation. Additionally, based on the graph convolution, similar records can also be aggregated to alleviate the issue of missing values. Moreover, a multi-view graph convolution network, named MGCN, is applied for loan default risk prediction. We conduct experiments on three public datasets from real-world home credit and P2P lending platforms, which show that MGCN outperforms both conventional and deep learning models.
... However, loan-level models require reliable historical loan-level data (Black 2016), are more difficult to build because they need more data, and might be slower to implement than portfolio-level models. The most popular loan-level modelling technique within the credit risk context is the use of proportional hazard models (Bellotti and Crook 2013). The benefits of using a hazard model include that time dummies can be utilised, and the effect of prepayment can be added to the modelling process. ...
Article
Full-text available
The International Financial Reporting Standard (IFRS) 9 relates to the recognition of an entity’s financial asset/liability in its financial statement, and includes an expected credit loss (ECL) framework for recognising impairment. The quantification of ECL is often broken down into its three components, namely, the probability of default (PD), loss given default (LGD), and exposure at default (EAD). The IFRS 9 standard requires that the ECL model accommodates the influence of the current and the forecasted macroeconomic conditions on credit loss. This enables a determination of forward-looking estimates on impairments. This paper proposes a methodology based on principal component regression (PCR) to adjust IFRS 9 PD term structures for macroeconomic forecasts. We propose that a credit risk index (CRI) is derived from historic defaults to approximate the default behaviour of the portfolio. PCR is used to model the CRI with the macroeconomic variables as the set of explanatory variables. A novel all-subset variable selection is proposed, incorporating business decisions. We demonstrate the method’s advantages on a real-world banking data set, and compare it to several other techniques. The proposed methodology operates at portfolio level, with the recommendation to derive a macroeconomic scalar for each different risk segment of the portfolio. The proposed scalar is intended to adjust loan-level PDs for forward-looking information.
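A hedged sketch of the principal component regression idea in this abstract, using a synthetic macro series and a synthetic credit risk index; variable names and the final scalar step are assumptions for illustration:

```python
# PCR: regress a historic credit risk index (CRI) on principal components of the
# macroeconomic variables, then turn macro forecasts into a forward-looking adjustment.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_quarters = 40
macro = rng.normal(size=(n_quarters, 5))            # e.g. GDP growth, unemployment, rates...
cri = 0.03 + 0.01 * macro[:, 1] - 0.005 * macro[:, 0] + rng.normal(0, 0.002, n_quarters)

pca = PCA(n_components=2).fit(macro)
components = pca.transform(macro)
pcr = LinearRegression().fit(components, cri)

macro_forecast = rng.normal(size=(4, 5))            # hypothetical forecasted quarters
cri_forecast = pcr.predict(pca.transform(macro_forecast))
print("forecasted credit risk index:", np.round(cri_forecast, 4))
# The ratio of the forecasted CRI to its long-run level could then act as a scalar
# applied to the portfolio's PD term structure, per risk segment.
```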
... Two unique real-world datasets of defaulted and non-defaulted loan accounts for audited and unaudited non-financial private firms gathered from an anonymous major Zimbabwean commercial bank over the sample period from 2010 to 2018 are used to fit the stepwise logistic regression models. Account default refers to a situation in which an obligor is unlikely to settle its credit obligations in full or is past due more than 90 days on any substantial credit obligation (Crook and Bellotti, 2013;Basel Committee on Banking Supervision, 2006). Dataset I consists of defaulted and non-defaulted loan accounts for audited private firms while dataset II contains defaulted and non-defaulted loan accounts for unaudited private firms. ...
... Consumer sentiment has also been found to have an effect on credit default probability, as shown in Bellotti and Crook (2013). Consumer sentiment might have a positive effect on delinquency by easing households' borrowing conditions following an improvement in consumer sentiment, which in turn lowers delinquency rates. ...
Article
Purpose: This study aims to explain how delinquency shocks in one type of debt contaminate the others. That is, the authors aim to shed light on the time pattern of delinquencies in different debt types. Design/methodology/approach: This study analyzes the interdependencies between mortgage, credit card and auto loan delinquency rates in the USA from 2003 to 2019, using a panel VAR-X, the panel Granger causality tests and the Geweke linear dependence measures. The authors also compute the impulse response functions of a shock to one kind of debt on the others and decompose the variance of the forecast errors. Findings: The authors find a statistically significant bidirectional Granger causality between the delinquencies. The Geweke measures of linear dependence and the Dumitrescu and Hurlin Granger non-causality tests support that mortgage delinquencies predominantly cause credit card and auto loan delinquencies. Auto loans also cause credit card delinquencies. The impulse response functions confirm this pattern. This scenario aligns with a sequence in which debtors consider it rational to default first on credit cards, second on auto loans and only last on mortgages. Indeed, credit card delinquencies Granger-cause delinquencies in other debts when they occur. Originality/value: To the best of the authors’ knowledge, this is the first study to focus on the temporal pattern of delinquency rates for all the US states, using panel data. Furthermore, the results call for policymakers to design regulations to break the transmission channel from debt delinquencies.
... Besides, due to the development of information technology, more and more indicators are used in credit scoring models. Typical indicators mainly include macroeconomic variables (Bellotti and Crook 2013;Abdolreza and Fabozzi 2018), soft information (Dorfleitner et al. 2016;Jiang et al. 2018), and the digital footprint left by the borrower's visits to the website (Berg et al. 2020;Orlova 2021). Jiang et al. (2018) proposed a loan default prediction model by combining soft information extracted from descriptive text, and conducted an empirical study on a large P2P credit dataset in China. ...
Article
Full-text available
In the credit loan practices of lending platforms, there is a mismatch problem between borrowers’ credit scoring and the probability of default (PD), which cannot provide a basis for the credit loan decision. Firstly, the Wald test is used to select the single indicator with a strong default identification ability, and Lasso-Logistic regression is used to determine the optimal combination of indicators with the overall default identification ability to construct the credit scoring indicator system. Secondly, by randomly dividing non-default samples, this article forms multiple groups of balanced samples with default samples to perform Lasso-Logistic regression, and multiple groups of Lasso-Logistic regression coefficients and expert opinion are used to determine the optimal indicator weight and calculate the credit score. The empirical work is developed through 43,471 and 24,153 samples of Lending Club. The results show that the proposed credit scoring method is effective. The credit scoring method proposed in this paper solves the mismatch problem between credit scoring and PD in credit loan decision-making practice. The findings of this paper provide references for assisting credit loan decision-making, reducing bankers’ investment risks, and alleviating borrowers’ financing problems.
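The balancing-and-averaging idea in this abstract can be sketched as follows (synthetic data, illustrative hyperparameters; not the authors' implementation):

```python
# Non-default samples are repeatedly subsampled to match the default samples, an
# L1-penalised ("Lasso") logistic regression is fit on each balanced set, and the
# coefficients are averaged.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, k = 10000, 8
X = rng.normal(size=(n, k))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 1.5 * X[:, 1] - 3))))

defaults = np.where(y == 1)[0]
non_defaults = np.where(y == 0)[0]

coefs = []
for seed in range(10):
    sub = np.random.default_rng(seed).choice(non_defaults, size=len(defaults), replace=False)
    idx = np.concatenate([defaults, sub])               # one balanced sample
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(X[idx], y[idx])
    coefs.append(clf.coef_.ravel())

avg_coef = np.mean(coefs, axis=0)
print("averaged Lasso-Logistic coefficients:", np.round(avg_coef, 3))
```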
... Of late, academics and practitioners have been seeking models that give precedence to the high predictive ability by incorporating macroeconomic variables when predicting SME default. Matenda et al. (2021a, 2021b), Charalambakis and Garrett (2019), and Bellotti and Crook (2013) propounded that the incorporation of macroeconomic variables in default techniques augments their predictive ability. ...
Article
Full-text available
Using stepwise logistic regression models, the study aims to separately detect and explain the determinants of default probability for unaudited and audited small-to-medium enterprises (SMEs) under stressed conditions in Zimbabwe. For effectiveness purposes, we use two separate datasets for unaudited and audited SMEs from an anonymous Zimbabwean commercial bank. The results of the paper indicate that the determinants of default probability for unaudited and audited SMEs are not identical. These determinants include financial ratios, firm and loan characteristics, and macroeconomic variables. Furthermore, we discover that the classification rates of SME default prediction models are enhanced by fusing financial ratios and firm and loan features with macroeconomic factors. The study highlights the vital contribution of macroeconomic factors in the prediction of SME default probability. We recommend that financial institutions model separately the default probability for audited and unaudited SMEs. Further, it is recommended that financial institutions should combine financial ratios and firm and loan characteristics with macroeconomic variables when designing default probability models for SMEs in order to augment their classification rates.
... Construction of adverse scenarios in a multi-period setting has been considered in a credit risk context, see, e.g., Breuer, Jandačka, et al. (2012) and Bellotti and Crook (2013); however, a comprehensive treatment of reverse stress testing for dynamic loss models described by compound Poisson processes is missing. ...
Preprint
Full-text available
Stress testing, and in particular, reverse stress testing, is a prominent exercise in risk management practice. Reverse stress testing, in contrast to (forward) stress testing, aims to find an alternative but plausible model such that under that alternative model, specific adverse stresses (i.e. constraints) are satisfied. Here, we propose a reverse stress testing framework for dynamic models. Specifically, we consider a compound Poisson process over a finite time horizon and stresses composed of expected values of functions applied to the process at the terminal time. We then define the stressed model as the probability measure under which the process satisfies the constraints and which minimizes the Kullback-Leibler divergence to the reference compound Poisson model. We solve this optimization problem, prove existence and uniqueness of the stressed probability measure, and provide a characterization of the Radon-Nikodym derivative from the reference model to the stressed model. We find that under the stressed measure, the intensity and the severity distribution of the process depend on time and the state space. We illustrate the dynamic stress testing by considering stresses on VaR and both VaR and CVaR jointly and provide illustrations of how the stochastic process is altered under these stresses. We generalize the framework to multivariate compound Poisson processes and stresses at times other than the terminal time. We illustrate the applicability of our framework by considering "what if" scenarios, where we answer the question: What is the severity of a stress on a portfolio component at an earlier time such that the aggregate portfolio exceeds a risk threshold at the terminal time? Moreover, for general constraints, we provide a simulation algorithm to simulate sample paths under the stressed measure.
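In generic notation (a hedged restatement, not the paper's exact formulation), the optimization described in this abstract takes the following form when the stresses are expectation constraints at the terminal time; the exponential-tilting form of the optimal Radon-Nikodym derivative is the standard solution for moment constraints of this type and only sketches the paper's characterization:

```latex
\begin{align}
  Q^{*} &= \operatorname*{arg\,min}_{Q}\; D_{\mathrm{KL}}\!\left(Q \,\middle\|\, P\right)
  \quad \text{subject to} \quad \mathbb{E}^{Q}\!\left[f_{k}(X_{T})\right] = c_{k},
  \quad k = 1, \dots, K, \\
  \frac{dQ^{*}}{dP} &\;\propto\; \exp\!\left(\sum_{k=1}^{K} \lambda_{k}\, f_{k}(X_{T})\right),
  \qquad \lambda_{1}, \dots, \lambda_{K} \text{ chosen so that the constraints hold.}
\end{align}
```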
... Those scenarios can then be fed into existing loan default models to predict the impacts. Stress test models [11,7,3,18,9,10] are widely available at larger lenders and from vendors for smaller lenders, so this allows any lender to quickly run drought scenarios. ...
Preprint
Full-text available
The 2012 Midwestern US drought was analyzed as a template for how climate change-induced water shortages could impact communities in the future. Studying such historical events allows for the creation of impulse functions that can be overlaid on future economic conditions to stress test loan portfolios. County-level data was analyzed to establish the connections between rainfall, crop yields, and macroeconomic factors such as employment, property values, and real gross domestic product. Data from the FDIC confirm that these events caused bank losses, but the economic scenarios can provide input to existing loan loss stress test models to simulate hypothetical future losses. The severity and frequency of the events can be tied to the severity of the climate change scenario. Ideally, the industry needs a full library of such climate events from which to create climate stress tests appropriate to geographically-concentrated lenders.
... Hazard models (Țurlea 2021) can be used to assess the riskiness of the obligor by computing a score that indicates whether the obligor defaults within the specified horizon. However, the models can be quite complex, and the model does not determine when the obligor's default will occur (Crook and Bellotti 2013). More generally, survival analysis can also be used (Chimezda and Marimo 2017). ...
Article
Full-text available
A new methodology to derive IFRS 9 PiT PDs is proposed. The methodology first derives a PiT term structure with accompanying segmented term structures. Secondly, the calibration of credit scores using the Lorenz curve approach is used to create account-specific PD term structures. The PiT term structures are derived by using empirical information based on the most recent default information and account risk characteristics prior to default. Different PiT PD term structures are developed to capture the structurally different default risk patterns for different pools of accounts using segmentation. To quantify what a materially different term structure constitutes, three tests are proposed. Account specific PiT PDs are derived through the Lorenz curve calibration using the latest default experience and credit scores. The proposed methodology is illustrated on an actual dataset, using a revolving retail credit portfolio from a South African bank. The main advantages of the proposed methodology include the use of well-understood methods (e.g., Lorenz curve calibration, scorecards, term structure modelling) in the banking industry. Further, the inclusion of re-default events in the proposed IFRS 9 PD methodology will simplify the development of the accompanying IFRS 9 LGD model due to the reduced complexity for the modelling of cure cases. Moreover, attrition effects are naturally included in the PD term structures and no longer require a separate model. Lastly, the PD term structure is based on months since observation, and therefore the arrears cycle could be investigated as a possible segmentation.
Article
In the current regulatory environment, banks are required to quantify credit risk by means of default probabilities, loss rates conditional on default and expected exposures for a number of purposes: regulatory capital calculation, loan loss provisioning and stress testing. The nature of each credit risk parameter might be different for each application, e.g., forward looking default probabilities are needed for loan loss provisioning while regulatory capital is based on long-term averages. These different requirements for each purpose create a substantial burden especially for small and medium-sized banks. This paper describes a simple framework that allows the consistent calculation of credit risk parameters for all risk applications. It assumes that a bank is using a scorecard based on loan-level data where the data history might only span a couple of years. These data are combined with a macroeconomic model in a suitable way to derive risk parameters compliant with all regulatory requirements.
Article
Models developed by banks to forecast losses in their credit card portfolios have generally performed poorly during the COVID‐19 pandemic, particularly in 2020, when large forecast errors were observed at many banks. In this study, we attempt to understand the source of this error and explore ways to improve model fit. We use account‐level monthly performance data from the largest credit card banks in the U.S. between 2008 and 2018 to build models that mimic the typical model design employed by large banks to forecast credit card losses. We then fit these on data from 2019 to 2021. We find that COVID‐period model errors can be reduced significantly through two simple modifications: (1) including measures of the macroeconomic environment beyond indicators of the labor market, which served as the primary macro drivers used in many pre‐pandemic models and (2) adjusting macro drivers to capture persistent/sustained changes, as opposed to temporary volatility in these variables. These model improvements, we find, can be achieved without a significant reduction in model performance for the pre‐COVID period, including the Great Recession. Moreover, in broadening the set of macro influences and capturing sustained changes, we believe models can be made more robust to future downturns, which may bear little resemblance to past recessions.
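As a purely illustrative sketch of the second modification (the variable name, the 12-month window and the synthetic data are our assumptions, not taken from the study), one simple way to emphasise sustained rather than temporary movements in a macro driver is to smooth it, or to use its year-over-year change, before it enters the loss model:

import numpy as np
import pandas as pd

# Synthetic monthly unemployment series, for illustration only.
dates = pd.date_range("2019-01-01", periods=36, freq="MS")
rng = np.random.default_rng(0)
unemployment = pd.Series(4.0 + np.cumsum(rng.normal(0, 0.2, size=36)), index=dates)

# Two candidate "sustained change" drivers:
# 1. a trailing 12-month moving average of the level, which filters monthly noise;
# 2. the 12-month (year-over-year) change, which ignores short-lived spikes.
macro_drivers = pd.DataFrame({
    "unemployment": unemployment,
    "unemployment_ma12": unemployment.rolling(window=12, min_periods=12).mean(),
    "unemployment_yoy": unemployment.diff(12),
})
print(macro_drivers.tail())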
Preprint
Full-text available
A novel procedure is presented for finding the true but latent endpoints within the repayment histories of individual loans. The monthly observations beyond these true endpoints are false, largely due to operational failures that delay account closure, thereby corrupting some loans in the dataset with 'false' observations. Detecting these false observations is difficult at scale since each affected loan history might have a different sequence of zero (or very small) month-end balances that persist towards the end. Identifying these trails of diminutive balances would require an exact definition of a "small balance", which can be found using our so-called TruEnd-procedure. We demonstrate this procedure and isolate the ideal small-balance definition using residential mortgages from a large South African bank. Evidently, corrupted loans are remarkably prevalent and have excess histories that are surprisingly long, which ruin the timing of certain risk events and compromise any subsequent time-to-event model such as survival analysis. Excess histories can be discarded using the ideal small-balance definition, which demonstrably improves the accuracy of both the predicted timing and severity of risk events, without materially impacting the monetary value of the portfolio. The resulting estimates of credit losses are lower and less biased, which augurs well for raising accurate credit impairments under the IFRS 9 accounting standard. Our work therefore addresses a pernicious data error, which highlights the pivotal role of data preparation in producing credible forecasts of credit risk.
Preprint
The increasing usage of new data sources and machine learning (ML) technology in credit modeling raises concerns about potentially unfair decision-making that relies on protected characteristics (e.g., race, sex, age) or other socio-economic and demographic data. The authors demonstrate the impact of such algorithmic bias in the microfinance context. Difficulties in assessing credit are disproportionately experienced among vulnerable groups; however, very little is known about inequities in credit allocation between groups defined not only by single, but by multiple and intersecting social categories. Drawing from the intersectionality paradigm, the study examines intersectional horizontal inequities in credit access by gender, age, marital status, single parent status and number of children. This paper utilizes data from the Spanish microfinance market as its context to demonstrate how pluralistic realities and intersectional identities can shape patterns of credit allocation when using automated decision-making systems. With ML technology being oblivious to societal good or bad, we find that a more thorough examination of intersectionality can enhance the algorithmic fairness lens to more authentically empower action for equitable outcomes and present a fairer path forward. We demonstrate that while fairness may appear to exist superficially at a high level, unfairness can be exacerbated at lower levels given combinatorial effects; in other words, the core fairness problem may be more complicated than current literature demonstrates. We find that in addition to legally protected characteristics, sensitive attributes such as single parent status and number of children can result in imbalanced harm. We discuss the implications of these findings for the financial services industry.
Article
Full-text available
In order to stress test loan portfolios for the impacts of climate change, historical events need to be analyzed to create templates to stress test for future events. Using the 2012 Midwestern US drought as an example, this work creates a stress-testing template for future droughts. The analysis connects weather and crop yield data to impacts on local macroeconomic conditions by comparing drought-impacted agricultural counties with nearby urban counties. After measuring the net macroeconomic impacts of the drought, this was used as an overlay with existing macroeconomic stress models to stress test a lender in a different part of the US for possible drought impacts. Having a library of such climate events would allow lenders to stress test their portfolios for a wide range of possible impacts.
Article
Full-text available
The inclusion of time-varying covariates into survival analysis has led to better predictions of the time to default in behavioural credit scoring models. However, when these time-varying covariates are endogenous, there are two major problems: estimation bias in the survival model and the lack of a prediction framework for future values of both the event and the endogenous time-varying covariates. Joint models for longitudinal and survival data are an appropriate framework to model the mutual evolution of the survival time and the endogenous time-varying covariates. To the best of our knowledge, this paper explores for the first time the application of discrete-time joint models to credit scoring. Moreover, we propose a novel extension to the joint model literature by including autoregressive terms in modelling the endogenous time-varying covariates. We present the method via simulations and by applying it to US mortgage loans. The empirical analysis shows, first, that discrete joint models can increase the discrimination performance compared to survival models. Second, when an autoregressive term is included, this performance can be further improved.
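A schematic version of such a specification (our notation, reduced to a single endogenous covariate) couples an autoregressive longitudinal submodel with a discrete-time hazard:

\[
x_{it} = \beta_{0} + \rho\, x_{i,t-1} + \beta^{\top} w_{it} + b_{i} + \varepsilon_{it},
\qquad
\operatorname{logit} h_{it} = \gamma_{0}(t) + \gamma^{\top} z_{it} + \alpha\, m_{it},
\]

where x_{it} is the endogenous time-varying covariate with random effect b_i, h_{it} = Pr(T_i = t | T_i ≥ t) is the discrete hazard of default, m_{it} is the component of the longitudinal submodel shared with the hazard, and the association parameter α measures how the covariate's evolution drives default. This is a generic template for discrete-time joint models rather than the authors' exact formulation.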
Article
Online mail order and online retail purchases have increased rapidly in recent years worldwide, with Covid-19 forcing almost all non-grocery shopping to move online. These practices have facilitated the availability of new data sources, such as web behavioural variables, providing scope for innovation in credit risk analysis and decision practices. This paper examines new web browsing variables and incorporates them into survival analysis as predictors of probability of default (PD). Using a large sample of purchase and repayment credit accounts from a major digital retailer and financial services provider, we show that these new variables enhance the predictive accuracy of probability of default (PD) models at account level. This also holds in the absence of credit bureau data; therefore, the new information can help people who may not have a credit history (thin file) and cannot be assessed using traditional variables. Moreover, we leverage the dynamic nature of these new web variables and explore their predictive value over short- and long-term horizons. By adding macroeconomic variables, the possibility for stress-testing is provided. Our empirical findings provide insights into web browsing behaviour, highlight how the inclusion of non-standard variables can improve credit risk scoring models and lending decisions, and may provide a solution to the thin files problem. Our results also suggest direct value added to the online retail credit industry, as firms should leverage the increasing trend of consumers embracing the digital environment.
Article
Transition probabilities between delinquency states play a key role in determining the risk profile of a lending portfolio. Stress testing and IFRS 9 are topics widely discussed by academics and practitioners. In this paper, we combine dynamic multi-state models and macroeconomic scenarios to estimate a stress testing model that forecasts delinquency states and transition probabilities at the borrower level for a mortgage portfolio. For the first time, a delinquency multi-state model is estimated for residential mortgages. We explicitly analyse and control for repeated events, an aspect previously not considered in credit risk multi-state models. Furthermore, we enhance the existing methodology by estimating scenario-specific forecasts beyond the lag of time-dependent covariates. We find that the number of previous transitions has a significant impact on the level of the transition probabilities, that severe economic conditions affect younger vintages the most, and that the relative impact of the stress scenario differs by attributes observed at origination.
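As a minimal illustration of the raw ingredient of such models (not the authors' estimator, which conditions on covariates, repeated events and macroeconomic scenarios), an empirical one-month transition matrix between delinquency states can be tabulated from panel data:

import pandas as pd

# Toy monthly panel: one row per loan per month; state labels are illustrative.
panel = pd.DataFrame({
    "loan_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "month":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "state":   ["current", "30dpd", "current",
                "current", "current", "30dpd",
                "30dpd", "60dpd", "default"],
})

panel = panel.sort_values(["loan_id", "month"])
panel["next_state"] = panel.groupby("loan_id")["state"].shift(-1)
moves = panel.dropna(subset=["next_state"])

# Row-normalised counts give empirical one-month transition probabilities.
transition_matrix = pd.crosstab(moves["state"], moves["next_state"], normalize="index")
print(transition_matrix)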
Article
Full-text available
One approach to stress testing the amount of capital required by a bank for credit risk is to use parameterised account level models with credit application characteristics, behavioural characteristics and macroeconomic factors as predictors. The standard methodology underestimates the amount of capital required because it fails to include uncertainty over the model parameters, over the future trajectory of behavioural variables and over volatility. We provide a methodology for estimating the magnitudes of these additional losses and so a methodology to gain a more accurate estimate of the amount of capital required.
Article
Full-text available
Research background: The global financial crisis from 2007 to 2012, the COVID-19 pandemic, and the current war in Ukraine have dramatically increased the risk of consumer bankruptcies worldwide. All three crises negatively impact the financial situation of households due to increased interest rates, inflation rates, volatile exchange rates, and other significant macroeconomic factors. Financial difficulties may arise when a private person is unable to maintain a habitual standard of living. This means that anyone can become financially vulnerable regardless of wealth or education level. Therefore, forecasting consumer bankruptcy risk has received increasing scientific and public attention. Purpose of the article: This study proposes artificial intelligence solutions to address the increased importance of the personal bankruptcy phenomenon and the growing need for reliable forecasting models. The objective of this paper is to develop six models for forecasting personal bankruptcies in Poland and Taiwan with the use of three soft-computing techniques. Methods: Six models were developed to forecast the risk of insolvency: three for Polish households and three for Taiwanese consumers, using fuzzy sets, genetic algorithms, and artificial neural networks. This research relied on four samples. Two were learning samples (one for each country), and two were testing samples, also one for each country separately. Both testing samples contain 500 bankrupt and 500 nonbankrupt households, while each learning sample consists of 100 insolvent and 100 solvent natural persons. Findings & value added: This study presents a solution for effective bankruptcy risk forecasting by implementing both highly effective and usable methods, and it proposes a new type of ratio that combines the evaluated consumers' financial and demographic characteristics. The usage of such ratios also improves the versatility of the presented models, as they are not denominated in monetary value or strictly in demographic units, which would limit their use to a single country; instead, they can be widely applied in other regions of the world.
Article
Accounting standards require financial institutions to consider and forecast multiple macroeconomic scenarios when calculating loan loss provisions. Loan loss provisions protect a financial institution against losses. But how to objectively determine the number of scenarios and forecast their probabilities is an unsolved problem. This paper shows that embedding the question into the framework of a hidden Markov model (HMM) leads to a natural answer. A disadvantage of employing HMMs for credit risk is the short length of a typical time series of default rates. To overcome this problem, the paper proposes a crucial adaptation to the standard approach for the transition probability matrix (TPM) of the hidden states. The adapted TPM is a hybrid of a continuous-valued TPM and a discrete-valued TPM. The adaptation imposes a structure on the TPM and reduces the number of required parameters for a discrete-valued HMM with M hidden states from M(M−1) to M+2. The proposed model is benchmarked using a time series of defaults and is, for the analysed data, considered optimal in terms of the Akaike and Bayesian information criteria. With the proposed HMM, scenario probabilities can be objectively forecasted and no longer have to be expertly assessed.
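The hybrid transition-matrix parameterisation is specific to the paper, but the underlying idea of hidden economic regimes can be illustrated with an off-the-shelf Gaussian HMM fitted to a synthetic default-rate series (this uses the hmmlearn package and is a baseline sketch, not the proposed model):

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic quarterly default-rate series with a benign and a stressed regime.
rng = np.random.default_rng(1)
benign = rng.normal(0.02, 0.002, size=40)
stressed = rng.normal(0.05, 0.005, size=12)
default_rates = np.concatenate([benign, stressed, benign]).reshape(-1, 1)

# Two hidden states; the fitted transition matrix and state means describe how
# the economy switches between regimes and the typical default rate in each.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
model.fit(default_rates)

print("state means:", model.means_.ravel())
print("transition matrix:\n", model.transmat_)
print("most likely regime path:", model.predict(default_rates))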
Article
To categorize credit applications into defaulters or non-defaulters, most credit evaluation models have employed binary classification methods based on default probabilities. However, while some loan applications can be directly accepted or rejected, there are others on which immediate accurate credit status decisions cannot be made using existing information. To resolve these issues, this study developed an optimized sequential three-way decision model. First, an information gain objective function was built for the three-way decision, after which a genetic algorithm (GA) was applied to determine the optimal decision thresholds. Then, appropriate accept or reject decisions for some applicants were made using basic credit information, with the remaining applicants, whose credit status was difficult to determine, being divided into a boundary region (BND). Supplementary information was then added to reevaluate the credit applicants in the BND, and a sequential optimization process was employed to ensure more accurate predictions. Therefore, the model’s predictive abilities were improved and the information acquisition costs controlled. The empirical results demonstrated that the proposed model was able to outperform other benchmarking credit models based on performance indicators.
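A stripped-down sketch of the three-way split is shown below; a plain grid search over the two thresholds stands in for the paper's genetic algorithm, and the misclassification and review costs are invented for illustration:

import numpy as np

rng = np.random.default_rng(2)
pd_scores = rng.uniform(0, 1, size=1000)                # predicted default probabilities
defaulted = rng.uniform(0, 1, size=1000) < pd_scores    # synthetic outcomes

def expected_cost(alpha, beta, cost_fn=5.0, cost_fp=1.0, cost_review=0.3):
    """Three-way decision: reject if PD >= alpha, accept if PD <= beta,
    otherwise defer to the boundary region for costly further assessment."""
    reject = pd_scores >= alpha
    accept = pd_scores <= beta
    boundary = ~reject & ~accept
    return (cost_fn * (accept & defaulted).sum()        # accepted but defaulted
            + cost_fp * (reject & ~defaulted).sum()     # rejected but would have repaid
            + cost_review * boundary.sum())             # cost of reviewing undecided cases

# Grid search over (alpha, beta) with alpha > beta, standing in for the GA.
grid = np.linspace(0.05, 0.95, 19)
best = min(((a, b) for a in grid for b in grid if a > b), key=lambda ab: expected_cost(*ab))
print("best thresholds (alpha, beta):", best)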
Chapter
Based on a large micro data set of loan accounts consisting of restructured and non-restructured loans, we investigate the ability of loan-specific and macroeconomic covariates to predict the probability of default (PD). We seek to investigate whether differences in the PD between the two categories of loans can be attributed to moral hazard effects. We provide clear-cut evidence that the PD of consumer loans and expected default rates are higher for restructured than for non-restructured loans. We show that loan-specific covariates, reflecting behavioral attitudes of borrowers in consumer loan markets, constitute relatively more important determinants of PD changes than the macroeconomic covariates. This result is more striking for the restructured loans. We find that the ratio of the delinquent amount of a loan over its total balance constitutes the most influential factor of the PD. This result is more pronounced for the restructured loans category. We argue that the latter can be mostly attributed to moral hazard incentives of borrowers. On the other hand, we show that the ratio of a loan's payments to the personal income of its obligor can signal reductions in the PD. We argue that this ratio signals the commitment of borrowers without moral hazard incentives to service their debt.
Article
This paper explores alternative forecast approaches for mortgage credit risk for forward periods of up to seven years. Using data from US prime mortgage loans from 2000 to 2016, we find that common borrower, loan contract and external features are significant in explaining credit risk over forward periods. Time variation may come through the ageing and forward channel. We develop a hybrid model for predicting default probabilities that combines both channels and outperforms standalone alternatives. This higher precision results in more accurate economic capital, IFRS 9/CECL loan loss provisioning and mortgage pricing, and hence, a more efficient and resilient resource allocation in commercial banks.
Article
Full-text available
Survival analysis can be applied to build models for time to default on debt. In this paper, we report an application of survival analysis to model default on a large data set of credit card accounts. We explore the hypothesis that probability of default (PD) is affected by general conditions in the economy over time. These macroeconomic variables (MVs) cannot readily be included in logistic regression models. However, survival analysis provides a framework for their inclusion as time-varying covariates. Various MVs, such as interest rate and unemployment rate, are included in the analysis. We show that inclusion of these indicators improves model fit and affects PD yielding a modest improvement in predictions of default on an independent test set.
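A minimal sketch of the modelling idea (synthetic data and variable names are ours, not the study's specification): arrange the data as one row per account per month and fit a discrete-time hazard by logistic regression, with the macroeconomic indicator entering as a time-varying covariate:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
rows = []
for account in range(300):
    risk = rng.normal()                                      # static application-time risk score
    for month in range(1, 25):                               # up to 24 months on book
        unemployment = 5 + 0.1 * month + rng.normal(0, 0.2)  # time-varying macro covariate
        true_logit = -6 + 0.8 * risk + 0.3 * unemployment
        default = rng.uniform() < 1 / (1 + np.exp(-true_logit))
        rows.append((account, month, risk, unemployment, int(default)))
        if default:
            break                                            # account leaves the risk set

panel = pd.DataFrame(rows, columns=["account", "month", "risk", "unemployment", "default"])
X = sm.add_constant(panel[["month", "risk", "unemployment"]])
hazard_model = sm.Logit(panel["default"], X).fit(disp=0)
print(hazard_model.params)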
Article
Full-text available
The capital requirements formula within the Basel II Accord is based on a Merton one factor model and in the case of credit cards an asset correlation of 4% is assumed. In this paper we estimate the asset correlation for two datasets assuming the one factor model. We find that the asset correlations assumed by Basel II are much higher than those observed in the datasets we analyse. We show the reduction in capital requirements that a typical lender would have if the values we estimated were implemented in the Basel Accord in place of the current values.
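For context, the one-factor model behind this calculation writes an obligor's normalised asset return as a mix of a common factor and an idiosyncratic shock (standard notation, not taken from the paper):

\[
X_{i} = \sqrt{\rho}\, Z + \sqrt{1-\rho}\,\varepsilon_{i},
\qquad Z, \varepsilon_{i} \sim N(0,1) \text{ i.i.d.},
\]

with default occurring when X_i < Φ^{-1}(PD). The asset correlation ρ, set to 4% for qualifying revolving retail exposures under Basel II, governs how strongly defaults move together and hence the size of the capital requirement.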
Article
Full-text available
Models of Autoregressive Conditional Heteroscedasticity (ARCH) and their generalizations are widely used in applied econometric research, especially for analysis of financial markets. We bring to our reader's attention a consultation on this topic prepared from the book of Marno Verbeek "A Guide to Modern Econometrics", appearing soon in the Publishing House "Nauchnaya Kniga". Note: this is not the textbook "A Guide to Modern Econometrics", which is copyright owned by John Wiley and Sons.
Article
Full-text available
In this paper we propose a framework for measuring and stress testing the systemic risk of a group of major financial institutions. The systemic risk is measured by the price of insurance against financial distress, which is based on ex ante measures of default probabilities of individual banks and forecasted asset return correlations. Importantly, using realized correlations estimated from high-frequency equity return data can significantly improve the accuracy of forecasted correlations. Our stress testing methodology, using an integrated micro–macro model, takes into account dynamic linkages between the health of major US banks and macro-financial conditions. Our results suggest that the theoretical insurance premium that would be charged to protect against losses that equal or exceed 15% of total liabilities of 12 major US financial firms stood at $110 billion in March 2008 and had a projected upper bound of $250 billion in July 2008.
Article
Full-text available
Monte Carlo simulation is a common method for studying the volatility of market-traded instruments. It is less employed in retail lending, because of the inherent nonlinearities in consumer behaviour. In this paper, we use the approach of Dual-time Dynamics to separate loan performance dynamics into three components: a maturation function of months-on-books, an exogenous function of calendar date, and a quality function of vintage origination date. The exogenous function captures the impacts from the macroeconomic environment. Therefore, we want to generate scenarios for the possible futures of these environmental impacts. To generate such scenarios, we must go beyond the random walk methods most commonly applied in the analysis of market-traded instruments. Retail portfolios exhibit autocorrelation structure and variance growth with time that requires more complex modelling. This paper is aimed at practical application and describes work using ARMA and ARIMA models for scenario generation, rules for selecting the correct model form given the input data, and validation methods on the scenario generation. We find that when the goal is capturing future volatility via Monte Carlo scenario generation, model selection does not follow the same rules as for forecasting. Consequently, tests more appropriate to reproducing volatility are proposed, which assure that distributions of scenarios have the proper statistical characteristics. These results are supported by studies of the variance growth properties of macroeconomic variables and theoretical calculations of the variance growth properties of various models. We also provide studies on historical data showing the impact of training length on model accuracy and the existence of differences between macroeconomic epochs. Journal of the Operational Research Society (2010) 61, 399-410. doi: 10.1057/jors.2009.105 Published online 14 October 2009
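The Dual-time Dynamics decomposition itself is not reproduced here, but the generic step of generating Monte Carlo scenarios from a fitted ARIMA model can be sketched as follows (synthetic data; the model order and horizon are arbitrary choices for illustration):

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series standing in for the exogenous (environmental) curve.
rng = np.random.default_rng(4)
history = pd.Series(np.cumsum(rng.normal(0.1, 1.0, size=120)),
                    index=pd.date_range("2010-01-01", periods=120, freq="MS"))

# Fit a simple ARIMA model and simulate many 24-month-ahead scenario paths;
# for stress testing it is the spread of the paths, not the point forecast, that matters.
fit = ARIMA(history, order=(1, 1, 1)).fit()
scenarios = fit.simulate(nsimulations=24, anchor="end", repetitions=500)
print(np.asarray(scenarios).shape)   # 24 future months by 500 scenario paths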
Article
Full-text available
We review the incorporation of time varying variables into models of the risk of consumer default. Lenders typically have data which are of a panel format. This allows the inclusion of time varying covariates in models of account level default by including them in survival models, panel models or 'correction factor' models. The choice depends on the aim of the model and the assumptions that can be plausibly made. At the level of the portfolio, Merton-type models have incorporated macroeconomic and latent variables in mixed (factor) models and Kalman filter models whereas reduced form approaches include Markov chains and stochastic intensity models. The latter models have mainly been applied to corporate defaults and considerable scope remains for application to consumer loans. Copyright (c) 2009 Royal Statistical Society.
Article
Full-text available
A major topic in retail lending is the measurement of the inherent portfolio credit risk. The need for a better understanding of, and better tools for dealing with, default-risky securities has been reinforced by the Basel Committee on Banking Supervision [1999a, 1999b, 2000, 2001a, 2001b, 2002, 2003], which has proposed a revision of the standards for banks' capital requirements.
Article
Full-text available
A major topic in empirical finance is correlation of default risk. Correlations are the main drivers for credit risk on a portfolio basis and for banks' capital requirements under the New Basel Accord. However, empirical evidence on the magnitude of correlations is rather scarce, mainly due to data limitations. Using a large database of bankruptcies in Germany we estimate correlations using a simple version of the Basel II factor model. Then we extend the model to an approach with observable risk factors and suggest that this model with default probabilities depending on the state of the economy may be more adequate. Empirical evidence on proxies for the credit cycles is presented for German industry sectors. We find that much of the co-movements can be explained by our variables. Finally, we discuss some implications for forecasts of distributions of potential future defaults of a bank's portfolio.
Article
Full-text available
This paper provides a side-by-side comparison of loan-level statistical models for fixed- and adjustable-rate mortgages. Multinomial logit models for quarterly conditional probabilities of default and prepayment are estimated. We find that the estimated impacts of embedded option values for prepayment and default are generally quite similar across both FRM and ARM loans, providing additional empirical support for the basic predictions of the options theory. We also find that differences in estimates of conditional probabilities of prepayment and default associated with mortgage age, origination period, original LTV, and relative loan size, indicate the continued significance of these other economic and demographic factors for empirical models of mortgage terminations. Copyright 2002 by Kluwer Academic Publishers
Article
In the analysis of data it is often assumed that observations y1, y2, …, yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples.
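The family of power transformations analysed there is usually written as

\[
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt]
\log y, & \lambda = 0,
\end{cases}
\]

with the transformation parameter λ chosen by maximising the likelihood (or examining the posterior) of the normal, homoscedastic linear model fitted to the transformed responses.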
Article
In this paper, we collect consumer delinquency data from several economic shocks in order to study the creation of stress-testing models. We leverage the dual-time dynamics modeling technique to better isolate macroeconomic impacts whenever vintage-level performance data is available. The stress-testing models follow a framework described here of focusing on consumer-centric macroeconomic variables so that the models are as robust as possible when predicting the impacts of future shocks.
Article
This paper discusses the use of dynamic modelling in consumer credit risk assessment. It surveys the approaches and objectives of behavioural scoring, customer scoring and profit scoring. It then investigates how Markov chain stochastic processes can be used to model the dynamics of the delinquency status and behavioural scores of consumers. It discusses the use of segmentation, mover–stayer models and the use of second- and third-order models to improve the fit of such models. The alternative survival analysis proportional hazards approach to estimating when default occurs is considered. Comparisons are made between the ways credit risk is modelled in consumer lending and corporate lending.
Article
Graphical methods based on the analysis of residuals are considered for the setting of the highly-used D. R. Cox [J. R. Stat. Soc., Ser. B 34, 187-220 (1972; Zbl 0243.62041)] regression model and for the P. K. Andersen and R. D. Gill [Ann. Stat. 10, 1100-1120 (1982; Zbl 0526.62026)] generalization of that model. We start with a class of martingale-based residuals as proposed by W. E. Barlow and R. L. Prentice [Biometrika 75, 65-74 (1988; Zbl 0632.62102)]. These residuals and/or their transforms are useful for investigating the functional form of a covariate, the proportional hazards assumption, the leverage of each subject upon the estimates of β, and the lack of model fit for a given subject.
Article
We propose a new method for analysing multi-period stress scenarios for portfolio credit risk more systematically than in the current practice of macro stress testing. Our method quantifies the plausibility of scenarios by considering the distance of the stress scenario from an average scenario. For a given level of plausibility our method searches systematically for the most adverse scenario for the given portfolio. This method therefore gives a formal criterion for judging the plausibility of scenarios and it makes sure that no plausible scenario will be missed. We show how this method can be applied to a range of models already in use among stress testing practitioners. While worst case search requires numerical optimisation we show that for practically relevant cases we can work with reasonably good linear approximations to the portfolio loss function that make the method computationally very efficient and easy to implement. Applying our approach to data from the Spanish loan register and using a portfolio credit risk model we show that, compared to standard stress test procedures, our method identifies more harmful scenarios that are equally plausible.
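One standard way to formalise such a plausibility constraint is the Mahalanobis distance of the scenario from the average scenario; in schematic form (our notation) the worst-case search reads

\[
\max_{s} \; L(s)
\quad \text{subject to} \quad
(s - \mu)^{\top} \Sigma^{-1} (s - \mu) \le k^{2},
\]

where L(s) is the (possibly linearly approximated) portfolio loss under risk-factor scenario s, μ and Σ are the mean and covariance of the risk-factor changes, and the radius k fixes the required level of plausibility.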
Article
We use response data collected by a lender to estimate the probabilities of loan offers being accepted by the applicants and the survival probabilities of default and of paying back early. Combining all of these, we estimate the expected profit surface for the lender at the time of application, before an offer is made to an applicant. The results show how a lender could find the optimal interest rate to increase the expected profit or its market share. We also consider how different optimal decision policies could be applied to different market segments.
Article
We explore the impact of possible non-linearities on credit risk in a VAR set-up. We look at two measures of credit risk: quarterly aggregate liquidation rates and quarterly firm-specific probabilities of default, which are derived by a new method from annual default data. We show three important results. First, non-linearities matter for the level and shape of impulse response functions of credit risk following small as well as large shocks to systematic risk factors. Second, in the non-linear model the impact of a shock depends significantly on the starting level of the exogenous variables. Depending on actual conditions and the forecast horizon, the level as well as the sign of the impact can change. Third, we account for uncertainty in our estimates and show that ignoring this can lead to a substantial underestimation of credit risk, especially in extreme conditions.
Article
This paper reviews the state-of-the-art of macro stress-testing methodologies. We assess the progress made both in the econometric analysis of balance sheet indicators and in the simulation of value-at-risk measures to assess system-wide vulnerabilities. To illustrate the main analytical approaches in the literature, we estimate two different models for stress-testing purposes using data for Finland over the time period from 1986 to 2003. The Finnish experience in the early 1990s appears particularly suited for macro stress-testing as it includes a severe recession with significantly higher-than-average default rates and banks' loan losses. We highlight a number of methodological challenges that still remain, concerning in particular the correlation of market and credit risks over time and across institutions, the limited time horizon generally used for macro stress-testing, and the potential instability of reduced-form parameter estimates because of feedback effects.
Article
Credit and interest rate risk are the two most important risks faced by commercial banks in their banking book. In this paper we derive a consistent and comprehensive framework to measure the integrated impact of both risks. By taking account of the repricing characteristics of assets, liabilities and off balance sheet items, we assess the integrated impact of credit and interest rate risk on banks’ economic value and capital adequacy. We then stress test a hypothetical but realistic bank using our framework and show that it is fundamental to measure the impact of credit and interest rate risk jointly.
Article
Credit scoring discriminates between 'good' and 'bad' credit risks to assist credit-grantors in making lending decisions. Such discrimination may not be a good indicator of profit, while survival analysis allows profit to be modelled. The paper explores the application of parametric accelerated failure time and proportional hazards models and the Cox non-parametric model to data on retail cards (revolving credit) from three European countries. The predictive performance of three national models is tested for different timescales of default and then compared to that of a single generic model for a timescale of 25 months. It is found that the national and generic survival analysis models produce predictive quality very close to the current industry standard, logistic regression. Stratification is investigated as a way of extending the Cox non-parametric proportional hazards model to tackle heterogeneous segments in the population.
Article
We analyse the behaviour of euro area corporate sector probabilities of default under a wide range of domestic and global macro-financial shocks. Using the Global Vector Autoregressive (GVAR) model and constructing a linking satellite equation for firm-level Expected Default Frequencies (EDFs) we show that, at the aggregate level, the median EDFs react most to shocks to GDP, exchange rate, oil prices and equity prices. Intuitive variations to these results occur when sector-level median EDFs are considered. The satellite-GVAR model emerges as a useful tool for linking global macro-financial scenarios with micro-level information on expected defaults.
Article
How might society ensure the allocation of credit to those who lack meaningful collateral? Two very different options that have each been pursued by a variety of societies through time and space are (i) relatively harsh penalties for default and, more recently, (ii) loan guarantee programs that allow borrowers to default subject to moderate consequences and use public funds to compensate lenders. The goal of this paper is to provide a quantitative statement about the relative desirability of these responses. Our findings are twofold. First, we show that under a wide array of circumstances, punishments harsh enough to ensure all debt is repaid improve welfare. With respect to loan guarantees, our findings suggest that such efforts are largely useless at best, and substantially harmful at worst. Generous loan guarantees virtually ensure substantially higher taxes — with transfers away from the non-defaulting poor to the defaulting middle-class — and greater deadweight loss from high equilibrium default rates. Taken as a whole, our findings suggest that current policy toward default is likely to be counterproductive, and that guarantees for consumption loans are not the answer.
Article
This article uses a new dataset of credit card accounts to analyze credit card delinquency, personal bankruptcy, and the stability of credit risk models. We estimate duration models for default and assess the relative importance of different variables in predicting default. We investigate how the propensity to default has changed over time, disentangling the two leading explanations for the recent increase in default rates--a deterioration in the risk composition of borrowers versus an increase in borrowers' willingness to default due to declines in default costs. Even after controlling for risk composition and economic fundamentals, the propensity to default significantly increased between 1995 and 1997. Standard default models missed an important time-varying default factor, consistent with a decline in default costs. Copyright 2002, Oxford University Press.
Article
This paper presents a new approach to modeling conditional credit loss distributions. Asset value changes of firms in a credit portfolio are linked to a dynamic global macroeconometric model, allowing macroeffects to be isolated from idiosyncratic shocks from the perspective of default (and hence loss). Default probabilities are driven primarily by how firms are tied to business cycles, both domestic and foreign, and how business cycles are linked across countries. We allow for firm-specific business cycle effects and the heterogeneity of firm default thresholds using credit ratings. The model can be used, for example, to compute the effects of a hypothetical negative equity price shock in South East Asia on the loss distribution of a credit portfolio with global exposures over one or more quarters. We show that the effects of such shocks on losses are asymmetric and non-proportional, reflecting the highly nonlinear nature of the credit risk model.
Article
In recent months and years both practitioners and regulators have embraced the ideal of supplementing VaR estimates with "stress-testing". Risk managers are beginning to place an emphasis and expend resources on developing more and better stress-tests. In the present paper, we hold the standard approach to stress-testing up to a critical light. The current practice is to stress-test outside the basic risk model. Such an approach yields two sets of forecasts: one from the stress-tests and one from the basic model. The stress scenarios, conducted outside the model, are never explicitly assigned probabilities. As such, there is no guidance as to the importance or relevance of the results of stress-tests. Moreover, how to combine the two forecasts into a usable risk metric is not known. Instead, we suggest folding the stress-tests into the risk model, thereby requiring all scenarios to be assigned probabilities.
Article
This paper provides the first rigorous assessment of the homeownership experiences of subprime borrowers. We consider homeowners who used subprime mortgages to buy their homes, and estimate how often these borrowers end up in foreclosure. In order to evaluate these issues, we analyze homeownership experiences in Massachusetts over the 1989–2007 period using a competing risks, proportional hazard framework. We present two main findings. First, homeownerships that begin with a subprime purchase mortgage end up in foreclosure almost 20 percent of the time, or more than 6 times as often as experiences that begin with prime purchase mortgages. Second, house price appreciation plays a dominant role in generating foreclosures. In fact, we attribute most of the dramatic rise in Massachusetts foreclosures during 2006 and 2007 to the decline in house prices that began in the summer of 2005.
Article
The problems of how to evaluate and compare the quality of models formed from panel data are discussed. Using the lessons learnt from the evaluation of time series models by post-sample forecasting, a variety of tests are suggested that use several out-of-sample parts of a panel data set, namely regions and time periods not used in estimation. Emphasis is on comparing models, and two real-data examples are provided: a transition model of land use in the Amazon region and a production model using Jorgenson's input-output database. Specific consideration is given to data quality and the effect of outliers, and to policy model evaluation.
Jokivoulle, E., & Viren, M. (2011). Cyclical default and recovery in stress testing loan losses. Journal of Financial Stability, 9(1), 139–149.
Bank for International Settlements (BIS) (2005). Stress testing at major financial institutions: survey results and practice. Working report from the Committee on the Global Financial System.
Basel Committee on Banking Supervision (BCBS) (2006). Basel II: international convergence of capital measurement and capital standards. www.bis.org/publ/bcbsca.htm.
Bellotti, T., & Crook, J. (2009). Credit scoring with macroeconomic variables using survival analysis. Journal of the Operational Research Society, 60(12), 1699–1707.
Financial Services Authority (FSA) (2008). Stress and scenario testing. Consultation paper 08/24. FSA: UK.
Board of Governors of the Federal Reserve System (FRS) (2009). The supervisory assessment program: overview of results. FRS: USA.