State-of-the-art techniques and tools needed to facilitate effective credit portfolio management and robust quantitative credit analysis. Filled with in-depth insights and expert advice, Active Credit Portfolio Management in Practice serves as a comprehensive introduction to both the theory and real-world practice of credit portfolio management. The authors have written a text that is technical enough, both in terms of background and implementation, to cover what practitioners and researchers need for actually applying these types of risk management tools in large organizations, but which, at the same time, avoids technical proofs in favor of real applications. Throughout the book, readers are introduced to the theoretical foundations of the discipline and learn about structural, reduced-form, and econometric models successfully used in the market today. The book is full of hands-on examples and anecdotes, and theory is illustrated with practical application. The authors' website provides additional software tools in the form of Excel spreadsheets, Matlab code, and S-Plus code. Each section of the book concludes with review questions designed to spark further discussion and reflection on the concepts presented. © 2009 Jeffrey R. Bohn and Roger M. Stein. All rights reserved.


... The power of a scorecard measures the extent to which defaults are avoided when classifying good borrowers. However, even though a scorecard can have strong power, calibration is needed to match actual default rates (Bohn & Stein, 2009). The overall state of the economy during the business cycle creates a problem for the application of credit scoring from an acquisition point of view. ...

... Medema et al. (2009) indicated that a model is well calibrated if the fraction of events that actually occur is estimated without bias by the estimated probability of those events. Bohn and Stein (2009) asserted that calibration requires two steps: mapping the scores of a model to historical empirical probabilities, and adjusting for the difference between historical empirical default rates and actual default rates (i.e. the probability needs to be adjusted to reflect the true prior distribution). Bohn and Stein (2009) emphasised that the first objective of credit scorecards was to obtain a high degree of predictive power and suggested simple calibration techniques to map model probabilities to empirical probabilities when econometric assumptions were invalid. ...

... Bohn and Stein (2009) asserted that calibration required two steps: mapping the scores of a model to historical empirical probabilities and adjusting for the difference between historical empirical default rates and actual default rates (i.e. the probability needs to be adjusted to reflect the true prior distribution). Bohn and Stein (2009) emphasised that the first objective of credit scorecards was to obtain a high degree of predictive power and suggested simple calibration techniques to map model probabilities to empirical probabilities when econometric assumptions were invalid. This technique, nonparametric density estimation, improves the alignment between model predictions and actual default probabilities. ...
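The two-step calibration described in these excerpts can be sketched as follows. This is an illustrative reading, not the cited authors' implementation: the binning scheme, function names, and the odds-based base-rate adjustment are all assumptions.

```python
from collections import defaultdict

def empirical_pd_map(scores, defaults, n_bins=10):
    """Step 1: map score bins to historical empirical default rates."""
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_bins or 1.0
    tally = defaultdict(lambda: [0, 0])  # bin -> [defaults, total]
    for s, d in zip(scores, defaults):
        b = min(int((s - lo) / width), n_bins - 1)
        tally[b][0] += d
        tally[b][1] += 1
    return {b: n_def / n_tot for b, (n_def, n_tot) in tally.items()}

def adjust_base_rate(pd, sample_rate, target_rate):
    """Step 2: rescale a historical PD to a target portfolio default rate
    by shifting the prior odds while keeping the likelihood ratio fixed."""
    odds = pd / (1 - pd)
    odds *= (target_rate / (1 - target_rate)) / (sample_rate / (1 - sample_rate))
    return odds / (1 + odds)
```

For example, a PD of 0.2 estimated on a balanced training sample shrinks substantially when the live portfolio's default rate is closer to 10%.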

Application scorecards play a critical part in determining the creditworthiness of applicants for acquisition purposes. However, although bad rates are monotonic across scores in both downturn and upturn periods, their levels differ because of procyclicality (which arises from the fluctuation of financial characteristics around a trend over the economic cycle). The procyclicality effect on a bureau scorecard is investigated from an acquisition perspective, with emphasis on South African retail banking data, and performance is compared between downturn and upturn periods. This paper contributes by proposing a methodology that incorporates a Bayesian calibration approach to adjust to future expected bad rates: the comparison indicates that calibration is essential to account for procyclicality.

... Best practice in integrated risk management recommends taking into account the mutual interaction of risks. The credit risk experts Bohn and Stein (2009) state that most credit risk models have underestimated credit risk because they do not take a liquidity premium into consideration. Meanwhile, the global financial crisis of 2007-2008, when the liquidity of financial markets quickly evaporated, showed the importance of accounting for liquidity premiums. ...

... Meanwhile, the global financial crisis of 2007-2008, when the liquidity of financial markets quickly evaporated, showed the importance of accounting for liquidity premiums. Bohn and Stein (2009) assert that there are, as yet, no fully developed models for loan pricing that take liquidity risk into account. This article is therefore devoted to developing an approach to loan pricing that starts from the task of integrated management of a bank's credit and liquidity risk. ...

... where R is the risky interest rate, r is the risk-free interest rate, and el is the specific expected loss (per unit of loan amount) (Bessis, 1988; Bohn and Stein, 2009): ...

In this paper, different approaches to loan pricing are compared, and a “Cash Flow at Risk” approach to loan pricing is suggested. Application of this approach helps protect a bank against both credit and liquidity risks and lets it earn interest income at a rate no less than the guaranteed one. An example of calculating the interest rate on a loan is given. The suggested approach is easily incorporated into the RAROC framework.
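As a hedged illustration of the general idea of pricing that covers expected losses (a textbook-style relationship, not necessarily the formula elided in the excerpt above): if a share `el` of the loan is expected to be lost, the break-even risky rate R satisfies (1 + R)(1 - el) = 1 + r.

```python
def break_even_rate(r, el):
    """Risky rate R such that the expected repayment (1 + R)(1 - el)
    matches the risk-free accumulation 1 + r."""
    return (1.0 + r) / (1.0 - el) - 1.0
```

With no expected loss the risky rate collapses to the risk-free rate; any positive `el` pushes R above r.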

... Besides the expected credit loss model, which financial reporting standards borrowed from risk management, there is also a model of risk-adjusted loan pricing [11]. The concept of the model is that the additional interest income generated by the credit spread (premium) must fully cover the expected credit losses on the loan over its lifetime. ...

... The best practices for credit risk assessment are statistical [11], based on the analysis of a large number of credit risk events, such as defaults. The Basel Committee recommends using precisely such statistical approaches for credit risk management, since they make it possible to separate expected from unexpected losses [2]. ...

This paper examines ways of overcoming inconsistencies between IFRS and modern concepts of credit risk management, namely the expected loss model and risk-adjusted loan pricing. It also considers the question of acceptable levels of concentration risk in a bank's credit portfolio.

... Scheme 1 shows the stylized aggregate balance sheet of the borrower, with equity E on the liability side (Crosbie & Bohn, 2003). As usual, the dynamics of the market value of the borrower's total assets are assumed to follow a geometric Brownian motion (Crosbie & Bohn, 2003; Bohn & Stein, 2009): ...

... where pd_1 is the probability of default, A_1(t) is the market value of the borrower's assets, and D is the book value of the borrower's debt at time t. Then, from the Black-Scholes model, it follows that the probability of default at time t in the future is equal to (Crosbie & Bohn, 2003; Bohn & Stein, 2009): ...

A simple approach to explicitly estimating a credit limit for a firm, based on Moody's KMV model, is developed. It takes into account the term to maturity of the loan, the quality of assets, the structure of the balance sheet and the required level of default probability. The proposed approach captures such well-known intuitive phenomena as: the longer the term to maturity, the lower the credit limit; the higher the level of confidence, the lower the credit limit; and the higher the volatility of the return on assets, the lower the credit limit. The result of estimating the credit limit on an unsecured loan to a firm is given. A contribution of the approach is that it accounts for the fact that a firm may invest new debt in new assets whose quality differs from that of its existing assets.
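The Merton/KMV-type default probability underlying this family of models can be sketched as follows. The symbols and parameter values are generic placeholders, not the paper's exact calibration.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_pd(assets, debt, mu, sigma, t):
    """P(A_t < D) when assets follow a geometric Brownian motion with
    drift mu and volatility sigma over horizon t (Merton/KMV-style)."""
    d = (log(assets / debt) + (mu - 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return NormalDist().cdf(-d)
```

For a solvent firm, lengthening the horizon or raising asset volatility increases the PD, which matches the monotonic intuitions listed in the abstract.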

... Using results by Bohn and Stein (2009), and expressing the undiscounted expected credit losses through cash flows, we write them in the following form: ...

... Using results by Bohn and Stein (2009), and expressing the undiscounted unexpected credit losses through cash flows, we write them in the following form: ...

To price a bank's assets correctly, it is important to know the cost of funds. But funding cost calculation is complicated by the fact that banks fund long-term assets through short-term liabilities. As a result, assets with a given time to maturity are usually financed by several liabilities with different maturities. To calculate the funding cost, one needs to know how cash flows are matched between assets and liabilities; for this, a cash flow matching matrix, or funding matrix, is used. In this paper, a new algorithm for filling a two-dimensional funding matrix, based on the golden rule of banking and a modified RAROC approach, is proposed. It guarantees positive definiteness and uniqueness of the matrix. The matrix shows the terms to maturity and amounts of the liability cash flows that fund the asset cash flow with a given term to maturity. Examples of partially and fully filled matrices are presented. An approach to risk-adjusted pricing based on this funding matrix and a RAROC approach adapted to cash flows is proposed. The developed approach to pricing organically integrates credit and liquidity risks. It takes into consideration expected credit losses and economic capital (unexpected credit losses) over the whole lifetime of the asset cash flows, rather than the one-year period traditionally used in RAROC.
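A minimal sketch of the RAROC logic that this approach adapts (a simplified one-period form; the variable names and the hurdle-rate inversion are assumptions, not the paper's cash-flow version):

```python
def raroc(interest_income, funding_cost, expected_loss, economic_capital):
    """One-period RAROC: risk-adjusted net income over the economic
    capital held against unexpected credit losses."""
    net = interest_income - funding_cost - expected_loss
    return net / economic_capital

def min_interest_income(funding_cost, expected_loss, economic_capital, hurdle):
    """Smallest interest income for which RAROC meets the hurdle rate."""
    return hurdle * economic_capital + funding_cost + expected_loss
```

Inverting RAROC for the required income is the usual route from a hurdle rate to a risk-adjusted loan price.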

... In this paper, we revisit the concept of two calibration steps as used by Bohn and Stein (2009). According to Bohn and Stein (2009) the two steps are a consequence of the fact that, usually, the first calibration of a rating model is conducted on a training sample in which the proportion of good and bad might not be representative of the live portfolio. ...

... On the basis of the data presented in this section, it is also worthwhile to clarify precisely the concept of a two-step (or two-period) approach to the calibration of a rating model as mentioned by Bohn and Stein (2009). The first period is the estimation period, the second period is the calibration and forecast period. ...

PD curve calibration refers to the transformation of a set of rating grade
level probabilities of default (PDs) to another average PD level that is
determined by a change of the underlying portfolio-wide PD. This paper presents
a framework that allows one to explore a variety of calibration approaches and the
conditions under which they are fit for purpose. We test the approaches
discussed by applying them to publicly available datasets of agency rating and
default statistics that can be considered typical for the scope of application
of the approaches. We show that the popular 'scaled PDs' approach is
theoretically questionable and identify an alternative calibration approach
('scaled likelihood ratio') that is both theoretically sound and performs
better on the test datasets.
Keywords: Probability of default, calibration, likelihood ratio, Bayes'
formula, rating profile, binary classification.
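The contrast drawn in the abstract can be sketched as follows. This is a simplified reading, not the paper's exact construction: 'scaled PDs' rescales grade-level probabilities multiplicatively, while a likelihood-ratio approach moves only the prior odds.

```python
def scaled_pds(grade_pds, old_avg, new_avg):
    """Naive rescaling: multiply each grade PD by the ratio of average PDs.
    Can push PDs above 1, one reason the approach is theoretically shaky."""
    return [p * new_avg / old_avg for p in grade_pds]

def scaled_likelihood_ratio(grade_pds, old_avg, new_avg):
    """Keep each grade's likelihood ratio fixed and shift only the prior
    odds from old_avg to new_avg; results always stay in (0, 1)."""
    factor = (new_avg / (1 - new_avg)) / (old_avg / (1 - old_avg))
    out = []
    for p in grade_pds:
        odds = factor * p / (1 - p)
        out.append(odds / (1 + odds))
    return out
```

Tripling the portfolio-wide PD sends a 0.6 grade PD above 1 under naive scaling, while the odds-based transform keeps it a valid probability.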

... While the actual nature of an individual obligor's debt is considerably more complex, with default possible at several times, the preceding assumptions do provide us with a quality, widely used starting point for credit risk modelling. There have been extensions in terms of asset value modelling (see Bluhm et al. (2010); Bohn and Stein (2009);McNeil et al. (2015) for an overview), but the Merton model remains the "prototype" of many credit risk models, such as Bluhm and Overbeck (2003); Frei and Wunsch (2018); Gordy (2000). In particular, the Merton model is at the basis of the capital requirement described by the Basel Committee on Banking Supervision (2005), whose framework Miu and Ozdemir (2017) suggest to employ for IFRS 9 purposes. ...

A recently introduced accounting standard, namely the International Financial Reporting Standard 9, requires banks to build provisions based on forward-looking expected loss models. When there is a significant increase in credit risk of a loan, additional provisions must be charged to the income statement. Banks need to set for each loan a threshold defining what such a significant increase in credit risk constitutes. A low threshold allows banks to recognize credit risk early, but leads to income volatility. We introduce a statistical framework to model this trade-off between early recognition of credit risk and avoidance of excessive income volatility. We analyze the resulting optimization problem for different models, relate it to the banking stress test of the European Union, and illustrate it using default data by Standard and Poor’s.

... Bohn and Stein (2009) pointed out that risk is the possibility that the value of an asset fluctuates over a given period. However, these definitions are not the only ones, since some experts have defined risk in terms of the probability of default. ...

This article assesses, from a microeconomic point of view, the risk and interest rate determinants applied by banks to small and medium-sized enterprises (SMEs) in Senegal in 2016. Using Heckman's (1979) model, two equations are estimated. The first empirical model specifies the credit risk of SMEs, and the second gives the impact of the credit score and the characteristics of small and medium enterprises on the interest rate. The results provide valuable information on the characteristics that banks and non-bank financial institutions consider important in their decision to lend to SMEs. Knowledge of this information can provide SMEs with a clear set of criteria that must be met to obtain financing from financial institutions in Senegal. According to the findings of the study, credit risk has had a significant influence on the interest rate and the loans granted to SMEs by banks in Senegal.

... They are generally grouped into two major categories: active investment management and passive investment management, with the term "passive investment" covering both index investment and portfolio insurance. A general idea of the major trends in investment management is given below (Bohn and Stein, 2009, p. 16). ...

... J.R. Bohn and R.M. Stein [35] emphasize that most credit models underestimate credit spreads because they do not take the liquidity premium into account. The sharp fall in liquidity after 2007 showed the importance of accounting for liquidity premiums. The same remark applies to banks' insufficient use of risk-adjusted pricing. ...

The article demonstrates that the 2008-2009 financial crisis in Ukraine had a significant and complex impact on its budget. It is shown that the banking crisis led to a reduction in tax revenue, the diversion of public funds for the capitalization of state banks, and the nationalization of systemic private banks. The main reasons for this negative impact were the poor level of risk management and its inefficient implementation within the overall system of financial management. As a result, the risks of the banking system of Ukraine were significantly underestimated. To safeguard the state budget from unexpected increases in expenses on the capitalization of state banks and the nationalization of private ones, and from reductions in tax revenue, it was proposed to develop strategies for improving the controllability of banks, to simulate the banking system of Ukraine on the basis of a system dynamics model, and to strengthen the supervision of banks by introducing reporting on projected cash flows and cash flows at risk, along with a corresponding methodology for assessing the risk of a bank's net cash loss before changes in operating assets and liabilities and for stress-testing the bank's net cash profit and loss.

... [6] to convert these observed probabilities of default to the risk drift and volatility of the underlying, where Φ and Φ⁻¹ are the cumulative and inverse cumulative normal distribution functions, respectively. We estimate the drift and volatility for the underlying from the simulated values of an all-equity structure. ...

Financing drug development has a particular set of challenges, including long development times, a high chance of failure, significant market valuation uncertainty, and high costs of development. The earliest stages of translational research pose the greatest risks, which have been termed the "valley of death" as a result of a lack of funding. This thesis focuses on an exploration of financial engineering techniques aimed at addressing these concerns. Despite the recent financial crisis, many suggest that securitization is an appropriate tool for financing such large social challenges. Although securitization has been demonstrated effectively at later stages of drug development for royalties on approved drugs, it has yet to be utilized at earlier stages. This thesis starts by extending the model of drug development proposed by Fernandez et al. (2012). These extensions significantly influence the resulting performance and optimal securitization structures. Budget-constrained venture firms targeting high financial returns are incentivized to fund only the best projects, thereby potentially stranding less-attractive projects. Instead, such projects have the potential to be combined in larger portfolios through techniques such as securitization, which reduce the cost of capital. In addition to modeling extensions, we provide examples of a model calibrated to orphan drugs, which we argue are particularly suited to financial engineering techniques. Using this model, we highlight the impact of our extensions on financial performance and compare with previously published results. We then illustrate the impact of incorporating a credit enhancement or guarantee, which allows for added flexibility of the capital structure and therefore greater access to lower-cost capital. As an alternative to securitization, we provide some examples of a structured equity approach, which may allow for increased access to, or efficiency of, capital by matching investor objectives.
Finally, we provide examples of optimizing the Sortino ratio through constrained Bayesian optimization.

... The traditional approach to estimating the future expected cash flows from loans assumes that each individual payment on the loan has only two states: paid or defaulted. Default is presumed to occur immediately after the failure of an individual payment on the loan, neglecting its overdue term (see, for example, Bohn and Stein, 2009; Jorion, 2003; Resti and Sironi, 2007). ...

A new model for predicting the future expected cash flows from a loan is developed. It is based on a detailed analysis of the events of fulfilment, delinquency and default of each individual payment on the loan. The proposed model has significantly less uncertainty than a Markov chain model with the same level of detail. The model is expected to have greater predictive power than traditional models, and its use will make it possible to reduce the interest rate on the loan. The results of estimating the probabilities of payments over time and the future expected cash flows are given for a loan with equal monthly principal repayments.

... In addition, the US 1-year Treasury rate is obtained from the St. Louis Fed (FRED); T = 1 year is the time horizon under consideration in our model; and the junior debt (equity) volatility is calculated from the time series data of L. By solving the above equations, we can extract the implied asset value A and asset volatility of each government entity. Lastly, it is worth noting that another approach to backing out the implied government asset value is the Vasicek-Kealhofer model (Bohn & Stein, 2009); this method solves for the asset value and volatility in a recursive fashion. ...

The current European debt crisis has made sovereign credit risk a popular topic. In this paper we adapt an established structural model for assessing sovereign credit risk and expand it to evaluate the cases of California and Greece. Specifically, major political events such as a bailout or a breakout from a monetary union are not accounted for in current models, even though they may introduce non-linearities in the behavior of the default probability. In this paper, we attempt to account for these extra factors and solve the problem numerically. We rely on a 2-D finite-differences method, modeling both the risky assets of our target sovereignty and those of its encompassing monetary union. Finally, we detail a method for hedging sovereign credit risk using tradable securities.

... Let us focus our attention on how to compute the cumulative probability of default. For this, the following relationship is usually used (Jorion, 2003; Resti and Sironi, 2007; Bohn and Stein, 2009): ...
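A common textbook form of the relationship the excerpt refers to (not necessarily the exact formula elided above) builds the cumulative PD from per-period conditional PDs via survival probabilities:

```python
def cumulative_pd(period_pds):
    """Cumulative default probability over the horizon, assuming default in
    each period is conditional on having survived the previous periods."""
    survival = 1.0
    for pd in period_pds:
        survival *= 1.0 - pd
    return 1.0 - survival
```

For instance, two years at a 2% conditional annual PD give a cumulative PD of 1 - 0.98², slightly below the naive sum of 4%.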

In this paper, to estimate credit risk spreads, it is proposed to recognize interest losses immediately after the default of a loan, i.e. to stop accruing interest on the defaulted loan. The common approach, by contrast, supposes recognition of interest losses on a defaulted loan only at maturity and, correspondingly, accrual of interest on the defaulted loan up to maturity. The proposed approach leads to probable losses of the bank's interest that turn out to be smaller than the interest losses computed by the usual formula. This difference is explained by the fact that the average (over the loan's lifetime) working, non-defaulted share of the loan is less than the share estimated at the loan's maturity and usually used in the calculation. The importance of credit rating migration for credit spread valuation is shown. The credit spreads are evaluated for a bank's fixed-rate bullet loan in which both principal and interest are paid at maturity. Examples of calculating the term structure of credit spreads are given.

... Of course, this could be interpreted as evidence of incompatibility as in the case of violation of the likelihood ratio condition in Theorem 2.5 (i). Bohn and Stein (2009) present an alternative approach which uses the 'change of base rate' theorem (Elkan, 2001, Theorem 2). However, the solution by that approach in general does not solve (2.10) because often the outcome is ...

The law of total probability may be deployed in binary classification
exercises to estimate the unconditional class probabilities if the class
proportions in the training set are not representative of the population class
proportions. We argue that this is not a conceptually good approach and suggest
an alternative based on the new law of total odds. The law of total odds can
also be used for transforming the conditional class probabilities if exogenous
estimates of the unconditional class probabilities of the population are given.

... Therefore, one of the key factors in determining whether a pool of assets can be securitized is whether the stochastic properties of the underlying assets' returns over time can be measured and managed. In the multi-trillion-dollar mortgage-backed securities market, the answer was (and still is) yes, as is the case for corporate debt and several other asset classes [29]. We believe the same may be true for biomedical research. ...

Biomedical innovation has become riskier, more expensive and more difficult to finance with traditional sources such as private and public equity. Here we propose a financial structure in which a large number of biomedical programs at various stages of development are funded by a single entity to substantially reduce the portfolio's risk. The portfolio entity can finance its activities by issuing debt, a critical advantage because a much larger pool of capital is available for investment in debt versus equity. By employing financial engineering techniques such as securitization, it can raise even greater amounts of more-patient capital. In a simulation using historical data for new molecular entities in oncology from 1990 to 2011, we find that megafunds of $5-15 billion may yield average investment returns of 8.9-11.4% for equity holders and 5-8% for 'research-backed obligation' holders, which are lower than typical venture-capital hurdle rates but attractive to pension funds, insurance companies and other large institutional investors.
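The risk-pooling logic behind the megafund can be illustrated with a toy Monte Carlo simulation. All parameters here are illustrative stand-ins, not the paper's calibrated oncology model.

```python
import random

def portfolio_loss_prob(n_projects, p_success, payoff, cost,
                        n_sims=10000, seed=42):
    """Monte Carlo sketch: probability that a portfolio of independent,
    identically distributed projects loses money overall. Pooling many
    projects sharply reduces this tail risk."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(n_sims):
        successes = sum(rng.random() < p_success for _ in range(n_projects))
        if successes * payoff < n_projects * cost:
            losses += 1
    return losses / n_sims
```

A single project with a 20% success rate loses money about 80% of the time; a pool of 100 such projects almost never does, which is what makes debt tranches against the pool feasible.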

... 5 The quality of rating systems is a multidimensional measure, which, for example comprises characteristics such as unbiasedness of PD estimates, predictive power and size, i.e. the ability to separate future defaulting firms from non-defaulting firms, timeliness of information or adjustments, transparency, and others. For an in-depth discussion, seeBohn/Stein (2009). 6 SeeDas et al. 2009 for a description of CDS instruments. ...

This study suggests a new framework for validating issuer credit ratings assigned by credit rating agencies (or any other type of rating system). Using a benchmark rating, based on publicly available information and high frequency market data, our framework builds on identifying severe (and permanent) shocks to firms' creditworthiness, particularly including financial distress. This provides a rich set of credit events which can be used to validate properties of credit ratings, because these shocks should lead to rating adjustments, even under a rating-through-the-cycle policy. As an illustration, the framework is applied to assess the information sensitivity of ratings by Standard & Poor's and the timeliness of their adjustments. We analyze instantaneous shocks, a financial status incompatible with being investment grade, and financial distress for a large sample of European companies from 2000-2010. S&P does not adjust its corporate rating in at least one third of all cases. Moreover, even if a rating change occurs, this happens typically at a lag of about four to six months. This insensitivity seems neither attributable to private information from monitoring nor to the rating-through-the-cycle approach employed by S&P.

This paper attempts to provide a first step toward understanding the role of credit portfolio management in Nepalese microfinance institutions (MFIs) and overcoming the problems associated with credit risk management. Credit portfolio management (CPM) has become one of the most crucial functions of Nepalese MFIs for maintaining sound loan portfolio quality. This study is based on a descriptive research design. Several findings are made through a review of the literature aligned with the objectives of the study. MFIs are financial intermediaries ("banks") that have a direct impact on economic and social transformation, such as job creation, income generation, social change, and poverty alleviation, via financial and non-financial activities. The findings show that a credit appraisal system, scientific interest rates, credit monitoring, a loan portfolio diversification system, capital optimization, risk framework development, regulatory management, credit control, credit advisory and credit research have reduced credit risk and ensured high-performing loans and financial sustainability. The study recommends that MFIs' portfolio management strategies focus more on the internal causes of delinquency, over which they have more control, and seek practical and achievable solutions to repayment delinquency problems. The study's findings will be useful to BFIs, institutional lenders, microfinance experts, regulators, economists, policymakers, and institutional credit rating agencies. The results reveal that portfolio diversification has a significant impact on credit portfolio management in Nepalese MFIs.

Financial models are an inescapable feature of modern financial markets. Yet it was over reliance on these models and the failure to test them properly that is now widely recognized as one of the main causes of the financial crisis of 2007–2011. Since this crisis, there has been an increase in the amount of scrutiny and testing applied to such models, and validation has become an essential part of model risk management at financial institutions. The book covers all of the major risk areas that a financial institution is exposed to and uses models for, including market risk, interest rate risk, retail credit risk, wholesale credit risk, compliance risk, and investment management. The book discusses current practices and pitfalls that model risk users need to be aware of and identifies areas where validation can be advanced in the future. This provides the first unified framework for validating risk management models.

The article proposes a scientific and practical approach to assessing the creditworthiness of a borrower from a Ukrainian bank. The study substantiates the following: 1) the existing problems of assessing borrower creditworthiness stem from banks' attempts to issue unsecured loans while not forming credit risk provisions at their own expense; 2) the potential creditworthiness of any borrower equals zero and implies 100% credit risk; 3) credit risk can be reduced, and the borrower's creditworthiness increased, only through the bank's acceptance of collateral under the loan agreement; 4) the amount of a potential loan depends entirely on the size of the collateral provided by the borrower under the loan agreement; 5) the net credit risk must be fully covered by the bank's own expenses on forming provisions against this risk.

This chapter reviews the mathematics of evaluating the credit risk of tranches of structured transactions with a simple loss-priority structure, using two common tranching approaches: PD-based tranching, where the probability of default of a tranche is the quantity of interest; and EL-based tranching, where the expected loss on a tranche is the quantity of interest. The chapter discusses the basic mathematics of tranching and some of its implications, including the observation that EL-based and PD-based tranching approaches are not interchangeable. It shows that, for a fixed detachment (attachment) point of a tranche, lowering the attachment (detachment) point necessarily increases the EL of the tranche regardless of the probability distribution of the collateral. The chapter presents the upper bound for the LGD on the senior-most tranche. It illustrates why some tranches with a target EL are unattainable under the EL-based tranching approach, even though they can exist under the PD-based approach.
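The EL-based tranching arithmetic, and the monotonicity observation about attachment points, can be sketched as follows (equally weighted loss scenarios stand in for a full loss distribution; this is an illustration, not the chapter's notation):

```python
def tranche_el(loss_scenarios, attach, detach):
    """Expected loss on a tranche [attach, detach], as a fraction of
    tranche notional, over equally likely portfolio-loss scenarios."""
    width = detach - attach
    hits = [min(max(loss - attach, 0.0), width) / width
            for loss in loss_scenarios]
    return sum(hits) / len(hits)
```

Holding the detachment point fixed and lowering the attachment point raises the tranche EL for any scenario set, as in the chapter's monotonicity result.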

This paper examines the empirical relationship between credit risk and interest rate risk. We use credit default swap (CDS) spreads as our measure of credit risk. Also, we control for the variation in the so-called fair-value spread that combines multiple sources of default risk, including the market price of risk, the loss given default and the expected default frequency. After taking into account the fair-value spread, the various proxies for the broad state of the macroeconomy and a liquidity risk factor, we find that the interest rate shock serves as a key determinant of CDS spread movements in most subsamples organized by industry type and credit rating status. Moreover, we find that the swap interest rate variables convey additional information about the CDS spread movements beyond the Treasury interest rate variables. These results have implications for the parameterization of baseline interest rate dynamics in the Monte Carlo simulation of economic capital for a given credit portfolio.

Analysts often find themselves working with less than perfect development and/or validation samples, and data issues typically affect the interpretation of default prediction validation tests. Discriminatory power and calibration of default probabilities are two key aspects of validating default probability models. This paper considers how data issues affect three important power tests: the accuracy ratio, the Kolmogorov-Smirnov test and the conditional information entropy ratio. The effect of data issues upon the Hosmer-Lemeshow test, a default probability calibration test, is also considered. A simulation approach is employed that allows the impact of data issues on model performance, when the exact nature of the data issue is known, to be assessed. We obtain several results from the tests of discriminatory power. For example, we find that random missing defaults have little impact on model power, while false defaults have a large impact on power. As with other common level calibration test statistics, the Hosmer-Lemeshow test statistic simply indicates to what degree the level calibration passes or fails. We find that the presence of any data issue tends to cause this test to fail, and, thus, we introduce additional statistics to describe how realized default probabilities differ from those expected. In particular, we introduce statistics to compare the overall default probability level with the realized default rate, and to assess the sensitivity of the realized default rate to changes in the predicted default probability.
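For concreteness, the accuracy ratio discussed above can be computed from the rank-based AUC. This is a small O(n·m) sketch under the convention that a higher score means higher risk; names and conventions are assumptions, not the paper's code.

```python
def auc(scores_bad, scores_good):
    """Rank-based AUC: probability that a randomly chosen defaulter scores
    higher (riskier) than a randomly chosen non-defaulter, ties counted half."""
    wins = 0.0
    for b in scores_bad:
        for g in scores_good:
            if b > g:
                wins += 1.0
            elif b == g:
                wins += 0.5
    return wins / (len(scores_bad) * len(scores_good))

def accuracy_ratio(scores_bad, scores_good):
    """Accuracy ratio (Gini): AR = 2 * AUC - 1."""
    return 2.0 * auc(scores_bad, scores_good) - 1.0
```

A perfectly discriminating model yields AR = 1, while a model indistinguishable from random ranking yields AR = 0, which is why data issues that shuffle the two classes show up directly in this statistic.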

While anomalous events are rare, extreme “non-normality” in real-world markets is more frequently observed than current risk management approaches allow for. Moreover, conventionally derived portfolios carry a higher level of downside risk than managers and investors believe, or current portfolio modeling techniques can identify.
These extreme, unpredictable events should trigger a review of, and a challenge to, conventional ideas about risk management frameworks, metrics and models. Specifically, the recent financial crisis has called into question the adequacy of value at risk (VaR) as a risk metric that theoretically determines the maximum loss a manager might sustain over a certain period of time.
In addition, the simple correlations often used in traditional asset allocation models assume a linear relationship between asset classes; that is, they assume that the relationship between the variables at the extremes is similar to their relationship at less extreme values. But if one month's return is influenced by the previous month's return and returns do not follow a Gaussian distribution, should traditional asset allocation frameworks be improved to allow for serial correlation?
The methodology and an analysis of these challenging questions are presented hereafter. A series of conclusions is then laid out, highlighting that:
- Returns are not independent and, in all cases examined, are not normally distributed.
- Correlation breaks down during crises and over time.
- Risk measures are inadequate.
- Risk measures should account for liquidity risk.
- Incorporating non-normality can lead to more efficient portfolios.
- Conditional VaR is a better risk quantifier in a non-normal framework.
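The contrast between VaR and conditional VaR drawn in these conclusions can be made concrete with a historical-simulation sketch (a generic illustration, not the article's methodology; the confidence level and the sign convention for losses are assumptions of mine):

```python
import numpy as np

def historical_var(returns, alpha=0.95):
    """Historical-simulation VaR at confidence alpha: the loss level
    exceeded in only (1 - alpha) of observed periods (losses positive)."""
    losses = -np.asarray(returns, dtype=float)
    return float(np.quantile(losses, alpha))

def conditional_var(returns, alpha=0.95):
    """Conditional VaR (expected shortfall): the average loss in the tail
    at or beyond the VaR threshold -- it 'sees' the tail that VaR ignores."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)
    return float(losses[losses >= var].mean())
```

Because CVaR averages over the tail, it is never smaller than VaR at the same confidence level, and it responds to how severe the extreme outcomes are, not just how often they occur.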

There has been a growing recognition that issues of data quality, which are routine in practice, can materially affect the assessment of learned model performance. In this paper, we develop some analytic results that are useful in sizing the biases associated with tests of discriminatory model power when these are performed using corrupt (“noisy”) data. As it is sometimes unavoidable to test models with data that are known to be corrupt, we also provide some guidance on interpreting results of such tests. In some cases, with appropriate knowledge of the corruption mechanism, the true values of the performance statistics such as the area under the ROC curve may be recovered (in expectation), even when the underlying data have been corrupted. We also provide estimators of the standard errors of such recovered performance statistics. An analysis of the estimators reveals interesting behavior including the observation that “noisy” data does not “cancel out” across models even when the same corrupt data set is used to test multiple candidate models. Because our results are analytic, they may be applied in a broad range of settings and this can be done without the need for simulation.
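One simple corruption mechanism of the kind this paper analyzes can be sketched as follows: if a known fraction of the observed "bad" sample is actually mislabeled goods that score like the retained goods, the observed AUC is a mixture, and the clean-label AUC can be recovered in expectation. This is a stylized illustration under that exchangeability assumption, not the paper's general analytic results:

```python
def auc(scores_bad, scores_good):
    """Area under the ROC curve: probability that a randomly drawn 'bad'
    outscores a randomly drawn 'good' (ties count one half)."""
    wins = sum(b > g for b in scores_bad for g in scores_good)
    ties = sum(b == g for b in scores_bad for g in scores_good)
    return (wins + 0.5 * ties) / (len(scores_bad) * len(scores_good))

def recover_auc(observed_auc, clean_fraction):
    """If a fraction (1 - clean_fraction) of the observed 'bad' sample is
    really mislabeled goods that score like other goods, those pairs
    contribute 0.5 in expectation, so
        observed = p * true + (1 - p) * 0.5,
    which can be inverted to recover the clean-label AUC."""
    p = clean_fraction
    return (observed_auc - (1.0 - p) * 0.5) / p
```

For instance, an observed AUC of 0.75 measured on a bad sample that is only half genuine defaults is consistent with a clean-label AUC of 1.0.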

The breadth and dynamics of the recent financial crisis have led to efforts to develop forward-looking tools to monitor systemic risk. In this article, the authors propose a new measure that is an extension of the absorption ratio (AR) introduced in 2010 by Kritzman, Li, Page, and Rigobon. Using principal component analysis (as in the original AR methodology) in conjunction with a structural model of default, the authors develop a measure of systemic risk that may be calculated using only publicly available data. They call the new measure the credit absorption ratio (CAR) and find that increases in the CAR preceded periods of financial distress during the recent crisis. The CAR may be interpreted economically: it highlights states of the financial system during which the credit fundamentals of institutions and markets exhibit heightened coupling and higher potential for cascading distress. The authors also demonstrate that a byproduct of CAR analysis provides a measure of the degree to which specific financial institutions are exposed to systemic risk factors at any point in time. They find that a number of the institutions that exhibited, under the CAR measure, high potential exposure during the lead-up to the recent crisis subsequently experienced higher levels of distress or required external assistance.
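The variance-concentration idea behind the absorption ratio can be sketched with principal component analysis of a return covariance matrix. This is a generic illustration of the AR construction of Kritzman, Li, Page, and Rigobon, not the authors' CAR, which additionally feeds a structural model of default:

```python
import numpy as np

def absorption_ratio(returns, n_components):
    """Fraction of total return variance 'absorbed' by the first
    n_components principal components of the covariance matrix; values
    near 1 indicate tightly coupled markets.  `returns` is T x N."""
    cov = np.cov(np.asarray(returns, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)        # ascending eigenvalues
    top = eigvals[::-1][:n_components]       # the n_components largest
    return float(top.sum() / eigvals.sum())
```

Two perfectly coupled assets give AR(1) = 1, while two independent assets of equal variance give AR(1) = 0.5; a rising ratio signals growing concentration of risk in a few common factors.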

In this paper, the concept of a continuous coupon bond, with continuous accrual of coupons at a simple fixed rate, is used to price a risky zero-coupon bond. It is shown that only this concept allows an explicit equation for the price of a risky zero-coupon bond to be obtained from fixed-coupon bond prices without any assumption about the default process, while providing continuous monitoring of default events despite the periodicity of bond cash flows, which improves the estimation of credit risk. To apply the concept, a simple condition is proposed for converting a discrete bond into a continuous coupon one. It is shown that this conversion is possible only under the following recovery assumption: the recovery rate is a part of the present value of the remaining cash flows that will not be paid due to default, and not under the fractional-recovery-of-par assumption. Examples of calculating the implied survival probabilities and credit spreads for Ukrainian Eurobonds are given.

Most credit literature concerns the management of loan or bond portfolios held by banks or by funds investing in fixed income. A sizeable part of global corporate financing, however, is done by trade credit, where suppliers extend credit to their customers by allowing them to pay at a delayed date for goods or services received. This paper provides an overview of the practical issues of building structured credit risk management in an industrial corporate. The paper looks at the economic theory behind providing trade credit and describes the various organizational models for managing the credit risk. It then proposes a data model structure for performing consistent credit risk measurement at both the single-obligor and the portfolio level, looking at the practical aspects of implementing it as a system (master data management, exposure aggregation, choice of external data, setting up a rating process, and building credit portfolio reporting and a loss model). It concludes with an analysis of best practices in corporate credit risk management, including limit setting, as well as a set of questions to help the corporate credit risk manager evaluate internal credit risk practices. Much of the content is based on an interview series conducted with credit managers and officers from large European industrial corporates, ensuring its consistency with current practices and real-world applicability.

In this second installment, the author addresses some of the problems associated with empirically validating contingent-claims models for valuing risky debt. The article uses a simple contingent-claims risky debt valuation model to fit term structures of credit spreads derived from data on U.S. corporate bonds. An essential component of fitting this model is the use of the expected default frequency: the estimate of a firm's expected default probability over a specific time horizon. The author discusses the statistical and econometric procedures used in fitting the term structure of credit spreads and estimating model parameters. These include iteratively reweighted non-linear least squares, used to dampen the impact of outliers and to ensure convergence in each cross-sectional estimation from 1992 to 1999.
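The robust estimation idea mentioned here, iteratively reweighted least squares with outlier damping, can be sketched for a simplified linear model (the article applies it to a non-linear spread model; the Huber tuning constant and the MAD-based scale estimate below are conventional choices of mine, not taken from the article):

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=25):
    """Iteratively reweighted least squares with Huber weights: refit,
    measure residuals, down-weight observations with large residuals,
    and repeat, so outliers progressively lose influence on the fit."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones(len(y))
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        resid = y - X @ beta
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale
        u = np.abs(resid) / scale
        w = np.where(u <= c, 1.0, c / np.maximum(u, c))    # Huber weights
    return beta
```

On a line y = 2x contaminated with one gross outlier, the robust slope stays near 2 where ordinary least squares would be pulled away.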

This article surveys available research on the contingent-claims approach to risky debt valuation. The author describes both the structural and reduced form versions of contingent claims models and summarizes both the theoretical and empirical research in this area. Relative to the progress made in the theory of risky debt valuation, empirical validation of these models lags far behind. This survey highlights the increasing gap between the theoretical valuation and the empirical understanding of risky debt.

Buy-out literature suggests that secured creditors will recoup substantial proportions of the funds they extend to finance the initial buy-out. This paper uses a unique dataset of 42 failed MBOs to examine the extent of credit recovery by secured lenders under UK insolvency procedures and the factors that influence the extent of this recovery. On average, secured creditors recover 62 per cent of the amount owed. The percentage of secured credit recovered is increased where the distressed buy-out is sold as a going concern and where the principal reason for failure concerns managerial factors. The presence of a going concern qualification in the audit report and the size of the buy-out reduce the recovery rate by secured creditors.

The dominant consideration in the valuation of mortgage-backed securities (MBS) is modeling the prepayments of the pool of underlying mortgages. Current industry practice is to use historical data to project future prepayments. In this paper we introduce a new approach and show how it can be used to value both pools of mortgages and mortgage-backed securities issued by the two government sponsored enterprises (Fannie Mae and Freddie Mac). We distinguish between prepayments that do not depend on interest rates and refinancings that do. Turnover and curtailment are modeled using a vector of prepayment speeds, while refinancings are modeled using a pure option-based approach. We describe the full spectrum of refinancing behavior using a notion of refinancing efficiency. There are financial engineers who refinance at just the right time, leapers who do it too early, and laggards who wait too long. We partition the initial mortgage pool into “efficiency buckets,” whose sizes are calibrated to market prices. The composition of the seasoned pool is then determined by the excess refinancings over baseline prepayments. Leapers are eliminated first, then financial engineers, and finally laggards. As the mortgage pool ages, its composition gradually shifts towards laggards, and this automatically accounts for what is commonly referred to as “prepayment burnout.” Our approach has two distinguishing features: (1) our primary focus is on understanding the market value of a mortgage, in contrast with standard models that strive (often unsuccessfully) to predict future cash flows, and (2) we use two separate yield curves, one for discounting mortgage cash flows and the other for MBS cash flows.

Reduced-form credit risk models are often thought to be better suited than structural models for pricing corporate bonds. The authors challenge this view. Conditioned not only on equity but on bond and dividend information also, a structural model performs well compared to previously tested reduced-form models. In the pricing of bond portfolios, model errors are to a large extent diversifiable.

The term structure of interest rates contains information about the market's expectations of the direction of future interest rates. Similarly, the term structure of credit spreads contains information about the market's perception of future credit spreads. The term structure of credit spreads is closely linked with conditional default probabilities and this link suggests a downward sloping term structure of credit spreads for high risk issuers, whose default probability conditional on survival is likely to decrease. This paper shows that for sufficiently low credit quality, as defined by the level of credit spreads, this holds true most of the time when spreads are taken from credit default swap (CDS) markets. We also discuss why CDS markets give a better way of analyzing this problem than bond price data.
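The link this paper draws between the spread term structure and conditional default probabilities can be sketched with the constant-hazard "credit triangle" approximation (a textbook simplification, not the paper's estimation approach; the recovery rate is an input assumption):

```python
import math

def hazard_from_spread(spread, recovery):
    """'Credit triangle': a flat CDS spread s with recovery R implies a
    constant default intensity lambda ~ s / (1 - R)."""
    return spread / (1.0 - recovery)

def forward_hazard(s1, t1, s2, t2, recovery):
    """Default intensity between t1 and t2 implied by flat spreads to the
    two maturities.  A downward-sloping curve (s2 < s1) implies a lower
    conditional default intensity after surviving to t1."""
    l1 = hazard_from_spread(s1, recovery)
    l2 = hazard_from_spread(s2, recovery)
    return (l2 * t2 - l1 * t1) / (t2 - t1)

def conditional_pd(s1, t1, s2, t2, recovery):
    """Probability of default in (t1, t2] conditional on surviving to t1."""
    lam = forward_hazard(s1, t1, s2, t2, recovery)
    return 1.0 - math.exp(-lam * (t2 - t1))
```

With a 1-year spread of 600bp and a 2-year spread of 500bp at 40% recovery, the implied forward intensity after year one is below the first-year intensity, which is the inverted-curve intuition for low-quality issuers described above.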

This article, based on a 27-year study, describes the characteristics of commercial and industrial loan defaults in Latin America (LA), including in particular the loss in the event of default (LIED). For banks, improved understanding of losses enables lenders to make better pricing decisions, to allocate capital more efficiently, and to obtain more accurate estimates of loan losses and valuations of existing loan portfolios. For investors, the benefit is a more informed decision about portfolio diversification. The article provides characteristics of 1,149 Latin American defaults, examines the distribution of defaults by country, depicts the stability of LA LIED by year, and reports the effects of sovereign events on Latin American corporate loan loss rates.

This article presents a new methodology for estimating recovery rates and the (pseudo) default probabilities implicit in both debt and equity prices. In this methodology, recovery rates and default probabilities are correlated and depend on the state of the macroeconomy. This approach makes two contributions: First, the methodology explicitly incorporates equity prices in the estimation procedure. This inclusion allows the separate identification of recovery rates and default probabilities and the use of an expanded and relevant data set. Equity prices may contain a bubble component - which is essential in light of recent experience with Internet stocks. Second, the methodology explicitly incorporates a liquidity premium in the estimation procedure - which is also essential in light of the large observed variability in the yield spread between risky debt and U.S. Treasury securities and the illiquidities present in risky-debt markets.

It is often the case in default modeling that the need arises to calibrate a model to some prior probability of default. In many situations, a researcher may not know the true prior default rate for the population because the data set at hand is itself incomplete, either with respect to default identification (hidden defaults) or default under reporting. In situations where a researcher has access to two incomplete default data sets, for example in the case of two banks that have merged, it is possible to infer the number of “missing” defaults, which we demonstrate in this short note. We discuss an approach to estimating this quantity and show an example in which we infer the number of missing defaults in the combined legacy databases of the former Moody’s Risk Management Services and the former KMV Corporation. While calibration is one application of this approach, the method is a general one that can be applied in other settings as well.
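One standard way to infer missing counts from two incomplete lists is the capture-recapture (Lincoln-Petersen) estimator; the note's approach is in this spirit, though its exact estimator may differ. A minimal sketch, assuming the two databases record defaults independently:

```python
def estimate_total_defaults(n1, n2, overlap):
    """Lincoln-Petersen capture-recapture estimate of the true number of
    defaults, given two independently collected default lists of sizes
    n1 and n2 that share `overlap` defaults: N ~ n1 * n2 / overlap."""
    if overlap <= 0:
        raise ValueError("estimator requires at least one shared default")
    return n1 * n2 / overlap

def estimate_missing_defaults(n1, n2, overlap):
    """Estimated defaults appearing in *neither* database."""
    observed = n1 + n2 - overlap          # defaults seen at least once
    return estimate_total_defaults(n1, n2, overlap) - observed
```

If one legacy database holds 100 defaults, the other 80, and 40 are common to both, the estimate is 200 defaults in total, so roughly 60 are hidden from the combined data and the prior default rate used in calibration should be raised accordingly.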

We discuss the challenges in developing decision support tools for commercial underwriting and describe how several different approaches to the underwriting problem have been addressed. We then describe an expert system-based approach to credit underwriting that has been in commercial use for over ten years in a variety of financial institutions. The expert system approach addresses many features of the underwriting process that alternative approaches do not. The system is characterized by a functional representation of knowledge and a graph-based inference mechanism. The inference mechanism is unique in its pragmatic approach to the implementation of probability theory. This approach offers flexibility for modeling various aspects of real-world credit decisions not always treated by traditional approaches. We give examples of how this approach can be, and is currently being, applied to facilitate underwriting decisions in commercial lending contexts.

This paper empirically compares a variety of firm-value-based models of contingent claims. We formulate a general model that nests versions of several earlier structural models, including that of Mella-Barral and Perraudin (1997). We estimate these using aggregate time series data for the US corporate bond market, monthly, from August 1970 through December 1996. We find that the models fit reasonably well, indicating that variations in leverage and asset volatility account for much of the time-series variation in observed corporate yields. The performance of the recently developed models which incorporate endogenous bankruptcy barriers is somewhat superior to that of the original Merton model. We find that the models produce default probabilities which are in line with the historical experience reported by Moody's.
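The default probability in the Merton-style models compared here follows from the distance between asset value and the default point. A minimal sketch of the standard Merton formula (the inputs are illustrative; the endogenous-barrier extensions the paper tests replace the fixed debt face value with an optimally chosen boundary):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_prob(assets, debt, drift, vol, horizon):
    """Merton-model default probability: the chance that asset value,
    following geometric Brownian motion with the given drift and
    volatility, ends below the debt face value at the horizon.
    PD = N(-d2)."""
    d2 = (math.log(assets / debt) + (drift - 0.5 * vol ** 2) * horizon) \
         / (vol * math.sqrt(horizon))
    return norm_cdf(-d2)
```

Higher leverage or higher asset volatility raises the implied default probability, which is why time variation in those two inputs can account for much of the observed yield variation.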

This paper examines valuation effects on the stocks of fifteen participating money-center banks of 774 announcements of syndicated loans, representing announcements of investment decisions for the lenders. The authors find that announcements of LDC loans in the 1970s, especially those to Latin American borrowers, are associated with negative abnormal returns to the lending banks, while announcements of loans to U.S. corporate borrowers in the 1980s, especially those for takeover finance, are associated with positive abnormal returns. While a number of testable predictions are consistent with the negative abnormal returns for Latin American loans, the authors explain the response to takeover loans with a return-to-liquidity hypothesis. Copyright 1995 by Ohio State University Press.