# Michel Denuit's research while affiliated with Université Libre de Bruxelles and other places

**What is this page?**

This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.

If you're a ResearchGate member, you can follow this page to keep up with this author's work.

If you are this author, and you don't want us to display this page anymore, please let us know.


## Publications (356)

This paper studies diversification effects resulting from pooling insurance losses according to the risk allocation rule proposed by Denuit and Dhaene (2012). General comparison results are established for conditional expectations given sums of independent random variables. It is shown that these expectations decrease in the number of terms compris...

Autocalibration is a desirable property since it ensures that the information contained in a candidate premium is used without any bias. It turns out to be intimately related to the method of marginal totals that predates modern risk classification methods. The present note aims to assess the impact of autocalibration on the goodness of lift. It is...

Conditional tail expectations are often used in risk measurement and capital allocation. Conditional mean risk sharing appears to be effective in collaborative insurance, to distribute total losses among participants. This paper develops analytical results for risk allocation among different, correlated units based on conditional tail expectations...

Survivor funds are financial arrangements where participants agree to share the proceeds of a collective investment pool in a prescribed way depending on their survival. This offers investors a way to benefit from mortality credits, boosting financial returns. Following Denuit (2019, ASTIN Bulletin, 49, 591–617), participants are assumed to ado...

This paper offers a systematic treatment of risk‐sharing rules for insurance losses, based on a list of relevant properties. A number of candidate risk‐sharing rules are considered, including the conditional mean risk‐sharing rule proposed in Denuit and Dhaene and the newly introduced quantile risk‐sharing rule. Their compliance with the proposed p...
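The conditional mean risk-sharing rule recurring throughout these papers charges participant i the amount E[X_i | S = s] when the pooled loss equals s. A minimal sketch, computed by brute-force enumeration over made-up discrete loss distributions (the numbers are purely illustrative):

```python
from itertools import product

# Hypothetical two-participant pool with independent discrete losses
# (values and probabilities are made up for illustration).
losses = [
    {0: 0.7, 100: 0.3},             # participant 1
    {0: 0.5, 100: 0.4, 200: 0.1},   # participant 2
]

def cmrs(losses, s):
    """E[X_i | S = s] for each participant i, by full enumeration."""
    num = [0.0] * len(losses)
    den = 0.0
    for combo in product(*(d.items() for d in losses)):
        xs = [x for x, _ in combo]
        if sum(xs) != s:
            continue
        p = 1.0
        for _, pi in combo:
            p *= pi
        den += p
        for i, x in enumerate(xs):
            num[i] += x * p
    return [n / den for n in num]

shares = cmrs(losses, 200)
print(shares)  # individual contributions sum back to the pooled loss s = 200
```

By construction the contributions always add up to the realized total loss, which is the "full allocation" property that makes the rule attractive for insurance pools.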

This paper exploits the representation of the conditional mean risk sharing allocations in terms of size-biased transforms to derive effective approximations within insurance pools of limited size. Precisely, the probability density functions involved in this representation are expanded with respect to the Gamma density and its associated Laguerre...

Advancements in medicine and biostatistics have already resulted in a better access to insurance for people diagnosed with cancer. This materializes into the “right to be forgotten” adopted in several EU member states, granting access to insurance after a waiting period of at most 10 years starting at the end of the successful therapeutic protocol....

Thanks to its outstanding performances, boosting has rapidly gained wide acceptance among actuaries. To speed up calculations, boosting is often applied to gradients of the loss function, not to responses (hence the name gradient boosting). When the model is trained by minimizing Poisson deviance, this amounts to applying the least-squares principle t...

Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, the sum of fitted values can depart from the observed totals to a large extent. The possible lack of balance when models are trained by minimizing deviance outside the familiar GLM with canonical link setting has bee...
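The balance property alluded to here can be illustrated on a toy example: for a Poisson model with canonical log link, the likelihood equations force the fitted total to match the observed total, while a model trained on another criterion need not be balanced. A minimal sketch with made-up claim counts (the alternative fitting rule below is purely illustrative):

```python
import math

# Toy claim counts (made up). An intercept-only Poisson GLM with
# canonical log link fits mu = exp(b), and the likelihood equations
# force sum(fitted) == sum(observed): the "balance property".
y = [0, 1, 0, 2, 1, 0, 0, 3, 1, 0]

mu_glm = sum(y) / len(y)              # MLE under the canonical link
fitted_glm = [mu_glm] * len(y)
print(sum(fitted_glm), sum(y))        # totals coincide

# A model trained on another criterion need not be balanced:
# e.g. minimizing squared error on log(1 + y), then back-transforming.
b = sum(math.log1p(v) for v in y) / len(y)
fitted_other = [math.expm1(b)] * len(y)
print(sum(fitted_other), sum(y))      # totals generally differ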

This paper supplements the previous contribution by Denuit and Robert (2021). First, the compound Poisson case is revisited and the strong law of large numbers is rigorously established for the conditional expectations defining the conditional mean risk allocation. Then, a weak law of large numbers is proposed, providing the actuary with a criterion...

Telematics devices installed in insured vehicles provide actuaries with new risk factors, such as the time of the day, average speeds and other driving habits. This paper extends the multivariate mixed model describing the joint dynamics of telematics data and claim frequencies proposed by Denuit et al. (2019a) by allowing for signals with various f...

Efron (1965) studied the stochastic increasingness of the vector of independent random variables entering a sum, given the value of the sum. Precisely, he proved that log-concavity for the distributions of the random variables ensures that the vector becomes larger (in the sense of the usual multivariate stochastic order) when the sum is...

Modern data science tools are effective to produce predictions that strongly correlate with responses. Model comparison can therefore be based on the strength of dependence between responses and their predictions. Positive expectation dependence turns out to be attractive in that respect. The present paper proposes an effective testing procedure fo...

Conditional mean risk sharing appears to be effective to distribute total losses amongst participants within an insurance pool. This paper develops analytical results for this allocation rule in the individual risk model with dependence induced by the respective position within a graph. Precisely, losses are modelled by zero-augmented random variab...

Introduction
A biostatistical tool for measuring differences in life expectancy or mortality among cancer patients is the number of years of life lost (YLL), which quantifies the number of years of life a cohort of patients has lost relative to the general population. Survival analysis is a class of models with two states and...

This paper aims to formalize the three business models dominating peer‐to‐peer (P2P) property and casualty insurance: the self‐governing model, the broker model, and the carrier model. The first develops outside the insurance market, whereas the latter two may originate from the insurance industry, by partnering with an existing company or...

This short note aims to propose a new comparison measure for life tables: the age shift needed to restore equality of survival chance as measured by the precedence probability. Mortality by household income in France and by generation in Belgium is used as an illustration.
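The precedence probability used here is P(T1 > T2) for independent lifetimes drawn from the two tables; the age shift is the translation of one table that restores this probability to one half. A minimal sketch with made-up discrete lifetime distributions (the ages and probabilities are purely illustrative):

```python
# Toy discrete lifetime distributions (made up) for two populations.
p1 = {80: 0.2, 85: 0.5, 90: 0.3}   # P(T1 = age)
p2 = {80: 0.4, 85: 0.4, 90: 0.2}   # P(T2 = age)

# Precedence probability P(T1 > T2) for independent lifetimes;
# equality of survival chances would give 1/2.
prec = sum(q1 * q2
           for a1, q1 in p1.items()
           for a2, q2 in p2.items() if a1 > a2)
print(prec)  # population 1 tends to outlive population 2 when above 1/2
```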

This paper considers a peer-to-peer (P2P) insurance scheme where the higher layer is transferred to a (re-)insurer and retained losses are distributed among participants according to the conditional mean risk sharing rule proposed by Denuit and Dhaene (2012). The global retention level of the pool of participants grows proportionally with their num...

Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, there are nevertheless endless debates about the choice of the right loss function to be used to train the machine learning model, as well as about the appropriate metric to assess the performances of competing model...

Denuit (2019, Size-biased transform and conditional mean risk sharing, with application to P2P insurance and tontines, ASTIN Bulletin 49:591–617; 2020a, Investing in your own and peers' risks: The simple analytics of P2P insurance, European Actuarial Journal 10:335–5...

Wüthrich and Buser (DOI:10.2139/ssrn.2870308, 2020) studied the generalization error for Poisson regression models. This short note aims to extend their results to the Tweedie family of distributions, to which the Poisson law belongs. In case of bagging, a new condition emerges that becomes increasingly binding with the power parameter involved in...

This paper considers linear fair risk sharing rules and the conditional mean risk sharing rule for independent but heterogeneous losses that are gathered in an insurance pool. It studies the asymptotic behavior of individual contributions to total losses when the number of participants in the pool tends to infinity. It is shown that (i) insurance a...

The Massart (J Cancer Policy 15:70-71, 2018) testimonial illustrates the difficulties faced by cancer survivors in accessing the mortgage insurance securing a home loan. Data collected by national registries nevertheless suggest that excess mortality due to some types of cancer becomes moderate or even negligible after some waiting period. In...

This paper proposes a multistate model with a semi-Markov dependence structure describing the different stages in the settlement process of individual claims in general insurance. Every trajectory, from reporting to closure, is combined with a modeling of individual link ratios to obtain the ultimate cost of each claim. Analytical expressions are de...

In actuarial pricing, the objective is to evaluate the pure premium as accurately as possible. The target is thus the conditional expectation \(\mu (\textit{\textbf{X}})=\text {E}[Y|\textit{\textbf{X}}]\) of the response Y (claim number or claim amount for instance) given the available information \(\textit{\textbf{X}}\).
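With a categorical risk factor X, the empirical counterpart of this conditional expectation is simply the average response per risk class. A minimal sketch with a made-up portfolio (the risk classes and claim counts are purely illustrative):

```python
from collections import defaultdict

# Toy portfolio (made up): each record is (risk class x, claim count y).
# The pure premium target is mu(x) = E[Y | X = x]; with enough data per
# class, the class-average claim count is its natural empirical estimate.
data = [("young", 1), ("young", 0), ("young", 2),
        ("old", 0), ("old", 1), ("old", 0), ("old", 0)]

totals = defaultdict(lambda: [0.0, 0])
for x, y in data:
    totals[x][0] += y
    totals[x][1] += 1

mu_hat = {x: s / n for x, (s, n) in totals.items()}
print(mu_hat)  # {'young': 1.0, 'old': 0.25}
```

In practice the feature vector is high-dimensional, so this cell-average estimate is replaced by a regression model (GLM, trees, boosting, neural networks) targeting the same conditional expectation.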

Bagging trees and random forests base their predictions on an ensemble of trees. In this chapter, we consider another training procedure based on an ensemble of trees, called boosting trees. However, the way the trees are produced and combined differs between random forests (and so bagging trees) and boosting trees.

In this chapter, we present the regression trees introduced by Breiman et al. (1984). Regression trees are at the core of this second volume.

Actuarial pricing models are generally calibrated so that they minimize the generalization error computed with an appropriate loss function. Model selection is based on the generalization error.

Two ensemble methods are considered in this chapter, namely bagging trees and random forests. One issue with regression trees is their high variance: predictions vary considerably over trees trained from all possible training sets. Bagging trees and random forests aim to reduce the variance without altering the bias too much.

Random effects are particularly useful in insurance studies, to capture residual heterogeneity or to induce cross‐sectional and/or serial dependence, opening hence the door to many applications including experience rating and microreserving. However, their nonobservability often makes existing models computationally cumbersome in a multivariate con...

We consider the conditional mean risk allocation for an insurance pool, as defined by Denuit and Dhaene (2012). Precisely, we study the asymptotic behavior of the respective relative contributions of the participants as the total loss of the pool tends to infinity. The numerical illustration in Denuit (2019) suggests that the application of the con...

Actuarial ratemaking is usually performed at product and guarantee level, meaning that each product and guarantee is considered in isolation. Moreover, independence between policyholders is generally assumed. In this paper, we propose a multivariate Poisson mixture, with random effects correlated using a hierarchical structure, to accommodate t...

Wavelet theory is known to be a powerful tool for compressing and processing time series or images. It consists in projecting a signal on an orthonormal basis of functions that are chosen in order to provide a sparse representation of the data. The first part of this article focuses on smoothing mortality curves by wavelets shrinkage. A chi-square...

This paper studies a peer-to-peer (P2P) insurance scheme where participants share the first layer of their respective losses while the higher layer is transferred to a (re-)insurer. The conditional mean risk sharing rule proposed by Denuit and Dhaene (Insur Math Econ 51:265–270, 2012) appears to be a very convenient way to distribute retained losse...

The size-biased, or length-biased transform is known to be particularly useful in insurance risk measurement. The case of continuous losses has been extensively considered in the actuarial literature. Given their importance in insurance studies, this article concentrates on compound sums. The zero-augmented distributions that naturally appear in th...
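For a discrete loss X with mean E[X] > 0, the size-biased transform reweights the probability mass as x p(x) / E[X], so the mass at zero vanishes and larger losses are over-weighted. A minimal sketch with made-up numbers:

```python
# Size-biased transform of a discrete loss distribution (toy numbers):
# p_tilde(x) = x * p(x) / E[X], for nonnegative X with E[X] > 0.
p = {0: 0.5, 1: 0.3, 2: 0.2}

mean = sum(x * px for x, px in p.items())
p_sb = {x: x * px / mean for x, px in p.items() if x > 0}

print(mean)   # E[X], about 0.7
print(p_sb)   # zero mass vanishes; larger losses carry more weight
```

The transformed masses sum to one by construction, which is what makes the transform a genuine probability distribution and a convenient device in the conditional mean risk-sharing formulas cited above.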

This book summarizes the state of the art in tree-based methods for insurance: regression trees, random forests and boosting methods. It also exhibits the tools which make it possible to assess the predictive performance of tree-based models. Actuaries need these advanced analytical tools to turn the massive data sets now at their disposal into opp...

Feed-forward neural networks are supervised learning algorithms. This means that we must identify a priori the most relevant variables and know the desired outputs for combinations of these variables. For example, forecasting the frequency of car accidents with a perceptron requires an a priori segmentation of some explanatory variables li...

In this chapter, we study a particular type of neural network designed to provide a representation of the input with a reduced dimensionality. These networks contain a hidden layer, called the bottleneck, with few nodes compared to the previous layers. The output signals of neurons in the bottleneck carry a summarized informati...

Gradient boosting machines form a family of powerful machine learning techniques that have been applied with success in a wide range of practical applications. Ensemble techniques rely on simple averaging of models in the ensemble. The family of boosting methods adopts a different strategy to construct ensembles. In boosting algorithms, new models...

This chapter introduces the general features of artificial neural networks. After a presentation of the mathematical neural cell, we focus on feed-forward networks. First, we discuss the preprocessing of data and next we present a survey of the different methods for calibrating such networks. Finally, we apply the theory to an insurance data set an...

In Chap. 1, our empirical analysis was based on neural networks with a single hidden layer. These networks, called shallow, are in theory universal approximators of any continuous function. Deep neural networks use instead a cascade of multiple layers of hidden neurons. Each successive layer uses the output from the previous layer as input. As with...

The learning of large neural networks is an ill-posed problem and there is generally a continuum of possible sets of admissible weights. In this case, we can no longer rely on asymptotic properties of maximum likelihood estimators to approximate confidence intervals. Applying the Bayesian learning paradigm to neural networks or to generalized linea...

The main objective of time series analysis is to provide mathematical models that offer a plausible description for a sample of data indexed by time. Time series modelling may be applied in many different fields. In finance, it is used for explaining the evolution of asset returns. In actuarial sciences, it may be used for forecasting the number of...

The most frequent approach to data-driven modeling consists in estimating a single strong predictive model. A different strategy is to build a bucket, or an ensemble, of models for some particular learning task. One can consider building a set of weak or relatively weak models like small neural networks, which can then be combined t...

In order to determine an appropriate amount of premium, statistical goodness-of-fit criteria must be supplemented with actuarial ones when assessing performance of a given candidate pure premium. In this paper, concentration curves and Lorenz curves are shown to provide actuaries with effective tools to evaluate whether a premium is appropriate or...

With GLMs, mean responses are modeled as monotonic functions of linear scores. The assumed linearity of the score is not restrictive for categorical features coded by means of binary variables. However, this assumption becomes questionable for continuous features which may have a nonlinear effect on the score scale. This chapter is devoted to Gener...

Data sets exhibiting a hierarchical or nested structure, or including longitudinal or spatial elements often arise in insurance studies. This generally results in correlation among the responses within the same group, casting doubts about the outputs of analyses assuming mutual independence. Random effects offer a convenient way to model such group...

This chapter is devoted to the study of the family of Exponential Dispersion (or ED) distributions that are central to insurance data analytics techniques. The objective functions used to calibrate the regression models described in this book correspond to log-likelihoods taken from this family. This is why a good knowledge of these models is the n...

This chapter discusses a statistical modeling strategy based on extreme value theory to describe the behavior of data far in the tails of the distributions, with a particular emphasis on large claims in property and casualty insurance and mortality at oldest ages in life insurance. Large claims generally affect liability coverages and require a sep...

In this chapter, the modeling of the mean response is supplemented with additional scores linked to other parameters of the distribution, like dispersion, scale, shape or probability mass at the origin, for instance. This allows the actuary to let the available information enter other dimensions of the response, such as volatility or no-claim proba...

Technical price and commercial premiums.

Generalized Linear Models are widely known under their famous acronym GLMs. Today, GLMs are recognized as an industry standard for pricing personal lines and small commercial lines of insurance business. This chapter reviews the GLM methodology with a special emphasis to insurance problems. The statistical framework of GLMs allows the actuary to ma...

This chapter recalls the basics of the estimation method consisting in maximizing the likelihood associated to the observations. The resulting estimators enjoy convenient theoretical properties, being optimal in a wide variety of situations. The maximum likelihood principle will be used throughout the next chapters to fit the supervised learning mo...

With GLMs, scores are linear functions of the regression parameters. GAMs allow the actuary to include in the score nonlinear effects of the features, to be learned from the data. GAMs can be fitted with the help of local versions of GLMs or by decomposing the nonlinear effects of the features in an appropriate spline basis so that the working scor...

The Belgian Law of 20 July 2007 has drastically changed the Belgian private health insurance sector by making individual contracts lifelong with the technical basis (i.e. actuarial assumptions) fixed at policy issue. The goal of the Law is to ensure the accessibility to supplementary health coverage in order to protect policyholders from discrimina...

Using risk-reducing properties of conditional expectations with respect to convex order, Denuit and Dhaene [Denuit, M. and Dhaene, J. (2012). Insurance: Mathematics and Economics 51, 265–270] proposed the conditional mean risk sharing rule to allocate the total risk among participants to an insurance pool. This paper relates the conditional mean ri...

Dependence measures are often used in practice in order to assess the quality of a regression model. This is for instance the case with Kendall's tau and other association coefficients based on concordance probabilities. However, in case the response variable is discrete, correlation indices are often bounded and restricted to a sub-interval of [−1...

Association measures based on concordance, such as Kendall’s tau, Somers’ delta or Goodman and Kruskal’s gamma are often used to measure explained variations in regression models for binary outcomes. As responses only assume values in {0, 1}, these association measures are constrained, which makes their interpretation more difficult as a relatively...

The LTC insurance policies, which concern millions of individuals, are at present very heterogeneous, using many types of guarantees and many benefit underwriting modes. The French market has offered payments of monthly lifetime cash annuities since the beginning. Yet the growing LTC market is currently proposing some indemnity-based prod...

Actuarial risk classification is usually performed at a guarantee and policyholder level: for each policyholder, the claim frequencies corresponding to each guarantee are modelled in isolation, without accounting for the correlation between the different guarantees and the different policyholders from the same household. However, sometimes, a commo...

This paper addresses systematic longevity risk in long-term insurance business. We analyze the consequences of working under unknown survival probabilities on the efficiency of the Law of Large Numbers and point out the need for appropriate and feasible risk management techniques. We propose a setting for risk sharing schemes between the insurer an...

Pay-how-you-drive (PHYD) or usage-based (UB) systems for automobile insurance provide actuaries with behavioural risk factors, such as the time of the day, average speeds and other driving habits. These data are collected while the contract is in force with the help of telematic devices installed in the vehicle. They thus fall in the category of a...

Artificial intelligence and neural networks offer a powerful alternative to statistical methods for analyzing data. This book reviews some of the most recent developments in neural networks, with a focus on applications in actuarial sciences and finance.
The third volume of the trilogy simultaneously introduces the relevant tools for developing and...

This book summarizes the state of the art in generalized linear models (GLMs) and their various extensions: GAMs, mixed models and credibility, and some nonlinear variants (GNMs). In order to deal with tail events, analytical tools from Extreme Value Theory are presented. Going beyond mean modeling, it considers volatility modeling (double GLMs) an...

This paper studies the distribution of particular weighted sums of Bernoulli random variables. The computing methods are applied to derive the probability distribution of the random amount of survivor credits to be shared among surviving participants in single-period tontine schemes. The effectiveness of this new arrangement can then be evaluated b...
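The distribution of such a weighted sum of independent Bernoulli variables can be built by direct convolution. A minimal sketch, with made-up survival probabilities q_i and stakes w_i (in a single-period tontine, w_i could be participant i's stake and q_i the probability of surviving the period):

```python
# Distribution of a weighted sum of independent Bernoulli variables
# (toy survival probabilities and weights), built by direct convolution.
contracts = [(0.9, 100), (0.8, 150), (0.95, 50)]  # (q_i, w_i), made up

dist = {0: 1.0}
for q, w in contracts:
    new = {}
    for s, p in dist.items():
        new[s + w] = new.get(s + w, 0.0) + p * q     # survives: adds w
        new[s] = new.get(s, 0.0) + p * (1.0 - q)     # dies: adds nothing
    dist = new

print(sorted(dist.items()))
mean = sum(s * p for s, p in dist.items())
print(mean)  # equals sum of q_i * w_i
```

Exact convolution is feasible for moderate pool sizes; for large pools, the abstracts above discuss approximations and asymptotic results instead.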

Actuarial risk classification studies are typically confined to univariate, policy-based analyses: Individual claim frequencies are modelled for a single product, without accounting for the interactions between the different coverages bought by the members of the same household. Now that large amounts of data are available and that the customer's v...

This short note supplements the paper by Gschlössl et al. (Eur Actuar J 1:23–41, 2011) with an efficient method allowing actuaries to include continuous covariates in their life tables, such as the sum insured for instance. Compared to the classical approach based on grouped data adopted in the majority of actuarial mortality studies, individual ob...

This note proposes a practical way for modelling and projecting health insurance expenditures over short time horizons, based on observed historical data. The present study is motivated by a similar age structure generally observed for health insurance claim frequencies and yearly aggregate losses on the one hand and mortality on the other hand. As...

This paper adopts the new loss reserving approach proposed by Denuit and Trufin (2016), inspired by the collective model of risk theory. But instead of considering the whole set of claims as a collective, two types of claims are distinguished: those with relatively short development patterns and those requiring longer developments. In eac...

This article proposes a new loss reserving approach, inspired by the collective model of risk theory. According to the collective paradigm, we do not relate payments to specific claims or policies, but we work within a frequency-severity setting, with a number of payments in every cell of the run-off triangle, together with the corresponding paid...

This short note proposes a new interpretation of risk apportionment, in the target-oriented decision-making model. It is shown that these risk attitudes translate into smaller targets, expressing the decision maker’s conservative behavior.

In this article, the force of mortality at the oldest ages is studied using the statistical tools from extreme value theory. A unique database recording all individual ages at death above 95 for extinct cohorts born in Belgium between 1886 and 1904 is used to illustrate the relevance of the proposed approach. No leveling off in the force of morta...

This paper considers the problem of a lifelong health insurance cover where medical inflation is not sufficiently incorporated in the level premium determined at policy issue. We focus on the setting where changes in health benefits, driven by medical inflation, are accounted for by an appropriate update or indexation of the level premium, the poli...

In this short note, we derive the lower and upper bounds on the association measure for zero-inflated continuous random variables proposed by Pimentel et al. (2015). These bounds only involve the respective probability masses at the origin. This provides analysts with the set of values that can be attained, helping them to interpret the obtained res...

This paper proposes a practical way for indexing level premiums in lifelong medical insurance contracts, in order to take into account observed medical inflation. The indexing can be achieved by considering only premiums, without explicit reference to reserves. This appears to be relevant in practice as reserving mechanisms may not be transparent t...

The Law of 25 June 1992 on land insurance contracts (LCAT, since replaced by the Insurance Law of 4 April 2014) was drafted out of the legislator's wish to restore the balance between insurer and consumer. It replaced an earlier law, dating from 11 June 1874, which did not...

In order to generalize previous results by Li et al. (2016), Guo et al. (2016) extended the definition of the Rothschild-Stiglitz type of increase in risk to a background risk framework. They provided several sufficient conditions for such a ranking to hold, involving expectation dependence concepts. In this short note, the corresponding characteri...

Often in actuarial practice, mortality projections are obtained by letting age-specific death rates decline exponentially at their own rate. Many life tables used for annuity pricing are built in this way. The present paper adopts this point of view and proposes a simple and powerful mortality projection model in line with this elementary approach,...
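Letting each age-specific death rate decline exponentially at its own pace amounts to fitting a least-squares line through the logged rates over calendar time and extrapolating. A minimal sketch for a single age, with made-up rates:

```python
import math

# Made-up death rates m(t) for one age over several calendar years.
years = [0, 1, 2, 3, 4]
m = [0.0100, 0.0096, 0.0093, 0.0089, 0.0086]

# Least-squares line through log m over time: log m(t) = a + b * t.
logm = [math.log(v) for v in m]
n = len(years)
tbar = sum(years) / n
lbar = sum(logm) / n
slope = sum((t - tbar) * (l - lbar) for t, l in zip(years, logm)) / \
        sum((t - tbar) ** 2 for t in years)
intercept = lbar - slope * tbar

def project(t):
    """Extrapolated death rate t years after the first observation."""
    return math.exp(intercept + slope * t)

print(project(10))  # projected rate 10 years ahead; declines since slope < 0
```

Applying this age by age yields a full projected life table, which is the elementary approach the paper builds on.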

The bounds for risk measures of a portfolio when its components have known marginal distributions but the dependence among the risks is unknown are often too wide to be useful in practice. Moreover, availability of additional dependence information, such as knowledge of some higher-order moments, makes the problem significantly more difficult. We s...

The present paper proposes an evolutionary credibility model that describes the joint dynamics of mortality through time in several populations. Instead of modeling the mortality rate levels, the time series of population-specific mortality rate changes, or mortality improvement rates are considered and expressed in terms of correlated time factors...

In addition to risk aversion, decision-makers also tend to be downside risk averse. Besides the usual size-for-risk trade-off, this allows several other trade-offs to be considered. The decision to increase the level of self-protection generates five trade-offs, each involving an unfavourable downside risk increase and an accompanying beneficial cha...

In this paper, we propose new relational models linking some specific mortality experience to a reference life table. Compared to existing relational models which distort the forces of mortality, we work here on the age scale. Precisely, age is distorted making individuals younger or older before performing the computations with the reference life...

In this paper, we extend the concept of mutual exclusivity proposed by [Dhaene, J. & Denuit, M. (1999). The safest dependence structure among risks. Insurance: Mathematics and Economics 25, 11–21] to its tail counterpart and baptize this new dependency structure as tail mutual exclusivity. Probability levels are first specified for each component o...

Comonotonicity has been successfully applied to derive various approximations in the single-factor mortality projection model proposed by Lee and Carter (1992), after Denuit and Dhaene (2007). However, this approach appears to lead to inaccurate approximations in the multi-factor mortality projection models developed by Cairns et al. (2006). Theref...

Building on the seminal work by Shaked and Shanthikumar (Adv Appl Probab 20:427–446, 1988a; Stoch Process Appl 27:1–20, 1988b), Denuit et al. (Eng Inf Sci 13:275–291, 1999; Methodol Comput Appl Probab 2:231–254, 2000; 2001) studied the stochastic s-increasing convexity properties of standard parametric families of distributions. However, the analys...

Considering the substantial systematic longevity risk threatening annuity providers' solvency, indexing benefits on actual mortality improvements appears to be an efficient risk management tool, as discussed in Denuit et al. (2011) and Richter and Weber (2011). Whereas these papers consider indexing annuity payments, the present work suggests that...

Often, actuaries replace a group of heterogeneous life insurance contracts (different age at policy issue, contract duration, sum insured, etc.) with a representative one in order to speed up the computations. The present paper aims to homogenize a group of policies by controlling the impact on Tail-VaR and related risk measures.

In this paper, we consider the composition of an optimal portfolio made of two dependent risky assets. The investor is first assumed to be a risk-averse expected utility maximizer, and we recover the existing conditions under which all these investors hold at least some percentage of their portfolio in one of the assets. Then, we assume that the de...

This paper extends a useful property of the increasing convex order to the multivariate orthant convex order. Specifically, it is shown that vectors of sums of comonotonic random variables dominate in the orthant convex order vectors of sums of random variables that are smaller in the increasing convex sense, whatever their dependence structure. Th...

In this paper, we extend the concept of mutual exclusivity proposed by Dhaene and Denuit (1999) to its tail counterpart and baptise this new dependency structure as tail mutual exclusivity. Probability levels are first specified for each component of the random vector. Under this dependency structure, at most one exceedance over the corresponding V...

Individual risk models need to capture possible correlations, as failing to do so typically results in an underestimation of extreme quantiles of the aggregate loss. Such dependence modelling is particularly important for managing credit risk, for instance, where joint defaults are a major cause of concern. Often, the dependence between the individu...
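The underestimation effect is easy to see numerically. In this sketch, defaults are made dependent through a one-factor Gaussian latent-variable model (the correlation parameter and portfolio size are our illustrative choices, not values from the paper), and the extreme quantile of the default count is compared with the independent case:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
n, p, n_sim = 100, 0.05, 100_000

# Independent defaults: number of defaults among n unit exposures
indep = rng.binomial(1, p, size=(n_sim, n)).sum(axis=1)

# Correlated defaults via a one-factor Gaussian latent-variable model
# (rho = 0.3 is an illustrative assumption)
rho = 0.3
thresh = NormalDist().inv_cdf(p)              # default threshold matching marginal p
z = rng.standard_normal((n_sim, 1))           # common systematic factor
eps = rng.standard_normal((n_sim, n))         # idiosyncratic factors
latent = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
corr = (latent < thresh).sum(axis=1)

q_indep = np.quantile(indep, 0.995)
q_corr = np.quantile(corr, 0.995)
print(q_indep, q_corr)  # the correlated 99.5% quantile is markedly larger
```

Both portfolios have the same marginal default probability and the same expected number of defaults; only the tail of the aggregate distribution differs, which is exactly the quantity an independence assumption understates.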

## Citations

... Property BT is sometimes argued as an undesirable property; see Denuit et al. (2022b). We give a simple example below for the purpose of discussion. ...

... As an important risk sharing rule in economic theory with many attractive properties, CMRS was used by Landsberger and Meilijson (1994) to study Pareto optimality of comonotonic risk allocations, and its properties were studied in detail by Denuit and Dhaene (2012); see Denuit et al. (2022a) for a summary of its properties. Our characterization hence provides a first axiomatic foundation for CMRS and its applications in economic theory and decentralized finance and insurance.¹ See the later work on risk sharing by Barrieu and El Karoui (2005) for convex risk measures, Carlier et al. (2012) for stochastic dominance, Xia and Zhou (2016) for rank-dependent utilities, Cai et al. (2017) for reinsurance arrangements, and Embrechts et al. (2018) for quantile-based risk measures. ...

... Denuit (2020) derived formulas for the tail conditional expectations (TCE) of some univariate compound distributions. Denuit and Robert (2021) presented some results for the TCE of a compound mixed Poisson model, where both the claim frequencies and sizes depend on several latent variables. Ren (2021) derived the formulas for the TCE and tail variance (TV) of multivariate compound models based on Sundt (1999), where claim frequency is one-dimensional and one claim can yield multiple dependent losses. ...

... To facilitate rate regulation, variable selection and variable importance measures are much more accessible for linear models than for non-linear models such as neural networks or decision-tree-based methods. Machine learning models often outperform GLMs and other linear models [15][16][17], but they are considered black-box models and may create obstacles in communication among the parties involved in regulatory practice. This helps to explain why, in rate regulation, regulators are reluctant to use machine learning models to estimate the relativities of major risk factors, which serve as benchmark values for auto insurance companies. ...

... Remark that we do not strictly distinguish between prior and posterior information here. If we move to a time-series setting, where more and more claims experience becomes available for an individual driver, we should clearly distinguish the different sets of information, because otherwise prior and posterior pricing factors may correct twice for the same effect; an interesting paper is Corradin et al. [82]. ...

... Turning to risk sharing, Denuit (2019) linked size-biased transform to individual allocations resulting from the application of the conditional mean risk sharing rule proposed by Denuit and Dhaene (2012). Properties of this risk sharing rule have been obtained by exploiting its relationship with size-biasing in Denuit and Robert (2020a), Denuit and Robert (2020b), Denuit and Robert (2021a), Denuit and Robert (2021b), and Denuit (2021c). The multivariate version of the size-biased transform we use to deal with correlated losses is a particular case of the one proposed in Furman and Zitikis (2007) and Furman et al. (2021a) and Furman et al. (2021b). ...
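The conditional mean risk sharing rule mentioned above allocates to participant i the amount E[X_i | S], where S is the pooled total loss. For independent Poisson losses this conditional expectation has the closed form s·λ_i/Σλ (the vector given the sum is multinomial), which a short simulation can confirm; the parameter values are illustrative, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([1.0, 2.0, 3.0])     # illustrative Poisson means
n_sim = 500_000

x = rng.poisson(lam, size=(n_sim, len(lam)))
s = x.sum(axis=1)

# Estimate E[X_i | S = s0] by averaging over simulations with S = s0
s0 = 6
est = x[s == s0].mean(axis=0)
print(est)  # close to s0 * lam / lam.sum() = [1, 2, 3]
```

The simulated allocations sum to s0 by construction, illustrating the full-allocation property of the rule: the conditional means always redistribute exactly the realized total.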

... Furthermore, in their paper, Denuit and Robert (2021) studied three insurance business models: carrier, broker, and self-governing. In the current paper, the author proposes an actuarial model based on conditional mean risk sharing. This paper aims to formalize the three business models dominating peer-to-peer (P2P) property and casualty insurance: self-governing, broker, and carrier. ...
