# Ali Karimnezhad

University of Ottawa · Department of Mathematics and Statistics

PhD

## About

- 40 Publications
- 6,437 Reads
- 162 Citations

Additional affiliations

- September 2020 - present
- January 2020 - present
- July 2017 - December 2019

## Publications

Publications (40)

Motivation
The rapid development of single-cell transcriptomic technologies has led to increasing interest in cellular heterogeneity within cell populations. Although cell-type proportions can be obtained directly from single-cell RNA sequencing (scRNA-seq), it is costly and not feasible in every study. Alternatively, with fewer experimental complic...

In genome-wide association studies, hundreds of thousands of genetic features (genes, proteins, etc.) in a given case-control population are tested to verify the existence of an association between each genetic marker and a specific disease. A popular approach in this regard is to estimate the local false discovery rate (LFDR), the posterior probability th...
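For illustration only (this is not the estimation procedure of the paper), the LFDR under a toy two-groups model with known components reduces to a direct Bayes computation; `pi0`, `mu1`, and `sd1` below are made-up values:

```python
import numpy as np

def norm_pdf(z, mu=0.0, sd=1.0):
    """Normal density, written out to keep the sketch dependency-light."""
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def lfdr_two_groups(z, pi0=0.9, mu1=3.0, sd1=1.0):
    """LFDR under a toy two-groups model: with prior probability pi0 a
    z-statistic is null, z ~ N(0, 1); otherwise z ~ N(mu1, sd1)."""
    f0 = norm_pdf(z)
    f1 = norm_pdf(z, mu1, sd1)
    return pi0 * f0 / (pi0 * f0 + (1.0 - pi0) * f1)

lfdr = lfdr_two_groups(np.array([0.0, 2.0, 4.0]))
```

Values near 1 indicate little evidence against the null; real LFDR estimation must of course estimate `pi0` and the non-null density from the data rather than assume them.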

Background
Treating cancer depends in part on identifying the mutations driving each patient’s disease. Many clinical laboratories are adopting high-throughput sequencing for assaying patients’ tumours, applying targeted panels to formalin-fixed paraffin-embedded tumour tissues to detect clinically-relevant mutations. While there have been some ben...

In this paper, we investigate Bayesian and robust Bayesian estimation of a wide range of parameters of interest in the context of Bayesian nonparametrics under a broad class of loss functions. Dealing with uncertainty regarding the prior, we consider the Dirichlet and the Dirichlet invariant priors, and provide the explicit form of the resulting Bayes...

Background: Successful treatment of cancer depends in part on identifying the particular mutations driving each patient's disease. Many clinical laboratories are adopting high-throughput sequencing as a means of assaying patients' tumours. However, most benchmarking and best practices studies have been conducted on large solid tumour specimens and...

In genome-wide association studies (GWAS), hundreds of thousands of genetic features (genes, proteins, etc.) in a given case-control population are tested against the null hypothesis that there is no association between each genetic marker and a specific disease. A popular approach in this regard is to estimate the local false discovery rate (LFDR)...

Next generation sequencing (NGS) has been used to catalogue genetic mutations in cancer. Recent studies employing NGS have identified specific genetic mutations that reliably predict therapeutic success with targeted treatment in many forms of cancer, and particularly in non-small cell lung cancer (NSCLC). Importantly, patients with oncogenic drive...

In this paper we introduce a broad family of loss functions based on the concept of Bregman divergence. We deal with both Bayesian estimation and prediction problems and show that all Bayes solutions associated with loss functions belonging to the introduced family of losses satisfy the same equation. We further concentrate on the concept of robust...
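As a minimal sketch of the underlying construction (the generators below are standard textbook choices, not necessarily the paper's family), a Bregman divergence is built from any differentiable convex function φ:

```python
import numpy as np

def bregman(phi, grad_phi, p, q):
    """Bregman divergence D_phi(p, q) = phi(p) - phi(q) - phi'(q) * (p - q)."""
    return phi(p) - phi(q) - grad_phi(q) * (p - q)

# phi(x) = x^2 recovers squared error: D(3, 1) = (3 - 1)^2 = 4
sq = bregman(lambda x: x ** 2, lambda x: 2 * x, 3.0, 1.0)

# phi(x) = x log x (x > 0) gives a KL-type divergence: p log(p/q) - p + q
kl = bregman(lambda x: x * np.log(x), lambda x: np.log(x) + 1.0, 2.0, 1.0)
```

A well-known result consistent with the abstract's claim that all Bayes solutions satisfy the same equation: the posterior mean minimizes expected loss under every Bregman divergence of this form.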

In this paper, we assume that allele frequencies are random variables and follow certain statistical distributions. However, specifying an appropriate informative prior distribution with specific hyperparameters seems to be a major issue. Assuming that prior information varies over some classes of priors, we develop the concept of robust Bayes esti...

In a genome-wide association study (GWAS), the probability that a single nucleotide polymorphism (SNP) is not associated with a disease is its local false discovery rate (LFDR). The LFDR for each SNP is relative to a reference class of SNPs. For example, the LFDR of an exonic SNP can vary widely depending on whether it is considered relative to the...

We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance tes...

This paper is devoted to robust Bayes sample size determination under the quadratic loss function. The idea behind the proposed approach is that the smaller a chosen posterior functional, the more robust the posterior inference. Such desired posterior functional has been taken, in the literature, as the range of posterior mean over a class of prior...
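In the conjugate Beta-Binomial case the posterior mean has a closed form, so the "range of the posterior mean over a class of priors" idea can be sketched directly (the hyperparameter rectangle and the data are hypothetical, not the paper's):

```python
def posterior_mean_range(x, n, a_endpoints, b_endpoints):
    """Range of the Beta-Binomial posterior mean (a + x) / (a + b + n) as the
    Beta(a, b) prior varies over a hyperparameter rectangle. The mean is
    monotone in a and in b, so checking the four corners suffices."""
    means = [(a + x) / (a + b + n) for a in a_endpoints for b in b_endpoints]
    return min(means), max(means)

# x = 7 successes in n = 10 trials; prior class: a, b each in [1, 3]
lo, hi = posterior_mean_range(x=7, n=10, a_endpoints=(1, 3), b_endpoints=(1, 3))
```

A small range indicates posterior inference that is robust to the choice of prior within the class; increasing n shrinks the range, which is the lever behind robust Bayes sample size determination.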

We argue that depending on p-values to reject null hypotheses, including a recent call for changing the canonical alpha level for statistical significance from .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable criterion levels both are problematic, it is sensible to dispense...

In this paper we investigate the task of parameter learning of Bayesian networks and, in particular, we deal with the prior uncertainty of learning using a Bayesian framework. Parameter learning is explored in the context of Bayesian inference and we subsequently introduce Bayes, constrained Bayes and robust Bayes parameter learning methods. Baye...
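A hedged sketch of how the standard criteria differ for a single multinomial node under a Dirichlet prior (counts and hyperparameters are made up; the paper's constrained and robust Bayes methods are not reproduced here):

```python
import numpy as np

def cpt_estimates(counts, alpha):
    """Three estimates of one node's parameters under a Dirichlet(alpha) prior
    (toy setting; the MAP formula requires counts + alpha > 1)."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    mle = counts / counts.sum()                       # maximum likelihood
    map_est = (counts + alpha - 1) / (counts.sum() + alpha.sum() - len(counts))  # posterior mode
    post_mean = (counts + alpha) / (counts.sum() + alpha.sum())                  # Bayes, squared error
    return mle, map_est, post_mean

mle, map_est, post_mean = cpt_estimates([6, 2, 2], [2, 2, 2])
```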

The maximum entropy (ME) method is a recently-developed approach for estimating local false discovery rates (LFDR) that incorporates external information allowing assignment of a subset of tests to a category with a different prior probability of following the null hypothesis. Using this ME method, we have reanalyzed the findings from a recent larg...

Supplementary figure (TIF): Scatter plot of the LFDR-ME estimates by minor allele frequency and the decrease in LFDR estimates using the ME method, when using the Enhancer Hoffman annotation.

Supplementary figure (TIF): Scatter plot of the LFDR-ME estimates by minor allele frequency and the decrease in LFDR estimates using the ME method, when using the Fetal DHS annotation.

Supplementary table: Local false discovery rate estimates using the maximum entropy method for nine annotation categories. Columns include the SNP id (legendrs), chromosome (chr), position (pos), minor allele frequency (maf), slope coefficient (beta) and p-value (p_dgc) for association with CAD from the consortium, z-squared (z_sq), and then various LFDR estimates. The...

This paper deals with Bayes, robust Bayes, and minimax predictions in a subfamily of scale parameters under an asymmetric precautionary loss function. In Bayesian statistical inference, the goal is to obtain optimal rules under a specified loss function and an explicit prior distribution over the parameter space. However, in practice, we are not ab...
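For one standard asymmetric precautionary loss, L(θ, d) = (d − θ)²/d, the Bayes rule is known to be d* = √E[θ² | data]; a Monte Carlo sketch with a stand-in Gamma posterior (purely illustrative, not the paper's model) confirms this numerically:

```python
import numpy as np

# Stand-in posterior: Gamma(3, 1) draws for a scale parameter theta
rng = np.random.default_rng(0)
theta = rng.gamma(shape=3.0, scale=1.0, size=200_000)

def expected_loss(d):
    """Posterior expected precautionary loss E[(d - theta)^2 / d]."""
    return np.mean((d - theta) ** 2 / d)

# Known Bayes rule under this loss: d* = sqrt(E[theta^2 | data])
d_star = np.sqrt(np.mean(theta ** 2))

# A grid search over candidate decisions should land on (about) the same point
grid = np.linspace(2.0, 5.0, 301)
d_grid = grid[int(np.argmin([expected_loss(d) for d in grid]))]
```

Note the asymmetry: because of the 1/d factor, underestimation of a scale parameter is penalized more heavily than overestimation, and d* exceeds the posterior mean.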

This paper deals with prior uncertainty in the parameter learning procedure in Bayesian networks. In most studies in the literature, parameter learning is based on two well-known criteria, i.e., the maximum likelihood and the maximum a posteriori. In presence of prior information, the literature abounds with situations in which a maximum a posterio...

In this paper we deal with Bayes, E-Bayes and robust Bayes prediction under precautionary loss functions. It is well-known that in the Bayesian framework, the Bayes rule is obtained by considering a specific prior distribution over the parameter of interest but in practice, the use of a specified prior with specific hyperparameters is critical. Spe...

Bayesian networks are graphical probabilistic models representing the joint probability function over a set of random variables using a directed acyclic graphical structure. In this paper, we consider a road accident data set collected at one of the popular highways in Iran. Implementing the well-known parents and children algorithm, as a constrain...
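The factorization a Bayesian network encodes can be illustrated with a toy three-node DAG (the CPT numbers are invented and unrelated to the road-accident data):

```python
# Toy DAG A -> B, A -> C with hypothetical conditional probability tables
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
p_c_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """P(A=a, B=b, C=c) = P(A=a) P(B=b | A=a) P(C=c | A=a), the DAG factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_a[a][c]

# The factorized joint must sum to 1 over all configurations
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Structure-learning algorithms such as the parents-and-children (PC) algorithm mentioned above recover the DAG from conditional independence tests before the CPTs are estimated.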

Robust Bayesian methodology deals with the problem of explaining uncertainty of the inputs (the prior, the model, and the loss function) and provides a breakthrough way to take into account the input's variation. If the uncertainty is in terms of the prior knowledge, robust Bayesian analysis provides a way to consider the prior knowledge in terms o...

Prediction of a future observation on the basis of currently observed data is demanded in many theoretical and applied problems. In this paper, we introduce prediction of a future observation from scale models under the general entropy prediction loss function and deal with Bayes and Posterior Regret Gamma Minimax prediction and obtain general form...

In this paper, we consider the prediction problem of a future observation in a family of scale parameter models under a class of precautionary prediction loss function in the context of Bayes and robust Bayes methodology. Under three members of the precautionary prediction loss functions, which are suitable members when considering scale invariant...

Let X be a random variable from a normal distribution with unknown mean θ and known variance ρ². In many practical situations, θ is known in advance to lie in an interval, say [-m, m], for some m > 0. As the usual estimator of θ, i.e., X, is inadmissible under the LINEX loss function, finding competitors for X becomes worthwhile. The only study...
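For reference, in the unrestricted case the Bayes rule under the LINEX loss L(θ, d) = e^{a(d−θ)} − a(d−θ) − 1 with a N(m, v) posterior has the closed form d* = m − av/2; a quick Monte Carlo check (illustrative numbers, and ignoring the interval restriction studied in the paper):

```python
import numpy as np

a, m, v = 1.5, 0.3, 0.4  # LINEX shape parameter; posterior mean and variance (made up)

# Monte Carlo evaluation of the Bayes rule d* = -(1/a) * log E[exp(-a * theta)]
rng = np.random.default_rng(1)
theta = rng.normal(m, np.sqrt(v), size=500_000)
d_mc = -np.log(np.mean(np.exp(-a * theta))) / a

# Closed form for a N(m, v) posterior: d* = m - a * v / 2
d_closed = m - a * v / 2
```

The downward shift a·v/2 from the posterior mean reflects the asymmetry of LINEX: for a > 0, overestimation is penalized exponentially while underestimation is penalized only linearly.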

For estimating an unknown scale parameter of Gamma distribution, we introduce the use of an asymmetric scale invariant loss function reflecting precision of estimation. This loss belongs to the class of precautionary loss functions. The problem of estimation of scale parameter of a Gamma distribution arises in several theoretical and applied proble...

Let X₁, ..., Xₙ be a random sample from a normal distribution with unknown mean θ and known variance σ². The usual estimator of the mean, i.e., the sample mean X̄, is the maximum likelihood estimator, which under the squared error loss function is minimax and admissible. In many practical situations, θ is known in advance to lie in an interval, s...

## Projects

Projects (4)

The concentration of this project is on the theoretical development of robust Bayesian inference.

Research on the maximum entropy principle, including the minimization of relative entropy.
Older research on the maximum entropy principle:
https://davidbickel.com/category/methods/maximum-entropy/

Research on imprecise probability, especially with robust Bayes methods. Older research:
https://davidbickel.com/category/methods/imprecise-probability/