Ali Karimnezhad
University of Ottawa · Department of Mathematics and Statistics

PhD

About

40 Publications · 6,437 Reads · 162 Citations
Additional affiliations
September 2020 - present
University of Ottawa
Position
  • Associate Professor
January 2020 - present
Health Canada
Position
  • Statistician
July 2017 - December 2019
University of Ottawa
Position
  • Postdoctoral Fellow

Publications

Publications (40)
Article
Motivation The rapid development of single-cell transcriptomic technologies has led to increasing interest in cellular heterogeneity within cell populations. Although cell-type proportions can be obtained directly from single-cell RNA sequencing (scRNA-seq), it is costly and not feasible in every study. Alternatively, with fewer experimental complic...
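A schematic of the bulk-deconvolution setup this line of work relies on (my notation, not taken from the abstract): bulk expression is modelled as a mixture of cell-type signatures, and proportions are recovered by constrained regression,
B \approx S P, with p_{kj} \ge 0 and \sum_k p_{kj} = 1 for each sample j,
where B (genes by samples) holds bulk profiles, S (genes by cell types) is a signature matrix, for instance estimated from a reference scRNA-seq data set, and P (cell types by samples) contains the cell-type proportions of interest.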
Article
In genome-wide association studies, hundreds of thousands of genetic features (genes, proteins, etc.) in a given case-control population are tested to determine whether each genetic marker is associated with a specific disease. A popular approach in this regard is to estimate the local false discovery rate (LFDR), the posterior probability th...
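For orientation, in standard two-group mixture notation (not quoted from the paper): if a proportion \pi_0 of markers is truly null, with null density f_0 and marginal density f of the test statistic z, then
LFDR(z) = \pi_0 f_0(z) / f(z), where f(z) = \pi_0 f_0(z) + (1 - \pi_0) f_1(z),
i.e., the posterior probability that the marker is null given its observed statistic.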
Article
Full-text available
Background Treating cancer depends in part on identifying the mutations driving each patient’s disease. Many clinical laboratories are adopting high-throughput sequencing for assaying patients’ tumours, applying targeted panels to formalin-fixed paraffin-embedded tumour tissues to detect clinically-relevant mutations. While there have been some ben...
Article
Full-text available
In this paper, we investigate Bayesian and robust Bayesian estimation of a wide range of parameters of interest in the context of Bayesian nonparametrics under a broad class of loss functions. Dealing with uncertainty regarding the prior, we consider the Dirichlet and the Dirichlet invariant priors, and provide the explicit form of the resulting Bayes...
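As a reference point (standard Dirichlet-process conjugacy, not the paper's specific derivation): if F \sim DP(\alpha, F_0) and X_1, \dots, X_n given F are i.i.d. from F, the posterior mean of F, which is the Bayes estimate under squared error loss, is
\hat{F}(t) = \frac{\alpha}{\alpha + n} F_0(t) + \frac{n}{\alpha + n} F_n(t),
a convex combination of the prior guess F_0 and the empirical distribution F_n; robustness is then assessed by letting (\alpha, F_0) range over a class of priors.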
Preprint
Full-text available
Background: Successful treatment of cancer depends in part on identifying the particular mutations driving each patient's disease. Many clinical laboratories are adopting high-throughput sequencing as a means of assaying patients' tumours. However, most benchmarking and best practices studies have been conducted on large solid tumour specimens and...
Preprint
In genome-wide association studies (GWAS), hundreds of thousands of genetic features (genes, proteins, etc.) in a given case-control population are tested against the null hypothesis that there is no association between each genetic marker and a specific disease. A popular approach in this regard is to estimate the local false discovery rate (LFDR)...
Conference Paper
Next generation sequencing (NGS) has been used to catalogue genetic mutations in cancer. Recent studies employing NGS have identified specific genetic mutations that reliably predict therapeutic success with targeted treatment in many forms of cancer, and particularly in non-small cell lung cancer (NSCLC). Importantly, patients with oncogenic drive...
Article
In this paper we introduce a broad family of loss functions based on the concept of Bregman divergence. We deal with both Bayesian estimation and prediction problems and show that all Bayes solutions associated with loss functions belonging to the introduced family of losses satisfy the same equation. We further concentrate on the concept of robust...
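For context, one well-known property of Bregman losses (not necessarily the exact equation derived in the paper): with a convex differentiable generator \psi, the Bregman divergence is
D_\psi(x, y) = \psi(x) - \psi(y) - \psi'(y)(x - y),
and for the loss L(\theta, d) = D_\psi(\theta, d) the posterior expected loss is minimized by the posterior mean d = E[\theta | x]; squared error (\psi(u) = u^2) and Kullback-Leibler-type losses arise as special cases.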
Article
In this paper, we assume that allele frequencies are random variables and follow certain statistical distributions. However, specifying an appropriate informative prior distribution with specific hyperparameters seems to be a major issue. Assuming that prior information varies over some classes of priors, we develop the concept of robust Bayes esti...
Article
In a genome-wide association study (GWAS), the probability that a single nucleotide polymorphism (SNP) is not associated with a disease is its local false discovery rate (LFDR). The LFDR for each SNP is relative to a reference class of SNPs. For example, the LFDR of an exonic SNP can vary widely depending on whether it is considered relative to the...
Preprint
Full-text available
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance tes...
Preprint
Full-text available
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance tes...
Article
Full-text available
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significan...
Article
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significan...
Preprint
Full-text available
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance tes...
Article
This paper is devoted to robust Bayes sample size determination under the quadratic loss function. The idea behind the proposed approach is that the smaller a chosen posterior functional, the more robust the posterior inference. In the literature, this posterior functional has typically been taken to be the range of the posterior mean over a class of prior...
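A minimal formalization of the criterion described here (my notation): with a class \Gamma of priors, the robustness functional at sample size n can be taken as the range of the posterior mean,
r_n(x) = \sup_{\pi \in \Gamma} E_\pi[\theta | x] - \inf_{\pi \in \Gamma} E_\pi[\theta | x],
and the sample size is chosen as the smallest n for which a summary of r_n, for example its predictive expectation, falls below a prescribed tolerance.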
Preprint
Full-text available
We argue that depending on p-values to reject null hypotheses, including a recent call for changing the canonical alpha level for statistical significance from .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable criterion levels both are problematic, it is sensible to dispense...
Article
Full-text available
In this paper we investigate the task of parameter learning of Bayesian networks and, in particular, we deal with the prior uncertainty of learning using a Bayesian framework. Parameter learning is explored in the context of Bayesian inference and we subsequently introduce Bayes, constrained Bayes and robust Bayes parameter learning methods. Baye...
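As a baseline for the estimators discussed here (standard Dirichlet-multinomial updating, not the constrained or robust variants introduced in the paper): with N_{ijk} counting how often variable i takes value k under parent configuration j, and Dirichlet hyperparameters \alpha_{ijk}, the posterior-mean estimate of the corresponding conditional probability table entry is
\hat{\theta}_{ijk} = (N_{ijk} + \alpha_{ijk}) / \sum_{k'} (N_{ijk'} + \alpha_{ijk'}).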
Article
Full-text available
The maximum entropy (ME) method is a recently-developed approach for estimating local false discovery rates (LFDR) that incorporates external information allowing assignment of a subset of tests to a category with a different prior probability of following the null hypothesis. Using this ME method, we have reanalyzed the findings from a recent larg...
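In the two-group notation sketched earlier, the annotation-aware idea can be summarized (schematically, not as the paper's exact estimator) by letting the null proportion depend on the annotation category c of each test:
LFDR_c(z) = \pi_{0,c} f_0(z) / ( \pi_{0,c} f_0(z) + (1 - \pi_{0,c}) f_1(z) ),
so SNPs in an enriched annotation category receive a smaller \pi_{0,c} and hence smaller LFDR estimates for the same observed statistic.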
Data
Scatter plot of the LFDR-ME estimates by minor allele frequency and the decrease in LFDR estimates using the ME method, when using the Enhancer Hoffman annotation. (TIF)
Data
Scatter plot of the LFDR-ME estimates by minor allele frequency and the decrease in LFDR estimates using the ME method, when using the Fetal DHS annotation. (TIF)
Data
Local false discovery rate estimates using the maximum entropy method for nine annotation categories. Columns include the SNP id (legendrs), chromosome (chr), position (pos), minor allele frequency (maf), slope coefficient (beta) and p-value (p_dgc) for association with CAD from the consortium, z-squared (z_sq), and then various LFDR estimates. The...
Article
This paper deals with Bayes, robust Bayes, and minimax predictions in a subfamily of scale parameters under an asymmetric precautionary loss function. In Bayesian statistical inference, the goal is to obtain optimal rules under a specified loss function and an explicit prior distribution over the parameter space. However, in practice, we are not ab...
Article
This paper deals with prior uncertainty in the parameter learning procedure in Bayesian networks. In most studies in the literature, parameter learning is based on two well-known criteria, i.e., the maximum likelihood and the maximum a posteriori. In the presence of prior information, the literature abounds with situations in which a maximum a posterio...
Article
In this paper we deal with Bayes, E-Bayes and robust Bayes prediction under precautionary loss functions. It is well-known that in the Bayesian framework, the Bayes rule is obtained by considering a specific prior distribution over the parameter of interest, but in practice the use of a specified prior with specific hyperparameters is a critical issue. Spe...
Article
Bayesian networks are graphical probabilistic models representing the joint probability function over a set of random variables using a directed acyclic graphical structure. In this paper, we consider a road accident data set collected on one of the busiest highways in Iran. Implementing the well-known parents and children algorithm, as a constrain...
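The factorization underlying such models, included here for readability (standard Bayesian-network form):
P(X_1, \dots, X_p) = \prod_{i=1}^{p} P(X_i | Pa(X_i)),
where Pa(X_i) denotes the parents of X_i in the learned directed acyclic graph.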
Article
Robust Bayesian methodology deals with the problem of accounting for uncertainty in the inputs (the prior, the model, and the loss function) and provides a principled way to take variation in these inputs into account. If the uncertainty concerns the prior knowledge, robust Bayesian analysis provides a way to consider the prior knowledge in terms o...
Article
This paper deals with Bayes, robust Bayes and minimax predictions in a subfamily of scale parameters under an asymmetric precautionary loss function. In Bayesian statistical inference the goal is to obtain optimal rules under a specified loss function and an explicit prior distribution over the parameter space. However, in practice, we are not able...
Article
Prediction of a future observation on the basis of currently observed data is required in many theoretical and applied problems. In this paper, we consider prediction of a future observation from scale models under the general entropy prediction loss function, deal with Bayes and Posterior Regret Gamma Minimax prediction, and obtain general form...
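For reference, the general entropy loss commonly used in this setting has the form (up to the paper's exact parameterization, which I have not verified):
L(\theta, d) = (d / \theta)^q - q \log(d / \theta) - 1, with q \neq 0,
which penalizes over- and under-prediction asymmetrically, depends on d and \theta only through their ratio, and is therefore natural for scale-family prediction problems.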
Article
In this paper, we consider the problem of predicting a future observation in a family of scale parameter models under a class of precautionary prediction loss functions, in the context of Bayes and robust Bayes methodology. Under three members of this class, which are suitable choices when considering scale invariant...
Article
Full-text available
Let X be a random variable from a normal distribution with unknown mean θ and known variance ρ². In many practical situations, θ is known in advance to lie in an interval, say [-m, m], for some m > 0. Since the usual estimator of θ, namely X, is inadmissible under the LINEX loss function, finding competitors for X becomes worthwhile. The only study...
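For readers unfamiliar with it, the LINEX (linear-exponential) loss referred to here is
L(\theta, d) = b ( e^{a(d - \theta)} - a(d - \theta) - 1 ), with a \neq 0 and b > 0,
which penalizes over- and under-estimation asymmetrically; without the restriction \theta \in [-m, m], the Bayes estimator under this loss is d(x) = -(1/a) \log E[ e^{-a\theta} | x ].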
Article
For estimating an unknown scale parameter of Gamma distribution, we introduce the use of an asymmetric scale invariant loss function reflecting precision of estimation. This loss belongs to the class of precautionary loss functions. The problem of estimation of scale parameter of a Gamma distribution arises in several theoretical and applied proble...
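As one concrete member of the precautionary class (the paper's specific loss may differ):
L(\theta, d) = (d - \theta)^2 / d,
whose Bayes estimator is d(x) = \sqrt{E[\theta^2 | x]}; since this is never smaller than the posterior mean, the rule guards against underestimation of the scale parameter.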
Conference Paper
Let X₁, ..., Xₙ be a random sample from a normal distribution with unknown mean θ and known variance σ². The usual estimator of the mean, i.e., the sample mean X̄, is the maximum likelihood estimator, which under the squared error loss function is minimax and admissible. In many practical situations, θ is known in advance to lie in an interval, s...

Projects

Projects (4)
Project
This project concentrates on the theoretical development of robust Bayesian inference.
Project
Research on the maximum entropy principle, including the minimization of relative entropy. Older research on the maximum entropy principle: https://davidbickel.com/category/methods/maximum-entropy/
Project
Research on imprecise probability, especially with robust Bayes methods. Older research: https://davidbickel.com/category/methods/imprecise-probability/