Computational Statistics - Science topic
Explore the latest questions and answers in Computational Statistics, and find Computational Statistics experts.
Questions related to Computational Statistics
I have already estimated the threshold I will consider and created the pdf of the extreme values I will take into account, and I am currently trying to fit a Generalized Pareto Distribution to it. Thus, I need to find a way to estimate the shape and scale parameters of the Generalized Pareto Distribution.
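For a peaks-over-threshold analysis, maximum likelihood is the usual way to estimate the GPD shape and scale from the exceedances. A minimal sketch in Python using SciPy's `genpareto.fit` (the simulated exceedances are a stand-in for your data; `floc=0` fixes the location at zero because exceedances start at the threshold):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical exceedances over an already-chosen threshold
excesses = stats.genpareto.rvs(c=0.2, scale=1.5, size=2000, random_state=rng)

# Maximum-likelihood fit with location fixed at 0
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
print(shape, scale)  # should be near the true 0.2 and 1.5
```

The same fit is available in R via, e.g., maximum-likelihood fitters in extreme-value packages; the key point is to fit only the exceedances, with the location pinned at the threshold.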
I'm currently working on my master's thesis, in which I have a model with two IVs and two DVs. I proposed a hypothesis that the two IVs are substitutes for each other in improving the DVs, but I cannot figure out how to test this in SPSS. Maybe I'm overcomplicating it. In my research, the IVs are contracting and relational governance, and thus they might be complementary in influencing my DVs or they might function as substitutes.
I hope anyone can help me, thanks in advance!
Hello Everyone,
I want to produce the following figure using R for my paper, but I don't know how to produce it without overlapping labels. Can anybody tell me how to do that with the base plot function, plot()? There are other packages available to produce this figure, but I am interested in the plot function in R.
Here is my R script:
plot(SO ~ TO, xlim = c(0.4, 0.9), ylim = c(0.1, 0.5), col = "green3", pch = 19, cex = 2, data = TOSO)
text(SO ~ TO, labels = X, data = TOSO, cex = 0.8, font = 1, pos = 4)
Thank you in advance.
Himanshu
I explored different packages (such as mgcv, MCMCglmm, glmmADMB, etc.) but they all have limitations: either they don't allow for zero-inflated or zero-altered distributions, or they don't allow for temporal autocorrelation through functions such as corAR1() or corARMA().
I am working with the ENMeval package (Muscarella et al., 2014) in R to develop species distribution models. This package fits SDMs in the raw Maxent output, but I need logistic or probability maps in order to conduct further analysis in ArcGIS.
While running multinomial logistic regression in SPSS, an error appears in the parameter estimates table. Some Wald statistic values are missing because of zero standard errors, and a message is displayed below the table: "Floating point overflow occurred while computing this statistic. Its value is therefore set to system missing". Does anyone know how to resolve this error?
I have a set of nonlinear ordinary differential equations with some unknown parameters that I would like to determine from experimental data. Does anybody know of any good, freely available software, or good reference books?
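One freely available option is Python's SciPy: integrate the ODE system with `solve_ivp` and minimize the residuals against the data with `least_squares`. A sketch on a toy one-equation system (the decay rate and initial condition are hypothetical stand-ins for your unknown parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical data from dy/dt = -k*y with k = 0.7, y(0) = 5, plus noise
t_obs = np.linspace(0.0, 4.0, 25)
rng = np.random.default_rng(0)
y_obs = 5.0 * np.exp(-0.7 * t_obs) + rng.normal(0.0, 0.05, t_obs.size)

def residuals(params):
    # Integrate the ODE for the current parameter guess and compare to data
    k, y0 = params
    sol = solve_ivp(lambda t, y: -k * y, (t_obs[0], t_obs[-1]), [y0],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0] - y_obs

fit = least_squares(residuals, x0=[1.0, 1.0])
k_hat, y0_hat = fit.x
print(k_hat, y0_hat)  # should recover roughly 0.7 and 5.0
```

The same pattern scales to systems of equations: the residual function integrates the whole system and stacks the residuals of every observed state.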
Xlstat add-ins have two different kinds of trend analysis (MK and Seasonal MK). What is the difference in the way they are calculated, and what type of data should be the input (monthly or seasonal)?
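On the calculation side: the classical Mann-Kendall test works on the series as one sequence, while the seasonal variant computes the MK statistic separately within each season (e.g. each calendar month across years) and sums them, so the seasonal cycle itself is not counted as a trend; monthly data is the typical input for the seasonal version. A hedged sketch of both (no correction for ties, illustrative only):

```python
import numpy as np
from scipy import stats

def mk_stats(x):
    """S statistic and its variance for the classical Mann-Kendall test
    (no tie correction; illustrative only)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    return s, var

def mk_pvalue(s, var):
    # Normal approximation with continuity correction
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    return 2 * stats.norm.sf(abs(z))

# Classical MK treats the whole monthly series as one sequence
rng = np.random.default_rng(1)
years, months = 10, 12
series = 0.05 * np.arange(years * months) + rng.normal(0, 1, years * months)
month = np.arange(years * months) % months
p_classical = mk_pvalue(*mk_stats(series))

# Seasonal MK: compute S and Var(S) within each month, then sum across months
s_tot = var_tot = 0.0
for m in range(months):
    s_m, var_m = mk_stats(series[month == m])
    s_tot += s_m
    var_tot += var_m
p_seasonal = mk_pvalue(s_tot, var_tot)
print(p_classical, p_seasonal)  # both small for this trending series
```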
Rolle's theorem is applicable in R. Is it also applicable for a function f going from R^n to R^n?
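For what it's worth, Rolle's theorem fails for vector-valued functions in general; a standard counterexample (here with values in \(\mathbb{R}^2\)):

```latex
f(t) = (\cos t, \sin t), \qquad t \in [0, 2\pi], \qquad f(0) = f(2\pi) = (1, 0),
```

yet \(f'(t) = (-\sin t, \cos t)\) satisfies \(\|f'(t)\| = 1\) for every \(t\), so \(f'\) never vanishes. Each component does satisfy Rolle's theorem separately, but at possibly different points, which is all the componentwise argument gives for maps from \(\mathbb{R}^n\) to \(\mathbb{R}^n\).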
We used SPSS to conduct a mixed model linear analysis of our data. How do we report our findings in APA format? If you can direct us to a source that explains how to format our results, we would greatly appreciate it. Thank you.
We want to calculate the Root Mean Square Error (RMSE) from the model summary table of a Multilayer Perceptron (SPSS), in the format demonstrated in the Table.
Hi,
I want to know how to rank the relative importance of predictors calculated by summing the Akaike weights of the different models in which they are included.
I have only found a comment to this effect in the following link, but it would be helpful to find a citation.
thanks
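The usual citation for this sum-of-weights measure of relative variable importance is Burnham & Anderson (2002), Model Selection and Multimodel Inference (2nd ed.). Computationally it is just the following (AIC values and predictor sets are hypothetical):

```python
import numpy as np

# Hypothetical AIC values for candidate models and the predictors each contains
aic = np.array([100.0, 101.5, 104.0, 102.2])
predictors_in_model = [{"temp", "rain"}, {"temp"}, {"rain"}, {"temp", "soil"}]

# Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j)
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

# Relative importance of a predictor = sum of weights of models containing it
importance = {p: sum(wi for wi, preds in zip(w, predictors_in_model) if p in preds)
              for p in {"temp", "rain", "soil"}}
print(sorted(importance.items(), key=lambda kv: -kv[1]))
```

Predictors are then ranked by this summed weight, from largest to smallest.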
Hi All,
I'm looking for assistance on the below queries related to seasonality of VAR and inventory management / supply chain models.
I have a multivariate time series with 2 variables.
1) Could you let me know if a VAR model will work with a multivariate time series with seasonality?
2) Does anything need to be done explicitly for VAR to handle the seasonal and random components of the constituent time series?
3) What other multivariate time series models are similar to the VAR model?
4) Are there any R packages that support inventory management other than SCPerf and InventoryModelPackage? I'm looking to implement inventory optimization using R. Any leads/sample code would be helpful.
Regards
Lal
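On question 2: one common approach is to handle the deterministic seasonal component explicitly, either by deseasonalizing before fitting the VAR, or by adding seasonal dummies as exogenous regressors (statsmodels' `VAR`, for instance, accepts an `exog` argument for this, if I recall its API correctly). A small NumPy sketch of the deseasonalize-first route, on simulated monthly data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120  # 10 years of monthly observations on 2 variables
month = np.arange(n) % 12
cycle = np.column_stack([np.sin(2 * np.pi * month / 12),
                         np.cos(2 * np.pi * month / 12)])
# Two hypothetical series sharing a deterministic seasonal cycle plus noise
y = cycle @ np.array([[2.0, -1.0], [0.5, 0.3]]) + rng.normal(0, 0.5, (n, 2))

# Deseasonalize: subtract each calendar month's mean from both variables,
# then fit an ordinary VAR to the deseasonalized residual series
monthly_means = np.array([y[month == m].mean(axis=0) for m in range(12)])
y_deseason = y - monthly_means[month]

# Check: after the subtraction, every month's mean is numerically zero
worst = np.abs(np.array([y_deseason[month == m].mean(axis=0)
                         for m in range(12)])).max()
print(worst)
```

The dummy-variable route keeps everything in one model instead, at the cost of 11 extra deterministic regressors per equation for monthly data.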
I want to do a PERMANOVA using AIC as the selection criterion. I ran PERMANOVA with adonis from the vegan package, but I can't find how to use AIC:
> m <- adonis(dune ~ ., dune.env)
> extractAIC(m)
Error in UseMethod("extractAIC") :
  no applicable method for 'extractAIC' applied to an object of class "adonis"
Do you know how to do it? Or is there another way to do PERMANOVA in R (choosing the distance matrix) that allows using AIC?
Hi,
I was wondering if there is an R-package (and functions therein) that implements Bayesian Phylogenetic Mixed Models (BPMM) or is the general R-package "MCMCglmm" for Bayesian Mixed Models currently the best option?
As you know, Linear Discriminant Analysis (LDA) is used for dimension reduction as well as for classification of data.
When we use LDA as a classifier, the posterior probabilities for the classes are normally computed by statistical libraries such as R (for example, https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/predict.lda.html).
How, then, are the posterior probabilities computed? How are they estimated in the projected low-dimensional space?
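As far as I understand predict.lda, the posteriors come straight from Bayes' rule with Gaussian class densities sharing one covariance matrix; the projection does not change them, because the discarded discriminant directions carry no between-class information. A small sketch with hypothetical class parameters:

```python
import numpy as np

# Hypothetical two-class problem with a shared covariance, as LDA assumes
mu = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 1.0])}
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
prior = {0: 0.5, 1: 0.5}

def lda_posterior(x):
    """Posterior via Bayes' rule: p(k|x) proportional to pi_k * N(x; mu_k, Sigma).
    With a shared Sigma, the log-density reduces to a linear discriminant score."""
    cov_inv = np.linalg.inv(cov)
    scores = np.array([x @ cov_inv @ mu[k] - 0.5 * mu[k] @ cov_inv @ mu[k]
                       + np.log(prior[k]) for k in (0, 1)])
    scores -= scores.max()          # numerical stability before exponentiating
    p = np.exp(scores)
    return p / p.sum()

# At the midpoint of the two means with equal priors, the posterior is 50/50
post = lda_posterior(np.array([1.0, 0.5]))
print(post)
```

Because all class covariances are equal, the quadratic terms cancel and the same posteriors are obtained from the low-dimensional discriminant scores.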
Hello everyone :)
I am currently conducting a comprehensive meta-analysis on customer loyalty, with a large number of articles that use SEM to evaluate the strengths of the relationships between the different variables I am interested in (satisfaction, loyalty, trust, etc.).
I saw that for most meta-analyses the effect size metric is r. But since all my articles of interest use SEM, I could only extract the beta coefficients, t-values and p-values. Is it okay to use these kinds of metrics to conduct a meta-analysis?
I saw an article by Peterson (2005) explaining how to transform a beta into an r coefficient for the articles where r is not available. This is a first start, but it does not give me a comprehensive method for conducting a meta-analysis only with SEM articles (what metrics should I code? what statistics should I compute? etc.).
My question is then: is it possible to conduct a meta-analysis with articles using SEM? If yes, do you have references explaining how to code the metrics and compute the statistics for the meta-analysis?
Thanks in advance for your help ! :)
Kathleen Desveaud
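On the Peterson (2005) transformation mentioned above: if it is the Peterson & Brown (2005, Journal of Applied Psychology) article, their imputation is a simple linear approximation, recommended only for standardized betas between about -0.5 and 0.5:

```python
def beta_to_r(beta):
    """Peterson & Brown (2005) approximation for imputing r from a
    standardized regression coefficient: r = .98*beta + .05*lambda,
    where lambda is 1 for nonnegative beta and 0 otherwise.
    Recommended only for |beta| <= 0.5."""
    lam = 1.0 if beta >= 0 else 0.0
    return 0.98 * beta + 0.05 * lam

print(beta_to_r(0.30))   # about 0.344
print(beta_to_r(-0.20))  # about -0.196
```

Once betas are converted to r, the usual r-based meta-analytic machinery (Fisher z transformation, inverse-variance weighting) applies.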
Using R code in the Vine copula package we can obtain a tree graph of the dependence structure; can we draw the same tree in Matlab? Any help?
STATA does not support moving block bootstrapping where one can specify the length of the block, so I have to write the command myself.
I would be very thankful if anyone can help.
Cheers
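In case it helps while writing the Stata command: the moving-block bootstrap itself is only a few lines. A sketch in Python (the block length and series are illustrative):

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap replicate: sample overlapping blocks of
    length block_len with replacement and concatenate to length len(x)."""
    x = np.asarray(x)
    n = x.size
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    resample = np.concatenate([x[s:s + block_len] for s in starts])
    return resample[:n]

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 200).cumsum()  # a dependent (random-walk) series
boot_means = [moving_block_bootstrap(x, block_len=10, rng=rng).mean()
              for _ in range(500)]
print(np.std(boot_means))  # bootstrap SE of the mean, respecting dependence
```

The Stata translation is the same idea: draw uniform block start indices, stack the blocks, truncate to the original length, and compute the statistic on each replicate.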
In decision-making applications based on neutrosophic logic, how does one sort the following from best to worst:
(T, F, I) : (1,0,0) (1,0,1), (1,1,0), (1,1,1), (0,0,0), (0,1,1), (0,1,0)
Hi!
We are trying to estimate body mass (W) heritability and cross-sex genetic correlation using MCMCglmm. Our data matrix consists of three columns: ID, sex, and W. Body mass data is NOT normally distributed.
Following previous advice, we first separated weight data into two columns, WF and WM. WF listed weight data for female specimens and “NA” for males, and vice-versa in the WM column. We used the following prior and model combination:
prior1 <- list(R=list(V=diag(2)/2, nu=2), G=list(G1=list(V=diag(2)/2, nu=2)))
modelmulti <- MCMCglmm(cbind(WF,WM)~trait-1, random=~us(trait):animal, rcov=~us(trait):units, prior=prior1, pedigree=Ped, data=Data1, nitt=100000, burnin=10000, thin=10)
The resulting posterior means were suspiciously low (e.g. 0.00002). We calculated heritability values anyway, using the following:
herit1 <- modelmulti$VCV[,'traitWF:traitWF.animal'] / (modelmulti$VCV[,'traitWF:traitWF.animal'] + modelmulti$VCV[,'traitWF:traitWF.units'])
herit2 <- modelmulti$VCV[,'traitWM:traitWM.animal'] / (modelmulti$VCV[,'traitWM:traitWM.animal'] + modelmulti$VCV[,'traitWM:traitWM.units'])
corr.gen <- modelmulti$VCV[,'traitWF:traitWM.animal'] / sqrt(modelmulti$VCV[,'traitWF:traitWF.animal'] * modelmulti$VCV[,'traitWM:traitWM.animal'])
We get heritability estimates of about 50%, which is reasonable, but correlation estimates were extremely low, about 0.04%.
Suspecting the model was wrong, we used the original dataset with all weight data in a single column and tried the following model:
prior2 <- list(R=list(V=1, nu=0.02), G=list(G1=list(V=1, nu=1, alpha.mu=0, alpha.V=1000)))
model <- MCMCglmm(W~sex, random=~us(sex):animal, rcov=~us(sex):units, prior=prior2, pedigree=Ped, data=Data1, nitt=100000, burnin=10000, thin=10)
The model runs, but it refuses to calculate “herit” values, with the error message “subscript out of bounds”. We’d also add that in this case, the posterior density graph for sex2:sex.animal is not shaped like a bell.
What are we doing wrong? Are we even using the correct models?
Eva and Simona
Good morning,
is there any way to check for multivariate outliers when the data are not composed solely of continuous variables? My dataset includes categorical variables (with 2 and 3 levels) and continuous variables. I wonder whether there exists any modification of the Mahalanobis distance, or any other test.
I know of at least one, developed by de Leon & Carriere for detecting outliers in data including nominal and ordinal variables (reference below), but I did not find any software implementing it.
de Leon, A.R. & Carriere, K.C. 2005. A generalized Mahalanobis distance for mixed data. Journal of Multivariate Analysis 92: 174-185.
I work mainly in R and Mplus, so it would be great if you could give a solution within this software.
Thanks!
I wonder how to compute the statistical weight of a negative hydrogen ion (anion). I know that the statistical weight of the electron is ge=2 and that of the proton is gp=2, as they both have spin 1/2. The statistical weight of the hydrogen atom is gH=gp×ge=4. Assuming that the electrons in the negative ion are on different energy levels, I get gA=gp×ge×ge=8. Is that correct?
Cronbach's alpha for all the dimensions is satisfactorily above 0.7
All factor loadings are satisfactory.
The Model fit indices are not satisfactory when CFA is run on AMOS, and error message "the following covariance matrix is not positive definite" is displayed.
Please help.
Hi,
Recently I was reading a paper by Will G Hopkins about magnitude-based inferences and downloaded his related spreadsheet. This concept is new to me and I still have a lot of doubts. In particular, I did not understand what he means by "value of effect statistic" and what he considers beneficial (+ive) or harmful (-ive). Can someone help me?
Thanks in advance
Hi
Your project sounds really interesting. For your interest, we have developed a novel methodology, referred to as HMM-GP (paper attached), for artificially generating synthetic daily streamflow sequences. HMM-GP, a suite of stochastic modelling techniques, integrates a Hidden Markov model (HMM) with the generalised Pareto distribution (GP). The HMM retains the key statistical characteristics of the observed (input) streamflow records in the synthetic (output) series, but essentially re-orders the magnitude, spacing and frequency of streamflow sequences to simulate realistic alternative (artificial) flow scenarios. These synthetic series could be utilised in a range of hydrological/hydraulic applications. Moreover, within the HMM-GP modelling framework, a generalised Pareto distribution fitted to values over the 99th percentile allows accurate simulation of extreme flows/events.
I would be very happy to hear from you if you have any comments/questions for me.
Best wishes
Sandhya
If you have heavy-tailed residual distributions, or you suspect endogeneity, apply GMM.
Hello,
I want to implement a function using Matlab, that can be used to perform sampling without replacement with unequal weights.
- Without replacement: when sampling without replacement, each data point in the original dataset can appear at most once in the sample. The sample is therefore no larger than the original dataset.
- With unequal weights: when sampling with unequal weights, the probability of an observation from the original dataset appearing in the sample is proportional to the weight assigned to that observation.
The samples' weights should be changed at each iteration.
I have found this function, in the link:
function I = randsample_noreplace(n, k, w)
I = randsample(n, k, true, w);
while 1
    [II, idx] = sort(I);
    Idup = [false, diff(II) == 0];
    if ~any(Idup)
        break
    else
        w(I) = 0;             % don't draw already-sampled items again
        Idup(idx) = Idup;     % map duplicates back to the original order
        I = [I(~Idup), randsample(n, sum(Idup), true, w)];
    end
end
where a vector of probabilities will be generated using the random() function,
for example:
n=3; p=0.5; M=20; N=1;
random('Binomial',n,p,[M,N])
Any suggestion would be appreciated.
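For what it's worth, in Python this whole loop collapses to a single call, since NumPy's `Generator.choice` supports weighted sampling without replacement directly; the exponentiated-keys trick shown after it (Efraimidis-Spirakis) is handy when the weights change at every iteration:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 10, 4
w = rng.random(n)          # hypothetical positive weights
p = w / w.sum()

# Direct approach: weighted sampling without replacement in one call
idx = rng.choice(n, size=k, replace=False, p=p)
print(sorted(idx.tolist()))

# Efraimidis-Spirakis keys: draw u_i^(1/w_i) and keep the k largest keys;
# each draw uses the current weights, so updating w between iterations is easy
keys = rng.random(n) ** (1.0 / w)
idx2 = np.argsort(keys)[-k:]
print(np.unique(idx2).size)  # k distinct indices, i.e. no replacement
```

MATLAB's datasample also has 'Replace' and 'Weights' options worth checking against your version's documentation.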
I am working with dichotomous data (8 items).
To find out which model fits my data best, I used the WLSMV estimator in lavaan and specified two models:
1-factor-model
2-factor-model
First I specified a model with 1 factor (myModel) with WLSMV estimation and the ordered=c(...) argument in lavaan. This is the code:
fit <- cfa(myModel, data=data.deskr, ordered=c("T1_OD1", "T1_OR1", "V1_OD1", "V1_OR1", "T2_OD1", "T2_OR1", "V2_OD1", "V2_OR1"), group.equal = c("loadings"))
It worked well: I got an output for this model including fit indices etc. Then I specified a 2-factor model in exactly the same way. It also worked well. The 2-factor model is called 'myModel2'/'fit2'.
Then I wanted to compare these two models by using: anova(fit, fit2)
The warning message is:
Error in lav_test_diff_af_h1(m1 = m1, m0 = m0) : lavaan ERROR: unconstrained parameter set is not the same in m0 and m1
Is there somebody who can help me?
Thank you :)
Dear friends, hello. In my logistic regression classification task, some parameters are qualitative (e.g., race, city, country, etc.). Please answer my questions:
1. We have to code the values of these parameters as numbers 1, 2, 3, ... Does the assignment of these numerical values need to reflect the similarity of the underlying category values, or is it unimportant from a classification point of view?
2. Is it possible to code missing values of some parameters for some objects as zero?
Thanks in advance for your answers.
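On question 1: for nominal predictors like race or city, the usual advice is not to code them as 1, 2, 3 (that imposes an arbitrary ordering the model will take literally) but to use dummy/one-hot coding. On question 2: coding missing values as 0 conflates them with a real category or magnitude; a separate missing-value indicator is safer. A sketch with a made-up dataset:

```python
import pandas as pd

# Hypothetical dataset with qualitative predictors and a missing value
df = pd.DataFrame({
    "race":    ["A", "B", "A", "C"],
    "city":    ["x", "x", "y", "y"],
    "income":  [30, 45, 38, None],
    "default": [0, 1, 0, 1],
})

# Dummy (one-hot) coding avoids imposing an arbitrary numeric order
X = pd.get_dummies(df[["race", "city"]], drop_first=True)

# For the missing numeric value: impute, plus a separate missingness flag
X["income_missing"] = df["income"].isna().astype(int)
X["income"] = df["income"].fillna(df["income"].mean())
print(X.columns.tolist())
```

Each dummy column then gets its own logistic regression coefficient, so no similarity ordering ever needs to be assumed.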
I am making a machine readable database of synaptic electrophysiology data from different sources. It is possible to find two experiments that have the same experimental condition, therefore data normalization is necessary. I am working on a list of possible covariates that influence the synaptic signals and possible suggestion for data normalization with respect to that covariate (please find the attached pdf document).
Please let me know if you know a covariate that is missing from this list, also your proposals if you had any.
I have temperature time series data in which the temperature fluctuates over time. I want to plot the pdf of the temperature without any predefined fit.
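A kernel density estimate gives exactly that: a smooth pdf with no parametric family assumed. A sketch with simulated temperatures (plot `grid` against `pdf` with your plotting tool of choice):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
# Hypothetical bimodal temperature series
temp = np.concatenate([rng.normal(15, 2, 500), rng.normal(25, 3, 300)])

# Kernel density estimate: a nonparametric pdf, no distribution assumed
kde = gaussian_kde(temp)
grid = np.linspace(temp.min(), temp.max(), 200)
pdf = kde(grid)

# The estimated density integrates to roughly 1 over the data range
integral = pdf.sum() * (grid[1] - grid[0])
print(integral)
```

A histogram with density normalization is the cruder alternative; the KDE bandwidth (settable in `gaussian_kde`) controls the smoothness.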
We are doing a mini project on differential equation modeling: fitting a logistic differential equation to yeast data. We need to estimate the parameters using an incomplete data set (missing values). Two questions:
1. Besides least squares and MLE, what else is good for (quick and dirty) parameter estimation?
2. How should we deal with a data set with missing data points? (My students are using neural networks and gradient descent.)
Thank you!
I am trying regression analysis in MATLAB and want to plot PPV vs. scaled distance as a power function on a log-log scale. If there is a specific command for this, kindly let me know.
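The underlying trick is the same in any language: a power law PPV = a*SD^b is a straight line in log-log space, so it can be fitted with a linear fit on the logs (polyfit in MATLAB, then loglog for the plot). A sketch with made-up PPV data in Python:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical PPV = a * SD^b relation with multiplicative noise
sd = np.linspace(5, 100, 40)                       # scaled distance
ppv = 120.0 * sd ** -1.5 * np.exp(rng.normal(0, 0.05, sd.size))

# A power law is linear in log-log space: log(PPV) = log(a) + b*log(SD)
b, log_a = np.polyfit(np.log(sd), np.log(ppv), 1)
a = np.exp(log_a)
print(a, b)  # close to the true 120 and -1.5
```

In MATLAB the equivalent is `p = polyfit(log(sd), log(ppv), 1)` followed by `loglog(sd, ppv, 'o')` and plotting the fitted line.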
I want to carry out research on classification with missing data, but I want to explore a new method. I need your suggestions, please.
Hello
Before applying multifactorial analyses, we must ensure that the data follow a Gaussian distribution. Please send me relevant documents.
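A common way to check this is a normality test such as Shapiro-Wilk on each variable, keeping in mind that with large samples it rejects even trivial departures, so pairing it with a Q-Q plot is wise. A sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
normal_sample = rng.normal(50, 5, 200)   # hypothetical near-Gaussian variable
skewed_sample = rng.exponential(5, 200)  # hypothetical non-Gaussian variable

# Shapiro-Wilk: H0 = the data come from a normal distribution
_, p_norm = stats.shapiro(normal_sample)
_, p_skew = stats.shapiro(skewed_sample)
print(p_norm, p_skew)  # typically p_norm is large, p_skew is very small
```

Kolmogorov-Smirnov (Lilliefors-corrected) and Anderson-Darling tests, plus visual Q-Q plots, are the usual complements.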
I want to fit the nested linear-nonlinear Poisson models to the spike train of a neuron. How can I test the model performance by computing how much the log likelihood increase from a fixed mean firing rate model? i.e., what test can I use to see if the increase was significant? Is it possible to use the one-sided signed rank test? If yes, how?
My question is about Bayesian VARs (to be specific, how to utilize them in Eviews).
To begin with, I would like to know whether Bayesian VARs are superior to conventional VARs, or when to use Bayesian VARs, rather than conventional VARs.
And suppose that I have to use Bayesian VARs in my research. In Eviews, unlike unrestricted VARs, I have to specify priors, and so on, when employing Bayesian VARs. Are there any "cookbook" procedures (e.g. t-value greater than 2.0) for using Bayesian VARs in Eviews? I would be delighted if you could provide me with non-technical lecture notes or spell out the general procedure.
Many thanks in advance,
Mizuki Tsuboi
My dear colleagues, I am conducting a simulation study of bivariate distributions (Pareto, exponential) and would appreciate any help with this work.
While the estimate and 95% Confidence Interval are available, it is unclear what the degrees of freedom would be. For example, with completely made up data, one might want to compare the association between sleep disturbance and depression (e.g., OR=1.2 [1.1, 1.3] k=10) as well as sleep disturbance and anxiety (e.g., OR=1.25 [1.15, 1.35], k=15).
I have a hierarchical model with prior parameters mu and tau. I need to do a simulation study in which I assume true values of mu and tau and use these true values to generate datasets. My question:
How I could choose true values of mu and tau?
Is there rule to choose them?
Thanks
I have a 347x225 matrix, 347 samples (facebook users), and 225 features (their profile), and I used the PCA function for the dimension reduction in Matlab.
x = load(dataset)
coeff = pca(x)
It generated a 225x98 matrix, but I don't understand what exactly it is generating and what to do next. Can anyone help me understand? My main goal is to reduce the dimension of my original matrix.
I also don't know what the following are:
coeff = pca(X, Name, Value)
[coeff, score, latent] = pca(___)
[coeff, score, latent, tsquared] = pca(___)
[coeff, score, latent, tsquared, explained, mu] = pca(___)
Assume a joint pdf of a bivariate data is a known distribution but not bivariate Gaussian. I have read that Copula might be used to measure dependency between the two variables. But is there a way to remove the dependency between the two variables? Can we convert dependent bivariate variables to independent bivariate variables?
Hello,
I'm working on panel data containing different banks in different years, and I'm trying to run a regression using all the data I have. For that, I have to run a homogeneity test in order to see whether the same coefficients can apply to all the banks.
Trying to run the test in R using the pooltest function, I get the following error: "Error in FUN(X[[i]], ...) : insufficient number of observations".
I went to look at the function code and found that the error only appears when (nrow(X) <= ncol(X)), which is not my case, since my data has 60 rows and 15 columns.
What to do ?
Need to interpret the trends shown in charts.
As far as I have understood, in Weka there is only one option, i.e., replace with mean or mode. If I want to experiment with other imputation methods, such as regression or machine learning techniques, which tool would you suggest I use?
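If you end up outside Weka, regression-style imputation is straightforward to prototype yourself, and scikit-learn ships SimpleImputer, KNNImputer and an experimental IterativeImputer for the machine-learning variants. A minimal regression-imputation sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical data: y depends strongly on x; some y entries are missing
x = rng.normal(0, 1, 100)
y = 2.0 * x + rng.normal(0, 0.1, 100)
y_obs = y.copy()
missing = rng.random(100) < 0.2
y_obs[missing] = np.nan

# Regression imputation: fit y ~ x on complete rows, predict the missing y
ok = ~np.isnan(y_obs)
b, a = np.polyfit(x[ok], y_obs[ok], 1)
y_imp = np.where(np.isnan(y_obs), a + b * x, y_obs)

# Compare with mean imputation: regression is far more accurate here
mean_imp = np.where(np.isnan(y_obs), y_obs[ok].mean(), y_obs)
err_reg = np.abs(y_imp[missing] - y[missing]).mean()
err_mean = np.abs(mean_imp[missing] - y[missing]).mean()
print(err_reg, err_mean)
```

R's mice package is the other standard tool for multiple imputation if you want proper uncertainty propagation rather than single imputation.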
Problem: Given a Direct Graph G (as shown in Fig Below). Each vertex represent a sensor node (have a limited battery). A link -> has a weight (represents the probability that will send the data packet to ). This weight will be fixed for a t seconds and then will be changed after that, as the battery of the nodes will be consumed per time. For example, as shown in Figure below, say we have node , pr( -> )=0.3; pr( -> )=0.5; pr( -> )=0.2; ; after a time t, this probability distribution will be changed ( according to the energy level of the battery ).
please see the link or the attached file.
For the same field, I have both yearly and quarterly data sets. The first set is detailed by product, so I can use panel techniques, and my results are globally satisfying. The second set is not detailed, and looks quite suspicious, in particular concerning seasonality; no satisfying estimation can be found. How can I transform the yearly formulations into quarterly ones? My equations use an error correction framework, so the dynamics are essential.
Someone told me to use the bootstrap instead, but are they equivalent? Is the bootstrap a better estimator?
How do I find the VaR vector from a copula? Say I have estimated the parameters of a t-copula...
Hello,
I have a lot of historical data from the past 20 years, and I've been told that Bayesian methods could improve several interesting things, like sample size determination or model parameter estimation.
I would like to put all of these data to use (I have hundreds of variables for ~100,000 records).
Do you know what good uses I could make of historical experimental data with Bayesian methods?
Thank you very much for your answers.
Yacine HAJJI
Suppose we have two algorithms (A and B) to solve a multi-objective problem, and each algorithm provides a set of solutions. Which statistical test is appropriate to compare these algorithms? Is the Wilcoxon test appropriate?
I'm looking for an equation for the ARL of the EWMA chart that is easy to calculate manually.
Hi everyone,
My data exhibit a bimodal distribution. Both modes are beta distributions on different, non-overlapping intervals. My question is how to obtain the first moment (mean), second moment (variance), third moment, and so on, of the total distribution.
Thank you in advance
Sam
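Since the total distribution is a mixture, its raw moments are just weight-averaged component moments, E[X^k] = w1*E[X1^k] + w2*E[X2^k] (this identity holds for any mixture, overlapping or not), and central moments follow from the raw ones. A sketch with hypothetical weights and beta components on [0,1] and [2,3]:

```python
import numpy as np
from scipy import stats

# Hypothetical mixture: Beta(2,5) on [0,1] and Beta(4,2) shifted to [2,3]
comp = [stats.beta(2, 5),
        stats.beta(4, 2, loc=2, scale=1)]
w = np.array([0.6, 0.4])  # mixing weights, must sum to 1

# Raw moments of the mixture are weighted sums of component raw moments
m1 = sum(wi * c.moment(1) for wi, c in zip(w, comp))
m2 = sum(wi * c.moment(2) for wi, c in zip(w, comp))
m3 = sum(wi * c.moment(3) for wi, c in zip(w, comp))
mean = m1
var = m2 - m1 ** 2        # central moments follow from the raw moments
print(mean, var, m3)

# Sanity check by simulation
rng = np.random.default_rng(12)
z = rng.random(100000) < w[1]
sample = np.where(z, comp[1].rvs(100000, random_state=rng),
                  comp[0].rvs(100000, random_state=rng))
print(sample.mean(), sample.var())
```

The third central moment is likewise m3 - 3*m1*m2 + 2*m1**3, and so on through the usual raw-to-central conversion.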
This is with respect to a hybrid of ANN and logistic regression in a binary classification problem. For example, in one of the papers I came across (Heather Mitchell), they state that "A hybrid model type is constructed by using the logistic regression model to calculate the probability of failure and then adding that value as an additional input variable into the ANN. This type of model is defined as a Plogit-ANN model".
So, for n input variables, I'm trying to understand how an additional input n+1 to an ANN is treated by the activation function (sigmoid) and in the summation of weights multiplied by inputs. Do we treat this probability variable n+1 as an additional feature that will have its own weight associated with it, or is it treated in a special way?
Thank you for your assistance.
The issue is that inverting the covariance matrix in the Mahalanobis distance sometimes leads to extreme values (Inf or NaN, for example). Is this something to be expected when inverting a matrix? If yes, how should I deal with these extreme values?
Info: the covariance matrix is obtained from a set of feature vectors, comprising intensity, homogeneity and entropy in my case.
If the answer could be demonstrated with an example, it would be very helpful.
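Yes, a near-singular covariance (e.g. highly correlated features, or fewer samples than features) makes the plain inverse blow up. Two standard remedies are the Moore-Penrose pseudo-inverse and adding a small ridge to the diagonal (shrinkage). A sketch with a deliberately collinear feature set mimicking intensity/homogeneity/entropy:

```python
import numpy as np

rng = np.random.default_rng(13)
# Hypothetical features where homogeneity nearly duplicates intensity,
# so the sample covariance is close to singular
intensity = rng.normal(0, 1, 50)
homogeneity = 2 * intensity + rng.normal(0, 1e-8, 50)
entropy = rng.normal(0, 1, 50)
X = np.column_stack([intensity, homogeneity, entropy])

cov = np.cov(X, rowvar=False)
print(np.linalg.cond(cov))  # huge condition number -> unstable inverse

d = X[0] - X.mean(axis=0)

# Remedy 1: Moore-Penrose pseudo-inverse instead of the plain inverse
d2_pinv = d @ np.linalg.pinv(cov) @ d

# Remedy 2: regularization - add a small ridge to the diagonal
lam = 1e-6 * np.trace(cov) / cov.shape[0]
d2_ridge = d @ np.linalg.inv(cov + lam * np.eye(3)) @ d
print(d2_pinv, d2_ridge)
```

Dropping one of the redundant features (or working in a PCA-reduced space) removes the singularity at the source.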
Hi,
I am trying to decide on a suitable model, and for that I ask myself: what are the differences between the bivariate probit model (biprobit in STATA) and the bivariate ordered probit model (bioprobit in STATA)?
Both can be used as seemingly unrelated regressions, but for the bioprobit part it says there have to be some valid exclusion restrictions... Is this really needed? And is there also a specification test in STATA for ordered bivariate models?
Hello researchers,
I'm trying to fit a Multivariate probit Model for a project on household consumption that I'm working on. I was wondering if there is a way I can fit a Bayesian variation of the model.
I will appreciate any help you can provide me with.
Kind regards
I have 11 sleep outcomes, all binary (e.g. initial insomnia? yes/no). I want to see whether the 126 individuals I have cluster onto the different sleep variables. I have tried EFA before, and the structure proved inappropriate for this, so I am now trying cluster analysis, potentially with the Manhattan distance method.
I am not sure how to go about this in STATA and would appreciate the help to be able to see whether my variables are clustering and from there, work these into regressions.
Thanks.
I have quarterly data for some variables and annual data for other variables. Please how do I go about the estimation. Which procedure will be best?
For the variable capital gain there are 22,792 observations, and 90% of them are data points with a value of 0. What is the best way to normalize the data before building an artificial neural net model?
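One common recipe for such a zero-inflated, heavy-tailed variable is a log1p transform (which maps the zeros to exactly 0) followed by scaling, optionally with a separate binary "is nonzero" input so the network can model the spike at zero on its own. A sketch with simulated capital gains:

```python
import numpy as np

rng = np.random.default_rng(14)
# Hypothetical capital-gain column: ~90% zeros, heavy right tail otherwise
gain = np.where(rng.random(22792) < 0.9, 0.0,
                rng.lognormal(mean=8, sigma=1, size=22792))

# log1p compresses the tail and maps 0 -> 0 exactly
gain_log = np.log1p(gain)

# Then scale to [0, 1] (or standardize) for the neural network
gain_scaled = gain_log / gain_log.max()

# A binary "has any gain" indicator lets the net treat the zero spike separately
has_gain = (gain > 0).astype(float)
print(gain_scaled.min(), gain_scaled.max(), has_gain.mean())
```

Whether the indicator helps is an empirical question; it is cheap to try alongside the transformed variable.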
Using nnstart in MATLAB I have generated a MATLAB function (shown below); how do I now use it to predict outcomes?
Hi all,
I want to find out the significant effect that will reduce the open circuit potential (OCP) of a metal-coating system. I have four factors with two levels and a limited time period for the experiment, measured at an interval. The samples fail if their OCP is above a critical potential value within the time frame of my experiment. I read that Minitab could help with this analysis, and I would like to do an analysis similar to that in section 5 of the paper cited below. Is it possible in my case to do DOE life testing? I'm not good at statistics and I don't know how to get Minitab to do the analysis for me. Please help!
Guo, H., & Mettas, A. (2012). Design of experiments and data analysis. Paper presented at the 58th Annual Reliability & Maintainability Symposium (RAMS 2012).
I see there is the lcmm (latent classes mixed models) package for R. What issues did you encounter?
I am looking for an example (dataset and code) of fitting a spatial hurdle model (zero-inflated model with a single source of zeros) using the INLA package for R. With this model, two equations are jointly fitted:
(1) the probability to observe at least one event (the single source of zeros)
(2) the (strictly > 0) count of events
I am aware of the R-INLA documentation available at http://www.r-inla.org/
Hi everyone,
I am giving a lecture next week on transforming non-normal data to normal. Transforming skewed data to normal is fairly easy to do using the Box-Cox transformation. However, I cannot find a transformation that helps with kurtotic data. I have seen some recommend the modulus transformation; I tested it using a Monte Carlo simulation, and it failed to normalize symmetrical but highly leptokurtic data (approx. 0 skew and 10 kurtosis).
Does anyone know of a transformation that normalizes kurtotic data?
thanks,
Cristian
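One option worth adding to your simulation is the rank-based inverse normal (e.g. Blom) transformation: it maps any continuous distribution, however kurtotic, onto normal scores, at the cost of being a rank transformation rather than a smooth power transform. A sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)
# Symmetric but highly leptokurtic data (a t distribution with 3 df)
x = stats.t.rvs(df=3, size=5000, random_state=rng)
print(stats.kurtosis(x))  # large excess kurtosis

# Rank-based inverse normal (Blom) transform:
# replace each value by the normal quantile of its (adjusted) rank
ranks = stats.rankdata(x)
x_norm = stats.norm.ppf((ranks - 0.375) / (x.size + 0.25))
print(stats.skew(x_norm), stats.kurtosis(x_norm))  # both near 0
```

The main caveats are ties (which get averaged ranks) and the fact that the transform is monotone but data-dependent, so it cannot be written as a fixed formula applied to new observations.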
I get the number of components for my dataset using BIC, but I want to know if the silhouette coefficient method is the right option to validate my results.
Thanks!
I have daily data from Jan/1/2008 to Jan/1/2012. I would like to create a dummy variable for the whole period after a specific date, namely after March 2011; in addition, I would like to create another dummy variable for the period from March 2011 to June 2011.
How to do that using Stata 13
Thanks in advance
Dear Members of RG community,
I am trying to calculate the incremental variance explained by the variables in a multivariate multiple linear regression model, but I don't have the sum-of-squares parameters as in (univariate) multiple linear regression. I'd like something like:
library(car)
#Create variables and adjusted the model
set.seed(123)
N <- 100
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N, 30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y1 <- 0.2*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 10)
Y2 <- -0.3*X2 + 0.2*X3 + rnorm(N, 10)
Y <- cbind(Y1, Y2)
dfRegr <- data.frame(X1, X2, X3, Y1, Y2)
(fit <- lm(cbind(Y1, Y2) ~ X1 + X2 + X3, data=dfRegr))
#How do we get the proportion now?
af <- Anova(fit)
afss <- af$"test stat"
print(cbind(af,PctExp=afss/sum(afss)*100))
#
Obviously this doesn't work. Is there some kind of approach for this?
Thanks
I have Z = XCF(42), the cross-correlation at lag 42.
I want to find the lag at which the ACF (autocorrelation) takes a value approximately equal to Z.
How do I find it?
Probability limits can be used if theta follows say a gamma or beta distribution, but I am not sure of using probability limit approach for some charts like EWMA or CUSUM. I would like to know if control limit of the form E(theta) +/- L*SD(theta) is always applicable when theta is not normally distributed.
I want to show the difference between two sets of Analytic Hierarchy process (AHP) data with proper statistical methods. Which method will be the most compatible and appropriate? Two sided hypotheses test of means or multinomial logit models or any other methods ?
We know that merging two Poisson processes results in another Poisson process with a rate that is the sum of the two original rates. (https://www.probabilitycourse.com/chapter11/11_1_3_merging_and_splitting_poisson_processes.php)
What type of process do we get by merging two processes with lognormal (interarrival time) distributions? How are the parameters of this process related to the parameters of the original lognormal processes?
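Unlike the Poisson case, the superposition of two lognormal-interarrival renewal processes is not itself a renewal process with lognormal gaps; what does carry over is that the long-run rates add, with each component's rate being 1/E[interarrival] = exp(-(mu + sigma^2/2)). A simulation sketch (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(16)

def renewal_arrivals(mu, sigma, t_end, rng):
    """Arrival times of a renewal process with lognormal interarrival times."""
    times, t = [], 0.0
    while t < t_end:
        t += rng.lognormal(mu, sigma)
        times.append(t)
    return np.array(times[:-1])  # drop the arrival past t_end

t_end = 50000.0
a = renewal_arrivals(0.0, 0.5, t_end, rng)
b = renewal_arrivals(0.5, 0.8, t_end, rng)
merged = np.sort(np.concatenate([a, b]))

# Long-run rates add, as for any superposition of point processes
rate_a = np.exp(-(0.0 + 0.5 ** 2 / 2))   # 1 / E[lognormal gap]
rate_b = np.exp(-(0.5 + 0.8 ** 2 / 2))
print(len(merged) / t_end, rate_a + rate_b)  # empirical vs theoretical rate

# But the merged gaps are not lognormal and are serially dependent,
# unlike the exponential/Poisson case; inspect their variability:
gaps = np.diff(merged)
print(gaps.std() / gaps.mean())
```

So the merged process has rate lambda1 + lambda2, but there is no closed-form lognormal parameterization for its interarrival distribution; it is usually characterized numerically as above.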
I am searching for a free Excel add-in or Excel sheet which allows the conversion of normally distributed data into lognormally distributed data.
I am trying to obtain different ordinal regression models with outcome as the dependent variable, and I would like to compare them in order to know which one is better. The C-statistic could be a way of comparing their performance, but I have doubts about how to compute this statistic for such a model.