Randomized - Science topic
Explore the latest questions and answers in Randomized, and find Randomized experts.
Questions related to Randomized
Hi,
I am looking for ways to add a random effect in a SUR model, using R or SAS.
To be more specific, I have panel data measured at an individual-and-daily level, and I want to stack 3 equations with different dependent and independent variables in a SUR model, with an individual random-effect coefficient.
If you guys have any example codes that I can refer to, it would be a great help!
Thank you:)
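In case it helps: below is a minimal sketch (toy data, hypothetical variable names) of one common workaround in R, namely stacking the three equations into long format and fitting them jointly with lme4, with an individual random intercept shared across the equations. Note that this does not model the cross-equation error correlation of a true SUR (systemfit estimates SUR but has no direct random-effects support), so treat it as an approximation.

library(lme4)
library(tidyr)

# toy individual-by-day panel (hypothetical names)
set.seed(1)
d <- data.frame(id = rep(1:50, each = 10), day = rep(1:10, times = 50),
                x1 = rnorm(500), x2 = rnorm(500), x3 = rnorm(500))
u <- rnorm(50)[d$id]                      # shared individual effect
d$y1 <-  1.0 + 2.0 * d$x1 + u + rnorm(500)
d$y2 <- -1.0 + 0.5 * d$x2 + u + rnorm(500)
d$y3 <-  0.3 * d$x3 + u + rnorm(500)

# stack the three equations: one row per individual-day-equation
long <- pivot_longer(d, c(y1, y2, y3), names_to = "eq", values_to = "y")

# equation-specific intercepts and slopes; one random intercept per individual
fit <- lmer(y ~ 0 + eq + eq:x1 + eq:x2 + eq:x3 + (1 | id), data = long)
summary(fit)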
NOTE 1 [file ions.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
Setting the LD random seed to -397045813
Generated 100474 of the 100576 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 1
Generated 66298 of the 100576 1-4 parameter combinations
Dear Bartłomiej Kizielewicz, Andrii Shekhovtsov, Jakub Więckowski, Jarosław Wątróbski, and Wojciech Sałabun,
I read your paper:
The Compromise-COMET method for identifying an adaptive multi-criteria decision model
My comments
1-In the abstract you say “which identifies the adaptive decision mode based on many normalization techniques and finds a compromise solution”
And how do you find a unique compromise solution by comparing rankings from several methods?
Are you looking for a composite ranking? But even if you get one, what is it good for? A composite ranking is of no use, because you do not know which is the real ranking.
2 – Page 3 “Effective resource utilization is paramount to prevent irreversible environmental damage”
There is no doubt that resource utilization is paramount, and not only for preventing environmental damage; it is fundamental for any resource, be it money, people, water, fuel, etc. Unfortunately, perhaps 99% of the more than 200 MCDM methods assume that resources are infinite, and thus resources are not contemplated. The exceptions are PROMETHEE and Linear Programming (LP).
3- Page 3: “management, and mitigating negative impacts. This underscores the relevance of MCDA methods, which can facilitate selecting optimal decisions that align with sustainability goals.”
Only sustainability goals? In real projects, all criteria are goals, and consequently each of them must have a target.
4 - “adaptive compromise method for decision modeling”.
And what is it? You do not explain, even in outline, what it means.
5- “Existing methods so far are susceptible to the rank-reversal paradox”
As per my research on RR, it is not a paradox but a natural and random geometrical occurrence. In fact, since you are always working with the same number of alternatives or dimensions, talking about RR appears irrelevant here, because you are not adding or deleting alternatives. Normalization may only change the order or position of alternatives, and this is not related to RR, since the dimensions are preserved.
6- “While current approaches offer discrete ratings and compromise rankings for a fixed set of alternatives, they falter when evaluating new alternatives”
Naturally, because by adding or deleting alternatives you are mapping data from a space of, say, 2 dimensions or alternatives into another of 3 dimensions.
This means that in 2D all feasible solutions of the problem are contained in a planar polygon. When you pass to 3D, the polygon becomes a polyhedron. Therefore, if in the polygon you find, for instance, that A2 > A1, this ranking may or may not be preserved in 3D.
It is easy to see this in the geometrical constructions, and thus the act of adding an alternative delivers more information that could alter the original ranking, in the same way that a cube provides more information than an expanded rectangle.
Of course, you may not accept my theory, and in that case it would be interesting to know yours, that is, why adding a new alternative may produce RR.
7- “Each previous evaluation set or alternative requires recalibration”
What is a recalibration? Do you mean running the software again?
8- “The paper presents the C-COMET method, offering a unique approach to establish adaptive decision models, impervious to the Rank Reversal Paradox”
Were you able to prove this assertion?
9- “method is the Analytic Hierarchy Process (AHP) approach, which is based on mathematical modeling of the relative importance of criteria and alternatives”
I am puzzled: how can you consider it sound mathematical modelling to use AHP, when the resulting initial matrix is FORCED to be transitive, irrespective of what the DM estimates?
10- “Therefore mentioned authors proposed a new MCDM approach free of the Rank Reversal Paradox for a safer and more reliable decision”
Interesting, and how can this be done? I do not know what method these authors proposed; if it really worked, by now it would be widely known. In my opinion, this is impossible, because it violates the geometrical principles of working with multidimensional spaces. By the way, I can prove mathematically and with examples what I say regarding RR.
11- “Sequential Interactive Model for Urban Systems (SIMUS)”
I am afraid that this is not exact. SIMUS is subject to RR like any other method; if it were not, my RR theory would be invalid. However, due to its algebraic structure, it does not compare alternatives but selects them using the economic concept of opportunity cost, and it ranks criteria in each iteration through a ratio analysis. This could be the reason for its resistance to RR, as I have demonstrated with examples and in 66 combinations of adding and deleting alternatives, as shown in my book published in 2019 and in its second edition in 2024.
Recently, in actual work, I considered a case starting at 2D and adding dimensions up to 10D. The results clearly show that sometimes the invariance of the ranking is preserved across several dimensions, while at other times it changes with only one additional dimension. Why the randomness? Because it depends on the values of the new input vector and its interactions with the existing vectors. For this reason, nobody can say that a new alternative is better or worse than the existing ones.
In fact, in my example, as new alternatives or dimensions are added, the rankings tend to decrease in length, and at the very end, in 10D, there is only one alternative. The reason could be that as we increase the alternatives, each new one incorporates the values of the preceding ones, just as a cube also contains the information of the preceding square. In addition, it appears that the more dimensions there are, the larger the amount of information, which grows linearly. However, with each added alternative the feasible solution space becomes very complex, and it could be that in 10D it is not possible to determine the coordinates due to the complexity of the polytopes.
As a bottom line, I am not saying that my theory on RR explains everything, but I believe it helps in understanding the RR issue.
12- On page 6 you say “MEREC or Entropy”, implying that both address the same issue. I disagree.
MEREC works by removing one criterion at a time, then restoring it and removing the next. The procedure is attractive, but in reality, with a set of, say, 9 criteria, the method is applied to nine different problems, because each of them considers only 8 criteria instead of the original nine. Thus, across the runs, the software works on 9 different scenarios.
These are some of my comments. I hope they can help.
I am willing to share my findings with anybody.
Nolberto Munier
In a linear mixed model, I used an alpha lattice design. If the environment is a combination of season and location, can I consider it a random factor in my analysis? Am I correct?
After copolymerization, how can I recognize the type of copolymer synthesized (alternating or random)?
Somebody says that if the population is known, we should use random sampling; somebody else says that if the population is unknown, we should use non-random sampling. Is this true?
I used CMA software for standardized mean differences. One moderator shows a multicollinearity issue. Kindly suggest a way to address it.
This is what is going on on my profile. Please help.
Peter Krasztev
I'm analysing a dataset from a field survey designed to test how two types of marine protected areas affect the species composition of seagrasses, and I now struggle with how to properly deal with the nested nature of our data.
Our design is a mixed model nested ANOVA (following the terminology in Quinn and Keough 2002), with three factors:
1) Management - fixed factor with three levels (open, closure and park)
2) Site: random factor with a total of 12 levels, nested within 'Management'. For each level of management there are 4 unique 'site' levels.
3) Transect: fixed factor with 3 levels (shallow, mid, reef), which is crossed with 'Management'.
Along each 'Transect' there is seagrass species-level shoot count data from ca. 10 stations (replicates). Sampling was done once at each station, so there are no repeated measures.
We're trying to test the effects of 'Management', 'Transect' and their interaction on seagrass species composition using PERMANOVA as implemented in the adonis() routine in the vegan package for R. The standard code for a design with a blocked (crossed) random factor would be:
adonis(species ~ management * transect, strata = env$site, data = d)
However, in our case the random factor is nested under the main factor, not crossed with it. As I understand it, it is possible to constrain the permutations using the 'permutations = how' argument and then specify a custom permutation design. See, for example, here:
But I've never worked with customized permutation designs before and struggle to find tutorials, so I would really appreciate any form of feedback, including on the tentative sketch at the end of this post.
Can anyone provide some advice?
I've also looked into the nested.npmanova() function in the BiodiversityR package. This can properly handle a design like ours with 2 factors (one main, one nested) - but we have 3 factors...
We're also open to instead using the mvabund() routine, i.e. a GLM- rather than distance-based framework, if it can help us properly deal with the nested nature of our random 'site' factor. But so far I've only found examples where it can be used to handle crossed random factors.
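Here is the tentative, untested sketch mentioned above (assuming a community matrix 'species' and a data frame 'env' with columns management, transect, and site): the permute package can restrict permutations so that whole sites are shuffled only within management levels, which respects the nesting of the random 'site' factor.

library(vegan)
library(permute)

ctrl <- how(blocks = env$management,                          # never exchange across management levels
            plots  = Plots(strata = env$site, type = "free"), # shuffle whole sites
            within = Within(type = "none"),                   # keep stations fixed within each site
            nperm  = 999)

adonis2(species ~ management * transect, data = env, permutations = ctrl)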
A forest fire occurred in Turkey; the total burned area was about 1700 ha. I am interested in sampling the site and also sampling control sites outside the burned area. My question is: would I represent the burned area if I sampled 4 plots of about 6 ha each? Within each plot, I would take 4 composite samples (each sample being a mix of several soil cores). The plots would be the experimental units, but they are very large, and I wonder if this is too large. I would be comparing nutrient concentrations in the burned plots versus similar control plots (outside the burn). Fixed effect: burn treatment; random effect: plot.
The sum of two independent gamma random variables with a common scale parameter is also a gamma random variable. But what happens when the two independent gamma random variables have different parameters? E.g., X ~ Gamma(a, b) and Y ~ Gamma(c, d); then what is the distribution function of X + Y?
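A brief framing note (using the shape-scale convention of the question): when the second parameters match, the sum is again gamma, X + Y ~ Gamma(a + c, b) if b = d. When they differ, the density is only available as the convolution

f_{X+Y}(z) = ∫_0^z f_X(t) f_Y(z − t) dt,

which has no elementary closed form but can be expanded as an infinite series of gamma densities (Moschopoulos, 1985).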
The protocol for the new NEBNext UltraExpress® RNA Library Prep Kit (NEB #E3330S/L) closely follows the previous version, the NEBNext Ultra II RNA Library Prep Kit (#E7770S/L), but the random primer step is missing. How does first-strand synthesis work without it? Is the random primer now included in one of the mixes?
When I do an ANOVA test, the random variables (replications) are significantly different. How should I process the original data?
Good day! The question is really complex, since CRISPRs do not have any exact sequence. So the question is: what is the probability of generating, in a random sequence, 2 repeat units, each 23-55 bp long, each containing a short palindromic sequence, with a maximum mismatch of 20% between them, interspersed with a spacer sequence that is 0.6-2.5 times the repeat size and that does not match the left or right flank of the whole sequence?
I'm very embarrassed to admit this, but I don't understand how random hexamer primers (RHPs) work in reverse transcription. I have done RT with gene-specific or oligo-dT primers hundreds of times, and the whole idea is absolutely clear. But when we come to RHPs...
Okay, let's say we have a set of random hexamers; the most downstream one (green in the upper picture) anneals to our RNA template and serves as a primer for reverse transcriptase. But what about the others, annealing somewhere upstream (purple in the picture)? When the RT enzyme reaches them, why don't they (especially GC-rich ones) interfere with the enzyme's movement? At least in the case of PCR, such oligos annealing inside the amplified region effectively block amplification.
On the other hand, if all of these random hexamers are capable of priming reverse transcription, in the end we will have a whole bunch of short cDNA fragments, barely usable for subsequent PCR amplification.
I'm afraid I'm missing something very important (and simple). I would greatly appreciate any clarification!
Stan
Hi there, I've come across several articles discussing random and non-random audits in relation to tax evasion or compliance. Most of the articles concern the effects of audits (random or non-random) conducted by the tax department in Norway.
1) What is a random audit?
2) What is the method of a random audit?
3) Are taxpayers notified that they have been audited via random selection? The articles found that most random audits lead to tax evasion by taxpayers. I expect that taxpayers know they have been selected randomly and won't be selected again in the near future, so they tend to underreport income and overstate reliefs and deductions, anticipating that an audit won't come again.
I hope someone here can make this clear for me. Thanks in advance.
I want to simulate a network with approximately 50 gNBs and 500 UEs with different deployment options such as random, uniform, and hexagonal for the gNBs, and uniform, random deployment for the UEs and study the impact of interference, mobility, etc. Are there any options available in NetSim to quickly deploy such networks and study their performance? Thank you.
Thank you in advance for your support.
We conducted a stratified random survey in 13 strata and obtained a sample size for each stratum.
At the time of the survey we had a variable non-response rate in each stratum (2-33%). I hope you can give me some guidance on how to calculate the weights to include the non-response rate in the weights, in order to make estimates on the total sample (N=16,000).
Regards!!
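A standard starting point (assuming non-response is adjusted within each stratum, with no auxiliary information): multiply the design weight by the inverse of the stratum response rate. With N_h the stratum population size, n_h the selected sample size, and r_h the number of respondents in stratum h,

w_h = (N_h / n_h) × (n_h / r_h) = N_h / r_h,

so each respondent in stratum h represents N_h / r_h population units, and totals are estimated by summing w_h × y over respondents. This assumes respondents and non-respondents are similar within a stratum; if not, response-propensity or calibration adjustments would be needed.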
Let me try to explain better. I have the scores for 10 items. Each item score is decided by agreement between two persons (say, e.g., a therapist and a patient). Therefore, there is much more variability, depending on the subjectivity of the evaluators. How can I account for this variability and subjectivity in the evaluation scores? Would a mixed-effects regression model with a random effect be a good way to take this variability into account? But, in order to carry out a Confirmatory Factor Analysis for the validation of the instrument, can I combine these two techniques? And how?
Thank you
In a CNN (convolutional neural network), can the feature maps be obtained deterministically from a randomly initialized convolution kernel? If not, how do we decide the weights in the convolution kernel so as to obtain the feature maps we need? By trial and error, are we shooting with our eyes closed?
Variable x is controllable, and variable y is a random variable. Also, y = b*x + u, where u is a random term, so x and y are correlated. But if we calculate the covariance between x and y according to the definition Cov(x, y) = E[(x − Ex)(y − Ey)], the value should be zero, since Ex = x.
Is this conclusion correct? Thanks.
I'm having issues with the MATLAB LiveLink for COMSOL. I want to model a composite RVE of a random fiber-reinforced composite in COMSOL using a random sequential adsorption algorithm coded in MATLAB. How do I go about it?
In some PhD dissertations, I see a randomized controlled mixed-methods study conducted with only a feasibility phase of 3-5 participants as the pilot trial, after which the main study is started. Is this method OK? I think a pilot trial should be conducted first that covers feasibility and represents the main study, including the randomized controlled trial design, for example with 30 participants in the intervention group and 30 in the control group. How could a newly developed health education program best be designed, and which guidelines should be followed?
Thank you so much for your response in advance!
I have recently started working with Arabidopsis, and every time I pour the agar plates, I start seeing contamination after the 3rd or 4th day.
Usually, there is no contamination after I pour the agar and let it sit for one day.
The contamination occurs on some of the seedlings, as well as some random parts of the plate.
I try not to pass my hands over the plates, I UV the hood and the plates for 30 min, and I always clean the hood with 70% ethanol before starting.
I am open to any suggestions on how to improve my technique.
Thank you.
Hello, is it possible to predict future outputs, or even recover the parameters, of a random generator with the function f(x) = ax**2 + bx + c (mod m) when the first 10 outputs are known? Thanks.
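A sketch of the standard attack (assuming consecutive outputs x_1, ..., x_10 with x_{i+1} = f(x_i) mod m): if m is known, each consecutive pair gives one linear congruence in the unknowns (a, b, c),

a·x_i² + b·x_i + c ≡ x_{i+1} (mod m),   i = 1, 2, 3,

so three pairs generally suffice, solved by Gaussian elimination modulo m (with care when a pivot is not invertible mod m). If m is unknown, it can often be recovered first: suitable integer combinations of output differences are multiples of m, so the gcd of several such combinations typically reveals m. Once (a, b, c, m) are known, all future outputs follow deterministically.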
For i.i.d. random variables m_1, ..., m_L sampled from the binomial distribution with parameters n_1 (number of Bernoulli trials) and P (probability of success), what is the distribution of the product \prod_i (m_i/n_1)? We can assume that n_1 is large.
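One standard approximation (a sketch, not an exact law): take logs, so ln ∏_i (m_i/n_1) = Σ_i ln(m_i/n_1). For large n_1, m_i/n_1 is approximately N(P, P(1−P)/n_1), and by the delta method ln(m_i/n_1) is approximately N(ln P, (1−P)/(n_1·P)). The sum of L such i.i.d. terms is then approximately normal, so the product is approximately lognormal,

∏_i (m_i/n_1) ≈ Lognormal(L·ln P, L·(1−P)/(n_1·P)),

up to higher-order bias terms of order 1/n_1 (and ignoring the event m_i = 0, whose probability vanishes for large n_1 with P fixed).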
Hi all!
I've been collecting data on a group of 8 chimpanzees at Chester Zoo for my dissertation. The group consists of 4 males and 4 females, all of which have different hierarchical statuses and ages.
I have been doing random focal observations with a check sheet consisting of 4 state behaviours (timed) and 6 event behaviours (counted as frequencies). I would start a random focal observation when a stressful context arose (such as high visitor numbers, anticipation of feeding, or feeding time) and note the durations or frequencies of behaviours exhibited by that individual for 15 minutes. Then, at the following visit, I would observe the same individual at the same time but under a non-stressful context (thereby utilising the Matched Control Method).
This process repeated for 4 months and I now have a complete data set.
I am really struggling with 1) how to use SPSS, and 2) which tests would be ideal to use. As you can imagine, there is quite a lot of data holding different values, so you can hopefully see my confusion around this.
Ideally, the statistical analysis of my data will reveal which contexts in the zoo precipitate the greatest increase in stress (e.g. high visitor numbers, anticipation of feeding, feeding). I also want to be able to compare these data to the hierarchical statuses and ages of the individuals.
Any help would be so appreciated. Thanks in advance!
In a random effects regression we have the assumption that the individual-specific heterogeneity is not correlated with the predictor variables:
Y_it = β1·X_it,1 + β2·X_it,2 + … + βk·X_it,k + α_i + u_it
i = entity (individual)
t = measurement at time t
α_i ~ N(0, σ_α²), (i = 1…n), is the unknown intercept for each entity (n entity-specific intercepts)
Y_it is the dependent variable, where i = entity and t = time
X_it is an independent variable
u_it is the idiosyncratic error
Assumption: cov(α_i, X_it) = 0
Do we also make this assumption when using linear mixed effects models?
I am having trouble differentiating between a random effects model and a linear mixed effects model. I am currently using this model https://bashtage.github.io/linearmodels/panel/panel/linearmodels.panel.model.RandomEffects.html#
for my research. Can somebody tell me if this is a random effects model or a linear mixed effects model and what are the differences between the two models?
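Not an answer about the linearmodels package itself, but here is a minimal R sketch (toy data) that may clarify the relationship: a classic panel random-effects estimator and a linear mixed model with only a random intercept fit essentially the same specification, and mixed models generalize it (random slopes, nesting, crossed random effects).

library(plm)    # econometric panel estimators
library(lme4)   # linear mixed-effects models

set.seed(42)
d <- data.frame(id = rep(1:30, each = 5), t = rep(1:5, times = 30), x = rnorm(150))
d$y <- 1 + 0.5 * d$x + rnorm(30)[d$id] + rnorm(150)   # individual effect + noise

re  <- plm(y ~ x, data = d, index = c("id", "t"), model = "random")
lmm <- lmer(y ~ x + (1 | id), data = d)

summary(re)   # classic random-effects panel estimator (GLS)
summary(lmm)  # REML mixed model; the coefficient on x should be very close

# Both rest on the assumption cov(alpha_i, x_it) = 0; lmer additionally
# allows random slopes, e.g. (1 + x | id), which the basic RE estimator lacks.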
Hello everyone! As you understand, high-precision positioning using global navigation satellite systems is simply high-precision determination of a random variable. At what point does the precision of your estimates fall into the "high-precision" category? Is this always a convention associated with the method of determining the random variable, or is there a general formulation for classifying estimates as high-precision?
Similar to what we typically do with CCD, but in a randomized manner!
I've been working on a research project using a multi-level regression model, and I'm currently considering the presentation of Hausman test results. I've noticed in some papers that the authors conducted the test but did not include numerical results, only mentioning that the random effects model was deemed more suitable.
I'm curious about reporting Hausman test results. Should these results be reported separately for each model? I understand that if the test is insignificant (p > 0.05), it suggests that the random effects model is more suitable, but I wonder if there's a convention for reporting the test value and additional statistical evidence to support this conclusion, such as the chi-square statistic.
The problem: writing a SAS command to analyze a factorial design in a completely randomized format over two years.
It is a factorial design in a completely randomized format, with 3 factors, implemented over two years.
Now, to analyze it in SAS, I am not sure how to proceed: should I use a split-plot approach, treating year as the whole-plot factor, with the other factors fully crossed?
Does Randomness exist or is an illusion? Did God have any choice in whether to create/allow Randomness or not? Is there any connection between Free will and Randomness?
I am trying to train a CNN model in MATLAB to predict the mean value of a random vector (the MATLAB code, named Test_2, is attached). To clarify further: I generate a random vector with 10 components (using the rand function) 500 times. The figure of each vector versus 1:10 is plotted and saved separately, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model which is supposed to predict the mean value of the mentioned random vectors (Y). However, the RMSE of the model is too high; in other words, the model does not train despite my changing its options and parameters. I would be grateful if anyone could kindly advise.
I've been reading about Claude Shannon and Information Theory. I see he is credited with developing the concept of entropy in information theory, which is a measure of the amount of uncertainty or randomness in a system. Do you ever wonder how his concepts might apply to the predicted red giant phase of the Sun in about 5 billion years? Here are a few thoughts that don't include much uncertainty or randomness -
In about 5 billion years the Sun is supposed to expand into a red giant and engulf Mercury and Venus and possibly Earth (the expansion would probably make Earth uninhabitable in less than 1 billion years). It's entirely possible that there may not even be a red giant phase for the Sun. This relies on entropy being looked at from another angle - with the apparent randomness in quantum and cosmic processes obeying Chaos theory, in which there's a hidden order behind apparent randomness. Expansion to a Red Giant could then be described with the Information Theory vital to the Internet, mathematics, deep space, etc. In information theory, entropy is defined as a logarithmic measure of the rate of transfer of information. This definition introduces a hidden exactness, removing superficial probability. It suggests it's possible for information to be transmitted to objects, processes, or systems and restore them to a previous state - like refreshing (reloading) a computer screen. Potentially, the Sun could be prevented from becoming a red giant and returned to a previous state in a billion years (or far less) - and repeatedly every billion years - so Earth could remain habitable permanently. Time slows near the speed of light and near intense gravitation. Thus, even if it's never refreshed/reloaded by future Information Technology, our solar system's star will exist far longer than currently predicted.
All this might sound a bit unreal if you're accustomed to thinking in a purely linear fashion where the future doesn't exist. I'll meet you here again in 5 billion years and we can discuss how wrong I was - or, seemingly impossibly, how correct I was.
The protocol designed for RNA sequencing can't be applied because the RIN was low, and the cDNA for the same sample was obtained with random hexamer primers. What can I do to fix the situation, given that I can't repeat the RNA extraction?
Specifically, how does subject-level random intercept and random slope influence the goodness-of-fit (R-squared) of the model?
And, if subject A contributes 10 data-points and subject B contributes 5 to the whole dataset, wouldn't A account for more of the total residual error than B? How do multilevel models control for this?
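A minimal sketch in R (simulated data) of one concrete way to look at both questions: the Nakagawa-Schielzeth decomposition reports a marginal R² (fixed effects only) and a conditional R² (fixed plus random effects), so you can see how much the subject-level intercepts and slopes add; and because subject effects are estimated with partial pooling, a subject contributing more data points is weighted by its information rather than simply dominating the residual error.

library(lme4)
library(MuMIn)   # provides r.squaredGLMM()

set.seed(1)
n_subj <- 20; n_obs <- 8
d <- data.frame(subject = factor(rep(1:n_subj, each = n_obs)),
                x = rnorm(n_subj * n_obs))
b0 <- rnorm(n_subj, 0, 1.0)      # subject-level random intercepts
b1 <- rnorm(n_subj, 0.5, 0.3)    # subject-level random slopes
d$y <- b0[d$subject] + b1[d$subject] * d$x + rnorm(nrow(d))

m <- lmer(y ~ x + (1 + x | subject), data = d)
r.squaredGLMM(m)   # R2m: fixed effects only; R2c: fixed + random effects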
I am doing a study on tacit knowledge, and I am using Polinode for my social network analysis project.
I am really stuck and need some help working out what kind of networks these are. Does anyone know which metrics to use to check whether the map is a random network, a scale-free network, or a small-world network? As the calculations are already done by the Polinode program, I do not need the equations (although if anyone can explain the equations, that would be great).
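For reference, here is a minimal R sketch (igraph, toy graph; Polinode itself is not involved) of the two standard diagnostics: fit a power law to the degree distribution (the scale-free check), and compare the clustering coefficient and mean path length against a random graph of the same size and density (the small-world check).

library(igraph)

set.seed(1)
g <- sample_pa(200, m = 2, directed = FALSE)   # toy preferential-attachment graph

fit <- fit_power_law(degree(g))          # scale-free check: power-law fit to degrees
C <- transitivity(g, type = "global")    # clustering coefficient
L <- mean_distance(g)                    # characteristic (mean) path length

# reference: random (Erdos-Renyi) graph with the same nodes and edges
r <- sample_gnm(vcount(g), ecount(g))
c(C = C, C_rand = transitivity(r, type = "global"),
  L = L, L_rand = mean_distance(r))
# small-world signature: C much larger than C_rand while L is close to L_rand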
When planning a randomized, double-blind trial with multiple arms, there are often challenges for blinding the study treatment/groups due to different formulations/packaging/dosage.
E.g. Study groups
1. treatment A- (spray bottle)
2. treatment B- (drops)
3. treatment C- (drops)
4. Active comparator- (drops)
5. Placebo- (both options possible- spray bottle/drops)
Instead of a double-dummy approach, can we follow group blinding with two or more blinding practices in the same trial?
One blinding group is for treatment A and placebo, as spray bottles.
The second blinding group is for treatments B and C and the active comparator, as drops.
The objective of blinding is to keep the subject and the investigator/study team unaware of the treatment assigned, and this can still be met with the above (with certain limitations, of course...).
If you know of any reference trial with such an approach, kindly share.
Thanking you in anticipation.
Who agrees that randomness indicates the eternal consciousness of each individual being? How? Why? I agree that randomness indicates the eternal consciousness of each individual being, because the individualized spirit (the most fundamental essence of individuation) makes all beings unique and makes the past impossible to use to determine the future.
Sources:
Diagonal reference models (DRMs) are especially suited for estimating the effects of movements across levels of categorical variables like education or social class. In social stratification, they enable us to estimate the weights of origin and destination. Their use is straightforward with DRM in Stata and the Dref function of gnm in R. However, I am working with a dataset of 30 countries, and I would like to model those weights as random effects. I haven't found a multilevel extension of DRM or a workaround. Any ideas?
Hello, everyone.
I am currently dealing with a non-convergence problem in a meso-scale numerical simulation of a three-point bending test of concrete using a random aggregate model in ABAQUS. The material model is the concrete damaged plasticity model embedded in ABAQUS, and the load-CMOD curves obtained are incorrect, with a peak load of only about 60 N. However, I got correct results using the same material properties in a compression simulation. In the three-point bending model, the contact between the supports, the loading device, and the specimen is face-to-face contact.
Please advise me on what I should do next to modify the model.


Let the random variable X follow a three-parameter log-normal distribution with parameters µ, σ², and τ. What is the expected value of ln(X)?
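A note under the usual parameterization (assuming τ is the threshold/location parameter, so that ln(X − τ) ~ N(µ, σ²)): the clean closed form is for the shifted variable,

E[ln(X − τ)] = µ,

whereas E[ln X] = E[ln(τ + e^Z)] with Z ~ N(µ, σ²), which has no elementary closed form for τ ≠ 0 and is usually evaluated numerically; for τ = 0 it reduces to E[ln X] = µ.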
Hello,
I am currently working on the data analysis for my Ph.D. project, comparing the probability of occurrence of species density and richness (on a per-hectare basis) between three different land-use types using count data. Due to the design of the field study, I decided to use a GLMM with a Poisson distribution, as I have various random effects, including site, that need to be accounted for. The model seems to be doing the job; however, I am not really sure how to report the results. I am using the lme4 package in R to analyze my data.
Thank you
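A minimal sketch (simulated data, hypothetical variable names) of the quantities conventionally reported for a Poisson GLMM fitted with lme4: fixed-effect estimates on the log scale, exponentiated rate ratios with confidence intervals, and the random-effect (site) variance.

library(lme4)

set.seed(1)
d <- data.frame(landuse = factor(rep(c("A", "B", "C"), each = 40)),
                site = factor(rep(1:30, each = 4)))   # sites nested in land use
site_eff <- rnorm(30, 0, 0.3)
d$count <- rpois(120, exp(0.5 + 0.4 * (d$landuse == "B") + site_eff[d$site]))

m <- glmer(count ~ landuse + (1 | site), family = poisson, data = d)
summary(m)                        # log-scale estimates, SEs, z and p values
exp(fixef(m))                     # rate ratios relative to land-use A
exp(confint(m, method = "Wald")) # 95% CIs on the rate-ratio scale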
Hello,
I need to estimate a generalized linear mixed model (GLMM).
When using R, I need to choose a specific fixed effect (one of the independent variables in the model) to have a random slope according to a random variable such as subject ID.
But when using SPSS, I can't choose a specific fixed effect to have a random slope.
I asked chatGPT about it and it responded as follows:
The difference you're observing in the specification of random slopes for specific fixed effects between SPSS and R might be related to how the two software packages handle mixed-effects modeling. This difference is not necessarily due to the capabilities of the software but rather the way they present the options to the user. Here's why this might be the case:
SPSS:
In SPSS, the interface for specifying mixed-effects models is designed to be user-friendly and may abstract away some of the technical details. It allows users to specify random slopes and intercepts for subjects using the Repeated option or Mixed Linear Models (MIXED) procedure.
SPSS may automatically set up random slopes for all fixed effects by default, which is a more automated approach, but it doesn't give you explicit control over which fixed effects have random slopes.
R (lme4 package):
R, specifically with the lme4 package, provides more flexibility and control in specifying mixed-effects models. This can be both an advantage and a challenge for users.
In R, you have to explicitly specify which fixed effect(s) should have random slopes by including them in the model formula. This explicit control allows for more customized modeling but can be more complex and requires users to have a good understanding of their data and the modeling process.
1. Can someone please confirm this answer?
2. Is there a way to estimate a mixed-effect logistic regression model in R that would be the same as the estimated model in SPSS?
I would appreciate any clues about this issue! Thanks!
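Regarding question 2, here is a minimal sketch (simulated data) of a mixed-effects logistic regression in R in which one chosen fixed effect (x1) gets a random slope by subject and the other (x2) does not. On the SPSS side, as far as I know, the analogous control is the explicit list of effects in the random-effects specification of MIXED/GENLINMIXED rather than an automatic default, but please verify that against the SPSS documentation.

library(lme4)

set.seed(1)
d <- data.frame(subj = factor(rep(1:40, each = 10)),
                x1 = rnorm(400), x2 = rnorm(400))
subj_int   <- rnorm(40, 0, 0.5)   # subject random intercepts
subj_slope <- rnorm(40, 0, 0.4)   # subject random slopes for x1 only
eta <- -0.2 + (0.8 + subj_slope[d$subj]) * d$x1 + 0.3 * d$x2 + subj_int[d$subj]
d$y <- rbinom(400, 1, plogis(eta))

# random intercept plus a random slope for x1 (and not x2), by subject
m <- glmer(y ~ x1 + x2 + (1 + x1 | subj), family = binomial, data = d)
summary(m)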
If we have some joint observations of two continuous random variables, is there any R code, or a method, with which I can calculate an empirical estimate of the conditional cumulative residual entropy (CRE)?
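A rough sketch (a crude plug-in estimator of my own, not from a package): using CRE(X) = −∫ S(x) ln S(x) dx with S the survival function, replace S by the empirical survival function; a simple route to the conditional version is to evaluate this within bins of the conditioning variable.

# empirical cumulative residual entropy of a sample
cre <- function(x) {
  x <- sort(x); n <- length(x)
  S <- 1 - (1:(n - 1)) / n          # empirical survival on (x_i, x_{i+1}]
  dx <- diff(x)                     # widths of those intervals
  -sum(S * log(S) * dx)
}

set.seed(1)
x <- rexp(1000); y <- x + rnorm(1000, 0, 0.3)   # toy joint sample

# conditional CRE of X given Y, approximated within quartile bins of y
bins <- cut(y, quantile(y, 0:4 / 4), include.lowest = TRUE)
tapply(x, bins, cre)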
I have field data from random samples of a specific forest, represented by vegetation surveys, where each survey records one, two or three dominant plant species with geographic coordinates. How can I create a vegetation map from these data in GEE?
In Trainable Weka Segmentation in ImageJ/FIJI, the default classifier is the fast random forest with 200 trees and 2 random features per node. Do I have to change the number of features if I am segmenting the image into more than 2 classes?
Thanks
I have a thesis project right now, and I don't know if it's possible to do. I'm trying to create a website builder, but instead of using draggable pre-made templates or libraries, I would build a UI component generator with different properties and designs.
As I did some research, though, I realized it's going to be messed up upon generation. I want the output linear in sequence, not just random components with random designs: an organized, linear pattern of generated UI components. I was also thinking of using seeds to find previously generated UI components, saving them in a history panel, and being able to search them.
I need some opinions and ideas because we're blasting our way to graduation.
Thank you! any help is appreciated!
We want to run a mixed effects model for our experimental design using the lme4 package in R, and we want to confirm that our model is specified correctly.
Our design involves two random factors (participants and stimuli) and two fixed factors: the first fixed factor is 'condition' with 3 levels, and the second fixed factor is 'group' (culture) with 2 levels. The condition factor is within-subjects and the group factor is between-subjects. Stimuli are crossed with conditions and counterbalanced across participants. The full data set is attached to this post.
We want to test the main effect of condition and the interaction of group and condition. The model we specify is provided below. We have based this on a paper by Westfall and colleagues (Judd, C. M., Westfall, J., & Kenny, D. A., 2016) and adapted the code from an app they developed.
Link to the app: https://jakewestfall.shinyapps.io/crossedpower/
We are adapting their code for the 'Counterbalanced' design, as it most closely fits our design. We also plan to contrast-code the IVs, as specified in the app. Is the code below, used to test the interaction effect, specified correctly? Also, should we specify a separate model to look at the main effect of condition?
library(lme4)
library(pbkrtest)   # provides KRmodcomp()

model <- lmer(y ~ condition * group + (1 + condition | subj_id) + (1 + condition | scenario), data = Study1)
modelrestricted <- update(model, . ~ . - condition:group)
KRmodcomp(model, modelrestricted)   # Kenward-Roger test of the interaction
Can anyone suggest a method to identify the type of copolymer obtained after synthesis?
I have a basic nonlinear model (see the attached image). Dependent variable: crown diameter (cw); independent variable: trunk diameter (dbh). I had 200 plots, and within each plot I measured the following variables for each tree: crown diameter (cw), trunk diameter (dbh), total height (h) and crown ratio (cr). Below is part of my data. My goal is to fit nonlinear mixed-effects crown diameter models. Total height (h) and crown ratio (cr) are my random-effect variables.
Now I have the following questions:
1- For the linear model, are the following functions written correctly (my basic question is about how to write the random effect function)?
[attached image]
2- How should the functions be written for the nonlinear model? (It gives an error with the nlmer function.)
[attached images]
3- We have 4 types of random effects including:
[attached image]
Which of the above random effects should be considered for my study?
Please open the attached file.
An option to create a random distribution is not available in COMSOL. I am trying to make a random distribution of fillers and assign properties to them. Any input regarding this would be much appreciated.
I am trying to build random-parameters negative binomial models with heterogeneity in mean and variance (RPNB-HMV) for road traffic crashes on a highway in India.
Random parameter Negative Binomial models are the most popular models for assessing unobserved heterogeneity in crash prediction models.
I want to know of free or open-source software, such as R, for fitting such models.
E.g., a sample syntax for the RPNB-HMV model in R.
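I am not aware of a free tool that replicates NLOGIT's RPNB-HMV exactly, but as a starting point, the Rchoice package (Sarrias, Journal of Statistical Software, 2016) estimates random-parameter count models by simulated maximum likelihood, with heterogeneity in the means of the random parameters. The sketch below uses simulated data and hypothetical variable names, and a Poisson rather than a negative binomial kernel, so treat it as an approximation and check the package documentation for its current capabilities (the argument names ranp and mvar come from the package paper).

library(Rchoice)

# toy crash data (hypothetical): segments with traffic, geometry, length
set.seed(1)
crashes_df <- data.frame(aadt = exp(rnorm(200, 9, 0.5)),
                         curve = rbinom(200, 1, 0.4),
                         seg_len = runif(200, 0.5, 2))
crashes_df$crashes <- rpois(200, exp(-8 + 0.9 * log(crashes_df$aadt) +
                                       0.3 * crashes_df$curve))

# random-parameter Poisson: the 'curve' coefficient is random-normal, and
# its mean varies with segment length (heterogeneity in the mean)
m <- Rchoice(crashes ~ log(aadt) + curve | seg_len,
             data = crashes_df,
             family = poisson,
             ranp = c(curve = "n"),
             mvar = list(curve = c("seg_len")),
             R = 200)   # simulation draws
summary(m)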
I am supposed to create a fake data set with 4 predictors that yields two strong significant relationships, 1 weak significant relationship, 1 non-significant relationship, and a significant interaction.
I have some materials, but I am at a total loss as to how to do this.
I have created an Excel workbook with the four variables, generated random numbers using the RANDBETWEEN function, and imported the data into JASP, but no matter how many times I run it, I can't get the results I need.
Does anyone have any suggestions?
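One suggestion, sketched in R (any simulation tool would do; the names and coefficients below are arbitrary choices): build the outcome from the predictors with coefficients you pick, so the strength of each relationship is under your control rather than left to chance, then tune the coefficients, the noise, and n until the p-values land where the assignment requires.

set.seed(123)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n); x4 <- rnorm(n)

# two strong effects (x1, x2), one weak (x3), one null (x4), one interaction
y <- 0.8 * x1 + 0.7 * x2 + 0.2 * x3 + 0 * x4 + 0.5 * x1 * x2 + rnorm(n)

summary(lm(y ~ x1 + x2 + x3 + x4 + x1:x2))

# export for JASP:
# write.csv(data.frame(x1, x2, x3, x4, y), "fake_data.csv", row.names = FALSE)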
Is it possible to create a random 2-dimensional shape using mathematical equations, or in software like 3ds Max or AutoCAD? Like this one:
Reservoir computers and extreme learning machines are typical examples of random neural networks (RaNN). In both these architectures, only the output layer is trained. So, are there neural architectures that only train the input layer?
Additionally, is there a reference available that discusses the use of non-probability samples for CFA analyses?
I read a previous discussion in this forum about how having a large sample size is the priority over probability sampling, and I would be interested to see what others have written on this topic.
Population and Sampling
In my research, participants will be recruited from the general population through advertisements on social media in Pakistan. Potential participants can apply online or via phone. They will then be invited to complete the GAD-7 online as a screening questionnaire, along with some other instruments for reporting possible depressive and anxiety features, and to provide their contact details.
Randomisation of participants
A list of random numbers will be generated by a research assistant who is blinded to the study conditions, using a computerized true random number generator (www.random.org). The researcher who will administer the diagnostic interview will have no access to the list. Upon completing the intake phone interview, recruited participants will be randomly assigned to the experimental or control group according to the generated list.
Dear all
I have a set of balanced panel data, i = 6, t = 21, i.e. 126 observations overall. I have 1 dependent variable (y) and 6 independent variables (x1, x2, ...).
First, I ran unit root tests, which show:
y: I(1)
x1: I(0)
x2: I(1)
x3: I(1)
x4: I(0)
x5: I(1)
x6: I(0)
If I would like to run panel data regressions (pooled, fixed effects and random effects), is this the correct form for entering the model in EViews:
d(y) c x1 d(x2) d(x3) x4 d(x5) x6
or
Should I put all variables at the same level of differencing, adding "d" to all of them?
Please correct me if I am wrong; these are the steps in which I would like to conduct the statistical part of a panel data analysis:
1. Test Unit Root
2. Panel Regression?
3. ARDL

Hi everyone,
I am trying to derive the error performance of a wireless communications system, and I run into a sum of independent and identically distributed (i.i.d.) random variables (RVs) as follows:
Z = X_1 + X_2 + ... + X_N,
where N denotes the number of RVs being summed. The distribution of each of X_1, ..., X_N may be Rayleigh, Rice, etc.
Now, I know that the Central Limit Theorem (CLT) can be applied assuming high N, such that the mean of Z becomes:
E[Z