Questions related to Tables
I want to use the PAM-13 in a study, but I'm unsure how to calculate the score. One article states that I should:
To calculate the total PAM score, the raw score is divided by the number of items answered (excepting non-applicable items) and multiplied by 13. Then, this score is transformed to a scale with a theoretical range 0–100, based on calibration tables, with higher PAM scores indicating higher patient activation
The raw scores can be converted into four activation levels: 1 (≤47.0) not believing activation is important, 2 (47.1–55.1) a lack of knowledge and confidence to take action, 3 (55.2–67.0) beginning to take action, and 4 (≥67.1) taking action.
The problem is that if I take the maximum raw score (52) and, as instructed, divide it by the number of items answered (13) and then multiply that number by 13, I reach 52 again and would be categorized in activation level 2. So I guess there must be a step I'm missing.
Does anyone know how to get the calibration table? Maybe that is what I'm missing.
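For context: the 0–100 conversion is not a linear rescaling; it comes from a Rasch-calibrated lookup table licensed by Insignia Health, which is why the divide-and-multiply arithmetic alone can never reproduce it. Purely to illustrate the arithmetic, a naive linear rescale (which is NOT the official scoring) would look like this:

```python
def pam13_linear_rescale(item_scores):
    """Illustrative linear 0-100 rescale of PAM-13 raw scores.

    NOTE: this is NOT the official PAM scoring. The real conversion uses
    Insignia Health's proprietary Rasch calibration table; this sketch only
    shows the arithmetic described in the question (mean item score * 13,
    then the 13-52 raw range rescaled to 0-100).
    """
    answered = [s for s in item_scores if s is not None]  # drop N/A items
    adjusted_raw = sum(answered) / len(answered) * 13     # per-item mean * 13
    return (adjusted_raw - 13) / (52 - 13) * 100          # 13-52 -> 0-100

# all items answered with the maximum of 4: raw 52 maps to 100 (level 4)
print(pam13_linear_rescale([4] * 13))  # -> 100.0
```

This at least shows why a raw 52 should end up at the top of the scale; the exact transformed values in published papers require the licensed calibration table.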
I would like to ask: when is it necessary to produce frequency tables in the thematic analysis process in qualitative research?
I want to use the Kruskal-Wallis test on SPSS 21, but when the p value is below 0.001, it returns 0. I have been looking for a way to get accurate p values, and the only solution I found online involves editing the properties of the output table. Unfortunately, this works only for the output of certain tests (e.g., t-test). The output table of nonparametric tests is not editable (at least on SPSS 21). Is there another way to fix this?
Edit: the problem arises when running the test using the Independent Samples, non-legacy dialog. With the K Independent Samples Legacy Dialog, it is possible to change the format of p values to display more precise results, as suggested in the responses below.
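If the legacy-dialog workaround is unavailable, another option is to recompute the test outside SPSS, which reports p to full precision. A sketch in Python with SciPy, using made-up group data:

```python
from scipy import stats

# hypothetical data for three independent groups
g1 = [12.1, 14.3, 11.8, 15.2, 13.0]
g2 = [22.5, 25.1, 24.0, 26.3, 23.8]
g3 = [31.0, 29.4, 33.2, 30.8, 32.1]

h, p = stats.kruskal(g1, g2, g3)
print(f"H = {h:.3f}, p = {p:.2e}")  # p printed with full precision, never "0.000"
```

The same groups pasted from the SPSS data view would give the exact p-value that the output table rounds away.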
The design is basically to determine the optimum values of dosage, pH and settling time for percentage removal of some basic water quality parameters.
The final outcomes from the 15 runs have been generated. Using STATISTICA 12 software, the regression and ANOVA tables have been generated for all the response parameters.
It is observed that in some cases both the linear and quadratic terms are significant, while in others only the interaction terms are.
My question: when all important factors suggest a valid model, how do you proceed to generate the model equation from this table? Can it be obtained automatically from the software, or must it be done manually?
- I have used GC-MS to explore the oil components of plants and uncover their antibacterial activity.
- I have shown the results in a table including retention time, compound name, molecular formula, and relative content.
- My question is: what retention index (IK, i.e. the Kovats index) of the identified compounds should I add to the table, and how can it be calculated?
I am trying to export all images of the image collection of VIIRS. I am following the answer from this post (https://gis.stackexchange.com/questions/248216/exporting-each-image-from-collection-in-google-earth-engine). Instead of setting a bounding box as a clipping layer I have a .shp of my study area. When I run the code I am receiving this error:
In users/fitoprincipe/geetools:batch Line 133: collection.size is not a function
My code does not have collection.size, as you can see below:
var batch = require('users/fitoprincipe/geetools:batch')
// Load the VIIRS DNB monthly imagery
var collection = ee.ImageCollection('NOAA/VIIRS/DNB/MONTHLY_V1/VCMCFG')
var count = collection.size()
// export collection to google drive
crs: 'EPSG:7767',
How can I batch download every monthly image of the VIIRS collection?
Is it necessary in thematic analysis (Braun and Clarke, 2006) in qualitative research to produce frequency tables of the codes?
Through a literature investigation, I find that the MCB (minimum covering ball) algorithm proposed in the paper "A faster dual algorithm for the Euclidean minimum covering ball problem" (Marta Cavaleiro, 2017) is currently the best.
However, I don't know the expected complexity of the algorithm, so I did some experiments to explore it and arrived at a conjecture: O(log(log(m)) * n^2.5), where m is the number of points and n is the dimension. I want to share my conjecture with you and hope to get your advice!
The MCB algorithm is implemented through iteration. So, the expected complexity of MCB = the expected number of iterations * the complexity of each iteration. Each iteration only takes O(n^2). However, the expected number of iterations is unknown, so I conducted two sets of experiments to explore how it depends on the number of points and on the dimension.
The first set of experiments: randomly generate m points in 10 dimensions, find their MCB, and record the number of iterations; repeat this 1000 times (each time with different randomly generated data of m points in 10 dimensions) and compute the average number of iterations over the 1000 runs. Following this method, vary m from 20 up to 100,000 while keeping the dimension constant, and observe how the average number of iterations varies with m.
The results of the first set of experiments are shown in Table 1. I plotted the data from Table 1, as shown in Figure 1 (the abscissa is the number of points, the ordinate is the average number of iterations), and found that the average number of iterations grows with the number of points like a doubly logarithmic function; in particular, it essentially overlaps with y = 4.2 * log_2(log_2.5(x)). Whether in the range of 20 to 20,000 points (Figures 1 and 2) or even at 100,000 points (Figure 3), this function fits the average number of iterations! A small number of data points did not fit the function exactly, but when I increased the number of repetitions to 10,000 (statistically, more repetitions get closer to the theoretical value), those points fit as well (for example, the average number of iterations at 4000 points over 10,000 repetitions is closer to the function value than over 1000 repetitions).
Therefore, I believe the function does capture how the average number of iterations varies with the number of points in that dimension, so I think the average number of iterations of the MCB algorithm grows as O(log(log(m))) in the number of points.
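A more formal check than visually overlaying the curve is to fit the conjectured form by least squares and inspect the fitted coefficient and residuals. A sketch with SciPy; since Table 1 is not reproduced here, the data below are synthetic stand-ins generated from the conjectured curve itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def conjecture(m, a):
    # a * log_2(log_2.5(m)), the form fitted in the first experiment
    return a * np.log2(np.log(m) / np.log(2.5))

# synthetic stand-in for Table 1: point counts and "average iterations";
# replace `iters` with the real measured averages from the experiments
m = np.array([20, 100, 1000, 10000, 100000], dtype=float)
iters = conjecture(m, 4.2)

(a_hat,), _ = curve_fit(conjecture, m, iters, p0=[1.0])
print(f"fitted coefficient: {a_hat:.2f}")  # recovers ~4.2 on this synthetic data
```

With the real Table 1 averages in `iters`, the size of the residuals (and how they shrink with more repetitions) gives a quantitative version of the "basically overlaps" claim.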
The second set of experiments: randomly generate 1000 points in n dimensions, find their MCB, and record the number of iterations; repeat this 1000 times (each time with different randomly generated n-dimensional data of 1000 points) and compute the average number of iterations over the 1000 runs. Following this method, vary n from 2 up to 900 while keeping the number of points constant, and observe how the average number of iterations varies with n.
The results of the second set of experiments are shown in Table 2. I plotted the data from Table 2, as shown in Figure 4 (the abscissa is the dimension, the ordinate is the average number of iterations). The average number of iterations increases with the dimension like a power function; in particular, it essentially coincides with y = 4.2 * x^(1/2).
Since higher-dimensional experiments take too long, I only ran experiments in dimensions 2 to 900. However, I have reason to believe that the expected number of iterations of MCB grows as O(n^(1/2)) in the dimension.
At the same time, to verify that 1000 repetitions are enough, I reran some of the experiments with 10,000 repetitions (see Tables 1 and 2). The results are essentially the same as with 1000 repetitions, so I think 1000 repetitions are sufficient.
Combining the conclusions of the two sets of experiments (the expected number of iterations grows as O(log(log(m))) in the number of points and as O(n^(1/2)) in the dimension), the expected number of iterations is O(log(log(m)) * n^(1/2)).
Then, the expected complexity of MCB = the expected number of iterations * the complexity of each iteration = O(log(log(m)) * n^(1/2)) * O(n^2) = O(log(log(m)) * n^2.5).
I don't know if my conjecture is right, so I want to share it with you and hope to receive your corrections!
In any case, I would very much like to hear your opinion!
Because I don't know how to insert the pictures into the text, this may cause you some inconvenience; I am very sorry. The pictures and tables contain a lot of information that should help you follow my reasoning.
I usually work with Calypso (after QIIME2) for microbiome statistical analysis of my ASVs table. Is anyone here a Calypso user? It has been unavailable for almost two months now. Is it an update problem? Can someone suggest other software? I already tried MANTA, but my feature table exceeds its size limits.
Many thanks to whoever can help me!
I have two independent groups and two nominal variables (a 2×2 table).
To calculate the effect size, should I use the odds ratio or the phi coefficient? Or both?
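The two indices answer slightly different questions, so reporting both is common: the odds ratio expresses the strength of association on a ratio scale, while phi is a correlation-like measure bounded by ±1. A sketch computing both from hypothetical counts:

```python
import math
from scipy.stats import fisher_exact

# hypothetical 2x2 table: rows = groups, columns = outcome yes/no
a, b = 30, 10
c, d = 15, 25

# sample odds ratio (a*d)/(b*c), here via scipy's Fisher exact test
odds_ratio, p = fisher_exact([[a, b], [c, d]])

# phi coefficient for a 2x2 table
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(f"OR = {odds_ratio:.2f}, phi = {phi:.3f}")
```

For these counts the OR is 5.0 (strong on the odds scale) while phi is about 0.38 (a moderate correlation), which illustrates why some authors report both.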
I am running a panel threshold regression, and I need to report all my results in my Word document. I need to compare each threshold used and report the results of the linearity test. I don't know how to build the table with outreg2, and I am asking whether it can be used for a nonlinear regression at all. Thank you in advance for your help.
Here is an example of what I want to report:
N = 78, T = 18
Panel Var. = country
Time Var. = year
Number of moment conditions = 416
Bootstrap p-value for linearity test = 0
manufacturingvalueadded | Coef. Std. Err. z P>|z| [95% Conf. Interval]
Lag_y_b | .946936 .0334954 28.27 0.000 .8812862 1.012586
corrup_b | 1.480812 .2360899 6.27 0.000 1.018084 1.943539
lntotalpopulation_b | 10.3324 1.685093 6.13 0.000 7.029675 13.63512
agedependencyratioyoung_b | .3139841 .046488 6.75 0.000 .2228694 .4050989
mineralrents_b | -.2442282 .0611859 -3.99 0.000 -.3641504 -.124306
arableland_b | .1240312 .0781495 1.59 0.112 -.029139 .2772013
tradeopeness_b | .0183878 .003537 5.20 0.000 .0114553 .0253202
domesticcredit_b | -.0093323 .0053828 -1.73 0.083 -.0198823 .0012177
FDIinflowsnetpercentgdp_b | -.0214076 .0057223 -3.74 0.000 -.0326231 -.0101921
lnGdppercapitapppconstant_b | -.4619588 .3066621 -1.51 0.132 -1.063006 .1390879
cons_d | 37.65625 27.39948 1.37 0.169 -16.04574 91.35824
Lag_y_d | -.3890428 .0393969 -9.87 0.000 -.4662593 -.3118264
corrup_d | -2.611355 .7031698 -3.71 0.000 -3.989542 -1.233167
lntotalpopulation_d | -1.827323 .5602273 -3.26 0.001 -2.925348 -.7292976
agedependencyratioyoung_d | -.1209525 .1082349 -1.12 0.264 -.333089 .0911841
mineralrents_d | .1083279 .2067438 0.52 0.600 -.2968826 .5135383
arableland_d | .0983405 .0438156 2.24 0.025 .0124635 .1842176
tradeopeness_d | -.017207 .0086147 -2.00 0.046 -.0340915 -.0003225
domesticcredit_d | -.0190802 .011135 -1.71 0.087 -.0409045 .002744
FDIinflowsnetpercentgdp_d | .0538446 .0160583 3.35 0.001 .022371 .0853183
lnGdppercapitapppconstant_d | 1.163407 1.613874 0.72 0.471 -1.999727 4.326542
r | 3.59596 .1853073 19.41 0.000 3.232764 3.959155
Thanks for answering this question.
I am currently using DHE to measure ROS in adherent cancer cells. It did not work well. I describe my protocol in detail below. Any suggestion will be greatly appreciated.
1. Seed 250,000 cells in a 60 mm dish and allow them to adapt for 48 hours
2. Discard the culture medium and wash once with pre-warmed D-PBS
3. Detach the cells with 1 ml trypsin and stop the reaction by adding 1 ml complete culture medium
4. Centrifuge at 2000 rpm (tabletop centrifuge) for 1 min and discard the supernatant
5. Wash the cells in phenol red-free and FBS-free HBSS
6. Centrifuge at 2000 rpm (tabletop centrifuge) for 1 min and discard the supernatant
7. Add DHE to phenol red-free and FBS-free HBSS (final conc. 10 µM)
8. Add 500 µl of the DHE-containing HBSS to resuspend the cells
9. Incubate the cells in the dark in a loose-capped 5 ml Falcon tube for 30 min at 37 °C
10. Add DMSO (control), lapatinib (HER2 inhibitor, final conc. 500 nM), or arsenite (positive control, final conc. 100 µM) and continue to culture for an additional 2 hrs
11. Centrifuge at 2000 rpm (tabletop centrifuge) for 1 min and discard the supernatant
12. Resuspend the cells in PBS on ice and keep them in the dark
13. Immediately run the flow cytometry analysis with excitation at 488 nm (500 nm was also used) and the PE channel for detection
I can hardly tell any difference between the positive control and DMSO. Could anyone kindly share your experience or protocol?
Thanks in advance,
I have to report the results of my mediation analysis, but I can only find the diagram reporting style and I need table styles. Can anyone please help?
How can I present the Kruskal-Wallis test results in a table? I am interested in presenting the values in a table and want to show the differences between variables using a post-hoc test. However, there are usually two values (multiple-comparison z values and multiple-comparison p values (2-tailed)). So, would anybody guide me on how to structure my table, please?
I'm performing stepwise backward/forward multiple regression. At each step I check the significance of all regression coefficients and look through a table entitled "Stepwise regression summary". My stopping rule is that all regression coefficients are significant, i.e., in that table the p-level for the step should be less than 0.05. I suppose that this p-level refers to the preceding column entitled "F to enter/remove". I understand it is some kind of F-statistic and have not thought much about it. But for one small sample, I obtained a significant F-statistic both for a model with one variable and for a model with eleven variables (10 significant coefficients and 1 non-significant). That is a big difference, 1 versus 11 predictors in the model. Could you please help me with this "F to enter/remove" statistic? What does it mean? It differs from the usual F-statistic reported for each fitted regression model. (I use Statistica 12.0 software.)
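For what it's worth, the "F to enter/remove" is the partial F-statistic: it compares the residual sum of squares of the model with and without the candidate variable, scaled by the full model's residual mean square. A sketch on simulated data, where x1 is a real predictor and x2 is pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)                  # a real predictor
x2 = rng.normal(size=n)                  # an irrelevant predictor
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones(n)
X_full = np.column_stack([ones, x1, x2])
rss_full = rss(X_full, y)
mse_full = rss_full / (n - X_full.shape[1])   # residual mean square

# "F to remove": drop one variable and see how much the RSS increases
F_x1 = (rss(np.column_stack([ones, x2]), y) - rss_full) / mse_full
F_x2 = (rss(np.column_stack([ones, x1]), y) - rss_full) / mse_full
print(F_x1, F_x2)  # F_x1 large (x1 matters), F_x2 small (x2 does not)
```

This is why the statistic is per-variable and per-step rather than the overall model F: it tests only whether adding (or removing) that one variable changes the fit, given everything else already in the model.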
Hello guys, I wanted to ask whether calculating and reporting the combined mean (mean of means) has any significance when summarizing data in tables of a systematic review.
I have already conducted a meta-analysis of all my variables, and I wanted to give a different dimension to the study through my tables by reporting the combined mean. If it does not have any significance, what statistical tests can I perform on a set of means and their standard deviations?
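If the groups being combined are independent, the combined mean should be weighted by sample size, and the SDs can be combined with the standard formula for merging groups (given, e.g., in the Cochrane Handbook). A sketch:

```python
import math

def combine_two_groups(n1, m1, sd1, n2, m2, sd2):
    """Combine two groups' summary statistics into one mean and SD
    (sample-size weighted mean; standard group-combining formula)."""
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2
           + n1 * n2 / n * (m1 - m2) ** 2) / (n - 1)
    return mean, math.sqrt(var)

# sanity check against raw data [1, 2, 3] and [4, 5, 6, 7]:
# group stats are (3, 2.0, 1.0) and (4, 5.5, sqrt(5/3))
mean, sd = combine_two_groups(3, 2.0, 1.0, 4, 5.5, math.sqrt(5 / 3))
print(mean, sd)  # matches the mean and SD of the pooled raw data
```

Note that a plain unweighted "mean of means" is only correct when all groups have the same sample size, which is one reason reviewers often prefer the weighted version (or the meta-analytic pooled estimate you already have).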
Google Scholar provides pages of references based on a search using keywords. I would like to export these references as one single table with the different fields separated from each other (authors, year, publication title, etc.) to allow for making diagrams etc. in Excel for instance.
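Google Scholar itself has no bulk export, but tools such as Publish or Perish and Zotero can capture result pages, and Scholar's "Cite → BibTeX" link gives per-entry records. Once the references are in BibTeX, flattening them into rows for Excel is straightforward; a minimal regex-based sketch (adequate for simple, brace-delimited entries):

```python
import re

# two invented BibTeX entries standing in for a Scholar export
BIBTEX = """@article{smith2020,
  author = {Smith, J. and Doe, A.},
  title = {An example paper},
  journal = {Journal of Examples},
  year = {2020}
}
@article{lee2019,
  author = {Lee, K.},
  title = {Another paper},
  journal = {Example Letters},
  year = {2019}
}"""

def bib_to_rows(bibtex, fields=("author", "title", "journal", "year")):
    """Flatten simple BibTeX entries into one dict (table row) per reference."""
    rows = []
    for entry in re.split(r'(?=@\w+\s*{)', bibtex):
        if not entry.strip():
            continue
        row = {}
        for field in fields:
            m = re.search(field + r'\s*=\s*{(.*?)}', entry, re.S)
            row[field] = m.group(1).strip() if m else ""
        rows.append(row)
    return rows

rows = bib_to_rows(BIBTEX)
print(rows[0])
```

Each row can then be written out with `csv.DictWriter` and opened directly in Excel. For messy real-world BibTeX, a proper parser such as the `bibtexparser` package is more robust than this regex sketch.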
I am conducting a 2x2x2 between-subjects ANOVA in SPSS, and every time I run it I get gaps in the rows for the interactions between my variables. How do I fix this, please? Thank you.
I was determining the cut-off of a diagnostic parameter using the Youden index (by comparing sensitivity (Sn) and specificity (Sp) from the AUC/ROC analysis). After the cut-off was obtained, I applied this value to create a 2 x 2 table generating PPV, NPV, LR+, LR-, and accuracy values. However, after I calculated manually using Excel and MedCalc, it turned out that the Sn and Sp values listed in the AUC/ROC analysis differed from those calculated manually.
To exemplify, using the table below (https://doi.org/10.2147/IJGM.S351505), could you help me determine the correct way to calculate the PPV, NPV, and accuracy values, as well as LR+ and LR-?
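One common source of such discrepancies is how ties at the cut-off are handled (> versus ≥ the threshold), so it is worth recomputing everything from the same four cells of the 2×2 table. A sketch with hypothetical counts (not the ones in the linked table):

```python
# hypothetical 2x2 diagnostic table at the chosen cut-off
TP, FN = 40, 10   # diseased: test positive / test negative
FP, TN = 20, 30   # healthy:  test positive / test negative

Sn = TP / (TP + FN)                     # sensitivity
Sp = TN / (TN + FP)                     # specificity
PPV = TP / (TP + FP)                    # positive predictive value
NPV = TN / (TN + FN)                    # negative predictive value
LR_pos = Sn / (1 - Sp)                  # positive likelihood ratio
LR_neg = (1 - Sn) / Sp                  # negative likelihood ratio
acc = (TP + TN) / (TP + FP + FN + TN)   # accuracy

print(Sn, Sp, PPV, NPV, LR_pos, LR_neg, acc)
```

If the Sn/Sp computed this way from your own cross-tabulation differ from the ROC output, check whether the software classifies values exactly equal to the cut-off as positive or negative; also note that PPV and NPV (unlike Sn, Sp, and the LRs) depend on the disease prevalence in the sample.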
I have three multiple regressions, each with its own DV. They are all related, but I want to collapse them into one table. Has anyone seen a paper that has done this, or could you share an example?
I would like to get your opinion on this issue. I would appreciate any feedback.
Recently, a narrative review paper was published (2022) on the same subject that I published (2021). When reading the paper, I found a lot of similarities. I realized that the authors cited 32 papers that I also used (they used 193 references and provided an update on the subject). So far, OK. Most people would say this is not a problem. My results were organized in a table following a search methodology. Then, I realized that a lot of the data in the table is in their text, often verbatim, but they cited the original papers. However, when I looked at the references, I realized that chunks of their references are in the same order as in my paper. For example, one of the chunks has 7 references in the same order as in my table of results. The same happens in other parts of the paper. Their research did not use any methodology (so we don't know how they got to their results), and my table is not in alphabetical or chronological order. So, even if researchers write about the same subject and use similar papers, it is very rare to see chunks of references in the same order. I also see a lot of rephrasing of ideas from my discussion and conclusion. Because of all this, I feel there is strong evidence of plagiarism. It looks like they used my table of results to compose part of their paper but did not give me credit for it, although they cited my paper for other unrelated minor details. Reading their paper carefully, I also identified several sentences whose citations are wrong. For example, a big paragraph refers to an original paper as the source of the information, but when you go there, that information is not in that paper. Knowing the subject well, I found several mistakes like that. There are also a lot of citations of non-peer-reviewed articles from blog/news posts. So, the authors were very careless with citations and with referencing previous works.
It is difficult to prove plagiarism in review articles, and people often regard it as subjective, saying there are no clear guidelines to identify it. Software doesn't catch everything. So, my question is: are the reasons I explained (especially those chunks of references in the same order) enough proof to contact the editors of their journal?
Thanks for the feedback! I appreciate it! :-)
When designing a concrete mix, we use tables with Stern or Bolomey coefficients. Do you know of other tables?
I have used BLAST for thousands of sequences and I am now analysing the output in Excel. It consists of thousands of species (at genus level) across multiple sites. I currently have a table with the species names found at each site, but I would like to compile them into a simple, effective table that includes all names and their presence/absence across all sites. Does anyone know an effective way to organise that kind of information?
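If the output can be arranged as one row per (site, species) hit, a pandas crosstab collapses it into a presence/absence matrix in one step; a sketch with invented names:

```python
import pandas as pd

# one row per BLAST hit: which species was found at which site (made-up data)
hits = pd.DataFrame({
    "site":    ["A", "A", "B", "B", "B", "C"],
    "species": ["Genus1", "Genus2", "Genus1", "Genus3", "Genus1", "Genus2"],
})

# counts per species/site, clipped to 1/0 for presence/absence
presence = pd.crosstab(hits["species"], hits["site"]).clip(upper=1)
print(presence)
```

`presence.to_excel("presence_absence.xlsx")` then writes the matrix back out for Excel; dropping the `.clip(upper=1)` keeps raw hit counts instead.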
My research is related to vicarious embarrassment, which is an embarrassment because of the actions of other people (the protagonist). My hypothesis is that the friendship relationship with the protagonist who violates the norm will have a higher vicarious embarrassment than the vicarious embarrassment in the protagonist's condition as a stranger.
I ran a repeated measures analysis with a within-subject factor RELATION that has two levels: friendship X strangers. My dependent variable is vicarious embarrassment. The results of the analysis show that the relationship factor has a significant impact on VE. The difference between the conditions of friendship X strangers is also significant.
Besides the main effect analysis, I suspect that friendship collectivism functions as a covariate. I included it in the analysis as a covariate.
When friendship collectivism was included in the repeated measures GLM analysis as a covariate the results were as follows.
In the table of tests of within-subjects contrasts, the RELATION factor became non-significant, and the interaction RELATION × friendship collectivism was also not significant. I also don't know how to interpret the parameter estimates table. Can anyone help interpret these results? Thank you very much for your time and knowledge.
- Who else here is using the PiMP platform (designed by Glasgow Polyomics, UK) to analyze their metabolomics data? I used this platform and extracted the data in .csv format. Then I loaded it into metaboanalyst.ca, but the metabolite names and pathways that were apparent in PiMP disappeared, showing only ms1_peak_id on all graphs and tables.
- How do I export the results .csv file with metabolite names rather than MS peak ids? I also tried copying the peak ids to identify them in KEGG or HMDB, without success. I am new to such analyses, so any help or advice on handling this is welcome.
- How do I identify peaks from biological compounds? Thank you all.
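I can't speak to PiMP's export options specifically, but if any second table mapping ms1_peak_id to a metabolite name can be exported (e.g. an identification or annotation sheet), merging it into the intensity table before uploading to MetaboAnalyst restores readable labels. A hedged pandas sketch with invented column names:

```python
import pandas as pd

# invented example tables; the real column names depend on PiMP's export
intensities = pd.DataFrame({
    "ms1_peak_id": [101, 102, 103],
    "sample1": [1.2, 3.4, 5.6],
    "sample2": [2.1, 4.3, 6.5],
})
annotations = pd.DataFrame({
    "ms1_peak_id": [101, 102],
    "metabolite": ["citrate", "malate"],
})

merged = intensities.merge(annotations, on="ms1_peak_id", how="left")
# use the metabolite name where available, otherwise keep the peak id
merged["label"] = merged["metabolite"].fillna(merged["ms1_peak_id"].astype(str))
print(merged[["label", "sample1", "sample2"]])
```

Saving `merged` with the `label` column first (via `to_csv`) gives MetaboAnalyst human-readable feature names while keeping unannotated peaks traceable by their ids.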
The chemical equation (in the small picture) is included in a table for the selection of organic reduction half-reactions in a textbook.
My question is: what does (d) stand for, since the author sets it equal to 40 in the methanogenesis example (see the large picture)?
For example, I am writing about research in a literature review that developed a tool. In that research there is no table; can I organise this tool into a table in my literature review?
I have trouble configuring material files with the DAMASK software, so I need your help. I copied the example from the DAMASK 3.0 official website (the official example failed to run successfully) to try to use 'from_table' to assign custom grain orientations, but it always reports an error: 'ValueError: Length mismatch: Expected axis has 3 elements, new values have 1 elements'.
(1) The following is the instruction I use:
import damask

t = damask.Table.load('ori.txt')
# note: O='q' expects a 4-component quaternion column in the table;
# if ori.txt contains three Euler angle columns instead, a length
# mismatch like the one above can occur
config_material = damask.ConfigMaterial.from_table(t, O='q')
(2) The following is the instruction I used. The attachment is the Euler angle file of grain orientations I used: first.png
(3) The following figure is the example from the official website: secong.png
I am currently investigating cognitive differences between unforced errors, forced errors and winners in table tennis matches. I can find very little literature on the topic.
Mortar workability is determined through the flow table test on freshly prepared mortar. With the passage of time, the flowability of freshly prepared mortar decreases. In this context, what is the exact definition of the pot life of mortar, and what is its physical significance?
I think I might be able to inherit a good optical table. However, it has a 50 mm hole pitch, and this is not ideal for most of my work, which tends to use a 25 mm pitch. Does anyone have experience putting a breadboard on top of such a table to 'convert' the hole pitch? I'm interested to know if there were any problems in terms of damping or any other issues.
For my master's thesis I am running an ordinal logistic regression analysis with about 30 independent variables. Because of the number of independent variables, I get a lot of missing cases in the Case Processing Summary. Nonetheless, my model seems to fit well, as the goodness-of-fit and test-of-parallel-lines criteria are met.
The problem is that in the Case Processing Summary, out of the dataset population of about 7500 there are 5500 cases missing, meaning 5500 cases were not included in the analysis.
My question is: how do I make a Table 1 (characteristics of the population in the analysis) that takes into account only the cases that were included in the analysis?
Hopefully someone is familiar with the correct procedure and can help me on my way.
Is it necessary to get permission from the journal if I want to reuse a table or image from another paper?
I am writing a review paper in which I want to include some pictures and tables from other papers, so what should I do to obtain them legitimately?
I just ran an ARDL model in which I defined ln(Budget Deficit/GDP) as the dependent variable and ln(G/GDP) + ln(OilRevenue/GDP) + ln(10%rich/10%poor) + ... and some other political-economic indicators as independent variables, to find out which are the most influential determinants of the budget deficit.
The time series are annual, and I obtained five ARDL estimations by combining various economic and political variables. All the post-estimation checks pass: speed of adjustment between zero and -1, meaningful coefficients, using the AIC and EC options, etc.
Let's assume my estimation runs up to 2019. Now I am trying to figure out how to predict or forecast the dependent variable for, say, the next 5 years, i.e. up to 2024.
I did not succeed with tsappend and other commands in reaching a conclusion.
Should I define the independent variables separately for each of the next five years, put them in the data table, and then use the predict command?
Or can Stata solve the model without values for the independent variables in the coming years?
I would be very thankful if you could help me solve this issue.
I am getting the error 'missing value where TRUE/FALSE needed' from the G2Sd package in R when I input a table containing mesh size in microns and weight in g. I used library(G2Sd) and then granstat(web_interface = TRUE). When the web-interface dialog box opened, I gave the input and this error occurred. How do I fix it, and is there an example dataset showing the table format expected as input?
Hey, I have been looking for this a lot. Are there sigma+ and sigma- constants for meta substitution?
I also couldn't find the sigma* constants.
Hello, I have two cohorts and I'd like to check whether the distribution of cases is similar between them. For clarity, I'll post the table.
I'd like to know if the HEMS and GEMS groups differ significantly in the distribution of counts of the various coded calls (represented by their category numbers: 7, 14, 15, etc.).
I'm not sure how to run and check that with these variables; sorry for the basic question. I'd just like to demonstrate similarity between the two groups, and I'm a bit of a statistical novice.
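A chi-square test of homogeneity on the cohort-by-category counts is the usual starting point (with the caveat that a non-significant result is absence of evidence for a difference, not proof of similarity; formally demonstrating similarity needs an equivalence test). A sketch with made-up counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows = cohorts (HEMS, GEMS); columns = call categories (made-up counts)
table = np.array([
    [30, 12, 25, 8],   # HEMS
    [28, 15, 22, 10],  # GEMS
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# also inspect `expected`: the test is unreliable if many expected counts are < 5
```

With your real table pasted in place of the made-up counts, a large p-value indicates no detectable difference in the category distributions; rare categories may need to be merged to keep the expected counts reasonable.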
I sampled benthic foraminifera from two sites, in each site from 3 different habitats.
I want to check if the differences in the numbers of species are significant or not.
My data basically consists of presence/absence records and the number of species in each habitat, for example:
Habitat 1: 6, 8, 14, 16
Habitat 2: 3, 4, 7, 9
Habitat 3: 4, 6, 9, 10
* There is another table for the other site.
* Different numbers within each habitat represent different sampling months.
* From each habitat, at least 3 replicates were sampled. I summed all species present across the replicates because there is sometimes large variability between replicates, and because the comparison between habitats and sites is more relevant to our question.
What test would be the best to validate if the differences are significant or not?
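If the monthly counts are treated as independent replicates, a Kruskal-Wallis test across the three habitats (run separately per site) is one defensible sketch, using the numbers from the question:

```python
from scipy.stats import kruskal

# species counts per sampling month, one list per habitat (from the question)
habitat1 = [6, 8, 14, 16]
habitat2 = [3, 4, 7, 9]
habitat3 = [4, 6, 9, 10]

h, p = kruskal(habitat1, habitat2, habitat3)
print(f"H = {h:.2f}, p = {p:.3f}")
```

One caveat: since the same months appear in every habitat, the samples are arguably matched rather than independent, in which case a Friedman test on the month × habitat layout would be the more appropriate nonparametric choice.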
Thanks in advance,
I have measured breast tumor size using different modalities, CESM and MRI. I also have the size on histology.
I have a paired-samples test table, and the Sig. is .037 for MRI and .523 for CESM when compared with the gold standard of histology. What does this mean?
I am currently using the CONN toolbox in MATLAB. In the 2nd-level results, we set the connection threshold at an uncorrected p-value < 0.001 and the cluster threshold at an uncorrected p-value < 0.05 to obtain the following results.
If you look at the result table shown below, the values of p-uncorrected and p-FDR are both shown.
I have two questions.
1. What does the p-FDR value shown in the results table below mean, and how is it calculated?
2. Both the connection threshold and the cluster threshold are specified as uncorrected, yet a p-FDR value was still calculated in the table below; why? What does this p-FDR mean?
For example, suppose I have a pre-post design experiment with 2 groups, A and B. I can get change scores (post minus pre), and I can count the number of positive and negative changes. Hence I create a 2×2 table (Group A/B × positive/negative change):
                        Group A   Group B
Positive change score     n1        n2
Negative change score     n3        n4
Can I then use a chi-square test?
I have learned that ANCOVA is a better choice, but the change scores violate the normality assumption, so maybe ANCOVA is not a good option.
I'd be really grateful if anyone could help me with the FEPA Standard 42-1:2006 acceptable ranges for F4 to F220.
There are different references on the internet with different F-grit tables, which has really confused me.
gmx_MMPBSA_ana always displays the following error and closes the program when I try to view the results:
/usr/local/bin/gmx_MMPBSA_ana:8: PerformanceWarning: indexing past lexsort depth may impact performance.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/GMXMMPBSA/analyzer/customitem.py", line 148, in _show_line_table
from GMXMMPBSA.analyzer.plots import Tables
File "/usr/local/lib/python3.8/dist-packages/GMXMMPBSA/analyzer/plots.py", line 38, in <module>
from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as FigureCanvas, NavigationToolbar2QT
ModuleNotFoundError: No module named 'matplotlib.backends.backend_qtagg'
Aborted (core image recorded)
Producing a standard score for a neuropsych test normally involves browsing norms tables, finding the particular table that matches a demographic, finding the appropriate raw-to-standard conversion, and then entering it manually. Unfortunately, this process can be quite error prone.
Is it possible to instead use a calculated field and lookup tables to both (1) choose the correct table of normative data and (2) choose the correct standard score for a given raw score?
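In principle yes: once the norms tables are flattened into data, both steps become simple lookups. A minimal sketch with an invented norms structure (the real tables, age bands, and score ranges must come from the test manual):

```python
# invented norms: demographic band -> {raw_score: standard_score}
NORMS = {
    "18-39": {10: 85, 11: 90, 12: 95, 13: 100},
    "40-59": {10: 90, 11: 95, 12: 100, 13: 105},
}

def age_band(age):
    """Map an age to its (invented) demographic band."""
    return "18-39" if age < 40 else "40-59"

def standard_score(age, raw):
    table = NORMS[age_band(age)]   # (1) choose the correct norms table
    return table[raw]              # (2) raw -> standard conversion

print(standard_score(35, 12))  # -> 95
print(standard_score(45, 12))  # -> 100
```

Keeping the norms in one data structure (or a spreadsheet loaded at runtime) means the lookup logic is written once and tested once, which removes the per-score manual-entry errors.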
I want to present the results of a multinomial logistic regression at a conference in a visual way.
Is it enough to present the table of the results in the PowerPoint?
I am conducting a systematic review of diagnostic studies to identify vascular complications after pediatric liver transplantation. Among the findings, there were 4 studies with controls (×2) and with comparisons between index/reference tests (×2).
I still find it difficult to extract data from these studies and present it in a Word document as a table or similar. Is it practical to conduct a meta-analysis and/or to standardize the data somehow?
What is your point of view, and what are your suggestions for extracting and analyzing the relevant data?
My understanding of these is very basic.
I am running some generalised estimating equation analyses using SPSS.
For the most part, the p-values for each variable are the same across the 'tests of model effects' and 'parameter estimates' tables in the output.
However, for one variable (continuous in nature), the p-value differs between the two output tables (significant in the model effects table but non-significant in the parameter estimates table).
Does anyone have any idea what might be causing these discrepancies or how I could rectify the problem?
I'm formatting a paper using MDPI's LaTeX template, and there is a huge blank space at the left of each page. It seems to be a narrow column that I cannot remove. Indeed, the comments in the template mention the "left column", but the "figure*" and "table*" options do not fit the corresponding figure or table to the full width of the page. Besides, this apparent "left column" appears not only on the first page (where various citation details can be included) but also in the rest of the manuscript (where only a blank region is visible).
Please, I wonder if you could help me to find the LaTeX commands or to modify the template so as to:
- Fit the full manuscript to the overall width of the page
- Fit at least large figures and tables to the overall width of the page
Thank you in advance!
i. Real data for a particular year after the base year,
ii. A hypothetical scenario using certain data for a particular year,
iii. A case-based study of a particular factor, or
iv. Other options?
I would be thankful for, and would greatly appreciate, comments and suggestions from fellow academicians.
On the Anatomage screen/table, there is a 3D image of a person that students use for on-screen dissection. The image comes from a donor, a person who has passed away.
How can we load our own image onto the Anatomage table?
I have been studying the impact of globalisation on political, economic and financial instability, so I need this data:
Table 3B: Political Risk Points by Component, 1984-2018 (Zip file)
Table 4B: Financial Risk Points by Component, 1984-2018 (Zip file)
Table 5B: Economic Risk Points by Component, 1984-2018 (Zip file)
Unfortunately, I do not have access to the ICRG (International Country Risk Guide) database, so I would greatly appreciate your help in obtaining these data, if you can. Thank you in advance.
I used Minitab for Plackett-Burman screening; in the data analysis I used the forward-selection method and obtained four significant factors in the Pareto chart, but two of them were not significant in the ANOVA table (as in the attached photos).
My question is:
Which one should I rely on (the Pareto chart or the ANOVA table)?
In other words, should I run the factorial analysis using the four significant factors that appeared in the Pareto chart, or only the two factors (the 2nd and 3rd) that were significant in the ANOVA table?
Thanks for the help.
Thank you all. I have another question, please: if I obtained four highly significant factors (p = 0.0001) and one significant factor (p = 0.042) from the Plackett-Burman screening step, should I complete my factorial design using only the highly significant factors, or all five? Which is best?
Imagine I have study 1 with results from location A, and study 2 with results from locations B, C and D. I want to compare all of them in a random-effects meta-analysis. Is it possible to create one table entry for study 1 and three table entries for study 2?
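To make the setup concrete, here is a minimal sketch (with made-up effect sizes and variances, since I am only illustrating the table structure) of DerSimonian-Laird random-effects pooling in which study 2 contributes three rows. Note that this treats B, C and D as independent samples; if they share participants or study-level effects, a multilevel model would be needed instead:

```python
import math

# Made-up effect sizes and within-study variances: study 1 contributes
# one row (location A); study 2 contributes three rows (B, C and D),
# each entered as its own table row.
rows = [
    ("Study 1 / A", 0.30, 0.04),
    ("Study 2 / B", 0.10, 0.05),
    ("Study 2 / C", 0.25, 0.06),
    ("Study 2 / D", 0.40, 0.05),
]

def dersimonian_laird(data):
    """Random-effects pooling via the DerSimonian-Laird estimator."""
    y = [e for _, e, _ in data]
    v = [s for _, _, s in data]
    w = [1.0 / vi for vi in v]                      # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(data) - 1)) / c)      # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

pooled, se, tau2 = dersimonian_laird(rows)
print(f"pooled = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")
```

With these invented numbers the heterogeneity estimate happens to be zero, so the pooled estimate collapses to the fixed-effect one; real data would of course behave differently.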
I performed 20-something moderated multiple regressions using the PROCESS macro for SPSS. Does anyone have a template for how to put these results into a table?
As I am new to docking, I would like to know whether a more negative or a more positive docking score in the RMSD table is better.
I have read two recent papers that explain this differently.
Could someone clarify which docking score is preferable for research purposes?
My dissertation project seeks to understand how, and which, women's inclusion models have been applied to support gender equality and the meaningful participation of women during the three previous rounds of peace negotiations in Yemen. The project's framework is drawn from Paffenholz (2014) and Reilly et al. (2015) on inclusive peace processes, and it outlines five models of inclusion: (1) direct participation at the negotiation table, (2) observation, (3) official and unofficial consultation, (4) negotiation commissions, and (5) problem-solving workshops. The purpose of using this framework is to define the meaningful participation of women in the UN-led peace process in Yemen. My research hypothesis is: the concept of meaningful participation and its practical application remain unclear in attempts to involve women in peace processes. My main research question is: what has enabled and obstructed women's meaningful participation in the Yemeni peace process?
I am currently working on a study that compares the knowledge of medical residents regarding child abuse, which involves eight specialties.
The data were analyzed with ANOVA, and Fisher's LSD was used for the post hoc examination. I have seen research articles that use superscript letters to indicate which values differ significantly, thereby avoiding large tables in the manuscript.
However, on consultation, my senior adviser recommended that I include the multiple-comparison tables in the manuscript and analyze the post hoc table.
My dilemma is that in one table I am dealing with eight categories, all compared pairwise with each other. How can I present the data without bulking up my manuscript?
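One compact option I have been considering is a single lower-triangular matrix of the 28 pairwise p-values instead of the full multiple-comparison tables. A rough sketch, with made-up specialty labels and randomly generated p-values purely for illustration:

```python
import random

# Made-up specialty labels and pairwise p-values, for illustration only.
groups = ["Peds", "FamMed", "IM", "EM", "Surg", "OB", "Psych", "Rad"]
random.seed(1)
pvals = {(i, j): round(random.uniform(0.001, 0.9), 3)
         for i in range(len(groups)) for j in range(i)}

# Print one lower-triangular matrix: 8 rows replace 28 table rows.
print(f"{'':>8}" + "".join(f"{g:>8}" for g in groups[:-1]))
for i in range(1, len(groups)):
    row = "".join(f"{pvals[(i, j)]:>8.3f}" for j in range(i))
    print(f"{groups[i]:>8}{row}")
```

Significant cells (p < 0.05) could then be bolded or starred in the manuscript, with the full LSD output moved to a supplementary file.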