Science topics: Tables

Tables - Science topic

Tables are presentations of nonstatistical data in tabular form.
Questions related to Tables
  • asked a question related to Tables
Question
5 answers
I want to use the PAM-13 in a study, but I'm unsure how to calculate the score. Different articles state that I should:
To calculate the total PAM score, the raw score is divided by the number of items answered (excepting non-applicable items) and multiplied by 13. Then, this score is transformed to a scale with a theoretical range 0–100, based on calibration tables, with higher PAM scores indicating higher patient activation
The raw scores can be converted into four activation levels: 1 (≤47.0) not believing activation is important, 2 (47.1–55.1) a lack of knowledge and confidence to take action, 3 (55.2–67.0) beginning to take action and 4 (≥67.1) taking action.
The problem is that if I take the maximum raw score (52) and, as instructed, divide it by the number of items answered (13) and then multiply that number by 13, I reach 52 again and would be categorized in activation level 2. So I guess there must be a step I'm missing.
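(For illustration, here is a minimal Python sketch of the linear part of the scoring, assuming the naive 0–100 rescaling; the official PAM conversion relies on licensed Rasch calibration tables, which this does not reproduce.)
```python
# Minimal sketch: naive linear 0-100 rescaling of the PAM-13 raw score.
# Assumption: this is NOT the official Rasch-based calibration, only an
# illustration of why a separate 0-100 transformation step is needed.
def pam13_linear_score(item_scores):
    """item_scores: answered PAM-13 items coded 1-4 (N/A items omitted)."""
    raw = sum(item_scores) / len(item_scores) * 13   # normalised raw score, 13-52
    return (raw - 13) / (52 - 13) * 100              # linear map onto 0-100

print(pam13_linear_score([4] * 13))  # maximum raw score 52 -> 100.0
```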
Does anyone know how to get the calibration table? Maybe that is what I'm missing.
Kind regards
Anna
Relevant answer
Answer
Thank you for your suggestion, I will try that too:-)
  • asked a question related to Tables
Question
4 answers
I would like to ask: when is it necessary to produce frequency tables in the thematic analysis process in qualitative research?
Relevant answer
Answer
A lot depends on the kind of thematic analysis you are doing, and Braun and Clarke (2021) now distinguish between three variations on this method. If you follow Braun and Clarke's own preferred version, which they call Reflexive Thematic Analysis, then you would definitely not count codes. Alternatively, if you did use a codebook to count themes, then you should be careful about calling it Thematic Analysis.
  • asked a question related to Tables
Question
13 answers
Hello,
I want to use the Kruskal-Wallis test on SPSS 21, but when the p value is below 0.001, it returns 0. I have been looking for a way to get accurate p values, and the only solution I found online involves editing the properties of the output table. Unfortunately, this works only for the output of certain tests (e.g., t-test). The output table of nonparametric tests is not editable (at least on SPSS 21). Is there another way to fix this?
Thanks,
Chen
Edit: the problem arises when running the test using the Independent Samples, non-legacy dialog. With the K Independent Samples Legacy Dialog, it is possible to change the format of p values to display more precise results, as suggested in the responses below.
Relevant answer
Answer
But to be clear, the issue is about the precision of the p-value, not its accuracy per se. The precision at which to report the p-value for very small values will depend on the place of publication.
For very small p-values, I would want to report, e.g., p < 0.0001. I suppose reporting a level below that is usually silly. But if a p-value displayed as 0 in SPSS can be reported as p < 0.001, that's probably sufficient for most purposes.
  • asked a question related to Tables
Question
5 answers
The design is basically to determine the optimum values of dosage, pH and settling time for percentage removal of some basic water quality parameters.
The final outcomes from the 15 runs have been generated. Using STATISTICA 12 software, the regression and ANOVA tables have been generated for all the response parameters.
It is observed that in some cases the linear and quadratic terms are both significant, while in others only the interaction terms are.
My question: where the significant factors suggest a valid model, how do you proceed to generate the model equation from this table? Can it be obtained automatically from the software, or must it be done manually?
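(I cannot say how STATISTICA automates this, but as an illustration, here is a minimal Python/statsmodels sketch of fitting the full second-order model whose fitted coefficients form the model equation; the file and column names are hypothetical:)
```python
# Minimal sketch: fit a full quadratic (second-order) response-surface model.
# "bbd_runs.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.read_csv("bbd_runs.csv")  # the 15 design runs

model = smf.ols(
    "removal ~ dosage + pH + time"              # linear terms
    " + dosage:pH + dosage:time + pH:time"      # interaction terms
    " + I(dosage**2) + I(pH**2) + I(time**2)",  # quadratic terms
    data=runs,
).fit()
print(model.params)   # these coefficients are the model equation
print(model.summary())
```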
Relevant answer
Answer
PPS: I forgot there's a nice example of all of this in Montgomery's Design and Analysis of Experiments, available for download from the Z-Library. Also, there's a wonderful R package that does all of this for you; see the attachment. Best wishes, David Booth
  • asked a question related to Tables
Question
3 answers
- I have used GC-MS to explore the oil components of plants and uncover their antibacterial activity.
- I have shown the results in the table including (Retention time, compound name, molecular formula, and relative content).
- My question is: what is the IK (Kovats retention index) of the identified compounds that I should add to the table, and how can it be calculated?
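(One common choice, assuming temperature-programmed GC, is the linear retention index of van den Dool and Kratz, computed from the retention time of the analyte and the retention times of the n-alkanes eluting immediately before and after it:)
```latex
% Linear (van den Dool & Kratz) retention index for temperature-programmed GC:
% t_x = retention time of the analyte; t_n, t_{n+1} = retention times of the
% n-alkanes with n and n+1 carbons bracketing the analyte.
RI = 100 \left[ n + \frac{t_x - t_n}{t_{n+1} - t_n} \right]
```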
  • asked a question related to Tables
Question
5 answers
I am trying to export all images of the image collection of VIIRS. I am following the answer from this post (https://gis.stackexchange.com/questions/248216/exporting-each-image-from-collection-in-google-earth-engine). Instead of setting a bounding box as a clipping layer I have a .shp of my study area. When I run the code I am receiving this error:
In users/fitoprincipe/geetools:batch Line 133: collection.size is not a function
My code does not have the collection.size, as you can see below:
var batch = require('users/fitoprincipe/geetools:batch')
// Load Landsat 8 imagery and filter it
var collection = ee.ImageCollection('NOAA/VIIRS/DNB/MONTHLY_V1/VCMCFG')
.filter(ee.Filter.date('2013-05-01', '2021-12-31'))
.filterBounds(table)
.select('avg_rad')
.first()
.clip(table);
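// NOTE: .first() returns a single ee.Image, so from this point `collection`
// is an Image, not an ImageCollection. That is why the batch module's
// internal call to collection.size() fails. Mapping .clip(table) over the
// collection instead of taking .first() would keep it an ImageCollection.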
var count = collection.size()
print("Collection", count)
// export collection to google drive
batch.Download.ImageCollection.toDrive(
collection,
'Folder',
{name: 'ntl_{system:index}',
crs: 'EPSG: 7767',
type: 'float',
scale: 460,
region: table});
How can I batch download every monthly image of the VIIRS collection?
  • asked a question related to Tables
Question
12 answers
Is it necessary in thematic analysis (Braun and Clarke, 2006) in qualitative research to produce frequency tables of the codes?
Thanks
Relevant answer
Answer
It depends on what kind of thematic analysis you are doing. Braun and Clarke now distinguish three different varieties of thematic analysis, where one form is based on counting codes. But that is not their favored approach, which they call Reflexive Thematic Analysis, and which is much more inductive and interpretive.
  • asked a question related to Tables
Question
2 answers
Through literature investigation, I find that the MCB (minimum covering ball) algorithm proposed in the paper “A faster dual algorithm for the Euclidean minimum covering ball problem” (Marta Cavaleiro, 2017) is the best at present.
However, I don't know the expected complexity of the algorithm, so I did some experiments to explore it and arrived at a conjecture: O(log(log m) * n^2.5), where m is the number of points and n is the dimension. I want to share my conjecture with you and hope to get your advice!
The MCB algorithm is implemented through iteration. Thus, the expected complexity of MCB = the expected number of iterations * the complexity of each iteration. Each iteration takes only O(n^2). However, the expected number of iterations is unknown, so I conducted two sets of experiments to explore the relationship between the expected number of iterations and the number of points and the dimension.
The first set of experiments: randomly generate m points in 10 dimensions, find the MCB of these points, record the number of iterations, repeat such experiments 1000 times (each time with different randomly generated data of m points in 10 dimensions), and calculate the average number of iterations over these 1000 repeated random experiments. Following this method, change m from 20 all the way to 100,000 while keeping the dimension constant, and observe how the average number of iterations varies with m.
The results of the first set of experiments are shown in Table 1. I plotted the data from Table 1, as shown in Figure 1 (the abscissa is the number of points and the ordinate is the average number of iterations), and found that the average number of iterations grows with the number of points like a doubly logarithmic function; in particular, it basically overlaps with y = 4.2 * log_2(log_2.5(x)). Whether in the range of 20 to 20,000 points (as in Figures 1 and 2) or even at 100,000 points (as in Figure 3), this function fits the average number of iterations! A small number of data points did not fit the function exactly, but when I increased the number of repetitions to 10,000 (statistically, more repetitions get closer to the theoretical value), those points fit as well (for example, with 4,000 points the average over 10,000 repetitions is closer to the function value than the average over 1,000 repetitions).
Therefore I believe that this function does describe how the average number of iterations varies with the number of points in a fixed dimension, so I think the average number of iterations of the MCB algorithm, as a function of the number of points, is O(log(log m)).
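(As an aside, here is a small Python sketch of how such a fit can be checked programmatically; the iteration counts below are placeholders, not the actual Table 1 data:)
```python
# Fit y = a * log_2(log_2.5(m)) to measured average iteration counts.
# The counts below are PLACEHOLDERS, not the actual Table 1 measurements.
import numpy as np
from scipy.optimize import curve_fit

m = np.array([20, 100, 1000, 10000, 100000], dtype=float)
avg_iters = np.array([6.1, 7.4, 8.9, 10.1, 11.0])  # placeholder averages

def model(m, a):
    return a * np.log2(np.log(m) / np.log(2.5))  # log_2.5(m) via change of base

(a_hat,), _ = curve_fit(model, m, avg_iters)
print(f"fitted a = {a_hat:.2f}")  # the conjecture suggests a close to 4.2
```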
The second set of experiments: randomly generate 1000 points in n dimensions, find the MCB of these points, record the number of iterations, and repeat such experiments 1000 times (each time with different randomly generated data of 1000 points in n dimensions), and calculate the average number of iterations over the 1000 repeated random experiments. Following this method, change n from 2 all the way to 900 while keeping the number of points constant, and observe how the average number of iterations varies with n.
The results of the second set of experiments are shown in Table 2. I plotted the data from Table 2, as shown in Figure 4 (the abscissa is the dimension and the ordinate is the average number of iterations). The average number of iterations grows with the dimension like a power function; in particular, it basically coincides with y = 4.2 * x^(1/2).
Since higher-dimensional experiments take too long, I have only done experiments in 2 to 900 dimensions. However, I have reason to believe that the expected number of iterations of MCB, as a function of the dimension, is O(n^(1/2)).
At the same time, in order to verify whether 1000 repetitions are statistically sufficient, I ran 10,000 repeated random experiments for some settings (see Table 1 and Table 2). The final results are basically similar to those with 1000 repetitions, so I think 1000 repeated random experiments are sufficient.
Combining the conclusions of these two sets of experiments (the expected number of iterations as a function of the number of points is O(log(log m)); as a function of the dimension it is O(n^(1/2))), the expected number of iterations is O(log(log m) * n^(1/2)).
Then the expected complexity of MCB = the expected number of iterations * the complexity of each iteration = O(log(log m) * n^(1/2)) * O(n^2) = O(log(log m) * n^2.5).
I don't know if my conjecture is right, so I want to share it with you and hope to get your correction!
Whatever it is, I would very much like to get your opinion!
Because I don't know how to insert the pictures into the text, this may cause you some inconvenience; I am very sorry. The pictures and tables carry a lot of information, so they should help you understand my thinking.
Relevant answer
Answer
MEB has two parameters, center and radius. They are related to the diameter of the set of points, which is computed in O(n^2 * d) time, where n is the number of points in d-dimensional space; a bubble sort also takes O(n^2) time.
  • asked a question related to Tables
Question
2 answers
I usually work, after QIIME2, with Calypso for the microbiome statistical analysis of my ASV table. Is anyone here a Calypso user? Because it has been unavailable for almost two months. Is it an updating problem? Can someone suggest some other software? I already tried MANTA, but my feature table exceeds its size limits.
Many thanks to who will be able to help me!
Relevant answer
Answer
Hi! It was suddenly closed and the authors do not respond to requests. We must find other options, and yes, MicrobiomeAnalyst is an excellent alternative. Each has some pros and cons. Calypso had already implemented more advanced/recent methods for transformations and statistical analyses, but all that counts for nothing if the service is gone, while MicrobiomeAnalyst's strengths are outstanding speed, a complete pipeline for the most common analyses, really nice graphical display and, of course, reliability of the service!
Unfortunately, it is still very common for articles about bioinformatics programs and online platforms to be published and then, after a few years and good impact, the services are closed and the web pages deleted. This is due to the lack of funds to maintain the people and servers or to keep upgrading, or because the system is then turned into a paid service in derived businesses/startups (involving some of the authors) to make some money by selling the services, as seems to be the case for Calypso (https://microba.com/). This is very common even if the development is part of the main result of a published manuscript and the research/development was paid for with public funds. But the truth is that public funds are very unreliable/temporary, and researchers, databases and services suffer such cuts. For example, in Australia, people working in research have had a harsh time in recent years: https://www.theguardian.com/australia-news/2021/oct/11/australian-scientists-fear-job-insecurity-as-morale-plummets-amid-covid-survey-finds
However, this also creates a huge problem for reproducibility in science. There are some changes and requirements now asking authors/developers to deposit the data and software in public open-access repositories (https://en.wikipedia.org/wiki/Open-access_repository) such as GitHub, Zenodo, etc. when they are reported in a manuscript, but usually what is deposited is not the same upgraded version as the one published.
I contacted the first and corresponding authors of Calypso when the web page was suddenly shut down last year. We were in the middle of a manuscript's analyses where we were using it. There was no response from any of them, and we had to stop using some results from it, as we could not prepare the final figures with the same system/formatting. Annoying! For Calypso there was a compressed file uploaded in 2016 (https://zenodo.org/record/50931#.YxhhvuxBz0p) including the programs and web page content in Java, and it is possible to run this locally; it may need several missing libraries and setting everything up with Docker, and it is possibly an old version.
Of course, there are many other web platforms for analyzing 16S amplicon data, and standalone programs appear very often, so we must keep watching; there are colleagues working hard and publishing nice proposals, e.g. https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/btac494/6649618. But so far, in my opinion, the best, most reliable, complete, highly cited and easy-to-use web platform for analysing the amplicon sequencing frequency outputs of dada2, QIIME2 or mothur is MicrobiomeAnalyst.
  • asked a question related to Tables
Question
10 answers
I have two independent groups measured on two nominal variables (a 2x2 table).
To calculate the effect size should I use Odds Ratio or Phi Coefficient? Or both?
Relevant answer
Answer
Ahmad Alshallawi – you say “use the phi coefficient only”.
This kind of advice is not very useful because you don't say why. Is phi more interpretable to the general reader? Is it more robust? Is it generalisable to tables with different marginals?
As you can see from the discussion, there is no single best answer, and the person who asked the question will have to make a decision based on the hypothesis and the audience they are communicating to.
So why phi? Have you an argument you would like to bring to the discussion?
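(For anyone following along, here is a minimal Python sketch computing both candidate effect sizes from a 2x2 table, using placeholder counts:)
```python
# Odds ratio and phi coefficient from a 2x2 table.
# a, b, c, d are PLACEHOLDER cell counts, laid out as [[a, b], [c, d]].
import math

a, b, c, d = 30, 10, 15, 25
odds_ratio = (a * d) / (b * c)
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"OR = {odds_ratio:.2f}, phi = {phi:.2f}")
```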
  • asked a question related to Tables
Question
4 answers
Hello,
I am running a panel threshold regression, and I need to report all my results in my Word document. I need to compare each threshold used, and I need to report the results of the linearity test... I don't know how to complete the table with outreg2, and I am asking whether it is possible to use it for a nonlinear regression. Thank you in advance for your help.
Here is an example of what I want to report:
N = 78, T = 18
Panel Var. = country
Time Var. = year
Number of moment conditions = 416
Bootstrap p-value for linearity test = 0
---------------------------------------------------------------------------------------------
manufacturingvalueadded | Coef. Std. Err. z P>|z| [95% Conf. Interval]
----------------------------+----------------------------------------------------------------
Lag_y_b | .946936 .0334954 28.27 0.000 .8812862 1.012586
corrup_b | 1.480812 .2360899 6.27 0.000 1.018084 1.943539
lntotalpopulation_b | 10.3324 1.685093 6.13 0.000 7.029675 13.63512
agedependencyratioyoung_b | .3139841 .046488 6.75 0.000 .2228694 .4050989
mineralrents_b | -.2442282 .0611859 -3.99 0.000 -.3641504 -.124306
arableland_b | .1240312 .0781495 1.59 0.112 -.029139 .2772013
tradeopeness_b | .0183878 .003537 5.20 0.000 .0114553 .0253202
domesticcredit_b | -.0093323 .0053828 -1.73 0.083 -.0198823 .0012177
FDIinflowsnetpercentgdp_b | -.0214076 .0057223 -3.74 0.000 -.0326231 -.0101921
lnGdppercapitapppconstant_b | -.4619588 .3066621 -1.51 0.132 -1.063006 .1390879
cons_d | 37.65625 27.39948 1.37 0.169 -16.04574 91.35824
Lag_y_d | -.3890428 .0393969 -9.87 0.000 -.4662593 -.3118264
corrup_d | -2.611355 .7031698 -3.71 0.000 -3.989542 -1.233167
lntotalpopulation_d | -1.827323 .5602273 -3.26 0.001 -2.925348 -.7292976
agedependencyratioyoung_d | -.1209525 .1082349 -1.12 0.264 -.333089 .0911841
mineralrents_d | .1083279 .2067438 0.52 0.600 -.2968826 .5135383
arableland_d | .0983405 .0438156 2.24 0.025 .0124635 .1842176
tradeopeness_d | -.017207 .0086147 -2.00 0.046 -.0340915 -.0003225
domesticcredit_d | -.0190802 .011135 -1.71 0.087 -.0409045 .002744
FDIinflowsnetpercentgdp_d | .0538446 .0160583 3.35 0.001 .022371 .0853183
lnGdppercapitapppconstant_d | 1.163407 1.613874 0.72 0.471 -1.999727 4.326542
r | 3.59596 .1853073 19.41 0.000 3.232764 3.959155
Relevant answer
Answer
Yes, it is the case. Thank you again for your advice.
  • asked a question related to Tables
Question
1 answer
Dear all,
Thanks for answering this question.
I am currently using DHE to measure ROS in adherent cancer cells. It did not work well. I put my protocol in detail below. Any suggestions will be greatly appreciated.
1. Seed 250,000 cells in a 60mm dish and allow to adapt for 48 hours
2. Discard culture medium and wash with pre-warmed D-PBS for 1 time
3. Detach cells by 1ml trypsin and terminate it by adding 1ml complete culture medium
4. Centrifuge at 2000 rpm (benchtop centrifuge) for 1 min and discard the supernatant
5. Wash the cells in phenol red-free and FBS-free HBSS
6. Centrifuge at 2000 rpm (benchtop centrifuge) for 1 min and discard the supernatant
7. Add DHE to phenol red-free and FBS-free HBSS (final conc: 10uM)
8. Add 500ul DHE-containing HBSS to resuspend the cells
9. Incubate and culture the cells in dark in a 5ml cap-loose falcon for 30 mins at 37 degree
10. Add DMSO (control), lapatinib (HER2 inhibitor, final conc: 500 nM) and arsenite (positive control, final conc: 100 uM) and continue to culture for an additional 2 hrs
11. Centrifuge at 2000 rpm (benchtop centrifuge) for 1 min and discard the supernatant
12. Resuspend cells in PBS on ice and keep in dark
13. Immediately do the flow analysis by excitation at 488nm (500nm also used) and PE channel for detection
I can hardly tell any difference between the positive control and DMSO. Could anyone kindly share your experience or protocol?
Thanks in advance,
Wei
Relevant answer
Answer
Could you please elaborate on your problem/issue so that it can be directly addressed?
  • asked a question related to Tables
Question
2 answers
I have to report the results of my mediation analysis, but I can only find the diagram reporting style, and I need table styles. Can anyone please help?
Relevant answer
Answer
Layla Alanazi Thank you for your help
  • asked a question related to Tables
Question
2 answers
How can I present Kruskal-Wallis test results in a table? I am interested in presenting the values in a table and want to show the differences between variables using a post-hoc test. However, there are usually two values (multiple comparisons z' values and multiple comparisons p values (2-tailed)). So, would anybody guide me on how to structure my table, please?
Relevant answer
Answer
I think you need to do the mean comparison for the non-parametric data, then put your means in a table and mark means that are not significantly different with the same letters.
  • asked a question related to Tables
Question
4 answers
I’m performing multiple backward/forward regression step by step. So I’m checking the significance of all regression coefficients and looking through a table entitled “Stepwise regression summary” for each step. My “stop rule” is that all regression coefficients are significant; in that table, for each step, the p-level should be less than 0.05. I suppose this p-level refers to the preceding column entitled “F - to entr/rem”. I understand it is some kind of F-statistic and did not think much about it. But for one small sample, I got such a table with a significant F-statistic both for one variable in the model and for eleven variables in the model (10 significant coefficients and 1 non-significant). It is a big difference between 1 and 11 predictors in the model. Could you please help me with this “F - to entr/rem” statistic? What does it mean? It differs from the common F-statistic for each obtained regression model. (I use the Statistica 12.0 software.)
Relevant answer
Answer
Here are the rest of the papers. It took me a while to find them. None of these support using backward elimination.
Best wishes, David Booth
  • asked a question related to Tables
Question
2 answers
Hello guys, I wanted to ask whether calculating and mentioning the combined mean (mean of means) holds any significance when summarizing data in the tables of a systematic review.
I have already conducted a meta-analysis of all my variables, and I wanted to give a different dimension to the study through my tables by mentioning the combined mean. If it does not have any significance, what statistical tests can I run on a set of means and their standard deviations?
Relevant answer
Answer
Yes, the mean of means here represents a measure of the group mean. Another useful estimate is the range (max minus min of each set) to describe your data. A different case arises if you use the standard deviation of the groups. Imagine doing the same analysis with individual means as with group means: the study is different, but it is known in the literature.
  • asked a question related to Tables
Question
3 answers
Google Scholar provides pages of references based on a search using keywords. I would like to export these references as one single table with the different fields separated from each other (authors, year, publication title, etc.) to allow for making diagrams etc. in Excel for instance.
Relevant answer
Answer
Hi César Ducruet - it may be a tad too technical, but an alternative might be to use the Google Scholar API from SerpApi: https://serpapi.com/google-scholar-api
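(A rough sketch of that route, assuming SerpApi's Python client from the google-search-results package; the query is an example and the API key is a placeholder:)
```python
# Query SerpApi's google_scholar engine and flatten results into a CSV table.
# "YOUR_API_KEY" and the query string are placeholders.
import csv
from serpapi import GoogleSearch

results = GoogleSearch({
    "engine": "google_scholar",
    "q": "your keywords here",
    "api_key": "YOUR_API_KEY",
}).get_dict()

with open("scholar.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "link", "snippet"])
    for item in results.get("organic_results", []):
        writer.writerow([item.get("title"), item.get("link"), item.get("snippet")])
```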
  • asked a question related to Tables
Question
2 answers
I am conducting a 2x2x2 between-subjects ANOVA in SPSS, and every time I run it I get gaps in the rows for the interactions between my variables. How do I fix this, please? Thank you.
Relevant answer
Answer
Hello Meg,
The most likely explanation is that you have one or more empty cells for the interactions.
Here's a simple example of a 2 x 2 design, showing n of cases per cell:
Factor A, Level 1: Factor B, Level 1 (n = 4); Factor B, Level 2 (n = 4)
Factor A, Level 2: Factor B, Level 1 (n = 8); Factor B, Level 2 (n = 0)
Note that this arrangement is sufficient to yield non-zero results for the two main effects, but SPSS's default analysis (Type III SS) will yield an SS of 0 (and 0 df) for the interaction term, no matter what the scores of the 16 cases are.
Try running crosstabs for the various factor combinations and see if this isn't what is happening with your data set.
Of course, SPSS univariate also offers the option of suppressing specific terms (via a "custom" model), but the output anova table will not list those suppressed terms.
Good luck with your work.
  • asked a question related to Tables
Question
3 answers
I was determining the cut-off of a diagnostic parameter using the Youden index (by comparing sensitivity (Sn) and specificity (Sp) from the AUC/ROC analysis). After the cut-off was obtained, I applied this value to create a 2 x 2 table generating PPV, NPV, LR+, LR-, and accuracy values. However, after I calculated manually using Excel and MedCalc, it turned out that the Sn and Sp values listed in the AUC/ROC analysis differed from those calculated manually.
Are Sn and Sp values used to determine the cut-off value different in terms of use from performance indicators calculated by the 2 x 2 table? Then if we desire to put those diagnostic performances in the research paper, which value should we employ, those from the AUC/ROC analysis or the manual calculation of the 2 x 2 table?
To exemplify, in the table below (https://doi.org/10.2147/IJGM.S351505), could you help me determine the correct way to calculate PPV, NPV, and AC values, as well as LR+ and LR-?
Relevant answer
Answer
Hello Muhammad,
Here's an explanation for your apparent discrepancy, taken from this source: https://www.sciencedirect.com/topics/medicine-and-dentistry/youden-index
"The ROC curve is a summary of information and some information is lost, particularly the actual value of the cutoff. It is therefore important to supplement the ROC curve with the Cumulative Distribution Analysis (CDA) (Fig. 19, lower panel) which displays the sensitivity and specificity against the cutoff values on the X-axis and illustrates the parameters and the cutoff. The CDA is thus a more useful tool in describing the effect of a particular cutoff and its change."
If you want definitive values for the chosen cut score, based on your sample, I would generate them from the 2 x 2 tables that you construct, one for each potential indicator and selected threshold.
How to compute:
PPV: (Test positive & True Positive) / N of test positive
NPV: (Test negative & True negative) / N of test negative
AC: (Test positive & True Positive + Test negative & True Negative) / N (all cases)
LR+: sensitivity / (1 - specificity)
LR-: (1 - sensitivity) / specificity
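(The same computations as a quick Python sketch, using placeholder counts at the chosen cutoff:)
```python
# Diagnostic performance from a 2x2 table; tp, fp, fn, tn are PLACEHOLDER counts.
tp, fp, fn, tn = 45, 10, 5, 40

sens = tp / (tp + fn)                   # sensitivity
spec = tn / (tn + fp)                   # specificity
ppv = tp / (tp + fp)                    # PPV
npv = tn / (tn + fn)                    # NPV
acc = (tp + tn) / (tp + fp + fn + tn)   # accuracy
lr_pos = sens / (1 - spec)              # LR+
lr_neg = (1 - sens) / spec              # LR-
print(ppv, npv, acc, lr_pos, lr_neg)
```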
Good luck with your work.
  • asked a question related to Tables
Question
1 answer
I have three multiple regressions - each with their own DV. They are all related but I want to collapse them into one table. Has anyone seen a paper that has done this or could share an example?
Relevant answer
Answer
Never made one myself, but this article has one such table: https://doi.org/10.2147/NDT.S81024
Hope it helps!
  • asked a question related to Tables
Question
7 answers
I would like to get an opinion from you guys on this issue. I would appreciate any feedbacks.
Recently, a narrative review paper was published (2022) on the same subject that I published (2021). When reading the paper, I found a lot of similarities. I realized that the authors cited 32 papers that I also used (they used 193 references and provided an up-to-date on the subject). So far, OK. The majority of people would say this is not a problem. My results were organized in a table after search methodology. Then, I realized that a lot of the data in the table is in their text, often verbatim, but they cited the original papers. However, when I looked at the references, I realized that chunks of their references are in the same order as in my paper. For example, one of the chucks have 7 references in the same order as in my table of results. The same happens in other parts of the paper. Their research did not use any methodology (so we don't know how they got to their results), and my table is not in alphabetical or chronological order. So, even if researchers write about the same subject and use similar papers, it is very rare to see chunks of references in the same order. I also see a lot of rephrasing of ideas from my discussion and conclusion. Due to all these, I feel like there is a strong evidence of plagiarism. It looks like they used my table of results to compose part of their paper, but did not give me credit for it, although they cited my paper for other unrelated minor details. Reading their paper carefully, I also identified several sentences which citations are wrong. For example, a big paragraph refers to an original paper as the source of info, but when you go there, that info is not from that paper. Knowing well the subject, I found several mistakes like that. There is also a lot of citations using non-peer-reviewed articles from blog/news posts. So, the authors were very careless with citations and referencing previous works. Review articles are difficult to prove plagiarism, and often people regard it as subjective and say there are no clear guidelines to identify it. Softwares don't catch everything. So, my question is: are these reasons that I explained (especially having those chunks of references in the same order) enough proof to contact the editors of their journal?
Thanks for the feedback! I appreciate it! :-)
Relevant answer
Answer
Dear Prof. Andréa Fuzimoto,
As your paper's publication date is earlier than that paper's, you have strong evidence. If I were you, I would directly take at least one of the following actions:
  • Write directly to the editor(s) of the journal to expose this case.
  • Write to the Ministry of Higher Education and Scientific Research of these authors.
  • Write to the universities of these authors.
I found the following important note on Wikipedia:
"A study found that over half of the uploaded papers appear to infringe copyright because the authors uploaded the publisher's version."
To see this note, you can refer to the following link
Finally, kindly let me repeat what our colleague, Prof. Markel Basabe Lopez, said:
"contact them and expose the case, at least to have them know how you find such actions in relation to your work. Even if they don't edit the paper, they might take such things into account in the future and you will have expressed your part, which is relieving indeed! Then, the ball is in their court."
  • asked a question related to Tables
Question
4 answers
When designing a concrete mix, we use tables with Stern or Bolomey coefficients. Do you know of other tables?
Relevant answer
Answer
Om Prakash Chhangani, Mr Łasica asks about the values of the water demand coefficients of individual aggregate fractions or cements, which should be taken into account when calculating the total amount of water needed in a concrete mix of a particular consistence class. In Poland, we usually use the Bolomey or Stern tables, where such coefficients are given for various consistence classes determined by the slump test or Ve-Be test method. However, these coefficients should be corrected, especially in the context of modern cements of various types and classes. Hence the question about tables other than those of the old Bolomey or Stern.
  • asked a question related to Tables
Question
2 answers
I have used BLAST for thousands of sequences and I am now analysing the output in Excel. It consists of thousands of species at genus level across multiple sites. I currently have a table with the species names found at each site, but I would like to compile them into a simple and effective table that includes all names and their presence/absence across all sites. Does anyone know any effective ways to organise that kind of information?
Thanks
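(If Excel gets unwieldy, here is a pandas sketch of the same pivot; the file and column names are hypothetical:)
```python
# Turn long "one row per species-per-site hit" records into a
# presence/absence matrix. File and column names are hypothetical.
import pandas as pd

hits = pd.read_csv("blast_hits.csv")      # columns: site, species
presence = pd.crosstab(hits["species"], hits["site"]).clip(upper=1)
presence.to_csv("presence_absence.csv")   # 1 = present, 0 = absent
```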
Relevant answer
Answer
Have a grad student put the data into a database, then let the database functions do the sorting for you.
  • asked a question related to Tables
Question
2 answers
My research is related to vicarious embarrassment, which is an embarrassment because of the actions of other people (the protagonist). My hypothesis is that the friendship relationship with the protagonist who violates the norm will have a higher vicarious embarrassment than the vicarious embarrassment in the protagonist's condition as a stranger.
I ran a repeated measures analysis with a within-subject factor RELATION that has two levels: friendship X strangers. My dependent variable is vicarious embarrassment. The results of the analysis show that the relationship factor has a significant impact on VE. The difference between the conditions of friendship X strangers is also significant.
Besides the main effect analysis, I suspect that friendship collectivism functions as a covariate. I included it in the analysis as a covariate.
Problem:
When friendship collectivism was included in the repeated measures GLM analysis as a covariate the results were as follows.
In the table of tests of within-subjects contrasts, the RELATION factor became non-significant, and the interaction of RELATION x friendship collectivism was also not significant. I also don't know how to interpret the parameter estimates table. Can anyone help interpret these results? Thank you very much for your time and knowledge.
Relevant answer
Answer
Bruce has a point, but I cannot see how that will make this interpretation any easier; perhaps someone will explain that to me. Best wishes, David Booth
  • asked a question related to Tables
Question
1 answer
It's nowhere to be found.
I would, in general, be very interested in a comprehensive list/table of inorganic iodide, chloride and bromide BPs.
Relevant answer
Answer
Selenium iodide is a binary inorganic compound of selenium and iodine with the chemical formula Se2I2. The compound decomposes in water.
Other names: diselenium diiodide
CAS Number: 13465-67-3
SMILES: I[Se][Se]I
Chemical formula: Se2I2
Molar mass: 411.7289 g/mol
Appearance: gray crystals
Melting point: 70 °C (158 °F; 343 K)
  • asked a question related to Tables
Question
3 answers
  1. Who else here is using the PiMP platform (designed by Glasgow Polyomics, UK) to analyze their metabolomics data? I used this platform and extracted the data in .csv format. Then I loaded it into metaboanalyst.ca, but the metabolite names and pathways that were apparent in PiMP disappeared, showing only ms1_peak_id on all graphs and tables.
  2. How do I export the results .csv file with metabolite names rather than MS peak IDs? I also tried copying the peak IDs to identify them on KEGG or HMDB, without success. I am newly involved in such analyses, so any help or advice on handling this is welcome.
  3. How do I identify peaks from biological compounds? Thank you All
Relevant answer
Answer
In that case, first you can identify the metabolite name of the MS1 peak using the NIST library or the XCMS software, to which you have to upload the raw data, and then proceed further.
  • asked a question related to Tables
Question
1 answer
The chemical equation (in the small picture) is included in a table for the selection of organic reduction half-reactions in a textbook.
My question is: what does (d) stand for, since the author makes it equal to 40 in the methanogenesis example (see the large picture)?
Relevant answer
Answer
If we imagine 150 cubic meters over time, 'd' probably stands for day.
  • asked a question related to Tables
Question
4 answers
For example, I am writing in my literature review about research that developed a tool. In that research there is no table; can I organise this tool into a table in my literature review?
Relevant answer
Answer
The guidance/information in relation to using tables for literature review may further help, namely:
“Many reviews contain a summary table designed to present an overview of the articles discussed in the review and their key findings. This can add clarity and make the process of following the author’s development of the review easier for the reader. The headings of the table will depend on the purpose of the review…..” (Bolderston, 2008, p. 90).
“Some authors use summary tables of included studies in the main body of their review and then include full reports as appendices or supplements. A table of excluded studies (i.e. studies that appear to meet the inclusion criteria but which, on closer examination, are excluded for good reasons) is frequently included in standards of reporting…..” (Booth et al, 2012, p. 30).
Chapter 7 : Building Tables to Summarize Literature (Galvan, 2017, pp. 65-71).
“The objective in using different formats and summary tables is to synthesise large amounts of information in a clear and concise way that makes it easy for the reader to both understand and to make comparisons between the broader approaches being reviewed. It is much easier to summarise with the use of tables, graphics, and other illustrative material, and it should also make the narrative easier to follow for the reader” (van Wee and Banister, 2016, p. 285).
  • Bolderston, A. (2008) Writing an effective literature review, Journal of Medical Imaging and Radiation Sciences, 39, 2, pp. 86-92.
  • Booth, A., Papaioannou, D. and Sutton, A. (2012) Systematic Approaches to a Successful Literature Review. London: SAGE Publications Ltd.
  • Galvan, J. L. (2017) Writing Literature Reviews : A Guide for Students of the Social and Behavioral Sciences. Sixth edn. New York, NY: Routledge.
  • van Wee, B. and Banister, D. (2016) How to write a literature review paper?, Transport Reviews, 36, 2, pp. 278-288.
  • asked a question related to Tables
Question
2 answers
I have trouble configuring material files with the DAMASK software, so I need your help. I copied the example from the DAMASK 3.0 official website (the official website's example failed to run successfully) to try to use 'from_table' to assign custom grain orientations, but it always reported an error: 'ValueError: Length mismatch: Expected axis has 3 elements, new values have 1 elements'.
(1) These are the instructions I use:
import damask
import numpy as np
t = damask.Table.load('ori.txt')
config_material = damask.ConfigMaterial.from_table(t,O='q')
(2) The attachment first.png shows the Euler-angle file of grain orientations I used.
(3) The following figure is the example from the official website: secong.png
Relevant answer
Answer
Karo Sedighiani, thanks, I got it.
  • asked a question related to Tables
Question
2 answers
Hi everyone,
How can I use images from Wikimedia Commons or Wikipedia as part of my research without it being considered plagiarism or copyright infringement?
I wanted to use images and add them to a table in my own work.
Thanks for answering
Relevant answer
Answer
CITING IMAGES THAT ARE FREE TO USE (public domain, Creative Commons licenses, etc.)
Figure legend:
Figure 1. Put any desired caption here. Reprinted from Wikimedia Commons, 2010.1
Reference list (full citation in AMA format):
1. Amos E. A small and simple white mortar and pestle, on bamboo. Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/a/af/White-Mortar-and-Pestle.jpg. Published November 11, 2010. Accessed December 1, 2015.
Or, put entire citation below image if you do not want to include it in the reference list.
Example:
Figure 2. Anatomy of the knee in humans (https://en.wikipedia.org/wiki/Clarke%27s_test#/media/File:Blausen_0597_KneeAnatomy_Side.png) by Blausen.com staff, December 3, 2013. Creative Commons BY 3.0 license.
  • asked a question related to Tables
Question
12 answers
I am currently investigating cognitive differences between unforced errors, forced errors and winners in table tennis matches. I find very little literature on the topic.
Relevant answer
Answer
(PDF) Relationship between Physical Performance and Unforced Error during the Competition in National Turkish Junior Badminton Players (researchgate.net)
  • asked a question related to Tables
Question
5 answers
Hi there,
I'm looking for an easy-to-use uSv/hr table for Geiger counters to gauge normal radiation activity. Is there a table available somewhere showing where dangerous zones begin? I'd cherish your feedback.
Relevant answer
Answer
These limits are readily accessible from the Publications provided by IAEA, Vienna and ICRP Recommendations. Please have a look through the same. All the best.
  • asked a question related to Tables
Question
2 answers
Mortar workability is determined through the flow table test on freshly prepared mortar. With the passage of time, the flowability of the freshly prepared mortar decreases. In this context, what is the exact definition of the pot life of mortar, and what is its physical significance?
Relevant answer
Answer
Thank you Prof. P. Pravin Kumar Venkat Rao for your valuable suggestion.
  • asked a question related to Tables
Question
5 answers
Can I use the symbol % to explain the table in my Ph.D. thesis? Or do I have to write it out in sentences?
Relevant answer
Answer
Yes, if that fits your data. Regards
  • asked a question related to Tables
Question
1 answer
I think I might be able to inherit a good optical table. However, it has a 50 mm hole pitch, and this is not ideal for most of my work, which tends to use a 25 mm pitch. Does anyone have experience of putting a breadboard on top of such a table to 'convert' the hole pitch? I'm interested to know if there were any problems in terms of damping or any other issues that came up.
Relevant answer
Answer
Shouldn't be a problem if the weight of the breadboard is much smaller than the maximum weight that the optical table can handle. You can use 12.5 mm thick Aluminum breadboard for this reason (the weight can be found on the Thorlabs webpage). If the table is passive, you may need to re-adjust the pressure.
  • asked a question related to Tables
Question
7 answers
Hi all,
For my master's thesis I am running an ordinal logistic regression analysis with about 30 independent variables. Because of the number of independent variables I get a lot of missing cases in the Case Processing Summary. Nonetheless, my model seems to fit well, as the goodness-of-fit and test-of-parallel-lines criteria are met.
The problem is that in the Case Processing Summary, out of the dataset population of about 7500, 5500 cases are missing, meaning 5500 cases were not included in the analysis.
My question is: how do I make a Table 1 (characteristics of the population in the analysis) that is based only on the cases that were included in the analysis?
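(In SPSS this amounts to filtering on the same listwise deletion the regression applied; as an illustration outside SPSS, here is a pandas sketch with hypothetical column names:)
```python
# Restrict "Table 1" descriptives to the complete cases that entered the model.
# File and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey.csv")
model_vars = ["outcome", "age", "sex", "income"]  # the ~30 analysis variables
analysed = df.dropna(subset=model_vars)           # same listwise deletion SPSS uses
print(analysed[model_vars].describe())
```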
Hopefully someone is familiar with the correct procedure and can help me on my way.
Kind regards
Relevant answer
Answer
Even if you were the first researcher to ask what determines a certain type of health problem, you would have the same general approach.
  1. Selecting predictor based on theoretical grounds
  2. Critically examining data quality : put aside variables in which data quality is poor or in which there is so much missing data that you cannot make an unbiased estimate of the effect of the variable
  3. Prioritising predictors that are amenable to intervention, modification or treatment
  4. Identifying potential confounder variables
  5. defining a causal structure that includes the possibility of effect modification and confounding
You are better off doing this first, so that you do not end up having to posit bizarre models to explain whatever turned up significant in your data. Models should be fitted to data, not invented to explain them.*
Anyone who fishes will tell you that casting your net widely simply gets you significant quantities of old supermarket trolleys, beer bottles and dreck.
*Yes, I know. It is when your model cannot explain the data that you are entitled to theorise.
  • asked a question related to Tables
Question
4 answers
Is it necessary to take permission from the journal if I want to reuse an image or table from another paper?
I am writing a review paper in which I want to include some pictures and a table from other papers, so what should I do to get permission to use them?
Relevant answer
Answer
Hey Bush. It is mandatory. Kindly follow the link for more information:
  • asked a question related to Tables
Question
2 answers
Where can I find the 2006 CHF look-up table as a CSV file to be used in machine-learning modeling?
I will be very grateful for your contribution.
Relevant answer
Answer
Thanks @Paul Hurley.
  • asked a question related to Tables
Question
11 answers
Hi,
I just ran an ARDL model in which I defined Ln(Budget Deficit/GDP) as the dependent variable and Ln(G/GDP) + Ln(Oil Revenue/GDP) + Ln(10% rich/10% poor) + ... some other political-economic indicators as independent variables, to find out the most influential determinants of the budget deficit.
The time series are annual, and I got five ARDL estimations by combining various economic and political variables. All the post-estimation tests pass, the speed of adjustment is between zero and -1, with meaningful coefficients, using the AIC and EC options, etc.
Let's assume that my estimation runs up to 2019. Now I am trying to find out how I could predict or forecast the dependent variable for, say, the next 5 years, i.e. up to 2024.
I did not succeed in using tsappend and other time-series commands to reach a conclusion.
Should I define the independent variables separately for each of the next five years, put them in the data table, and use the predict command?
Or can Stata solve the model without being given values for the independent variables in the next years?
I would be very thankful if you could help me solve this issue.
Best
Relevant answer
Answer
Hi Ardeshir Saedi. I hope you have found a workable solution for this. I am now facing the same issue; can you please advise me on how to forecast after ARDL estimation?
  • asked a question related to Tables
Question
1 answer
I am getting the error 'missing value where TRUE/FALSE needed' from the G2Sd package in R when I input the table containing mesh size in microns and weight in g. I used library(G2Sd) and then granstat(web_interface = TRUE). When the web interface dialog box opened, I gave the input and this error occurred. How do I rectify it, and is there an example dataset showing the table format expected as input?
Relevant answer
Answer
Dear Yamini,
'missing value where TRUE/FALSE needed' is a general error message and is probably not thrown by the package 'G2Sd' itself. R throws this error when a condition used in if or while is not TRUE or FALSE but NA. Of course the error is somehow connected to G2Sd and your data (it might be caused by a bug in G2Sd or one of its required packages), but for debugging you should examine the result of traceback() called right after the error message. Maybe the shiny application caught the error with tryCatch(); in that case it is really hard to debug. If the error message displayed by shiny is also sent to the R session in the form of a warning, you can start debugging by turning warnings into errors (options(warn = 2)).
HTH,
Ákos
  • asked a question related to Tables
Question
4 answers
Is there any standard table?
Relevant answer
Answer
If you mean does the base material affect the coefficient of friction of a coated bolt, then the answer is not entirely clear. There can be two cases here: weak tightening of the bolt (the coating and the base material do not undergo plastic deformation) and strong tightening of the bolt (the coating and the base are deformed). If in the first case the base material does not matter, then in the second case the base material (its hardness) has a fundamental effect on the coefficient of friction. As for the influence of the atmosphere on the coefficient of friction, it certainly exists. For example, it is widely known that bolted joints in a vacuum can be tightly "welded" due to the diffusion effect of oxide-free metal surfaces.
  • asked a question related to Tables
Question
1 answer
Hey, I have been looking for this a lot. Are there sigma+ and sigma- constants for meta substitution?
Also, I couldn't find the sigma* constants.
Relevant answer
Answer
This is where I usually look for Hammett constants: A Survey of Hammett Substituent Constants and Resonance and Field Parameters. Chem. Rev. 1991, 91, 165-195.
  • asked a question related to Tables
Question
3 answers
Hello, I have two cohorts and I'd like to check if the distribution of cases is similar between them. For clarity I'll post the table.
I'd like to know if the HEMS and GEMS groups differ significantly in the distribution of counts of the various coded calls (represented by their category numbers: 7, 14, 15, etc.).
I'm not sure how to run and check that with those variables; sorry for the basic question. I'd just like to show similarity between the two groups, and I'm a bit of a statistical novice.
Thank you.
Relevant answer
Answer
Thank you both for responding :)
  • asked a question related to Tables
Question
5 answers
I mean compiled data, table in an article, etc.
Relevant answer
Answer
Hi again.
For Scandinavian bats there is an interesting source of information about this subject:
BAAGOE, H. J. (1987). The Scandinavian bat fauna: adaptive wing morphology and free flight in the field, pp. 57-74 (in): Fenton, M. B. et al. (eds.), Recent advances in the study of bats. Cambridge University Press. Cambridge / London / New York / New Rochelle / Melbourne / Sydney.
Best regards.
  • asked a question related to Tables
Question
8 answers
Hey,
I sampled benthic foraminifera from two sites, in each site from 3 different habitats.
I want to check if the differences in the numbers of species are significant or not.
My data is basically composed of presence/absence records and the number of species in each habitat, for example:
Site A:
Habitat 1: 6, 8, 14, 16
Habitat 2: 3, 4, 7, 9
Habitat 3: 4, 6, 9, 10
* There is another table for the other site
* Different numbers within each habitat represent different sampling months.
* From each habitat, at least 3 replicates were sampled. The decision to sum all species present in the replicates is due to the fact that sometimes there is a big variability between the replicates and since the comparison between habitats and sites is more relevant for our question.
What test would be the best to validate if the differences are significant or not?
Thanks in advance,
Yahel
Relevant answer
Answer
Hi Sir,
First, you have to calculate the minimum number of samples to be taken, because three replicates are not enough. Then you can use Student's t-test to test the difference between the two sites.
Thank you
  • asked a question related to Tables
Question
16 answers
I have measured breast tumor size using different modalities, CESM and MRI. I also have the size on histology.
I have a table of paired-samples tests, and the Sig. is .037 for MRI and .523 for CESM when compared with the gold standard of histology. What does this mean??
Help xx
Relevant answer
Answer
The output table shows the degrees of freedom (df). The table also already tells you which modalities specifically differ. Please see my previous comments.
  • asked a question related to Tables
Question
1 answer
I am currently using the conn toolbox in Matlab. In the 2nd level result, we set the connection threshold at uncorrected p-value < 0.001 and the cluster threshold at uncorrected p-value < 0.05 to confirm the following results.
If you look at the result table as shown below, the values of p-uncorrected and p-FDR are shown.
I have two questions.
1. What does the pFDR value shown in the results table below calculate and mean?
2. Both the connection threshold and the cluster threshold are specified as uncorrected, but the value of pFDR was calculated in the table below; why? What does the pFDR from this mean?
Relevant answer
Answer
Not my area, but I just want to make sure that you know that if you change the sample size, you change the p-value. With a large enough sample size (where 'large enough' depends on the application and standard deviations) you can 'disprove' anything. With a small enough sample size, nothing is disproven, based on the p-value.
This is why an isolated p-value is virtually useless.
Standard errors (unlike standard deviations) also are reduced with sample size, but they are more practically interpretable. Graphics can often be very helpful. But an isolated p-value with no mention of effect size is not helpful.
  • asked a question related to Tables
Question
4 answers
For example, if I have a pre-post design experiment with 2 groups, A and B, I can compute change scores (post minus pre), and I can count the number of positive changes and negative changes. Hence I create a 2x2 table (group A/B x positive/negative changes):
```
                        A    B
Positive change score   n1   n2
Negative change score   n3   n4
```
Can I then use a chi-square test?
I have learned that ANCOVA is a better choice, but the change scores violate normality assumptions, so maybe ANCOVA is not a good option.
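(If you do go the contingency-table route, here is a minimal scipy sketch with placeholder counts:)
```python
# Chi-square test of independence on the 2x2 group-by-change-direction table.
# The counts are PLACEHOLDERS for n1..n4.
from scipy.stats import chi2_contingency

table = [[18, 11],   # positive change scores: group A, group B
         [7, 14]]    # negative change scores: group A, group B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```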
Relevant answer
Answer
Dong Qianli, you should use the McNemar test if the variable is dichotomous.
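(Note that McNemar's test is computed on the paired pre/post table of the same subjects, not on the group-by-direction table above; a minimal statsmodels sketch with placeholder counts:)
```python
# McNemar test on the paired pre/post classification of the SAME subjects.
# Cell counts are PLACEHOLDERS; the off-diagonal discordant pairs drive the test.
from statsmodels.stats.contingency_tables import mcnemar

table = [[20, 5],    # pre positive: post positive, post negative
         [12, 30]]   # pre negative: post positive, post negative
print(mcnemar(table, exact=True))  # prints the statistic and p-value
```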
  • asked a question related to Tables
Question
1 answer
I'll be really grateful if anyone could help me with the FEPA Standard 42-1:2006 acceptable ranges for F4 to F220.
There are different references on the internet with different tables of F grit, which has really confused me.
Relevant answer
Answer
hi,
FEPA (Fédération Européenne des Fabricants de Produits Abrasifs) is the Federation of European Producers of Abrasives. FEPA distinguishes between grain for sanding paper (FEPA P) and grain for sharpening stones or wheels (FEPA F); the decisive factor for comparison is FEPA F. Lower mesh numbers mean larger, coarser particles; higher mesh numbers mean finer particles. Grit is sized by passing it through a series of mesh sieves: essentially, the grit size is determined by how much grit passes through or is retained at certain sieve sizes. Diamond is available in grit sizes from 40 to 8,000 mesh, while CBN comes in the range of 50 to 8,000 mesh. As with sandpaper, a smaller number signifies larger abrasive particles.
For more information, kindly refer to this link:
Best wishes..
  • asked a question related to Tables
Question
1 answer
gmx_mmpbsa_ana always displays the following error, and closes the program when I try to view the results:
/usr/local/bin/gmx_MMPBSA_ana:8: PerformanceWarning: indexing past lexsort depth may impact performance.
sys.exit(gmxmmpbsa_ana())
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/GMXMMPBSA/analyzer/customitem.py", line 148, in _show_line_table
from GMXMMPBSA.analyzer.plots import Tables
File "/usr/local/lib/python3.8/dist-packages/GMXMMPBSA/analyzer/plots.py", line 38, in <module>
from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as FigureCanvas, NavigationToolbar2QT
ModuleNotFoundError: No module named 'matplotlib.backends.backend_qtagg'
Aborted (core image recorded)
Relevant answer
Answer
Hi,
Kindly refer to this link:
Best wishes..
  • asked a question related to Tables
Question
1 answer
Producing a standard score for a neuropsych test normally involves browsing norms tables: finding the particular table that matches a demographic, finding the appropriate raw-to-standard conversion, then entering the result manually. Unfortunately, this process can be quite error-prone.
Is it possible to instead use a calculated field and lookup tables to both (1) choose the correct table of normative data and (2) choose the correct standard score for a given raw score?
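(It depends on the platform, but in principle yes; as a platform-agnostic illustration, here is a small Python sketch of the two-step lookup, with entirely made-up demographic bands and norm values:)
```python
# Two-step lookup: demographic band -> norms table -> raw-to-standard score.
# All bands and values are MADE-UP placeholders, not real norms.
NORMS = {
    ("age_60_69", "edu_12plus"): {0: 55, 1: 62, 2: 70, 3: 78},
    ("age_60_69", "edu_under12"): {0: 58, 1: 66, 2: 74, 3: 82},
}

def standard_score(age_band, edu_band, raw):
    return NORMS[(age_band, edu_band)][raw]  # KeyError flags an invalid entry

print(standard_score("age_60_69", "edu_12plus", 2))  # -> 70
```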
  • asked a question related to Tables
Question
3 answers
Hi,
I want to present the results of a multinomial logistic regression at a conference in a visual way.
Is it enough to present the table of results in the PowerPoint?
Thank you
Relevant answer
Answer
Sam H. Bahreini, I would also consider showing ROC curves and maybe the results of your Hosmer-Lemeshow goodness-of-fit tests. I have examples of each; DM me if interested.
  • asked a question related to Tables
Question
4 answers
The Shapiro-Wilk p-value is 0.0072, W = 0.8844, with 26 data points. Table critical values for W at n = 26: W = 0.891 at a = 0.01; W = 0.92 at a = 0.05. Is the distribution normal in my case?
Relevant answer
Answer
With almost certainty: it is not normal (unless it is a synthetic variable, and even then it may be doubtful, depending on the RNG used).
Note 1: the test cannot give any indication or evidence for the assumption that the tested distribution is normal. It only checks if the given sample size is already sufficient to reject this hypothesis. Failing to reject only means that the sample size is too small.
Note 2: I bet that answering the question "are my data sampled from a normal distribution?" is not what you need. I guess that the correctly formulated question is rather: "are my data sampled from a distribution that is sufficiently similar to a normal distribution to warrant conclusions from analysis methods assuming that the data are samples from a normal distribution?". This is a considerably different question, and this cannot be answered by "Normality test" like Shapiro-Wilk etc.
  • asked a question related to Tables
Question
2 answers
Dear all,
I am conducting a systematic review of diagnostic studies to identify vascular complications after liver transplantation in pediatrics. Among the findings, there were 4 studies that used controls (x2) or comparisons between index/reference tests (x2).
I still find it difficult to extract data from these studies and present it in a Word document as a table. Is it feasible to conduct a meta-analysis and/or standardize the data somehow?
What is your point of view and suggestions on extracting relevant data + analyzing these?
My understanding of these is very basic.
Thanks,
Bader.
Relevant answer
Answer
These resources may be useful for meta-analysis (see the attached screenshots).
  • asked a question related to Tables
Question
1 answer
Hello,
I am running some generalised estimating equation analyses using SPSS.
For the most part, the p-values for each variable are the same across the 'tests of model effects' and 'parameter estimates' tables in the output.
However, for one variable (continuous in nature), the p-value differs across the two output tables (significant in the model effects table but non-significant in the parameter estimates table).
Does anyone have any idea what might be causing these discrepancies or how I could rectify the problem?
Many thanks,
Amy
Relevant answer
Answer
Perhaps it has something to do with how the variables are measured? David Booth
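One way to probe this outside SPSS (a sketch using statsmodels, which is not the SPSS procedure itself): for a single-df continuous predictor, the joint Wald chi-square of a 'model effects'-style test should equal the squared z of the parameter estimate, so a persistent mismatch usually suggests the variable is also involved elsewhere in the model (e.g., in an interaction) or that different test types are being compared:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated longitudinal data: 50 subjects, 4 repeated measures each.
rng = np.random.default_rng(0)
n_sub, n_rep = 50, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_sub), n_rep),
    "x": rng.normal(size=n_sub * n_rep),
})
df["y"] = 0.5 * df["x"] + rng.normal(size=n_sub * n_rep)

model = smf.gee("y ~ x", groups="id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
res = model.fit()
print(res.summary())            # per-coefficient z-tests ('parameter estimates')
print(res.wald_test("x = 0"))   # joint Wald test ('tests of model effects')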
  • asked a question related to Tables
Question
3 answers
I'm formatting a paper using MDPI's LaTeX template and there is a huge blank space at the left of each page. It seems to be a narrow column that I cannot remove. Indeed, the comments of the template mention a "left column", but the "figure*" and "table*" options do not fit the corresponding figure or table to the full width of the page. Besides, this apparent "left column" does not only appear on the first page (where different citations can be included) but also in the rest of the manuscript (where only a blank region is visible).
Please, I wonder if you could help me to find the LaTeX commands or to modify the template so as to:
- Fit the full manuscript to the overall width of the page
- Fit at least large figures and tables to the overall width of the page
Thank you in advance!
Relevant answer
Answer
Thank you for your replies.
Certainly, the required commands are contained in the MDPI template. However, I had replaced the template text with my own code and deleted these commands. Revisiting the template, I could find the required instructions.
I detail them here in case it is useful for anybody:
The width is adjusted by means of:
\begin{adjustwidth}{-\extralength}{0cm}
...
\end{adjustwidth}
- Regarding the references section, these commands must be written immediately before and after the definition of the references section.
- As for figures and tables, the same commands must be used, but this time inside the definition of the figure or table; see the sketch below.
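For completeness, a sketch of a full-width figure built with these commands (this assumes the MDPI class also defines \fulllength for the expanded text width, as recent template versions do; the file name is a placeholder):

\begin{figure}[H]
\begin{adjustwidth}{-\extralength}{0cm}
\centering
\includegraphics[width=\fulllength]{myfigure.pdf} % placeholder file name
\caption{A figure spanning the full page width.}
\end{adjustwidth}
\end{figure}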
  • asked a question related to Tables
Question
2 answers
i. real data of a particular year after base year,
ii. Hypothetical scenario taking certain data for a particular year,
iii. Case base study of a particular factor, or
iv. other options?
I will be thankful for, and appreciate, comments and suggestions from the distinguished academicians.
Relevant answer
Answer
  • asked a question related to Tables
Question
1 answer
On the Anatomage screen/table, there is a 3D image of a person, which students use to do on-screen dissection. The image comes from a donor, a person who has passed away.
How can we insert our own image onto the Anatomage table?
Relevant answer
Answer
Clinicians, residents, and medical students can visualize internal and surface anatomy in 3D, with high resolution and accuracy. Users learn and practice using real patient data.
Regards,
Shafagat
  • asked a question related to Tables
Question
1 answer
Dear all,
I have been studying the impact of globalisation on political, economic and financial instability, so I need this data:
Table 3B: Political Risk Points by Component, 1984-2018 (Zip file)
Table 4B: Financial Risk Points by Component, 1984-2018 (Zip file)
Table 5B: Economic Risk Points by Component, 1984-2018 (Zip file)
Unfortunately, I do not have access to the International Country Risk Guide (ICRG) database, so I would greatly appreciate your help in obtaining this data, if you can. Thank you in advance.
Best regards
Relevant answer
Answer
Hello,
Did you manage to get the database?
I would need it too.
Thank you
  • asked a question related to Tables
Question
7 answers
I have used the Minitab program for Plackett-Burman screening. In the data analysis I used the forward selection method and obtained 4 significant factors in the Pareto charts, but 2 of them were not significant in the ANOVA table (as in the attached photos).
My question is:
which one can I depend on (the Pareto chart or the ANOVA table)?
In other words, should I perform the factorial analysis using the 4 significant factors that appeared in the Pareto chart, or using only the 2 factors (the 2nd and the 3rd) that were significant in the ANOVA table?
Thanks for help
Thanks to all of you. I have another question, please: if I obtained 4 highly significant factors (p = 0.0001) and 1 significant factor (p = 0.042) from the Plackett-Burman screening step, should I complete my factorial design using only the highly significant factors, or use all 5 factors? Which is best?
Relevant answer
Answer
Andrew Paul McKenzie Pegman, this is a classic case where traditionally we choose alpha levels other than the 0.05. It appears that the Minitab software defaults to alpha = 0.25 for this kind of screening. This makes sense if we want to be relatively liberal with the number of factors we want to keep for further evaluation, and we acknowledge that our initial screening study didn't have tremendous power. ...
  • asked a question related to Tables
Question
12 answers
Do you think such an approach can be used, especially to control the conductivity of semi-crystalline polymers?
Relevant answer
Answer
By the way, don't try temperature: most salts melt at high temperatures at which no polymer maintains its stability; it degrades. NaCl melts at 801 °C. My regards.
  • asked a question related to Tables
Question
4 answers
Imagine I have study 1 with results from location A, and study 2 with results from locations B, C and D. I want to compare all of them with a random-effects meta-analysis. Is it possible to create 1 table entry for study 1, and 3 table entries for study 2?
Relevant answer
Answer
You need to account for the dependency between ES that come from the same study (typically with a multilevel meta-analysis): https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/multilevel-ma.html
  • asked a question related to Tables
Question
2 answers
I performed 20-something moderated multiple regressions using the PROCESS macro for SPSS. Does anyone have a template for how to put these results into a table?
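Not a PROCESS-specific template, but the assembly itself can be automated so that each of the 20-odd models contributes rows to one long table. A minimal pandas sketch, with placeholder numbers rather than real output:

import pandas as pd

# One row per coefficient per model; b, SE, t, p, and CI values are placeholders.
rows = [
    ("Model 1", "X",   0.42, 0.10, 4.20, 0.0001,  0.22, 0.62),
    ("Model 1", "W",   0.15, 0.08, 1.88, 0.061,  -0.01, 0.31),
    ("Model 1", "X*W", 0.09, 0.04, 2.25, 0.025,   0.01, 0.17),
]
table = pd.DataFrame(rows, columns=["Model", "Predictor", "b", "SE", "t", "p", "CI low", "CI high"])
print(table.to_string(index=False))
# table.to_excel("process_results.xlsx", index=False)  # then format in Word/APA style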
  • asked a question related to Tables
Question
7 answers
As I am new to docking, I wanted to know whether a more negative or a more positive docking score in the RMSD table is better.
I have read two recent papers, and each explains docking-related information differently.
I want clarity on which docking score is better for research purposes.
Relevant answer
Answer
The RMSD (root mean square deviation) would be a positive number. It is a measure of how close a given structure is to a reference structure, so that the closer the RMSD is to zero, the better.
In contrast, some docking algorithms compute a docking score that is related to the free energy of binding of a ligand to a receptor. For this type of docking score, the more negative the score, the better.
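To make the distinction concrete, here is a minimal sketch of how an RMSD is computed from two atom-matched, aligned coordinate sets (the coordinates are placeholders):

import numpy as np

pose = np.array([[0.1, 0.2, 0.3], [1.1, 1.0, 0.9]])  # docked pose, N x 3, placeholder
ref  = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])  # reference pose, placeholder
rmsd = np.sqrt(np.mean(np.sum((pose - ref) ** 2, axis=1)))
print(f"RMSD = {rmsd:.3f} angstrom")                 # a root of squared terms: never negative

Because the RMSD is a root of a mean of squares, it can never be negative; "more negative is better" only applies to energy-like docking scores, not to RMSD.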
  • asked a question related to Tables
Question
5 answers
Hello everyone,
I would be very appreciative if anyone could provide me with the reference for the table below. Also, how accurate are the displayed means?
Regards
  • asked a question related to Tables
Question
2 answers
My dissertation project seeks to understand how and which women's inclusion models have been applied to support gender equality and the meaningful participation of women during the three previous rounds of peace negotiations in Yemen. The project's framework is drawn from Paffenholz (2014) and Reilly et al. (2015) toward an inclusive peace process, and it outlines five models of inclusion: (1) direct participation at the negotiation table, (2) observation, (3) official and unofficial consultation, (4) negotiation commissions, and (5) problem-solving workshops. The purpose of using this framework is to define the meaningful participation of women in the UN-led peace process in Yemen. My research hypothesis is: the concept of meaningful participation and its practical application is still not clear in attempts to involve women in peace processes. My main research question is: what has enabled and obstructed women's meaningful participation in the Yemeni peace process?
Relevant answer
Answer
Dear Bushra,
I have seen the "theory of critical mass" often used to analyse women's political representation, and thus their agency to influence decision-making. There are some follow-up studies focusing more on contextual issues. Hope this helps.
  • asked a question related to Tables
Question
1 answer
Good day!
I am currently working on a study that compares the knowledge of medical residents regarding child abuse, which involves eight specialties.
My data were treated with ANOVA, with Fisher's LSD used for the post hoc examination. I have seen research articles that use the "superscript" method to indicate whether values were statistically significant, in order to avoid putting large tables in the manuscript.
However, upon consultation with a senior/adviser, she advised me to put the multiple-comparison tables in my manuscript and analyse the post hoc table.
My dilemma is that in one table I am dealing with eight categories, all compared to each other. How can I present the data without bulking up my manuscript?
Thank you!
Relevant answer
Answer
Hi,
I'd like to recommend "Statistica V13", which is a very effective tool for such a task.
Good luck...
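One space-saving alternative to printing every multiple-comparison row is to collapse the 8 x 8 pairwise LSD results into a single matrix of significance markers and define the markers in a table footnote. A Python sketch with entirely hypothetical specialty labels and p-values:

import pandas as pd

specialties = ["Peds", "IM", "FM", "EM", "OB", "Surg", "Psych", "Radiol"]      # placeholders
p_values = {("Peds", "IM"): 0.03, ("Peds", "FM"): 0.21, ("IM", "FM"): 0.001}   # placeholder LSD p-values

mat = pd.DataFrame("", index=specialties, columns=specialties)
for (a, b), p in p_values.items():
    mark = "**" if p < 0.01 else ("*" if p < 0.05 else "ns")
    mat.loc[a, b] = mat.loc[b, a] = mark
print(mat.to_string())  # footnote: * p < 0.05, ** p < 0.01, ns = not significant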