Performance Testing - Science topic
Questions related to Performance Testing
Dear Community,
I am conducting DEPT135 and DEPT90 NMR experiments as part of a structure elucidation. However, these experiments are giving odd results, i.e. not what I expected: in DEPT135, CH3 and CH have a phase opposite to that of CH2, and DEPT90 displays only CH groups.
What can I do to resolve this problem, given that all the instrument performance tests gave good results?
Hi,
I'm currently preparing an experiment that is supposed to test the performance of my wind turbine, but I am having trouble finding a paper that describes the methodology and equipment I need in order to measure my wind turbine properly. First I found a paper in which a torque meter and a tachometer were used to obtain Cp and the tip speed ratio, but my supervisor said that I should find a cheaper solution because torque meters are expensive. I then found some papers in which a tachometer and a generator were used to measure the wind turbine, but I can't find any detailed information on what kind of generator or additional equipment I need. Can somebody give me a tip on where to find useful information about suitable equipment for my experiment?
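For reference, the power coefficient and tip speed ratio follow directly from the measured quantities; a minimal Python sketch of the arithmetic, with made-up numbers standing in for the rotor radius, wind speed, torque and rotational speed, might look like this:

import math

# Hypothetical measured values -- replace with your own readings.
rho = 1.225        # air density, kg/m^3
R = 0.45           # rotor radius, m
v = 8.0            # free-stream wind speed, m/s
torque = 1.2       # shaft torque, N*m (torque meter, or estimated from generator output / efficiency)
rpm = 900.0        # rotor speed from the tachometer

omega = 2.0 * math.pi * rpm / 60.0          # rotational speed, rad/s
p_mech = torque * omega                     # mechanical power on the shaft, W
p_wind = 0.5 * rho * math.pi * R**2 * v**3  # kinetic power of the wind through the swept area, W

cp = p_mech / p_wind                        # power coefficient
tsr = omega * R / v                         # tip speed ratio (lambda)
print(f"Cp = {cp:.3f}, tip speed ratio = {tsr:.2f}")

If a generator replaces the torque meter, the mechanical power is usually taken as the measured electrical power divided by the generator efficiency, which then has to be characterized separately (e.g., on a motor test bench).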
What advantages do ultrafine equiaxed grains (~500 nm in Al alloys) offer in terms of tensile and fatigue performance testing?
Are there any online courses on characterization analysis and performance testing?
Characterization analysis: XRD, EDX, FESEM and XPS
Performance test: Cyclic Voltammetry and EIS
Hi Sirs
Which is better to use in a performance test (electrochemical cell): a Pt nanoparticle electrode or a Pt plate electrode?
And why?
Will the temperature used affect the results for soil pH, soil organic matter content, cation exchange capacity and soil texture?
I performed a column performance test, which showed an abnormal peak shape. When I opened up my column packed with Daiso silica gel, I noticed that the column bed was extremely jelly-like. I have never seen this before. What could be the reason? What have I done wrong?
Hello everyone,
I've got a question regarding within-subject experiments in which two or more variants of a prototype (e.g., a chatbot) are evaluated with respect to different constructs, i.e. classic A/B testing experiments with different design options. For both versions, the same items are used for comparability.
Before the final data analysis, I plan to perform tests for validity and reliability and a factor analysis. Does anyone know whether I need to calculate the corresponding criteria (e.g., Cronbach's alpha, factor loadings, KMO values) for both versions separately, or only once, aggregated for the respective constructs? And how would I proceed with the exclusion of items? Especially when there are many control conditions, it can be difficult to decide whether to exclude an item that falls below a certain criterion.
In reviewing the literature on papers with a similar experimental design, I could not identify a consistent approach so far.
Thank you very much for your help! If anyone has recommendations for tools or tutorials, I would appreciate those as well.
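For the question above, a minimal sketch of what the "per version" computation could look like in Python is given below; it assumes the responses sit in a pandas DataFrame with a version column and one column per item, and it relies on the third-party pingouin and factor_analyzer packages. It only illustrates computing the criteria separately versus pooled, not which of the two is the right choice.

import pandas as pd
import pingouin as pg
from factor_analyzer.factor_analyzer import calculate_kmo

# Hypothetical data: one row per participant x version, items item1..item4 of one construct.
df = pd.read_csv("responses.csv")     # assumed columns: version, item1, item2, item3, item4
items = ["item1", "item2", "item3", "item4"]

for version, block in df.groupby("version"):
    alpha, ci = pg.cronbach_alpha(data=block[items])        # internal consistency per version
    kmo_per_item, kmo_total = calculate_kmo(block[items])   # sampling adequacy per version
    print(f"{version}: alpha = {alpha:.2f} (95% CI {ci}), KMO = {kmo_total:.2f}")

# The pooled alternative mentioned in the question, aggregated over both versions:
alpha_all, _ = pg.cronbach_alpha(data=df[items])
print(f"pooled alpha = {alpha_all:.2f}")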
I performed photoelectrochemical tests and need to determine the flat band potential of the system. From the papers I have read, Mott-Schottky plots (C^-2 versus V) are the most commonly used approach in the literature, but I don't know how to go from the Nyquist plot (Z'' versus Z') to C^-2 versus V. How do I determine the flat band potential with this method, and what type of measurement should I use? In summary, what steps should I follow?
Thanks
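One common route to the Mott-Schottky plot is to run the impedance measurement at a single fixed frequency while stepping the applied potential, take the imaginary part Z'' at each potential, convert it to a capacitance via C = -1/(omega*Z''), and fit the linear part of 1/C^2 versus V; the flat band potential is then read from the potential-axis intercept (minus kT/e). A rough Python sketch under those assumptions, with fabricated placeholder data in place of a real measurement:

import numpy as np

freq = 1000.0                                   # single measurement frequency in Hz (assumed)
omega = 2 * np.pi * freq

# Placeholder arrays standing in for the measurement: applied potential (V vs. reference)
# and the imaginary impedance Z'' (ohm) recorded at that one frequency for each potential.
potential = np.linspace(-0.2, 0.8, 21)
v_fb_true = -0.30                               # "true" flat band used only to fabricate the data
z_imag = -np.sqrt(4e9 * (potential - v_fb_true)) / omega

capacitance = -1.0 / (omega * z_imag)           # series-capacitance approximation
mott_schottky = 1.0 / capacitance**2            # y-axis of the Mott-Schottky plot

# Fit only the region that looks linear when you plot 1/C^2 against V.
lin = potential > 0.0                           # hypothetical choice of linear range
slope, intercept = np.polyfit(potential[lin], mott_schottky[lin], 1)
v_intercept = -intercept / slope                # where the fit crosses 1/C^2 = 0
print(f"flat band potential ~ {v_intercept:.3f} V minus kT/e (about 0.025 V at room temperature)")

The sign of the slope also indicates the conductivity type (positive for n-type, negative for p-type).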
We have developed an algorithm in the bioinformatics field and need computational equipment for a data test. We are looking for a collaboration to perform the tests.
I would like to know which is a better approach,
Doing the catalyst synthesis and characterization, then the performance testing of the catalysts, and only then the kinetic and mechanistic study, OR studying the reaction mechanism with computational tools such as DFT before carrying out the catalyst synthesis and performance testing? I am asking the question in the context of the steam reforming reaction over Ni-based bimetallic catalysts.
Thanks
We have multiple samples of PHA polymers that are prepared differently. We need Tg values with sufficient precision to compare the different samples, which will have relatively close Tg values. I believe precision to at least 1 °C would be sufficient.
I have been recommended to perform DSC testing at a heating rate of 1 °C/min, performing multiple passes and repeating the test with replicate specimens to get the most accurate Tg. As far as I'm currently aware, it isn't feasible to slow down the heating rate.
Are there any other ways of performing this test to ensure the values are precise enough for comparison, or is performing multiple passes the best way of determining them?
I would like to perform tests on triaxial equipment, so I need a geotechnical lab that accepts foreign students.
Hi,
I am studying the links between ADHD sub-dimensions and the different types of risky driving.
What are the best ADHD diagnostic methods, given that my goal is to relate risky driving to sub-impairments/sub-dimensions of ADHD?
I hope to use each of the common diagnostic platforms (questionnaires, computerized performance tests, and interviews).
I am working on performance testing of an SI engine with Brown's gas generation and its use as a fuel blend.
I'm using the DNA to do RT-qPCR, but the performance of the test group is not the same as normal, and I want to know the exact composition. Thanks in advance!
I carried out drought trials in rice in 2017 and 2018 with a large-scale germplasm of 2,030 genotypes under lowland (irrigated) and upland (drought) conditions. Six agronomic traits were considered for further analysis. Kindly let me know how I can perform a test of homogeneity.
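If by "test of homogeneity" you mean homogeneity of error variances across years/environments before pooling the data in a combined ANOVA (one common reading of the question), the sketch below shows how Bartlett's and Levene's tests could be run in Python with SciPy for one trait; the file and column names are hypothetical.

import pandas as pd
from scipy import stats

# Hypothetical layout: one row per genotype x environment with the trait value.
df = pd.read_csv("drought_trials.csv")        # assumed columns: genotype, env, grain_yield
groups = [g["grain_yield"].dropna().values
          for _, g in df.groupby("env")]      # e.g. lowland-2017, upland-2017, lowland-2018, ...

bart_stat, bart_p = stats.bartlett(*groups)                  # sensitive to non-normality
lev_stat, lev_p = stats.levene(*groups, center="median")     # robust Brown-Forsythe variant
print(f"Bartlett: p = {bart_p:.4f}; Levene: p = {lev_p:.4f}")
# A non-significant result supports pooling the environments/years in a combined analysis.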
I have a problem evaluating machine learning algorithms in WEKA!
I did the first six steps correctly, according to the instructions at https://machinelearningmastery.com/binary-classification-tutorial-weka/, but from step 7 I get the following:
7. Click on “Run” to open the Run tab and click the “Start” button to run the experiment. The experiment should complete in just a few seconds.
I got a report like this:
09:38:24: Started
09:38:24: Class atribute is not nominal!
09:38:24: Interrupted
09:38:24: There was 1 error
8. Click on the “Analyse” to open the Analyse tab. Click the “Experiment” button to load the results from the experiment.
In the window “Test output”, I have got this:
Tester: weka.experiment.PairedCorrectedTTester –G, 4,5,6, -D1 ...
Analysing: Percent_correct
Datasets: 0
Resultsets: 0
Confidence: 0.05 (two tailed)
Sorted by: -
Date: 3/19/20, 9:53AM
Index 0 out of bounds for length 0.
9. Click the “Perform test” button to perform a pairwise t-test comparing all of the results to the results for ZeroR.
The test did not start.
There are no changes and no results!
Where is the problem?
What happened?
The present research paper aims to discuss one of the problematic issues in measurement in general and in mental measurement in particular. A review of the psychological and educational literature shows that researchers who investigated multiple intelligences have depended on self-report instruments, whether in preparing measures of multiple intelligences, in measuring them, or in assembling multiple intelligence batteries.
Thus, the question is: can self-report instruments be used as an alternative to maximum performance tests in the measurement of multiple intelligences? Given the importance of mental measurement and measuring tools, and the continuous efforts in this area, the researcher discussed this problem along three axes: the first concerns mental measurement in terms of its definition, objectives, importance, measurement tools and basis of classification; the second tackles self-report in terms of its definition, the problems related to self-report instruments, and the extent to which self-report instruments can be used as an alternative to maximum performance tests in measuring intelligence and mental abilities.
The present research aimed to examine the validity of self-report instruments in measuring multiple intelligences, using McKenzie's inventory as a model of such instruments. This was done by exploring the factorial structure of the multiple intelligences inventory; establishing its concurrent validity through the significance of the correlation between students' self-reported scores on McKenzie's Multiple Intelligences Inventory and their performance on maximum performance tests; examining the possibility of predicting academic achievement from multiple intelligences; and finally examining the discriminant validity of the multiple intelligences inventory by testing whether multiple intelligences discriminate between high and low achievers in Math and Arabic. The researcher translated McKenzie's Multiple Intelligences Inventory into Arabic and relied on the National Battery of Cognitive and Social-Emotional Intelligence.
Findings revealed that the factorial structure of the multiple intelligences scale fit the current research data. There was no statistically significant correlation between students' scores on McKenzie's Multiple Intelligences Inventory and their language ability test, and no statistically significant correlation between self-reported social intelligence and social-emotional situational intelligence tests, while there were weak statistically significant correlations between students' scores on mathematical intelligence and numerical ability tests, and between spatial intelligence and spatial ability tests. In addition, the findings showed that achievement in Arabic could be predicted only from natural and musical intelligences, only natural intelligence predicted mathematics, and multiple intelligences did not discriminate between high and low achievers in mathematics, nor between high and low achievers in Arabic, except for natural and bodily-kinesthetic intelligences.
Key words:
multiple intelligences, self-report, typical performance, maximum performance, academic achievement.
After machining an alloy using WEDM or any other non-traditional machining process, what kinds of analysis can be performed to test surface and subsurface conditions?
Hi,
I am new to EEG signal processing. I am now working on the DEAP dataset to classify EEG signals into different emotion categories.
My inputs are EEG samples of shape channels x timesteps. The provider of the dataset has already removed artifacts.
I use convolution layers to extract features and a fully connected layer to classify. I also use dropout.
I shuffle all trials (sessions) and split the dataset into training and testing sets. I get reasonable accuracy on the training set.
However, the model is unable to generalize across trials.
It overfits. Its performance on the test set is just as bad as a random guess (around 50% accuracy for low/high valence classification).
Is there any good practice for alleviating overfitting in this scenario?
Another thing that bothers me is that when I search the related literature, I find many papers also reporting around 50% accuracy.
Why are results from EEG emotion classification so poor?
I feel quite helpless now. Thank you for any suggestions and replies!
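One detail that often matters with DEAP-style data is to keep all windows from the same trial (and ideally the same subject) on one side of the train/test split, so the network cannot memorize trial-specific signatures; a small sketch of that evaluation with scikit-learn's GroupKFold is below, with random placeholder arrays standing in for the extracted features and labels.

import numpy as np
from sklearn.model_selection import GroupKFold

# Placeholders: one feature vector per EEG window, the valence label, and the id of the
# trial (or subject) each window came from.
n_windows = 1000
X = np.random.randn(n_windows, 128)             # stand-in for CNN or handcrafted features
y = np.random.randint(0, 2, n_windows)          # low/high valence
groups = np.random.randint(0, 40, n_windows)    # trial (or subject) id per window

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups)):
    # No trial appears in both sets, so test accuracy reflects cross-trial generalization.
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    # model.fit(X[train_idx], y[train_idx]); acc = model.score(X[test_idx], y[test_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test windows")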
Recent research has devoted a lot of attention to black-box performance optimization to achieve global optimization with respect to a utility function. The advantage of black-box techniques is that they do not require an upfront performance model of the application, which is tedious to build and often inaccurate. The disadvantage is that black-box techniques may require a lot of live performance test samples in multi-dimensional parameter spaces.
Some recent work tries to tackle this disadvantage by means of model-based reinforcement learning, so that samples can be retrieved directly from the model instead of from a live test.
Why are such trained models even better than queue-based performance models?
I have decided to use the TLC technique to identify the types of polychlorinated biphenyls, and the 12 most toxic congeners, in the collected soil samples. If some of you have already performed tests by TLC to identify PCBs, could you send me the protocol(s) you used?
For my experiment I am going to do immunofluorescence for two junctional proteins in post-treatment endothelial cells. My test sera were obtained from DENV-infected patients. The quantity of test serum in my possession varies from 250 µl to 500 µl, and I have to perform the test in duplicate. I am going to add serum at a 1:3 proportion to the culture medium. If I conduct the experiment on a cover slip placed in a 6-well culture plate, 2.5-3 ml of medium will be required, which exceeds my stored serum quantity for the appropriate proportion. What can I do here? Can I change to a smaller amount of medium after the cells have grown to confluence, before adding the test serum?
Dear specialists/ experts
I am very new to ARDL testing and these days I am trying to analyse my data, which have different orders of integration. I have panel data. All my variables are in natural log transformation. My dependent variable and six independent variables are stationary at level, and two IVs are stationary at I(1). Because of the mixed orders of integration, I thought an ARDL test would fit my data. EViews does not support my data panel and says "near singular matrix" when I try to perform the test. Therefore I thought to go with STATA. I have a few basic questions.
1. Since my data are in natural log transformation, there are negative values in my data series. Is it a problem to carry out ARDL with negative values?
2. I have a time-invariant proxy (distance); is that a problem for my analysis?
3. Are there any limitations on the number of variables that can be used in an ARDL test?
I am stuck with panel-data ARDL in STATA and hope for your assistance and guidance with the appropriate STATA commands.
In my study, I have 2 groups (intervention and control). I performed tests twice (pre and post scores). I want to compare the significance of the changes between the two time points across the two groups.
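One common way to frame that comparison is as a group-by-time analysis: either compare the change scores (post minus pre) between the two groups, or run a mixed ANOVA with time as the within-subject factor and group as the between-subject factor. A minimal Python sketch of the change-score version, with hypothetical column names, is below.

import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant with group label, pre score and post score.
df = pd.read_csv("scores.csv")     # assumed columns: group ('intervention'/'control'), pre, post
df["change"] = df["post"] - df["pre"]

interv = df.loc[df["group"] == "intervention", "change"]
ctrl = df.loc[df["group"] == "control", "change"]

# Independent-samples (Welch) t-test on the change scores: did the groups change differently?
t_stat, p_val = stats.ttest_ind(interv, ctrl, equal_var=False)
print(f"difference in change between groups: t = {t_stat:.2f}, p = {p_val:.4f}")
# A mixed (split-plot) ANOVA, or an ANCOVA on post scores with pre as covariate, are common alternatives.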
I am interested in understanding the effect of thermal treatments on protein denaturation in a food simulant.
I have already performed tests on aqueous solutions of different kinds of proteins (BSA, whey proteins), but I would like to "complicate" the system a little in order to assess whether other compounds (sugars, salts, etc.) could influence the induction of structural modification by temperature.
There are several samples in which glycine is believed to be incorporated. I need to know the exact quantitative amount of glycine in each sample.
What would be the most suitable way to perform the test?
The Kaiser test (ninhydrin test) comes to mind first for detecting the primary amine.
What could be the other alternatives?
Hi, I am a bit unsure about how the data should be preprocessed prior to performing a paired statistical test. Let's say I have two groups, A and B, where I measure something every day for twenty days. I would then test for normal distribution, and then use either a paired t-test or a paired Wilcoxon test, with the pairing based on sampling day. Here is where I am unsure. Let's say each group has four replicates. Should I:
1) take the mean of the replicates, so that each group (A or B) is represented by only one value per day, or
2) perform the test with each group represented by four values?
Option 2 gives me more degrees of freedom, and thus usually a more statistically significant result.
Madeleine
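To make the two options in the question above concrete, here is a small Python sketch with hypothetical column names: option 1 averages the four replicates per group and day before the paired test, option 2 keeps every replicate as its own pair. Note that option 2 treats replicates as independent pairs, which inflates the degrees of freedom if they are technical rather than biological replicates.

import pandas as pd
from scipy import stats

# Hypothetical tidy data: one row per day x group x replicate measurement.
df = pd.read_csv("daily_measurements.csv")   # assumed columns: day, group ('A'/'B'), replicate, value
wide = df.pivot_table(index=["day", "replicate"], columns="group", values="value")

# Option 1: average replicates first, then pair by day (20 pairs).
daily_means = wide.groupby("day").mean()
t1, p1 = stats.ttest_rel(daily_means["A"], daily_means["B"])

# Option 2: keep every replicate and pair by day and replicate (80 pairs).
t2, p2 = stats.ttest_rel(wide["A"], wide["B"])

print(f"option 1 (replicates averaged): p = {p1:.4f}")
print(f"option 2 (all replicates):      p = {p2:.4f}")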
I am trying to differentiate the U937 cell line in order to perform tests on antigen presentation (ovalbumin + SIINFEKL). I think I am having trouble with the differentiation, because I see a low number of cells actually presenting the SIINFEKL peptide and the shape of the cells does not resemble that of macrophages.
For differentiation I have followed 2 different protocols:
- 100 µg/ml of PMA for 48 h,
- 20 nM of PMA + 25 mM of VitD3 for 48 h, followed by 6 days of resting with only VitD3.
Is there something I am doing wrong?
I performed the test according to Wolf et al., 2016.
Is a scenario possible wherein the drug exposure achieved in clinical studies surpasses the exposure achieved in nonclinical toxicity studies of appropriate duration?
And if so, is the dose selection flawed in that particular clinical study?
And to address that kind of situation, should more nonclinical toxicity studies be performed that test a higher exposure than what is expected in humans?
Dear all,
My name is Carlos Freitas and I am a chemical engineering student. I am performing furfural HDO tests and I am having difficulty finding chromatograms and retention times to identify the products of my reactions.
Thank you in advance.
Scientific greetings,
Carlos
When I was in undergraduate classes, I was advised to perform tests of normality and equality of variances before performing any test for mean comparison (ANOVA) in the parametric case. However, I recently read in a paper that normality should be assessed on the residuals, not on the measured variables.
I wish to know what is advisable if I want to publish my results in a paper. Statistically speaking, is it fair to perform the normality test on residuals instead of measured variables?
Thank you very much.
Evans, E.
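As an illustration of the residual-based check mentioned above (not a prescription), a one-way ANOVA can be fit with statsmodels and the Shapiro-Wilk test applied to its residuals; the column names below are hypothetical.

import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical data: a response measured under several treatments.
df = pd.read_csv("experiment.csv")            # assumed columns: treatment, response

model = smf.ols("response ~ C(treatment)", data=df).fit()

# Normality is checked on the model residuals, not on the raw response values.
w_stat, p_norm = stats.shapiro(model.resid)
print(f"Shapiro-Wilk on residuals: W = {w_stat:.3f}, p = {p_norm:.4f}")

# Homogeneity of variances is still checked on the groups themselves.
groups = [g["response"].values for _, g in df.groupby("treatment")]
lev_stat, p_lev = stats.levene(*groups)
print(f"Levene: p = {p_lev:.4f}")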
I treated cancer cells to induce calreticulin (CRT) exposure on the cell surface. Previously I could observe 20% ecto-CRT(+); then it dropped to 5%, and recently it dropped to nothing. I checked for cell contamination and used a new batch of cells, a new batch of drug, and also new vials of antibody, but I can't think of any other reason why I couldn't reproduce the result. To clarify, the flow cytometer was validated and passed the performance test every time.
Here is my protocol: 1 million cells were cultured in a 6 cm plate and treated with the drug. The cells were harvested with Accutase buffer, washed twice with ice-cold PBS, and blocked with 1% BSA-PBS. They were then incubated with monoclonal mouse anti-CRT antibody (1 µg/100 µl) on ice for 1 h and washed once, then incubated with goat anti-mouse IgG (Alexa Fluor 488) on ice in the dark for 1 h and washed once. The sample was diluted in PBS and analysed on the flow cytometer. I'm using the scatter plot to gate the live cells, since the drug mitoxantrone has fluorescence at ~600 nm, which overlaps with the PI viability dye.

For modeling, I am using about 20 different variables (in the form of GIS layers). Before proceeding with the modeling, I want to perform a collinearity test. Does anyone know how I can perform this test? There are procedures available for ordinary statistical data, but how can this be implemented with GIS layers?
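One practical route, assuming the 20 layers can be exported as co-registered rasters with the same extent and resolution, is to sample them into a plain table (one column per layer, one row per cell or sample point) and then compute pairwise correlations and variance inflation factors on that table; a rough Python sketch using rasterio and statsmodels, with a hypothetical folder layout, is below.

import glob
import numpy as np
import pandas as pd
import rasterio
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Read every aligned predictor raster into one column of a table.
layers = {}
for path in sorted(glob.glob("predictors/*.tif")):       # hypothetical folder of GeoTIFF layers
    with rasterio.open(path) as src:
        band = src.read(1).astype(float)
        if src.nodata is not None:
            band[band == src.nodata] = np.nan
        layers[path] = band.ravel()

X = pd.DataFrame(layers).dropna()            # one row per cell with valid values in every layer
print(X.corr().round(2))                     # pairwise Pearson correlations between layers

# Variance inflation factor per layer (with a constant term); VIF above ~5-10 is a common warning sign.
Xc = sm.add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns,
)
print(vif.sort_values(ascending=False))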
Hi.
I'm a student studying Li-S batteries.
I'm trying to use a 2032 coin cell.
What diameter should I consider when punching the cathode from the coated slurry?
I wonder if it should be smaller or larger than the Li metal anode.
Also, when I make the cathode, the thickness is adjusted.
Is there any difference in the performance test depending on thickness? If so, what effect does the thickness have?
Please help.
Thank you
I need a common method for performing this test and detailed information about the test parameters.
I'm currently using features that are built from statistics over a certain window. This takes, for example, 10 data points and turns them into one using PCC, KL divergence or a simple average (see link). The predictions are also made over a sliding window, meaning one anomaly will be present in multiple windows.
If you have two classes, 'normal' and 'anomaly', how do I best score performance on the test set?
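One simple, reproducible baseline for the question above is to score at the window level (each window gets a true label, 'anomaly' if it overlaps a labeled anomaly, and a predicted label) and to report precision, recall and F1 rather than accuracy, since the classes are heavily imbalanced; a sketch with scikit-learn and placeholder labels follows. Event-level scoring, where an anomaly counts as detected if at least one of its windows is flagged, is a common complement.

import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Placeholder labels: one entry per sliding window of the test set,
# 1 = anomaly (window overlaps a true anomaly), 0 = normal.
y_true = np.random.binomial(1, 0.05, size=2000)
y_pred = np.random.binomial(1, 0.07, size=2000)    # stand-in for the model's window decisions

prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary", pos_label=1)
print(f"window-level precision = {prec:.2f}, recall = {rec:.2f}, F1 = {f1:.2f}")
print(confusion_matrix(y_true, y_pred))
# Because one anomaly spans several windows, also report event-level recall:
# an anomaly is "caught" if at least one of its windows is predicted as anomalous.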
I tried different ways of doing the MTT test (Cat# 4890-25-K, 2500 tests) in SGHPL-4 cells, but I ran into some trouble.
I incubated 10, 20, 30, 40 and 50 µL of MTT in Ham's F-10 medium (with 10% FBS) for 2, 4, 6, 8, 12 and 24 hours. After that, I added 100 µL of SDS and incubated for 4, 6 and 12 hours, at either 37 °C or room temperature. I also tried incubating the MTT in PBS for 4 hours, but it did not work.
Do you have any experience with this test/cell line?
How can I fix this problem?
Is there another way to perform this test?
The methodological process carried out for the design of the encapsulation models began with the preparation of the S-0.0 system. Initially, the additives (ripio (drill cuttings), bentonite, cement, water) were mixed gradually; once the mixture was homogenized, its consistency was determined and the mixed material was poured into cylindrical molds in order to obtain the monoliths on which the unconfined compression strength test would subsequently be performed, according to the methodology, at 7, 14 and 28 days of curing. Taking the formulation to 28 days of curing, the compressive strength decreased with the passage of days.
In our lab we perform potentiodynamic tests according to ISO 10993-15 in 0.9% NaCl. Our test material is medical-grade 316LVM steel. At the beginning we used a special PTFE holder from Gamry, which sat inside the corrosion cell. Everything was fine until crevice corrosion started to occur between the sample and the PTFE seal. We tried to find the reason but failed, so we decided to change the shape of the sample from a small piece to a long rod with one end outside the corrosion cell, so that there is no site for crevice corrosion. That worked for a long time, but suddenly crevice corrosion occurred again, this time at the liquid/gas interface.
The first idea was IR drop, so we changed the frit in the Luggin capillary, and that helped for a moment (3 samples). But crevice corrosion has been occurring ever since. Usually the first cycle goes alright; there is hysteresis, but the breakdown potential is relatively high (in comparison to previous tests on the same type of material). Subsequent cycles show no hysteresis, as if complete repassivation were happening, and the breakdown potential shifts to even greater values. Interestingly, samples made from 2H17N2 steel do not undergo crevice corrosion.
We bought a new reference electrode (SCE); it hasn't helped. We checked the Luggin capillary; there is no sign that anything is wrong with it. We tried a graphite electrode as the counter electrode, with no result. Naturally, we check the reference electrodes against a master electrode, and we check the potentiostat at the manufacturer's and with a dummy cell. We perform the tests on one type of stainless steel (316LVM), but the samples are usually made of material from different deliveries.
Our test parameters (invariable):
- 0,9% NaCl solution
- 30 min. deoxygenation with 99,999% nitrogen
- reference electrode: SCE
- counter electrode: platinum (80,6 cm2)
- temperature: 37±1°C
- potential sweep: 1 mV/s
- 1 whole cycle = first step + second step
- first step: potential sweep until destination potential (2000 mV) or destination current density (1000 μA/cm2) is reached
- second step: potential sweep back to open circuit potential
Any ideas what can be wrong?
The faint pink colour is very difficult to observe with the human eye when an FFA test is done with used oil. The procedure, which is also tricky, is as follows:
1. Take 28.2 g of oil in one conical flask (CF).
2. Take 50 ml of propanol in another conical flask, add 20 drops of phenolphthalein indicator and neutralize with two drops of 0.1 N NaOH until a faint pink colour is observed.
3. Add the neutralized propanol to the oil and heat to 60-70 °C, but do not boil it; keep it just warm enough that you can still touch the bottom of the conical flask.
4. Now titrate against 0.1 N NaOH, taking the reading when the faint pink colour appears again and persists for up to 30 seconds.
However, when we shake the titration flask, the dotted pink colour disappears; for a long time no colour appears, until the solution turns cloudy, and on further titration the colour appears as a cloudy faint pink.
Some say the endpoint should be taken when the dotted colour persists for up to 15 seconds, but their way of shaking is very slow, and the colour does not match the overall solution's colour, i.e. it remains a dotted pink.
How should the colour change be recorded? What is the exact way to perform this test? Does anyone have a video of this exact procedure? The test has been done many times with repeatedly used oil to follow the changing FFA (based on oleic acid), but unfortunately the experiment keeps failing.
Please point me to a proper video with the same procedure and the observation of the colour and results.
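For reference, the 28.2 g sample mass is chosen so that the arithmetic collapses nicely: with the standard formula %FFA (as oleic acid) = (V_NaOH x N x 28.2) / sample mass (g), a 28.2 g sample titrated with 0.1 N NaOH gives %FFA of roughly 0.1 x V (mL). A tiny Python check under those assumptions:

def ffa_percent_oleic(v_naoh_ml, normality, sample_mass_g):
    # Free fatty acid content expressed as % oleic acid (28.2 = molar mass of oleic acid / 10).
    return (v_naoh_ml * normality * 28.2) / sample_mass_g

# Example: 3.4 mL of 0.1 N NaOH consumed by a 28.2 g oil sample.
print(f"{ffa_percent_oleic(3.4, 0.1, 28.2):.2f} % FFA as oleic acid")   # about 0.34 %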
Can I perform the test directly using the paper sample?
If I need a solvent to perform the test, which solvent would be suitable?
Hello All,
I want to perform silanization reactions at 200 L scale. Currently we perform these tests in 10 L glass reactors dedicated to this project. I wanted to know whether I can scale this up to larger reactors: do I have to use glass-lined reactors, or can I use stainless steel reactors? I work at pH 4 and have used HCl to adjust the pH. What do companies use at large scale for silanization? Also, wouldn't the silane form a coating on the glass if a glass reactor is used?
Hello everyone,
I have 2 sample distributions (approx. 500 values per condition) that I would like to compare using the Kolmogorov-Smirnov test in SPSS.
I have been able to import my table into SPSS, but unfortunately, since I'm not confident with SPSS, I haven't been able to perform the KS test.
Does anyone have any advice on how I could perform this test?
Many thanks
Laura
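If SPSS keeps resisting, the same comparison can also be done in a few lines outside SPSS; as one alternative (not the SPSS procedure itself), SciPy's two-sample Kolmogorov-Smirnov test on the two columns looks like this, with a hypothetical file layout.

import pandas as pd
from scipy import stats

# Hypothetical layout: one column of values per condition, roughly 500 rows each.
df = pd.read_csv("distributions.csv")          # assumed columns: condition_A, condition_B
a = df["condition_A"].dropna()
b = df["condition_B"].dropna()

ks_stat, p_value = stats.ks_2samp(a, b)        # two-sided two-sample KS test
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.4f}")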
I'm new to this type of fuel and need to know what (possibly unfamiliar) aspects I would need to consider for engine performance tests in comparison with conventional fossil fuels.
Thanks in advance!
Please refer to this paper to see the problems for which they have performed the test.
I have some data collected through a questionnaire-based survey. After tabulating the data, I can see that participants from private clinics responded differently to the same question compared with participants from government hospitals.
For example, in a Yes/No/Not sure question, participants from private clinics (n = 24) responded: Yes (63.48%), No (34.13%), Not sure (2.39%), while participants from government hospitals (n = 226) responded: Yes (27.44%), No (67.73%), Not sure (4.83%).
I want to perform a test of significance to see whether the difference in 'Yes' responses between private and government institutions is significant. Can I do it? If so, how?
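Yes: with categorical responses from two independent groups, a chi-square test of independence (or Fisher's exact test when cells are sparse) on the counts is the usual choice. A hedged Python sketch is below; it back-calculates approximate counts from the percentages quoted above, so with the raw data the exact counts should be used instead.

import numpy as np
from scipy import stats

n_private, n_gov = 24, 226
# Approximate counts reconstructed from the reported percentages (Yes, No, Not sure).
private = np.round(np.array([63.48, 34.13, 2.39]) / 100 * n_private).astype(int)
gov = np.round(np.array([27.44, 67.73, 4.83]) / 100 * n_gov).astype(int)

table = np.vstack([private, gov])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# If the focus is only on 'Yes' vs. 'not Yes', a 2x2 table (chi-square or Fisher's exact) also works:
yes_table = np.array([[private[0], n_private - private[0]],
                      [gov[0], n_gov - gov[0]]])
odds_ratio, p_fisher = stats.fisher_exact(yes_table)
print(f"Fisher's exact on Yes vs. not-Yes: p = {p_fisher:.4f}")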
I am planning to perform the tonic immobility test in laying hens. Is it necessary to use a V-shaped cradle, or is it possible to perform the test by placing the hen on a flat surface, e.g. a scale? If the V-shaped cradle is necessary, could someone give me its measurements? Thank you!
In short, I reduced the number of items in my measurement model to 4 or 5 items (from 9 originally). The overall fit indices (RMSEA, CFI, TLI, SRMR and the chi-square p-value) are all in favour. It is, however, the intermediate steps I am concerned about: I removed 1 item at a time, but did not have a test such as the chi-square difference test used for nested model comparison. Since my models differ in terms of observed variables, the models are non-nested.
My model concerns a 1-factor model. What I did so far was simply visual inspection of the aforementioned overall fit indices, residual correlations ( > |0.10| being 'bad'), modification indices and internal reliability (Cronbach's Alpha and composite reliability).
A test I came across is Vuong's test, which is able to compare non-nested models. However, I am not sure of the performance of this test: in my analysis it seems to prefer any model that has fewer items loading on the factor. The same holds for the expected cross-validation index (ECVI); i.e., it always seems to give a lower value when fewer items are modeled. Is this because Vuong's test only works (properly) for equal sample sizes and numbers of parameters?
Vuong's test and the ECVI do seem useful when I compare two non-nested models with equal items.
I have searched quite a lot for relevant literature on non-nested model comparison in CFA, but have been unsuccessful. Any suggestions are welcome. If you know of a research / paper that delved into this topic I'd appreciate your sharing.
Thanks in advance for any help.
I would like to perform detection of apoptosis using Annexin V (which detects translocated phosphatidylserine) or MitoTracker Deep Red (which stains mitochondria in live cells) in bovine luteal cells by flow cytometry. For technical reasons, I need to store the cells for 1-2 days before running them on the cytometer. I would therefore like to ask whether there is a method of cell preservation/fixation and storage for later cytometry assays. Is it better to stain the cells before preservation, or afterwards, just before the cytometer run?
We are testing a new diagnostic tool and comparing it to the actual gold standard for this diagnosis.
Briefly, we examined 25 patients with the new diagnostic tool (test A) and the gold-standard diagnostic tool (test B). Test A gives a positive or a negative result (no variability or range of numbers, just "positive" or "negative" as the outcome). We then performed test B, which also gives a "positive" or "negative" result and which is considered the true result, since it is the gold-standard diagnostic tool.
All patients having a positive result on test A (n=18), had a positive result on test B (n=18).
Of all patients having a negative result on test A (n=7), 5 were negative on test B but 2 were positive on test B.
Overall, 23 of the 25 patients had the same outcome on test A and test B and 2 differed, i.e. 92% overall agreement; taking test B as the reference (100% sensitive), the sensitivity of test A is 18/20 = 90% and its specificity 5/5 = 100%.
Can you recommend me any more statistics on this data, to draw conclusions? Any idea to look at this data from another perspective? Any help or insight is appreciated.
Thank you
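For completeness, the standard descriptive statistics for this 2x2 design (sensitivity, specificity, predictive values, exact confidence intervals, McNemar's test on the discordant pairs) can all be computed from the four counts given above; a Python sketch using NumPy and statsmodels is shown below.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportion_confint

# Rows: test A result, columns: gold standard (test B) result.
#                  B positive  B negative
table = np.array([[18, 0],    # A positive
                  [2, 5]])    # A negative

tp, fp, fn, tn = table[0, 0], table[0, 1], table[1, 0], table[1, 1]
sens = tp / (tp + fn)                     # 18/20 = 0.90
spec = tn / (tn + fp)                     # 5/5  = 1.00
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
sens_ci = proportion_confint(tp, tp + fn, method="beta")   # exact (Clopper-Pearson) interval
print(f"sensitivity = {sens:.2f} (95% CI {sens_ci}), specificity = {spec:.2f}")
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")

# McNemar's exact test asks whether the two tests disagree symmetrically; with only
# 2 discordant pairs out of 25 patients it will have very little power.
print(mcnemar(table, exact=True))

Reporting the exact confidence intervals is arguably more informative than any single p-value at this sample size.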
I have performed a normal distribution test on the dependent variables for my two groups (file attached); please advise whether it is suitable to perform the M-W test on them. I have 7-point Likert scale answers, coded from extremely agree = 1 to extremely disagree = 7, and then performed the test. Please explain.
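As a minimal illustration of the test itself (with hypothetical column names, outside the attached file), the Mann-Whitney U comparison of the two groups' Likert scores in Python is shown below; whether it is appropriate depends more on the ordinal coding and the independence of the two groups than on the normality result.

import pandas as pd
from scipy import stats

df = pd.read_csv("likert_scores.csv")     # assumed columns: group (1/2), score (1-7 Likert coding)
g1 = df.loc[df["group"] == 1, "score"]
g2 = df.loc[df["group"] == 2, "score"]

u_stat, p_value = stats.mannwhitneyu(g1, g2, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")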
Dear all
We are performing a study on FSW of AA6082 alloy, and I need the temperature-dependent flow stress of the material. Since we don't have the facilities to perform the tests in house, I need some references for the flow stress of this alloy at different temperatures.
Dear All,
An experiment was carried out using temperature and humidity as factors, and in addition it was carried out for 6 different species. The total number of species analyzed is 6, with 3 individuals of each, so the total N = 18. I would like to test for normality, but the following problem arises: since I must test within each group, I end up with N = 3 for a given combination of temperature, humidity and species, which is too low for the normality test to have any power.
Trying to solve the problem, I found the following possibility: calculate the mean of each group of 3 individuals (that is, of each species) for a given temperature and humidity. Once the means are calculated, we obtain the subtractions (or deviations) of each individual of a species with respect to the species mean. By calculating these deviations we can pool the N = 18 data points, that is, all species together, so that we increase the sample size and can finally perform the normality test on the pooled deviations.
The problem is that I read this in a book, and I cannot find any more information that supports this technique. What do you think about the reliability of this method? What alternatives could I use?
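The pooling idea described above (subtract each cell's own mean and test the pooled deviations) is essentially a normality check on residuals; a small Python sketch of it, with hypothetical column names, is given below. Bear in mind that each cell mean costs a degree of freedom, so the effective sample behind the pooled deviations is smaller than 18.

import pandas as pd
from scipy import stats

# Hypothetical data: one row per individual with its factor levels and the measured response.
df = pd.read_csv("species_experiment.csv")   # assumed columns: species, temperature, humidity, response

# Deviation of each individual from its own cell (species x temperature x humidity) mean.
cell_mean = df.groupby(["species", "temperature", "humidity"])["response"].transform("mean")
df["deviation"] = df["response"] - cell_mean

# Shapiro-Wilk on the pooled deviations, i.e. on the residuals of the cell-means model.
w_stat, p_value = stats.shapiro(df["deviation"])
print(f"Shapiro-Wilk on pooled deviations: W = {w_stat:.3f}, p = {p_value:.4f}")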
Hello dear colleagues,
Can anyone please recommend a protocol for infection of cells (MDCK and A549, to be precise) with Influenza A (H1N1) virus under "solid medium" (an agarose overlay)?
I would like to perform tests with H1N1-infected cells, but I need the virus medium to be solid.
I believe it is possible to cover the virus-infected cell monolayer with a medium that contains agarose...
Can anyone provide a protocol, or an article that contains an exhaustive protocol?
Thank you all in advance
In the first journal:
Immersing a small scaffold causes only a slight increase in the volume observed in the graduated measuring cylinder (V2). What is the correct way of performing this test?
And also:
I used another method from a second journal, but my results were more than 100%!
For example:
P = (W2 - W1) / (ρ·V1)
where W1 and W2 are the weights of the composite before and after immersion, respectively, V1 is the volume before immersion, and ρ is the density of n-hexane.
W2 = 0.196 g, W1 = 0.020 g, V1 = 0.235 cm3, ρ = 0.654 ...
P = 115.032 !!!
We performed a case series with only 16 cases worldwide (unfortunately, since the studied phenomenon is very rare, these are all the cases available).
Of course the number is very low, but we were interested in seeing whether there were differences between the cases that survived and those that died. Because of the small number, we used either Fisher's exact test (Freeman-Halton extension) or the Mann-Whitney U test to perform the comparisons.
With this small sample size, would it be fair to also apply a post-hoc correction, like Bonferroni or Holm-Bonferroni?
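Mechanically, applying Holm (or Bonferroni) to the set of p-values is straightforward; a small Python sketch with statsmodels and placeholder p-values is below. Whether correcting is sensible with 16 cases is a separate judgment about power versus false-positive control.

from statsmodels.stats.multitest import multipletests

# Placeholder p-values from the Fisher/Freeman-Halton and Mann-Whitney comparisons.
p_values = [0.012, 0.048, 0.210, 0.004, 0.730]

for method in ("bonferroni", "holm"):
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adjusted], reject)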
Dear Sir/Madam,
I performed this test and wondered whether it was right or not, so I would like some suggestions about my method of calculation and about how to set up an ELISA test scientifically. Thank you very much.
Actually, what I found is that all the programs available online to perform the randomness tests (mentioned by NIST) are written in C, and I am working in MATLAB. Do I need to write the MATLAB code separately to perform all 15 tests mentioned in the NIST suite, or is there any pre-designed software to which I can give the input sequence and get all the results?
I did not find any performance test for measuring mobility.
I'd like to give a little background and then pose a couple of (what I think are simple) questions about statistical analysis of data.
I have set up a Warburg respirometer. As such a setup requires, I have two respirometers in total: 1) one contains the reaction, 2) the other is a blank to account for temperature changes.
I have multiple runs with both of them containing blanks to test their relationship under different temperature conditions, and they are mostly statistically the same (taking the standard deviation into account and seeing when they overlap).
However, when I run with one respirometer containing my reaction and the other a blank, I get inconsistent results. My average oxygen uptake matches the expected value in the literature (18 mmol/hr/gDW), BUT the standard deviation is 7.
At what point do I reject the data because of the large standard deviation? Is there an alternative statistical analysis to run, or a statistics package I can plug my data into? Obviously the mean and standard deviation do not tell the whole story.
I have also performed a t-test on my data and generated a histogram; the histogram also supports the ~18 value that I want.
The reason I want to know whether my results are valid, and how to express their validity, is that I want to use the respirometer on unknown compounds, and I need to be able to ensure my results are appropriate based on past results for known substances. If my results aren't reliable, I don't want to perform tests on unknown reactions, because those results will be equally uncertain.
Hopefully my question makes sense. Any statistical insight or resource suggestions are welcome.
I would like to analyse my 16S sequencing data (ANOSIM/PERMANOVA) and to visualize it in an nMDS plot using a Bray-Curtis similarity matrix. Do I need to rarefy my data before performing the test?
Moreover, if I don't do rarefaction, how much would it affect the diversity analysis?
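Leaving the rarefaction decision aside, the computation itself (Bray-Curtis distances plus PERMANOVA) can be run in Python with SciPy and scikit-bio roughly as sketched below; the feature-table layout, file names and grouping column are hypothetical.

import pandas as pd
from scipy.spatial.distance import pdist, squareform
from skbio.stats.distance import DistanceMatrix, permanova

# Hypothetical 16S feature table: rows = samples, columns = ASVs/OTUs (counts or relative abundances).
table = pd.read_csv("feature_table.csv", index_col=0)
metadata = pd.read_csv("metadata.csv", index_col=0)        # assumed column: treatment; sample IDs match the table

bc = squareform(pdist(table.values, metric="braycurtis"))  # Bray-Curtis dissimilarities
dm = DistanceMatrix(bc, ids=table.index.astype(str))

result = permanova(dm, metadata, column="treatment", permutations=999)
print(result)

Running the same analysis on rarefied and non-rarefied (e.g. relative-abundance or CSS-normalized) versions of the table and comparing the outcomes is one direct way to see how much the rarefaction choice matters for your data.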
We are working on a material that has potential in packaging fresh produce. We want to know how we can quantitatively assess the freshness of the produce.
I am currently developing an algorithm that detects changes in the state of a biological signal (on/off). The algorithm works by comparing about 5000 matrices (each sized 250x1) with a control matrix. The matrices are normally distributed. The comparisons are made with a Student's t-test.
Obviously, multiple comparisons dramatically increase the chances of obtaining a false positive. Unfortunately, the Bonferroni correction is very conservative and drastically degrades the algorithm's performance (this was tested by simulating a synthetic signal where the instances of on and off states are known beforehand). To solve the problem, I was thinking of controlling the false discovery rate using the Benjamini-Hochberg procedure.
I would like to know whether my approach is correct or whether there is a better way to tackle the problem. Thank you in advance.
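The Benjamini-Hochberg adjustment over the ~5000 per-window p-values is a one-liner with statsmodels; a hedged sketch with placeholder p-values follows. A permutation-based or cluster-based threshold is another route often used for this kind of change-detection problem.

import numpy as np
from statsmodels.stats.multitest import multipletests

# Placeholder: one p-value per comparison of a 250x1 window against the control matrix.
rng = np.random.default_rng(0)
p_values = rng.uniform(size=5000)               # stand-in for the ~5000 t-test p-values

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} windows flagged as state changes at an FDR of 5%")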
Hi,
What are the differences between these assays, and what is the significance of performing the test on microbial cells (fungal pathogens)?
I have a Hot Disk 2500S and need to measure the thermal conductivity and specific heat capacity of nanofluids whose viscosity is close to that of water. I have tried to measure them, but the results were not reasonable! I wonder what heating power and measurement time I should use to perform that test.
I am doing 24-hour performance testing of a newly formulated imidazoline-based corrosion inhibitor. The testing is carried out using 80% brine / 20% kerosene. After 18-20 hours I can see that the brine becomes cloudy. There is no chance of scaling, as it is only 3.5% NaCl.
Following are the Weibull parameters:
- α = 46509.7171 β =1.5536
- α = 46071.6505 β =1.4878
Which distribution has greater adherence to the data?
Data: Attachment
I am a little puzzled by a basic concept. Please help me understand the following.
I have the design NHR of a GT plant (simple cycle) and the design net power output at ISO conditions and at 50 °C ambient. I also have the full-load performance test data, i.e. the corrected NHR and corrected net power output at ISO conditions.
I would like to correct the performance-test NHR and net power output to 50 °C ambient. Is it possible to calculate this without using correction curves?
How about correcting the design data at 50 °C by the % deviation from design ISO to performance-test ISO conditions? Would that make sense?
Moreover, I would like to know whether the full-load NHR at any ambient temperature between 15 °C and 50 °C will be linear. Can I just do linear interpolation from the two data points I have (i.e. ISO and 50 °C)?
regards
I am working on a test rig; the tests include measurement of some parameters: pressure, temperature and water flow rate. For monitoring the variation of temperature, I was wondering what type of thermocouple I could use. In other words, what test conditions need to be specified to choose a suitable thermocouple: the temperature range, the rate of temperature change, or other factors?
I would like to know what precautions one should take while performing tensile testing in LAMMPS, for example tensile testing of copper. I would like to perform the test at room-temperature conditions (300 K and 1 atm pressure), so how should I go about relaxing the structure and temperature (using fixes) to achieve good results? My box size is 100 x 100 x 100 Angstroms.
I have to use it for testing multiple structural breaks in monthly data.
If I have allometrically scaled (body mass) data from a performance test, is it possible to use the scaling component to track changes longitudinally?
If I scaled data from a baseline measure (fitness test) for a full squad/cohort of players, and want to assess a number of specific players longitudinally (across a season), do I still use the original scaling component from baseline to assess changes?
For example, if a player has a large increase in body mass across a season but maintains performance on the fitness test, does the original scaling component still apply, given that I am assessing the individual against themselves?
Thanks
Josh
I am trying to build a miniature smart-grid environment in the lab and test its parameters such as current, voltage and power.
Could anyone tell me whether there are any software tools that can help me simulate the smart-grid environment and perform tests on its security and reliability?
Dear all,
I am applying the Engle-Granger two-step cointegration procedure to two time series. After the first regression, I have to save the residuals and check whether they are stationary. I cannot use the standard critical values, as they are not valid. My point is: given that I am testing for a unit root in the residuals (with Augmented Dickey-Fuller and Phillips-Perron), and the residuals are known to have mean 0, I don't have to include either an intercept or a trend while performing the test. For n = 2 (where n is the number of variables), MacKinnon (1991) does not provide critical values for the "no constant, no trend" case. Should I use the constant-case critical values, or rather the Engle and Yoo (1987) critical values? Please find attached both the MacKinnon and the Engle and Yoo papers.
Thank you in advance
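As a cross-check on whichever critical values you settle on: statsmodels ships a residual-based Engle-Granger test with appropriate MacKinnon-style p-values (statsmodels.tsa.stattools.coint), and the ADF on the saved residuals can be run without constant or trend via regression='n'. A sketch with placeholder series names:

import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

df = pd.read_csv("series.csv")        # assumed columns: y, x (the two time series)

# Step 1: cointegrating regression; keep the residuals.
step1 = sm.OLS(df["y"], sm.add_constant(df["x"])).fit()
resid = step1.resid

# ADF on the residuals with neither constant nor trend (they are mean zero by construction).
adf_stat, adf_p, *_ = adfuller(resid, regression="n")
print(f"ADF statistic on residuals = {adf_stat:.3f} (its p-value uses standard ADF tables, not cointegration ones)")

# Built-in Engle-Granger test, which applies the cointegration critical values directly.
eg_stat, eg_p, crit = coint(df["y"], df["x"], trend="c")
print(f"Engle-Granger: stat = {eg_stat:.3f}, p = {eg_p:.4f}, critical values = {crit}")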
I estimated a translog cost function (KLEM model) by dropping one equation. Now I want to test coefficients, but since one equation is dropped, how do I perform the test in this situation? For example, testing that the coefficient q_i (i = K, L, E, M) is zero.
I also want to test whether model A is nested in B or not. I use the likelihood values of models A and B. Is there any difference in the likelihood values when one equation is dropped? If yes, how can I do it?
I need to explore public-cloud performance degradation scenarios on a source and a destination machine due to live VM migration. Can anyone suggest the best way to set up the experiment environment?
I read a few research papers that suggest going with a Linux Xen cluster, but I am not very familiar with it. Other research papers use CloudSim.
Which one is best at the initial level for learning purposes: simulation using CloudSim, or creating a cluster using Xen Server?
I couldn't find any published protocols for the rotarod performance test (like the one published in Nature Protocols for the Morris water maze here: http://www.nature.com/nprot/journal/v1/n2/full/nprot.2006.116.html).
I was getting some unexpected p-values when performing statistical tests in Matlab (such as very small values, p = 1e-40), so I decided to check their behaviour on a synthetic dataset.
I performed the tests 100 times, each time generating a new pair of datasets from N(0,1), as follows:
pd1 = makedist('Normal', 'mu', 0, 'sigma', 1);  % standard normal N(0,1); the original 'mu',1,'sigma',0 describes a constant (degenerate) distribution
sample1 = random(pd1,100,1);   % two independent samples of 100 values each
sample2 = random(pd1,100,1);
p = kruskalwallis([sample1, sample2], [], 'off');  % Kruskal-Wallis, columns treated as groups, no figure
p = ranksum(sample1, sample2);                     % Wilcoxon rank-sum (Mann-Whitney U)
p = signrank(sample1, sample2);                    % Wilcoxon signed rank (treats the samples as paired)
[h, pValue] = kstest2(sample1, sample2);           % two-sample Kolmogorov-Smirnov
sample1zero = sample1 - median(sample1); %for dispersion test
sample2zero = sample2 - median(sample2); %for dispersion test
[h, pValue] = ansaribradley(sample1zero, sample2zero);  % Ansari-Bradley test of equal dispersion
[H, pValue, SWstatistic] = swtest(sample1);        % Shapiro-Wilk (File Exchange function)
[H, pValue] = lillietest(sample1, 'MCTol',0.1);    % Lilliefors normality test with Monte Carlo tolerance
Results are shown in the attached graph (image).
I expected the p-values to be high and consistent, since both samples come from N(0,1). However, the p-values of all the tests range from 0 to 1 and the medians are around 0.5.
Are these results correct? Is my methodology faulty or expectations incorrect? What am I missing?

Can it be that most authors and researchers have neglected, and still do not follow, the requirements of the following ISO standards (with year of entry into force): 15193 (2002); 15194 (2002); 15195 (2003); 17511 (2003); 18153 (2003) and especially ISO/PDTS 25680.8, Use of external quality assessment schemes in the assessment of the performance of in vitro diagnostic examination procedures? This European Standard was approved by CEN on 2 March 2004 as EN 14136. Why do most published papers in this area not perform the minimum performance test of taking part in an inter-laboratory trial with real samples, rather than with pure aqueous solutions lacking a possibly interfering matrix (e.g., in bio-sensing: enzyme-poisoning or denaturing reagents, proteases, drug metabolites, etc.)?
With the aim of validating a self-rating physical function item bank for German-speaking populations, we are searching for an instrument measuring physical performance with satisfactory psychometric properties to serve as an external criterion.
Local comparisons between schools, as well as global comparisons (e.g. through the OECD and other international organizations), rely heavily on standardized achievement tests as a key indicator of quality education (or, even when they account for other definitions of quality, they still use achievement/performance test scores alongside these other contextual variables).
However, much research argues that this dominant approach is highly problematic in many ways. So, if this definition of quality (student achievement on standardized tests) were hypothetically changed, how could we define quality and compare the quality of different schools/education systems/countries? What alternative conceptions and measures of "quality" education/learning exist or are supported/being discussed (including sources from "peripheral", non-dominant research, in both OECD and non-OECD countries)?
When researchers do machine learning work, they need to select parameters for their model, such as an SVM. However, when I do cross-validation with a pre-defined set of parameter combinations, some of them produce the same error (misclassification) rate. In this case, how can I pick the 'best' parameters to use in the test stage?
Also, the 'best' parameters picked during cross-validation may not achieve the best performance in the test phase. But I think it should not be allowed to look for the best result by altering the machine learning parameters again in the test phase, right?
Thank you!
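One pragmatic convention for the tie situation described above is to fix the candidate grid in advance, break ties between equally scoring parameter settings with a pre-declared secondary rule (e.g., the simplest or most regularized model), and then touch the test set exactly once; a scikit-learn sketch of that workflow is below.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X_train, y_train)

# Several settings may share the best mean cross-validation score.
scores = search.cv_results_["mean_test_score"]
tied = np.flatnonzero(scores == scores.max())
print("tied parameter settings:", [search.cv_results_["params"][i] for i in tied])

# Break ties with a rule declared before looking at the test set, e.g. the smallest C
# (strongest regularization), then evaluate that single model on the test set exactly once.
best = min((search.cv_results_["params"][i] for i in tied), key=lambda p: p["C"])
final = SVC(**best).fit(X_train, y_train)
print("test accuracy:", final.score(X_test, y_test))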
We currently run this assay in a class II biosafety cabinet, UV the tubes, and use Milli-Q water for washing for this test, but we consistently get high background in our negative controls. We also use filtered, non-aerosol tips and change them each time. We have tried autoclaving a large number of the tubes before use but experienced very high background contamination. Does anyone else perform this test and have any advice to share? We are currently trialling autoclaving a small number of tubes, i.e. 12 at a time.
Now I am planning to do PEMFC performance modeling using MATLAB, with a focus on the catalyst layer. I wonder roughly how long it will take to do the modelling study from scratch.
I already know the basics of MATLAB and the fundamentals of the whole PEMFC unit, and I am taking the book "PEM Fuel Cell Modeling and Simulation Using MATLAB" by Colleen Spiegel as the primary reference.
I now need to make a timeline for my PhD study, but performance modelling is not the focus of my PhD, so I need some systematic information from the performance test. Having an idea of how long it will take is very important for me.
Thanks,
Hello, Colleagues,
There are several assumptions related to multiple regression (MR) tests that must be checked before proceeding to the actual performance of the test. The literature mentions that there are some assumptions that MR is robust to; one of them is "independence of errors" (Osborne & Waters, 2002). What are the others? Please help enlighten me...
Best Regards,
Ruth
Ref:
Osborne, J. W., & Waters, E. (2002). Four assumptions of multiple regression that researchers should always test. Practical Assessment, Research & Evaluation, 8(2), 1 - 5.