Performance Testing - Science topic

Explore the latest questions and answers in Performance Testing, and find Performance Testing experts.
Questions related to Performance Testing
  • asked a question related to Performance Testing
Question
2 answers
I performed a column performance test, which showed an abnormal peak shape. When I opened up my column packed with Daiso silica gel, I noticed that the column bed was extremely jelly-like. I have never seen this before. What could be the reason? What have I done wrong?
Relevant answer
Answer
What was your mobile phase? What was its pH? Without knowing these, it is hard to say why you found gel formation. The column should have a specified pH range to work within. If you were within that range, then finding an answer will be more difficult; at that point, you may wish to talk to the manufacturer of the column to learn more.
  • asked a question related to Performance Testing
Question
2 answers
Hello everyone,
I've got a question regarding within-subject experiments in which two or more variants of a prototype (e.g., a chatbot) are evaluated with respect to different constructs, i.e., classic A/B testing of different design options. For comparability, the same items are used for both versions.
Before the final data analysis, I plan to perform tests for validity and reliability as well as a factor analysis. Does anyone know whether I need to calculate the corresponding criteria (e.g., Cronbach's alpha, factor loadings, KMO values) for both versions separately, or only once, aggregated over the respective constructs? And how would I proceed with the exclusion of items? Especially when there are many control conditions, it can be difficult to decide whether to exclude an item that falls below a certain criterion.
In reviewing papers with a similar experimental design, I could not identify a consistent approach so far.
Thank you very much for your help! I would also appreciate any recommendations for tools or tutorials.
Relevant answer
Answer
Dear Pia, thank you very much for your helpful recommendation!
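On the separate-versus-aggregated question above: computing the reliability criterion per version is cheap enough to simply do both and compare. A minimal numpy sketch of Cronbach's alpha, on invented item scores for two hypothetical prototype versions:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical ratings of versions A and B on the same 3-item construct
version_a = [[5, 4, 5], [3, 3, 4], [4, 4, 4], [2, 3, 2], [5, 5, 4]]
version_b = [[4, 4, 5], [3, 2, 3], [4, 5, 4], [2, 2, 3], [5, 4, 5]]

alpha_a = cronbach_alpha(version_a)
alpha_b = cronbach_alpha(version_b)
```

If the per-version alphas agree, reporting the aggregated value is easier to defend; if they diverge, that is itself informative about the design variants.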
  • asked a question related to Performance Testing
Question
1 answer
I performed photoelectrochemical tests and need to determine the flat band potential of the system. Reading the literature, I observed that Mott-Schottky plots (C^-2 versus V) are the most commonly used approach, but I don't know how to go from the Nyquist plot (Z'' versus Z') to the C^-2 versus V plot. How do I determine the flat band potential with this method, and what measurement type should I use? In short, what steps must I follow?
Thanks
Relevant answer
Answer
An alternative method of determining the flat band potential is based on measuring the net photocurrent as a function of applied potential. The flat band potential is predicted to be at the intercept of the square of the net photocurrent with the potential axis.
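Either way (Mott-Schottky or photocurrent-squared), the flat band potential comes from extrapolating a straight line to the potential axis. A minimal numpy sketch on synthetic Mott-Schottky data; the slope and V_fb values below are made up for illustration:

```python
import numpy as np

# Synthetic Mott-Schottky data: for an n-type semiconductor, 1/C^2 is
# linear in V and extrapolates to ~V_fb on the potential axis
# (strictly V_fb + kT/e, a ~25 mV offset that is often neglected).
V = np.linspace(0.5, 1.5, 21)        # applied potential, V
true_vfb = 0.40                      # assumed flat band potential, V
inv_C2 = 3.0e9 * (V - true_vfb)      # synthetic 1/C^2 values, F^-2

slope, intercept = np.polyfit(V, inv_C2, 1)
v_fb = -intercept / slope            # x-axis intercept of the linear fit
```

In practice the C values come from impedance measurements at fixed frequency over a range of DC potentials (not from a single Nyquist sweep), and only the linear region of the plot should be fitted.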
  • asked a question related to Performance Testing
Question
1 answer
We have developed an algorithm in the bioinformatics field and need computational equipment for a data test. We are looking for a collaboration to perform the test.
Relevant answer
Answer
Hi
As there are various resources that could be allocated to this requirement, please explain more about the amount of computational power required. Then it will be possible to suggest a solution.
Regards
Ahmad
  • asked a question related to Performance Testing
Question
3 answers
I would like to know which is the better approach:
doing catalyst synthesis and characterization, performance testing of the catalysts, and then the kinetic and mechanistic study, OR studying the reaction mechanism via computational tools like DFT before carrying out the catalyst synthesis and performance testing? I am asking in the context of the steam reforming reaction over Ni-based bimetallic catalysts.
Thanks
Relevant answer
Answer
Dear Steve,
You may choose either approach.
However, doing a DFT study at the beginning will save material and may help in predicting the reaction mechanisms.
On the other hand, carrying out the kinetic and mechanistic study of the catalyst, supported by XRD, XPS and DRIFT studies, will help in identifying the actual reaction mechanism taking place in your catalyst system.
  • asked a question related to Performance Testing
Question
7 answers
We have multiple samples of PHA polymers that are prepared differently. We need Tg values with sufficient precision to compare the different samples, which will have relatively close Tg values. I believe precision to at least 1 °C would be sufficient.
I have been recommended to perform DSC testing at a heating rate of 1 °C/min, performing multiple passes and repeating the test with replicate specimens to get the most accurate Tg. As far as I am currently aware, it is not feasible to slow the heating rate further.
Are there any other ways of performing this test to ensure the values are precise enough for comparison, or is performing multiple passes the best way of determining them?
Relevant answer
Answer
You can use both DSC and DMTA.
Best regards!
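To check whether replicate runs already give the ~1 °C precision asked about, a confidence interval on the replicate Tg values is a quick sanity check. The Tg values below are invented for illustration:

```python
import numpy as np
from scipy import stats

# hypothetical Tg values (deg C) from four replicate DSC runs of one sample
tg = np.array([45.2, 45.5, 45.1, 45.4])

mean = tg.mean()
sem = tg.std(ddof=1) / np.sqrt(len(tg))      # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(tg) - 1)  # 95% two-sided critical value
half_width = t_crit * sem                    # CI half-width, deg C

# samples can be distinguished at ~1 deg C only if half_width is well below 1
```

If the half-width is not small enough, more replicates (shrinking the standard error as 1/sqrt(n)) may be cheaper than further slowing the heating rate.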
  • asked a question related to Performance Testing
Question
9 answers
I would like to perform tests on triaxial equipment, so I need a geotechnical lab that accepts foreign students.
Relevant answer
Answer
You may enquire at the Civil Engineering / Earth Sciences departments of the various Indian Institutes of Technology (IITs) and National Institutes of Technology (NITs) via their respective websites. You may also contact private geotechnical labs (e.g. Soiltech, Pune, India).
  • asked a question related to Performance Testing
Question
1 answer
Hi,
I am studying the links between ADHD sub-dimensions and the different types of risky driving.
What are the best ADHD diagnostic methods, given that my goal is to associate between risky driving and sub-impairments/sub-dimensions of ADHD?
I hope to use each of the common diagnostic platforms (questionnaires, computerized performance tests, and interviews).
Relevant answer
Answer
Hi,
Adult ADHD rating scale is available in open access.
  • asked a question related to Performance Testing
Question
2 answers
I am working on performance testing of an S.I. engine with Brown's gas (HHO) generation and its use as a fuel blend.
Relevant answer
Answer
When injecting HHO into a car engine, the amperage reading of the generator also helps to estimate the gas output in liters/min.
  • asked a question related to Performance Testing
Question
3 answers
I'm using the DNA to do RT-qPCR, but the performance of the test group is not the same as normal, and I want to know the exact composition. Thanks in advance!
Relevant answer
Answer
I would also like to know how to use the InhibitEX tablet.
  • asked a question related to Performance Testing
Question
3 answers
I carried out drought trials in rice in 2017 and 2018 with a large-scale germplasm panel of 2030 genotypes under lowland (irrigated) and upland (drought) conditions. Six agronomic traits were considered for further analysis. Kindly let me know how I can perform a test of homogeneity.
Relevant answer
Answer
Dear Ahmed,
You can find several homogeneity tests in the XLSTAT add-in from Addinsoft (in Excel).
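As an alternative to XLSTAT, the standard homogeneity-of-variance tests are one call each in scipy. A sketch on invented trait values; Bartlett assumes normality, while Levene is more robust to departures from it:

```python
from scipy import stats

# hypothetical trait values for one genotype under three environment/year groups
lowland_2017 = [5.1, 5.4, 4.9, 5.6, 5.2]
lowland_2018 = [5.0, 5.3, 5.1, 5.5, 4.8]
upland_2018 = [3.9, 4.2, 3.7, 4.4, 4.0]

bart_stat, bart_p = stats.bartlett(lowland_2017, lowland_2018, upland_2018)
lev_stat, lev_p = stats.levene(lowland_2017, lowland_2018, upland_2018)
# p > 0.05 => no evidence against homogeneity of variances
```

For combining multi-environment trials, homogeneity of error variances across environments is the usual prerequisite before pooling the data in a combined ANOVA.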
  • asked a question related to Performance Testing
Question
3 answers
I have a problem evaluating machine learning algorithms in WEKA!
I did the first six steps correctly, according to the instructions at https://machinelearningmastery.com/binary-classification-tutorial-weka/ but from step 7 onward I get the following:
7. Click on "Run" to open the Run tab and click the "Start" button to run the experiment. The experiment should complete in just a few seconds.
I got a report like this:
09:38:24: Started 09:38:24: Class attribute is not nominal! 09:38:24: Interrupted 09:38:24: There was 1 error
8. Click on "Analyse" to open the Analyse tab. Click the "Experiment" button to load the results from the experiment.
In the "Test output" window, I got this:
Tester: weka.experiment.PairedCorrectedTTester -G 4,5,6 -D 1 ... Analysing: Percent_correct Datasets: 0 Resultsets: 0 Confidence: 0.05 (two tailed) Sorted by: - Date: 3/19/20, 9:53 AM Index 0 out of bounds for length 0.
9. Click the "Perform test" button to perform a pairwise t-test comparing all of the results to the results for ZeroR.
The test did not start.
There are no changes and no results! Where is the problem? What happened?
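The log line "Class attribute is not nominal!" is the root cause: the experiment was interrupted before producing any results, so the Analyse tab has nothing to load. WEKA's Experimenter requires a nominal (categorical) class for classification; the usual fix is WEKA's NumericToNominal filter, or editing the ARFF header by hand. A hypothetical sketch of the header edit (function name and label set are mine, and the labels must match the values actually present in the data):

```python
def make_class_nominal(arff_text, class_attr="class", labels=("0", "1")):
    """Rewrite a numeric class declaration in an ARFF header as nominal.

    Assumes the class attribute is declared as '@attribute <name> numeric'.
    """
    out = []
    for line in arff_text.splitlines():
        low = line.strip().lower()
        if low.startswith("@attribute") and class_attr in low and low.endswith("numeric"):
            line = "@attribute %s {%s}" % (class_attr, ",".join(labels))
        out.append(line)
    return "\n".join(out)
```

After the class attribute is nominal, step 7 should complete and steps 8-9 will have result sets to analyse.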
  • asked a question related to Performance Testing
Question
1 answer
The present research paper aims to discuss one of the problematic issues in measurement in general and mental measurement in particular. A review of the psychological and educational literature shows that researchers who investigated multiple intelligences relied on self-report instruments, whether in preparing measures of multiple intelligences, in measuring them, or in preparing multiple intelligences batteries.
Thus, the question is: can self-report instruments serve as an alternative to maximum performance tests in the measurement of multiple intelligences? Given the importance of mental measurement and measuring tools, and the continuous efforts in this area, the researcher discussed this problem along three axes: the first concerns mental measurement in terms of its definition, objectives, importance, measurement tools and basis of classification; the second tackles self-report in terms of its definition, problems related to self-report instruments, and the extent to which self-report instruments can be used as an alternative to maximum performance tests in measuring intelligence and mental abilities.
The present research aimed to examine the validity of self-report instruments in measuring multiple intelligences, using Mckenzie's inventory as a model, by exploring the factorial structure of the multiple intelligences inventory; establishing its concurrent validity through the significance of the correlation between students' self-reported scores on Mckenzie's Multiple Intelligences Inventory and their performance on maximum performance tests; examining whether academic achievement can be predicted from multiple intelligences; and, finally, examining the discriminant validity of the inventory by exploring whether multiple intelligences discriminate between high and low achievers in Math and Arabic. The researcher translated Mckenzie's Multiple Intelligences Inventory into Arabic and relied on the National Battery of Cognitive and Social Emotional Intelligence.
Findings revealed that the factorial structure of the multiple intelligences scale fit the current research data. There was no statistically significant correlation between students' scores on Mckenzie's Multiple Intelligences Inventory and their language ability test, and none between self-reported social intelligence and social emotional situational intelligence tests, while there were weak statistically significant correlations between students' scores on mathematical intelligence and numerical ability tests, and between spatial intelligence and spatial ability tests. In addition, the findings showed that achievement in Arabic can be predicted only through the natural and musical intelligences, and only natural intelligence predicts mathematics; multiple intelligences do not discriminate between high and low achievers in mathematics, nor between high and low achievers in Arabic, except for the natural and bodily-kinesthetic intelligences.
Key words:
multiple intelligences, self-report, typical performance, maximum performance, academic achievement.
Relevant answer
Answer
I recommend an article, "Multiple intelligences and minds for the future in a child's education", available on my ResearchGate profile. I hope it will be interesting for you. Zbigniew
  • asked a question related to Performance Testing
Question
5 answers
After machining an alloy using WEDM or other non-traditional machining processes, what kinds of analysis can be performed for testing surface and sub-surface conditions?
Relevant answer
Answer
Dear Vinajak,
it all depends on how much your material is bent. If the curvature is so pronounced that the specimen cannot be considered flat on your sample holder, I'm afraid XRD is not suitable.
If the curvature is negligible, XRD is perfect for surface phase analysis and, if your diffractometer supports it, surface residual stress measurement can be performed as well.
If you can cut out small pieces of material and keep a single piece always centred in the X-ray beam, XRD phase analysis is possible. In this case, however, residual stresses cannot be measured, since cutting a stressed material releases them.
  • asked a question related to Performance Testing
Question
4 answers
Hi,
I am new to EEG signal processing. I am now working on the DEAP dataset to classify EEG signals into different emotion categories.
My inputs are EEG samples of shape channels x timesteps. The provider of the dataset has already removed artifacts.
I use convolution layers to extract features and a fully connected layer to classify. I also use dropout.
I shuffle all trials (sessions) and split the dataset into training and test sets. I get reasonable accuracy on the training set.
However, the model is unable to generalize across trials.
It overfits. Its performance on the test set is just as bad as a random guess (around 50% accuracy for low/high valence classification).
Is there any good practice for alleviating overfitting in this scenario?
Another thing that bothers me is that when I search the related literature, I find many papers also reporting around 50% accuracy.
Why are results for EEG emotion classification so bad?
I feel quite helpless now. Thank you for any suggestions and replies!
Relevant answer
Answer
Hi, Ge.
I have applied the DEAP dataset to EEG emotion classification before. The situation you mention also occurred in my research. From my perspective, here are some suggestions:
Firstly, compared with the classification model you used, I think the input features are more significant. You could pay more attention to feature extraction (such as PSD in the frequency domain, HOC in the time domain, or the discrete wavelet transform in the time-frequency domain), selection and fusion, including channel selection for the different emotion categories.
Secondly, regarding the overfitting issue, I think it is somewhat unsuitable to use a DNN model on the DEAP dataset unless you can increase the amount of data. You could give data segmentation (cutting the epochs) a shot.
Finally, regarding the accuracy problem, I suggest you double-check which emotion estimation method was used in each specific paper. There are two approaches to EEG emotion classification: one is based on the valence-arousal plane, borrowed from speech emotion recognition; the other targets specific emotions (such as angry, happy, sad, etc.). So be careful about the baseline you compare against.
Regards
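On the feature extraction point: band power features are a common starting point for DEAP-style pipelines. A minimal numpy-only sketch (real pipelines typically use Welch's method per channel; the 10 Hz test signal here is synthetic):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x in the [f_lo, f_hi) band via the FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return spectrum[mask].sum()

fs = 128                              # DEAP's downsampled rate, Hz
t = np.arange(0, 10, 1.0 / fs)        # a 10 s segment
x = np.sin(2 * np.pi * 10 * t)        # synthetic 10 Hz "alpha" oscillation

theta = band_power(x, fs, 4, 8)
alpha = band_power(x, fs, 8, 13)
beta = band_power(x, fs, 13, 30)
```

Computing such band powers per channel and per epoch yields a compact feature matrix that small classifiers handle far better than raw channels x timesteps input.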
  • asked a question related to Performance Testing
Question
1 answer
Recent research has devoted a lot of attention to black-box performance optimization to realize global optimization with respect to a utility function. The advantage of black-box techniques is that they do not require an upfront performance model of the application, which is tedious to build and often inaccurate. The disadvantage is that black-box techniques may require a lot of live performance test samples in multi-dimensional parameter spaces.
Some recent work tries to tackle this disadvantage by means of model-based reinforcement learning, so the samples can be retrieved directly from the model instead of a live test.
Why are such trained models even better than queue-based performance models?
Relevant answer
Answer
The purpose of reinforcement learning models is to infer insights (quantitative and qualitative) from an empirical collection of data in order to maximize some criterion of cumulative reward. These types of models do not use a causal explanation of the relationships between the variables (factors, features). In essence, this is step-by-step empirical optimization in a reward loop.
In contrast, queueing-based models (or better, discrete-event or system-dynamics simulation models) under the umbrella of operations research methodology offer the capability of testing multiple scenarios of behavior of complex systems based on mathematical/physical models of those systems. These types of models are based on an understanding of the system being modeled, i.e. they include causal relationships and connections between the elements of the system.
The typical situation is that a validated simulation model first provides 'what-if' scenario outputs of the system performance. The optimization module then takes the output of the simulation model as the objective function (OF) and, given a set of constraints, finds the set of model parameters (variables) that maximizes or minimizes the objective function.
To summarize: reinforcement learning models offer empirical data optimization without an understanding of the causal relationships between the model elements.
In contrast, operations research methodology (queueing, discrete-event or system-dynamics simulation models) includes causal relationships between the model's elements, i.e. an understanding of the basis of the model. This is a big advantage over purely empirical learning.
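As a concrete illustration of the causal structure a queueing model encodes (and which an RL agent must rediscover from samples), the M/M/1 closed forms give performance predictions for free; a minimal sketch:

```python
def mm1_utilization(arrival_rate, service_rate):
    """Server utilization rho = lambda / mu of an M/M/1 queue."""
    return arrival_rate / service_rate

def mm1_mean_response_time(arrival_rate, service_rate):
    """Mean response time W = 1 / (mu - lambda) of an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

# the model predicts, without any live samples, how latency explodes
# as the arrival rate approaches the service rate
w_light = mm1_mean_response_time(5.0, 10.0)   # rho = 0.5
w_heavy = mm1_mean_response_time(9.0, 10.0)   # rho = 0.9
```

The formulas embody the causal "why" (latency diverges as rho approaches 1); a black-box or RL model can only interpolate that behavior from the samples it has seen.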
  • asked a question related to Performance Testing
Question
3 answers
1. I have decided to use the TLC technique to identify the types of polychlorinated biphenyls, and in particular the 12 most toxic congeners, in the soil samples collected. If any of you have already performed TLC tests to identify PCBs, could you send me the protocol(s) used?
Relevant answer
Answer
@ Karl, please have a look of the attached file.
  • asked a question related to Performance Testing
Question
5 answers
For my experiment I am going to do immunofluorescence for two junctional proteins in post-treatment endothelial cells. My test sera were obtained from DENV-infected patients. The quantity of test serum in my possession varies from 250 ul to 500 ul, and I have to perform the test in duplicate. I am going to add serum in a 1:3 proportion to culture media. If I conduct the experiment on a coverslip placed in a 6-well culture plate, 2.5-3 ml of media will be required per well, which exceeds my stored serum quantity at the appropriate proportion. What can I do here? Can I change to a lesser amount of media after the cells have grown to confluence, before adding the test serum?
Relevant answer
Answer
Paroma Deb Just do it in a sterile hood, I guess, use everything sterile. You can sterilize pretty much anything with UV.
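On the volume arithmetic in the question above: scaling down the well format is usually cleaner than under-filling a 6-well plate. A quick sketch; the working media volumes per well are assumptions (typical values, check your own plates), and the 1:3 serum:media ratio is taken from the question:

```python
# assumed typical working media volumes per well, in ml
WELL_MEDIA_ML = {"6-well": 2.5, "12-well": 1.0, "24-well": 0.5}

def serum_needed_ml(plate, replicates=2, ratio=3.0):
    """Serum volume (ml) for `replicates` wells at serum:media = 1:ratio."""
    return WELL_MEDIA_ML[plate] / ratio * replicates

need_6well = serum_needed_ml("6-well")    # exceeds a 0.5 ml serum stock
need_24well = serum_needed_ml("24-well")  # fits within a 0.5 ml stock
```

With coverslips in a 24-well plate the duplicate fits even the smallest (250 ul) sera only marginally, so reducing the media volume per well after confluence, as the asker suggests, may still be needed for those samples.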
  • asked a question related to Performance Testing
Question
4 answers
Dear specialists/ experts
I am very new to the ARDL test, and these days I am trying to analyse my data, which have mixed orders of integration. I have panel data, and all my variables are natural-log transformed. My dependent variable and six independent variables are stationary at level, and two IVs are stationary at I(1). Because of the mixed orders of integration, I thought the ARDL test would fit my data. EViews does not support my data panel and reports "near singular matrix" when I try to perform the test, so I thought I would go with Stata. I have a few basic questions.
1. Since my data are natural-log transformed, there are negative values in my data series. Is it a problem to carry out ARDL with negative values?
2. I have a time-invariant proxy (distance); is it a problem for my analysis?
3. Are there any limitations on the number of variables that can be used in an ARDL test?
I am stuck with panel-data ARDL in Stata and hope for your assistance and guidance with the appropriate Stata commands.
Relevant answer
Answer
1. Negative values in an already log-transformed series are not an issue; the problem arises when you take the log of non-positive values, which is undefined, so those observations become missing values.
2. A time-invariant variable creates a problem because ARDL estimation takes lags, and a time-invariant variable has no lag variation.
3. It depends on the tests involved.
  • asked a question related to Performance Testing
Question
7 answers
In my study, I have 2 groups (intervention and control). I performed tests twice (pre and post scores). I want to compare the significance of the changes between the two time points across the two groups.
Relevant answer
Answer
if you are sure the data is approx. normal, you are good to go. I would suggest that you assign your participants to the intervention group and the control group at random. Best wishes, David Booth
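One common way to operationalize this comparison is to compute each participant's pre-to-post change and compare the change scores between groups (a mixed ANOVA's group x time interaction tests the same thing). A sketch with invented scores:

```python
import numpy as np
from scipy import stats

# hypothetical pre/post scores for intervention (I) and control (C) groups
pre_i = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 11.0])
post_i = np.array([14.0, 15.0, 16.0, 16.0, 15.0, 14.0])
pre_c = np.array([11.0, 12.0, 10.0, 13.0, 11.0, 12.0])
post_c = np.array([11.5, 12.0, 10.5, 13.0, 12.0, 12.5])

# compare the pre-to-post change between groups (Welch's t-test)
change_i = post_i - pre_i
change_c = post_c - pre_c
t_stat, p_val = stats.ttest_ind(change_i, change_c, equal_var=False)
```

If the data are not approximately normal, the same change scores can go into a Mann-Whitney U test instead.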
  • asked a question related to Performance Testing
Question
2 answers
I am interested in understanding the effect of thermal treatments on protein denaturation in a food simulant.
I have already performed tests on aqueous solutions of different kinds of proteins (BSA, whey proteins), but I would like to "complicate" the system a little in order to assess whether other compounds (sugars, salts, etc.) could influence the induction of structural modification by temperature.
Relevant answer
Answer
Dear Manuele,
thanks for your kind reply. I know several methods for evaluating structural modification of proteins; the only thing I am concerned about is how to prepare a solution that can act as a "food simulant".
  • asked a question related to Performance Testing
Question
3 answers
I have several samples in which glycine is believed to be incorporated. I need to know the exact quantitative amount of glycine in each sample.
What would be the most suitable way to perform the test?
The Kaiser test (ninhydrin test) comes to mind first, to detect the primary amine.
What could be the alternatives?
Relevant answer
Answer
I think you can quantify glycine, with or without derivatization, by HPLC.
  • asked a question related to Performance Testing
Question
5 answers
Hi, I am a bit unsure about how the data should be preprocessed prior to performing a paired statistical test. Let's say I have two groups, A and B, where I measure something every day for twenty days. I would then test for normal distribution, and then use either a paired t-test or a paired Wilcoxon test with dependency based on sampling day. Here is where I am unsure. Let's say each group has four replicates. Should I:
1) take the mean of the replicates, so that each group (A or B) is represented by only one value per day, or
2) perform the test with each group represented by four values?
Option 2 gives me more degrees of freedom, and thus usually a more statistically significant value.
Madeleine
Relevant answer
Answer
You may also try repeated-measures ANOVA or a survival test (for toxicity or germination tests).
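On the original option 1 vs option 2 question: replicates taken on the same day are pseudo-replicates, not independent observations, so pooling them (option 2) inflates the degrees of freedom and the apparent significance. Averaging per day and pairing by day is the defensible default; a sketch with synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
days, reps = 20, 4

# hypothetical daily measurements: group A sits ~1 unit above group B
a = 5.0 + rng.normal(0, 0.5, size=(days, reps))
b = 4.0 + rng.normal(0, 0.5, size=(days, reps))

# Option 1: average the four replicates first, then pair by sampling day.
t_stat, p_val = stats.ttest_rel(a.mean(axis=1), b.mean(axis=1))
```

A repeated-measures or mixed model can use the replicate-level data properly, but a paired test on pooled replicates cannot.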
  • asked a question related to Performance Testing
Question
3 answers
I am trying to differentiate the U937 cell line in order to perform tests on antigen presentation (ovalbumin + SIINFEKL). I think I am having trouble with the differentiation, because I see a low number of cells actually presenting the SIINFEKL peptide, and the shape of the cells does not resemble that of macrophages.
For differentiation I have followed 2 different protocols:
- 100 ug/ml of PMA for 48 h,
- 20 nM of PMA + 25 mM of VitD3 for 48 h, followed by 6 days of resting with only VitD3.
Is there something I am doing wrong?
Relevant answer
Answer
Hello,
As Siva said, you need mouse cells for SIINFEKL. Maybe the RAW cell line should work.
Regards
  • asked a question related to Performance Testing
Question
9 answers
What is the minimum volume of water required for performing these tests?
• chloramine residual
• phosphate concentration
• pH
• temperature
• microbial biomass
• ammonia, nitrite and nitrate
• TOC
• LC-OCD and FEEM
An answer with a reference will be much appreciated.
Relevant answer
Answer
Mahmoud, this is our personal experience.
  • asked a question related to Performance Testing
Question
1 answer
I performed the test according to Wolf et al., 2016.
Relevant answer
Answer
They are probably biased for some reason; it could be anything from animal scent to cues in the test room. Try increasing the time with the closed novel arm.
Good luck
  • asked a question related to Performance Testing
Question
4 answers
Is a scenario possible wherein the drug exposure achieved in clinical studies exceeds the exposure achieved in nonclinical toxicity studies of appropriate duration?
If so, is the dose selection flawed in that particular clinical study?
And to address that kind of situation, should more nonclinical toxicity studies be performed that test a higher exposure than what is expected in humans?
Relevant answer
Answer
It is possible, but highly unlikely based on current regulatory guidelines. For a given dose, humans can have better bioavailability and exposure than the animal model used preclinically. However, the preclinical models are dosed at several multiples until toxicity or the maximum achievable dose is reached, and human Phase 1 trials start at a fraction of the NOEL reported preclinically. At these starting doses, exposures are compared to the animal data, so guidance is available for the escalating-dose studies. It is therefore unlikely that human clinical trials would achieve exposures in excess of any reported in preclinical studies. If the preclinical studies are not designed correctly, it is possible that clinical doses could produce unexpected exposures, but in that case the company has wasted much time and money and should rethink its position as a drug company.
  • asked a question related to Performance Testing
Question
3 answers
Dear all,
My name is Carlos Freitas and I am a chemical engineering student. I am performing tests of furfural HDO and am having difficulty finding chromatograms and retention times to identify the products of my reactions.
Thank you in advance.
Scientific greetings,
Carlos
Relevant answer
Answer
Although Venkata is right, you will easily run into difficulties getting all the possible pure compounds. On top of that, developing a GC method that completely separates all theoretically possible reaction products is not that easy.
You will gain a lot of time if you start using GC-MS for this project. Most GC-MS implementations use electron impact as the ionization source, which yields a "fingerprint" mass spectrum that is matched against a library; this gives you an estimate of the probability that a certain peak is the proposed compound. Extra confirmation is still needed, but the number of reference standards required is smaller.
  • asked a question related to Performance Testing
Question
5 answers
When I was in undergraduate classes, I was advised to perform tests of normality and equality of variance prior to performing any test for comparing means (ANOVA) in the parametric case. However, I recently read in a paper that normality should be checked on the residuals, not on the measured variables.
I wish to know what is advisable if I want to publish my results in a paper. Statistically speaking, is it fair to perform the normality test on residuals instead of measured variables?
Thank you very much.
Evans, E.
Relevant answer
Answer
Hello Evans. The trouble with testing for normality of the residuals is that normality is most important when the sample size is small, and becomes less and less important as the sample size increases (see slide 9 of the file linked below, for example). Your test of normality, on the other hand, has very low power when n is small and too much power as n becomes large. So tests of normality are just not very helpful. And notice, by the way, that normality of the errors is a sufficient, but not a necessary condition. It is the sampling distributions of the regression parameters that must be (approximately) normal for valid inference in OLS models. HTH.
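For completeness, the residuals-not-raw-data point from the question can be made concrete: fit the model first, then inspect its residuals. A sketch with synthetic data (test choice and numbers are illustrative, subject to the power caveats above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40
x = np.linspace(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, size=n)  # synthetic linear data

# fit y = b0 + b1*x by ordinary least squares and take the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Shapiro-Wilk on the residuals (not on y itself)
w_stat, p_val = stats.shapiro(residuals)
```

Note that y itself need not look normal at all (it trends with x); it is the residuals whose distribution the normality assumption concerns, and a Q-Q plot of them is often more informative than the test's p-value.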
  • asked a question related to Performance Testing
Question
7 answers
I treated cancer cells to induce calreticulin (CRT) exposure on the cell surface. Previously I could observe 20% ecto-CRT(+); then it dropped to 5%, and recently to nothing. I checked for cell contamination and used a new batch of cells, a new batch of drug, and new vials of antibody, but I can't think of any other reason why I couldn't replicate the result. To clarify, the flow cytometer was validated and passed its performance test every time.
Here is my protocol. 1 million cells were cultured in a 6 cm plate and subjected to the drug treatment. The cells were harvested with Accutase buffer. After washing twice with ice-cold PBS, the cells were blocked with 1% BSA-PBS, then incubated with monoclonal mouse anti-CRT antibody (1 ug/100 ul) on ice for 1 h. Wash once. Incubated with goat anti-mouse IgG (Alexa Fluor 488) on ice in the dark for 1 h. Wash once. The samples were diluted in PBS and analyzed on the flow cytometer. I'm using the scatter plot to gate the live cells, since the drug mitoxantrone has fluorescence at ~600 nm, which overlaps the PI viability dye.
Relevant answer
Answer
Another thought is that you should always include viability dye. There are many available, which can overcome your overlap with PI. Depending on the lasers available on your machine, you could use DAPI or 7-AAD. These work the same as PI. Another option would be the fixable live/dead amine-reactive dyes. These are available in a variety of colors from eBioscience, BioLegend and Invitrogen. Good luck.
I also agree with the updated gating above.
  • asked a question related to Performance Testing
Question
7 answers
For modeling, I am using about 20 different variables (in the form of GIS layers). Before proceeding with the modeling, I want to perform a collinearity test. Does anyone know how I can perform this test? There are procedures available for standard statistical data, but how can we implement this with GIS layers?
Relevant answer
Answer
Dear Naeem,
there is no need for sampling or transforming your raster data into vector format, or using expensive ArcGIS software.
Sampling is not good because you want to test correlation of entire rasters, rather than sparse pixels.
Instead, I recommend you to use open-source QGIS 2.18 (https://www.qgis.org/en/site/) with plugin called MOLUSCE (https://plugins.qgis.org/plugins/molusce/).
You should load entire rasters into QGIS, and load them into MOLUSCE as spatial variables. The next step is Evaluating correlation, where you can calculate Pearson's or Cramer's correlation coefficient or Joint information uncertainty. This document may help you with detailed instructions (https://github.com/nextgis/molusce/blob/master/doc/en/QuickHelp.pdf).
If some of the layers with spatial variables have high correlation (over 0.7), you should exclude one of them from your final model.
The other software is GRASS GIS (https://grass.osgeo.org/).
It has function r.covar that calculates correlation matrix between input rasters (https://grass.osgeo.org/grass70/manuals/r.covar.html). The computation time is much faster than in QGIS, but be aware that it is not user-friendly software.
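If the layers are already loaded as arrays (e.g. via rasterio or GDAL, not shown here), the correlation step itself is a one-liner in numpy. A sketch, using the same |r| > 0.7 exclusion threshold suggested above; it assumes equally shaped layers with no NoData cells (mask those first in practice):

```python
import numpy as np

def correlation_matrix(rasters):
    """Pearson correlation matrix between equally shaped raster layers."""
    flat = np.array([np.asarray(r).ravel() for r in rasters])  # one row per layer
    return np.corrcoef(flat)

def collinear_pairs(rasters, threshold=0.7):
    """Indices (i, j) of layer pairs whose |r| exceeds the threshold."""
    corr = correlation_matrix(rasters)
    n = corr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) > threshold]

# tiny synthetic example: layer 1 is a linear function of layer 0
r0 = np.arange(16.0).reshape(4, 4)
r1 = 2.0 * r0 + 3.0
r2 = (np.arange(16) % 2).reshape(4, 4).astype(float)
flagged = collinear_pairs([r0, r1, r2])
```

For each flagged pair, drop the layer that is less interpretable or more redundant with the rest, as the answer above suggests.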
  • asked a question related to Performance Testing
Question
4 answers
Hi.
I'm a student studying Li-S batteries.
I'm trying to use a 2032 coin cell.
What diameter should I consider when punching the cathode?
I wonder if it should be smaller or larger than the Li metal anode.
Also, when making the cathode, the thickness is adjusted.
Is there any difference in the performance test depending on thickness? If so, what effect does thickness have?
Help me.
Thank you
Relevant answer
Answer
You have asked two different questions. Firstly:
What diameter should I consider when punching the cathode?
Should it be smaller or larger than the Li metal anode?
Ans: As you are going to use a 2032 coin cell, your anode is definitely not going to be larger than 18 mm (as a safety margin, otherwise it may touch the cell walls), and Li metal disks for the anode customarily come in sizes of 12 mm. So the cathode should always be equal in diameter to the anode or smaller, to have better anode-to-cathode contact; the recommended cathode diameter is 8 mm.
Secondly:
Is there any difference in the performance test depending on thickness? If so, what effect does thickness have?
Ans: In my experience, the thickness of the coating does affect the performance of the cell, and that is why we do optimization. But remember, a UNIFORM and SMOOTH coating is always desirable for a good cathode. If the material has low conductivity, a thicker coating can make electronic and ionic transfer sluggish and can also affect the wetting of the cathode by the electrolyte.
Hope this is useful for the preparation of your Li-S batteries.
Best of luck.
Cheers
  • asked a question related to Performance Testing
Question
4 answers
I need a common method to perform this test and different detailed information about the test parameters.
Relevant answer
Answer
There are several models for creep and stress recovery of viscoelastic materials, such as the Maxwell and Burgers models.
  • asked a question related to Performance Testing
Question
3 answers
I'm currently using features built from statistics over a certain window: these take, for example, 10 data points and reduce them to one using PCC, KL divergence, or a simple average (see link). The predictions are also made over a sliding window, meaning one anomaly will be present in multiple windows.
If you have two classes, 'normal' and 'anomaly', how do I best score performance on the test set?
Relevant answer
Answer
I'm not familiar with R, since I work with Python (Keras, TensorFlow, ...).
I can find some functions for calculating ROC curves, but that's not the problem.
I can't seem to find in any literature or documentation how performance is measured when one anomaly is present in multiple windows of a sliding-window algorithm.
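One hedged way to handle the "one anomaly, many windows" issue is event-based scoring: count a true anomaly as detected if at least one window overlapping it is flagged, and count flagged windows overlapping no anomaly as false positives. A minimal Python sketch of that idea follows; the function names and the "any overlapping window counts as a hit" criterion are illustrative assumptions, not from any standard.

```python
def overlaps(a, b):
    """True if the index ranges a = (start, end) and b = (start, end) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def event_recall(event_spans, flagged_windows):
    """Fraction of labelled anomaly events hit by at least one flagged window."""
    hits = sum(any(overlaps(e, w) for w in flagged_windows) for e in event_spans)
    return hits / len(event_spans)

def window_false_positives(event_spans, flagged_windows):
    """Number of flagged windows that overlap no true anomaly event."""
    return sum(not any(overlaps(w, e) for e in event_spans) for w in flagged_windows)

events = [(100, 110), (300, 305)]            # true anomaly index spans
flags = [(95, 105), (98, 108), (500, 510)]   # two windows hit event 1, one is spurious
print(event_recall(events, flags))           # 0.5: event 2 was never flagged
print(window_false_positives(events, flags)) # 1
```

With this convention, multiple flagged windows on the same anomaly are counted once, which avoids inflating the score just because the window slides.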
  • asked a question related to Performance Testing
Question
1 answer
I have tried different ways to perform the MTT test (Cat# 4890-25-K, 2500 tests) in SGHPL-4 cells, but I ran into problems.
I incubated 10, 20, 30, 40 and 50 uL of MTT in Ham's F10 medium (with 10% FBS) for 2, 4, 6, 8, 12 and 24 hours. After that, I added 100 uL of SDS and incubated for 4, 6, and 12 hours, at either 37 °C or room temperature. I also tried incubating the MTT in PBS for 4 hours, but it did not work.
Did you have any experience with this test/cell?
How can I fix this problem?
Is there another way to perform this test?
Relevant answer
Answer
No experience with this cell line, but many others. Cells are grown in a 96-well plate in 0.1 mL medium/well; include a 'no cell' control.
1. Add 20 uL/well of 5 mg/mL MTT reagent [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] in PBS (stored in aliquots at -20 °C) and incubate 1-4 hrs.
2. Microscopic examination should show dye accumulated in the cells.
3. Aspirate off the medium, add 100 uL/well DMSO, mix ~15 min on an orbital shaker, and read the plate at 570 nm.
Getting the cell density right is usually the biggest challenge. Since the medium plus reagent is removed prior to solubilization in DMSO, it's prudent to examine the cells microscopically beforehand to determine whether cell loss will occur with medium aspiration. If the cells are not well attached to the well, then I'd use the MTS [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium] protocol. Good luck!
  • asked a question related to Performance Testing
Question
4 answers
The methodological process for the design of the encapsulation models began with the preparation of the S-0.0 system. The additives (ripio, bentonite, cement, water) were first mixed gradually; once the mixture was homogenized, its consistency was determined and the mixed material was poured into cylindrical molds to obtain monoliths. These were then subjected to unconfined compression testing, according to the methodology, at 7, 14 and 28 days of curing. Taking the formulation to 28 days of curing, the compressive strength decreased with the passage of days.
Relevant answer
Answer
If this is a bentonite-cement slurry then you will have considerable shrinkage and drying if testing in anything other than the fully saturated state. Similarly, unconfined compression will lead to false readings. You need to consider triaxial testing of your mixes.
  • asked a question related to Performance Testing
Question
3 answers
In our lab we perform potentiodynamic tests according to ISO 10993-15 in 0,9% NaCl. Our test material is medical steel 316LVM. At the beginning, we used a special PTFE holder from GAMRY, which was inside the corrosion cell. Everything was fine until crevice corrosion started to occur between the sample and the PTFE seal. We tried to find the reason but failed, and decided to change the shape of the sample: from a small piece to a long rod, one end of which is outside the corrosion cell, so there is no place for crevice corrosion. It worked for a long time, but suddenly crevice corrosion occurred again, this time at the liquid/gas interface.
Our first idea was IR drop, so we changed the frit in the Luggin capillary, and it helped for a moment (3 samples). But crevice corrosion has been occurring ever since. Usually, the first cycle goes alright: there is hysteresis, but the breakdown potential is relatively high (in comparison to previous tests on the same type of material). Subsequent cycles show no hysteresis, as if complete repassivation is happening, and the breakdown potential is shifted to even greater values. What's interesting, samples made from steel 2H17N2 don't undergo crevice corrosion.
We've bought a new reference electrode (SCE); it hasn't helped. We've checked the Luggin capillary; there are no signs that anything is wrong with it. We've tried a graphite electrode as counter electrode, with no results. Naturally, we check the reference electrodes against a master electrode, and we check the potentiostat at the manufacturer's and with a dummy cell. We perform tests on one type of stainless steel (316LVM), but samples are usually made of material from different deliveries.
Our test parameters (invariable):
- 0,9% NaCl solution
- 30 min. deoxygenation with 99,999% nitrogen
- reference electrode: SCE
- counter electrode: platinum (80,6 cm2)
- temperature: 37±1°C
- potential sweep: 1 mV/s
- 1 whole cycle = first step + second step
- first step: potential sweep until destination potential (2000 mV) or destination current density (1000 μA/cm2) is reached
- second step: potential sweep back to open circuit potential
Any ideas what can be wrong?
Relevant answer
Answer
Hi,
crevice corrosion can occur any time you have crevices ;-) Some materials are more prone to it than others. Sometimes initiation at the edges leads to crevice corrosion once pitting is stabilized. So if you see pits on the whole surface and also crevice corrosion, your experiment can still be OK. Surface preparation is also crucial. Do you sand your specimens before your experiments (what grit size?) or passivate them? If so, how and for how long?
I personally don't like Teflon holders. Have you tried the Gamry PortHole masks or any other adhesive films to separate your working electrode? Maybe this works well at 37 °C too; we use this technique successfully at room temperature. Another possibility is the use of an Avesta cell. In this cell, the transition between the cell body and the working electrode is flushed slowly with chloride-free solution, preventing crevice corrosion. This is of course more complex, and not every sample is suitable for it.
I would try adhesive films first :). Be sure to press the edges firmly so that the adhesive comes out a little bit.
Best regards,
Andreas
  • asked a question related to Performance Testing
Question
3 answers
The faint pink colour is very difficult to observe with the human eye when an FFA test is done on used oil. The procedure, which is also tricky, is as follows:
1. Take 28.2 g of oil in one conical flask (CF).
2. Take 50 mL of propanol in another conical flask, add 20 drops of phenolphthalein indicator, and neutralize with two drops of 0.1 N NaOH until a faint pink colour is observed.
3. Add the neutralized propanol to the oil and heat to 60-70 °C; do not boil it, just keep it warm enough that you can still touch the bottom of the conical flask.
4. Now titrate against 0.1 N NaOH, taking the reading when the faint pink colour reappears, and the colour should persist for up to 30 seconds.
Now, when we shake the titration flask, the dotted pink colour disappears; for a long time no colour appears, until the solution turns cloudy, and on further titration the colour appears as a cloudy faint pink.
Some say the colour should be noted when the dotted colour persists for up to 15 seconds, but their way of shaking is very slow, and the dotted pink colour does not match the overall colour of the solution.
How should the colour change be noted? What is the exact way to perform this test? Does anyone have a video of this procedure? The test has been done many times with multiply-used oil to track the changing FFA (based on oleic acid), but unfortunately the experiment keeps failing.
Please point me to a proper video with the same procedure, the colour observation, and the results.
Relevant answer
Answer
There is a reasonable method based on simple, rapid and complete extraction of the acids from an oil test portion into a reagent. This technique is based on indirect acid-base titration using a special reagent, similar to the reagent for the pH-metric method. The presence of triethanolamine (B) causes rapid (1 min) and full extraction of the free fatty acids from the test oil portion into the reagent. In that case, the standard titration technique with a colour indicator is not applicable. You can refer to: Croatica Chemica Acta, CCACAA 78 (1) 99-103 (2005).
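For reference, the reason the procedure in the question specifies a 28.2 g sample and 0.1 N NaOH is the standard FFA relation (as oleic acid, molar mass ~282 g/mol), under which the titre in mL divided by 10 gives %FFA directly. A minimal sketch of that arithmetic, with an illustrative titre value:

```python
def ffa_percent_oleic(titre_ml, normality, sample_g):
    """% free fatty acids expressed as oleic acid.

    Standard relation: %FFA = titre (mL) x N x 28.2 / sample mass (g).
    With a 28.2 g sample and 0.1 N NaOH, this reduces to titre / 10,
    which is why those particular amounts are specified.
    """
    return titre_ml * normality * 28.2 / sample_g

# Hypothetical example: a 5.0 mL titre on the prescribed 28.2 g sample.
print(ffa_percent_oleic(titre_ml=5.0, normality=0.1, sample_g=28.2))  # 0.5 (%FFA)
```

So however the endpoint colour is judged, the calculation itself stays this simple.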
  • asked a question related to Performance Testing
Question
2 answers
Can I perform the test directly using the paper sample?
If I need a solvent to perform the test, which would be a suitable solvent?
Relevant answer
Answer
I agree with Mr. Vohlidal. You can see the attached review paper for the dissolution of your material.
  • asked a question related to Performance Testing
Question
4 answers
Hello All,
I want to perform silanization reactions at 200 L scale. Currently we perform these tests in 10 L glass reactors dedicated to this project. I wanted to know whether I can scale this up to larger reactors: do I have to use glass-lined reactors, or can I use stainless steel reactors? Unfortunately I work at pH 4 and have used HCl to adjust the pH. What do companies use at large scale for silanization? Also, would the silane form a coating on the glass if you use it?
Relevant answer
Answer
  • asked a question related to Performance Testing
Question
6 answers
Hello everyone,
I have 2 samples distributions (approx 500 values per condition) that I would like to compare using the Kolmogorov smirnov test on SPSS.
I have been able to import my table into SPSS, but unfortunately, since I'm not confident with SPSS, I haven't been able to perform the KS test.
Does someone have any advice on how I could perform this test ?
Many thanks
Laura
Relevant answer
Answer
Alright, so you have two samples. Did you try combining them into one sample with a flag variable that separates them? What I am saying is: make both samples one variable, then create a second variable coded as 0 for the first sample and 1 for the second sample, and have the combined distributions grouped by the flag variable. I don't have SPSS anymore, or I would just do it for you. I have altered your Excel file to show what I mean.
So for the Explore option, you would put the populations variable (the combination of the two variables) in the dependent list and then use the flag variable (coded as 0 or 1) in the factor list. Have you tried this?
If this is still a problem you could use R if you are familiar. And if not I would be willing to run it for you.
Logan Netzer
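If R or Python is an option, the two-sample comparison itself is a one-liner; here is a minimal Python/SciPy sketch, with simulated normal samples standing in for the two ~500-value conditions from the question:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_a = rng.normal(loc=0.0, scale=1.0, size=500)  # stand-ins for the two
sample_b = rng.normal(loc=0.5, scale=1.0, size=500)  # ~500-value conditions

# Two-sample Kolmogorov-Smirnov test: compares the two empirical CDFs
# directly, so no flag variable is needed outside SPSS.
stat, p = stats.ks_2samp(sample_a, sample_b)
print(f"D = {stat:.3f}, p = {p:.2e}")
```

A small p-value indicates the two empirical distributions differ, which with the 0.5 mean shift simulated here they clearly do.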
  • asked a question related to Performance Testing
Question
1 answer
I'm new to this type of fuel, and need to know what (possibly unknown) aspects I would need to consider for engine performance tests in comparison to conventional fossil fuels.
Thanks in advance!
  • asked a question related to Performance Testing
Question
8 answers
Please refer to this paper to see on which problems they performed the test.
Relevant answer
Answer
Dear Amit,
I suggest you see the links and attached file in this topic.
-Solving Rotated Multi-objective Optimization Problems Using ...
- An improved NSGA-III procedure for... (PDF Download Available)
- U-NSGA-III - MSU College of Engineering - Michigan State University
https://www.egr.msu.edu/~kdeb/papers/c2014022.pdf -Multi-Objective Test Problems, Linkages, and Evolutionary ...
Best regards
  • asked a question related to Performance Testing
Question
7 answers
I have some data collected through questionnaire-based survey. After tabulation of data, I can see that participants from private clinics responded differently to the same question compared to the participants from government hospitals.
For example, in a Yes/No/Not sure question, Participants from Private clinics (n =24) responded like this: Yes (63.48%), No (34.13%), Not sure (2.39%). But participants from government hospitals (n = 226) responded like this: Yes (27.44%), No (67.73%), Not sure (4.83%).
I want to perform test of significance to see if the difference in 'Yes' between private and government institution is significant. Can I do it? If so, then how?
Relevant answer
Answer
Hi Shadid,
Depending on sample size and normality, if you're thinking about a non-parametric test, then you might consider the Kruskal-Wallis method. Here's a descriptive link (see chapter 6):
have a great day!
--Adrian
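Since the outcome here is categorical (Yes/No), another common option is a contingency-table chi-square test on the raw counts. A minimal Python sketch, using approximate counts reconstructed from the percentages in the question (illustrative only; substitute your exact tabulated counts, and note the "Not sure" category is omitted here):

```python
from scipy.stats import chi2_contingency

# Approximate Yes/No counts reconstructed from the question's percentages:
#               Yes   No
private    = [  15,   8]   # ~63% / ~34% of n = 24
government = [  62, 153]   # ~27% / ~68% of n = 226

chi2, p, dof, expected = chi2_contingency([private, government])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With these counts the 'Yes' proportions (roughly 65% vs. 29%) differ well beyond chance, so the test comes out significant; with small expected counts, Fisher's exact test would be the safer variant.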
  • asked a question related to Performance Testing
Question
2 answers
I am planning to perform tonic immobility test in laying hens. Is it necessary to use V-shape cradle or is it possible to perform the test by placing the hen on a flat surface, e.g. scale. If the V-shape cradle is necessary, could someone give me the measures? Thank you!
Relevant answer
Answer
Jones and Faure (1981 - Behavioural Processess): "Tonic immobility was induced in every test situation used in the first experiment, which underlines the robust nature of this phenomenon. However, although rigid standardisation of methodology may not be essential, we strongly recommend the use of the cradle and cloth for studies of immobility particularly when dorsal restraint is used. Not only was the reaction enhanced by this apparatus but the fact that tonic immobility was virtually always (19 out of 20) induced at the first attempt affords a high degree of consistency and reliability. The amount of handling necessary, with its possible stressful effects (Freeman, 1967; Candland et al., 1969), is thus reduced to a minimum. This high reliability may have been due to a purely mechanical effect in that the cradle provided greater support and restraint than the other substrates used."
Hope this helps.
  • asked a question related to Performance Testing
Question
5 answers
In short, I reduced the number of items in my measurement model to 4 or 5 items (from 9 originally). The overall fit indices (RMSEA, CFI, TLI, SRMR and the chi-square p-value) are all in favour. It is however the intermediate steps I am concerned about: I removed 1 item at a time, but did not have a test such as the chi-square difference test, which is used for nested model comparison. Since my models differ in their observed variables, the models are non-nested.
My model concerns a 1-factor model. What I did so far was simply visual inspection of the aforementioned overall fit indices, residual correlations ( > |0.10| being 'bad'), modification indices and internal reliability (Cronbach's Alpha and composite reliability).
A test I came across is Vuong's test, which is able to compare non-nested models. However, I am not sure of the performance of this test. In my analysis it seems to prefer any model that has fewer items load on the factor. The same holds for the expected cross validation index (ECVI); i.e. it always seems to provide a lower value in case of modeling fewer items. Is this because Vuong's test only works (properly) for equal sample sizes and number parameters?
Vuong's test and the ECVI do seem useful when I compare two non-nested models with equal items.
I have searched quite a lot for relevant literature on non-nested model comparison in CFA, but have been unsuccessful. Any suggestions are welcome. If you know of a research / paper that delved into this topic I'd appreciate your sharing.
Thanks in advance for any help.
Relevant answer
Answer
Hi Kasper
Your modification indices really are the comparison you are looking for. They identify strain in the original model and tell you approximately how much the overall model chi-square would decrease if the fixed or constrained parameter were freely estimated. Thus, MIs are analogous to the chi-square difference of nested models.
You seem to have followed a sound procedure. Check first whether each item has a satisfactory factor loading (at least 0.4), then check the MIs for strain.
Instruments tested only with EFA are often reduced when they are evaluated with a CFA, as CFA is a stronger analytical framework.
See for example the procedures we followed to get the most parsimonious model in this article:
Best
jk
  • asked a question related to Performance Testing
Question
5 answers
I would like to perform detection of apoptosis using Annexin V (detecting translocated phosphatidylserine) or MitoTracker Deep Red (stains mitochondria in live cells) in bovine luteal cells by means of a flow cytometer. For technical reasons, I need to store the cells for 1-2 days before performing tests on the cytometer. Therefore, I would like to ask whether there is a method to preserve/fix the cells and store them for future cytometry assays. Is it better to stain the cells before preservation, or afterwards, just before the cytometry run?
Relevant answer
Answer
Hi Robert,
For Annexin V /PI staining it is recommended to measure the samples as soon as possible (we usually measure the samples within 30 minutes after staining). So for this staining fixation is not appropriate.
For other stainings we use 1-2% PFA in PBS to store the cells.
Please be aware that some fluorochromes are not stable and will not "survive" certain fixation methods.
Whenever we combine surface and intracellular stainings we use commercially available Fix+Perm kits.
Best
Wolfgang
  • asked a question related to Performance Testing
Question
10 answers
We are testing a new diagnostic tool and comparing it to the actual gold standard for this diagnosis.
Briefly, we examined 25 patients with the new diagnostic tool (test A) and the gold standard diagnostic tool (test B). Test A gives a positive result or a negative result (no variability or range in numbers, just "positive" or "negative" as outcome). We then performed test B which also gives a "positive" or "negative" results and which is considered the true result since this is the gold standard diagnostic tool.
All patients having a positive result on test A (n=18), had a positive result on test B (n=18).
Of all patients having a negative result on test A (n=7), 5 were negative on test B but 2 were positive on test B.
Overall, 23 of the 25 patients had the same outcome on test A and test B and 2 differed, giving an overall agreement (accuracy) of 92%; with 2 false negatives out of the 20 gold-standard positives, our new diagnostic test has a sensitivity of 90% (if we consider test B to have 100% sensitivity).
Can you recommend me any more statistics on this data, to draw conclusions? Any idea to look at this data from another perspective? Any help or insight is appreciated.
Thank you
Relevant answer
Answer
Good question; I follow.
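For completeness, the standard 2x2 accuracy statistics can be computed directly from the counts given in the question; a minimal Python sketch (note that 18/20 = 90% is the sensitivity, while 23/25 = 92% is the overall accuracy):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 statistics for a binary test against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts from the question: 18 concordant positives, 5 concordant negatives,
# and 2 patients negative on test A but positive on gold-standard test B.
m = diagnostic_metrics(tp=18, fp=0, fn=2, tn=5)
print(m)  # sensitivity 0.90, specificity 1.00, accuracy 0.92
```

Exact (Clopper-Pearson) confidence intervals around these proportions, and Cohen's kappa for agreement, would be natural additions given the small n.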
  • asked a question related to Performance Testing
Question
2 answers
I have done a normal distribution test on the dependent variables of two groups (file attached). Please tell me: is it suitable to perform the M-W test on them? I have 7-point Likert scale answers, coded from extremely agree (1) to extremely disagree (7), and then performed the test. Please explain.
Relevant answer
Answer
Hi Adnan, this may help you..
De Winter, J.C., & Dodou, D.(2010). Five-point Likert items: t test versus Mann-Whitney -Wilcoxon. Practical Assessment, Research & Evaluation, 15(11),1-12.
  • asked a question related to Performance Testing
Question
4 answers
Dear all
we are performing a study on FSW of AA6082 alloy, and I'd need the temperature-dependent flow stress of the material. Since we don't have the facility to perform the tests in-house, I'd need some references concerning the flow stress of this alloy at different temperatures.
Relevant answer
Answer
parent. thank you
  • asked a question related to Performance Testing
Question
4 answers
Dear All,
An experiment was carried out using temperature and humidity as factors, and in addition it was carried out for 6 different species. The total number of species analyzed is 6, with 3 individuals of each, so the total N = 18. I would like to test normality, but the following problem arises: as I must test within each group, I would finally have N = 3 for each combination of temperature, humidity and species. That N is too low for a normality test to have much power.
Trying to see how to solve the problem, I found the following possibility: calculate the mean of each group of 3 individuals (that is, of each species) for a given temperature and humidity. Once the means are calculated, obtain the subtractions (or deviations) of each individual of a species from the species mean. Using these deviations, we could pool the N = 18 data points, that is, all the species together, increasing the sample size, and finally perform the normality test on the pooled deviations.
The problem is that I read this in a book, and I do not find any more information that supports this technique. What do you think about the reliability of this method? What alternatives could I use?
Relevant answer
Answer
What you did was essentially correct. You said that "As I must test within each group, [...]", which actually means that the conditional distribution of the response should be (approximately!) normal. In a linear model the conditional distribution is just the distribution of the residuals, and in fact statisticians do a residual analysis to see whether (and how well) data and model go together.
With your subtractions you actually calculated the residuals of a full-factorial model. You did this quite manually, but that doesn't make it wrong.
I am only slightly concerned with your wish to "test" the normal assumption. A formal test won't answer your important question, namely whether the normal model is sufficiently good to model your uncertainty associated with the data. If the test is not significant, it may simply be "blind" w.r.t. relevant deviations from the normal model, and if it is significant it may only because of entirely irrelevant deviations from the normal model. Ideally, the normal assumption is justified on theoretical grounds (or prior experience/data). If you think the normal assumption should be ok, it doesn't harm to confirm this using a diagnostic plot of the residuals. In your case, a normal quantile-quantile plot would be good.
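The "subtraction" procedure described in the question is just a manual residual calculation, and can be sketched in a few lines of Python; the simulated species data below are illustrative stand-ins for the real measurements (6 species, n = 3 each, one temperature/humidity cell):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy stand-in for the design: 6 species with n = 3 individuals each.
species_means = rng.normal(10, 3, size=6)
data = {s: mu + rng.normal(0, 1, size=3) for s, mu in enumerate(species_means)}

# The "subtractions" from the question are residuals from each cell mean;
# pooling them gives N = 18 values for a single normality check.
residuals = np.concatenate([v - v.mean() for v in data.values()])

stat, p = stats.shapiro(residuals)   # formal test; use with the caution noted above
print(f"N = {residuals.size}, Shapiro-Wilk p = {p:.3f}")
# For the recommended diagnostic plot: stats.probplot(residuals, plot=ax)
```

As noted in the answer, the quantile-quantile plot is the more informative check; the formal test is shown only because the question asked about one.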
  • asked a question related to Performance Testing
Question
4 answers
Hello dear collegues,
can anyone please recommend a protocol for infection of cells (MDCK and A549, to be precise) with Influenza A (H1N1) virus in "solid medium" (an agarose overlay)?
I would like to perform tests with H1N1 infecting cells, but I need the virus medium to be solid.
I believe it is possible to cover the infected cell monolayer with medium that contains agarose...
Can anyone provide a protocol, or an article that contains an exhaustive protocol?
Thank you all in advance
Relevant answer
Answer
This paper should be very helpful for you. Use of Avicel will be better for plaquing flu.
  • asked a question related to Performance Testing
Question
5 answers
In the first journal:
Immersing a small scaffold causes only a slight increase in the volume observed in the graduated measuring cylinder (V2). What is the correct way of performing this test?
I also used another method, from a second journal, but my result was more than 100%!
For example:
P = (W2 - W1) / (ρ · V1)
where W1 and W2 are the weights of the composite before and after immersion, respectively, V1 is the volume before immersion, and ρ is the density of n-hexane.
W2 = 0.196 g, W1 = 0.020 g, V1 = 0.235 cm3, ρ = 0.654 g/cm3 ...
P = 115.032 !!!
Relevant answer
Answer
There's many ways of measuring porosity, see this review:
If you have limited equipment, instead of the graduated cylinder method, you may get better results using a good analytical balance and using the Archimedes method. You weigh the sample dry, saturated, and submerged in a non-solvent liquid then you can calculate the volume of the open pores from the mass of wet scaffold minus dry and the bulk volume from the mass of the displaced liquid (saturated-submerged). The ratio will give the open porosity. You can calculate the total porosity by comparing the density of your material and the density of your scaffold (dry mass/bulk volume).
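The Archimedes calculation described above reduces to a ratio in which the liquid density cancels; a minimal Python sketch with hypothetical masses (not the values from the question):

```python
def open_porosity(m_dry, m_saturated, m_submerged):
    """Open porosity by the Archimedes method.

    V_open = (m_sat - m_dry) / rho_liquid   (liquid filling the open pores)
    V_bulk = (m_sat - m_sub) / rho_liquid   (displaced-liquid bulk volume)
    The liquid density cancels in the ratio, so no ρ value is needed.
    """
    return (m_saturated - m_dry) / (m_saturated - m_submerged)

# Hypothetical masses (grams) for a scaffold in a non-solvent liquid:
print(round(open_porosity(m_dry=0.50, m_saturated=0.62, m_submerged=0.32), 3))  # 0.4
```

This construction cannot exceed 1 for physically consistent weighings, which makes it a useful cross-check against the >100% result obtained with the cylinder formula.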
  • asked a question related to Performance Testing
Question
3 answers
We performed a case series with only 16 cases worldwide (unfortunately, since the studied phenomenon is very rare, these are all the cases available).
Of course the number is very low, but we were interested in seeing whether there were differences between the cases that survived vs. those that died. Because of the small number, we used either Fisher's exact test (Freeman-Halton) or the Mann-Whitney U test.
With this small sample size, would it be fair to also perform post-hoc correction, like Bonferroni or Holm-Bonferroni?
Relevant answer
Answer
In fact, Fisher's exact test seems to be the one least influenced by the number of evaluated subjects. Thus, in my experience, it does not seem to require changes to the statistics. But, of course, this is a superficial view.
It is necessary to understand the totality of the methodological mechanisms of your research in order to make a more careful evaluation of the statistical analysis.
  • asked a question related to Performance Testing
Question
2 answers
Dear Sir/Madam,
I performed this test and wondered whether it was done correctly, so I would appreciate some suggestions about my method of calculation and about how to set up an ELISA test scientifically. Thank you very much.
Relevant answer
Answer
Hi!
You obtained some interesting results. First of all, your data for the intact group are closer to a single distribution, while the data for the experimental group are divided into at least two clusters (low, 0.4-0.6, and high, 0.8-1.2). Some of the eggs gave no immune response. These are primary data. But! You need a calibration curve to evaluate the real concentration of antibodies; this approach will give you clearer results.
Regards,
Tatiana
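To turn OD readings into concentrations via the calibration curve suggested above, a four-parameter logistic (4PL) fit is the usual choice for ELISA. A hedged Python/SciPy sketch with entirely hypothetical standards data (function and variable names are illustrative, not from any kit protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = min OD, d = max OD, c = EC50, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards: known antibody concentrations vs. measured OD450.
conc = np.array([1, 5, 25, 125, 625, 3125], dtype=float)
od = np.array([0.08, 0.15, 0.45, 0.95, 1.25, 1.35])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.4, 100.0, 1.0],
                      bounds=(0, np.inf), maxfev=10000)

def od_to_conc(y, a, d, c, b):
    """Invert the fitted curve to read a concentration from a sample OD."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(round(od_to_conc(0.6, *params), 1))  # interpolated concentration for OD 0.6
```

Sample ODs should only be read back within the range spanned by the standards; outside it, the inversion extrapolates and is unreliable.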
  • asked a question related to Performance Testing
Question
1 answer
Actually, what I found is that all the programs available on the net to perform the randomness tests (specified by NIST) are written in C, and I am working in MATLAB. Do I need to write MATLAB code separately to perform all 15 tests in the NIST suite, or is there pre-designed software to which I can give the input sequence and get all the results?
Relevant answer
Dear Neetika, 
You can try to write your own code for the problem using well-known software such as Maple.
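If you do write your own code, note that the individual NIST SP 800-22 tests are quite short; as an illustration, the first one (the frequency/monobit test) fits in a few lines. A Python sketch is below (it ports to MATLAB directly; `erfc` exists in both):

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    bits: sequence of 0/1 ints. Returns the test's p-value; the sequence
    passes at the usual significance level if p >= 0.01.
    """
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)        # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

balanced = [0, 1] * 500            # perfectly balanced 1000-bit sequence
biased = [1] * 900 + [0] * 100     # heavily biased 1000-bit sequence

print(monobit_test(balanced))      # 1.0 (S = 0, perfectly balanced)
print(monobit_test(biased) < 0.01) # True: this sequence fails the test
```

The remaining 14 tests follow the same pattern (a test statistic plus a p-value from a reference distribution), so porting them one at a time is feasible if no ready-made MATLAB suite turns up.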
  • asked a question related to Performance Testing
Question
8 answers
I did not find any performance test for measuring mobility.
Relevant answer
Answer
Horak FB, in the study "Objective biomarkers of balance and gait for Parkinson's disease using body-worn sensors", developed a metric for mobility assessment in Parkinson's disease, which includes: postural stability in stance, postural responses, gait initiation, gait (temporal-spatial lower- and upper-body coordination and dynamic equilibrium), postural transitions, and freezing of gait.
Horak FB, Mancini M, Objective biomarkers of balance and gait for Parkinson's disease using body‐worn sensors. Mov Disord. 2013. 28: 1544–1551. pmid:24132842
  • asked a question related to Performance Testing
Question
4 answers
I'd like to give a little background, and then pose a couple of (what I think are simple) questions about statistical analysis of data.
I have set up a Warburg respirometer. As such a setup requires, I have two respirometers in total: 1) one contains a reaction, 2) the other is a blank to account for temperature changes.
I have multiple runs with both of them containing blanks to test their relationship under different temperature conditions, and they are mostly statistically the same (taking the standard deviation into account and seeing where they overlap).
However, when I run with one respirometer containing my reaction and the other a blank, I get inconsistent results. My average oxygen uptake matches the expected value in literature (18 mmol/hr/gDW) BUT the standard deviation is 7.
At what point do I reject the data due to the large std. deviation? Is there an alternative statistical analysis to run, or a statistics package I can plug in my data? Obviously mean and standard deviation does not tell the whole story. 
I have also performed a t-test on my data and generated a histogram. The histogram also supports the ~18 value that I want.
The reason I want to know whether my results are valid, and how to express their validity, is that I want to use the respirometer on unknown compounds, and I need to be able to ensure my results are appropriate based on past results for known substances. If my results aren't certain, I don't want to perform tests on unknown reactions, because my results will be equally uncertain.
Hopefully my question makes sense. Any statistical insight or resource suggestions are welcome.
Relevant answer
Answer
Dear Sarah,
I understand that you have data with a mean that meets expectations ;-) however s is quite high!
The first thing you should always do is to look at the data: You plotted a histogram - good!
For selecting the right number of classes for the histogram, I'd recommend choosing an odd number around SQRT(n). Does it look roughly symmetrical and monomodal (one highest class), with no surprises, i.e. no classes more highly populated than would be expected for a normal distribution? No values lying far outside the apparent "curve"?
If so, fine, you can calculate the mean and s. If there are "outliers" you should investigate their cause and improve your measurement technique to replace them with good data. If there are no apparent outliers and everything is symmetrical, try to lower the variability by standardizing your method. Consider which factors have an impact on the result and estimate how big each impact is. Sort the factors by impact and address them starting with the largest.
Remember: confidence interval of the mean = mean ± t·s/SQRT(n).
So s = 7 IS high, which hints at outliers or, as said, unacceptably high variability.
Hope this helps, good luck!
Holger
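The confidence-interval formula above can be computed directly; a minimal Python sketch with illustrative readings chosen to mimic a mean of ~18 and an s of ~6-7 (not real data):

```python
import numpy as np
from scipy import stats

def mean_ci(x, conf=0.95):
    """Confidence interval of the mean: mean ± t * s / sqrt(n)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)          # sample mean and sample s
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    half = t * s / np.sqrt(n)
    return m - half, m + half

# Illustrative uptake readings (mmol O2/hr/gDW), mimicking the question:
uptake = [11, 25, 17, 9, 26, 20, 14, 22]
lo, hi = mean_ci(uptake)
print(f"mean = {np.mean(uptake):.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```

Reporting the interval rather than just mean ± s makes it clear how much the estimate would tighten with more replicate runs, since the half-width shrinks with sqrt(n).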
  • asked a question related to Performance Testing
Question
2 answers
Relevant answer
Answer
As far as I can tell, Vienna software is for audio processing, not a network emulator.
The test I'm doing is for computer networks, to check the speed and the throughput of the network components.
  • asked a question related to Performance Testing
Question
4 answers
I would like to analyse my 16S sequencing data (ANOSIM/PERMANOVA) and visualize it in an nMDS plot using a Bray-Curtis similarity matrix. Do I need to rarefy my data before performing the tests?
Moreover, if I don't do rarefaction, how much would it impact the diversity analysis?
  • asked a question related to Performance Testing
Question
6 answers
We are working on a material that has potential in packaging fresh produce. We want to know how we can quantitatively assess the freshness of the produce.
Relevant answer
Answer
The water vapour permeability test is essential for fresh-produce packaging material, since permeability governs transpiration and hence firmness. Burst-strength and puncture tests should also be carried out on the packaging material to assess its durability.
  • asked a question related to Performance Testing
Question
3 answers
I am currently developing an algorithm that detects changes in the state of a biological signal (on/off). The algorithm works by comparing about 5000 matrices (each sized 250x1) with a control matrix. The matrices are normally distributed. The comparisons are made with a Student's t-test.
Obviously, multiple comparisons dramatically increase the chances of obtaining a false positive. Unfortunately, the Bonferroni correction is very conservative and drastically degrades the algorithm's performance (this was tested by simulating a synthetic signal where the instances of on and off states are known beforehand). To solve the problem, I was thinking of controlling the false discovery rate using the Benjamini-Hochberg procedure.
I would like to know whether my approach is correct or whether there is a better way to tackle the problem. Thank you in advance.
Relevant answer
Answer
Does the concept of "false positives" make sense here at all?
Benjamini-Hochberg can control the FDR (false discovery rate - that's not a FWER!). It is surely a good option (much better than Bonferroni control of the FWER), but in my personal opinion a meaningful application of hypothesis tests always requires stating a relevant effect size to test for (which I bet is not given, and is likely impossible for your task). Note that the top-ranked comparisons will remain top-ranked no matter what correction you apply. I suggest you decide how many are worth further investigation and then simply select that number of top-ranking candidates.
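For what it's worth, the BH step-up rule itself is only a few lines; a minimal sketch (illustrative, assuming NumPy):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries under the Benjamini-Hochberg
    step-up procedure, controlling the FDR at level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Compare the i-th smallest p-value against alpha * i / m
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= alpha*i/m
        reject[order[:k + 1]] = True      # reject all hypotheses up to rank k
    return reject
```

Note that, as the answer says, this only reorders nothing: the same top-ranked comparisons survive, the procedure just chooses where to draw the line.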
  • asked a question related to Performance Testing
Question
3 answers
I have established a binary logistic regression model with only significant predictors included. In theory, the predictors look quite appropriate for explaining the clinical phenomenon.
I checked the goodness of fit of the model using various tests; it performed well in some and poorly in others, as follows:
(1). Good performance in the tests:
      - Omnibus Test of Model Coefficient (Chi-sq.=22.471, p<0.0001)
      - Hosmer and Lemeshow test: (Chi-sq.=7.09, p=0.526>0.05)
(OTMC result means It's better to have this model than no model; H & L test result means this model is well calibrated)
(2). Average performance:
      - Classification table: overall correct percentage of prediction: 64%
      - Relation between Observed & Predicted DV: Pearson's r=0.26, p<0.0001
(The overall prediction accuracy is 64%, with nearly 7% of the variance predicted.)
(3). Bad performance in the tests:
      - Cox & Snell R square: 0.079
      - NagelKerke R square: 0.106
(This may mean only 7.9%~10.6% of variance can be explained).
My questions are:
1). Please comment on this binary model based on the test results above.
2). What should I focus on when interpreting the binary model in my paper?
3). Do you think it is OK to put these results in the paper?
Thanks very much, and merry Xmas!
Relevant answer
Answer
Hi Danny, I agree with what Dr Sarsam has already explained regarding the performance of the model. When explaining a logistic regression model in an article, one needs to specify in the methodology that all the prerequisite criteria for performing logistic regression are met. Then, if all criteria are met, performance may improve. In the results section you need to report, in the running text, the performance of the model with the omnibus chi-square, the Hosmer and Lemeshow statistic, the classification table, and the Cox & Snell and Nagelkerke R squares; these are mandatory. In the results section it is also better to show a table of odds ratios (OR if univariate / AOR if multivariable) with CIs and p-values, with interpretation in the text.
Wish you all a merry Christmas and advanced happy new year 2017. 
Indranil
  • asked a question related to Performance Testing
Question
3 answers
Hi,
What are the differences between these assays, and what is the significance of performing the tests on microbial cells (fungal pathogens)?
Relevant answer
Answer
Superoxide dismutase (SOD) is an important antioxidative enzyme. It catalyzes the dismutation of the superoxide anion into hydrogen peroxide and molecular oxygen. The activity of SOD can be determined by a number of methods, such as pulse-radiolytic methods (Rigo et al. 1975) and the assays of Beauchamp and Fridovich (1971), Misra and Fridovich (1972), and Tyler (1975). SOD can also be measured through its inhibition of an O2--dependent reaction. A number of kits are now available for measuring SOD.
However, in relation to ROS as well as its relation to microbial cells you can get some knowledge from my recent publications with CAB International, Oxfordshire, UK. There some interesting findings have been illustrated on ROS, antioxidants in relation to microbial cells, specific to AM fungi. I am also forwarding the published paper as it may help you.
  • asked a question related to Performance Testing
Question
1 answer
I have a HOT DISK 2500S and need to measure the thermal conductivity and specific heat capacity of nanofluids whose viscosity is close to that of water. I have tried to measure them, but the results were not reasonable. I wonder what heating power and measurement time should be used to perform that test?
Relevant answer
Answer
Generally speaking for low-viscosity fluids you want to keep the test time very short and the test power relatively low - else you may introduce convection. I do most of my liquid testing with MTPS as opposed to TPS - but principles are similar. For liquids, MTPS keeps the test time <1s and the heater power very low. With liquids of similar viscosity to water, once you go above ~2-3s you start getting some degree of convection and after 7s convection starts in earnest, so a short test time is critical - as is making sure the liquid is fully stagnant prior to onset of testing.
  • asked a question related to Performance Testing
Question
4 answers
I am doing 24-hr performance testing of a newly formulated imidazoline-based CI. The testing is carried out using 80% brine / 20% kero. After 18-20 hrs I can see the brine becoming cloudy. There is no chance of scaling, as it's only 3.5% NaCl.
Relevant answer
Answer
If you had worked in oilfield/production chemicals, we would never have needed to exchange so many messages. Despite the bad English (!!), I believe I was quite clear about my issue in the first place. If you have an answer, I'd appreciate it; otherwise, let's not waste time on understanding who wants what! .....
  • asked a question related to Performance Testing
Question
8 answers
Following are the Weibull parameters:
  • α =  46509.7171 β =1.5536
  • α =  46071.6505 β =1.4878
Which distribution has greater adherence to the data?
Data: Attachment
Relevant answer
Answer
Your question seems to be a combination of two questions:
1) Given two parameter sets, which one is better with respect to the KS test statistic? The first one gives a value of 0.0796, the second one 0.0921 (this is then reflected in the higher p-values for the first one (0.891) compared to the second (0.765) using the KS distribution, but this is not required in this case).
In this sense the first parameter set is the better one. This looks a bit puzzling at first, as the second set is the ML estimator, but keep in mind that different optimization criteria will lead to different parameter estimations.
2) How to do a KS test, if the parameters are fitted to the distribution? The question addressed is whether the data is coming from a Weibull distribution. If one uses fitted parameter values, the standard p-values of the KS distribution do not apply and one needs to use simulations. The result will depend in principle on the parameter estimation procedure used (MLE with bias correction in the paper mentioned above).
Using LcKS (Lilliefors corrected KS) from the "KScorrect" package in R, one gets a p-value of 0.35, so the hypothesis that the data is coming from a Weibull distribution is not rejected.
A final comment: Given the uncertainties of the estimated parameters I am not sure, whether the difference between the two is of any relevance.
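The parametric-bootstrap (Lilliefors-type) correction described above is easy to reproduce outside R as well; a sketch with SciPy (the function name is mine, and the Weibull location is fixed at zero for a 2-parameter fit):

```python
import numpy as np
from scipy import stats

def weibull_ks_pvalue(data, n_sim=2000, seed=0):
    """Parametric-bootstrap (Lilliefors-style) KS test for a 2-parameter
    Weibull distribution whose parameters are estimated from the data."""
    rng = np.random.default_rng(seed)
    # Fit the 2-parameter Weibull (location fixed at 0) by maximum likelihood
    c, loc, scale = stats.weibull_min.fit(data, floc=0)
    d_obs = stats.kstest(data, 'weibull_min', args=(c, loc, scale)).statistic
    n = len(data)
    count = 0
    for _ in range(n_sim):
        # Simulate from the fitted distribution and re-fit each replicate,
        # so the null distribution of D reflects the estimation step
        sim = stats.weibull_min.rvs(c, loc=0, scale=scale, size=n,
                                    random_state=rng)
        c_s, loc_s, scale_s = stats.weibull_min.fit(sim, floc=0)
        d_sim = stats.kstest(sim, 'weibull_min',
                             args=(c_s, loc_s, scale_s)).statistic
        count += d_sim >= d_obs
    return (count + 1) / (n_sim + 1)
```

Because the simulated replicates are re-fitted before computing their KS statistics, the resulting p-value accounts for the parameter estimation, which the standard KS tables do not.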
  • asked a question related to Performance Testing
Question
4 answers
I am a little bit puzzled by a basic concept. Please help me understand the following.
I have the design NHR of a GT plant (simple cycle) and the design net power output at ISO conditions and at 50 °C ambient. I also have the full-load performance test data, i.e. the corrected NHR and corrected net power output at ISO conditions.
I would like to correct the performance-test NHR and net power output to 50 °C ambient. Is it possible to calculate this without using correction curves?
How about correcting the design data at 50 °C using the % deviation from design ISO to performance-test ISO conditions? Would that make sense?
Moreover, will the full-load NHR at any ambient temperature between 15 °C and 50 °C be linear? Can I just do linear interpolation from the two data points I have (i.e. ISO and 50 °C)?
regards
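On the last point: linearity between 15 °C and 50 °C is only a first-order assumption (real GT correction curves are generally non-linear, so expect some error mid-range), but if you accept it, the interpolation itself is trivial. A sketch with purely hypothetical numbers:

```python
import numpy as np

# Hypothetical corrected full-load data points (ISO 15 °C and 50 °C ambient)
t_amb = np.array([15.0, 50.0])       # ambient temperature, °C
nhr   = np.array([9500.0, 10400.0])  # net heat rate, kJ/kWh (illustrative)
power = np.array([100.0, 78.0])      # net power output, MW (illustrative)

# Linear interpolation at 30 °C ambient; only a first-order approximation
nhr_30   = np.interp(30.0, t_amb, nhr)
power_30 = np.interp(30.0, t_amb, power)
```

Whether the error from assuming linearity is acceptable depends on the machine; comparing against even one OEM correction-curve point mid-range would tell you quickly.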
Relevant answer
Answer
Dear Mohamed Said,
First of all, thank you for your time and detailed write-up with a helpful attachment.
Please be informed that I am not trying to compare different models of gas turbine and find a single NHR number. Instead, I would like to calculate the expected NHR and net power output for n machines (installed in different locations). I need this matrix of NHR and net power output for every plant to forecast the expected energy production and consumption. As the recent ISO values from the site performance tests would be more conservative in terms of efficiency, I may underestimate the total energy consumption. Accordingly, I am trying to correct the recent performance-test data at ISO conditions to an average temperature like 30 °C to see the expected net power generation and energy consumed. The intention is to do this without using the individual correction curves for each plant's machine type.
I welcome any new ideas on how to do this, as long as it doesn't require the correction curves of every GT model and works from the sets of available data specified above.
 Regards,
  • asked a question related to Performance Testing
Question
4 answers
I am working on a test rig; the test includes measurement of some parameters: pressure, temperature, and water flow rate. For monitoring the variation of temperature, I was wondering what type of thermocouple I could use. In other words, what test conditions need to be specified to choose a suitable thermocouple: the temperature range, the rate of temperature change, or other factors?
Relevant answer
Answer
Dear Elsharif,
National Instruments (NI) issued the attached file, which gives a comparison between many types of thermocouples, including the maximum errors over the full temperature range and the typical errors at room temperature.
Please have a look at the attached PDF file. 
Best Regards
  • asked a question related to Performance Testing
Question
7 answers
I would like to know what precautions one should take while performing tensile testing in LAMMPS. For example; tensile testing of Copper. I would like to perform the test at room temperature conditions (300K and 1 atm pressure) hence how should we go about relaxing the structure and temperature (using fixes) so as to achieve good results? My box size is 100 x 100 x 100 Angstroms
Relevant answer
Answer
Hello Rohit,
I think your problem already imposes the requirement that the simulation be carried out at 1 atm pressure, so NPT would be required (assuming it's periodic). Equilibrate at the required pressure and temperature for long enough that the oscillations of the system pressure and temperature around the desired values become very small. During tensile deformation you can use fix deform (with the erate flag) to apply a constant strain rate, while the transverse directions are relaxed by NPT at 1 atm.
Caution: the simulation cell needs to be big enough that you don't get periodic-boundary effects during deformation.
  • asked a question related to Performance Testing
Question
3 answers
I have to use it for testing multiple structural breaks in monthly data.
Relevant answer
Answer
Apparently not. The best advice is to use the open-source program R, in which the test has been implemented, or EViews 8, which incorporates this routine directly in its libraries.
  • asked a question related to Performance Testing
Question
8 answers
If I have allometrically scaled (body mass) data from a performance test, is it possible to use the scaling component to track changes longitudinally?
If I scaled data from a baseline measure (fitness test) for a full squad/cohort of players, and want to assess a number of specific players longitudinally (across a season), do I still use the original scaling component from baseline to assess changes?
For example, if a player has a large increase in body mass across a season but maintains performance in a fitness test, does the original scaling component still apply, given that I am assessing the individual against themselves?
Thanks
Josh
Relevant answer
Answer
Allometric scaling provides a method for examining the structural and functional consequences of changes in size and scale among organisms.