Science method

# Design of Experiments - Science method

Explore the latest questions and answers in Design of Experiments, and find Design of Experiments experts.

Questions related to Design of Experiments

SNR for a dynamic response: SNR = 10 · log10( (1/r) · (S_β − V_e) / V_e ).
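For concreteness, here is a minimal Python sketch of that Taguchi dynamic-characteristic SNR. The signal levels M, the responses y, and the ideal-function assumption y ≈ β·M are all hypothetical, purely for illustration:

```python
import math

# Taguchi dynamic SNR: SNR = 10*log10( ((1/r) * (S_beta - V_e)) / V_e )
# r      = sum of squared signal levels
# S_beta = variation explained by the slope beta of the ideal function y = beta*M
# V_e    = error variance
M = [1.0, 2.0, 3.0]          # hypothetical signal-factor levels
y = [2.1, 3.9, 6.2]          # hypothetical measured responses

r = sum(m * m for m in M)
s_beta = sum(m * yi for m, yi in zip(M, y)) ** 2 / r
s_total = sum(yi * yi for yi in y)
v_e = (s_total - s_beta) / (len(y) - 1)   # error variance on n-1 dof

snr = 10 * math.log10((s_beta - v_e) / r / v_e)
print(round(snr, 2))
```

With several repetitions per signal level the same formulas apply, with r multiplied by the number of repetitions.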

I have done a full factorial experiment (3 factors) with three center points. The factorial runs were replicated three times, producing 27 design points overall.

Now, analyzing the data, I am unable to get the desired model. I fitted a first-order model with all interactions, but R² and adjusted R² were found to be very low.

After various iterations I finally ended up with a model containing only the main effects. However, the curvature effect was quite significant.

When I subsequently tried to analyze the design using RSM, the model worsened and I had to revert to the original model.

I am getting an R² of 79% and an adjusted R² of 75%, which I feel is too low to accept the model.

Can someone tell me whether curvature effects are taken into account when building a first-order model? That is, is y = b0 + b1·x1 + b2·x2 + b3·x3 sufficient even though the center points show a significant curvature effect?

If curvature is not included, can someone please explain how to improve the model or incorporate the curvature effect?

We have a project that requires us to design an experiment to show how protozoans metabolize.

9 factors (A–I), each at three levels, and the interactions AB and CD are to be considered. Which DOE would be suitable to avoid incorrect analysis and faulty conclusions, and to minimize confounding between factors and interactions?

I want to optimize an analytical procedure. I have 4 factors and 4 levels, so which design best fits my experiment? I am using Design-Expert software. Does anyone have experience working with this software?

In an attempt to determine the transcriptional responses, I used three animals at each time point. I collected the tissues and pooled equal amounts of tissue from these triplicates to extract total RNA. cDNA was then synthesized, and for each cDNA sample a triplicate (n = 3) qPCR assay was performed to detect the transcript levels. Regarding the interpretation of the data, one of my peers objected that "transcriptional data obtained from one single RNA pool from three individuals is not enough to support the results". What logical explanation could I add to my argument against this objection?

If anybody has any ideas or related material, please share them.

The experimental parameters used in the literature on my problem all assume a parallel execution, but I want to perform a sequential execution.

In sample size estimation, before starting our research we have to decide on the significance level and the desired power for the study. If we are able to collect a larger sample, could we use, for example, a significance level of 0.01 instead of 0.05? How does this change affect the result?
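As a rough illustration of the trade-off, the usual two-sample normal approximation shows how tightening α from 0.05 to 0.01 raises the required sample size at fixed power. A sketch in Python; the effect size d = 0.5 is a hypothetical value:

```python
import math
from statistics import NormalDist

def n_per_group(alpha: float, power: float, d: float) -> int:
    """Approximate n per group for a two-sided two-sample z-test,
    where d = (mu1 - mu2) / sigma is the standardized effect size."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.05, 0.80, 0.5))  # 63 per group
print(n_per_group(0.01, 0.80, 0.5))  # 94 per group -- stricter alpha costs more
```

So at the same power, moving to α = 0.01 means that with the same n the study detects only larger effects; keeping the same detectable effect requires roughly 50% more subjects in this example.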

How do I obtain the optimum parameters in a machining process?

As per the current ICH guideline Q8(R2), implementing Quality by Design (QbD) is a current demand of pharmaceutical formulation development in the pharma industry. Therefore, chemists and scientists should have knowledge of QbD. However, academia shows the least interest.

I have three factors A, B and C with 15, 2 and 2 levels respectively. The population standard deviation, from a pilot survey, is 1.8. I want to fit the three-way ANOVA model:

y_ijkl = μ + α_i + β_j + γ_k + (αβ)_ij + (αγ)_ik + (αβγ)_ijk + ε_ijkl

Our main hypothesis concerns finding the best level of factor A in interaction with the levels of B and C. How do I calculate the sample size for testing this hypothesis? And could you give me R or SAS code for calculating the sample size by simulation?
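One way to size such a study is simulation: pick plausible cell means, generate data, fit the model, and count rejections. Below is a minimal Python sketch (the same logic ports directly to R or SAS) that, for brevity, checks only the main effect of A with a one-way F-test; the 15 hypothetical level means are assumptions and should be replaced by the smallest differences you care to detect:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
sigma = 1.8                              # population SD from the pilot survey
a_means = np.linspace(-1.0, 1.0, 15)     # hypothetical means for the 15 levels of A

def power(n_per_cell: int, n_sim: int = 500, alpha: float = 0.05) -> float:
    """Monte Carlo power of the one-way F-test across the levels of A."""
    hits = 0
    for _ in range(n_sim):
        groups = [rng.normal(mu, sigma, n_per_cell) for mu in a_means]
        _, p = f_oneway(*groups)
        hits += p < alpha
    return hits / n_sim

# Increase n until the estimated power reaches the target (e.g. 0.80).
for n in (2, 4, 8, 16):
    print(n, power(n))
```

For the full model including the B and C interactions, replace the one-way test with a fit of the complete three-way ANOVA (e.g. `anova_lm` in Python's statsmodels, or `aov` in R) and count rejections of the term of interest in exactly the same way.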

I calculated the heat of formation of ammonia using different levels of theory, namely Hartree–Fock, MP2, CCSD and CCSD(T), and different basis sets, namely 3-21G, 4-31G, 6-31G** and 6-31G**++.

I expected the answer using CCSD(T) with the 6-31G**++ basis set to be closest to the experimental result, but that was not the case.

I have checked the calculation twice but the answer is the same.

Is there an explanation for this?

Thanks.

In my thesis I have several variables which may affect the final result. When I checked the number of experiments to be conducted, it came to 128. Even doing each run only once, performing 128 tests seems quite impossible. I cannot reduce the number of factors because all of them seem important. Is there any other way to reduce the number of experiments?
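If the 128 runs come from 7 two-level factors (2^7 = 128), a standard way out is a fractional factorial: a resolution-IV 2^(7−3) fraction estimates all main effects clear of two-factor interactions in only 16 runs. A sketch of its construction in Python (the generators E = ABC, F = BCD, G = ACD are one standard choice):

```python
import itertools
import numpy as np

# Base design: full 2^4 factorial in the first four factors A..D (16 runs).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Remaining factors from the generators of a 2^(7-3) resolution-IV design.
E, F, G = A * B * C, B * C * D, A * C * D
design = np.column_stack([A, B, C, D, E, F, G])

print(design.shape)   # (16, 7) -- 16 runs instead of 128
```

The price is aliasing: two-factor interactions are confounded with each other, so this is a screening design; the factors found significant can then be studied in a small follow-up experiment.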

In a pre-test/post-test experimental design, both tests should be identical (same questions). Is it possible to have a pre-test and post-test that assess the same concepts but use different questions?

In my experiment I had a control group and an experimental group. Both groups answered the pre-test and post-test questions; however, the questions in the pre-test were somewhat different from those in the post-test.

I have results of the experiments designed as you can see below (or in the attached file). What is your suggestion for analyzing such an experiment? My problem is really with the center points, which were run separately and on different days; otherwise the main experiments could be treated as a split-plot design with A and B as hard-to-change factors.

Please note that:

- Doing complementary experiments is impossible.

- Finding the significance of curvature is important.

- Estimating the effect of main factors and their 2nd order interactions is necessary.

A B C D

------------------------------------1st day

0 0 0 0

------------------------------------2nd day

-1 1 1 1

-1 1 1 -1

-1 1 -1 1

-1 1 -1 -1

------------------------------------3rd day

-1 -1 -1 -1

-1 -1 -1 1

-1 -1 1 -1

-1 -1 1 1

------------------------------------4th day

0 0 0 0

------------------------------------5th day

0 0 0 0

----------------------------------- 6th day

1 -1 -1 -1

1 -1 -1 1

1 -1 1 -1

1 -1 1 1

------------------------------------ 7th day

1 1 1 1

1 1 1 -1

1 1 -1 1

1 1 -1 -1

-------------------------------------8th day

0 0 0 0

-------------------------------------9th day

0 0 0 0

I found that the optimal version for my experiment would be ABBA/BAAB/AABB/BBAA, but it only made sense to do ABBA/BAAB.

My question is: what properties of the optimum are lost through this reduction of test groups?

Is it still uniform and strongly balanced? If so, what is the disadvantage of this reduction of groups?

At https://onlinecourses.science.psu.edu/stat509/book/export/html/123 there are some definitions, but I'm not sure whether these properties also apply to my crossover design...

I would like to conduct an experiment where participants are put into different emotional states and then need to make decisions. Is there a common, well-known experimental approach for that? Perhaps some game?

I am used to the corpus linguistic paradigm, but now I need to do some linguistic experiments. As I want to avoid making fundamental mistakes, I search for literature that describes the general methodology of experimental linguistics.

Scientists have a choice to start with either the theory or the experiment.

If there are at least 4-5 factors to consider, there will be too many samples. I have read something about 2^k factorial designs, which some researchers use to screen out the important factors.

I am conducting an experiment on a public-goods dilemma with groups of 4. My design has two treatments: an experimental treatment and a control treatment. However, I have very limited financial support, so I am wondering how many participants I should invite. So far I have collected 8 groups (32 participants) for the experimental treatment and 7 groups (28 participants) for the control treatment, for a total of 60 people. Is that enough?

I have checked many related papers; many of them have more than 100 participants, and some have only 64 subjects, for example "Climate change in a public goods game: Investment decision in mitigation versus adaptation".

By the way, the results of the experiment with the current data are pretty good and verify my hypothesis.

Any idea will be most helpful.

Yours sincerely

Joanna Zhang

____________________________________

With so many excellent answers, I have learned a lot, thank you!

Hi all,

I would like to implement DoE into our bioprocessing unit for animal cell culture. I would like to ask you the following:

- What are your experiences with DoE?

- Which program do you recommend?

- Is there literature you can suggest to get more familiar with DoE?

- Any courses/workshops you can recommend?

Thank you so much - looking forward to an interesting discussion,

David

I did a two-level full factorial design to find the significance of three factors (A, B, C) and their interactions on a response (R). Due to the combination of the minimum level of A and the maximum level of C, in two runs the response R was not detectable. How can I treat such a case? Should I discard these two runs and analyze the remaining ones? If so, what should I do about the resulting non-orthogonal design?

Hi, I'm planning to conduct a social experiment on online communication apprehension. I've been told that the number of participants in a social experiment should not exceed 30 people per experimental group. Any idea where I can find references to support or refute this claim?

Many thanks in advance for your assistance.

To test the effect of defoliation on an invasive weed, I will be clipping plants in different seasons (spring, early summer, late summer, autumn [fall]) and at different frequencies (none (= control), 1, 2, 3, or all 4). The attached file "Clipping design" shows it diagrammatically.

Is it correct to describe this as a full factorial design with 4 factors (Spring, ESummer, LSummer, Autumn) each varied at 2 levels (clipped or not clipped)?

It just seems a complicated explanation of a simple design. And analysis would allow for high-order interactions (up to Spring*ESummer*LSummer*Autumn), which also seems a bit complicated.

Frequency is aliased with the other factors, so it can effectively be ignored or analysed separately.

I need to design an experiment to measure or record dislocation movements inside material grain under different strain conditions.

I am working on an experiment using Response Surface Methodology (design of experiments), with Stat-Ease software. During analysis, the software shows that the cubic model for my data is aliased. However, the cubic model is significant (p < 0.05), has an insignificant lack of fit (p > 0.05), and the R² value is also very good (0.985).

Can I use this model to predict the optimized conditions even though it is aliased, given that it is statistically significant?

Can we design an experiment to prove that the speed of a particle cannot go from below the speed of light to above it? I am not talking only about accelerating particles: the experiment must be able to prove that no method, including pushing, jumping or tunneling particles from below the speed of light to above it, can exist. I am not looking for a theoretical derivation but an actual lab experiment.

I think Taguchi is very useful; however, I do not have much experience using these techniques, because my experiments are expensive and very time-consuming.

Is there software nowadays that can design a mixed-level fractional factorial experiment? Perhaps in R, Matlab or Unscrambler?

I have 2 factors at 2 levels and 3 factors at 3 levels. The full factorial experiment is 2^2 × 3^3 = 108 runs, but I want to reduce it to 54. Does anyone know how to do this?
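One simple (if crude) way to halve a 2^2 × 3^3 design is to fraction only the two-level part, e.g. keep the runs with A·B = +1 (defining relation I = AB). A Python sketch, with the serious caveat that this aliases the main effect of A with that of B, so it is only sensible if one of the two is known to be negligible; a catalogued orthogonal array (e.g. via R's DoE.base package) is usually a better choice:

```python
import itertools
from collections import Counter

# Full mixed-level factorial: 2 two-level factors (A, B) and
# 3 three-level factors (C, D, E): 2*2*3*3*3 = 108 runs.
full = list(itertools.product([-1, 1], [-1, 1], [0, 1, 2], [0, 1, 2], [0, 1, 2]))

# Half fraction defined by I = AB: keep runs where A*B = +1.
half = [run for run in full if run[0] * run[1] == 1]

print(len(full), len(half))         # 108 54
print(Counter(r[2] for r in half))  # each level of C still appears 18 times
```

Note that the three-level factors remain fully balanced in the fraction; only the two-level pair pays the aliasing price.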

I want to carry out remediation in a screen house using different macrophytes to test their remediation potential on 5 different industrial effluents. How do I design the experiment?

I am planning to prepare, for a reputable high-impact journal, a review giving a brief and lucid account of the application of design of experiments and statistics in pharmaceutical research.

If you are interested, please respond to become a coauthor on the publication.

Most experiments that could be used would involve induced associative learning recorded in the lion's memory. How can we remove that component and still conduct an experiment?

I am working on a transcriptomics project to identify biomarkers which can separate two classes of hepatotoxicants. There are 10 compounds: 5 belong to one class and the other 5 to the second class. For each compound, 3 concentrations are tested. In total there are 32 samples from both classes in one experiment (biological replicate), including the control sample (the same control will be used for all comparisons). With a sample size of 6, I have 192 samples from 6 independent experiments. Now I need to perform hybridization of the 192 samples in 2 runs or batches.

I am wondering what is the best way to randomize the samples to avoid possible batch effects.

Thanks in advance.
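A common approach is stratified (blocked) randomization: make each batch a miniature of the whole study, so the batch effect is orthogonal to class, compound and dose. A Python sketch under assumed labels (2 classes × 5 compounds × 3 doses × 6 biological replicates; the control samples are omitted here for brevity):

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical sample labels: (class, compound, dose, biological replicate).
samples = [(cls, cmpd, dose, rep)
           for cls in ("classA", "classB")
           for cmpd in range(5)
           for dose in range(3)
           for rep in range(6)]

# Stratify on (class, compound, dose): shuffle each stratum's 6 replicates,
# then deal them alternately to the two batches, so every condition appears
# equally often (3 times) in each run.
strata = {}
for s in samples:
    strata.setdefault(s[:3], []).append(s)

batches = {1: [], 2: []}
for members in strata.values():
    random.shuffle(members)
    for i, s in enumerate(members):
        batches[1 if i % 2 == 0 else 2].append(s)

print(len(batches[1]), len(batches[2]))   # 90 90
```

Finally, shuffle the processing order within each batch as well, and record batch as a covariate so any residual batch effect can still be modeled (e.g. with ComBat, or a batch term in limma).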

I did a 2-level full factorial design with 3 factors to find the significance of each factor and their interactions on a response. After ANOVA on the untransformed response, I found no factors with significant p-values, while 2 factors were significant when the same response was log-transformed. What can I say about the significance of the factors? Are they significant or not?

Does anyone know some proxies to measure audit quality as they are used by Kaplan/Mauldin (2008) or Knapp (1991)?

I did the following full factorial experiment to find the significance of the variables and fit a model. I analyzed the data using both a standard least squares model and a generalized linear model (normal distribution, identity link function). The results of both analyses are attached. Why do they show completely different p-values while the model estimates are exactly the same? What can I say about the significance of the variables?

X1 X2 X3 Y1

+1 +1 -1 67.5
-1 +1 +1 0.5
+1 -1 -1 8.9
-1 -1 +1 0.4
+1 +1 +1 56.9
-1 -1 -1 8.7
+1 -1 +1 6.6
-1 +1 -1 69.4
 0  0  0 37.1

(all variables are continuous)

Generally, how can I decide which model (GLM or standard least squares) should be used when I have no idea about the response distribution?

----------------------

Y2

0.034

0.001

0.011

0

0.144

0.007

0.035

0.021

0.053

Consider Y2 as another response of the designed experiments above. When you check the distribution of Y2, you will find 0.144 to be an outlier. Does this mean that a least squares model is unsuitable because it is affected by outliers?
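On the estimates being identical: a GLM with a normal distribution and identity link *is* ordinary least squares, so the coefficients must coincide; only the inference differs (the GLM typically reports Wald z-tests, while least squares reports t-tests on the few residual degrees of freedom left by 9 runs). A quick check of the main-effects fit to the Y1 data above, using numpy only:

```python
import numpy as np

# Coded design matrix (X1, X2, X3) and response Y1 from the table above.
X = np.array([[ 1,  1, -1], [-1,  1,  1], [ 1, -1, -1], [-1, -1,  1],
              [ 1,  1,  1], [-1, -1, -1], [ 1, -1,  1], [-1,  1, -1],
              [ 0,  0,  0]])
y = np.array([67.5, 0.5, 8.9, 0.4, 56.9, 8.7, 6.6, 69.4, 37.1])

# Main-effects least squares fit: y = b0 + b1*X1 + b2*X2 + b3*X3.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef.round(4))    # intercept and the three main-effect coefficients
```

Because the coded columns are orthogonal, each coefficient is simply the projection of y onto that column, and X2 clearly dominates. For the outlier question about Y2, a robust alternative (e.g. least absolute deviations or Huber regression) would downweight the 0.144 observation rather than letting it pull the fit.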

~10% of reactive oxygen species (ROS) can originate from extramitochondrial sites like NADPH oxidases. Besides knocking out mitochondrial genes, is there a way to determine the difference in cell culture? I am trying to design an experiment to show that the origin of the ROS is extramitochondrial. As far as I know, fluorescence-based assays do not reveal where the ROS come from. Am I missing something?

I'm trying to design an experiment on domain wall resonance driven by an alternating field (magnetic field or current). My concern is the frequency range of the resonance (ideally in Permalloy, with a 50 nm domain wall size) and what kinds of experiments have been done. I'm using geometric constrictions to trap the domain walls.

Individual variable        Symbol   Level -1   Level 0   Level +1
Extraction time            X1       20         24        28
Extraction temperature     X2       4          30        56
Vol. of enzyme solution    X3       4          5         6

I have tried water, in which the extract is insoluble, while DMSO and other organic solvents themselves show an inhibitory effect in the assay.

I am planning to test a hypothesis about the pathophysiology of a particular disease. I would therefore like to know how many disease cases I need to study for such research.

I want to do 2^4 (+ 3 center points) experiments to find the effect of 4 factors on a response, each at two levels. One of the factors is temperature. Running the experiments in fully random order takes a long time, because I have only one equipment set for holding the temperature fixed at its max, min or middle value. Can I run all experiments requiring the max (or min, or middle) temperature together instead of randomizing fully? As far as I know, we cannot block the experiments on one of the factors (temperature).
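What is described here, randomizing freely within groups of runs that share a temperature setting, is exactly a split-plot restriction, with temperature as the whole-plot (hard-to-change) factor; the analysis then needs two error terms. A sketch of such a run order in Python, assuming the 2^4 design plus 3 center points from the question:

```python
import itertools
import random

random.seed(42)

# 2^4 factorial plus 3 center points; factor 0 is temperature (hard to change).
runs = [list(r) for r in itertools.product([-1, 1], repeat=4)] + [[0, 0, 0, 0]] * 3

# Group runs into whole plots by temperature level, randomize the order of
# the temperature settings, then randomize the remaining (easy-to-change)
# factors within each temperature group.
groups = {}
for r in runs:
    groups.setdefault(r[0], []).append(r)

temperature_order = list(groups)       # the distinct settings: -1, 1, 0
random.shuffle(temperature_order)

run_order = []
for t in temperature_order:
    block = groups[t]
    random.shuffle(block)
    run_order.extend(block)

print(len(run_order))   # 19 runs, with only two temperature changes
```

The caveat is that whole-plot effects (temperature) are estimated with fewer effective replicates than the within-plot factors, which a standard completely-randomized ANOVA will not reflect.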

I used to use full and fractional factorial DoE methods with mathematical analysis. Right now I am using the Taguchi approach. I like Taguchi because of its easier calculations and higher precision; moreover, there is no need to derive a mathematical model. However, in my applications interactions between inputs often play an important role, and Taguchi can then be insufficient. Can anyone suggest other design-of-experiments methods and point me to additional literature?

I am interested in finding projects and papers on this subject that focus especially on that kind of relation: Relational Design in Andrew Blauvelt's concept.

Plotting graphs is among the best means of analyzing our results. But if they are plotted wrongly, then surely we will draw wrong conclusions.

Many biologists are not very familiar with programming languages, so user-friendly software may be preferred over software that involves a command-line interface.

I was wondering if anyone could help me with a question regarding the assumptions for a statistical test I am running as part of a manuscript that I am revising now for publication (after receiving initial comments by the reviewers).

Basically, I recorded voices of men and women in two languages, responding to both men and women who were categorised either as attractive or unattractive, and I am analysing 4 different acoustic parameters of their voices. So, it is a 2 x 2 x 2 x 2 mixed design MANOVA, for which I have 2 between-subject variables (sex of the speaker and language) and 2 within-subject variables (target sex and target attractiveness), all with two levels, and 4 outcome variables. In total, I have 110 participants (30 men and 30 women in one language, and 25 men and 25 women in the other), and 4 observations per participant for each one of the 4 outcome variables. Therefore the degrees of freedom are 4, 103 for multivariate results, and 1, 106 for univariate.

Because all independent variables have only two levels, sphericity is not an issue. However, I have not been able to find clear information about multivariate normality and the variance-covariance matrices for a mixed design like mine, and about how robust the test is to violations of these. I can run Box's test, but it seems that to interpret its results it is essential to test multivariate normality, which apparently is not possible in SPSS (which is what I normally use). Even more, all the information I have found seems contradictory regarding the importance of group sizes and how they affect the robustness of the main MANOVA to possible violations of the assumptions.

As you can see, I am very confused. I would appreciate any advice.

There are many online tools for epitope prediction, but which one is generally suitable for polyclonal antibody production (perhaps for western blot)?

Mixture design of organic solvents

I have recently conducted a randomized experiment. I have one major (in my opinion) finding and potentially two more interesting findings. What would be a better strategy to get this into these journals: A) Simply focus on the main finding, B) Discuss the main finding and then add the other two as added complexity, C) Advance all three hypotheses together in the lit review section, and then test all three and present three findings together.

I am using backstepping position control for an electrohydraulic servo system. I use a dSPACE DS1005 board and Real-Time Workshop for the experimental setup, together with a 4/3-way servo valve and an asymmetric cylinder. The simulation performs very well, but in the experiment the extension case shows noise and disturbance, while the retraction case works very well.

I am carrying out a three-factor ANOVA to test for significant differences among three factors: site, shore and station. I am now finding significant interactions, either site × shore or even among all three factors. Though conditions within a site vary from those at other sites, I see no way to explain an interaction between site and shore type. In such cases, are the interactions left uninterpreted?

I have to measure the vibrations of blades fixed to a rotor. Unfortunately I have a very limited budget. We have just started the project and tried to get some funds. At present I have to prepare the measuring system on a rotor with a diameter of ca. 5 m. What is the cheapest way to do this? The system can rotate at various angular velocities in the range from 0 to 1000 rpm.

Effects of ambient light; How to design experimental device?

I have six treatment groups (different cell types) with four biological replicates per group, in which I would like to compare protein levels. I have come up with several experimental designs, each with pros and cons. For my first design, I thought to run four separate gels, each containing 1 lane per treatment group. For a second design, I would run six different gels, one per cell type, with 1 lane per biological replicate. The trade-offs between the two are inter-gel variability and culture issues (different growing times, passage numbers, etc.).

Does anyone have any recommendations or thoughts on what the best way to go about this would be?

For instance, in a 2x2 factorial experiment with 4 treatments, what minimum number of animals can be used for each treatment?

I'm working with a colleague on series of experiments treating mice with compound A to see if their condition improves. Baseline behavior tests were administered to determine the difference between WT and diseased mice, and then each genotype was to be separated, 50% for no treatment, 50% for treatment with compound A. My initial reaction was to randomly assign the mice to either treated or not treated based upon drawing mouse numbers out of a hat.

Instead, my colleague analyzed the baseline tests and decided which mice would be treated or not based upon these tests, in an effort to "ensure accurate representation of all baseline levels". Now, ignoring the fact that I would rather have done this completely blind and had someone else randomly assign them, is this an acceptable way of deciding on treated vs. untreated mice? I'm not 100% comfortable with it, so I'm curious what the scientific community at large thinks.

After optimization through a DOE algorithm, the parameter of interest lies outside the range of the DOE dataset, indicating that there is an interaction effect between variables. What is the simplest regression method if we know the correlation between the individual variables that make up the input parameters?

Let me know location for ball mill.

I'm analyzing the interaction between a plough tool and the soil. I have a DOE for this analysis, but my response is a matrix.

Can this be done? And if so, could someone supply the design please?

When using stratified sampling to build focus groups, it may be harder to find groups with certain qualities (in my case a certain style of religiosity). Sometimes it is even necessary to boost samples. Of necessity, I have been unable to find equal numbers of each instance of religiosity style - I have held 8 focus groups with one religiosity style (heritage) but only one with a contrasting style (convert) as the latter is quite rare. To what extent can I base conclusions on the contrast between the two types of group - and how would I qualify my findings in a write up?

The lack of experimental detail seems to be a problem for most posts I visit to try to offer advice. There needs to be a policy, or a standard set of information, offered up to get proper advice. Check out some of the Q&A spaces in the computer science realm to see what I mean (e.g. Stack Overflow). The majority of replies to a post seem to always be clarification questions, and then a few people take a stab at an answer.

I sort of understand most people's reluctance to provide details, as they fear being scooped, but if you want help you have to give a sufficient amount of detail. Most problems stem from experimental artifacts or a fundamental misunderstanding of biological principles. For example, you can hide the name of a gene but should provide functional details, such as whether it is a transcription factor or a kinase. I think the name of the sequencing platform, or details of how a library was prepared, are very pertinent pieces of information that will not divulge your research to the world.

The posts on this system are too much like Twitter. That's fine for quips or for pointing people at a resource, news article or event, but not for any serious conversation.

I work on the optimization of a surveillance network for an insect ecosystem. I am looking for models or methodologies for this study. If you have relevant experience, please share some references or links.

Thanks in advance!

For example, when the nature of the experiments indicates that it is highly unlikely that the random component of the observations will affect the estimates of the factor effects.

I am planning to conduct experiments with composites of different proportions of Si3N4 and hBN, to investigate the effect of the proportion on the tribological behaviour of the composite. What should the design of experiments be?

The dependent variable should be operationally defined in measurable terms, and as such it should be characterized as reliable and valid. Could someone clarify these concepts?

As is usual in some disciplines, a researcher conducted an experiment without having designed it beforehand. The experiment consisted of exposing 4 different groups of mice to two types of food (simultaneously measuring preference and amount ingested), and each group was in turn exposed to different conditions.

What would be the appropriate statistical procedure?

The problem I have is the dependence involved in the preference measurements.

Friedman and Sunder defined experimental data as "data deliberately created for scientific (or other) purposes under controlled conditions", and laboratory data as "data gathered in an artificial environment designed for scientific (or other purposes)." Based on these definitions, I would like to know if experimental data are in anyway different from laboratory data. Where can the boundary be identified if they actually differ?

What is the difference between the standard method and the S/N method in data analysis?

Whenever individuals or groups of scientists plan an experiment, small or large, there is a bias in the estimated accuracy or 'outcome' which more often than not favours high accuracies. This 'higher than before' accuracy is often used as the driver for funding and scientific acceptance of the proposal. How does our subjective confidence (being higher than our objective accuracy) influence our sciences? Is it a positive or a negative influence? Do we actually achieve more this way? Is there a balance between overconfidence, optimism and actual achievement?

Assume that we have 8 factors and want to design experiments using a 2-level fractional factorial design. At the same resolution, the 1/8 fraction and the 1/16 fraction require roughly the same number of experiments:

37 runs with 1 replicate (32 + 5 center points)

42 runs with 2 replicates (16×2 + 5×2 center points)

Considering that we can do no more than 42 experiments, which gives the better estimation: with replication or without?

There are many methods, such as 2-level factorial, central composite design, Taguchi, and so on. The question is: which of these is most appropriate for DOE and concrete analysis?

What would be a good behavioral task to test rats' sensory discrimination difficulties?

I'm using a design-of-experiments method, specifically the Box-Behnken matrix. Now comes the time to plot some figures to see the effects of the three factors, and I started to do it as in this paper:

The thing is that when studying 2 of the 3 factors, there are two results for the same point, given the constitution of the matrix. Matlab automatically averages those two values, but I wonder whether that is really appropriate, since the third factor has an effect on the response. So is this averaging method still valid for analyzing the effects?

Can you suggest another method which could be better?

My goal is to detect differential gene expression in two plant biotypes. The genes involved in hormone biosynthesis and regulation are what I'm mostly interested in. What is the best way to do this on a limited budget? Should I use a smaller sample size with higher sequencing depth (100 bp PE), or a larger sample size with lower sequencing depth (50 bp SE)?

This question needs to be resolved, seeing that otherwise it is impossible to include qualitative attributes in a pivot design. I have so far not tracked down a single article that has done so (they all use only quantitative attributes, which can simply be coded linearly).

I've got some data from an experiment in which participants were asked to complete a task (placing a group of objects onto a target using a tool) as fast as they could with the minimum number of errors (dropped objects). As time depends on errors (dropped objects cannot be picked up again, so the experiment terminates earlier than if all objects were placed without error), I would like to combine the time and error data into a single figure by assigning a time penalty to each error, but I do not want to choose this number arbitrarily. Are there any classic methods for determining what this penalty should be? I imagine it will be some combination of the maximum/minimum/average time and errors.

Any suggestions appreciated.
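One classic option from the reaction-time literature is the inverse efficiency score (IES), which avoids an arbitrary per-error penalty by dividing time by the success rate instead of adding a constant per error. A minimal sketch with hypothetical numbers:

```python
def inverse_efficiency(time_s: float, n_placed: int, n_total: int) -> float:
    """IES = completion time / proportion of objects placed successfully.
    Higher is worse; errors inflate the score multiplicatively."""
    return time_s / (n_placed / n_total)

# Hypothetical participants placing 10 objects:
print(round(inverse_efficiency(42.0, 10, 10), 2))   # 42.0  (no drops)
print(round(inverse_efficiency(35.0, 7, 10), 2))    # 50.0  (faster, but 3 drops)
```

Its main caveat is that it assumes no strong speed-accuracy trade-off across conditions; an alternative is to model time and errors jointly rather than collapsing them into a single number.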

The yield of a chemical product (Y) is a function of the concentrations of 3 ingredients (X1, X2, X3), temperature (T), and pH. I want to find a fitted model which relates Y to all the factors (X1, X2, X3, T, pH).

* T and pH are independently adjusted in each experiment (they do not depend on amount (or concentration) of ingredients and each other).

** I want to investigate the factors at 3 levels (3 concentrations of ingredients, 3 temperatures and 3 pHs).

*** There is no linear relation between Y and the factor levels.

I've been observing my liquid sample under an optical microscope and found that the digital picture shown on the computer is obscure and lower in quality than the actual image seen through the eyepiece. I assume this is common to all microscopes. How can I capture the eyepiece image as a video?

We have isolated genomic DNA from three biological replicates (3 different petri plates of filamentous fungi) of our samples (2 treatments and a control). We are then proceeding to qPCR, examining the effect of our chemical treatment on starting copy number available for amplification. In order to optimize the assay, I would like to run a standard curve to determine our reaction efficiency (I believe this is also a suggestion for high-quality data from MIQE). I am looking at a 5-point serial dilution with three technical replicates of each point, taken from my control sample's gDNA. But how do my biological replicates play into this? Do I pick just one at random, or do I have to run a standard curve for each bio-rep and hope that all three are statistically similar? And if the latter is the appropriate course, then I won't be able to run my standard curves and samples on the same plate (I'm limited to 48 wells). In such a case, would it be appropriate to run my samples immediately after my standard curves?

I'm new to this, so I appreciate any advice!
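On the efficiency calculation itself: whichever replicate(s) the curve is run on, the efficiency comes from the slope of Cq regressed on log10(template amount), with E = 10^(-1/slope) - 1 and a slope near -3.32 corresponding to ~100 %. A minimal sketch (values illustrative):

```python
# Sketch: estimate qPCR reaction efficiency from a standard-curve
# serial dilution. Cq is regressed on log10(template amount);
# E = 10**(-1/slope) - 1, with slope ~ -3.32 meaning ~100 % efficiency.
import math

def efficiency(log_amounts, cq_values):
    n = len(log_amounts)
    mx = sum(log_amounts) / n
    my = sum(cq_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_amounts, cq_values))
    sxx = sum((x - mx) ** 2 for x in log_amounts)
    slope = sxy / sxx                        # Cq change per log10 step
    return 10 ** (-1.0 / slope) - 1.0

# a perfect 10-fold series: Cq rises by log2(10) ~ 3.32 per dilution
step = math.log2(10)
log_amounts = [4.0, 3.0, 2.0, 1.0, 0.0]      # log10(template amount)
cqs = [20.0 + step * i for i in range(5)]    # Cq at each dilution
e = efficiency(log_amounts, cqs)             # 1.0, i.e. 100 %
```

Comparing the efficiencies (and slopes) of per-replicate curves is one way to check whether the biological replicates behave "statistically similarly" before pooling them onto one curve.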

I am doing a cell experiment on OP9 cells. My sample turns into a suspension when I dilute it with the media. Would it be all right to use that sample even though it is a suspension?

I want to analyze gas evolved in minute amounts (<0.5 ml). It is also necessary to store multiple gas samples.

I am planning an experimental layout of random sampling using a quadrat of size x over an entire field, rather than individual plots. Does anyone know of good material that can support this choice of experimental design?
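For the layout itself, simple random placement of quadrats over the whole field is easy to pre-generate and document. A small sketch (field dimensions, quadrat side and sample size are placeholders; positions are kept fully inside the boundary):

```python
# Sketch: place n quadrats of side `quadrat_side` at simple-random
# positions within a rectangular field. The (x, y) pair is the
# lower-left corner of each quadrat.
import random

def random_quadrats(field_w, field_h, quadrat_side, n, seed=None):
    rng = random.Random(seed)                # seeded for a reproducible layout
    return [
        (rng.uniform(0, field_w - quadrat_side),
         rng.uniform(0, field_h - quadrat_side))
        for _ in range(n)
    ]

positions = random_quadrats(100.0, 50.0, 1.0, 30, seed=1)
```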

I'm looking for a way to inspect the flow behavior of vapors and their influence on my chemical system.

How can one determine the coefficient of discharge for an oscillating helium flow from 22 bar to 8 bar? The oscillation is produced by a rotary valve that generates a sinusoidal pressure waveform to create an oscillating flow. Which instrument would be helpful? A Coriolis meter could be useful, but I do not know whether it is suitable for a 22 bar to 8 bar pressure variation with to-and-fro motion of the gas.
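As a rough framing of the calculation (whatever instrument measures the actual flow): Cd is the measured mass flow divided by the ideal isentropic flow at the same upstream conditions. For helium (γ = 5/3) the 8/22 ≈ 0.36 pressure ratio is below the critical ratio of about 0.49, so the instantaneous flow is choked and the ideal flow follows the choked-orifice formula. A sketch with illustrative throat area, temperature and measured flow:

```python
# Sketch: discharge coefficient = measured mass flow / ideal choked
# (isentropic) mass flow through the valve throat. A_throat, T0 and
# the measured flow below are illustrative assumptions.
import math

GAMMA = 5.0 / 3.0          # helium
R_HE = 2077.0              # J/(kg K), specific gas constant of helium

def ideal_choked_mass_flow(area_m2, p0_pa, t0_k):
    g = GAMMA
    return (area_m2 * p0_pa * math.sqrt(g / (R_HE * t0_k))
            * (2.0 / (g + 1.0)) ** ((g + 1.0) / (2.0 * (g - 1.0))))

def discharge_coefficient(m_dot_measured, area_m2, p0_pa, t0_k):
    return m_dot_measured / ideal_choked_mass_flow(area_m2, p0_pa, t0_k)

# illustrative: 1e-5 m^2 throat, 22 bar upstream, 300 K, 15 g/s measured
cd = discharge_coefficient(0.015, 1e-5, 22e5, 300.0)
```

For the oscillating case this would be evaluated per sample over the waveform, using phase-resolved upstream pressure.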

I would like to know the best and most widely used software for experimental design, specifically OPTIMAL design.

We have the sequence of an siRNA we have used to knock down a gene of interest. Rather than keep ordering it, we converted it to an shRNA and cloned it into the pU6YH vector.

Cloning worked and the cells are transfected, but there is no knockdown relative to the siRNA control. Can all siRNAs be converted to shRNAs? We have siRNA-resistant constructs that we do not want to re-design. Could the sequence/length of the loop make a big difference (we are using one that worked for someone else)?

Last term we ran a monopoly experiment based on Nelson, R. & Beil, R. (1994).

We made new findings and developed enhancements. Which journal might be interested?

I am wondering what possibilities there are for testing whether one factor in an experimental design is more important than another, with importance measured as the amount of variance explained. I think that testing the regression coefficients should work. Do you have any other ideas? Or do you know of research in which such a regression approach has been used?
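One simple variance-explained comparison is the drop in R² when a factor is removed from the full model (related to squared semi-partial correlations; dominance analysis is a more thorough relative-importance method). A sketch on simulated data (the effect sizes are invented for illustration):

```python
# Sketch: factor importance as the decrease in R**2 when that
# factor's column is dropped from the regression. Data simulated
# with a large A effect and a small B effect.
import numpy as np

rng = np.random.default_rng(0)
n = 200
a = rng.choice([0.0, 1.0], n)            # factor A, dummy coded
b = rng.choice([0.0, 1.0], n)            # factor B, dummy coded
y = 2.0 * a + 0.5 * b + rng.normal(0, 1, n)

def r_squared(columns, y):
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

full = r_squared([a, b], y)
delta_a = full - r_squared([b], y)       # variance uniquely tied to A
delta_b = full - r_squared([a], y)       # variance uniquely tied to B
```

The two deltas can then be compared directly, or via a bootstrap if a formal test of the difference is needed.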

I am measuring the level of a protein in different genetic backgrounds.

For example, I am measuring the level of protein X in wild type and atg8mutant.

I ran both the wild type and mutant samples on the same gel. Probed initially for protein X, followed by actin (loading control).

Bands were analyzed using an analysis software (Total lab).

Then I did a couple of normalizations.

1) Level of protein X was normalized to the respective actin control.

2) The control (wild type) condition was normalized to 1 and all other experimental conditions were compared to this.

Following is an example of what I have done.

SDS fraction    Vol of protein X    Vol of actin
Wild type       695432.72           174080.04
atg8a mutant    948245.24           61598.79

1) Normalization: Vol of protein X in lane A divided by Vol of actin in lane A.

That gives wild type = 3.99

atg8a mutant = 15.39

2) Relative protein level in relation to wild type: divide each sample, including the control, by the wild-type value.

That gives wild type = 1

atg8a mutant = 3.85

I repeated the experiment two more times and analyzed the data as described above. The other two experiments follow the same trend (the mutant has more protein). Since these data come from three independent experiments (three different westerns), how should I apply statistics?

Should I use the actin-normalized values (3.99 and 15.39 above) or the values normalized to 1 (1 and 3.85) for the analysis?

Should I do a paired t-test? Or a two-way ANOVA (each experiment as a random variable and group (wild type vs mutant) as a fixed variable)?

Thank you very much. Any help will be appreciated.
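One common approach (a sketch, not a prescription) is to keep one actin-normalized mutant/wild-type ratio per independent blot, log-transform, and test the mean log fold change against 0 with a one-sample t-test; on logs this is equivalent to a paired t-test of mutant vs wild type within blots. The 3.85 below is from the post; the other two ratios are invented for illustration:

```python
# Sketch: one-sample t-test on log2 fold changes, one value per
# independent western. Only 3.85 comes from the actual data; the
# other two ratios are illustrative placeholders.
import math
import statistics

fold_changes = [3.85, 3.2, 4.4]          # mutant / wild type, one per blot
logs = [math.log2(fc) for fc in fold_changes]

mean = statistics.mean(logs)
sd = statistics.stdev(logs)              # sample SD (n - 1)
n = len(logs)
t_stat = mean / (sd / math.sqrt(n))      # compare to t with n - 1 = 2 df
```

Log-transforming matters because ratios normalized to 1 are bounded below and right-skewed; on the log scale the "control = 1" reference becomes 0 and the t-test assumptions are more plausible.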

Seeking DoE (Design of Experiments) help to frame a treatment matrix for plastic injection moulding experimentation.

I have 3 independent blocks (injector (3 factors), impression (2 factors) and injectant (1 (pair) factor)), each factor at 3 levels, as well as a factor (sprue expansion angle) which I have to test at 5 levels, as indicated in the attached Excel file.
For each combination of these 3 blocks + 1 factor, I will be able to record a pair of responses/contrasts (Performance (parametric) & Quality (non-parametric)).
My hypotheses are:
(1) the sprue expansion angle is significantly influenced by the previous factors as well as their interactions;
(2) the sprue expansion significantly influences the performance and quality responses.
Kindly help me design an appropriate treatment matrix for the injection moulding experimentation.
I will be glad to answer any clarifications.
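To size the problem before choosing a fraction: the fully crossed matrix for six 3-level factors and one 5-level factor can be enumerated directly (factor names below are placeholders for those in the attached file):

```python
# Sketch: fully crossed treatment matrix for the layout described:
# six 3-level factors (three injector, two impression, one injectant
# pair) crossed with the 5-level sprue expansion angle.
from itertools import product

three_level = ["inj1", "inj2", "inj3", "imp1", "imp2", "injectant"]
levels3 = (1, 2, 3)
angle_levels = (1, 2, 3, 4, 5)           # sprue expansion angle

matrix = [dict(zip(three_level + ["angle"], combo + (angle,)))
          for combo in product(levels3, repeat=6)
          for angle in angle_levels]
print(len(matrix))                       # 3**6 * 5 = 3645 runs
```

The 3645-run full cross is the upper bound; a mixed-level orthogonal array or D-optimal subset of this list is what a practical treatment matrix would select from.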