
# Design of Experiments - Science method

Explore the latest questions and answers in Design of Experiments, and find Design of Experiments experts.
Questions related to Design of Experiments
• asked a question related to Design of Experiments
Question
One of the most significant steps in solving multi-criteria decision-making (MCDM) problems is the normalization of the decision matrix. Normalizing the data in a judgment matrix is an essential step, as it can influence the ranking list.
Is there any other normalization method for the "nominal-is-better" case besides the normalization that is possible through gray relational analysis (GRA)?
Here are a few common ones:
1. Min-max normalization: This method involves scaling the data so that the minimum value is mapped to 0, and the maximum value is mapped to 1. All other values are then scaled proportionally between 0 and 1. This method assumes that all criteria have equal importance.
2. Vector normalization: This method involves dividing each value in a criterion column by the Euclidean norm of that column vector. This method assumes that all criteria are of equal importance and that they are independent of each other.
3. Standardization: This method involves transforming the data so that it has a mean of 0 and a standard deviation of 1. This method assumes that the criteria are normally distributed and that they are of equal importance.
4. Logarithmic normalization: This method involves taking the logarithm of each value in the matrix. This method is useful when the data has a large range of values or when the values are highly skewed.
It's important to note that the choice of normalization method should depend on the nature of the data and the decision problem at hand.
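For concreteness, the four methods above can be sketched in a few lines of NumPy; the 3x2 decision matrix is hypothetical, and each method is applied column-wise (per criterion):

```python
import numpy as np

# A sketch of the four schemes on a hypothetical 3-alternative x 2-criterion
# decision matrix (rows = alternatives, columns = criteria).
X = np.array([[2.0, 10.0],
              [4.0, 20.0],
              [8.0, 40.0]])

# 1. Min-max: map each criterion column onto [0, 1].
minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 2. Vector: divide each column by its Euclidean norm.
vector = X / np.linalg.norm(X, axis=0)

# 3. Standardization: zero mean, unit standard deviation per column.
standard = (X - X.mean(axis=0)) / X.std(axis=0)

# 4. Logarithmic: compresses large ranges / heavy right skew.
logged = np.log(X)
```

Each variant leaves the row/column layout intact, so the normalized matrix can be passed straight to the chosen MCDM aggregation step.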
• asked a question related to Design of Experiments
Question
I need to understand whether the focus should be on one person at a time in such experimental designs. Kindly help me with the references if possible. Thank you.
I can send you a book chapter in PDF format to your email, if you want. Just PM me your email and I will send it :) Hemangi Narvekar
• asked a question related to Design of Experiments
Question
Could any expert try to examine the new interesting methodology for multi-objective optimization?
A brand-new conception of preferable probability and its evaluation were created; the book, entitled "Probability-based multi-objective optimization for material selection" and published by Springer, opens a new way for multi-objective orthogonal experimental design, uniform experimental design, response surface design, robust design, etc.
It is a rational approach without personal or other subjective coefficients, and is available at https://link.springer.com/book/9789811933509,
DOI: 10.1007/978-981-19-3351-6.
Best regards.
Yours
M. Zheng
1. Evolutionary Algorithms (EA): Evolutionary algorithms (EA) are a family of optimization algorithms that are inspired by the principles of natural evolution. These algorithms are widely used in multi-objective optimization because they can handle multiple objectives and constraints and can find a set of Pareto-optimal solutions that trade-off between the objectives.
2. Particle Swarm Optimization (PSO): Particle Swarm Optimization (PSO) is a population-based optimization algorithm that is inspired by the social behavior of birds and fish. PSO has been applied to multi-objective optimization problems, and it has been shown to be effective in finding Pareto-optimal solutions.
3. Multi-objective Artificial Bee Colony (MOABC): MOABC is a multi-objective optimization algorithm inspired by the foraging behavior of honeybees. MOABC has been applied to various multi-objective optimization problems and has been found to be efficient in finding Pareto-optimal solutions.
4. Decomposition-based Multi-objective Optimization Algorithms (MOEA/D): Decomposition-based multi-objective optimization algorithms (MOEA/D) decompose the multi-objective problem into a set of scalar subproblems, then solve them by using a scalar optimization algorithm. MOEA/D has been found to be effective in solving multi-objective problems with high dimensionality and/or large numbers of objectives.
5. Deep reinforcement learning (DRL): DRL is a category of machine learning algorithms that let an agent learn by interacting with the environment and using rewards as feedback. This approach has been used to optimize the decision-making process in multi-objective problems.
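All of these methods search for non-dominated (Pareto-optimal) solutions; the dominance test they share can be sketched in a few lines (the example points are hypothetical, and minimization of both objectives is assumed):

```python
# Minimal sketch of Pareto-dominance filtering (minimization of all objectives).
def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

solutions = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(solutions))  # → [(1, 5), (2, 3), (4, 1)]
```

The surviving set is the trade-off front the algorithms above approximate; they differ mainly in how they generate and evolve candidate points.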
• asked a question related to Design of Experiments
Question
Hi everyone,
I want to set up a new NGS-based kit for my project.
In our project, other researchers used to work with one kit, and it had a good detection rate, but some samples were false negatives.
The second-generation kit is much more comprehensive, and I want to set it up.
How do you design an experiment for the second kit?
How do you choose the best minimum input for library preparation?
You need to go for lot verification for a new kit first, and always use positive and negative controls. The confirmed positive samples you can run in this batch.
• asked a question related to Design of Experiments
Question
This is not a laboratory experiment where we are able to control all of the variables. This is a bit more working-world application experiment. What I'm seeing in the research is a lot around the cadence of giving the spaced review before the final assessment and types of questions to use. However, as we are doing this in a workplace with optional participation, some of the additional variables we have to consider in designing the experiment are how long the tests should be available (without messing up the cadence too much) and how many times does each concept need to be retrieved in a spaced retrieval session (aka mini-testing) to be effective?
Thank you!
Eftychia Aslanidou Thank you! The book didn't come up when I clicked, but the other two did. I think that last one might be the most helpful, but I need to get the full text to be sure. I suspect (since I've read a newer comprehensive meta-analysis) it won't have the answers but it's worth checking out!
• asked a question related to Design of Experiments
Question
Hello.
I would like to ask, how should I incorporate the cytotoxicity as a response in the factorial design, since cytotoxicity depends on concentration?
My only idea is to evaluate the IC50 for each experimental unit, but it seems like a horrendous amount of work.
Are there other approaches that would be more accessible?
Good luck Adam, I cannot think of any other way, research is often a horrendous amount of work.
• asked a question related to Design of Experiments
Question
Hello there!
My question concerns three cases: scientific papers, doctoral theses, and simple presentations. Which data from a design of experiments are necessary in each case, which are more than welcome, and which are completely redundant?
For example, in a full 2-level factorial design, should I present the half-normal probability plot, Pareto plot and ANOVA? Or perhaps ANOVA is better for papers and the Shapiro-Wilk normality plot better for presentations, since it LOOKS better?
Another question: should I present all diagnostic plots (Box-Cox, residuals vs. factors and so on), or is just mentioning that all data are correct enough?
All suggestions are much appreciated!
I agree with what was mentioned by Tarek Moussaoui. You may also consider the following:
1- In simple presentations use graphs to show the effects of main factors and also the interactions.
2- In scientific papers and doctoral theses, in addition to finding the optimum conditions of your experiments using the final RSM model, you should perform a verifying experiment at that condition and compare the experimental response against the model-predicted response.
• asked a question related to Design of Experiments
Question
Can the levels of input factors that yield the maximum value of the response (maximum-is-better) be considered optimum in a full factorial design?
Desirability functions are used in the framework of multi-objective optimization; they try to attain a compromise among the different objectives (for example, finding the factor settings that are simultaneously optimal for a set of objective functions, which may be the yields of products A1, A2, and so on).
I suggest you look into the theory and implementation of the desirability-function method, and other multi-objective RSM methods, in the book on Response Surface Methodology by Myers, Montgomery and Anderson-Cook; there is a complete chapter dedicated to multi-objective optimization methods and the use and implementation of the desirability-function approach in Minitab.
• asked a question related to Design of Experiments
Question
Hi,
I have used a central composite design with four variables and 3 levels, which gives me 31 experiments. After performing the experiments, I found that the model is not significant. However, when I used different data (which I previously obtained), I got a good model.
How do I justify using user-defined data? And why did the CCD fail to provide a significant model?
I would be really thankful for your response.
There are a few questions necessary to ask.
Are you sure you used a central composite design? A CCD requires 5 levels for each factor: -axial, -1, 0, 1, +axial. Perhaps you used a Box-Behnken design, which requires -1, 0, 1?
Next, what do you mean you used different data which gave you a good model? Did you already have responses for this exact design? If not, maybe the previous data do not represent your current experiment?
Finally, are you sure your factors significantly affect the response? They might not, in which case no significant model can be found.
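For reference, 31 runs matches a rotatable four-factor CCD with 7 center points (16 corners + 8 axial + 7 center); a quick sketch of the coded run list, where the 7 center runs are an assumption chosen to reach 31:

```python
from itertools import product

# Coded runs of a rotatable CCD for k = 4 factors: each factor takes
# 5 levels (-alpha, -1, 0, +1, +alpha).
k = 4
alpha = (2 ** k) ** 0.25                          # rotatable alpha = 16^(1/4) = 2.0
factorial = list(product([-1.0, 1.0], repeat=k))  # 16 corner runs
axial = []
for i in range(k):                                # 2k = 8 axial runs
    for s in (-alpha, alpha):
        pt = [0.0] * k
        pt[i] = s
        axial.append(tuple(pt))
center = [(0.0,) * k] * 7                         # 7 center runs (assumed)
runs = factorial + axial + center
levels = sorted({r[0] for r in runs})
print(len(runs), levels)                          # 31 [-2.0, -1.0, 0.0, 1.0, 2.0]
```

If your design only ever used three distinct levels per factor, it was most likely a Box-Behnken or face-centered design rather than a rotatable CCD.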
• asked a question related to Design of Experiments
Question
Hello.
I evaluate factors affecting the synthesis of lipid liquid crystalline nanoparticles with 2^4-1 fractional factorial design.
After the ANOVA, I have two possibilities. The first model with fewer terms (only significant in terms of p-value) has a lower R squared (0.7541). On the other hand, I can choose a model with two additional insignificant terms, one suggesting interaction between factors, and this model's R squared equals 0.8676.
What is more correct to do? To choose the model with only significant terms but with a lower R^2, or the model with more terms, including insignificant ones, but a higher R^2?
You're missing the point here: since you are doing ANOVA, I assume explanation is your goal. So why do you need to consider two models at all, unless you are doing something like response surface methods? In any case, p-value selection doesn't work either. I suggest you see the two references below, especially the one by Montgomery, available for download from the Z-Library. Best wishes, David Booth
• asked a question related to Design of Experiments
Question
Hello!
I went through the 2 level factorial design to screen factors, and now I am moving to the characterization using CCD.
The coded alpha value in Design Expert v.13 is set to ca.-1.4. However, one of my factors assumes discrete values. I can change the alpha value to e.g. -1.5 (I go with even numbers so it works fine). My question is, how does such a change affect the design? Will I encounter any issues?
It might have more runs, no?
There is an 80-minute presentation by Stat-Ease describing how to assess rigorously whether "more runs", etc., are really necessary. It explains the concept of "Fraction of Design Space" (FDS) graphs.
• asked a question related to Design of Experiments
Question
I am carrying out a multiway factorial analysis with four factors each having three levels. Two of my factors A and B have significant main effects whereas the other two C and D don't. There is no first order (AB, AC, AD, BC, BD, CD) or third order interaction (ABCD) between these variables. However, there is significant second order interaction between variable A, B and C. Can this be ignored as a higher order interaction ?
I suppose the interaction ABC is the difference of the AB effect at different levels of C; will there be an ABC interaction when there is no AB interaction?
How do you conclude that "There is no [...] interaction"? Because the respective p-values from your sample happened to be larger than your level of significance?
Note that "Absence of evidence is NOT evidence of absence" (Cox).
You can (you should) ignore an interaction when it makes no sense or when you want to study marginal effects.
• asked a question related to Design of Experiments
Question
Intuition tells me that using the mean of three measurements, which is already characterized by a standard deviation, will interfere with other statistical calculations and subsequently with fitting the model.
However, I am not 100% sure which I should use. Better safe than sorry.
TL;DR: use the averages.
What you want to know is what one should (reasonably) expect when the experiment is replicated. Repeating the experiment means using new experimental units. So the key thing to know is how variable the results are between different experimental units. What is not interesting is the variability between repeated measurements on one and the same experimental unit. This "technical" variability interferes with what you actually want to know.
When the technical variability is very small compared to the variability between experimental units, a single measurement is a sufficiently precise estimate of the value of that experimental unit. Variability between single measurements will (more or less) faithfully represent the variability between experimental units.
When the technical variability is large, a single measurement is not a good estimate. But errors will average out in replicate measurements. By using averages of multiple replicate measurements (taken on the same exp. unit), you will get better estimates.
Statistically, you usually also want to estimate some model parameters and provide uncertainty (or compatibility) intervals or p-values for these parameter estimates. This involves the standard errors, which are calculated based on the relevant "degrees of freedom" of the statistical model. These degrees of freedom depend on the number of experimental units in your study - not on the number of replicate readings per unit, as these would not be statistically independent! Hence, if you use the averaged values, the model sees only statistically independent values (as many as you have experimental units in your study).
Example: let the experimental units be human subjects, and the experiment is to estimate how body weight depends on sex. As an extreme case, you can take one male and one female subject and measure their weights. Because the weight measurement is erroneous (or depends on many things you cannot control), you take 100 measurements from each subject (possibly on different days, day-times, etc.). If you want to estimate (or test) the mean difference, the test "sees" 2 x 100 values and assumes that they are all statistically independent. The standard error will thus be tiny, and you will find a highly statistically significant mean difference which makes you conclude that, say, females are heavier than males. But this conclusion is not backed up by the data. You have NO information on how the weights would differ in other subjects, or whether the highly significant difference you found is a precise estimate of an arbitrary difference between two random subjects or is attributable to a systematic sex effect. The only way to examine a systematic sex effect is to have data from several (many) different subjects (of each sex). And it is ONLY the information per subject that will count. Replicate measurements can help to get more precise (average) values per subject.
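The workflow above can be shown with a purely illustrative simulation (all numbers hypothetical): average the technical replicates first, then let the model see one value per experimental unit.

```python
import random
import statistics

# Hypothetical setup: 8 experimental units, 3 technical replicate readings each.
random.seed(1)
true_values = [random.gauss(100, 10) for _ in range(8)]    # unit-to-unit variability
readings = [[tv + random.gauss(0, 5) for _ in range(3)]    # technical noise
            for tv in true_values]

# Average the technical replicates: one independent value per unit.
unit_means = [statistics.mean(r) for r in readings]

# Degrees of freedom come from the number of units (8), not raw readings (24).
n_units = len(unit_means)
se = statistics.stdev(unit_means) / n_units ** 0.5  # standard error of the mean
print(n_units, round(se, 2))
```

Feeding all 24 raw readings into the model instead would understate the standard error, exactly as in the body-weight example.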
• asked a question related to Design of Experiments
Question
RBD(Randomized Block Design)
CRD (Completely Randomized Design)
As per my knowledge, CRD is used for homogeneous experimental units, say lab and greenhouse research, where influencing factors can be made the same for all treatments. But in field experiments there is heterogeneity of soil and climatic components, and in such cases we use RCBD. To answer your question directly: when there is homogeneity in experimental units, RCBD and CRD could be the same, which is possible only in greenhouse and lab research, not in field experiments.
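The difference between the two layouts is only in how the randomization is performed; a minimal sketch (treatment and block labels are hypothetical):

```python
import random

random.seed(0)
treatments = ["T1", "T2", "T3", "T4"]

# CRD: all 12 homogeneous units are randomized in one pool.
crd = treatments * 3
random.shuffle(crd)

# RCBD: randomize independently within each block (e.g. field strips),
# so every block contains every treatment exactly once.
blocks = ["block1", "block2", "block3"]
rcbd = {b: random.sample(treatments, len(treatments)) for b in blocks}
```

With homogeneous units the block effect is negligible and the two designs give equivalent layouts; in a heterogeneous field, the within-block randomization of RCBD is what removes the soil gradient from the error term.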
• asked a question related to Design of Experiments
Question
For DOE (Design of Experiments) and RSM (Response Surface Methodology), I'd like to use Python. Please post any relevant experiences, references, or Python codes in this thread.
I'm looking forward to hearing positive news from you.
Warm regards
Amir Heydari
Below, I present my experimental data, published in https://www.sciencedirect.com/science/article/abs/pii/S0960852419313148
Let's try to find the obtained quadratic equation using Design-Expert software and compare the results:
Q=+838.49+202.28A+382.17B+416.84C+94.04AB+109.54AC+278.44BC-220.71A²+125.44B²+16.19C²
| A | B | C | R |
|-----|---|------|--------|
| 10 | 6 | 1500 | 347.5 |
| 10 | 3 | 2500 | 429.3 |
| 10 | 3 | 500 | 200.3 |
| 10 | 9 | 500 | 316.7 |
| 10 | 9 | 2500 | 1350.2 |
| 125 | 3 | 1500 | 478.6 |
| 125 | 6 | 500 | 483.2 |
| 125 | 6 | 2500 | 1250.3 |
| 125 | 6 | 1500 | 790.2 |
| 125 | 9 | 1500 | 1473.4 |
| 240 | 9 | 2500 | 2276.5 |
| 240 | 6 | 1500 | 912.2 |
| 240 | 3 | 500 | 312.3 |
| 240 | 3 | 2500 | 670.2 |
| 240 | 9 | 500 | 495.6 |
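Since the question asks for Python: the runs above can be coded to [-1, 1] and the full quadratic model fitted with plain NumPy least squares; the linear and two-factor-interaction coefficients should reproduce the coded Design-Expert equation posted above.

```python
import numpy as np

# The 15 runs posted above: actual factor levels A, B, C and response R.
data = np.array([
    [10, 6, 1500, 347.5], [10, 3, 2500, 429.3], [10, 3, 500, 200.3],
    [10, 9, 500, 316.7], [10, 9, 2500, 1350.2], [125, 3, 1500, 478.6],
    [125, 6, 500, 483.2], [125, 6, 2500, 1250.3], [125, 6, 1500, 790.2],
    [125, 9, 1500, 1473.4], [240, 9, 2500, 2276.5], [240, 6, 1500, 912.2],
    [240, 3, 500, 312.3], [240, 3, 2500, 670.2], [240, 9, 500, 495.6],
])
X_act, y = data[:, :3], data[:, 3]

# Code each factor onto [-1, 1] (Design-Expert reports coded coefficients).
lo, hi = X_act.min(axis=0), X_act.max(axis=0)
A, B, C = (2 * (X_act - lo) / (hi - lo) - 1).T

# Full quadratic model: intercept, linear, two-way interactions, squares.
M = np.column_stack([np.ones_like(A), A, B, C,
                     A * B, A * C, B * C, A**2, B**2, C**2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(coef, 2))  # coded coefficients of the quadratic model
```

For a dedicated DOE workflow in Python, packages such as `pyDOE2` (design generation) and `statsmodels` (ANOVA, diagnostics) cover most of what Design-Expert does here.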
• asked a question related to Design of Experiments
Question
For our Design of Experiment (DOE) we're looking for an optimal zetapotential range for a nanoparticle size between 300 and 500 nm. If possible, could you please refer to the literature about this topic with the answer?
@Sakil Mahmud That just is not true. That is a rule-of-thumb that has been debunked over and over again. Did you even bother to look at any of the other answers to this question? Instead of finding articles to copy/paste from that you clearly don't understand, please keep these threads free of unhelpful and misleading answers.
• asked a question related to Design of Experiments
Question
I studied a process using design of experiments. First, I screened with a fractional factorial design. The results showed that 3 out of the 5 affecting factors are significant, and I also found significant curvature in the model. So I used an RSM method (Box-Behnken) to better understand the process using the 3 selected factors. The results showed that the linear model is the best fit to the data. I am confused by the results. What is the reason that the fractional factorial results show curvature, yet the response behaves linearly in the RSM method?
• asked a question related to Design of Experiments
Question
Hi all,
In an experimental investigation, there are two parameters to be measured, say X1 and X2. My goal is to see how X1 varies with X2. Specifically, I am interested in classifying the graph of X1 versus X2 according to a number of characteristic graphs. Each characteristic graph corresponds to a specific state of the system which I need to determine.
The problem is with the graph of X1 vs X2 undergoing significant changes when replicating the test, thus making the classification a perplexing task. A simple approach I could think of is taking the average of these graphs, but I am not sure if this is reasonable; I am looking for a more mathematical framework.
Regards,
Armin
Non-reproducible outcomes suggest that there are one or more fundamental flaws in the research. Your sample size might be too small given the system variability. You might be missing some key variables. The analysis might not be appropriate. Or some combination of all three. Some experiments are difficult, and maybe I can only have three replicates; I run the experiment again and get a different outcome. If you are certain that you have the right experiment, then try running the experiment several times and block each run, but analyze them as one experiment.
• asked a question related to Design of Experiments
Question
Is there any method or technique to maintain a homogeneous weed population (approximately equal proportions of weed flora, diversity and density) across entire experimental plots, in order to check the exact efficacy of a weed control method/herbicide?
I agree with the suggestions provided by @Andrew Paul McKenzie Pegman and @Medhat Elsahookie. Moreover, in field conditions, you may consider re-seeding/planting to maintain the same up to a certain period.
• asked a question related to Design of Experiments
Question
Hi
I am a physicist from Denmark planning a quantum experiment.
I need a cryostat that can go below 500 millikelvin.
I am looking for advice from researchers who have worked with cryostats before.
What are the best options for flexibility, price, maintenance and operation?
As I see it, you must construct and build this device yourself. It is a classic tradition of all famous physicists.
• asked a question related to Design of Experiments
Question
Hi fellow pioneers,
I was wondering if there is a good strategy for designing a set of experiments to find the factors with the most effect on the response, which is nominal (yes/no, pass/fail type of response) instead of the typical continuous response? While for nominal response a logistic regression can be performed on available data, I doubt the usual factorial/fractional factorial design still works in this case (since they are meant for continuous response). What would be a suitable approach in this case? Kindly point me to any relevant terms/theories if anything comes to mind.
Wrong. A factorial design will work here if I understand your description correctly. See Montgomery, Design and Analysis of Experiments, available in the Z-Library for download. Thinking in terms of regression rather than ANOVA should help you. Best wishes, David Booth. PS: follow up with response surface methods to see if you can optimize your design.
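As a sketch of how a two-level factorial still screens a binary response: main-effect contrasts can be computed on the pass rates directly (the pass rates below are invented for illustration), with a logistic-regression fit of the same design matrix as the more formal analysis.

```python
from itertools import product

# Hypothetical 2^3 full factorial with a pass-rate (binary-response) outcome
# observed at each run.
design = list(product([-1, 1], repeat=3))
pass_rate = {(-1, -1, -1): 0.2,  (-1, -1, 1): 0.3,  (-1, 1, -1): 0.6, (-1, 1, 1): 0.7,
             (1, -1, -1): 0.25,  (1, -1, 1): 0.35,  (1, 1, -1): 0.65, (1, 1, 1): 0.75}

# Main-effect contrast for factor i: mean(high) - mean(low) pass rate.
def main_effect(i):
    hi = [pass_rate[r] for r in design if r[i] == 1]
    lo = [pass_rate[r] for r in design if r[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(i) for i in range(3)]
print(effects)  # in this invented data, factor 2 dominates
```

The same factorial run list can then be fed to a logistic model (binomial GLM) to attach standard errors and p-values to these contrasts.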
• asked a question related to Design of Experiments
Question
I aim to allocate subjects to four different experimental groups by means of Permuted Block Randomization, in order to get equal group sizes.
This, according to Suresh (2011, J Hum Reprod Sci), can result in groups that are not comparable with respect to important covariates. In other words: there may be significant differences between treatments with respect to subject covariates, e.g. age, gender, education.
I want to achieve comparable groups with respect to these covariates. This is normally achieved with stratified randomization techniques, which themselves seem to be a type of block randomization with the blocks being not treatment groups but the covariate categories, e.g. low income and high income.
Is a combination of both approaches possible and practically feasible? If there are, e.g., 5 experimental groups and 3 covariates, each with 3 different categories, a randomization that aims to achieve groups balanced with respect to covariates and equal in size might be complicated.
Is it possible to perform Permuted Block Randomization to treatments for each "covariate-group", e.g. for low income, and high income groups separately, in order to achieve this goal?
Hi! You might want to check this free online randomization app. You can apply simple randomization, block randomization with random block sizes and also stratified randomization.
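A minimal sketch of what the combined approach (stratified permuted-block randomization) could look like; the group labels, strata and block size are all hypothetical:

```python
import random

random.seed(42)
treatments = ["A", "B", "C", "D"]

def permuted_blocks(n_subjects, block_size=4):
    """Assign treatments in shuffled blocks so group sizes stay balanced."""
    schedule = []
    while len(schedule) < n_subjects:
        block = treatments * (block_size // len(treatments))
        random.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

# Stratified version: run an independent permuted-block schedule per stratum,
# so groups are balanced both overall and within each covariate category.
strata = {"low_income": 12, "high_income": 12}
assignments = {s: permuted_blocks(n) for s, n in strata.items()}
```

With many strata (e.g. 3 covariates x 3 categories = 27 cells), some cells may end up with very few subjects; minimization algorithms are the usual alternative in that case.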
• asked a question related to Design of Experiments
Question
I have a within-subjects pre-test post-test design experiment. The participants are mental health workers, and scores of emotional intelligence are recorded before and after a night shift. I ran a paired-samples t-test to compare means for pre-shift and post-shift scores. I would also like to explore the effects of additional variables (e.g. experience, job role, usual shift length, the hour the break was taken, and nap length) on pre-shift and post-shift emotional intelligence scores. Would multiple regression be the best analysis for this? And how would I attempt it?
The simplest analysis would be a repeated measures ANOVA, with the repeated measures factor defined by the emotional intelligence scores before and after a night shift. Your additional variables can be added to the model as between-subject continuous variables (sometimes referred to as covariates in packages such as SPSS) or as between-subject factors. The interactions of the repeated measures factor with the covariates or between-subject factors will tell you whether the effect of occasion differs for different values of your additional variables.
One main problem with using this simple approach is that if there is attrition, then the cases analysed can only be those with complete data. If there is a substantial amount of missing data, then other more advanced procedures will need to be used to minimize bias (eg linear mixed models, multiple imputation etc).
• asked a question related to Design of Experiments
Question
Hi!
Could someone who has experience in factorial design of experiments help me with this question?
I'm completely new in this area, but I'd like to plan an experiment for initial screening to evaluate the "best" method to immobilize a protein. I want to evaluate one factor with 4 levels, and other three two-level factors (4^1 x 2^3).
But the four-level factor would be the type of reactant ("linker"), which I think probably affects the other variables (concentration, time, etc.) differently.
In this case, could I use an asymmetric (4^1 x 2^3) factorial and evaluate all the factors together, or should I "block" the "type of linker" factor and evaluate only the two-level factors (i.e., make four two-level factorial experiments, one for each type of reactant)?
I think you are asking a question about how the data would be analyzed, not about how you would physically arrange the treatments. If I understand correctly, either option you present has an observation for each combination of reactant and the other factors. That is, in either case you would need at least 32 experimental units (4 x 2 x 2 x 2).
You will want an omnibus model to analyze these data (not four separate models, if that was the question). A single model will make the best use of the information in the data.
However, you may not want to include all potential interactions in the model. For example, a model that reflects your (4 x 2^3) thinking might be: Y ~ Reactant + A + B + C + AB + AC + BC + ABC.
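A quick sketch of the run list implied by this design (the linker names and coded levels are hypothetical):

```python
from itertools import product

# The asymmetric 4^1 x 2^3 factorial: 4 linkers x 2^3 two-level factors
# = 32 experimental units, all analyzed with ONE omnibus model.
linkers = ["L1", "L2", "L3", "L4"]
levels = [-1, +1]  # coded low/high for the three two-level factors

runs = [(linker, a, b, c)
        for linker, a, b, c in product(linkers, levels, levels, levels)]
print(len(runs))  # → 32
```

Fitting one model to all 32 runs (with Reactant as a categorical term) pools the error estimate across linkers, which is exactly what four separate 2^3 analyses would give up.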
• asked a question related to Design of Experiments
Question
Hey guys,
I'm looking for a standard (and automatic) way to illustrate a sequence of screens used in an experiment. Is there a platform or software that you can recommend for this purpose?
Thank you!
Hey Diogo Kawano I use and recommend an online software called Mind the Graph. It has a big illustration library, and if you need a specific illustration they will make it for you. Check the website! Good luck!
• asked a question related to Design of Experiments
Question
Hello everyone,
I need to choose a topic on design and analysis of experiment as a project. The project consists of planning, designing, conducting and analyzing an experiment, using appropriate principles and software package of design and analysis of experiments. Could you please recommend me an article or any reliable resources for the project? It must encompass 2 nuisance factors and topics such as Randomized blocks, factorial designs, 2k design and NOT RSM or CCD.
Best wishes
I suggest the following book by D. Montgomery:
Montgomery, Douglas (2013). Design and analysis of experiments (8th ed.). Hoboken, NJ: John Wiley & Sons, Inc. ISBN 9781118146927
Best regards,
Ebrahim
• asked a question related to Design of Experiments
Question
Design of experiment: How can we increase or change the decimal place of the terms in the regression equation generated in Minitab v19?
It's a matter of the units you use
• asked a question related to Design of Experiments
Question
Does the determination of the LC50 of any compound in zebrafish during the breeding season cause any problems?
Zebrafish don't really have a breeding season. They have asynchronous ovaries meaning that all stages of oocytes are present, and they can spawn daily (or multiple times per week) throughout the year. Oocytes are released when stimulated by males during the breeding process.
• asked a question related to Design of Experiments
Question
We have sufficient seeds of 120 rice genotypes. We want to evaluate the genetic parameters related to early seedling vigour through replicated trials of these 120 genotypes. We don't want to use an augmented design. Can anyone suggest an appropriate experimental design for this? I shall be highly grateful.
This depends upon the seed available with you. If you do not want to go for an augmented design, then a conventional one will work.
• asked a question related to Design of Experiments
Question
I have two factors ( X1 and X2) and a response (Y). I want to find the values for X1 and X2 to obtain maximum Y.
I have done some tests to have a view of response in terms of X1 and X2. I fixed X2 and varied X1 (X1= 5%, 35% and 65%). I observed that Y increases with increasing X1 from 5% to 65%. I cannot increase X1 to values more than 65%.
It seems that the maximum Y occurs at the boundary, so a CCD does not work. What method of experimental design is recommended?
Hi Amin .. ,
You wrote "I fixed X2 and varied X1 (X1= 5%, 35% and 65%). I observed that Y increases with increasing X1 from 5% to 65%."
However, exploratory studies should also be based on DOE (https://www.itl.nist.gov/div898/handbook/pri/pri.htm); otherwise you can get a partial and biased view of your experimental conditions.
• asked a question related to Design of Experiments
Question
How can we use RSM when we have three levels?
FCC with alpha = 1?
Lesson 11: Response Surface Methods and Designs
• asked a question related to Design of Experiments
Question
I am working on a project to assist an experimental team in optimizing reaction conditions. The problem involves a large number of dimensions, i.e. 30+ reactants, for which we are trying out different concentrations to achieve the highest yield of a certain product.
I am familiar with stochastic optimization methods such as simulated annealing, genetic algorithms, which seemed like a good approach to this problem. The experimental team proposes using design of experiments (DoE), which I'm not too familiar with.
So my question is, what are the advantages/disadvantages of DoE (namely fractional factorial I believe) versus stochastic optimization methods, and are there use cases where one is preferred over the other?
When there are 30+ reactants, I first would make a network, with the input of the experimenters, of the relations between the reactants: you really have to understand parts of the chemical reactions . Modeling, without understanding the basics of what you are trying to model, is never a good idea. And, given my knowledge of chemistry, I fail to see the use of stochastic optimization in this context. Maybe systems and control theory could give you insights as well. Maybe you can view the whole as a system, with inputs and outputs.
• asked a question related to Design of Experiments
Question
How would you design an experiment to determine God's existence?
Dear Mr George Slade, you are right! Ethics existed from the very beginning, long before Christ! But it was Jesus who showed the practical aspects of it through his life! Please don't equate Jesus with the Christianity that exists today! Different churches have different practices to follow Christ! But the New Testament is crystal clear on how Jesus practiced the truth, which anyone can follow to experience Godly power in oneself! The essence of His teaching is that if one follows the truth, one can meet God in oneself!
• asked a question related to Design of Experiments
Question
Also, what are the different efficient DOE and analysis methods with respect to machining operations with a specially sintered tool?
• asked a question related to Design of Experiments
Question
Experiment is done under augmented design.
You can use augmentedRCBD package in R software.
• asked a question related to Design of Experiments
Question
Hi,
I want recommendations on how best to analyze data from my 2x2 factorial between-subjects experiment. I have two categorical (nominal) independent variables and one ordinal Likert-scale dependent variable. I ran a two-way ANOVA because I have a large sample and the Likert data are approximately normally distributed. I want advice on a defensible analysis for this type of data.
Thanks!
You need to describe your outcome more. It is common in some areas to use methods like ANOVA with a Likert scale, let's say 1-5, but that is generally a bad idea. You might get away with it if your scale is 1-7 or greater, but it is still generally a bad idea. I attached research showing empirically why this is so, and more appropriate models to apply depending on your outcome variable. Essentially, the distance between categories 1 and 2 might not be the same as between 2 and 3, or 3 and 4, etc., even if there are numbers on the scale. The psychological distance could be different, and different for different people. At best you miss relationships, and at worst you have a model that makes no sense. For classic ordinal Likert scales I would go with generalized linear modeling using a cumulative distribution with a probit link function. The attached article by Bürkner goes into more depth.
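To make the recommended cumulative model concrete, here is a stdlib sketch of how ordered thresholds map one latent score to Likert-category probabilities (probit link via the normal CDF; the threshold values are invented to illustrate unequal category spacing).

```python
from statistics import NormalDist

def ordinal_probs(eta, thresholds):
    """Category probabilities under a cumulative probit model:
    P(Y <= k) = Phi(theta_k - eta), so P(Y = k) is a difference of CDFs."""
    Phi = NormalDist().cdf
    cum = [Phi(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical thresholds for a 5-category item; the uneven spacing encodes
# unequal "psychological distances" between adjacent categories.
thetas = [-2.0, -0.4, 0.1, 2.5]
p = ordinal_probs(0.0, thetas)        # probabilities for categories 1..5
p_shift = ordinal_probs(1.0, thetas)  # larger latent score -> more mass on top
```

Fitting such a model to data is what packages like `brms` (Bürkner) or statsmodels' ordinal models do; the sketch only shows the structure that plain ANOVA on the 1-5 codes ignores.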
• asked a question related to Design of Experiments
Question
I would like to perform a sensitivity analysis of a CFD solver. There are 8 input variables; for each of them there are 2-3 prescribed numerical values.
Evaluating one set of parameters requires three costly simulations (each running for 20 hours on 800 CPU cores). The budget for these simulations is limited and, due to the queuing system of the HPC, it would take a long time to get the results.
I'm aware of Latin hypercube hierarchical refinement methods that allow starting the sensitivity analysis with a smaller budget and subsequently incorporating newer results as they become available.
But those methods work with continuous variables. Is there a method for categorical and ranked/ordinal variables?
Thank you, Andrea
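In the meantime, one simple stdlib option is to stratify categorical/ordinal levels the way a Latin hypercube stratifies continuous ranges: repeat each factor's prescribed values until the run budget is filled and shuffle each column independently, so every level appears (nearly) equally often. The factor settings below are placeholders, not the real solver parameters.

```python
import random

def balanced_categorical_design(levels, n_runs, seed=0):
    """Draw n_runs combinations so that, per factor, each level appears as
    equally often as possible (an LHS-like stratification for categorical
    or ordinal factors)."""
    rng = random.Random(seed)
    cols = []
    for lv in levels:
        col = [lv[i % len(lv)] for i in range(n_runs)]  # fill budget evenly
        rng.shuffle(col)                                 # independent per factor
        cols.append(col)
    return list(zip(*cols))

# 8 hypothetical factors with 2-3 prescribed settings each:
factors = [["low", "high"], [1, 2, 3], ["A", "B"], [0.1, 0.2, 0.3],
           ["x", "y"], [10, 20, 30], ["on", "off"], ["s1", "s2", "s3"]]
runs = balanced_categorical_design(factors, 12)
```

New runs can be appended later in further balanced batches, which mimics the hierarchical refinement idea within a fixed queueing budget.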
• asked a question related to Design of Experiments
Question
In design of experiments, using response surface methodology, suppose I conducted experiments with 3 factors at 3 levels (15 runs) under open atmospheric conditions. How can I validate the model, given that the 15 runs were carried out under different atmospheric conditions? Are there any possible solutions?
Thank you.
• asked a question related to Design of Experiments
Question
Is there a Python project where a commercial FEA (finite element analysis) package is used to generate input data for a freely available optimizer, such as scipy.optimize, pymoo, pyopt, pyoptsparse?
You can find one implementation of Python/ABAQUS optimization in the following paper:
However, this is not black-box optimization, since analytical derivatives are used. You can implement black-box optimization with a Python code that builds an artificial neural network (ANN) surrogate to predict the derivatives (the ANN is implemented in Python). You can also predict the objective with the ANN. Then you can perform the mathematical optimization quite easily (for example, an implementation of MMA is available in Python). You can find such a project in the following link (a Ph.D. thesis, but the codes are not available):
• asked a question related to Design of Experiments
Question
Dear all,
I am planning to conduct an experiment with 2 IVs (categorical variables - each IV has 2 categories) and 1 mediator (continuous variable - 7-point Likert scale) on an ordinal DV (6 categories). I understand that mediation analysis usually involves regression analysis to examine the indirect and direct effects of IV --> DV and mediator --> DV, and I will be able to use the PROCESS SPSS macro by Hayes (2013) to estimate the moderated mediation model. However, since it is a between-subjects design, I am not sure if I can separate the IVs when conducting the regression analysis.
I would deeply appreciate it if anyone can recommend tests and models I can use for this study, or have any resources that I may look into to better find a suitable test. Thank you very much!
Run a paired samples T-test and troubleshoot for Logistic regression
• asked a question related to Design of Experiments
Question
I did a product development study using Design-Expert for planning and evaluation. One response shows a lot of significant effects in the half-normal plot. Testing factors A, B, C, D, I got all these significant main effects in addition to the interactions BCD, AB, AC, AD. Now I am losing my head trying to figure out what all the interactions are about, and I wondered whether there are commonly known mistakes leading to so many significant effects. Is it possible to get many effects because the response is very complex and dependent on all the factors? That would be kind of a novelty for this response.
Thanks a lot for brain-support
Thank you for your answers. Actually, I had a lot of samples. There were 8 variations and two center points, so 4 replicates per variation and 8 replicates per center point (due to a categoric factor). I produced 48 samples and cut them into 50 strips per sample. For the response I described, I only used 5 of each.
The design was a fractional factorial, so I had to deal with many aliases.
Looking at the problem some weeks later, I think the range of factors being too large caused the amount of significant effects.
• asked a question related to Design of Experiments
Question
As a current undergraduate student majoring in Microbiology who is still learning DOE (Design of experiment) on my own, I have heard and read that an OFAT design is considered the most inefficient experimental design.
I am unsure whether that means an OFAT design is useless to conduct, or whether it only works for a limited number of experiments. Thank you.
When your surface is non-linear (interactions between variables exist, as Prof. David Eugene Booth mentioned), you can easily miss the optimal point using OFAT. Yet, you might want to use it to get a feel for your system and assess the importance of variables, to better judge the ranges you might want to consider for a systematic DoE. A good practice is to check the literature for approximate values of the variables you are considering in your study. That will bring you closer to the optima and save your time and resources.
Check the "Experimentation for Improvement" MOOC in Coursera (https://www.coursera.org/learn/experimentation). Montgomery books are excellent resources as well.
• asked a question related to Design of Experiments
Question
Hello,
I want to perform an optimization using a rotatable central composite design, but I think it is correct to perform a factorial design beforehand, with the aim of finding the best treatment and then optimizing it through the CCD with new levels. Could you offer any advice?
Thank you very much to all of you
• asked a question related to Design of Experiments
Question
Researchers experiment with individual designers and with groups of designers. In engineering design, what are the pros and cons (in general) of doing an experiment with two designers in a group?
Collaborative design is thought to have some general advantages and disadvantages. Depending on your experimental setting, the collaborative aspect could improve the outcome for each group. Importantly, the individual designers may be encouraged to express their ideas/understanding (sketches, notes, conversations) when working together. This might help the group keep track of the design process, as opposed to individual designers who tend to make fewer snapshots of their process. So if the experiment is comparing the process or the outcomes, grouping participants can be helpful. A benefit of having individual participants is the number of data points you collect.
• asked a question related to Design of Experiments
Question
Researchers experiment with individual designers and with groups of designers. In 'engineering design', what are the pros and cons (in general) of doing an experiment with two designers in a group? By engineering design, I mean a design problem based on mechanical engineering, industrial design, etc. By solution, I mean concept generation.
Modern engineering design education today relies primarily on project-based courses performed by groups of students, and for a reason. It is almost impossible to imagine a relevant industrial task implemented by one (even skilled) individual. It's simply not a real-life scenario.
• asked a question related to Design of Experiments
Question
I would like to design a fractional factorial experiment that has 10 factors with 2 levels and 1 factor with 3 levels (11 factors in total). However, I am only interested in the 2- and 3-factor interactions involving the 3-level factor. Any recommendations on what type of design to use?
My intention is to use the design for a screening experiment.
A simple by-hand solution is to start with a fractional factorial of the 2^10 in 16 runs, then replicate it with the 3-level factor varying each time. That's 48 treatments. This guarantees you'll be able to measure all main effects of the 3-level factor as well as all its interactions with each 2-level factor, but only the main effects of the 2-level factors. Counting degrees of freedom: the main effect of the 3-level factor costs 2 df, the main effects of the ten 2-level factors require 10 df, and the 3x2 interactions require 10x2 = 20 df. That's a minimum of 32 df, plus one for error, i.e. at least 33 df. So 48 runs is not bad, but someone may have a better solution in 36 runs, which I suspect may exist with some careful confounding. This is because 36 is 6^2, and 6 = 2x3 caters nicely to mixed 2- and 3-level attributes.
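The by-hand construction above can be scripted: build a 16-run fraction of the ten 2-level factors from four base columns (the generator words below are one assumed choice; check a design catalog for the resolution/aliasing you can accept), then replicate it at each level of the 3-level factor.

```python
from itertools import product

def fractional_factorial_16_of_10():
    """16-run fraction of 10 two-level factors (a 2^(10-6) design):
    4 base factors A-D span the 16 runs; the remaining 6 columns are
    products of base columns."""
    generators = {            # assumed generator words, e.g. E = ABC
        "E": (0, 1, 2), "F": (1, 2, 3), "G": (0, 2, 3),
        "H": (0, 1, 3), "J": (0, 1, 2, 3), "K": (0, 1),
    }
    runs = []
    for base in product((-1, 1), repeat=4):      # columns A, B, C, D
        row = list(base)
        for word in generators.values():
            col = 1
            for i in word:
                col *= base[i]                   # elementwise column product
            row.append(col)
        runs.append(tuple(row))
    return runs

# Replicate the 16-run fraction at each level of the 3-level factor -> 48 runs
design = [(level,) + run
          for level in (-1, 0, 1)
          for run in fractional_factorial_16_of_10()]
```

Every two-level column stays balanced within each replicate, which is what lets the 3x2 interactions be estimated in the 48-run plan.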
• asked a question related to Design of Experiments
Question
I have four factors (F1, F2, F3, F4) with different numbers of levels (3x2x2x3). The factors are nested within each other, so I am planning a split-split-split-plot design. Please advise on the methodology for a triple split-plot experiment.
You have a within x within x within x within design. I could run such a design if you want; just send me the data.
• asked a question related to Design of Experiments
Question
I have data on the effectiveness of the three treatments: T1, T2 and T3 for each patient. Each variable is coded dichotomously - 0 = drug not working; 1 = drug is working. The patient could feel the effects of any of the three drugs. In such a system of variables, the Cochran's Q test seems to be the most reliable, which is the equivalent of an ANOVA with repeated measures for dichotomous (qualitative) data.
Nevertheless, the design is more complicated. I am interested in the interaction with the test condition: one group of patients was given the three medications in one season, and the second group in winter. So the design I have is 2 (season) x 3 (drugs, repeated measurement), and the dependent variable is dichotomous (nominal).
Is there an interactive equivalent of the Cochran test? Technically, I could do a 2x3 ANOVA since the variable range is 0-1; however, I am looking for something more methodologically correct. Maybe just do subgroup Q tests? This also seems methodologically wrong. If anyone has heard of such a test, I will be grateful.
If you were to double the repeated measures, e.g. T1, T2, T3, T4, T5, T6, you could run the Cochran Q test followed by post hoc McNemar tests (adjusting for multiple p-values). Instead of a (2x3) Between x Within subjects design, you could then test a (3x3) Within x Within design.
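For reference, the Cochran Q statistic itself is simple to compute from the 0/1 table; the patient data below are made up.

```python
def cochran_q(data):
    """Cochran's Q for k related dichotomous (0/1) treatments.
    `data` is a list of rows, one per subject, each with k entries.
    Returns the Q statistic and its chi-square degrees of freedom."""
    k = len(data[0])
    col = [sum(row[j] for row in data) for j in range(k)]  # treatment totals
    row_tot = [sum(row) for row in data]                   # subject totals
    gbar = sum(col) / k
    num = k * (k - 1) * sum((g - gbar) ** 2 for g in col)
    den = k * sum(row_tot) - sum(l * l for l in row_tot)
    return num / den, k - 1

# Toy data: 6 patients x 3 drugs (1 = drug worked)
obs = [(1, 1, 0), (1, 0, 0), (1, 1, 1), (1, 0, 0), (1, 1, 0), (1, 0, 0)]
q, df = cochran_q(obs)   # Q = 7.6 on 2 df for this toy table
```

Running this separately per season gives the subgroup Q tests mentioned; a season x drug interaction on the 0/1 outcome is what a generalized linear mixed model (logit link, subject as random effect) would test directly.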
• asked a question related to Design of Experiments
Question
Dear all;
I have 4 factors with which to design my experiments. Three of the factors are numerical, but one is nominal.
For the nominal factor I have 3 types: A, B, and C. The levels of A, B, and C do not match each other.
I mean that for each of them I have different levels of particle size, as below:
For A - FIVE levels of particle size (F 320, F 400, F600, F800 & F 1000)
For B - FOUR levels of particle size (F 360, F 600, F800 & F 1000)
For C - SIX levels of particle size (F 120, F 360, F 400, F800, F 1000 & F 1200)
(The units of the particle sizes are FEPA grit and are not important for the question.)
Briefly, I have one factor (the hardener factor) with 3 types (A, B & C), where every type has a different number of levels that do not match each other. I need to design my experiments with all of these.
Could you please let me know how you would suggest designing this?
Dear Jabir Ismaeili,
I have attached an Excel file in which I try to illustrate the design of the experiment. I hope it is useful.
• asked a question related to Design of Experiments
Question
Hi everyone,
I need to plan an online experiment in which each participant should watch a series of screens each containing an image (the screens/images must appear in a randomized order). During the whole series of screens, an audio file should be played, it should begin at the first image and it should end at the end of the session.
I have a Qualtrics account, but I'm not able to implement this kind of procedure: whenever I build a page with the audio player, the audio stops playing as soon as the next screen appears. Instead, I need the audio to keep playing in the background during the whole presentation.
Could I achieve my aim by programming something in Python / Java / Node JS / HTML? Or should I change software? Any suggestions?
thanks in advance for any help
all the best,
Alessandro
Our lab at the University of Birmingham has just started using Gorilla.sc to build online experiments. So far, I have found Gorilla.sc really intuitive and easy to use, so I would definitely recommend it to other researchers. I'm not 100% sure whether you will be able to have the file playing in the background for the whole experiment; however, there is an option to have audio files play on each trial in Gorilla.sc, so you might be able to.
If you do choose to use Gorilla.sc, some of the following webpages could be helpful to you! I referred to some of the following webpages to get an idea of how to build a task successfully:
Video walkthroughs that give a speedy introduction to using Gorilla.sc, some of which I found quite useful: https://gorilla.sc/support
Sample Code: https://gorilla.sc/support/script-samples#altercontent. I found the sample code really useful as there were examples for altering or adding content to the task via script (i.e. forcing participants to be in fullscreen when taking part, changing background text and colour, implementing live scoring etc.).
Hope this helps! Wishing you all the best with your research!
• asked a question related to Design of Experiments
Question
I have mostly done computational work, and the transition to experimental work is slightly demotivating, as I am stuck at every stage, from whether to wear gloves for certain things to why my experiments are not reproducible!
At this stage, I am seeking an answer to how I can weigh my peptide correctly to make sure I get the concentration I planned.
For example, I want to get 1mM (just an example) concentration for my peptide and I calculated the required mass to be 0.7134 mg. Now a few questions which bother me are:
1. Since the weighing balance only allows two decimal places, should I round down to 0.71 (because 3 is less than 5), or should I round up to 0.72 (because I will likely lose some peptide anyway while weighing)?
2. Even though I know the theoretical concentration of my solution, should I still measure it, maybe using a NanoDrop or some other way? (And how accurate will that be if there are no aromatic amino acids?)
3. Is there anything else I should be taking care of here?
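For question 1, a common workaround is not to chase the exact calculated mass: weigh an amount close to it, record the actual balance reading, and back-calculate the solvent volume that gives the target concentration. A sketch follows; the molecular weight of 713.4 g/mol is inferred from the question's numbers assuming a 1 mL final volume, so treat it as a placeholder.

```python
def volume_for_target_conc(mass_mg, mw_g_per_mol, conc_mM):
    """Solvent volume (mL) that turns the actually weighed mass into the
    target concentration: n (mmol) = m / MW, then V (mL) = 1000 * n / C."""
    return 1000.0 * (mass_mg / mw_g_per_mol) / conc_mM

# Balance reads 0.71 mg instead of the calculated 0.7134 mg: dissolve in a
# correspondingly smaller volume instead of rounding the mass up or down.
v_ml = volume_for_target_conc(0.71, 713.4, 1.0)
```

This sidesteps the rounding question entirely; an independent concentration check (e.g. amino acid analysis when there are no aromatic residues for A280) is still advisable, since lyophilized peptides carry variable water and salt content.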
• asked a question related to Design of Experiments
Question
In a split-plot experiment, 4 methods of nitrogen (N) application were assigned to main plots and 3 doses of N to subplots, replicated thrice. In the same field, a no-N control was laid out outside the split-plot design, also replicated thrice. How can the no-N control be compared with the treatments inside the split-plot design?
If you take data from something not in the experimental design, it is not part of the experiment, but it might in some way be treated as a covariate or covariates. The classical paper on such things (in this case rainfall) is by R. A. Fisher, referenced here, and can be downloaded from the Fisher archives at this link: https://scholar.google.com/scholar_lookup?title=The%20influence%20of%20rainfall%20on%20the%20yield%20of%20wheat%20at%20Rothamsted&author=RA.%20Fisher&journal=Phil.%20Trans.%20Roy.%20Soc.%2C%20London&volume=213&pages=89-142&publication_year=1924
I am sure some clever searching from Google or Google scholar will yield many more such references. You might also check this link which may be easier reading: Montgomery, Design and Analysis of Experiments: https://www.google.com/search?q=montgomery+d.c.+design+and+analysis+of+experiments&rlz=1C1CHBF_enUS915US915&oq=Montgomery+D+C+&aqs=chrome.1.69i57j0l6j69i60.19204j1j15&sourceid=chrome&ie=UTF-8 Start with ANCOVA. Best wishes, David Booth
• asked a question related to Design of Experiments
Question
For background:
We have a polymer and are looking to adjust its thermal and mechanical properties to resemble the properties of a commercially available polymer to present it as a viable alternative material.
We want to do this by adding a number of additives that are known to improve those properties. We have determined a range of additives we would like to test for their effects on the polymer's properties.
The problem:
I am unsure of a method to determine the best possible combination/ ratio of additives to produce the closest properties. I would like to be able to test different combinations/ concentrations of additives in an efficient way and determine which combination has the best overall effect on the properties.
I initially considered a Taguchi method; however, as far as I understand, Taguchi methods are not well suited when there are interactions or confounded variables. Is this an issue for this application? Is there a method of determining the optimal combination that would be better suited?
• asked a question related to Design of Experiments
Question
Hello,
The aim is to evaluate the effect of silt % in cement mortar mixtures. I'm wondering which approach would be more appropriate: a mixture design, or other available methods such as factorial designs, RSM, etc.
Thank you
• asked a question related to Design of Experiments
Question
For a multi-objective optimization problem, is it possible to apply the concept of the S/N ratio to the individual outputs obtained through RSM or a full factorial design of experiments? Also, can a design developed by full factorial match a Taguchi orthogonal array, specifically for a 2-factor, 3-level problem, where the full factorial gives 9 experiments?
Thank you....
No.
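As a sanity check on the numbers in the question: a full factorial for 2 factors at 3 levels does give 3^2 = 9 runs (the same run count as two columns of an L9 array), and the larger-the-better S/N ratio can be computed per run from replicate measurements. The replicate values below are invented.

```python
from itertools import product
import math

# Full factorial for 2 factors at 3 levels: 3**2 = 9 runs.
levels = (-1, 0, 1)
runs = list(product(levels, repeat=2))

def sn_larger_is_better(replicates):
    """Taguchi larger-the-better S/N ratio: -10 * log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1.0 / y ** 2 for y in replicates)
                            / len(replicates))

# Hypothetical replicate measurements for one run:
sn = sn_larger_is_better([98, 102, 100])
```

Whether applying the S/N transform to RSM outputs is methodologically sound is a separate question (as the answer above suggests); the arithmetic itself is straightforward.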
• asked a question related to Design of Experiments
Question
Dear all, I am planning to transiently overexpress a protein (Protein A) in a HEK293FS cell line; in the same experiment I want to knock down another protein (Protein B) in the same cells that transiently overexpress Protein A, in order to learn whether knocking down Protein B will decrease or increase Protein A.
However, the problem is that the cell line becomes confluent within 3 days, but it needs 2 days to overexpress Protein A and 3 days to knock down Protein B. How should I design such an experiment properly?
Does anyone have experience with this kind of experiment? Any suggestions are welcome. Thank you.
Yes, @Saurabh, you need to first knock down the gene and then also overexpress it. If you don't, the referee or your examiner will ask for it.
I used this strategy and did my knockdown and overexpression in duplicate each time, one for RNA purification and the other for protein.
• asked a question related to Design of Experiments
Question
A question just came to my mind! I appreciate any helpful answers.
Consider a problem that there are a number of input variables (for example 5 variables), and we have one output parameter. The purpose is to develop a meta-model for the problem for example using a second order RSM method. The problem is highly nonlinear, therefore using quadratic (or linear, or cubic) equations to relate the input variables to the output parameter, results in significant errors (in the whole domain). But when we subdivide the design domain (i.e input variables space) to small regions and derive specific equations for each region, the issue is resolved and the output parameter can be predicted with acceptable accuracy.
Now, my question is how one should subdivide the design region? Is there any criteria to do it with minimum subdivision of the domain? In a 2D space this can be done easily by plotting graphs and observe the graph and the points. But how can the design (input) domain be subdivided when there are more design variables (e.g. 5 design variables)?
The problem is that not all your variables are significant in modeling your response. A screening design, also known as a factorial design, ought to be carried out first on the input variables to determine which are significant in predicting the dependent variable. This would reduce the number of variables, and there would be no need to subdivide the independent-variable domain. After screening the variables using a factorial or screening design, the factors that are significant can then be optimized using response surface methodology. Thanks, I hope it helps.
• asked a question related to Design of Experiments
Question
Let's say I have the regression equation,
Y = 2 + 1.2 A - 1.3 B + 3 D + 19 B^2 - 11 AB + 13 BD
I have to attach the respective surface and contour plots for the interactive effects of A, B and D.
For both surface and contour plots,
should I do for the interactions of
choice 1> AB, AD, and BD
choice 2> AB and BD only (based on equation)
Does anyone know which choice I should go for, and why?
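A note that may settle the choice: the fitted equation contains cross-terms only for AB and BD, so an A-D surface would show no interaction curvature. Plot data can be generated by evaluating the equation over a grid while holding the third factor fixed; the grid spacing below is arbitrary.

```python
def Y(A, B, D):
    """Response from the fitted equation in the question."""
    return 2 + 1.2*A - 1.3*B + 3*D + 19*B**2 - 11*A*B + 13*B*D

# For an A-B surface/contour, hold D at a fixed (e.g. center) value and
# evaluate Y over a grid; there is no AD coefficient in the model, so an
# A-D plot would be purely additive (a flat, tilted plane in the cross-term).
grid = [(a / 2, b / 2) for a in range(-2, 3) for b in range(-2, 3)]
surface_AB = [Y(a, b, 0.0) for a, b in grid]
```

So choice 2 (AB and BD only) is consistent with the model terms; plotting AD would only display the main effects.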
• asked a question related to Design of Experiments
Question
In a factorial design of experiments, each factor has different levels, and one level can be considered the base level. The cases/specimens sharing this base level can be considered a control group. Also, randomization over the combinations of all possible levels of all factors resembles the random assignment in an RCT. In this sense, RCTs and factorial DoE seem similar. What's your opinion?
On the topic of "control groups":
Indeed, in a factorial design, each factor has a distinct control group. That's why I mentioned that you need at least one control group in an RCT. The RCT-logic still holds.
If you have a full factorial design, you could say that the group which represents the combination of the baselines of all factors, is semantically the "most authentic" control group. However, statistically, as you mention, the comparison is within the factors. At the same time, the analysis considers the interaction effects between the factors. That is the beauty of factorial designs.
For a very thorough, very well written, and highly applicable introduction into RCTs (and, as an extension, also factorial designs) I can highly recommend Gerber's and Green's book https://wwnorton.com/books/9780393979954
• asked a question related to Design of Experiments
Question
In the response surface method, when the p-value is greater than 0.05, that term is insignificant. What does this mean? Should this term be removed from the modeled equation?
Yes, if the p-value is greater than 0.05, you may remove the component from the model.
• asked a question related to Design of Experiments
Question
Hello! I'm planning an experiment and I need to develop a DoE, and I would like to use software to design it. My supervisor recommended "MODDE Umetrics", but there is no free version available (only the trial version, and I need it for a long time). Does anyone know a free and easy-to-use DoE software? This is my first time with this kind of software... Thank you very much in advance!
If you want, I can send you a free trial of the Design-Expert (DX7) software.
It is suitable for DoE.
Best regards
• asked a question related to Design of Experiments
Question
Dear Members,
I have studied one factor at a time using different nitrogen and carbon sources and even physical parameters. Presently I am looking at statistical optimization of media for enzyme production. With the help of a reference, I designed a Plackett-Burman experiment, but now I am confused about the selection of dummy variables, which are important in the experiment. Earlier I thought of using NaCl 0.1% in the medium as a dummy variable; is that right or wrong? Are there any options other than this?
Thanks and regards
Dear Denish,
Why Plackett-Burman?
It is used for screening the factors and selecting the most effective ones.
Best regards
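On the dummy-variable confusion above: in a Plackett-Burman design, "dummy" variables are usually the design columns left unassigned to real factors (they estimate error), so there is normally no need to add a chemical such as NaCl purely as a dummy. The standard 12-run design can be generated from its cyclic generator row, as in this stdlib sketch.

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors:
    cyclic shifts of the standard generator row plus a final all-minus run.
    Columns not assigned to real factors serve as dummy columns."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [[gen[(j - s) % 11] for j in range(11)] for s in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
```

Every column is balanced (six + and six -) and every pair of columns is orthogonal, which is what makes the main-effect screening work.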
• asked a question related to Design of Experiments
Question
To design a proper experiment given the fNIRS signal characteristics, should one follow the fMRI experimental design recommendations (both signals present the hemodynamic delay), or are there specific recommendations for fNIRS experimental design one should care about as well? (relevant references on this?)
Hi, interesting question. Could you provide more information to help locate the mentioned articles (ciftni et al. 2008, kamran et al. 2015, kamran et al. 2016)?
Thanks
• asked a question related to Design of Experiments
Question
Factors: amount of lipid, with three levels;
surfactant:lipid ratio, with three levels.
You can use a CCD or BBD in RSM (DX7).
That gives about 10-12 tests.
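For the two factors listed, the points of a rotatable CCD can be written out directly: 2^k factorial points, 2k axial points at alpha = (2^k)^(1/4), and a few center replicates. Three center points are assumed here, giving 11 runs, in line with the 10-12 mentioned.

```python
from itertools import product

def central_composite(k, n_center=3):
    """Rotatable CCD for k factors in coded units: 2^k factorial points,
    2k axial points at alpha = (2^k)**0.25, plus center replicates."""
    alpha = (2 ** k) ** 0.25
    pts = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    for i in range(k):
        for a in (-alpha, alpha):
            axial = [0.0] * k
            axial[i] = a          # one factor at the axial distance
            pts.append(axial)
    pts += [[0.0] * k for _ in range(n_center)]
    return pts

# Two factors (lipid amount, surfactant:lipid ratio) -> 4 + 4 + 3 = 11 runs
ccd = central_composite(2)
```

For k = 2 the axial distance works out to sqrt(2), which is what makes the design rotatable.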
• asked a question related to Design of Experiments
Question
Which one is the best and most versatile 'Design of Experiments' software? Can anyone suggest a list of software with links to access them?
DX7 is a good option.
Best regards.
• asked a question related to Design of Experiments
Question
Hello,
I have some categorical factors which are related to some continuous factors. In Minitab, with every design type I use, the factors are treated as completely independent. For example, if my salt concentration is zero while salt type still varies in the design, every effect attributed to salt type is wrong, because I didn't add any of those salts at all. How can I introduce such related factors to Minitab or any other DoE software?
Thanks a lot
Best regards
• asked a question related to Design of Experiments
Question
I am trying to perform a screening DoE to study the influence of a range of factors, for example metakaolin content, on various performances such as compressive strength, and to further optimise my formulation in a next step.
Therefore I need to set the ranges for every factor, which obviously has to be done in consideration of the chemical processes. To ensure that geopolymerisation will still occur after changing many factors over wide ranges, as the DoE protocol requires, my plan is to set the ranges according to the assumed minimum and maximum ratios at which geopolymerisation takes place.
Many papers and books state in which direction increasing or decreasing ratios will change specific performances, but I couldn't find the needed minimum and maximum values.
I am very thankful for every answer, hint or research paper you could give or recommend.
Some permissible ranges are given in the literature; in addition, other molding and curing conditions can be determining factors. See this work: Potential use of ceramic waste as precursor in the geopolymerization reaction for the production of ceramic roof tiles
• asked a question related to Design of Experiments
Question
I'm a master's student in advanced information systems, and my bachelor's degree was in industrial engineering. My field of interest in industrial engineering was quality-related topics like SQC, SPC, DoE, Six Sigma, etc. Now I'm looking for a proper topic for my master's thesis which combines data science and quality-related topics.
Quite often, QC data can be significantly unbalanced. I had a data set with 20+ variables and 100,000+ subjects from a line of a particular product, and 100-120 of the subjects came back for warranty work. The goal was to find the important variables to help predict which products would come back for warranty work. All the standard statistical methods failed. I didn't.
Maybe you can find a data set that is highly imbalanced (like cancer data) and find something interesting.
• asked a question related to Design of Experiments
Question
Response surface methodology (RSM) and Multiple linear regression methods are applied to develop statistical models for catalytic reactions in order to predict conversion or selectivity within a given range of reaction conditions. Taking different process conditions, such as temperature, pressure, space velocity, time on stream as input, the statistical models are obtained. Are these methods applicable to predict conversion and selectivity by taking not only operating conditions as input parameters, but also the catalyst properties, such pore size, particle size and other properties?
If the experimental data have not been collected by DoE methods, is it always necessary to train an ANN on the data for RSM, or can the data be used directly to fit the predictive model?
If you designed an experiment in Minitab and you analyze it through the 'Analyze' option in the DOE sub-menu, the software assumes that the highest level for each factor is something like '1'. The lowest level is '-1'. If you analyze the data through the Regression sub-menu, then it uses the actual levels for the analysis and creates all sorts of issues for analysis.
Under the DOE submenu, after analysis, there will be 2 sets of coefficients. One for the scaled factors levels (-1, 1) and another for the unscaled levels. When you go to optimize, use the coefficients for the coded model.
In general, ANN is useful when you have lots of data. DOE tries to minimize the amount of data you use.
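The coded (-1, 1) versus actual-units distinction described above is a simple affine rescaling; here is a minimal sketch, using a hypothetical factor studied from 150 to 200 degrees.

```python
def to_coded(x, low, high):
    """Map an actual factor level to the coded [-1, 1] scale used by
    DoE software for analysis and optimization."""
    center = (high + low) / 2
    half_range = (high - low) / 2
    return (x - center) / half_range

def to_actual(c, low, high):
    """Inverse: map a coded level back to engineering units."""
    return (high + low) / 2 + c * (high - low) / 2

# Hypothetical temperature factor studied from 150 to 200 degrees:
c = to_coded(175, 150, 200)   # center point maps to 0.0
```

Mixing the two coefficient sets (applying coded-model coefficients to actual levels, or vice versa) is the usual source of the analysis issues mentioned above.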
• asked a question related to Design of Experiments
Question
In a D-optimal design experiment, I used 3 factors and 5 responses. For all the responses I had a lack-of-fit p-value less than 0.0001. What should I do? I know that means the model is not adequate, but how does this affect my experiment, and how can I solve such a problem?
The p-value is not at all informative here. You need to find out what the characteristic of the lack-of-fit is. Examine diagnostic plots. Simulate fake data based on your model and see if this is in relevant discrepancy with your observations.
• asked a question related to Design of Experiments
Question
Thanks for reading this question.
I developed a pre-service teacher training module (intervention), which involves educational activities of preparation, team building, project design (after which the teachers will practice teaching students in a primary school), implementation, demonstration and evaluation. This means the pre-service teachers first learn the material at university; after that, they transfer their knowledge into practice in a primary school. So, this module involves two phases: a teachers' phase and a students' phase.
Now, I have designed two experimental studies to examine the teachers' motivation (Experiment One) and the students' attitude (Experiment Two). The study uses a quasi-experimental, non-randomized pre-test/post-test control-group design. In Experiment Two, the experimental group will be taught by pre-service teachers from the experimental group of Experiment One, while the control group will be taught by pre-service teachers from the control group of Experiment One.
The question is whether this study design needs two experiments or not.
I think it needs two experiments, because the students are a different subject group. Some say it does not, because the study has one intervention and the teachers and students are just different levels.
You might like to look at this study. We designed an experimental intervention for pre-service teachers working in primary schools with regard to engineering:
Bill
• asked a question related to Design of Experiments
Question
I have to run computer simulations to find the optimal parameters for a structural problem. I have 8 factors, each at 3 levels. I cannot run a full factorial design, as it would take too long to run 3^8 = 6561 simulations. I want to reduce the number of simulations, but I am torn between the fractional factorial method and the Latin hypercube method. Both can reduce the number of experiments (simulation runs in this case), but what is the fundamental difference between the two methods? Which one is appropriate for this problem?
Suggestions for other methods that can reduce the number of simulation runs are also welcome.
Abhishek Parida, I mean that they are real numbers (e.g. temperature), not categorical (e.g. type).
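To make the contrast concrete: a fractional factorial picks a structured subset of the discrete level combinations, while a Latin hypercube spreads a chosen number of runs through the continuous factor space so that each factor is stratified evenly. A minimal Latin-hypercube sketch using SciPy's `scipy.stats.qmc` module; the 30-run budget and the factor ranges are invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

# 8 continuous factors, 30 simulation runs (far fewer than 3**8 = 6561)
sampler = qmc.LatinHypercube(d=8, seed=42)
unit_sample = sampler.random(n=30)   # points in the unit hypercube [0, 1)^8

# Scale to illustrative factor ranges (e.g. a thickness in [1, 5] mm, etc.)
lower = np.array([1.0, 10.0, 0.1, 200.0, 5.0, 0.5, 50.0, 2.0])
upper = np.array([5.0, 50.0, 1.0, 800.0, 20.0, 2.0, 150.0, 8.0])
design = qmc.scale(unit_sample, lower, upper)

print(design.shape)  # 30 runs x 8 factors
# Defining property of a Latin hypercube: for every factor, the 30
# values land in 30 distinct equal-width strata of its range.
```

Each row of `design` is one simulation run. Whether this beats a fractional factorial depends on the goal: fractional factorials support clean effect estimation at discrete levels, while Latin hypercubes are better suited to fitting a surrogate model over continuous inputs.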
• asked a question related to Design of Experiments
Question
To prepare a material there are four stages, and each stage involves 3 parameters. If it were a single stage with multiple parameters, I would have used DOE or Taguchi's technique for optimization. What do I do in the case of multiple stages?
Dear Murali,
To solve this problem, we used methods of discrete mathematics (graphs and network models), as well as algorithms for optimizing network models: Dijkstra's algorithm and the Bellman-Ford algorithm. Software developed by the authors of the article was used to automate the search for the optimal solution. Graphs and network models allow the possible technological schemes in production to be presented in a clear, compact form and the data to be systematized. The developed program also allows the dimension of the tasks to be increased.
The idea is the following: all possible options for developing the fields can be represented as an ordered structure, a graph (e.g. with 37 vertices).
Each vertex (1-37) corresponds to a separate decision (alternative) that can be made, and each edge has a length corresponding to the value of the parameter to be optimized (cost, labour, time, etc.). The model is structured in stages (levels): vertices 2-4 may correspond to alternative exploration options, vertices 5-10 to possible exploration options, vertices 11-13 to possible development technologies, and so on. Note that in the graph (Fig. 1) the vertices are connected according to real relationships: vertex 1 corresponds to the start of the process and vertices 2-4 to exploration of the field, so the process begins with vertex 1 connected to vertices 2-4. We then analyse the network model at each stage; if vertex 2 was selected, there are two development variants (vertices 5 and 6), and so on. The rule is simple: if a connection exists between two stages of the field development, the corresponding vertices are joined; if not, they are not. For example, there is a route passing through vertices 1-3-8-11-16, as well as an alternative route through 1-12-16, i.e. in the first stage there is the possibility of incomplete exploration of the field: development starts at once, but is later stopped for lack of an adequate strategy. To find the best strategy, all the steps must be analysed, that is, the shortest route from vertex 1 to vertex 37 must be found.
Thus, to find the optimal strategy for the field development, all possible options should be arranged into an ordered structure, a graph. The value of the optimization parameter (in our case, the cost of each stage of field development) is then set as the distance between the vertices. The set of vertices corresponding to the stages, together with the distances corresponding to the optimization parameter, forms a network model. The Bellman-Ford algorithm is then used to find the shortest route. The search for the optimal solution can be carried out either in the forward direction, from vertex 1 to 37, or, as a rule, in the reverse direction from vertex 37 to 1. For example, if the task is to design the most cost-effective production, the reverse order should be applied, since the field owner wants to reduce production costs. If the task is to minimize the adverse impact on the environment, it is better to use the forward order (from 1 to 37), when additional steps such as building additional concentrating factories must be considered. Depending on the order in which the model is considered, both environmental and economic options for field development can be analysed.
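The shortest-route step described above can be sketched in a few lines. This is a generic Bellman-Ford implementation on a toy layered graph; the vertex numbers and edge costs are invented for illustration and are not the 37-vertex model from the article:

```python
# Bellman-Ford: shortest path in a directed graph with stage costs,
# mirroring the layered field-development model described above.
def bellman_ford(n_vertices, edges, source):
    """edges: list of (u, v, cost). Returns (distances, predecessors)."""
    INF = float("inf")
    dist = [INF] * n_vertices
    pred = [None] * n_vertices
    dist[source] = 0.0
    for _ in range(n_vertices - 1):        # relax all edges n-1 times
        updated = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v], pred[v] = dist[u] + cost, u
                updated = True
        if not updated:
            break
    return dist, pred

def route(pred, target):
    """Reconstruct the optimal route by walking predecessors backwards."""
    path = [target]
    while pred[path[-1]] is not None:
        path.append(pred[path[-1]])
    return path[::-1]

# Toy layered model: vertex 0 = start, vertex 5 = completed development.
edges = [(0, 1, 4.0), (0, 2, 2.0),        # two exploration options
         (1, 3, 1.0), (2, 3, 5.0),        # development technologies
         (2, 4, 1.0), (3, 5, 3.0), (4, 5, 6.0)]
dist, pred = bellman_ford(6, edges, source=0)
print("cheapest total cost:", dist[5])
print("optimal route:", route(pred, 5))
```

The same routine applied to the 37-vertex model, with edge costs set to the per-stage development costs, returns the optimal development strategy as a vertex sequence.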
Eg article
Best regards
• asked a question related to Design of Experiments
Question
The Newtonian model of string vibration traces to Euler, who asserted that the string can have the shape of any curve that can be drawn freehand.
The Hamiltonian model traces to Brook Taylor, who asserted in 1714, based on his description of the harmonic motion of a pendulum as isochronous, that every point on the string passes through the center of motion at the same point in time.
These predictions are subject to experimental confirmation.
Nowadays, everyone is on Euler's side, and Taylor's equations for the string are not generally known. Euler and Bernoulli attacked Taylor's version. The issue of who was correct has never been settled. I think it is obvious that Taylor was right. The wave equation has only one solution because, if S is the string, then dS = 0. That is why it is called a stationary wave.
We can predict that images of the string taken at two different positions along the string will show the same orbit (at dynamic equilibrium) for the Hamiltonian string, whereas the motion at two points will never synchronize in the Newtonian model.
So my question is: how can I form an image of a point on the string during vibration? Given that the frequency of the string is around 440 Hz, here are my questions:
1. Do I form the image inductively or photographically?
2. How many frames a second will I need?
3. Can I make an image of the string by plugging an electric guitar into an oscilloscope?
In the Hamiltonian model, points on the string move only in the z-y plane, but in the Newtonian model points move in all of x, y and z. Could a laser measure the length of the string during vibration?
I think the experiment I am proposing is technically possible. There must be someone who knows how to do it.
There is no experimental evidence of simultaneous multi-modal string vibration, in spite of what they say on Physics Stack Exchange. Videos of waves on hanging ropes or strings driven by an oscillator do not count.
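On question 2 (frames per second), the sampling theorem gives a lower bound: to resolve motion at 440 Hz you must sample faster than twice that frequency, and capturing the k-th harmonic multiplies the requirement by k. A small sketch of that arithmetic; the harmonic count and the 10x oversampling factor are illustrative choices, not fixed requirements:

```python
def min_frame_rate(fundamental_hz, n_harmonics=1, oversample=10):
    """Nyquist bound: sample faster than twice the highest frequency
    of interest; extra oversampling traces the orbit smoothly."""
    highest = fundamental_hz * n_harmonics
    return 2 * highest * oversample

print(min_frame_rate(440))                 # fundamental only
print(min_frame_rate(440, n_harmonics=5))  # tracking up to the 5th harmonic
```

For a 440 Hz fundamental this gives 8800 frames per second, which is within reach of modern high-speed cameras; an oscilloscope fed by a guitar pickup samples far faster still, but shows a voltage trace rather than an image of the string's shape.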