Attitude - Science topic
Attitude is an enduring, learned predisposition to behave in a consistent way toward a given class of objects, or a persistent mental and/or neural state of readiness to react to a certain class of objects, not as they are but as they are conceived to be.
Questions related to Attitude
In my opinion, if plowing is done CORRECTLY, then the problem of restoring soil fertility can be solved.
Since I'm a physicist but not a climate researcher, I can't say much about the technical points. However, what bothers me is the way the data are presented. For example, the instrumentally measured temperatures are displayed in a diagram together with reconstructed curves. My main problem is that the IPCC studies human-caused climate change, so the investigations are not open-ended. There is also a very strong link between politics (195 governments are members of the IPCC) and science. Historically, such links have not been good.
Although Buysse et al. (1989) state that "A global PSQI score > 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p < 0.001) in distinguishing good and poor sleepers", many researchers treat a score of 5 as indicating a poor sleeper. However, the symbol ">" means strictly greater than, not greater than or equal to, which would be "≥". So in this case I would score a value of 5 as a good sleeper. What's your opinion? Thank you!
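A minimal sketch of the point at issue, assuming the standard 0-21 PSQI global score; the helper function is hypothetical and only illustrates how the two cut-off readings differ:

```python
def classify_psqi(global_score, strict_greater=True):
    """Classify a PSQI global score (0-21) as 'good' or 'poor' sleeper.

    strict_greater=True follows the wording quoted from Buysse et al. (1989),
    where only scores > 5 indicate poor sleep, so a score of exactly 5 is
    still a good sleeper. strict_greater=False reproduces the common
    '>= 5' reading discussed above.
    """
    if strict_greater:
        return "poor" if global_score > 5 else "good"
    return "poor" if global_score >= 5 else "good"

# A score of exactly 5 is the only value on which the two readings disagree.
print(classify_psqi(5, strict_greater=True))   # good
print(classify_psqi(5, strict_greater=False))  # poor
```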
Mathematically, it is posited that the cosmic or local black-hole singularity must someday reach infinite density and zero size. But this is unimaginable. If infinite-density stuff could exist, it would already have existed.
Hence, in my opinion, this kind of mathematical necessity should be treated as a limiting case of physics. IS THIS NOT THE STARTING POINT FOR DETERMINING WHERE MATHEMATICS AND PHYSICAL SCIENCE MUST PART WAYS?
Many times I read interesting papers presenting analyses and decisions on real cases (with data) to support the "theory", BUT NO data are provided.
In such cases there is NO possibility of analysing anything: a "take it or leave it" situation, which is very upsetting!
In my opinion this behaviour is rather dishonest.
I would like to know the opinion of the scholars in RG about such papers presenting analyses and decisions with NO data…
I have done a study using the DAI-10, GASS and BPRS scales. The BPRS is an open scale, but the DAI-10 and GASS scales can be used without permission for dissertation purposes only. If the study is to be published in a journal, the GASS and DAI-10 scales require permission from their developers.
I would like to know your thoughts on these social challenges
Variables:
- Independent: Religiosity, Knowledge, Attitude, Subjective Norms, Perceived Behavioural Control, Islamic Banking, Shariah-Compliant Investment
- Dependent: Islamic Financial Literacy, Islamic Microfinance
- Mediating variable: Attitude
Within the same construct or variable, some questions were answered on a Likert scale and others were answered yes/no. Is this statistically correct?
Thank you
I want an accurate scientific explanation.
Preliminary Professional Opinion

There are two different dependent variables (both data are ordinal).
Variable A is the "working performance of an institution in garbage management"
Variable B is "society participation, as shown by attitudes toward garbage handling"
In my opinion, some publications, such as review articles and meta-analyses, do not need to present an ethical clearance certificate to be published. I am asking whether articles using patient data from registries and other simple study designs can be exempted from the ethical clearance certificate. Looking forward to reading about your experience and to the discussion.
PLANCK ERA / QUANTUM ERA and “DISAPPEARANCE” OF PHYSICAL CAUSALITY: FALLACIES FROM “OMNIPOTENCE” OF MATHEMATICS
Raphael Neelamkavil, Ph.D. (Quantum Causality),
Dr. phil. (Gravitational Coalescence Cosmology)
Cosmologists and quantum cosmologists seem to be almost unanimous (but happily today, a bit decreasingly unanimous) that, at the so-called level of the Planck era / quantum era of the original state of the big bang (of our universe, or of an infinite number of such universes existent in an infinite-eternal multiverse – whichever the case may be), where all forces are supposed to be unified or quasi-unified (but always stated without any solid proof), (1) either there did not exist and will never exist causality, (2) or any kind of causality is indistinguishable from the normal course of physical existents.
Is this sort of cosmological theorizing acceptable, where (1) the unification is supposed but is not necessarily physical-ontologically presupposable, and (2) causality and non-causality are taken in the mood of dilemma? This sort of theorizing is, of course, based on some facts that most physicists and other scientists agree on without much effort to search for causes of approval or disapproval.
But the adequacy of such reasons for this conclusion is questionable. The manner of concluding to non-causality or indistinguishability of causality and non-causality at spots in the universe or multiverse, where all forces are supposed to be unified or quasi-unified, is questionable too. The main reason is the lack of physical-ontological clarity regarding the status of causality and the status of unification of the forces.
In my opinion, this is based on the inevitable fact that whatever the mathematics automatically prescribes for such situations can be absolute only if all the parameters, quantities, etc. that have entered the equations are absolute. This necessary condition has not been met in the physics that goes into the mathematical formulation of the said theory.
Even the measurement that humanity has so far made of the speed of light is not exact and absolute. The fantastic cosmological conclusion, namely a volatile decision for or against causality and the supposed truth of the claim that all forces are unified therein, does not rest on an adequate mathematical reason, and certainly not on a sufficiently physical one.
The reason I gave is not strictly and purely mathematical, physical, or just generally philosophical. It is strictly physical-ontological and mathematical-philosophical. Things physical-ontological are not “meta-”physical in the sense of being beyond the physical. Instead, they treat of the preconditions for there being physics and mathematics. They being pre-conditions, not respecting them leads to grave theoretical problems in mathematics, science, and philosophy.
Hence, in my opinion, fundamentally mathematical-ontological and physical-ontological presuppositions and reasons are more rationally to be acceptable for the foundations of mathematics and physics than all that we have as strictly mathematical and physical in the name of foundations. I give here the obvious in order to assure clarity: I presuppose that physical ontology consists of the necessary presuppositions of anything dealt with in physics, astrophysics, cosmology, and other purely physical sciences, and of course of the mathematics and logic as applied to existent physical things / processes.
The main reason being considered for the so-called non-causality and indistinguishability between causality and non-causality at certain cosmological or physical spots seems to be that space and time could exist only with the big bang (or whatever could be imagined to be in place of it), whether just less than 14 billion years ago or doubly or triply so much time ago or whatever.
First, my questions on this assumption are based on an antagonism that I have toward cosmologists lapping up the opinion expressed by St. Augustine centuries ago: that if space and time "exist" only if and from the time when the universe exists, then the question of space and time before the expansion of the universe is meaningless. These cosmologists presume that the expansion of the universe began from a nullity state and that, hence, the universe could not have existed before the beginning of the expansion. But what if it existed from eternity like a primeval stuff, without any change, and then suddenly began to explode? Holding that basic premise, they conclude that time, as an "existent" now, would not have existed before the expansion! What clarity about the concept of existence! Evidently, this is due to the gaping absence of regard for the physical-ontological presuppositions behind physical existence.
Secondly, as is evident, some of them think that space and time are things that exist beyond or behind all the physical processes that exist. Thus, some even identify space with the ether. If we have so far only been able to measure physical processes, why call these measurements measures of space and time? Why not call them just what they are, and accept that they are termed space and time merely for convenience? After all, the names we give to things do not themselves exist; and we have never observed space and time at all.
Thirdly, is it such a difficult thing for scientists to accept the lack of evidence of any sort of “existence” of space and time as background entities? Einstein spoke not of the curvature of existent spacetime, but of the mathematical calculations within a theory of the measurementally spatiotemporal aspect of existent physical processes as showing us that the measurementally spatiotemporal aspect of the physical processes – including existent energy-carrier gravitational wavicles – is curving within mathematical calculations.
Now, if the curvature is of existent processes (including existent energy-carrier gravitational wavicles), then, at the so-called primeval spot in each existent universe (even within each member of an infinite-eternal multiverse containing an infinite number of finite-content universes like ours) where all forces are supposed to be unified or quasi-unified, there cannot be a suspension of causation, because nothing existent can be compressed or rarefied into absolute nullity and continue to exist.
This demonstrates that, even at the highly condensed or rarefied states, no existent is nothing. It continues to exist in its Extended and Changing nature. If anything is in Extension-Change-wise existence, it is nothing but causal existence, constantly causing finite impacts.
Why, then, are some cosmologists and theoretical physicists insisting that gravitons do not exist, that space and time are entities, that gravitation is mere spacetime curvature, that causality disappears at certain spots in the cosmos (and in quantum-physical contexts), etc.? Why not, then, also say that material bodies are merely spacetime curvature and cannot exist? Is this not due to undue trust in the science-automation powers of mathematics, which can only describe processes in a manner conducive to its foundations, and cannot tell us whether there is causation or not? I believe that only slavishly mathematically automated minds can accept such claims.
Examples of situations where causality is supposed to disappear are plentiful in physics. More than a century of non-causal interpretations of the Uncertainty Principle, the Double-Slit Experiment, the EPR Paradox, Black Hole Singularities, the Vacuum Creation of Universes, etc. offers clear examples of physicists and cosmologists falling prey to the supposed omnipotence of mathematics and to their unquestioning faith in its powers.
It is useless, in defence of mathematics and physics, to cite here the extreme clarity and effectiveness of mathematical applications in instruments in the space-scientific, technological, medical, and other fields. Did I ever question these precisions and achievements? But do the clarity and effectiveness of mathematics mean that mathematics is absolute? If they can admit that it is not absolute, then let them tell us where it will be relative and less than absolute. Otherwise, they are mere believers in a product of the human mind, as if mathematics were given by a miraculously active, almighty space and time.
All physicists need to recognize that all languages including mathematics are constructions by minds, but with foundations in reality out-there. Nothing can present the physical processes to us absolutely well. Mathematics as applied in physics (or other sciences) is an exact science of certain conceptually generalizable frames of physical processes. This awareness might help physicists to de-absolutize mathematical applications in physics.
Fourthly, the above has another important dimension. Physics or for that matter any other science cannot have at its foundations concepts that belong merely to the specific science. I shall give an example as to how some physicists think that physics needs only physical concepts at its foundations: To the question what motion is, one may define it in terms allegedly merely of time as “the orientation of the wave function over time”. In fact, the person has already presupposed quantum physics here, which is clear from his mention of the wave function, which naturally presupposes also the previous physics that have given rise to quantum physics.
This sort of presupposing the specific science itself for defining its foundational concepts is what happens when concepts from within the specific science, and not clearly physical-ontological notions, come into play in the foundations of the science. Space and time are measuremental, hence cognitive and epistemic. These are not physical-ontological notions. Hence, these cannot be at the foundations of physics or of any other science. These are derivative notions.
It is for this reason that I have posited Extension and Change as the primary foundational notions. As I have already shown in many of my previous papers and books, these two are the only two exhaustive implications of the concept of the To Be of Reality-in-total as the totality of whatever exists.
Many faculty members are facing this problem while applying for Assistant Professor posts at 7000 and 8000 AGP (not getting shortlisted because their experience is not from an NIRF-ranked or government institute), even though they hold one or two degrees from IITs. They also face problems with the age criterion (less than 35 years). Do you think this should be changed, or is it fine as it is? Kindly comment on it.
What will the macroeconomic situation be in the forthcoming future, and will it be dominated by better or worse conditions?
"Substitution of concepts" is a logical fallacy that occurs when someone replaces or substitutes one concept with another that is not equivalent, leading to confusion or distortion of the argument or discussion. This fallacy can be intentional or unintentional and often results in faulty reasoning or misrepresentation of the original point.
Here's an example to illustrate this fallacy:
Person A: "We should invest more in renewable energy sources to reduce our carbon footprint." Person B: "So you're saying we should just abandon all other forms of energy and live in the dark ages?"
In this example, Person B is substituting the concept of "investing in renewable energy sources" with the exaggerated concept of "abandoning all other forms of energy." Person A's argument is not advocating for completely abandoning all other energy sources, but Person B's substitution distorts the original point to make it easier to attack.
Substitution of concepts can lead to various negative outcomes in discussions:
- Straw Man Argument: This fallacy often leads to creating a "straw man" argument, where the opponent misrepresents the original argument to make it easier to attack. This distracts from the actual point being made.
- Misunderstanding: Substituting concepts can result in misunderstanding and miscommunication. It prevents productive dialogue by diverting attention away from the real issue.
- Dishonesty: In some cases, people may intentionally substitute concepts to mislead or deceive their audience, presenting an argument that is easier for them to refute.
- Polarization: By misrepresenting an opponent's argument, substitution of concepts can intensify disagreements and lead to increased polarization, as it becomes more difficult to find common ground.
- Loss of Credibility: Individuals who frequently employ this fallacy may lose credibility in discussions, as others recognize their tendency to twist arguments for their own purposes.
To engage in productive and meaningful discussions, it's important to accurately represent and understand the concepts being discussed, without resorting to the fallacy of substituting concepts. Please, share your experiences and opinions.
Is there any opinion on an experiment concerning photons, neutrinos, and gravitons?
Dear professors and colleagues
I would like to get your opinion on the following figure:
1- I ligated a specific 200 bp insert into a vector, and that exact location shows misreads with 3 different oligo primers but is identified correctly by 1 oligo primer. What is your opinion?
2- In this case, does it mean that the insert is not right?
Thank you in advance

Hi, I would be grateful for an opinion on the following.
When participants complete only the early demographic questions in a survey and do not progress to the actual questions, is this classified as NMAR data? Given that I still have a sufficient number of completed surveys to meet the requirements of the power calculation if the incomplete ones are excluded, can I just exclude them? Are there any valuable references showing that this is acceptable?
Thank you in advance
Colin
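One hedged, illustrative way to probe (though never prove) the missingness mechanism is to check whether drop-out is associated with the demographics you did observe; the data frame and variable names below are hypothetical, and NMAR by definition cannot be confirmed from the observed data alone.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey export: 'completed' flags whether the respondent went
# beyond the demographic block; 'gender' is one observed demographic.
df = pd.DataFrame({
    "completed": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "gender":    ["F", "M", "F", "F", "M", "M", "F", "M", "F", "M"],
})

# If drop-out is associated with observed demographics, the data are at least
# MAR with respect to those variables; if nothing observed predicts drop-out,
# MCAR becomes more plausible, but NMAR can never be ruled out this way.
table = pd.crosstab(df["completed"], df["gender"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```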
I need a measure for RSB that I can use for a survey that would encompass a wide and diverse population demographic without restrictions to age, gender, race, sexuality, religion, and lifestyles. Thank you to whoever can be of assistance.
What is your opinion on the reliability of transactions via the blockchain?
Once we determine the dimensions of the root via X-ray, I prepare the scaffold and add to the composition a substance called tideglusib as an initiator of the cellular process.
What is your opinion ?
Dear members,
I am conducting a short survey regarding the influence of AI on Digital Marketing.
Appreciate if you could share your opinions. The survey takes only 5 minutes to complete.
Thank you very much.
Kind regards,
Anthony
The concept of Circular Economy (CE) in the Construction Industry (CI) is mainly about the R-principles: Rethink, Reduce, Reuse, Repair, and Recycle. Thus, if the design stage, followed by effective job-site management, included consideration of the whole lifecycle of the building, with further directions for the possible reuse of the structural elements, the amount of waste could be decreased or eliminated. Analysis of the current literature has shown that CE opportunities in CI are mostly linked to material reuse. Other top-researched areas include the development of different circularity measures, especially during the construction period.
In the last decade, AI emerged as a powerful method. It has solved many problems in various domains, such as object detection in visual data, automatic speech recognition, neural translation, and tumor segmentation in computed tomography scans.
Despite the broad range of works on the circular economy, AI has not been widely utilized in this field. Thus, I would like to ask whether you have an opinion or idea on how Artificial Intelligence (AI) can be useful in developing or applying circular construction activities.
In the optimization of truss structures, the DE-MEDT (Differential Evolution-Mixed Encoding Design Technique) algorithm is specifically designed to handle the trade-off between discrete and continuous variables. It achieves this by employing a mixed encoding approach, where both discrete and continuous variables are simultaneously optimized.
In truss structure design, discrete variables refer to design parameters that can only take on a finite set of discrete values, such as the diameter or cross-sectional area of truss members. On the other hand, continuous variables are design parameters that can take on any real value within a certain range, such as the length of truss members.
The DE-MEDT algorithm addresses this trade-off by representing discrete variables using a binary encoding scheme and continuous variables using their real values. This mixed encoding allows for the simultaneous optimization of both types of variables in a single optimization process.
Here's an overview of how the DE-MEDT algorithm manages the trade-off between discrete and continuous variables:
Initialization: The algorithm initializes a population of candidate solutions, each consisting of a combination of discrete and continuous variables. These solutions are randomly generated within their respective feasible ranges.
Evaluation: Each candidate solution is evaluated using the fitness function(s) that capture the objectives and constraints of the truss structure optimization problem. These fitness functions measure the quality and performance of each solution.
Selection: The DE-MEDT algorithm employs a selection process, typically based on dominance or Pareto dominance, to determine the most promising solutions in terms of the trade-off between objectives. This selection process considers both discrete and continuous variables.
Crossover and Mutation: In the DE-MEDT algorithm, crossover and mutation operations are applied to the selected solutions to create new offspring solutions. These operations combine and modify the discrete and continuous variables, allowing for exploration of the solution space and potential improvement.
Replacement: The offspring solutions are compared with the parent solutions, and a replacement strategy (e.g., elitism) is employed to update the population. This ensures that the best solutions, considering both discrete and continuous variables, are retained in subsequent generations.
Termination: The optimization process continues iteratively until a termination criterion is met, such as a maximum number of generations or convergence of solutions.
By simultaneously optimizing discrete and continuous variables, the DE-MEDT algorithm can effectively explore the design space, considering both discrete design choices (e.g., member sizes) and continuous design parameters (e.g., member lengths). This approach allows for a comprehensive search for optimal truss configurations that balance performance, cost, and other objectives.
It's important to note that the specific implementation details of the DE-MEDT algorithm may vary, and researchers may introduce additional techniques or modifications to further improve its performance and efficiency for truss structure optimization.
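As a rough, generic illustration of the mixed-encoding idea only (this is not the DE-MEDT authors' implementation; the catalogue of areas, the bounds, and the toy objective are all invented), a differential-evolution loop can carry a real-coded gene that indexes a discrete catalogue alongside a genuinely continuous gene:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete catalogue of cross-sectional areas (cm^2)
# and a continuous member length bounded in metres.
AREAS = np.array([5.0, 10.0, 15.0, 20.0, 30.0])
LEN_LO, LEN_HI = 1.0, 5.0

def decode(x):
    """Map the real-coded genes to (discrete area, continuous length)."""
    idx = int(np.clip(round(x[0]), 0, len(AREAS) - 1))
    length = float(np.clip(x[1], LEN_LO, LEN_HI))
    return AREAS[idx], length

def fitness(x):
    """Toy objective: minimise member mass with a stress-like penalty term."""
    area, length = decode(x)
    mass = area * length
    stress = 1000.0 / area                    # stand-in for an analysis result
    penalty = max(0.0, stress - 120.0) * 1e3  # constraint-violation penalty
    return mass + penalty

# Classic DE/rand/1/bin operating on the mixed-encoded vector
NP, F, CR, GENS = 20, 0.7, 0.9, 200
pop = np.column_stack([
    rng.uniform(0, len(AREAS) - 1, NP),   # gene 0: real-coded catalogue index
    rng.uniform(LEN_LO, LEN_HI, NP),      # gene 1: continuous length
])
fit = np.array([fitness(ind) for ind in pop])

for _ in range(GENS):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)
        cross = rng.random(2) < CR
        cross[rng.integers(2)] = True         # guarantee at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = fitness(trial)
        if f_trial <= fit[i]:                 # greedy replacement
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
print("best design (area, length):", decode(best), "objective:", fit.min())
```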
I wanted to know your opinion about this problem....
NOTES:
(1) Any Actor has four types:
i. Human or Person
ii. Smart hardware such as Robot
iii. Smart Software
iv. Creatures
(2) Any Party has four types:
i. Human or Person
ii. Organization, such as Family, Company, and others
iii. Countries such as the USA, India, China, and others
iv. Political Parties
Critical Question
"What should a qualified leader look like, in your opinion? Would you please share your opinion?
Here is my answer:
I received this question from a "LinkedIn Member," and he said: "I see that you are a very good person in your industry. What should a qualified leader look like, in your opinion?"
Thank you for your compliment. I play several roles and work hard to get people, businesses, and organizations to adopt good roles instead of evil ones, because wrong (evil) roles seem typical in any society.
A leader is a role that has nothing to do with business or industry, unless he represents a particular movement or human-rights cause, or works for the benefit of some group in a country, city, or even village, or for the sake of science, knowledge, and the evolution of humans anywhere on earth.
Examples:
(1) CEOs, managers, OR ANY AUTHORITY are roles, and they get a salary for playing those roles. They should know how to play and perform their roles well for the sake of whatever they have been hired for, and keep their selves somewhere far away from their roles. Why? Because their EGOs can interfere with performing their roles effectively.
(2) Any Party must play the roles that help all humans, in their own countries first and then in the world, in all aspects of life, to grow and to be fair and just, and by all means to collaborate and advance humankind.
(3) Nowadays, many people and parties damage every aspect of human nature because they operate with a self-centred mentality for their own needs and do not play or perform the roles they are supposed to.
The following article will question the unification of "The Right Person, the Right Place, the Right Time." Thomas B. Holman.
"Transformation of Traditional Companies in the Digital World: Challenges and Opportunities" - This topic can explore how traditional companies adapt to technological and digital advancements and how strategic management can contribute to the success of this transformation.
Please give your opinion
Please help me.
In my results for a solar cell, when I increase the active-layer thickness, Jsc is lowered (to about half). However, the EQE-derived current is similar. Is there any opinion about this? (A low diffusion length in the active layer, traps, or low carrier density could be reasons, etc.)
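One routine cross-check that may help localise the discrepancy (a sketch with placeholder arrays, not your data): integrate the measured EQE against the AM1.5G photon flux and compare the result with the Jsc taken from the J-V curve. If the EQE-derived Jsc stays roughly constant while the J-V Jsc halves, mechanisms that only show up at 1-sun injection (for example, the transport and recombination effects you list) become the more plausible suspects.

```python
import numpy as np

q = 1.602176634e-19  # elementary charge, C

# Placeholder arrays -- replace with the measured EQE and the tabulated
# AM1.5G spectral photon flux (photons m^-2 s^-1 nm^-1) on the same grid.
wavelength_nm = np.arange(300, 901, 10, dtype=float)
eqe = np.full_like(wavelength_nm, 0.6)           # hypothetical flat 60 % EQE
photon_flux = np.full_like(wavelength_nm, 3e18)  # hypothetical flux values

# Jsc = q * integral over wavelength of EQE(lambda) * photon_flux(lambda)
integrand = eqe * photon_flux
d_lambda = np.diff(wavelength_nm)
jsc_A_per_m2 = q * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * d_lambda)  # trapezoid rule

print(f"EQE-derived Jsc ~ {jsc_A_per_m2 / 10:.1f} mA/cm^2")  # 1 A/m^2 = 0.1 mA/cm^2
```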
Good morning,
Regarding the Park transformation, I note that Matlab specifies by default the q-axis aligned with the a-axis, and hence a sine-based transformation. My problem is that much of the research I have reviewed is based on the cosine-based form.
Would you kindly advise which of them is preferable in your opinion?
How could I "translate" the expressions from one reference frame to the other in order to make my calculations consistent?
Thanks in advance and Happy NY2K20!
Juan Cabeza
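A small numerical sketch of the two conventions and of the fixed relation between them; this is my own illustration under the amplitude-invariant (2/3) scaling, so please check it against the specific references you follow. With the cosine-based form (d-axis aligned with phase a at theta = 0) and the sine-based form (q-axis aligned with phase a), the sine-based result is simply the cosine-based result evaluated at theta - pi/2, i.e. the two dq vectors differ by a fixed 90-degree rotation, which is how quantities can be translated from one convention to the other.

```python
import numpy as np

def park_cosine(a, b, c, theta):
    """Amplitude-invariant Park transform, d-axis aligned with phase a at theta = 0."""
    d = (2/3) * (a*np.cos(theta) + b*np.cos(theta - 2*np.pi/3) + c*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (a*np.sin(theta) + b*np.sin(theta - 2*np.pi/3) + c*np.sin(theta + 2*np.pi/3))
    return d, q

def park_sine(a, b, c, theta):
    """Amplitude-invariant Park transform, q-axis aligned with phase a at theta = 0."""
    d = (2/3) * (a*np.sin(theta) + b*np.sin(theta - 2*np.pi/3) + c*np.sin(theta + 2*np.pi/3))
    q = (2/3) * (a*np.cos(theta) + b*np.cos(theta - 2*np.pi/3) + c*np.cos(theta + 2*np.pi/3))
    return d, q

# Balanced three-phase test signal
theta, A, phi = 0.7, 10.0, 0.3
abc = [A*np.cos(theta + phi),
       A*np.cos(theta + phi - 2*np.pi/3),
       A*np.cos(theta + phi + 2*np.pi/3)]

d1, q1 = park_cosine(*abc, theta)
d2, q2 = park_sine(*abc, theta)

# The sine-based frame is the cosine-based frame rotated by 90 degrees:
# (d_sin, q_sin) == (-q_cos, d_cos), or park_sine(theta) == park_cosine(theta - pi/2).
print(np.allclose([d2, q2], [-q1, d1]))                                          # True
print(np.allclose(park_sine(*abc, theta), park_cosine(*abc, theta - np.pi/2)))   # True
```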
I have run a survey to evaluate the perception of and attitude toward ultra-processed foods among consumers (sample size 621). I am doing confirmatory factor analysis and structural equation modelling. I have four constructs (Risk Perception, 3 items; Benefit Perception, 3 items; Attitude, 4 items; Behavioural Intention, originally 4 items, 1 deleted, now 3) and 24 items. My measurement model fit was good, with factor loadings from a minimum of 0.44 to a maximum of 0.70. The problem is CR (0.70 for two constructs, 0.59 for the other two) and AVE (ranging from 0.29 to 0.45); HTMT shows no issue. I think I can justify it, but I also need advice.
The main problem is that risk perception turns out to be positively and significantly correlated with attitude, which is the opposite of the theory and of other research findings. My hypothesis was that the greater the perceived risk, the more negative the attitude towards ultra-processed foods. In this situation, what should I do?
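For reference, a minimal sketch of the usual composite reliability and AVE computations from standardised loadings (the loadings below are illustrative, not your data). With loadings in the 0.44-0.70 range, AVE values below 0.50 follow directly from the formula, so any justification usually rests on arguments such as the often-cited position attributed to Fornell and Larcker (1981) that convergent validity may still be adequate when AVE is below 0.50 but CR is above 0.60, together with intact discriminant validity (your HTMT result).

```python
import numpy as np

def cr_and_ave(loadings):
    """Composite reliability and average variance extracted from
    standardised factor loadings (item errors assumed uncorrelated)."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2                       # item error variances
    cr = lam.sum()**2 / (lam.sum()**2 + error_var.sum())
    ave = np.mean(lam**2)
    return cr, ave

# Illustrative loadings in the reported range (0.44 to 0.70)
cr, ave = cr_and_ave([0.44, 0.55, 0.62, 0.70])
print(f"CR = {cr:.2f}, AVE = {ave:.2f}")   # roughly CR = 0.67, AVE = 0.34
```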
Unless and until we do the selection in manual mode, the automatic mode seems quite unreliable. The semi-automatic mode seems comparatively better than the automatic mode. What is your opinion?
There is a growing tendency among authors to publish results without cross-checking their own results. This attitude generates wrong ideas in new readers' minds. On the other hand, authors get promotions by showing the wrong results. They do not feel anything. Give your view.
A few researchers have argued that there is a "dark side of creativity". What is your opinion on it? Is it the dark side of creativity, or the dark side of the human brain (psychology)? Because if human intent is questionable, will you not have a dark side to every innovation?
e.g. On the one hand, a creative idea resulted in value and profit; on the other, an individual was willing to be intentionally dishonest in order to execute his idea. It is this dark side of creativity, particularly the relationship between creativity and dishonesty, that has piqued the interest of researchers.
Please do write your views.
Best Regards
Sandeep
Found different questions according to the different authors
Inquiry on using a Conditional Inference Tree to assess knowledge and attitude in conservation science
I have used the MTT Assay to measure cancer cells' viability under an antioxidant compound's influence. But contrary to expectations, with the increase of antioxidant concentration from 5 μM to 150 μM, the viability not only did not decrease but also increased. In other words, with the increase in the concentration, the amount of light absorption increased. In your opinion, what is the reason for this technical error? Or what kind of problems could have occurred during the MTT Assay?
In my opinion, academic conferences (ACs) are (very) necessary in tertiary institutions of higher learning, as they create a platform for knowledge sharing and exchange. Scholars from all walks of life are eager to attend because they believe they will gain something useful. Sadly, these conferences run for only two or three (maximum) days: a duration for which some scholars and institutions may find it unnecessary to spend a lot of money.
If the impact of an AC is to network for further collaboration, to look for external examiners for your faculty/department/students, and to peer-review pedagogical practices for the sake of standardisation, then 2-3 days are just not good enough.
Therefore, I think/suggest that:
- An AC should have a minimum of 5-7 days, during which experts in their various fields can be invited to come and share their experiences and how they made it, as well as new trends and current research areas that can advance our lives.
- Experts in the analytical software most appropriate for the chosen field should come and empower scholars in that discipline (I hate giving my work to someone else to analyse and then having them tell me the outcome). When you do the analysis yourself, you gain more insights that can lead to multiplied knowledge creation.
- The suggested duration can help to achieve the impacts listed in the second paragraph.
Who can afford not to attend a conference of such benefits? and who will dare to say that such a conference is of little benefit?
I invite comments!
Hi guys, I want to ask you something. I'm doing a study on antioxidant activity in Palm Kernel Cake (PKC) using the DPPH assay method. I observed that the colour of my bio-oil extracted from PKC appeared rather clear. Some time ago, I saw someone write that it is better to use 0.1 mM DPPH instead of 0.6 mM DPPH for a clear solution sample. Is that true? Can someone give me an opinion on how I can know what DPPH concentration I should use? Is it okay if I use 0.6 mM DPPH? Thank you. 😁
I was reviewing a paper for a journal, and the authors stated that they randomly selected company reports from several databases to give them a sample size of 600 firms.
I was of the opinion that “sampling” is inherently associated with primary data collection or am I mistaken?
My suggestion would be that the authors state that they selected 600 “cases” for their analysis from the databases (using whatever criteria) and not make reference to the word “sampling”…?
Dear professors and colleagues
Situation:
I am currently preparing a recombinant DNA consisting of a 1000bp insert and a 6000bp vector.
- Insert: The 1000bp insert is prepared by amplification using PCR and then double-digested to create sticky ends.
- Vector: The vector, originally 7500 bp, has been double-digested using the same restriction enzymes, and the 6000 bp backbone was then extracted and purified from a 0.6% agarose gel.
Trouble:
1- After extraction and purification, the final concentration is 12-20 ng/µl, whereas the desired concentration for a successful ligation is 100 ng/µl or more. What is your advice for obtaining a higher concentration from the gel?
2- After proceeding to ligation using different protocols and kits, with an insert-to-vector ratio of 3:1 using 270 ng/µl of insert and 12 ng/µl of vector, the end result in gel electrophoresis is longer than the desired length by approximately 10000 bp.
Do you have any suggestions?
I will appreciate your opinion and advice....
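Two hedged observations in code form, using the sizes quoted above (a sketch, not a protocol). Insert:vector ratios are normally molar ratios computed from mass and fragment length, and on that basis 270 ng/µl of a 1000 bp insert against 12 ng/µl of a 6000 bp backbone is a very large insert excess rather than 3:1; a large excess of insert is one commonly discussed cause of ligation products running much longer than expected (insert concatemers).

```python
def insert_mass_for_ratio(vector_ng, vector_bp, insert_bp, molar_ratio):
    """ng of insert needed for a given insert:vector molar ratio, using the
    standard approximation that moles of dsDNA are proportional to mass/length."""
    return vector_ng * (insert_bp / vector_bp) * molar_ratio

def actual_molar_ratio(insert_ng, insert_bp, vector_ng, vector_bp):
    """Insert:vector molar ratio implied by the masses actually used."""
    return (insert_ng / insert_bp) / (vector_ng / vector_bp)

# With a 1000 bp insert and a 6000 bp vector backbone:
print(insert_mass_for_ratio(vector_ng=50, vector_bp=6000, insert_bp=1000, molar_ratio=3))
# -> 25.0 ng of insert per 50 ng of vector gives a 3:1 molar ratio

# The quantities quoted above (270 ng/ul insert vs 12 ng/ul vector) correspond to:
print(actual_molar_ratio(insert_ng=270, insert_bp=1000, vector_ng=12, vector_bp=6000))
# -> 135.0, i.e. roughly a 135:1 molar excess of insert
```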
What if we didn't know anything about history? How would the world be different?... What if everyone had the same opinion about everything?
What is your opinion on the quantum states of intermediate molecules?
Dear Milan Dordevic, Rade Tešić, Srdjan Todorović, Miloš Jokić, Dillip Kumar Das, Željko Stević, and Sabahudin Vrtagic,
Reference is made to your paper
“Development of Integrated Linear Programming Fuzzy‐Rough MCDM Model for Production Optimization”
I read it and my comments are:
1- In the abstract you say "Exactly such a problem is solved in this paper, which integrates linear programming and a Multi‐Criteria Decision‐Making (MCDM) model".
In reality, Linear Programming (LP) is part of MCDM. As a matter of fact, it was the first method of MCDM, created by Kantorovich in 1940.
In “First, linear programming was applied to optimize production and several potential solutions lying on the line segment AB were obtained”
This is correct, and it sometimes happens, but only when the objective function has the same slope as a criterion. This is your case, and a-b is a Pareto frontier, with infinitely many optimal solutions between a and b, if the alternatives are not finite.
2. In page 2 “This model includes qualitative and quantitative indicators, which is an advantage considering that the disadvantage of various multi‐objective programming models is that they are basically mathematical and often ignore qualitative and subjective factors”
In reality, practically all MCDM methods work with quantitative and qualitative criteria; LP, however, works only with quantitative criteria, or indicators, as you call them.
3. In page 2 “fuzzy analytic hierarchy process”
You cannot apply fuzzy logic to the AHP method. This was expressly stated in writing by Saaty, its creator, because AHP is already fuzzy.
4- In "I" of Figure 1, in my opinion, there is a sequence mistake, since the determination of criteria must precede LP, not follow it as shown in the figure. You cannot solve an LP scenario if you do not have the criteria.
5- In “IV” Figure 1, what is the gain in comparing rankings from different methods addressing the same problem? What information can you extract from this comparison?
6- In page 2 “When applying LP for the optimization and management of production processes in this special case, several potential solutions are obtained instead of one which is usually the case”
Again, LP can give several optimal solutions ONLY when the mono-objective function is parallel to a criterion. LP was designed to give, if it exists, only one optimal solution, such as maximizing benefits or minimizing costs, but not the two at the same time. However, you can maximize a benefit and minimize a cost at the same time if they are criteria. The method will then try to find a solution that balances both criteria, that is, a compromise solution.
By the way, if you use LP, you don't need weights. These are determined in each iteration of the LP method. DM preferences can be applied after mathematically correct results are reached.
7- What are "Rough MCDM methods"? You did not explain this concept; the same goes for rough numbers. Remember that many readers in RG, probably most, are not mathematicians.
8- On page 9, Figure 2, where is the objective function Z? In reality, it coincides with criterion C2.
Look at your equations. Criterion C2, or Equation 2, is:
C2 = A + 0.5 B = 3000
Now look at your objective equation:
Z = 2000 A + 1000 B
Dividing by 2000, Z is proportional to A + 0.5 B, i.e., it has the same slope as C2.
The only difference is that C2 has a goal (3000), i.e., it is definite, while Z is not.
The Figure is correct, but the objective function must be identified; if not, the reader will be asking where it is.
Z can be displaced to the right parallel to itself, because it is being maximized. Since it is indefinite, it can take any value. Suppose that you assign its reduced form A + 0.5 B the value 3000 (i.e., Z = 6,000,000). It means that, whatever its initial position in the A-B coordinate system, at that value the Z line coincides with C2, which is what you have in your diagram.
Thus, this parallelism between Z and C2 has been forced by giving Z and C2 the same slope (coefficients 1 and 0.5). Therefore a-b constitutes a Pareto front where all pairs of A and B on the segment are optimal (a small numerical illustration follows at the end of this review).
But remember that this is a particular case. Most LP problems determine the optimal value where the Z line touches one vertex of the polygon.
9- In page 9 “The optimal value of the objective function is unique because it is six million regardless of how many products A and how many products B will be produced, but there are infinitely many admissible solutions that provide this function value”
Exactly
10- In page 9 “Since we have as a solution many points that represent the optimal solution, a multicriteria model can be applied further”
I don’t understand. You already applied MCDM using LP. Why do you need to apply another method?
What is the 't' value? In my opinion you should explain that it is a parameter or a percentage.
11- On page 10 “3.3. Determining the Significance of Criteria Using the IMF SWARA Method”
What do you need that for if you already have the solution?
12- There is something that puzzles me: why develop such a complex procedure when the same result for A21 can be reached just by putting '=' instead of '≤' in this inequation of yours:
1.5 x1 + 1.5 x2 = 6000
Since x1 = x2, I don’t see why you say that the best alternative is x1.
Obviously, the authors must have had reasons to follow a complex mechanism, when it can be replaced by a simple operation. I would like to hear from them about this.
In my opinion the article is very valuable, but very difficult to understand, mainly because it appears that the authors take for granted that readers don’t need any clarification.
Hope it helps
Nolberto Munier
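To make the parallelism discussed in comments 8 and 9 concrete, here is a small, self-contained sketch using only the objective and the C2 constraint quoted above (the paper's other constraints are omitted, so this illustrates the geometric point rather than reproducing the paper's model). Any LP solver reports the optimum of six million at a single vertex, even though every point of the segment between the two vertices attains the same value:

```python
from scipy.optimize import linprog

# Maximise Z = 2000 A + 1000 B  subject to  A + 0.5 B <= 3000,  A, B >= 0
# (only the C2 constraint is used here, for illustration).
res = linprog(c=[-2000, -1000],               # linprog minimises, so negate Z
              A_ub=[[1, 0.5]], b_ub=[3000],
              bounds=[(0, None), (0, None)])

print(res.x, -res.fun)   # one optimal vertex, objective value 6,000,000

# The vertices (3000, 0) and (0, 6000) give the same value, so the whole
# segment between them is optimal: Z is parallel to the binding constraint.
print(2000*3000 + 1000*0, 2000*0 + 1000*6000)   # 6000000 6000000
```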
Some people reject new technologies or techniques in their field, especially when they are suggested by others. Sometimes they put obstacles in the way of their use.
Have you faced such people?
Your opinions are appreciated.
Dear All,
I am looking for opinions and ideas on what are the important components required for ethical governance.
Kindly include links or search engines.
Thanking you in advance for your guidance and help!
Game companies use embedded marketing to influence players' purchasing decisions and push them to make in-game purchases.
How do you think I can best measure this effect?
Through a questionnaire for players, or by conducting personal interviews?
Note that sales and purchase figures are not disclosed by the company
I would be grateful for the answer and look forward to reading your opinions
Dear All Respected researchers.
If anyone has researched or worked on the impact of communication or lean communication on knowledge sharing in an organization, kindly share your opinion with me.
Much appreciated and prior thanks.
Psychological theories are scattered and lack sufficient coherence. Researchers have worked without attention to each other. These valuable theories, if put together, would have great value, wouldn't they?
Do we blame the consumer, because he deals with these products and desires them, his demand creating supply? Or do we blame the governments for not criminalizing and banning black-market products?
I need a new tool to collect data about the opinions of a sample of customers regarding a particular product. I need to hear your views and experiences.
Dear researchers,
"From below Figure, the extensional viscosity estimated from CaBER (Capillary breakup extensional rheometer) is too apparent, not true rheological data".
What do you think about this statement?
- In my opinion, a capillary rheometer is an apparatus designed to measure shear viscosity and other rheological (flow) properties such as extensional viscosity, extrudate swell, thermal stability, and wall slip. So the data shown in this figure are rheological data. I am not sure about the above statement. Please give some advice.
Thank you very much.
Do you agree with me that ResearchGate is an appropriate platform for the circulation of scientific research and the exchange of opinions and ideas among international academics, researchers, and scientists, and that it makes the world like one country in the service of the world, its societies, and scientific research?
There are many statistics software packages that researchers usually use in their work. In your opinion, which one is better? Which one would you suggest for getting started?
Your opinions and experience can help others, in particular younger researchers, with the selection.
Sincerely
Acceptance of a specialized doctoral student in distance education planning
The metaverse could be the best option for universities, and we are your companions on this path.
What is your general opinion about the metaverse, the university, and studying in the field of distance education?
Many researchers use the terms "effect" or "impact", but I think they should use only "relationship". What is your opinion?
Hi researchers, I need your opinion on classifying all methods/algorithms from the early days until today.
They could be classified into 4-5 or more groups of methods.
This would help new researchers understand a lot while reading papers.
Thank you very much for your kindness.
Hello,
I am doing research on homophobia among youth and parents, for which I am thinking of using a standardized questionnaire, the Homosexuality Attitude Scale (HAS), but the scoring information for the questionnaire does not provide the ranges for levelling the attitude. Is it possible to set my own ranges using a proper statistical method?
Thanks,
Eli Nasrin Farhana
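If the scale's authors do not publish normative cut-offs, one transparent option often used in practice is to derive sample-specific bands, for example tertiles of the total score, and to report them explicitly as data-driven rather than normative. A minimal sketch with invented scores follows (check the scale's scoring direction before labelling the bands):

```python
import numpy as np

# Hypothetical total HAS scores from the surveyed sample
scores = np.array([34, 41, 55, 62, 48, 70, 66, 52, 39, 58, 73, 45])

# Tertile cut points split the sample into three roughly equal-sized bands
low_cut, high_cut = np.percentile(scores, [33.33, 66.67])
bands = np.where(scores <= low_cut, "low",
                 np.where(scores <= high_cut, "medium", "high"))

print(f"cut points: {low_cut:.1f}, {high_cut:.1f}")
print(list(zip(scores.tolist(), bands.tolist())))
```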
The rest mass and charge of the electron are still an unsolved problem in physics! Why?
Einstein: "A theory setting mass and charge a priori is incomplete!" So Dirac's electron theory (in which rest mass and charge are fundamental constants) is incomplete in the sense of Einstein's opinion. Does the same apply to the SM and GR up to now?
Dear Jean Dezert, Albena Tchamova, Deqiang Han, and Jean-Marc Tacnet,
Reference is made to your paper :
The SPOTIS Rank Reversal Free Method for Multi-Criteria Decision-Making Support
My comments are as follows:
It is very good news to have a method without Rank Reversal (RR).
1- In my opinion, you use the phrase "score matrix" to indicate what in reality is the initial matrix, composed of performance values. This induces confusion in readers, for whom a score matrix is a matrix with the different scores or results derived from applying an MCDM method.
2- You say in page 3 “The score matrix S = [Sij ] is sometimes also called benefit or payoff matrix in the literature.”
What happens if the matrix, as is most usual, also calls for minimization, using ‘cost’ values?
3- I don't think that an initial decision matrix (IDM) can be considered incomplete because it does not have bounds for the criteria. A matrix is incomplete when there is no indication of the quantity of resources for each criterion, a procedure unfortunately followed by most MCDM methods, except PROMETHEE and those working with Linear Programming.
4- I agree with what you say about validations.
5 – You say “Classical MCDM problem becomes a well-defined MCDM one, where all scores values for each criterion are between its bounds”
6- “SPOTIS method will provide the best multi-criteria decision-making solution with preference ordering of all alternatives.”
Are you sure it is the best? On what grounds do you assert that?
7- On page 3 you consider the criteria independent of each other. This is a serious drawback, since in most projects the criteria are interrelated. According to this, if you have two criteria like "Speed" and "Fuel consumption", which are interrelated, you can't use SPOTIS? Why not?
8- How do you determine an ideal solution a priori? Based on what? Of course, if this solution is, say, very high, it does not matter what alternative you add, because it will always be above the maximum.
I grant you that it is a very elegant procedure.
9- I don't think it is correct to work with different types of distances in the same problem.
10- Where do the weights come from? Are they subjective or objective?
11 – In page 5 “Once the MCDM is well-defined thanks to the specification of the bounds values of each criteria, the SPOTIS method does not suffer from rank reversal because the evaluation of each alternative is done independently of the others”
I agree 100% with this statement, because I also believe that the only way to avoid RR is to evaluate each alternative independently (a small illustrative sketch of this independent-evaluation idea follows at the end of this post). There is another method that applies this same principle and does not produce RR, but it is not based on distances to a fixed point.
12- In page 7 “It could be argued that the SPOTIS method is more difficult (or risky) to use because of the freedom left in the choice of min and max bounds of the criteria”:
More difficult, risky? I don't think so. It looks like a transparent method that is very easy to understand. In my opinion its only drawback is the use of subjective weights.
Do you have a software for SPOTIS?
I hope my comments may be useful to your paper.
Nolberto Munier
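For readers who want to experiment with the idea discussed in comment 11, evaluating each alternative only against fixed bounds and an ideal point, here is a minimal sketch of the scoring step as I read it from the paper; the matrix, bounds, and weights are invented for illustration, so please check it against the authors' own description before relying on it.

```python
import numpy as np

def spotis_scores(matrix, bounds, weights, criteria_types):
    """Normalised weighted distance of each alternative to the ideal point.

    matrix         : (m alternatives) x (n criteria) performance values
    bounds         : (n, 2) [min, max] per criterion, fixed a priori
    weights        : criterion weights summing to 1
    criteria_types : +1 for benefit (max) criteria, -1 for cost (min) criteria
    Lower score = better alternative. Each row is scored independently of the
    others, which is what makes the ordering immune to rank reversal.
    """
    matrix = np.asarray(matrix, dtype=float)
    bounds = np.asarray(bounds, dtype=float)
    weights = np.asarray(weights, dtype=float)
    ideal = np.where(np.asarray(criteria_types) > 0, bounds[:, 1], bounds[:, 0])
    span = bounds[:, 1] - bounds[:, 0]
    normalised_dist = np.abs(matrix - ideal) / span
    return normalised_dist @ weights

# Invented example: 3 alternatives, 2 criteria (benefit, cost)
matrix = [[7, 300], [9, 450], [6, 250]]
bounds = [[5, 10], [200, 500]]
weights = [0.6, 0.4]
scores = spotis_scores(matrix, bounds, weights, criteria_types=[+1, -1])
print(scores, "-> best alternative index:", int(np.argmin(scores)))
```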
As you probably know, and as has been said in the articles, this method, for example, does not work 100% for other work. For this reason, every researcher should vary the different conditions and variables in their experimental tests in order to reach an optimal and correct method.
I would be happy if you, active researchers in this field, would express your opinion and solution.
Thank you in advance for your kind opinion.
Good luck to you dear researchers.
Robotic surgery is the future. Any opinions or thoughts? I would like to hear everyone's thoughts.
It's just a more general question. I understand that the objectives and methodology must be considered