Science topic

# Intuition - Science topic

Knowing or understanding without conscious use of reasoning. (Thesaurus of ERIC Descriptors, 1994)

Questions related to Intuition

The **Triage Decision Making Inventory (TDMI)** questionnaire has 27 questions covering three dimensions: **cognitive abilities**, **experience**, and **intuition**. In our study we also want to use the 7-question *Acknowledges Using Intuition in Nursing Scale (AUINS)*.

1- Is it possible to use the above two questionnaires simultaneously in the study and perform a correlation measurement, considering that one of the dimensions of the TDMI questionnaire is intuition?

2- If simultaneous use and correlation are not possible, is it possible to remove the intuition dimension of the TDMI questionnaire? And how can this be done with reason and logic?

Some believe that teachers are born naturally as teachers; some think that teachers must be trained. Where do you stand?

Hello everyone,

I am working with IEEE 12 bus system for Wind integration studies. I modelled the system in Simulink and carried out load flow analysis using PowerGui block. I lack the technical understanding of Load Flow analysis as I am relatively new in this field. I have two questions:

1) How can I interpret the results of load flow analysis to assess system stability, and how can I extract information about power flow at individual buses?

2) Using the load flow analysis, can I determine at which buses I can replace the synchronous generation with Wind turbines?

I went through literature but mostly the iterations and mathematical aspects of load flow analysis are discussed and not the general intuition of why we specifically do it or can benefit from it. I would be really grateful if someone can guide me to some good literature regarding this and enlighten me for better understanding.
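Since the general intuition is easier to build on a toy case than on the full IEEE 12-bus system, here is a minimal sketch of what a load-flow solver (including powergui's) iterates on: a hypothetical 2-bus network in per-unit, solved by Gauss-Seidel. The quantities it prints, bus voltage magnitude and slack-bus injection, are exactly what you read off at each bus in a full study.

```python
# Minimal 2-bus Gauss-Seidel load flow (hypothetical data, NOT the IEEE 12-bus case).
# Bus 1: slack bus (voltage fixed). Bus 2: PQ load bus (voltage solved for).

z_line = 0.01 + 0.05j          # line impedance (pu) - illustrative value
y = 1 / z_line                 # line admittance
V1 = 1.0 + 0.0j                # slack bus voltage (pu)
S2 = -(0.8 + 0.4j)             # injected power at bus 2: negative = load (pu)

V2 = 1.0 + 0.0j                # "flat start" initial guess
for _ in range(100):           # Gauss-Seidel iteration on the bus-2 power balance
    # S2* / V2* = Y21*V1 + Y22*V2, with Y21 = -y and Y22 = +y
    V2 = (S2.conjugate() / V2.conjugate() + y * V1) / y

I12 = (V1 - V2) * y            # line current, bus 1 -> bus 2
S1 = V1 * I12.conjugate()      # power leaving the slack bus (covers load + losses)
print(abs(V2), S1)
```

Interpreting the result: bus voltage magnitudes outside roughly 0.95-1.05 pu, or line flows above ratings, flag stressed buses. For the wind-replacement question, the usual workflow is to convert the candidate bus from a PV (voltage-controlled) bus to a PQ injection, rerun the flow, and check the same limits.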

Regards,

Yasir Shamim

I have no mathematics background and learning the reconstruction of species phylogeny.

I understand that the principle of MCMC is Bayesian statistics: deducing the posterior probability from the prior probability and the likelihood of the parameters.

The question is:

What is the intuition behind RJMCMC? How does it differ from conventional MCMC? And when should it be used?

MCMC is well explained on the internet, but what I find about RJMCMC is mostly the mathematical equations.
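For contrast with RJMCMC, here is a minimal plain Metropolis sampler on a fixed-dimension parameter space (a sketch targeting a standard normal "posterior"; everything here is illustrative, not tied to any phylogenetics package):

```python
import random, math

random.seed(0)

def log_target(x):
    # log of an (unnormalized) standard normal density: the "posterior" to sample
    return -0.5 * x * x

x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + random.gauss(0, 1.0)          # symmetric random-walk proposal
    # accept with probability min(1, target(proposal) / target(x))
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # should land near 0 and 1
```

The key difference: plain MCMC moves within one parameter space of fixed dimension. RJMCMC additionally proposes jumps *between* models of different dimension (e.g. trees with different numbers of parameters), and its acceptance ratio carries an extra Jacobian term for the dimension-changing transformation. You use it when the number of parameters is itself unknown and part of the inference.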

Hello,

I'm trying to model a transient heating process in COMSOL with a thin metal film on a polymer substrate. I'm trying to gain intuition into the mechanical and thermal properties as a function of temperature. In particular, how do the heat capacity and Young's modulus of polyimide change with temperature, especially near or above the glass transition temperature?

Regards,

Adam

How Researchers come up with new catalytic material for HER? Is there any method to predict the catalytic activity of a material other than computational techniques? Especially all those new complex materials, are they results of digging computational data, intuitions, or serendipity?

Since I am from a physics background, I can understand physical treatment and fabrication methods of nanoparticle synthesis like arc discharge or lithography. But recently I have undertaken sol-gel and hydrothermal synthesis of metal oxide nanoparticles. Let me show you an FE-SEM image of WO3 nanoparticles, synthesized using the sol-gel technique. If the precursor isn't a typical metal alkoxide, the literature offers little guidance on the subsequent reaction.

For instance, I have used sodium tungstate (Na2WO4.2H2O) as a precursor. Now the acid would have a specific role, so would the capping agents, and so would the rate of rotation in magnetic stirring. Even if I somehow figure out the chemical reaction, one particular variable could have been highly significant in causing the structure to form while others were not. How do I develop a chemical intuition regarding the result? How can I reason about the structure that forms if I know the chemicals used? Is there perhaps a direct correlation between stoichiometry and structure? Kindly suggest some literature that can help me.

I just need the basic intuition for how to arrive at this particular equation.

I am adapting a scale (the TIntS scale from Pretz, 2014; 23 items) into my native language. After following the steps in Gudmundsson, E. (2009), Guidelines for translating and adapting psychological instruments, *Nordic Psychology*, *61*(2), 29-45, I ran a factor analysis in SPSS (principal components, Direct Oblimin rotation). The problem is that 3 items from one subscale (Inferential Intuition) loaded on another subscale (Affective Intuition) (0.46, 0.56, 0.64 respectively) and did not load on the Inferential scale (>0.40), and I do not know how to explain that or why it is the case.

Undergrad learning Patch Clamping here. I was hoping I could imitate an oscilloscope using Clampex. My intuition tells me I have to change the connections on the Digitizer but I am unsure. Anyone who can help, you are greatly appreciated. Thank you, have a great day.

I have a question: I am struggling to find intuition and intrinsic motivation for why we study Riemann problems. I understand a bit of the application to shock waves and so on, but can anyone give me a more poetic and humane application? How can it help the well-being of people? My question might be weird, but any comment or inspiration is appreciated.

Thanks

Most MCDM problems use subjective weights mainly obtained from AHP.

These weights are developed from preferences, quantified using a dubious table (in the opinion of many researchers), and have been criticized for decades.

They are also obtained under the aggravating circumstance that the DM works only with criteria without considering the different projects, WHICH THEY MUST EVALUATE, according to AHP hierarchy.

That is, in comparing, say environment and disposable income, the DM decides, by intuition, that the first is more important than the second, and assigns a numerical value to that preference, using values of the above-mentioned table.

Then, this assumed weight, for it is not a weight, but a trade-off value, is used to select alternatives. On what grounds? None given.

Consequently, the DM decides that said preference is valid for everything in life, since he does not have any reference, other than a single objective, where this preference will be used.

Not too much thinking is needed to conclude that this process is invalid, because that preference may apply well to a certain project but not in others, or even in comparing alternatives within a scenario.

Now, the DM, using mathematical methods (the eigenvalue method, or the geometric mean), determines priorities for each criterion - after his estimates are subjected to the verdict of a formula - because the method demands that his estimates MUST be consistent, that is, meet transitivity, within a 10% tolerance.

That is that if criterion A = 3 times more important than criterion B, or 3B, and B =2C, then A=6C.
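The mechanics under discussion can be made concrete. For a 3x3 pairwise comparison matrix, AHP takes the priorities from the principal eigenvector, and Saaty's consistency ratio quantifies the transitivity requirement. A sketch with the perfectly transitive judgments above (A = 3B, B = 2C, hence A = 6C):

```python
import numpy as np

# Perfectly transitive pairwise judgments: A = 3B, B = 2C, hence A = 6C
A = np.array([[1,   3,   6],
              [1/3, 1,   2],
              [1/6, 1/2, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority vector

n = A.shape[0]
lam_max = eigvals[k].real
CI = (lam_max - n) / (n - 1)             # Saaty's consistency index
RI = 0.58                                # random index for n = 3 (Saaty's table)
CR = CI / RI                             # consistency ratio; AHP demands CR < 0.10
print(w, CR)
```

For this consistent matrix the priorities come out as (6, 2, 1)/9 and CR = 0. Replacing the 6 by, say, 4 makes the judgments intransitive and CR positive; that positive CR being compared against the 10% tolerance is exactly the step this question challenges.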

Now, one wonders why his estimates must be transitive?

Nobody knows, but what is worse, is that the AHP method assumes that said transitivity MUST also be satisfied by the real-world problem, not considering that in general, the world is intransitive.

And now the question:

**On what ground, on what theory, on what relationship is it assumed that those preferences from a DM or from a group of DMs, are valid for the real world?**

As a matter of fact, a very well-known theorem, the ‘Arrow’s Impossibility Theorem’, says the opposite.

I have posted several times this question. Is there anybody that can give a RATIONAL answer?

Your response and discussion will be greatly appreciated.

Thank you

Nolberto Munier

Some people are not impressed by the development of intuitive near-optimal closed-form solutions to some business problems because the exact optimal solutions can be obtained using a spreadsheet solver. The objective functions do not lead to exact closed-form optimal solutions. The approximate closed-form optimal solutions are very intuitive from a business perspective. My argument is that Little's Law is used to estimate the average WIP levels when you know the average throughput rate and the average cycle time, and it is applied in many different contexts. Of course, you can model all of the complexities of the shop floor and make this calculation more accurate. Aren't we better off if we can come up with some simple and intuitive equations that fit many business scenarios? Solving to exact optimum is in fact not reliable either, because the parameters are not quite precise in the first place.
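Little's Law itself is a one-line computation, which is the point of the argument for simple intuitive equations (hypothetical shop-floor numbers):

```python
def little_wip(throughput_per_hour: float, cycle_time_hours: float) -> float:
    """Little's Law: average WIP = average throughput x average cycle time."""
    return throughput_per_hour * cycle_time_hours

# hypothetical shop-floor numbers: 12 units/hour, 2.5 hours in the system
wip = little_wip(throughput_per_hour=12.0, cycle_time_hours=2.5)
print(wip)  # 30.0 units in process, on average
```

No optimization, no solver, and it holds regardless of the arrival process or service discipline, which is why it generalizes across business scenarios where the exact parameters are imprecise anyway.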

I tried many sources but I could not understand how exactly encoding and decoding work, i.e. how the data is compressed: which inputs get combined, and how the values/features get updated. Could anyone explain in detail with an example?
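Assuming the question is about autoencoders, a minimal numpy sketch shows the whole loop in one place: the encoder combines *all* inputs into a smaller code, the decoder reconstructs the inputs from that code, and the weights are updated by gradient descent on the reconstruction error (toy data and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 200 samples of 4 features that really live on a 2-D subspace
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 4))

W_enc = rng.normal(scale=0.5, size=(4, 2))   # encoder weights: 4 inputs -> 2-D code
W_dec = rng.normal(scale=0.5, size=(2, 4))   # decoder weights: 2-D code -> 4 outputs
lr = 0.05

def forward(X):
    code = X @ W_enc        # ENCODE: each code unit is a weighted mix of ALL inputs
    X_hat = code @ W_dec    # DECODE: reconstruct all 4 features from the 2-D code
    return code, X_hat, ((X_hat - X) ** 2).mean()

_, _, loss_before = forward(X)
for _ in range(3000):
    code, X_hat, _ = forward(X)
    grad_out = 2.0 * (X_hat - X) / X.size        # d(loss)/d(reconstruction)
    W_dec -= lr * code.T @ grad_out              # update decoder weights
    W_enc -= lr * X.T @ (grad_out @ W_dec.T)     # update encoder weights (chain rule)

_, _, loss_after = forward(X)
print(loss_before, loss_after)   # reconstruction error drops during training
```

The "compression" is simply that 4 numbers are forced through 2; training makes those 2 numbers the mix of inputs that loses the least information, here the 2-D subspace the data actually lives on. Real autoencoders add nonlinearities and more layers, but the encode/decode/update loop is the same.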

Thank you

There are two plates with a horizontal center crack, one with crack length *a* and the other with *a + da*. The bottoms are fixed while the tops are under the same displacement load. For intuition, the second configuration results from the first by crack propagation. Assume the load is applied first and then the crack propagates. During this procedure there is no extra external work, but energy dissipates in the second configuration. If the displacement is fixed, the system should need less force to propagate the crack in the second configuration. That is, the second configuration is more stable, so it should have a lower stress intensity factor and lower stress concentration near the crack tip. Also, since the total stiffness decreases with the longer crack, the reaction force should also decrease. But in the numerical simulation, the stress concentration and the reaction force on the boundary are both higher in the second configuration. Both models have the same geometry, mesh, and load, except for the crack length. The crack is simulated by decoupling the corresponding node pairs of two patches.
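One way to check the intuition is the compliance method: under fixed displacement u, the reaction force is P = u/C(a), and the energy release rate (per unit thickness) is G = (P²/2)·dC/da. A longer crack gives a lower reaction force, yet G (and hence K) can still increase if dC/da grows faster than P² falls. A numeric sketch with a hypothetical compliance law:

```python
# Compliance method under fixed displacement (hypothetical C(a), per unit thickness).
def C(a):
    return 1.0 + 50.0 * a**3        # compliance grows rapidly with crack length

def dCda(a, h=1e-6):
    return (C(a + h) - C(a - h)) / (2 * h)   # central-difference derivative

u = 1.0                              # fixed applied displacement
results = {}
for a in (0.10, 0.11):               # crack lengths a and a + da
    P = u / C(a)                     # reaction force drops as compliance grows
    G = 0.5 * P**2 * dCda(a)         # energy release rate
    results[a] = (P, G)
    print(f"a={a:.2f}  P={P:.4f}  G={G:.4f}")
```

With this C(a), P falls with crack length while G rises, so a higher stress intensity with a lower reaction force is not contradictory. A *higher* reaction force for the longer crack, as in the simulation described, is what the compliance argument rules out, which suggests checking the model (e.g. whether the node decoupling actually opens the intended crack faces).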

How do we really perceive time? As frames, which we then assimilate or merge like an editor? Or in a continuous way?

Generally, there are three theories or models about time and consciousness:

1. the **Cinematic** model: we perceive time as separated frames;

2. the **Retentional** model, based on the recent past;

3. the **Extensional** model, based on chunks.

If we accept that time perception is a type of consciousness, then we must say which model gives the best explanation and is compatible with our common sense.

How do we sense time in our intuition? Continuous or discrete?

Possibly because I work a lot with ocean-deployed devices, I have a lot of encounters with potting and marinizing using epoxies and silicones. I usually work on intuition and the word of vendors, but I wondered if there were any applied books out there describing the various categories of plastics and epoxies and their properties. A brief look at Amazon did not reveal anything.

Thanks!

Fritz

Hello everyone,

I am dusting off my knowledge of adaptive control for my research. I came across a concept that I did not quite get back then and that is still elusive to me: persistence of excitation. After looking in many places, I always find the expression for the condition with the integral of the regressor matrix upper and lower bounded. However, I do not get the intuition behind its meaning, or why the persistence of excitation condition is formulated like that. Any comments on this will be greatly appreciated.
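The intuition is that the regressor must keep "pointing in all directions": the windowed Gram matrix ∫ₜ^{t+T} φ(τ)φ(τ)ᵀ dτ must have its smallest eigenvalue bounded away from zero, otherwise some parameter combination is never excited and the adaptation law cannot identify it. A numeric sketch (assumed 2-D regressor, window T = 2π) contrasting an exciting and a non-exciting signal:

```python
import numpy as np

T, n = 2 * np.pi, 2000
ts = np.linspace(0.0, T, n)
dt = T / n

def gram(phi):
    # windowed Gram matrix: integral over [0, T] of phi(t) phi(t)^T dt
    M = np.zeros((2, 2))
    for t in ts:
        v = phi(t).reshape(2, 1)
        M += v @ v.T * dt
    return M

pe     = lambda t: np.array([np.sin(t), np.cos(t)])  # persistently exciting
not_pe = lambda t: np.array([1.0, 1.0])              # constant: only one direction

M1, M2 = gram(pe), gram(not_pe)
print(np.linalg.eigvalsh(M1))   # both eigenvalues near pi: all directions excited
print(np.linalg.eigvalsh(M2))   # one eigenvalue ~0: a direction is never excited
```

For the constant regressor the Gram matrix is rank-deficient no matter how long the window, so the lower bound in the PE condition can never hold: the error signal simply contains no information about the parameter combination along the null direction.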

Thanks in advance!

Currently writing a paper about Hegel with particular reference to his 'Phenomenology of Spirit'. Just wondering if anyone could point me in the right direction with this? Really appreciated.

In medical practice, there is such an important thing as medical intuition. When a patient visits a doctor, the doctor just looks and literally asks a couple of questions and he already knows what is going on and what disease a patient may have.

Let’s look at this from a data analysis perspective. So, what is medical intuition? It is like a doctor using his built-in neural network. Having analyzed a sufficient number of cases, a doctor is able, consciously or not, to identify some additional factors that help him narrow down the solution space. And further, he comes to hypotheses, which he considers a possible diagnosis.

The typical approach to designing flight control systems is to design an inner control loop that stabilizes vehicle attitude and angular rates, while an outer loop tracks vehicle position. The reason given is that the attitude dynamics are "faster" than the translational dynamics. The theory of time-scale separation in dynamic systems seems to provide an answer, but what is the physical intuition behind this argument? Also, is it possible to design a controller that does not consist of multiple cascading loops?
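The time-scale argument can be seen in a toy simulation: a double integrator (position ← velocity ← commanded acceleration) with a fast inner rate loop and a slow outer position loop. Gains and model are purely illustrative, not a flight-ready design:

```python
# Cascaded P-control of a double integrator: the outer loop commands velocity,
# the inner loop commands acceleration. Inner gain >> outer gain enforces the
# time-scale separation in question.
kp_outer, kp_inner = 1.0, 10.0      # inner loop roughly 10x faster than outer
dt, x_ref = 0.001, 1.0
x, v = 0.0, 0.0

for _ in range(10_000):             # simulate 10 s with explicit Euler steps
    v_cmd = kp_outer * (x_ref - x)  # outer loop: position error -> velocity command
    a = kp_inner * (v_cmd - v)      # inner loop: velocity error -> acceleration
    v += a * dt
    x += v * dt

print(x)   # settles close to the reference position 1.0
```

Because the inner loop settles an order of magnitude faster, the outer loop can treat velocity as if it tracked v_cmd instantly, so each loop is designed against a simple first-order-looking plant. A single non-cascaded controller is certainly possible (e.g. full-state feedback on the combined dynamics), but the cascade lets each loop be tuned, limited, and saturated separately, which is the practical reason it dominates in flight control.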

Hello,

a seemingly simple design question: The aim is to visualize the dependence of A and B by connecting A and B by a straight line (possibly with a label). The design options are: line type, line strength, text or symbolic label.

How would you visualize the "significance" and/or "strength" of the dependence?

Details:

- A and B are either independent (no line) or dependent. They are considered dependent if the likelihood of being independent (the p-value / "significance") is small (which corresponds in each setting to a certain value of a test statistic).

- The "strength" of dependence of A and B might be given on a scale, e.g. [-1,1] if one considers classical correlation.

(The use of colour is a further design option, which breaks down in black and white print. Therefore it was excluded.)

### all below can be skipped, it provides only further details for the reader interested in the background of the question ###

The detection of dependence and its quantification are usually separate procedures, thus a mixture of both might be confusing...

Background:

Apart from many other new contributions the paper arXiv:1712.06532

introduces a visualization scheme for higher order dependencies (including consistent estimators for the dependence structure).

Based on feedback there seems to be a tendency to interpret the method/visualization by a wrong intuition (rather than by its description given in the paper)... so I wonder if this can be moderated by an improved visualization.

If you want to test your intuition, use in R:

```r
install.packages("multivariance")
library(multivariance)

# perform dependence structure detections on sample datasets
dependence.structure(dep_struct_several_26_100, alpha = 0.001)
dependence.structure(dep_struct_star_9_100, alpha = 0.01)
dependence.structure(dep_struct_ring_15_100, alpha = 0.01)
```

The current visualization does NOT include the "strength" of dependence, but that's what some seem to believe to see.

The paper is concerned with dependencies of higher order, thus it is beyond the simple initial example of this question. But still, it depicts dependencies by lines and uses as a label usually the value of the test statistic. Redundancy is introduced by using colour, line type and in certain cases also the label to denote the order of dependence.

It seems that using the value of the test statistic as label causes irritation. The fastest detection method is based on conservative tests, in this setting there is a one-to-one correspondence (independent of sample sizes and marginal distributions) between the value of the test statistic and the p-value - thus it provides a very reasonable label (for the educated user). In general the value of the test statistic gives only a rough indication of the significance.

A further comment to the distinction between "significance" and "strength": In the paper also several variants of correlation-like measures are introduced, which are just scaled version of the test statistics. Thus (for a fixed sample size and fixed marginals) there is also a one-to-one correspondence between the "strength" and the conservative "significance". These measures also satisfy certain dependence measure axioms. But one should keep in mind that these axioms are not sufficient to provide a sensible interpretation of different (or identical) values of the "strength" in general (e.g., when varying the marginal distributions). ... that's why currently all methods are based on "significance".

I ask for specific references to publications

Are there categories that can guide us in choosing the relevant themes? Does this contribute in any way to increasing the impact of our publications? Or should we follow our intuition?

(If you think this issue may be important, please recommend it to broaden the scope of the discussion)

Some theorems can reach a conclusion when the derivative of some real-valued function of a real variable is continuous, and some other theorems can reach a conclusion if the derivative merely exists. So, I guess that existence and continuity are not equivalent statements. But thinking about the definition of the derivative, my intuition is telling me that existence implies continuity. Or, equivalently, discontinuity implies nonexistence. My intuition visualizes a discontinuous derivative as a sharp corner in a curve. But a sharp corner looks to me like an undefined derivative at the corner. Am I wrong? I think I am wrong, because the literature makes a distinction between existence and continuity, but I don't understand how I can be wrong. I am looking for an example of a function that is differentiable at every point in some interval but with the derivative discontinuous at some point in that interval. Can you give an example?
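A standard example (the textbook counterexample; stated here as a sketch for the reader to verify) is

```latex
f(x) =
\begin{cases}
x^{2}\sin(1/x), & x \neq 0,\\
0, & x = 0.
\end{cases}
```

At the origin, f'(0) = lim_{h→0} [f(h) − f(0)]/h = lim_{h→0} h·sin(1/h) = 0, so the derivative exists everywhere. But for x ≠ 0,

```latex
f'(x) = 2x\sin(1/x) - \cos(1/x),
```

and the −cos(1/x) term oscillates between −1 and 1 without settling as x → 0, so f' has no limit at 0: the derivative exists at every point of the interval yet is discontinuous at one point. The "sharp corner" picture is right in one sense, though: by Darboux's theorem a derivative can never have a jump discontinuity, only an oscillating one like this.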

Referring to standard textbooks, one finds frequent appeals to readers’ intuition, when fundamental theorems are proved or basic definitions are explained.

What exactly is intuition in such an abstract environment as topology, and how can it be developed?

-Mathematics is known to every scientist: it is a synthesis of geometry and arithmetic.

-Metaphysics is generally (because here and there, there are misconceptions, e.g. with 'intuition' which remains unexplained) known as worldview.

-Abstraction is mathematically explicitly unexplained (that is the why of my question): it is generally a shorthand, an abbreviation, or compression, by which you can think about thinking itself, a sort of 'meta-level'.

-Metaphysical abstraction is only generally known as 'imagination' or 'intuition' (ascribed to humanities and arts).

Thanks for your answers! Marc.

The Praecox-feeling is experiencing a resurgence in psychiatry, or, better said, a conceptual reconstruction. Several authors (the first was Henricus Cornelius Rümke in 1941) claim that it is a type of intuitive recognition that does not compete with diagnostic categories and bundles of symptoms, but which detects the presence of schizophrenia in a complex phenomenological way, not in the subject itself - patient or his or her psychiatrist - but in the mutual intersubjective space, which, by the way, resembles transference phenomena. However, in the concept of Praecox-Gefühl the term "feeling" is replaced (as Rümke already did) by the term "experience" ("Praecox-Erlebnis", not "Erfahrung").

Does it now make sense to open the chapters of intuitive diagnostics, at a time of psychiatry well equipped with operational criteria?

In the non-mathematical sciences, what I have observed is that people usually apply existing material and methodology to a different location, species, scenario, or set of conditions (other types of research methods may exist; I don't have full knowledge of them). In the mathematical sciences, researchers either apply existing methods (with possible modifications) to different problems or extend existing general results (again, there may be others). So a natural question arises: what is good research? I guess an intuitive way of thinking would be better than the above?

Hello there,

not sure if I'll reach anyone through this platform, i feel kind of lonely hahah. Currently working on my bachelor thesis and having several mental breakdowns thanks to the methodology chapter. since i'm not a big fan of statistics, I haven't really thought about my methodology choice a lot. I decided it based on intuition and "easy" methods i knew from previous projects (Likert scale). It turns out that this was not such a good idea... :-)

So my aim was to find out the impact of citymarketing events on the local retail sector. Do these events (town fair, open for business sunday...) benefit the retailers? Do they have a positive impact? Do they gain more customers? Do they make more profit?

Unfortunately, the retailers didn't give out real numbers such as profit, revenue, ...

because of that i let the local retailers rate some statements with a likert scale. For example, "the revenue on the day of the event is higher than on a regular business day" or "on the day of the event, more customers are in my store". Then they rated it from 1-6 (totally agree - don't agree at all).

My problem is now, that i can't find any experts /researchers saying this is the right method to answer the research question. which is obvious, because i haven't thought about it previously.

But i thought, maybe somebody here can help me or find somebody that said something that supports my stupid choice of methodology?

don't really know what to do now. Its too little time to change everything and i have to work with what i've got.

This thesis is giving me grey hair and i am thankful for every comment that brings me a step further.

thanks and love, eva

In physics, many problems arise in the form of boundary value problems in second order ordinary differential equations. We are discussing here the Matrix Variational Method, as an efficient approach to bound state eigenproblems [1--8], proposed by the author starting in 1977 and used in top peer-reviewed literature in physics, in general [1,6,7] and in the calculation of scaling laws of Rydberg atoms [2], bound states of QM systems [3], bound states of three quarks [4,8], and other areas, such as [5].

We will also use this RG space in a new way, to conduct an open course, as a discussion. This course is physically offered at the same time, in Pasadena, CA.

The objective here is to present the topic as a method in mathematics, for second-year students in college, generally when they see differential equations, not just the epsilons and deltas of calculus, but the more advanced tools and intuition used in physics and maths.

This discussion will aim, as much as possible, to be free of the original connection to physics, in order to be more easily used in other disciplines. It represents the “translation” of a method in physics to mathematics, for general use, while benefiting from the physical intuition that started it.

We will use the theorem that says, “Any second order linear operator can be put into the form of the Sturm-Liouville operator,” and treat the Sturm-Liouville operator in closed-form. This will be done not by using eigenfunctions of any expansion, but an expansion that already obeys the boundary conditions for each case and provides a closed-form expression, which we will calculate following [1-8].
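The reduction that theorem promises can be sketched in two lines (the standard integrating-factor argument, assuming a(x) ≠ 0 on the interval). Start from the general operator

```latex
a(x)y'' + b(x)y' + c(x)y = f(x),
\qquad
p(x) = \exp\!\left(\int \frac{b(x)}{a(x)}\,dx\right),
\qquad
\mu(x) = \frac{p(x)}{a(x)}.
```

Multiplying the equation by μ and using p' = (b/a)p gives the Sturm-Liouville form

```latex
\frac{d}{dx}\!\left(p(x)\,y'\right) + \mu(x)\,c(x)\,y = \mu(x)\,f(x),
```

since (p y')' = p y'' + (b/a)p y' = μ(a y'' + b y').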

Contributions, and other examples, are welcome.

REFERENCES

[1] Ed Gerck, A. B. d'Oliveira, Matrix-Variational Method: An Efficient Approach to Bound State Eigenproblems, Report number: EAV-12/78, Laboratorio de Estudos Avancados, IAE, CTA, S. J. Campos, SP, Brazil. Copy online at https://www.researchgate.net/publication/286625459_Matrix-Variational_Method_An_Efficient_Approach_to_Bound_State_Eigenproblems

[2] Jason A C Gallas, Ed Gerck, Robert F O'Connell, Scaling Laws for Rydberg Atoms in Magnetic Fields, Physical Review Letters 50(5):324-327, Jan 1983. Copy online at

https://www.researchgate.net/publication/243470610_Scaling_Laws_for_Rydberg_Atoms_in_Magnetic_Fields

[3] Ed Gerck, Jason A C Gallas, Augusto. B. d'Oliveira, Solution of the Schrödinger equation for bound states in closed form, Physical Review A 26:1(1), June 1982. Copy online at

[4] A. B. d'Oliveira, H. F. de Carvalho, Ed Gerck, Heavy baryons as bound states of three quarks, Lettere al Nuovo Cimento 38(1):27-32, Sep 1983. Copy online at

[5] Ed Gerck, A. B. d'Oliveira, The non-relativistic three-body problem with potential of the form K1r^n + K2/r + C, Report number: EAV-11/78, Laboratorio de Estudos Avancados, IAE, CTA, S. J. Campos, SP, Brazil, Nov 1978. Copy online at

[6] Ed Gerck, Augusto Brandão d'Oliveira, Continued fraction calculation of the eigenvalues of tridiagonal matrices arising from the Schroedinger equation, Journal of Computational and Applied Mathematics 6(1):81-82, Mar 1980. Copy online at

[7] Ed Gerck, A. B. d'Oliveira, Jason A C Gallas, New Approach to Calculate Bound State Eigenvalues, Revista Brasileira de Ensino de Física, 13(1):183-300, Jan 1983. Copy online at

[8] Ed Gerck, A. B. d'Oliveira, The logarithmic and the square-root potential as confining potentials for quarks, Report number: EAV Report 02/79, Laboratorio de Estudos Avancados, IAE, CTA, S. J. Campos, SP, Brazil. Copy online at


There are intuitions that we value as the grammatical intuition of a child who, without having a lot of grammatical knowledge, achieves an adequate use of the language. But what about statistical, political intuition or the controversial aesthetic intuition? This is a topic that touches epistemology, philosophy and psychology.
I will hold your comments in high regard.

I found this CO2 Incubator for tissue culture and I've never seen one that wasn't glass-like polished shiny steel on the inside.

See the attached pictures. The inside looks like an aged copper statue.

My intuition is that this is not suitable for maintaining sterile mammalian cells, but I would like to get others' opinions.

Is there a systematic way of approaching this problem or are most researchers using guess & test or intuition?

I have experience in teaching in Europe (France and Ireland) and in the US. The ways of teaching physics differ quite a bit. In the US, calculus is not required; therefore, a physics teacher has to teach physics using algebra. In France, mathematics courses incorporate calculus, even basic calculus, and physics courses also use some "maths tools" like derivatives, which makes the physics course a mixture of conceptual physics and calculus-based physics. Some French teachers might say that the level has dropped significantly; maybe, but this is still another, much more calculus-based approach. Landau's famous textbooks use calculus, while Feynman's lectures are more intuition-based (even though one cannot deny that calculus is also part of the lectures, at a second level of reading, I agree). Some teachers say we don't need calculus as one can "feel" the concepts. Others say that we should use calculus to solve problems, as one uses a screwdriver for DIY at home. I think this is an interesting debate (less controversial, or maybe not), and as usual your opinions are more than welcome!

I would like to know why a point defect will not influence density. Please answer with close reference to a proper diagram. My intuition is that it is a localized affair, but I would be happy if anyone could shed more light on this.

Can **non-native English speakers** (who are of course applied linguists) rate the appropriateness of EFL learners' speech act production elicited through role-plays and Discourse Completion Tasks (DCT)? Would it be acceptable in Interlanguage Pragmatics (ILP) research where recruiting native speaker raters is not practical?

I'm doing some quantitative comparisons of microbial community composition. I have two datasets (one is my own, one is a published dataset), and the two sets were sequenced with two different primers. My samples were sequenced from primers targeting the V4 region (515F/806R) of 16S rRNA, and the public dataset was from sequencing with primers targeting parts of the V3-V4 region (341F/805R). I've come across a few papers (Tremblay 2015 is a nice one) that talk about inherent primer bias and reveal biases in datasets from primers targeting more distant regions (V4 vs. V7-V8, for example).

I'm wondering if it's sound to just cut the extra ~200bp from the public dataset's sequences so they align to my own and I can pick OTUs from the whole thing at once. My intuition is that since the targeted regions overlap so much, primer bias should be minimal above the standard baseline, but I wonder if anyone else with experience in this area would have some ideas!
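The trimming step itself is mechanical: locate the 515F site in each V3-V4 read and keep only the sequence from there on, so both datasets cover the same region before OTU picking. A minimal sketch (the 515F sequence below is assumed to be the Parada/EMP version, GTGYCAGCMGCCGCGGTAA; check your dataset's actual primers, and note real pipelines also handle mismatches and reverse complements):

```python
import re

# 515F (Parada) with IUPAC degeneracies expanded: Y = C/T, M = A/C
PRIMER_515F = "GTG[CT]CAGC[AC]GCCGCGGTAA"
pat = re.compile(PRIMER_515F)

def trim_to_v4(seq):
    """Drop everything upstream of (and including) the 515F primer site."""
    m = pat.search(seq.upper())
    return seq[m.end():] if m else None   # None: primer not found, discard read

# toy read: hypothetical V3 bases, then the 515F site, then "V4" bases
read = "ACGTACGTACGT" + "GTGTCAGCAGCCGCGGTAA" + "TTACCGGGTT"
print(trim_to_v4(read))   # -> "TTACCGGGTT"
```

Tools like cutadapt do this robustly (error tolerance, anchoring), but the sketch shows why the approach is reasonable when the regions nest: after trimming, both datasets describe the identical stretch of the gene, leaving only the amplification-stage primer bias you mention.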

Thanks!

Several instruments have been developed to measure intuition, including: Himaya's HINTS, Miller's MII, Pacini & Epstein's REI, Pretz & Foltz' TIntS, Rew's AUINS, Smith's SIINS. But I found out that they measure the preference of using intuition not intuition itself.

So if we want to measure the extent of intuition that professionals possess, what methods and means can we use?

I feel uncomfortable working with truncated distributions at their truncation point. A truncated probability distribution can be very useful in modelling populations that are known to be finite for working with the bulk of the model. If there is very little data beyond a point, truncation is a simple and effective way to deal with the "finiteness" of the distribution. But a jump discontinuity of a density function to zero does not sound right. In fact, why would a jump discontinuity (jd) to zero be more justifiable than any other jd in a density function.

I know the basics of entropy, and something tells me that a (sharp) truncation in a density function gives low entropy when selecting a model. I don't know whether the Akaike criterion would favour a smooth yet nonanalytic function over a truncated distribution, or whether discontinuities have any effect on the Akaike information criterion at all (truncation does add parameters).

Also, from a formal point of view, a truncated distribution does consist of two separate analytic functions: the "support", and the part above or below which it is zero.

I am stating what I think the answer is, but is a truncated distribution not a bad choice for a model where the region of truncation is of importance? And is my suspicion correct that entropy maximization does not favor truncation?
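The jump in question is easy to exhibit numerically. For a standard normal truncated to [a, b], the renormalized density is strictly positive just inside b and exactly zero beyond it (plain-math sketch, no scipy):

```python
import math

def phi(x):                      # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):                      # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def trunc_pdf(x, a=-1.0, b=1.0):
    if x < a or x > b:
        return 0.0
    return phi(x) / (Phi(b) - Phi(a))   # renormalize the mass on [a, b] to 1

eps = 1e-9
print(trunc_pdf(1.0 - eps), trunc_pdf(1.0 + eps))  # positive vs. zero: the jump
```

A smooth alternative with the same finite support is to taper the density to zero at the boundary (e.g. multiply by a smooth bump function and renormalize), which removes the discontinuity at the cost of extra shape parameters, exactly the trade-off an information criterion like AIC would then arbitrate.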

"The only real, valuable thing is intuition" A. Einstein

"...trust... your gut instincts. If you feel something is wrong, it usually is" Anonymous

And I am sure that even Popper would agree that you should pay attention to your intuition as it relates to something being WRONG.

A recent article on attitudes and learning describes a weak relationship between attitudes and learning in Physics. What other research is there that shows a relationship between attitudes and motivation related to learning outcomes?

Our intuition tells us that the result of the study above is surprising, so what else is there?

As a mathematician trying to understand how the signal-to-noise ratio works in digital signal processing, I have the following observation:

A signal is recorded; suppose I recorded a class lecture. When I load this recording into audio software that shows the recorded sound waves over time, I can determine the amplitude of the teacher's spoken voice and the amplitude of the (static classroom) noise when the teacher is silent for some time. Suppose my recording indicates that the sound level is 50 dB when my teacher speaks and 20 dB when he is silent. For a signal-to-noise ratio I would instinctively divide 50 by 20, obtaining a ratio of 2.5. Or, perhaps more instinctively, the noise is 40% of the total incoming sound (noise-to-signal). Is my intuition failing me because the dB scale is not linear?

From one source I read that I could compute the signal-to-noise ratio as [Teacher+Noise in dB] - [Noise in dB] = [Signal-to-Noise in dB], which would give a 30 dB signal-to-noise ratio in the example above. Can anyone confirm whether this is correct?
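The source's subtraction rule is right, and the reason is exactly the logarithmic scale: a dB level is 10·log10(P/P_ref), so a ratio of powers on the linear scale becomes a difference on the dB scale. A small sketch of the arithmetic, using the 50/20 dB figures from the question and treating them as power levels:

```python
speech_db = 50.0   # level while the teacher speaks (signal + noise)
noise_db = 20.0    # level while the teacher is silent (noise only)

# Since level_dB = 10*log10(P / P_ref), a ratio of powers corresponds
# to a difference of dB values:
snr_db = speech_db - noise_db           # 30.0 dB
linear_power_ratio = 10 ** (snr_db / 10)  # 1000x on the linear power scale

print(snr_db, linear_power_ratio)
```

Dividing 50 by 20 mixes the logarithmic and linear scales, which is why it gives the misleading 2.5. Strictly, the 50 dB measurement contains both signal and noise power, so the exact SNR would subtract the noise power on the linear scale first; at a 1000:1 ratio that correction is negligible.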

Do you have formal models of Intuition and Image Thinking? Thank you, Yurii

It seems they may actually be somewhat interdependent. For example, highly inharmonic combinations of tones are likely to yield the same sensation of dissonance that is often used to explain roughness with an example.

Forgive me if this question is already answered in a published paper. If it is, a reference is sufficient. A verbal description would be most helpful to me, but of course I can also take the time to decipher a formal mathematical one as well.

Positivism assumes that the social world exists objectively and should therefore be measured using objective methods rather than subjective ones such as intuition.

We all know that some foods (cheese, natto, durian, stinky tofu) smell quite disgusting but have a great taste. Can anyone clarify the chemical basis for this phenomenon? My intuition is that it has something to do with the volatility of the different aromas (the bad ones are more volatile, so they are smelled first, while the good ones aren't perceived until the food is chewed).

My goal, for a university project, is to manipulate aroma additives to produce a food that smells disgusting (sulfuric, decaying) but has a normal and pleasant taste (raspberry, chocolate, etc.).

In his model of selective attention, Broadbent presents the dichotic listening task, in which the participant's task is to pay attention to, or shadow, one channel.

After listening, the participant recalls the sounds heard earlier, for both the shadowed and the unattended channels.

Please explain.

My greetings, Ahmed

Is this even a meaningful question, in the sense of having specific parameters and scope that define it? What are its underlying assumptions, or, better, what questions must first be asked and answered before even the first question can be answered?

The test should consist of 3 dimensions; one for measuring analytical intelligence; one for measuring creative intelligence; and one for measuring practical intelligence.

I came across a piece of literature suggesting that the characteristic length for wind turbine calculations should be the rotor diameter, but my intuition suggests that it should be the chord length of the blade at the given radius. Please help.

I am interested in identifying tendencies of reasoning among students of design in order to prepare better exercises and tasks for my courses. Any suggestion may help.

What does it take to produce phenomenal inventions? What must researchers have in their minds?

Dear Researchers,

I'm currently conducting a survey on the intuitive understanding of the complexity of objects, and I would like to ask you for 8 minutes of your time to answer a few very simple questions from the survey[1]. This will hopefully help to better understand and characterise the complexity of objects. The survey does not gather or store any kind of personal information, and it does not require any prior knowledge or skills. Your help is greatly appreciated.

With kind regards,
Felix Baumann

decision making, motor skills, brain, neurosciences

In behavioral economics, a "risk owner" is clearly defined as a person or a decision maker.

But in financing and investment theory, risk ownership is less clear. In a corporate setting, who owns the risk of a large project? Is it the project manager? The CFO? The CEO? The shareholders?

Does anyone know of literature examining risk ownership, or can you provide definitions of a "risk owner"? Alternatively, a source on how the responsibility is shared among the stakeholders would be appreciated.

For example, if a project has reserve capital, is that then to be controlled by the "risk owner"? And can you be a risk owner without access to reserve capital?

I find myself starting to use "risk owner" in my work and publications, but I must admit the use is intuitive rather than based on an accepted definition, and I would happily adapt to an established one.

In the sense of science, phenomenal concepts are discernible and reducible. So, are these extraordinary events, i.e., extrasensory phenomena like intuition, explainable? Intuitions may be considered perceptions of inexplicable modalities, ones that cannot be explicitly observed.

Quite often some myths turn out to be facts on account of scientific truths brought forth. But what about when something that occurs, yet cannot be accounted for by the tools of scientific reasoning, turns into a myth? An astute scientific observer's job is to investigate these myths and turn them into observable phenomena (to physicalize them). Note that something incompatible with the truth of physicalism is a mystery, unless proved otherwise. Yet there is nothing mystical or phenomenal, in that sense, about the experiences of the mind, or consciousness. Consciousness and mental states are not incompatible with brain functioning; they co-exist, the latter (brain) supporting the former (mind). So what is so mysterious about this coupling? And where do these extrasensory phenomena arise from?

It strikes me that the longstanding debate about this issue does not include (at least to my knowledge) a comparison of the responses given by adults with social experience vs. children. The results of such a study may have bearing on Singer's (2005) conclusion that we should simply jettison our moral intuitions about this problem as the biological residue of our evolutionary history.

Does anyone also measure the difference between chance probability and the higher success rate when humans intuitively select the right answer?

I would appreciate it if anyone could help me better understand the difference between an interaction term and a heterogeneity code (RPL) in a random parameter logit model in NLOGIT/LIMDEP.

For example, how differently should I interpret the following two models for my attribute and interaction term?

Model 1: I added an interaction term income*attribute as a variable, in addition to the attribute itself (normally distributed).

Model 2: I instead used ;RPL=income; Fcn = attribute(n).

I am also looking for an intuitive explanation for the difference.

Thank you very much!
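For what it's worth, under the usual mixed-logit notation (hedged: this is the textbook convention, so check it against the NLOGIT documentation for your exact setup), the two specifications imply closely related structures for the individual-specific coefficient $\beta_i$ on the attribute $x_{ij}$:

```latex
% Model 1: random attribute coefficient plus a fixed interaction term
\beta_i^{(1)} = \bar{\beta} + \sigma v_i, \qquad
U_{ij} = \beta_i^{(1)} x_{ij} + \gamma\,(\mathrm{income}_i \cdot x_{ij}) + \dots

% Model 2: heterogeneity in the mean of the random coefficient
\beta_i^{(2)} = \bar{\beta} + \delta\,\mathrm{income}_i + \sigma v_i, \qquad
U_{ij} = \beta_i^{(2)} x_{ij} + \dots
```

Collecting terms in Model 1 gives an effective coefficient $\bar{\beta} + \gamma\,\mathrm{income}_i + \sigma v_i$ on $x_{ij}$, which matches Model 2's mean shift. So if my reading of the ;RPL=income heterogeneity-in-mean syntax is right, the two income effects should be interpreted in essentially the same way here; differences would arise mainly if the heterogeneity function also entered the spread of the coefficient rather than only its mean.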

If yes, how do you make your decisions in such situations?

Hello,

Let us consider the classic random walk:

x_0 = 0, x_{n+1} = x_n + r_n,

where the r_n are identically distributed zero-mean Gaussian random variables. It is well known that the probability distribution of x_n is a zero-mean Gaussian with standard deviation proportional to sqrt(n).

Now, I will say that the instant k is a zero-crossing for a particular realization of the random walk iff x_k has the opposite sign to x_{k+1}, and I will consider the random variable T defined as the temporal distance between two consecutive zero-crossings.

What can we say about the probability distribution of T?

My intuition suggests that this probability distribution is time dependent and that its expected value also grows as sqrt(n). Am I correct or wrong?

I thank you all, Giuseppe Papari
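The suspicion can be checked cheaply by simulation. A Monte Carlo sketch with NumPy; the walk lengths, the number of realizations, and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def crossing_gaps(n_steps):
    """Gaps T between consecutive zero-crossings of one random walk."""
    x = np.cumsum(rng.standard_normal(n_steps))
    # k is a zero-crossing iff x_k and x_{k+1} have opposite signs
    crossings = np.flatnonzero(np.sign(x[:-1]) != np.sign(x[1:]))
    return np.diff(crossings)

mean_gap = {}
for n in (10_000, 100_000, 1_000_000):
    gaps = np.concatenate([crossing_gaps(n) for _ in range(20)])
    mean_gap[n] = gaps.mean()
    print(n, mean_gap[n], np.median(gaps))
```

This is consistent with the known result for symmetric walks (the t^{-3/2} first-passage tail): T is heavy-tailed with no finite mean in the infinite-horizon limit, so the empirical mean gap keeps growing with the observation window, roughly like sqrt(n), while the median stays small. In that sense the measured distribution of T really is horizon dependent, as the question suspects.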

Can this be accepted as a bridge between the quantum and the macroscopic levels, by introducing new elements such as quality, degree of order, consciousness, and an arrow of space and time? The interpretation of observation and experience as two different events? The selection of only one event, in opposition to the possible statistics of uncertain events?

"Intuitive and creative decisions under uncertainty and limited time positively associated with the use of data visualization in Business Intelligence tools"

Especially in the realm of psychiatric assessment and treatment, reductionistic methods are often unable to accurately and consistently identify the etiology of a disorder. Does the answer to this problem lie in improving existing reductionistic methods, or rather in incorporating more intuitive or non-linear forms of investigation?

Many say intuition plays an important role in many disciplines, not least in those that demand rigorous thinking, e.g. physics and mathematics. Can we measure the extent of 'looking inside' in laying out an idea, a proof, or any other advance of science and technology? Or do you contest the importance of intuition in the progress of science?

How would I research pre-cognitive judgment when it comes to a product's design? Specifically, I'm looking to see whether there's a way to measure an individual's intuition and judgment of condom packaging and design at a level where they cannot explicitly say why they like/don't like the product. I've been racking my brain about how to even start designing a research project around the topic. I think I'd need to start by examining the structure and development of mnemonic networks in certain populations, and then compare them to actual findings, but I'm clueless as to how to measure how an individual interfaces with a product without asking them questions.