ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.
Browse by research topic to find out what others in your field are discussing.
- What are the characteristics of distant large earthquakes?
1. Do distant large earthquakes located several hundred kilometers from the sites (100 to 1000 km) have any characteristics that influence the ground motion and response spectra of those sites?
2. Do distant large earthquakes located several thousand kilometers from the sites have any characteristics that influence the ground motion and response spectra of those sites?
In particular, subduction earthquakes.
Waves are the medium's response to the transmission of seismic energy. In very rigid basement rock, seismic energy travels quickly: the displacement amplitude of the body waves is small compared with their velocity amplitude, meaning the energy is transmitted mainly as kinetic energy. When that energy tries to pass into a medium of much lower stiffness (a sedimentary basin), an impedance contrast arises. Kinetic energy cannot be transmitted unchanged into such a different medium; blocked from propagating as before, it is converted into deformational (potential) energy, i.e. the displacement amplitude of the wave increases while the velocity amplitude decreases. As the energy approaches the basin surface, the medium density becomes lower still, so the deformation amplitude grows further and the velocity amplitude falls further. When the seismic waves reach the ground surface, the physics changes again: edge (border) effects generate surface waves, which produce the largest effects.
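The impedance argument above can be sketched formally (a simplified plane-wave picture; the symbols are my own notation, not from the answer):

```latex
% Seismic impedance of a medium with density \rho and wave speed v:
Z = \rho v
% If the energy flux E \propto Z\,\omega^{2} A^{2} is conserved as the wave
% crosses the rock/basin interface, the displacement amplitude grows:
\frac{A_{\text{basin}}}{A_{\text{rock}}}
  \approx \sqrt{\frac{Z_{\text{rock}}}{Z_{\text{basin}}}} > 1
\quad\text{since } Z_{\text{basin}} \ll Z_{\text{rock}}
```

This is the standard first-order reason soft sedimentary basins amplify displacement at the expense of particle velocity.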
- Does anyone know of work investigating Fire Instructor health?
This is specifically in regards to fire instructors - those that undergo heat exposure numerous times per week. I am not aware of any in the UK but hopefully there may be some work completed elsewhere in the world.
- What are the average drops/ml and the average drop size in commercial anti-glaucoma medications in India?
I found that the drop size is between 25 and 70 µL in the Western literature, but could not find any studies on Indian commercial preparations.
@Anna madam, I cannot read German and I also cannot access it. :-)
Since this is about glaucoma medications, most of them are safe after the bottle is opened. My concern is: when a patient buys a bottle, how much does it cost per drop? Regards
- If supporting services underpin flow of other services, does limiting valuation to only supporting services avoid double-counting?
While I would assume such an exercise is highly prone to underestimation of economic value, I would love to know what others think about it.
You should also take a look at Fisher et al. 2008: Defining and classifying ecosystem services for decision making.
- Is there any relationship between your memories and architecture and/or interior design?
Have you ever noticed that your memories are not only about people, events, and certain acts, but are also tied to a certain place, building, park, hallway, room, or piece of furniture? If yes, do you have any examples?
Not long ago I had dreams of fantastic architecture in the rebuilt house of my great-great-grandparents, i.e. many years after the original building was sold and demolished (new decorations, new floors, new pictures, etc.). Why do such images appear in the minds of great-great-grandchildren?
- Can you suggest remote sensing techniques for mapping submerged aquatic vegetation?
What are the best satellite or aircraft remote sensing techniques for monitoring and mapping submerged aquatic vegetation?
I think the LIDAR idea might be very fruitful, though I'm not an expert in that area. I've heard of it being used, especially given the many new technologies and processing techniques.
- What is “infinite”?
Is it anything other than an idea of the human mind, or something real objectively?
“Infinite”, “potential infinite”, “actual infinite”, “potential infinitesimal”, “actual infinitesimal”, “potential infinity”, “actual infinity"
Are they all “infinite”, but with different natures?
1. How many definitions of “infinite” with different natures in science can we have as ideas of the human mind?
2. How many definitions of “infinite” with different natures in science can we have as something we can find in the real world?
3. How many definitions of “infinite” with different natures in science can we have as co-products of human mind and objectivity?
4. Can we really have many different definitions with different natures for the concept of “infinite” in human science and how can we distinguish them theoretically and practically?
Pupils understand immediately that there is no such thing as a biggest natural number, so they understand that the set N is infinite. They also understand that as n becomes bigger and bigger, 1/n tends toward 0 without reaching 0. So the next step is to explain to them that "n becomes bigger and bigger" is the same as saying "n tends to infinity, without reaching it". Children have a sense of continuity, and then we tell them that "infinity" is introduced for the sake of completion. I know that fifth- or sixth-grade pupils will understand this.
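The classroom intuition described above ("1/n tends to 0 without reaching it") corresponds to the formal limit statement:

```latex
\lim_{n\to\infty} \frac{1}{n} = 0
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n > N :
\left|\frac{1}{n} - 0\right| < \varepsilon
```

Note that the definition never requires 1/n to equal 0; "tends to infinity" is shorthand for the quantifier structure, not a value n ever takes.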
- What are the advantages and what are the problems of the "retinoid system" hypothesis?
Is the hypothesis of a "retinoid system" (by Professor Arnold Trehub), as described in "Consciousness and Cognition" 16 (2007) 310–330 and in Professor Trehub's other works, a plausible hypothesis? What are its advantages and what are its problems?
Professor Trehub describes it in the following words:
“Activation of the brain's putative retinoid system has been proposed as the neuronal substrate for our basic sense of being centered within a volumetric surround – our minimal phenomenal consciousness (Trehub 2007). Here, the assumed properties of the self-locus within the retinoid model are shown to explain recent experimental findings relating to the out-of-body experience. In addition, selective excursion of the heuristic self-locus is able to explain many important functions of consciousness, including the effective internal representation of a 3D space on the basis of 2D perspective depictions. Our sense of self-agency is shown to be a natural product of the role of the heuristic self-locus in the retinoid mechanism.” (Abstract, from: Where am I? Redux.)
For the publications of Professor Trehub see:
The question has already been discussed in the following thread:
I am glad to see you, Bernd.
- Can RNA be degraded at -20 °C after 24 hours?
I extracted RNA with the GeneJET Plant RNA Purification Mini Kit from Thermo Scientific and got 4 bands in non-denaturing electrophoresis (without DNase treatment). Then I stored it at -20 °C. After about 24 hours, I ran that RNA sample again and got no band at all, only a "smiley smear" (bottom picture). I don't know whether my RNA was degraded or there was a problem with the electrophoresis.
Patricia is right, I do not see the rRNA bands in any of the samples... accordingly, they all look degraded to me. What plants do you work on? Some species (for example grapevine) are really difficult to handle with column-based kits because of the presence of secondary metabolites...
- What is the inner filter effect in fluorescence spectroscopy quenching?
Can any one explain what the inner filter effect in fluorescence spectroscopy quenching is?
How is this quenching related to the Stern-Volmer equation?
As Adam mentioned, you can correct for the inner filter effect relatively easily.
F_corrected = F_observed × 10^((OD_excitation + OD_emission)/2)
If after the correction is applied you have a linear Stern-Volmer plot, then you have only one type of quenching. If you still have a curved plot, you may have both dynamic and static quenching.
Distinguishing between static and dynamic quenching is slightly more complicated, since the Stern-Volmer plots for each are linear (with both together it becomes hyperbolic). The distinguishing feature is that increased temperature increases the SV plot slope, K, for dynamic quenching (more collisions) and usually decreases the slope for static quenching (weaker association).
See the Lakowicz text for more details.
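As a minimal numerical sketch of the correction formula and the Stern-Volmer fit described above (function and variable names are my own, not from any particular package):

```python
import numpy as np

def correct_inner_filter(f_obs, od_ex, od_em):
    """Standard inner filter correction:
    F_corr = F_obs * 10**((OD_ex + OD_em) / 2)."""
    return f_obs * 10 ** ((od_ex + od_em) / 2.0)

def stern_volmer_ksv(quencher_conc, f0, f):
    """Fit F0/F = 1 + Ksv*[Q] by least squares; returns the slope Ksv.
    A linear plot suggests a single quenching mechanism."""
    q = np.asarray(quencher_conc, dtype=float)
    ratio = f0 / np.asarray(f, dtype=float)
    # fit (F0/F - 1) vs [Q]; the first polyfit coefficient is the slope
    return np.polyfit(q, ratio - 1.0, 1)[0]
```

Simulating purely dynamic quenching with a known Ksv and checking that the fit recovers it is an easy sanity test before applying this to real data.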
- Acid-butanol method: the liquid separated into two layers. Can we measure the total amount of condensed tannins using the upper layer?
Leaves were air dried and ground. For each leaf sample, extracts were prepared by shaking leaf powder in deionized water for 12 h. We used the acid-butanol method to quantify condensed tannins, with a reagent-to-sample ratio of 4:1, but the liquid separated into two layers. Can we measure the total amount of condensed tannins using the upper layer?
- How would you determine both necrosis and apoptosis occurred in your culture?
If you have the same cell population (e.g. endothelial cells) in vitro, treat them with a chemo-agent, and see cell death in your culture, how would you determine that both necrosis and apoptosis occurred? Is it possible that a chemo-agent induces both necrosis and apoptosis in cells?
AnnexinV/PI time-lapse microscopy can be a good way of differentiating necrotic vs. apoptotic cell death. This way you can also look at the morphology of dying cells. Keep in mind that in tissue culture, cells undergo secondary necrosis, which makes the FACS double-positive population questionable. Also, some cells stain very poorly for AnnexinV, so you might not see an increase in AnnexinV positivity even if they are dying via apoptosis, as the cells will take up PI.
- Can someone please help me find a simple classical method for the quantification of copper in water at the ppb level?
Currently I'm using ion chromatography, but because of interference from Na ions, whose retention time is very close to that of copper, I can't quantify copper very accurately. So I want to switch to some classical kind of method.
Classical methods may not help you reach ppb levels of Cu in water. ICP-MS is a good bet for this LOQ. It is not that expensive; you can test one sample for about $100.
- How may remittances promote dependence on a source of revenue?
Natural resource rents can have negative effects on a country's economic growth. Remittances can produce the same effects. Indeed, they can lead to a number of deleterious outcomes, namely moral hazard effects (families react by reducing their labour efforts at home) and the depreciation of human or physical capital over time.
Are these effects real or fictitious?
The argument about remittances having a moral hazard and dependency effect is pretty old and is summarized in this World Bank survey paper:
- Has anyone used the BPRS or PANSS as measures of psychosis symptoms retrospectively (i.e. based on individual subject recall)?
I am interested in substance-induced psychosis, and was wondering if anyone has come across measures of psychosis in this population that can be conducted based on individual recall, i.e. when the person is no longer psychotic.
Has anyone used the BPRS or PANSS in this manner before? Any comments on their utility/validity in this population?
Would appreciate any help anyone can offer.
- What are pros and cons of running a microcontroller at low and high frequency?
A microcontroller can be run at multiple frequencies, as specified in its datasheet.
What factors drive the decision to run the microcontroller at a higher or lower frequency? Please also comment from the perspective of a board that includes an RF section.
A microcontroller's (or chip's) operating settings are usually governed by a DVFS (dynamic voltage and frequency scaling) policy that controls both the operating voltage and the frequency. The decision the policy makes depends on the objective, e.g. optimum performance, maximum performance, optimum PDP (power-delay product), etc. Recent DVFS policies consider energy consumption along with performance. The policy usually runs in the kernel and should be accessible in open-source OSes.
The factors such policies monitor are usually the internal CPU/MCU unit status, obtained from special hardware counters (e.g. search for EMON counters) that track information such as unit utilization.
I would also recommend having a look at the following publication on IEEE Xplore, or searching for the keyword "DVFS policies".
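As a toy illustration of the kind of decision such a policy makes, here is an ondemand-style governor sketch; the thresholds and frequency table are made-up assumptions, not taken from any real kernel:

```python
def choose_frequency(utilization, current_idx,
                     freqs=(8_000_000, 48_000_000, 168_000_000),
                     up_threshold=0.85, down_threshold=0.30):
    """Pick the next clock frequency from a utilization sample (0..1).

    High utilization -> step up one level (favor performance);
    low utilization  -> step down one level (favor energy).
    Returns (new_index, frequency_in_hz).
    """
    if utilization > up_threshold and current_idx < len(freqs) - 1:
        current_idx += 1
    elif utilization < down_threshold and current_idx > 0:
        current_idx -= 1
    return current_idx, freqs[current_idx]
```

A real DVFS policy would also lower the core voltage together with the frequency, since dynamic power scales roughly with C·V²·f; on an RF board, frequency changes additionally matter because clock harmonics can land in the radio band.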
- How to calculate power for path models in SEM?
I appreciate your help to calculate power for different path models in SEM with observed variables.
I have a path model with discrimination as the exogenous variable, 4 mediating variables (measures of stress and socioeconomic status) and smoking as the endogenous variable. I saw in other posts that some of you recommended the use of an online package that calculates power for RMSEA such as the one on this website:
I appreciate it if you could help me clarify some details:
1) How do I calculate the degrees of freedom for this model?
2) How do I decide what is the Null RMSEA vs. the alternative RMSEA? Is the null RMSEA any value that indicates poor model fit? And the alternative RMSEA the minimum value that indicates good model fit (0.05)?
3) What if the model above included an interaction term between discrimination and another two constructs (coping and social support)? Would that change the way power calculations are made?
Brad is correct. Holding constant parameter estimates and sample size, larger models that have greater df will have greater statistical power, compared to smaller models that have fewer df.
Viable options for estimating power to detect an interaction term as statistically significant in an SEM path model include the following:
1. Conduct a power analysis for the regression model containing the interaction term, all other relevant predictors, and the dependent variable in question, to estimate power to detect the interaction term given assumptions about the proportion of variance that it and the other predictors explain in the dependent variable.
2. Follow Satorra and Saris' (1985) method of obtaining the noncentrality parameter for the nested-models approach to testing the significance of estimating versus fixing at zero the direct effect of the interaction term in the path model, and use the NCP along with a published power table to determine power [Satorra, A., & Saris, W. E. (1985). The power of the likelihood ratio test in covariance structure analysis. Psychometrika, 50, 83-90].
3. Use Monte Carlo simulation to construct a population based on presumed values for the parameters in the path model and use resampling methods with the N for your study to build a distribution of values for the interaction term's path coefficient by imposing the path model on each resample and storing the path coefficient estimates. The percentage of significant coefficients in the resampling distribution represents the statistical power of your model with the given N to detect the interaction term as significant.
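For the RMSEA-based calculators mentioned earlier in the thread, the computation behind them can be sketched directly (the MacCallum-Browne-Sugawara noncentral chi-square approach; the default null and alternative RMSEA values below are common illustrative choices, not prescriptions):

```python
from scipy.stats import ncx2

def rmsea_power(df, n, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Power to reject H0: RMSEA = rmsea0 when the true RMSEA is rmsea_a,
    for a model with `df` degrees of freedom and sample size `n`."""
    ncp0 = (n - 1) * df * rmsea0 ** 2   # noncentrality under the null
    ncp_a = (n - 1) * df * rmsea_a ** 2  # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, ncp0)  # critical chi-square under H0
    return float(ncx2.sf(crit, df, ncp_a))  # tail probability under Ha
```

Consistent with the point above about model size: holding n and the RMSEA pair constant, a model with more degrees of freedom yields greater power.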
- Seawater purification in poor communities?
I am interested in low cost seawater purification technique using solar energy, any recommendations?
Dear Suleiman, I saw this system the other day, but you should also look at a system called SunVention.
When you use concentrating solar power with a sun-tracking system, seawater distillation is often the technique used to desalinate water from the sea or from boreholes.
- Does anyone know any recent papers on Statistical Optimum Estimation Techniques?
We are currently working on ways of assessing the quality of heuristic solutions to combinatorial optimization problems. SOET (see the question) is one way of doing so. There is a recent review in the Journal of Heuristics by Giddings, Rardin, and Uzsoy (2014). But is someone able to suggest ongoing work on this topic?
I prefer "to be certain that b0 is somewhere between b1 and a1" with confidence level = 1, and I would not even consider "almost certain that b0 is between b1 and s1" at any confidence level < 1. This is not a case for probability theory. As an example, imagine the statement: "This theorem was proven with confidence level = 0.999". For me the answer would be no; we can consider only a single case: "This theorem was proven with confidence level = 1". At any other confidence level < 1 the theorem cannot be considered proven. Likewise, you could say "I found the solution with confidence level = 0.999", but I think the answer would again be no. Thus my summary is the following: we should take seriously only reliable bounds (a1) with confidence level = 1. Since s1 > a1, that metric can be considered only a "recommendation". As for me, in the case b1 = s1 I would not interrupt the search for b0 just because of the claim "I found an optimal solution b0".
In conclusion, I again want to stress the great importance of finding a quality bound a1 for evaluating a heuristic solution b1.
- Would it be acceptable to compare my data findings with other published data in a graph?
I obtained the mean total microbial load isolated from eggshells. However, another researcher quantified the microbial load before washing eggshells with water was banned in Europe. My question is: can I compare my result with that researcher's results, obtained before the washing ban, and see whether there is a significant difference? If yes, do I need to obtain the mean total microbial load from the other researcher's results? What is the right way to treat the data?
That is another important factor, but I think the microbial load would decrease, since the hens were more stressed and prone to microbial attack in battery cages than in barns.
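If only summary statistics (mean, SD, n) are available from the published study, one defensible way to test the difference is a Welch two-sample t-test computed from those summaries; a sketch follows, where all the numbers are placeholder assumptions, not real microbial counts:

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical summary statistics: own data vs. the published pre-ban data
my_mean, my_sd, my_n = 5.2, 1.1, 30      # e.g. log10 CFU per eggshell
pub_mean, pub_sd, pub_n = 4.1, 1.3, 25

# Welch's version (equal_var=False) avoids assuming equal variances
t, p = ttest_ind_from_stats(my_mean, my_sd, my_n,
                            pub_mean, pub_sd, pub_n,
                            equal_var=False)
```

Note that such a comparison cannot separate the effect of the washing ban from other between-study differences (housing system, strain, season, method), so any significance should be interpreted cautiously.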
- What may account for students ignoring punctuation marks in their writing nowadays? How can the practice be checked?
Has anyone else noted that today's students ignore punctuation marks in their writing? For the past eight years, I have observed that students most often ignore punctuation marks in their written work. In a recent assignment, I asked students to answer five questions eliciting information about their uncle. Out of the 540 students who participated, 510 (94%) wrote their answers without a full stop at the end of each of the five sentences. The students are of sixteen different nationalities with this common problem. The international dimension of the problem is what is quite worrying. What could be responsible for this kind of behaviour? Lack of punctuation marks, or their inappropriate use, has always been penalized as part of grammar errors when marking students' exercises, but this has not minimized the problem. How can this problem be tackled?
Keywords: writing, behaviour, evaluating
One reason is teachers who ignore errors when students omit punctuation marks. To solve this problem, teachers must guide students, hold awareness sessions to ensure the soundness of their language, and then turn to resolving students' errors.
- Do you support Tom Leinster's call not to help intelligence services through mathematics? "Intelligence agencies hire lots of mathematicians, but would-be employees must realise that their work is misused to snoop on everyone, says Tom Leinster"
New Scientist recently published an article in which Tom Leinster asks mathematicians to stay away from supporting the NSA, CIA, GCHQ, (former) KGB and all the other organizations that spy on us. I don't even know the name of their Chinese colleagues' organization.
What are your thoughts on this?
@Fairouz, I am wondering how 'those services' (we are talking about tens of thousands of employees) can be thoroughly monitored if there is no political or authoritative power steering them in the right direction. Regarding the US, those services were harshly criticized for their 'limited power to collect useful information'. In the aftermath of 9/11 the government pushed them to control the communication networks more effectively. Now they observe us at our computer desks and spy on our mobile communication; in soccer we would say they massively overhit the ball.
I am afraid George's analysis posted yesterday is very true. The problem is not the behavior of the services as such but the agenda of the steering committees populated by politicians and lobbyists. The agenda has to change.
Thanks, @Kevin, for introducing me (and others) to this important field of study. A satirist would probably talk about the De-Ku-Klux-Clanization of mathematics education.
- ArchiMate vs UML: when should one or the other be used?
While modeling systems, when do we have more advantages using ArchiMate?
Are ArchiMate and UML meant to be used simultaneously, or should one always opt for one of them?
It depends on what you're trying to model. UML is very well known and there are a lot of resources to help get you started (software, tutorials, etc.). UML can also be a bit overwhelming at first, but it is worth the time and effort. ArchiMate is useful and possibly faster to learn, but is better suited when you also want to incorporate business/management elements in your chart. My opinion is that if you need a lot of detail, use UML. If you don't require a lot of detail but want to incorporate elements of business/management processes, use ArchiMate instead.
- Is it possible in FullProf to calculate the distribution of A atoms (or ions) substituting for B in a B2C3 crystal structure?
It is a B2-xAxC3-type magnetic molecule.
Yes, if there is enough Z contrast between A and B; otherwise you would need to do anomalous scattering or neutron diffraction.
- Can anyone help by providing information about applications of genetic algorithms in neural networks?
Evolutionary Computation can be used to optimize neural networks (NN) in several ways:
- optimize NN design or structure;
- optimize NN weights or training;
- optimize the preprocessing of data to feed the NN;
- post-process NN outputs (knowledge extraction).
More details can be found in this presentation of mine:
Also, in my webpage I have several evolutionary neural network related papers,
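As a minimal sketch of the second bullet above (evolutionary optimization of NN weights), here a (1+λ) evolution strategy trains a tiny 2-2-1 tanh network on XOR. All names, the architecture, and the hyperparameters are my own illustrative choices, not from the presentation referenced:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    """Tiny 2-2-1 tanh network; w packs all weights and biases (9 numbers)."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8].reshape(2, 1), w[8:9]
    h = np.tanh(X @ W1 + b1)
    return np.tanh(h @ W2 + b2).ravel()

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # negative MSE

# (1+lambda) evolution strategy: keep the best, mutate with Gaussian noise
best = rng.normal(0.0, 1.0, 9)
for _ in range(500):
    children = best + rng.normal(0.0, 0.3, (20, 9))
    scores = [fitness(c) for c in children]
    i = int(np.argmax(scores))
    if scores[i] > fitness(best):
        best = children[i]
```

No gradients are used anywhere, which is the appeal of the approach when the network or loss is non-differentiable; the price is many more fitness evaluations than backpropagation would need.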
- Why did sections incubated with the sense probe in my RNA in situ experiment show signals at the same place as those with the antisense probe?
In my RNA in situ experiment on rice tissue, the antisense-probe signal showed a specific distribution in one region of the sections. It would be perfect if the sense probe showed no signal at the same place. The sense-probe signal was always weaker than the antisense-probe signal. Was the signal real? Or is this simply a specific region that attracts all kinds of probes? Or does the sense probe tend to show signal in the same region as the antisense probe when the probe concentration is high?
- Would someone please give me some reference for the existing source code of HMIPv6 in Omnet++ ?
I am working on HMIPv6 and want to modify it in some ways. I am using Omnet++ as a simulator. Can someone please share existing HMIPv6 code for Omnet++ so that I can modify it?
Thanks in advance
I could easily find it on Google. I hope this gives you a useful reference.
- Can you suggest a simple method for making a silicone rubber (PDMS) surface permanently hydrophilic?
I am trying to find a simple method for treating a medical-grade silicone rubber surface to render it permanently or quasi-permanently hydrophilic such that the adsorption of proteins is greatly increased. I have used low pressure gas plasma with some success but would like to make the protein-surface interaction stronger and longer lived.
Will your PDMS be exposed to air? One of the things you could try is to oxidise the surface - via oxygen plasma, corona discharge or UVO treatment - and keep it submerged in water. A quick search revealed this article.