ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.
Browse by research topic to find out what others in your field are discussing.
- Why is there an interval of more than 30,000 years between the first visible traces of art and writing?
The first signs of art date from more than 30,000 years ago (e.g. cave art), whereas the first signs of writing (e.g. Tamil) appeared, let's say, ca. 5,000 years ago.
Any idea why artistic humans waited such a long period before they decided to start writing?
What stories would cavemen of more than 30,000 years ago have had to tell, beyond using body signals and some simple vocalizations to transmit personal experiences?
Perhaps the question is at what moment human vocal communication became substantially more complex, able to transmit more complicated stories, which would be the basis for the development of writing, e.g. as a tool to memorize and transmit more complex messages in the absence of the writer?
- What is the difference between sensitivity and linearity when we evaluate the performance of an instrument?
I made a dilution series of a mixture of different compounds, and significant correlations (R² > 0.99) between the responses of the compounds and the corresponding concentrations were observed. Does this relationship indicate good sensitivity? How should sensitivity be evaluated?
'Sensitivity' is a parameter that - as far as I'm aware - is out of date. It no longer appears in analytical chemical or pharmaceutical method validation protocols.
It used to be very important, and was equivalent to the slope of a calibration curve. Thus, in a long and detailed chapter on the classical analytical (beam) balance, Kolthoff & Sandell* define sensitivity as 'the amount of deflection of the beam or pointer produced by a small excess of weight on one of the pans'. They went on to state that if the balance is designed to be too sensitive, the response becomes 'irregular', presaging the use of the signal-to-noise ratio. In practice, the sensitivity of a balance usually varies as a function of total load.
Another example, from physics and electronics, is the galvanometer or pointer meter that measures current. The most sensitive workshop moving-coil meter movement gave full-scale deflection for 50 µA; the sensitivity was given either as that current, or (when used for voltage readings) as '20000 ohms per volt full scale', reflecting the amount of perturbation a voltmeter introduced when testing high-impedance circuitry. When sensitivity wasn't an issue, one used 1 mA FSD, in the interests of robustness, stability and accuracy.
Nowadays, particularly with digital read-outs or recordings, we should think in terms of the accuracy and repeatability of the response function, which may or may not be linear. The noise level is treated as a separate parameter. In general, noise, accuracy and repeatability are functions of the quantity we are measuring, a trap for the unwary if the signal is linearised electronically or in software, as in spectrophotometers.
To conclude, 'sensitivity' is no longer considered a useful parameter. Ideally, we should report noise, accuracy and repeatability at several points covering the stated working range. Commonly, we determine several sets of parameters, for example short-term, between days, long term, between instruments (same and different models) and between labs.
- - -
That's enough for the moment about sensitivity. To return to the question, the correct approach is to use inverse regression analysis, as presented by Miller and updated by Tellinghuisen. If you do a full method validation, you may be able to use single point calibration; I've made some remarks about the use of replicate points on RG.
Two other points: 1) serial dilutions don't give independent calibration points if the weighing and making up of the stock solution are subject to significant random variation (you may underestimate the method uncertainty), 2) I'm not convinced the usual assumption of normal distributions is valid; when weighing a powder, some may fall on the balance pan or the outside of the weighing vessel, but (apart from loss of adsorbed water) it's hard to think how the recorded weight can be too low.
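To make the calibration and inverse-prediction step concrete, here is a minimal sketch in Python (the concentrations and responses are invented for illustration; this is ordinary least squares, not the full weighted treatment in Miller or Tellinghuisen):

```python
import numpy as np

# Illustrative calibration data: concentrations (x) and instrument responses (y)
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # e.g. mg/L
resp = np.array([0.02, 1.98, 4.05, 5.97, 8.10, 9.95])   # instrument signal

# Ordinary least-squares fit: resp = slope * conc + intercept
slope, intercept = np.polyfit(conc, resp, 1)

# 'Sensitivity' in the classical sense is simply the slope of this line
print(f"slope (classical 'sensitivity'): {slope:.4f}")

# Inverse prediction: estimate the concentration of an unknown from its response
y_unknown = 5.00
x_hat = (y_unknown - intercept) / slope
print(f"estimated concentration: {x_hat:.3f}")
```

The uncertainty of x_hat (the inverse-regression confidence interval) is what the Miller and Tellinghuisen papers cited below treat properly.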
*Kolthoff, I. M.; Sandell, E. B. (1951). Textbook of quantitative inorganic analysis. Macmillan.
Miller, J. N. (1991). Basic statistical methods for Analytical Chemistry. Part 2. Calibration and regression methods. A review. The Analyst, 116(1), 3. doi:10.1039/an9911600003
Tellinghuisen, J. (2005). Simple algorithms for nonlinear calibration by the classical and standard additions methods. The Analyst, 130, 370-378. doi:10.1039/B411054D
Lee (2013). Calibration uncertainty in pharmaceutical analysis: replicate single-point calibration revisited. ResearchGate.
- Do we really understand the ripples of spacetime?
Ripples of spacetime are what we call gravitational waves. Even after the discovery of these ripples, do we really understand their physical origin?
I am asking because I don't even understand the physics connecting the mass-energy tensor to spacetime curvature (a mere mathematical concept for me).
Einstein's relativity theory has solid physical foundations. I would suggest you derive Einstein's field equations from scratch. You will need tensors for sure, and then, apart from the joy of finding things out, you will understand the physical origin of gravitational waves much better.
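For reference, these are the standard textbook forms involved (quoted only for orientation, not a derivation):

```latex
% Einstein field equations: geometry (left) sourced by energy-momentum (right)
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
% Writing g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu} with |h_{\mu\nu}| \ll 1 and choosing
% the Lorenz gauge, the vacuum equations reduce to a wave equation -- the ripples:
\Box \bar{h}_{\mu\nu} = 0, \qquad \bar{h}_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu} h
```

The second line is where gravitational waves come from physically: small perturbations of the metric propagate at speed c exactly like any other wave equation solution.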
- Why was the negative control stained in IHC of rat ovary and uterus?
We want to detect estrogen receptor alpha in rat ovary and uterus with the IHC method. But in the result, the negative control was still stained, and the granulosa cells were not.
Have you performed other IHC tests successfully with this procedure? Are the species in your protocol compatible?
The primary is from mouse; the secondary is from which species, raised against mouse?
I assume that overnight incubation with the ready-to-use antibody is too long. Ready-to-use antibodies usually need about 30 min incubation at RT. This could cause unspecific staining in the negative control.
Can you exclude that a mistake happened with the application of the primary and the negative control?
I would try to repeat the test with either a diluted primary or a shortened incubation, until the negative control is clear. A well-known positive control should also be stained alongside the experiment (breast, portio).
- Alternative splicing or somatic reversion?
Which is the best way to elucidate whether a mild phenotype in a patient with primary immunodeficiency is caused by alternative splicing or by somatic reversion (particularly when alternative splicing is present but it seems that something is missing to elicit a partial phenotype)?
- What is the most suitable solvent for DNA?
Any solvent which evaporates completely and rapidly after use and is environmentally friendly.
If you want to use a buffer, not just water, to maintain the pH of the DNA solution, then some volatile buffers are ammonium formate, ammonium bicarbonate, and ammonium acetate.
- I want to characterize actinobacteria on the basis of their chemical properties; can you suggest some methods?
I want to find the cell wall chemotype, whole-cell sugar pattern, peptidoglycan type, fatty acid pattern, major menaquinones, phospholipid type and G+C content of different actinobacterial isolates for their identification. Please suggest some methods which can be done easily in the laboratory.
- If the Avrami coefficient (n) is <1, what is the significance of 'n', and how can 'n' be correlated with the propensity of nucleation and growth?
I have studied the kinetics of a multiple-stage (three-stage) thermal decomposition reaction of three solid materials (layered double hydroxides). These materials are similar in structure, with very little variation in composition (layer charge). I found Avrami coefficients of n = 0.85, 0.7 and 0.6 for materials 1, 2 and 3 respectively, for the first stage of the decomposition reaction (stage 1 follows the Avrami model, [-ln(1-a)]^(1/n), where 'a' is the fractional reaction and 'n' is the Avrami coefficient, which signifies the dimensionality of growth of the nucleus).
My analysis shows that, with increasing interaction among the constituents (i.e. increasing layer charge: material 1 to 3), the 'n' value decreases and the activation energy (Ea) increases (as does the decomposition temperature). One quick thought came to mind about a probable correlation between 'n' and 'Ea'. If I can correlate the value of 'n' with the ease of formation of the nucleus and its subsequent growth, then it might be possible to justify the variation in the decomposition temperature and the corresponding Ea in terms of n.
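For isothermal data, the Avrami exponent is usually extracted from the double-log linearization ln[-ln(1-a)] = ln k + n ln t. A minimal sketch (the data below are synthetic, generated with a known n, not your measurements):

```python
import numpy as np

# Synthetic isothermal data generated with a known Avrami exponent
n_true, k = 0.85, 0.05
t = np.linspace(1.0, 60.0, 30)                # time
a = 1.0 - np.exp(-k * t**n_true)              # fractional reaction, Avrami model

# Linearize: ln[-ln(1 - a)] = ln k + n * ln t, then the fitted slope is n
y = np.log(-np.log(1.0 - a))
n_fit, lnk_fit = np.polyfit(np.log(t), y, 1)
print(f"fitted Avrami exponent n = {n_fit:.3f}")
```

With real data, deviations from linearity in this plot are themselves diagnostic of a change of mechanism during the stage.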
Response to Dr. Nobuyoshi Koga:
In one of my papers I used SEM to justify the proposed kinetic model and mechanistic pathway. In a recent paper (communicated) I have supported my proposed kinetic model indirectly in the following way:
1. First, kinetic models were assigned for the three-stage decomposition reaction.
2. Based on the kinetic parameters, a TTT diagram was developed.
3. The TTT diagram predicts different extents of decomposition for different sets of time and temperature.
4. We verified one of the predicted % decomposition values with the help of an IR study, with reasonable accuracy.
Thus, in an indirect way, we justified our proposed model.
In addition, we have done a similar kind of study of the spinelization reaction (modelling, TTT and justification; communicated) and justified our proposed model by matching the % spinel formation from XRD analysis.
Having said that, I am still unable to explain the mechanism based on the kinetic models. I hope you can guide me on how to handle such questions (I am facing this type of question very often). Thank you very much, sir, for the suggestions, comments and help.
- Any suggestions on how to estimate the thickness of Co/Cu multilayer thin films using low-angle XRD?
In this case, the thin films were fabricated on a Si substrate using the DC magnetron sputtering method.
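One standard route is to read the Kiessig fringe positions off the low-angle reflectivity scan; the total film thickness then follows from the fringe spacing, t ≈ λ / (2(sinθ_{m+1} − sinθ_m)). A minimal sketch (the fringe angles below are invented for illustration; use your own measured maxima):

```python
import numpy as np

WAVELENGTH = 1.5406e-10  # Cu K-alpha, metres

# Hypothetical Kiessig fringe maxima (theta, degrees) read off a reflectivity scan
theta_deg = np.array([0.55, 0.70, 0.85, 1.00, 1.15])

# t ~ lambda / (2 * delta(sin theta)) for consecutive fringes;
# averaging the spacing over all fringe pairs reduces read-off error
sin_t = np.sin(np.radians(theta_deg))
thickness = WAVELENGTH / (2.0 * np.mean(np.diff(sin_t)))
print(f"estimated total thickness: {thickness * 1e9:.1f} nm")
```

For the Co/Cu bilayer period (rather than total thickness), the same formula is applied to the superlattice satellite peaks instead of the Kiessig fringes.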
- Zero-inflated continuous covariates: Is there a standardised way to deal with such covariates in logistic regression?
For instance, suppose you measure habitat selection and you have many types of habitat. The research unit is usually small and contains only 1-3 types of habitat; the rest have zero percentage. Does this kind of data cause problems in logistic regression, and how should one deal with them?
Dear Sean, thanks for the advice, it's very helpful.
All the best!
- Is a 5-month central rolling of penetration data for an FMCG brand statistically sound?
I want to establish a relationship between Penetration and a brand health metric, say 'Use Nowadays'. I computed the correlation on the monthly discrete data of both metrics (Penetration and 'Use Nowadays') and obtained a value of 0.52; after rolling both series (5-month central rolling), the correlation improves to 0.74. So is a 5-month central rolling of the data for an FMCG brand statistically sound? By 5-month central rolling I mean taking the average of the 2 months before, the 2 months after, and the current month.
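One caution: smoothing two series before correlating them usually inflates the correlation, because the rolling mean removes the independent month-to-month noise while keeping the shared trend. A quick sketch with pandas (synthetic data, not your actual metrics) that reproduces the effect:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = 48

# Two series sharing a slow trend, plus independent monthly noise
trend = np.cumsum(rng.normal(0, 1, months))
penetration = trend + rng.normal(0, 2, months)
use_nowadays = trend + rng.normal(0, 2, months)

df = pd.DataFrame({"penetration": penetration, "use_nowadays": use_nowadays})

raw_corr = df["penetration"].corr(df["use_nowadays"])

# 5-month central rolling mean: 2 months back, the current month, 2 months ahead
smooth = df.rolling(window=5, center=True).mean()
smooth_corr = smooth["penetration"].corr(smooth["use_nowadays"])

print(f"raw correlation:      {raw_corr:.2f}")
print(f"smoothed correlation: {smooth_corr:.2f}")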
- Is it possible to find the Burgers vector for a non-cubic system? If yes, how? If no, why not?
If yes, how? Kindly mention the formulae.
If no, why not? Kindly elaborate on the reason.
- Does anybody know how to extract plasmid DNA from a Sterivex filter?
I'd like to get plasmid DNA from a Sterivex unit containing bacterial cells from ocean samples. The purpose of this is to do a shotgun sequencing run but ideally on a DNA extract enriched for plasmid DNA. I could go brute force and get total DNA and then computationally identify DNA reads of plasmid origin but that would cost more money and analysis time than simply starting from a plasmid DNA extraction.
Any advice is appreciated
If the bacteria are intact, you can just break open the filter, collect the bacteria, and then use a standard miniprep kit.
- Can anyone provide the research self-efficacy questionnaire based on factor analysis from Kathleen J. Bieschke?
I'm looking for the research self-efficacy questionnaire based on factor analysis from Kathleen J. Bieschke. The Research Self-Efficacy Scale (RSES) was designed to measure self-efficacy beliefs regarding one's ability to successfully perform various research-related behaviors. This study examined four factors of the RSES: (1) Conceptualization; (2) Early Stages; (3) Presenting the Results; and (4) Implementation. But I can't find the details about that. Can anyone help me? Thank you
- How is Ricci flow related to General Relativity?
General Relativity tells us about the curvature of spacetime, while Ricci flow is a "heat equation of the metric". How are these two equations related?
Please read my paper regarding this topic.
- Can you please help me with particle filter MATLAB code for object tracking?
Does anyone have MATLAB code for a particle filter that they could pass to me? There are some ideas that I cannot grasp; even though I obtained one code for it, I am still confused, as I cannot make it work.
Any help please?
Thank you guys,
I did manage to download two particle filter codes from the MathWorks site, but I found both complex to understand; the code is long and the ideas are not clear to me.
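In case it helps to see the bare algorithm stripped of application code, here is a minimal bootstrap particle filter for a 1-D tracking toy problem (written in Python rather than MATLAB, with invented model and noise parameters; the same predict / weight / resample loop is what the MathWorks codes implement):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: object moves with constant velocity 1.0; noisy position observations
T, n_particles = 50, 500
true_pos = np.cumsum(np.ones(T))                 # true trajectory: 1, 2, ..., T
obs = true_pos + rng.normal(0, 1.0, T)           # noisy measurements
obs_std = 1.0

particles = rng.normal(0, 2.0, n_particles)      # initial particle cloud
estimates = []

for z in obs:
    # 1) Predict: propagate each particle through the motion model (+ process noise)
    particles = particles + 1.0 + rng.normal(0, 0.5, n_particles)
    # 2) Weight: likelihood of the observation under each particle
    weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # 3) Estimate: weighted mean of the particle cloud
    estimates.append(np.sum(weights * particles))
    # 4) Resample: draw particles with probability proportional to their weights
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]

estimates = np.array(estimates)
rmse = np.sqrt(np.mean((estimates - true_pos) ** 2))
print(f"RMSE of particle filter estimates: {rmse:.2f}")
```

For 2-D object tracking, the particle state becomes a vector (x, y, vx, vy) and the likelihood comes from the image (e.g. a colour histogram distance), but the loop structure is unchanged.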
- Should atmospheric correction be done before land surface temperature estimation?
I am always confused: different methods for retrieving LST, for example the radiative transfer equation, the mono-window algorithm, and the single-channel method, seem to already include a step that eliminates atmospheric effects, so why is atmospheric correction still needed in some papers? If the procedure is necessary, do I only need to apply it to the thermal band?
A short answer to your question is yes, atmospheric correction is required while estimating the land surface temperature in raw remotely sensed radiance data. If you do not correct for atmospheric effects how can you be sure that the surface temperature which you have estimated is really of the land surface on the ground or is it because of the atmosphere or is it a combination of both? Obviously, all land observing satellites capable of sensing in the thermal infrared wavelength region (usually 8 µm to 12 µm) have to look through a column of atmosphere. When a sensor records the raw signal which it receives through its IFOV, it cannot differentiate whether the signal is due to the land leaving radiance or the radiance from the atmosphere. It just records what it sees. That is why raw radiance data in the thermal infrared wavelength region have to be corrected for atmospheric effects to estimate the Surface Emitted Radiance which is ultimately used for estimating the Land Surface Temperature using various algorithms. This principle remains more or less the same for all algorithms used for estimating the LST. Consider studying the latest review article on LST. Another crucial aspect which cannot be neglected while estimating LST is Emissivity.
You can try to find the difference yourself. Estimate the temperature using the raw thermal bands and using atmospherically corrected thermal bands, and see which method shows greater accuracy against values measured on the ground.
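For the comparison suggested above, the common step of converting thermal-band radiance to at-sensor brightness temperature looks like this (a sketch: the K1/K2 values are the published Landsat 8 TIRS band 10 constants from the scene metadata, and the radiance value is a made-up example; atmospheric correction of the radiance itself happens before this inversion):

```python
import math

# Landsat 8 TIRS band 10 thermal conversion constants (from the scene metadata)
K1 = 774.8853   # W / (m^2 sr um)
K2 = 1321.0789  # K

def brightness_temperature(radiance):
    """Invert the Planck function: band radiance -> brightness temperature (K)."""
    return K2 / math.log(K1 / radiance + 1.0)

# Example: an illustrative (assumed already corrected) surface-leaving radiance
bt = brightness_temperature(10.0)
print(f"brightness temperature: {bt:.1f} K")
```

Running the same inversion on raw versus corrected radiance (and then applying an emissivity correction) makes the magnitude of the atmospheric effect directly visible.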
- Was a secret system of measures used in the past?
My research detected a common measurement system in all buildings, from the protohistoric period until the introduction of the metric system. It is simple, is based on the sides and diagonals of an anthropometric reference square, and is very efficient for design and dimensional coordination work (see my publications).
However, no written sources are known. So, could it have been a secret for millennia? Do we have a new paradigm for measurement in the past?
Dear Francisco Javier, I have been reading your dissertation, and I have been learning a lot!
I think I understand what your aim is here in your question. My naive answer would be that measurement, linked to sacred rites (most of them are repetitive and have to do with the measurement of time and place, as Mircea Eliade has studied and commented upon), was probably sacred itself and thus bound to secrecy. That implies the important status of measurement in culture. Books like the Kabbalah, and the Bible itself when they refer to the Tabernacle and other sacred buildings like Solomon's Temple, or other cultures when they apply it to sites like Stonehenge or the placement of the Moai in Rapa Nui, qualify measurement as sacred and thus reserved for the initiates. There were rites of measurement: the rite has probably been lost to us, but we can still measure the places, because the objects and buildings can still be measured by our own standards and methods. Whatever the rite was, the fact that we can still measure them is, in itself, proof that we can recover the method even if we do not understand its reason or meaning.
Palladio comes to mind. A colleague who teaches architectural design invited me to a student pin-up. The students were required to imitate Palladio's measurements and proportions in the process of designing a contemporary house. He asked the students to talk about "rhythm and proportion", but he was not aware that Palladio was not using "fractions": he had devised a combinatory scheme of whole spatial modules, built up by adding them in different symmetrical arrays. He said: "Palladio divides this rectangle into two or three rectangles...", and I explained that Palladio did not start from a whole and make fractions, but worked the other way around. My colleague did not know that, in Palladio's time, the idea of a "fraction" was not yet used in Italian architecture. Palladio summed up modules; he did not break them down. In other words, my colleague used a different method to arrive at Palladio's proportions. But I thought it was important for the students to know that Palladio's houses were designed by symmetrically adding up spatial modules.
Best regards, Lilliana
- Is there a way in statistics to consider multiple variables as one?
For example, for illustrating the development of ICT, there are six different indicators.
How can I consider these six different indicators as one single variable in order to use it for further statistical tests?
Is there a way?
If anyone has knowledge of statistics, it would be great if you could help me.
Dear Elahe Meydani,
I agree with all my colleagues on the possible techniques that can be applied to your problem. However, if all variables are continuous, I would like to suggest the construction of an undirected graph using the inverse of the correlation matrix. In this way, it is possible to make inferences about conditional independence and determine which variables are directly related to your variable of interest. For a quick read on the potential use of graphs in statistics, I suggest
Wasserman, Larry. All of statistics: a concise course in statistical inference. Springer Science & Business Media, 2013.
and to use the inverse of the correlation matrix I suggest
Whittaker, Joe. Graphical models in applied multivariate statistics. Wiley Publishing, 2009.
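Complementary to the graphical-model route, the most common way to collapse several indicators into one variable is the first principal component (note this is PCA, a different technique from the precision-matrix approach described above). A minimal sketch with synthetic data standing in for the six ICT indicators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 100 units x 6 indicators, all driven by one latent factor
latent = rng.normal(size=(100, 1))
X = latent @ rng.uniform(0.5, 1.5, size=(1, 6)) + rng.normal(0, 0.3, size=(100, 6))

# Standardize each indicator, then take the first principal component as the index
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
index = Z @ Vt[0]                      # composite score: first-PC projection

explained = S[0] ** 2 / np.sum(S ** 2)
print(f"variance explained by the composite: {explained:.0%}")
```

If the first component explains most of the variance, the single composite score is a defensible stand-in for the six indicators in downstream tests; if not, a one-dimensional summary is losing real information.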
- Is it an unknown freshwater tapeworm?
I found this specimen, about 3 cm long and 5 mm wide, along a freshwater brook in the province of Limburg, The Netherlands. I have seen millions of freshwater invertebrates from our country and I consider myself a specialist on annelids (leeches, oligo- and polychaetes), but I don't think it is either of them. The only thing I can think of is some sort of (free-living?) tapeworm. As it was collected with a pond net, it is not certain whether it is truly aquatic. It is flattened and gradually becomes smaller towards the end, or head. Is it decapitated? Does anybody have an idea? Many thanks.
edit: I can't see an oral/anal opening; chaetae are absent.
Somehow my last answer disappeared. The short version is that you need to stain and/or clear this specimen for more anatomical details. Only the microscopic oncospheres/coracidia are free-living stages of cestodes; rarely, adults or metacestodes will be expelled to the exterior. Also look into expelled pentastomids.
- How much time does Pseudomonas sp. take to reach mid-log phase in minimal media?
I am working on bioremediation.
- Does anyone know what is wrong with this gel?
Does anyone know why my ladder runs crooked?
Conditions: TBE, 1% agarose, run at 85 V for about 30 minutes; EtBr was added before running. 1 kb ladder.
Thank you for your attention.
I recently optimized an agarose gel with almost identical specifications.
1. Gel preparation: I prepared a 1% agarose gel. You may refer to Molecular Cloning by Sambrook for selecting the appropriate gel percentage for your DNA based on the number of base pairs. I think your gel was disrupted during the process of solidification. If solidification is non-uniform, the density of the gel differs between areas, which may be why the DNA has taken a twisted path. In addition, you should make sure that the platinum wires of the anode and cathode are present throughout the width of the apparatus. If not, this leads to changes in the direction of DNA migration.
2. Voltage applied: I usually run the gel at 4-5 V/cm. You should calculate from the distance between the electrodes, not the length of the gel. For example, if the distance between the electrodes is 15 cm, I would run the gel at 5 x 15 = 75 V. I run it for around one and a half hours (patience is precious), and I have obtained band separation of the 1 kb DNA marker just as shown in the manual. It's the Fermentas 1 kb DNA marker in my case.
3. Buffer & EtBr: I used 1x TAE buffer. You should add EtBr both into the gel and into the buffer. EtBr usually migrates in the opposite direction to the DNA, so you may add it at the positive end of the apparatus.
4. DNA miniprep: I usually use an Invitrogen miniprep kit, but I have done minipreps using traditional methods, and I am sure that you can achieve superior purity using traditional methods.
Please refer to Molecular Cloning by Sambrook & Russell, Volume 1, Chapter 5, Protocol 1. It has all the information for tweaking an agarose gel.
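The voltage rule of thumb in point 2 is just field strength times electrode separation; as a trivial helper (the 4-5 V/cm figure is the one quoted in the answer above):

```python
def run_voltage(electrode_distance_cm, field_v_per_cm=5.0):
    """Recommended gel run voltage: field strength times the distance BETWEEN
    THE ELECTRODES, not the length of the gel itself."""
    return field_v_per_cm * electrode_distance_cm

print(run_voltage(15))  # 15 cm between electrodes -> prints 75.0
```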
- Are there mysteries of physics that are beyond our intellectual capacity to conceptually understand?
Richard Feynman said, "If you think you understand quantum mechanics, you don't understand quantum mechanics." Is the problem that the human intellect is incapable of understanding some aspects of quantum mechanics, or are we merely missing a model which would make the mysteries of quantum mechanics conceptually understandable? Another quote attributed to Feynman is "Shut up and calculate". This implies that a student is being told to suppress their natural desire for conceptual understanding and instead work on mathematical analysis. Do you feel that some questions are beyond our intellectual capacity to successfully answer? If so, perhaps you should identify these to stimulate discussion. For example, questions about the composition of fundamental particles or string theory strings are often treated as unanswerable. What are your thoughts?
I looked at your attached paper. I read the following:
'When light grazes the sun we find again several forces with the Maxwell analogy, but partly other forces then these of (6.8). Since the rest mass of light rays is zero we must not consider the gravitation force of Newton! Only a mass at speed c must be taken into account, and this will generate a gyrotation force.'
Can you explain what 'the mass at speed c' refers to, since photons have neither mass nor electric charge?
- Organic pest management for snails and aphids in aquaponics?
We are working with an aquaponics system in the Dominican Republic and are having trouble with aphids and snails. Any ideas for organic pest management that will not harm the fish in the system?
- Does anyone know about the "Research Self-Efficacy by Kathleen J. Bieschke" research?
Does anyone know about the "Research Self-Efficacy by Kathleen J. Bieschke" research? I need the details about the factors of research self-efficacy. Can anyone help me? Thank you
- Are there studies about Mayaro virus in Colombia and other Andean countries, in addition to Venezuela?
With the expansion of Aedes-borne diseases, such as Dengue and Chikungunya, there would be also concern about Mayaro and Zika viruses in countries such as Colombia.
This article is the only reference about Mayaro in Colombia:
- What do you think will be the major challenges for today's children when they become adults in a few years?
Many technological changes happening these days will have a societal impact and will provide opportunities and risks. Those of us in science and technology are already following and previewing what the future will be. Young people will have to learn fast and cope with such challenges. What will be the major challenges?
Very relevant question, Fernando.
In all disciplines, the familiar problem of information scarcity (few sources/channels, high associated costs) has been substituted by a never before experienced ever-increasing attention-consuming information abundance. Without adequate tools and practices, the individual knowledge worker will be unable to master the needs for self-development in order to successfully navigate the changing spheres of work.
Following a design science approach, I am suggesting a novel Personal Knowledge Management Concept supported by a Prototype System and an Educational Agenda. Although the innovative approach aims at departing from centralized institutional developments and at strengthening individual sovereignty and personal applications and devices, it is not meant to come at the expense of Organizational Knowledge Management Systems, but rather to serve as a means to foster fruitful co-evolution. More at https://www.researchgate.net/profile/Ulrich_Schmitt2/contributions
- Management of Delusional Disorder?
Any recent update on management of Delusional Disorders?
Dear Andrew Yockey,
Thanks a lot.
- What is the difference between Bulk density and Apparent density?
What is the difference between Bulk density and Apparent density?
No difference. Apparent density is the mass of a material divided by the volume it occupies. Therefore, all the porosity (within grains, between grains, or in the bulk of a monolithic block) is taken into account. The same holds for bulk density, which is a synonym. In contrast, true density or skeletal density (also synonyms) considers only the density of the solid from which the material is made, so the porosity is not taken into account. As a result, bulk density is lower than, or equal to (in the absence of porosity), true density.
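The relationship above fits in two lines of arithmetic: both densities are mass over volume, differing only in whether pore volume is counted, and the total porosity follows from their ratio. A small sketch (the example numbers are invented):

```python
def density(mass_g, volume_cm3):
    """Density = mass / volume; which density you get depends on which volume
    you measure (envelope volume -> bulk/apparent, skeletal volume -> true)."""
    return mass_g / volume_cm3

mass = 10.0               # g
envelope_volume = 8.0     # cm^3, pores included -> bulk (apparent) density
skeletal_volume = 4.0     # cm^3, pores excluded -> true (skeletal) density

bulk = density(mass, envelope_volume)      # 1.25 g/cm^3
true = density(mass, skeletal_volume)      # 2.50 g/cm^3
porosity = 1.0 - bulk / true               # fraction of the envelope that is pores
print(f"porosity: {porosity:.0%}")
```

This also makes the closing statement concrete: since the envelope volume can never be smaller than the skeletal volume, bulk density can never exceed true density.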