ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.
Browse by research topic to find out what others in your field are discussing.
- 2 How can I explain the different effects of overexpression on exogenous and endogenous mRNA?
Recently, I've observed a very strange phenomenon.
When I co-transfected a plasmid expressing protein 'A' with a plasmid expressing reporter mRNAs, which harbor a specific 3' UTR, into HeLa cells, over-expressed protein 'A' repressed expression of the reporter mRNAs. However, over-expressed protein 'A' increased the level of the endogenous mRNAs.
I don't know what happened in the cells, and I don't understand why over-expressed protein 'A' had different effects on the reporter mRNAs and their endogenous counterparts.
Has anyone ever experienced such a case?
This is an interesting question. Let's assume there is also an endogenous form of "protein A" and that it regulates the endogenous mRNAs. Thus, when you transfected in and overexpressed the reporter mRNAs, maybe you altered the stoichiometry of the system such that you have bound up most of the endogenous protein A (as well as the transfected protein A) with the reporter mRNAs. The reporter mRNAs may have increased affinity for protein A and become downregulated/degraded. However, the endogenous mRNAs are no longer under negative regulation by the endogenous protein A, due to their lower affinity for protein A, and you see increased levels of the endogenous mRNA.
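This titration argument can be sketched numerically. The following toy competitive-binding calculation is purely illustrative; all concentrations and Kd values are hypothetical, chosen only to show the direction of the effect.

```python
# Toy competitive-binding sketch of the titration argument above.
# All concentrations and Kd values are HYPOTHETICAL (arbitrary units).

def free_protein(A_tot, pools):
    """Bisection for free protein A, given competing mRNA pools [(total, Kd)]."""
    lo, hi = 0.0, A_tot
    for _ in range(100):
        mid = (lo + hi) / 2.0
        # total protein accounted for at this trial free concentration
        total = mid + sum(m * mid / (kd + mid) for m, kd in pools)
        if total < A_tot:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def bound_fraction(A_free, kd):
    """Fraction of an mRNA pool occupied by protein A."""
    return A_free / (kd + A_free)

A_tot = 1.0
endo = (1.0, 0.5)        # endogenous mRNA: (total, Kd)
reporter = (10.0, 0.05)  # transfected reporter: abundant, higher affinity

without = bound_fraction(free_protein(A_tot, [endo]), endo[1])
with_rep = bound_fraction(free_protein(A_tot, [endo, reporter]), endo[1])
# with_rep << without: the reporter titrates protein A off the endogenous mRNA
```

With these made-up numbers, occupancy of the endogenous mRNA drops from about 50% to about 1% once the reporter is present, i.e. the endogenous pool escapes repression.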
- 3 How to prevent rundown of NMDA currents in cultured hippocampal neurons?
I am doing whole-cell recordings of NMDA-evoked EPSCs in cultured hippocampal neurons, P14-17, holding at -70 mV. I apply a three-second pulse of NMDA (5 µM) every minute while bath-applying other drugs, looking at WT and transfected cells. However, I am regularly getting rundown of my NMDA-evoked currents, which is making it difficult to discriminate real drug or mutant effects. Although I might get a window of baseline in which to bath-apply a drug, the rundown obscures any wash-off. I have some okay data using this protocol; however, it would be ideal to remove the confounding effects of rundown, as the rundown limits how confident I can be in the results. Does anyone have any advice on how to get over rundown? Here are my bathing and internal solution recipes…
Bathing control Krebs solution – TTX 500 nM, bicuculline 50 µM, D-serine 10 µM, magnesium-free, 2.5 mM Ca2+
Internal solution – KCl-based internal, with TEA-Cl, QX-314 (2 mM), Na-GTP (0.5 mM), Mg-ATP (2 mM) and EGTA (5 mM)
My two thoughts were to increase the EGTA concentration and to add phosphocreatine to the internal solution?
Thanks very much,
I would not recommend increasing NMDA; you are right, increasing agonist will only enhance rundown. In the end, what you want is not the maximum response but a clearly measurable (good signal-to-noise) response that is repeatable and sustainable by the cells.
- 3 Is it possible to reactivate the p53 protein?
Some researchers in Italy have discovered a peptide which reactivates the p53 protein. This peptide could be useful against cancer.
I thank Divaker Choubey for his answer. I would like to ask his opinion on the following question I have opened.
- 10 Which factors/indicators are relevant to evaluate a company’s CSR performance?
Please recommend some perspectives that should not be missed when evaluating CSR performance.
My suggestion is the need for a theoretical framework for addressing the issue. As a finance person I would frame the research question to fit within a risk – return modelling and use Stakeholder theory (expanded shareholder framework) as the starting point. From this positioning I can enquire about the obvious – impact on equity returns, signalling theory and communicating about CSR from the firm and even test whether CSR lowers agency costs through an additional component of monitoring.
There are obvious issues about inputs – money spent, compliance with regulatory requirements etc. On the outcome side the metrics are more difficult. Take tree planting as an example. Who owns the trees and who trades the carbon credits – while not trying to be too cynical? The sustainable social and development change from CSR is much more difficult to measure than the number of metres of drainage dug and hopefully maintained. My suggestion in this space is to think of metrics relating to morbidity, education, health, income and ask what changes have occurred. If there is no data then this certainly says something in a signalling theory context.
Enjoy the study; it should be great. I would put in a plug for my book on CSR and the oil industry but think it is not really relevant to your topic.
- New Is it possible to delete an endogenous variable to fit a model in CFA?
I'm dealing with a model-fit problem in a CFA in AMOS. The model has 4 independent variables and 1 dependent variable. The instrument has four questions for each independent variable and three questions for the dependent variable.
I use an SPSS pattern matrix to run the CFA. The pattern matrix shows an item loading on two factors. I deleted this item to improve the fit. Can I delete an item from the pattern matrix?
- New What is the distribution of a Log(Inverse Gamma) random variable?
Suppose I have a random variable X ∼ Inverse Gamma(k, θ), and I define:
I would like to find the probability density function of Y.
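Assuming the cut-off definition is Y = log X (consistent with the question title), a change-of-variables sketch gives:

```latex
f_X(x) = \frac{\theta^k}{\Gamma(k)}\, x^{-k-1} e^{-\theta/x}, \qquad x > 0.

\text{With } Y = \log X,\quad x = e^{y},\quad \left|\tfrac{dx}{dy}\right| = e^{y}:

f_Y(y) = f_X(e^{y})\, e^{y}
       = \frac{\theta^k}{\Gamma(k)}\, e^{-k y}\, \exp\!\left(-\theta e^{-y}\right),
       \qquad y \in \mathbb{R}.
```

Equivalently, since 1/X ∼ Gamma(k, rate θ), Y = −log(1/X) is the negative of a log-gamma random variable.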
- New Identify and define each of the following terms and factors used in the theoretical treatment of chromatography: α, H, k′, N, A, B, C, L?
Identify and define each of the following terms and factors used in the theoretical treatment of chromatography: α, H, k′, N, A, B, C, L?
How would you change these factors (increase/decrease) in order to increase the resolution between two peaks?
- New Does anybody know if there is a reaction between molecular iodine (I2) and the nitronium cation (NO2+)?
I suspect a reaction occurred involving these reactants. It was carried out in MeCN; the same happened in aqueous solution. An organoiodinated substrate reacted, releasing I2, which formed crystals on the glassware. It was performed a while ago... and I don't have access to the reagents to repeat it!
Does anyone have an idea, or a reference, about this?
Many thanks in advance!
- New In LC, what is the effect on the retention time of a peak (increase, decrease, remain the same, become zero) by..?
(a) Decreasing the particle size in packed column HPLC (at constant pressure)?
(b) Increasing the column pressure?
(c) Increasing the polarity of the mobile phase in “reversed-phase” HPLC?
(d) Using a less-polar stationary phase in “normal-phase” HPLC?
(e) Using a more efficient column with twice as many plates with the same length, mobile, and stationary phase compositions, operated at the same flow rate?
- New When can a trace element become a heavy metal that is unsafe for marine organisms and the environment?
Trace elements are needed by living organisms in very small quantities. At what excess levels do we consider these elements to be heavy metals that are unsafe for marine organisms, and how can we remove them under lab conditions, using algae or another method? Please let me know.
- New In GC, what is the effect on the retention time of a peak (increase, decrease, remain the same, become zero) by..?
(a) Raising the column temperature (if head pressure is kept constant)?
(b) Lengthening the column?
(c) Increasing the gas flow rate?
(d) Increasing the volume of stationary phase in the column?
(e) Forming a more volatile derivative of the analyte compound?
- 4 Is quaternionic differential calculus a proper tool for both quantum physics and general relativity?
Two different kinds of multidimensional differential calculus exist that can cope with parameter spaces whose elements are constituted from a scalar and a three-dimensional vector. These sets of differential calculus differ mainly in the choice of the scalar part of the parameters.
Quaternionic differential calculus uses a parameter space that has quaternions as its elements. Thus, its parameter space corresponds to a quaternionic number system. This situation is complicated by the fact that several versions of quaternionic number systems exist, which differ in the ordering of their elements. For example, the ordering can be applied via a Cartesian coordinate system, but a Cartesian coordinate system can be ordered in eight mutually independent ways. A spherical coordinate system usually starts from a selected Cartesian coordinate system. It can then proceed by first ordering the azimuth angle, or it can start by ordering the polar angle. These angles can be ordered up or down. The radius has a natural ordering. Half of these orderings correspond to quaternions that feature a right-handed external vector product; the other half correspond to quaternions that feature a left-handed external vector product. These orderings appear to influence the behavior of elementary objects.
These ordering choices also appear in the parameter spaces of the other set of differential equations, which we will indicate as Maxwell-based differential calculus. The Maxwell-based equations use a spacetime model that has a Minkowski signature. Whereas the real parts of quaternionic parameters can be interpreted as representing pure progression, the Maxwell-based equations interpret the scalar part as coordinate time. In comparison to the quaternionic parameter space, the coordinate time plays the role of quaternionic distance. This difference might explain the differences between quaternionic differential calculus and Maxwell-based differential calculus, but that is not the case: converting the quaternionic differential equations into Maxwell-based equations is only partially possible.
The difference between the two sets is far more subtle than just a change of the scalar part of the parameter space. It shows clearly in the second-order partial differential equations. The Maxwell-based second-order partial differential equation is a wave equation, while the quaternionic second-order partial differential equation does not feature waves among its solutions. Physics is full of waves, so this fact might classify quaternionic differential calculus as unphysical. That conclusion is not justified. The key to the difference is hidden in the Dirac equation for the free electron and free positron. That equation couples two sets of equations. The usual formulation of the Dirac equation uses spinors and Dirac matrices. It is also possible to formulate these equations in quaternionic format, and then it becomes clear that the Dirac equation splits into two coupled first-order partial differential equations. In this way the Dirac equation couples solutions that use different quaternionic parameter spaces. One of these parameter spaces uses right-handed quaternions, and the parameter space of the coupled solution uses left-handed quaternions. For each of the solutions, a second-order partial differential equation can be derived via the coupling of the two solutions. This equation is a wave equation! Still, both solutions separately obey a regular quaternionic second-order partial differential equation, which does not accept waves among its solutions.
The quaternionic second-order partial differential equation accepts other solutions. For example, the homogeneous version of this equation accepts shape-keeping fronts as solutions. Shape-keeping fronts operate in odd numbers of spatial dimensions. Thus, a one-dimensional shape-keeping front can travel along a geodesic in the corresponding field; these objects keep their amplitude. Three-dimensional shape-keeping fronts diminish in amplitude as 1/r with distance r from the trigger point. The wave equation accepts similar shape-keeping solutions, but apart from that, the wave equation also accepts waves as solutions.
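The contrast described above can be made explicit. As a sketch (the quaternionic-nabla notation below is my assumption, not necessarily the author's), writing ∇ = ∂_τ + ∇⃗ for the quaternionic nabla, the two second-order operators are:

```latex
\nabla \nabla^{*} = (\partial_\tau + \vec\nabla)(\partial_\tau - \vec\nabla)
                 = \partial_\tau^{2} + \vec\nabla\cdot\vec\nabla
\quad \text{(elliptic: no real plane-wave solutions)},

\Box = \frac{1}{c^{2}}\,\partial_t^{2} - \vec\nabla\cdot\vec\nabla
\quad \text{(hyperbolic d'Alembertian: admits waves } e^{i(\omega t - \vec k \cdot \vec x)},\ \omega = c\,|\vec k|\text{)}.
```

The sign flip of the Laplacian term comes from the quaternionic product of pure vectors, uv = −u·v + u×v, and is exactly what removes wave solutions from the quaternionic equation.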
Neither set of equations reaches further than second-order differentials. As a consequence, they cannot handle violent disruptions of the continuity of the considered fields. Covering longer ranges will also require higher-order differentials.
In other words, the considered fields will be the same, but the conditions can require a methodology that covers higher-order differentials. GRT covers only slightly higher differentials. It neglects the simpler and more localized causes of disruptions of continuity by point-like artifacts; that should be the task of quantum physics. However, quantum physics also ignores the mechanisms that generate the point-like artifacts which cause the discontinuities of the physical fields. The equations only describe the behavior of the fields, and physicists seem to interpret the discontinuities as the artifacts that cause that behavior.
This outcome shows the need to treat fields independently of the equations that describe their behavior. This is possible by exploiting the fact that Hilbert spaces can store discrete quaternions and quaternionic continuums in the eigenspaces of operators that reside in them. The reverse bra-ket method can create natural parameter spaces from quaternionic number systems and can relate pairs of functions and their parameter spaces with eigenspaces and eigenvectors of corresponding operators that reside in non-separable Hilbert spaces. This also works for separable Hilbert spaces, and the defining functions relate the separable Hilbert space to its non-separable companion.
Hilbert spaces can only cope with number systems that are division rings. This restricts the tolerable number systems to the real numbers, the complex numbers and the quaternions, or their rational subsets. Biquaternions do not form a division ring, so with biquaternions you can only build models that do not use Hilbert spaces. In quantum physics, Hilbert spaces take the role of structured storage spaces; with biquaternions you must use a different kind of structured storage of geometric data.
I tried to formulate the Dirac equation in quaternionic format and encountered some remarkable facts. The quaternionic Dirac equations that represent electrons and positrons use different parameter spaces. The coupling of these equations produces a second order partial differential equation that acts as a wave equation. The natural quaternionic second order partial differential equation does not offer waves as its solutions.
Geometric analysis uses Green's functions in order to cope with point-like artifacts that cause local discontinuities. However, the function does not cause the discontinuity; the discontinuity is the response of the function to the presence of the artifact. Something must put that artifact at that location. (Here I identify the field with a function. The reverse bra-ket method enables that.)
Physics misses a description of the mechanisms that generate point-like artifacts and thus indirectly cause a dynamic reaction of the affected continuum.
- New I would like to get in touch with researchers from LCPC France; are there any in our ResearchGate community?
I would like to talk with researchers from LCPC France about a scientific collaboration in the field of waste valorization and its use in construction materials.
- New If a resolution of 1.75 is desired in separating methylcyclohexane and methylcyclohexene...?
If a resolution of 1.75 is desired in separating methylcyclohexane and methylcyclohexene:
(g) How many plates are required?
(h) How long must the column be if the same packing is used?
(i) What is the retention time for methylcyclohexene on the new column?
I have calculated already:
an average number of plates from the data
standard deviation for the average
average plate height for the column
resolution of methylcyclohexene and methylcyclohexane
Resolution of methylcyclohexene and toluene
Resolution of methylcyclohexane and toluene
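For the plate-count part (g), a common route is the Purnell form of the fundamental resolution equation. A minimal sketch, where the selectivity α, retention factor k₂, and plate height H are hypothetical placeholders (the values you already calculated from your data would go here):

```python
# Sketch of parts (g) and (h): plate count from the Purnell form of the
# fundamental resolution equation,
#   N = 16 * Rs^2 * (alpha/(alpha-1))^2 * ((1+k2)/k2)^2
# alpha, k2 and H below are HYPOTHETICAL placeholders.

def plates_required(Rs, alpha, k2):
    """Plates needed for resolution Rs at selectivity alpha, retention k2."""
    return 16 * Rs**2 * (alpha / (alpha - 1))**2 * ((1 + k2) / k2)**2

Rs = 1.75      # desired resolution (given in the question)
alpha = 1.11   # hypothetical selectivity factor
k2 = 2.0       # hypothetical retention factor of the later-eluting peak
H = 0.01       # hypothetical plate height, cm

N = plates_required(Rs, alpha, k2)   # ~1.1e4 plates for these numbers
L = N * H                            # part (h): required column length, cm
```

Part (i) then follows from the new length: the retention time scales with L at a fixed linear velocity.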
- 2 Can I use CLC Sequence Viewer to translate an alignment file?
I ran ClustalW to align my sequences for a phylogenetic tree. I tried to translate all the aligned sequences together so I can also compare the amino acid sequences, but it seems that CLC can only translate a single DNA sequence. Is there a better way to do this?
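One workaround is to translate the codon-aligned DNA yourself: if the alignment is in frame and gaps come in whole codons, each "---" maps to one gap in the protein alignment. A minimal pure-Python sketch (the short sequences are made-up examples):

```python
# Minimal sketch: translate each sequence of a codon-aligned DNA alignment.
# Assumes the alignment is in frame and gaps come in whole codons ("---").
bases = "TCAG"
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = dict(zip((a + b + c for a in bases for b in bases for c in bases),
                       amino_acids))

def translate_aligned(dna):
    """Translate a gapped, in-frame DNA sequence; '---' becomes '-'."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3].upper()
        protein.append("-" if codon == "---" else codon_table.get(codon, "X"))
    return "".join(protein)

alignment = {"seq1": "ATGGCT---GAA", "seq2": "ATGGCTTCTGAA"}
proteins = {name: translate_aligned(seq) for name, seq in alignment.items()}
# proteins == {"seq1": "MA-E", "seq2": "MASE"}
```

For production use, Biopython's `Seq.translate(gap="-")` offers the same per-sequence translation with full codon-table support.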
- 5 Can anybody tell me which type of biasing circuit is preferable for an inductively degenerated common-source amplifier?
Low Noise Amplifier, IDCS topology
You have only asked half the question: the other half is "What specific functionality are you aiming to achieve by your choice of biasing?" This will set the zero-signal operating current of the active device (is that a BJT or a MOS transistor?) to meet a mixture of objectives. Note that these are mainly zero-signal criteria; the presence of large input signals will, as a matter of necessity, alter the bias conditions dynamically and deeply. Depending on the magnitude of these "blockers" (as they are often identified), the overall power gain of the LNA will change on a cycle-by-cycle basis, causing intermodulation of the "wanted" carrier and leading to many side effects, the chief of which are the generation of spurious harmonics (known as "spurs") and time-varying modulation noise in the output spectrum. Further performance errors are introduced by the modulation of the input match, as the instantaneous working point of the active device is pushed around by the blocker.
But back to the choice of the zero-signal (essentially the same as the small-signal) bias point.
First, the bias current (and the associated voltage at the collector/drain node) will determine the nominal fT of the transistor, upon which we depend to convert the emitter/source branch inductance to a nominally purely resistive input impedance at the base/gate input node of the LNA. The key word 'nominally' is used in all these comments to make it clear that every single variable in the design of real, as opposed to theoretical, circuits is susceptible to large variations from its nominal 'modeling' value. These variations arise in the manufacture of all components in the circuit, as well as being induced by the signal itself. This fact renders the elaborate equations often used to 'analyse' circuit behaviour all but useless in the real world of design.
Second, the bias current will determine the overall power gain of the LNA. This is one of several key considerations in the design, and may permissibly vary over a few dBs. The power gain (and associated matching criteria) often needs to be held as close as possible to its nominal value over wide variations in operating temperature, the extent of which depends entirely on the application of the product.
Third, it will determine the nominal noise figure of the LNA. Here, one must be careful, in practice, not to degrade this variable by poorly conceived biasing arrangements. It's also important to keep in mind that the overall noise figure of a receiver will always be considerably worsened by the noise introduced by later stages, notably by the mixer(s).
There are other factors to be considered in forging a highly optimized LNA for a particular application. Biasing accuracy is important, and there are many ways the required circuitry can be designed. But only when all the essential desiderata have been considered, and the necessary trade-offs have been made, can one turn to the details of biasing.
The point to be made here is that you need to be quite sure of your performance objectives before "worrying" about how your circuits are to be biased. Don't misunderstand the point: biasing is a very important aspect of good design practice. Sometimes it actually is the starting point of a certain design. Only when the OVERALL OBJECTIVE of a design has been FIRMLY ESTABLISHED can this determination be made.
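As a small numerical illustration of the fT dependence mentioned above: for an inductively degenerated common-source stage, the resistive part of the input impedance is approximately ωT·Ls, so the degeneration inductance needed for a 50 Ω match follows directly. The fT value below is a hypothetical example.

```python
import math

# Illustration of the fT-to-input-resistance relation described above:
# for an inductively degenerated CS stage, Re{Zin} ~ omega_T * Ls,
# with omega_T = 2*pi*fT. The fT value is a HYPOTHETICAL example; in
# practice it is set by the chosen bias current.

f_T = 60e9        # transit frequency at the chosen bias point, Hz
R_target = 50.0   # desired resistive input impedance, ohms

L_s = R_target / (2 * math.pi * f_T)  # required source inductance, henries
# ~0.13 nH for these numbers; a bias-induced shift in fT therefore
# directly detunes the input match.
```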
- New Is anybody following the work of Deming?
Is there a group of people interested in applying Deming's principles in the current state of management?
- New Is it possible to synthesize EGTA?
EGTA: chelating agent
- New Translation of Merton's Normative Structure of Science?
Can anyone tell me when Merton's 'Normative Structure of Science' was first translated into German?
- New Is there still a gulf between scientists and ‘intellectuals’? Which of them have more influence on people’s worldviews?
Traditional intellectuals (thinkers, writers, political and social commentators, and artists) have historically played a major role in the diffusion of the ideas that shape the ways people see the world and their own society and lives.
In the prominent book The Third Culture (1995), John Brockman claimed that these kinds of intellectuals have “become increasingly marginalized”. They are being replaced by scientists who, “through their work and expository writing”, communicate directly with the general public. These “third-culture intellectuals” would be represented by the likes of Paul Davies, Martin Rees, Richard Dawkins, Steve Jones, Daniel C. Dennett, Brian Goodwin, W. Daniel Hillis, Nicholas Humphrey and many others.
The culture of traditional intellectuals, says Brockman, “dismisses science”, is “often nonempirical”, uses “its own jargon”, and “is chiefly characterized by comment on comments, the swelling spiral of commentary eventually reaching the point where the real world gets lost”.
The idea of a Third Culture has its origin in C.P. Snow’s influential “Two Cultures” essay (1959), in which the British scientist and novelist deplored the “mutual incomprehension” (sometimes “hostility”) between science and the arts. Scientists shared a “culture”, with common attitudes, standards, approaches, assumptions and patterns of behavior, regardless of their political, religious, social-class and even disciplinary differences. At the opposite pole, attitudes were more diverse, but the total incomprehension gave an “unscientific flavor”, often almost “antiscientific”, to the whole “traditional culture”. Moreover, scientists largely overlooked traditional literature, which they perceived as irrelevant to their interests, while most intellectuals were unable to describe something as basic as the Second Law of Thermodynamics.
Snow saw such disconnection and polarization as a “sheer loss” to society and stressed the need to build bridges between the two sides. In a second essay, published in 1963, he suggested that the gap would be closed by a “Third Culture” that would eventually emerge. In his version of this new culture, intellectuals would communicate with scientists.
Not long ago, a column in Scientific American stated that Snow’s vision “has gone unrealized” (see Krauss, Lawrence M.: “An Update on C.P. Snow’s ‘Two Cultures’”, August 17, 2009).
What is your opinion? Is there such a cultural divide? Are intellectuals scientifically illiterate? Do scientists ignore the basics of the humanities? Which of them have more influence on the public? What kind of Third Culture, if any, is emerging?
- 13 Anyone familiar with maximizing entropy vs. minimizing energy?
We know that when solving for the dynamics of physical systems, the Lagrangian of the system is modelled as an energy-minimizing objective function. I understand that is so because all systems have the tendency to converge to an equilibrium state of lowest energy.
However, we also know from the laws of thermodynamics that with all processes, entropy increases.
I was curious to know which is more fundamental – the choice of a minimum-energy state or the choice of a state transition that maximizes entropy?
Dear Gujrati, you are perfectly right, since the godfathers of this term claimed that no friction and no dissipation of energy should be considered in an isentropic process: dS = dS_ex + dS_in = 0, where not only dS_in = 0 but also dS_ex = 0. That means it is an adiabatic reversible process. For me it is unnecessary terminology. Haase, in one place in his book, says that an adiabatic process looks the same if the variation or departure from the equilibrium state is very small; even he does not seem entirely convinced of what that means!
Actually, he defines two different reactions or processes: 1) an adiabatic-isochoric process, where U and V stay constant; and 2) an adiabatic-isobaric process, where H and P stay constant. He then defines the entropy as S(U, V, Phi) or S(H, P, Phi), where Phi is the degree of advancement of the reaction or process. He writes the Maclaurin expansion of the entropy with respect to Phi in the vicinity of the equilibrium state, denoted Phi-bar, and shows that the first derivative of the entropy is equal to A/T, where A is the affinity of the reaction, which is zero at equilibrium. Then, as a first-order approximation, the entropy change becomes zero. That is the story of an isentropic or adiabatic process where the reaction really takes place in the proximity of equilibrium but the entropy is still invariant! He calls this adiabatic-isochoric or adiabatic-isobaric depending on the constraints on the system, since in both cases the first derivative of the entropy is A/T. Best regards.
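The expansion described above can be written compactly (using ξ for Haase's advancement variable Phi):

```latex
S(U, V, \xi) \;=\; S(U, V, \bar\xi)
  \;+\; \left.\frac{\partial S}{\partial \xi}\right|_{\bar\xi} (\xi - \bar\xi)
  \;+\; O\!\left((\xi - \bar\xi)^{2}\right),
\qquad
\left(\frac{\partial S}{\partial \xi}\right)_{U,V} = \frac{A}{T}.
```

Since the affinity vanishes at equilibrium, A(ξ̄) = 0, the first-order term drops out and ΔS = O((ξ − ξ̄)²) ≈ 0 close to equilibrium, which is the 'isentropic' result being discussed.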
- 2 What tools, methods and techniques can be used to control human resources and to quantify the costs they generate?
Please note that I'm interested in experiences from other countries.
Thanks for your presentation. I wonder what tools you use to monitor these operational costs and to project solutions with the aim of reducing them. I wonder if you use the Balanced Scorecard.
Thank you very much for your collaboration.
- New Do you use a database for monitoring/measuring quality indicators at your ICU?
Do you use any electronic database for monitoring/measuring quality indicators at your ICU? Which indicators? What type of database? Can you suggest a related article?
- 1 What books about teaching students to construct phase–parametric portraits can you recommend?
I have read some works by Faina Berezovskaya et al. I'd like to understand their methods more deeply, for example, methods of bifurcation theory and the construction of phase–parametric portraits. I think there are some textbooks or manuals in English or in Russian.
What books about teaching students to construct phase–parametric portraits can you recommend?
- 3 Do high-intensity workouts give more benefit than endurance training?
According to the latest research, I tend to think that far more benefits are reached in a shorter time by high-intensity training in comparison with traditional cardio activities. If I start preferring Tabata and HIIT protocols to endurance training, will I get more health benefits? Are there any less-known processes that traumatize the skeleton and later cause serious problems (postural, maladaptation, too demanding for cardiac output...)?
The following debate between Stuart Biddle and Alan Batterham is also worth reading. Not only physiological but also psychological processes need to be considered.
- 1 What is the reason for white signals in a northern dot-blot detected with SAP and BCIP/NBT?
Hi! I have problems with detection of northern dot-blots with SAP and BCIP/NBT. In my RNA dots I repeatedly get white signals instead of blue. PCR control dots are blue. The probe is a biotin-labelled PCR product. What could be the reason for the white signals? I'll be grateful for any advice and suggestions!
- New How can mainstream anthropologists and archaeologists in the United States persuade our colleagues of the value of rock art?
Do you agree that the full potential of rock art studies has not been embraced in the US?
Do you feel that rock art is still seen by many as a marginal subject for scholarly study?