ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.
Browse by research topic to find out what others in your field are discussing.
- In support vector machines (SVM), how can we adjust the parameter C? Why is this parameter used? When some patterns are misclassified, how does C account for them, and is C equivalent to epsilon?
Finding the maximum-margin separator by minimizing ||w||/2 is well understood, although finding the support vectors is itself an optimization problem. Problems arise when some patterns are misclassified and we want to account for them. Suppose two patterns of the negative class are misclassified; we measure each one's distance from the margin boundary and store those distances as slack values (the epsilons). Increasing a pattern's distance from the boundary increases its epsilon, and decreasing the distance shrinks the epsilon. If that is the picture, how does C come into play?
From the previous answers it is clear that parameters such as C, gamma, and epsilon play a very important role in RBF (Gaussian) kernel SVM classification. The appropriate values for these parameters depend on, and vary with, the data. Can someone please explain how to find an appropriate range for tuning these parameters based on the data, and in a time-effective manner?
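A common, time-effective approach is a coarse logarithmic grid search over C and gamma with cross-validation, then a finer search around the best cell. A minimal sketch with scikit-learn, assuming an RBF-kernel SVC and a synthetic dataset (the grid values are illustrative starting points, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative data; replace with your own features and labels.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Coarse logarithmic grid: C controls margin softness, gamma the RBF width.
param_grid = {
    "C": [0.01, 0.1, 1, 10, 100],
    "gamma": [0.001, 0.01, 0.1, 1],
}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

A second, finer grid centered on the best coarse cell usually converges quickly; randomized search scales better when more parameters (such as epsilon for SVR) are tuned at once.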
- Do objects move in relation to space-time in GR?
Generally sympathetic to Carlo Rovelli's pronounced "relationalism" regarding space and time, I still find some of what he says about this puzzling. This question seeks clarification. He argues, in his paper "Localization in QFT" (in Cao, ed., Conceptual Foundations of Quantum Field Theory, 1999, p. 215), that "General relativity describes the relative motion of dynamical entities (fields, fluids, particles, planets, stars, galaxies) in relation to one another." This seems true enough. But it is supported by the idea that space-time itself in GR is a "dynamical object," which curves or changes in relation to the mass and energy present. That does not seem a reason to hold that objects do not move in relation to space-time in GR. Instead, it seems that the gravitational field (which determines space-time) is one of the things in relation to which objects move, and consequently that objects do move in relation to space-time in GR. In spite of that, Rovelli can be found to say, on the same page, that "Objects do not move with respect to space-time, nor with respect to anything external: they move in relation to one another." Is it inconsistent to think that if objects move in relation to one another, then they move in relation to the encompassing space-time?
> Mr. Low,
You should be aware that you are engaged with an expert in mathematical logic.
>1/ Robinson and nonstandard analysis. Roughly speaking, he used model theoretic arguments to show that given a model of a first order axiomatization of the reals, there exist other models (call them the hyper-reals) which extend the first model and contain positive quantities smaller than all the positive reals (of the first model), others larger than all the positive reals (of the first model). Then one can use the transfer principle to draw conclusions about the first model of the reals by making arguments in the hyper-reals. One can then use this to draw conclusions about the standard reals (whose axiomatization is second-order, so the categoricity of the reals as a second-order concept, and the consequent impossibility of infinitesimals in this context is not an issue). One can then think of the derivative as the (standard part of) a quotient, and the integral as the (standard part of) a hyperfinite sum, and do lots of real analysis without having to deal explicitly with limits, since all that has been built into the framework.
>Personally, I like the ultrafilter construction, which gives one a concrete handle on the idea of how any particular infinitesimal is essentially a sequence which tends to zero at a particular rate: the smaller the infinitesimal, the faster the convergence; similarly for infinite numbers. (Of course, the fact that one cannot exhibit an ultrafilter explicitly means that this feeling of comfort is largely an illusion, but it's a comforting illusion.)
This is not what a professional mathematician should post. The issue here is the basic idea behind his construction. He extended what mathematicians did to pass from the rational numbers to the real numbers. Yes, sequences of rational numbers gave the real numbers, a construction later refined by the Dedekind cut. Robinson created infinitesimals and infinite numbers by considering infinite sequences of real numbers, some of which become real numbers and some of which become infinitesimals.
The ultrafilter construction is a model-theoretic triviality that first-year undergraduate mathematics students in advanced countries learn. Do you know why he used it? It was not his construction, by the way.
You failed to connect the ultrafilter to the sequences of real numbers, and you did not understand the whole construction. Robinson used the ultrapower construction on sequences of real numbers: applying it to the reals, he built a theory of infinite sequences of real numbers. What did he do then? He used equivalence in the first-order theory of the real numbers to quotient the ultrapower. Why did he do it? Because, just as with sequences of rational numbers, many sequences of real numbers are equivalent. The quotient then gives a theory of real numbers together with infinitesimals.
So it is not the ultrafilter that was important; the ultrafilter was used to define the ultraproduct and the ultrapower.
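The quotient described above can be written compactly; a sketch, with U a non-principal ultrafilter on the natural numbers:

```latex
% Hyperreals as an ultrapower of the reals
{}^{*}\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}} / U,
\qquad (a_n) \sim (b_n) \iff \{\, n : a_n = b_n \,\} \in U .
% A sequence tending to zero, such as (1/n), represents a positive
% infinitesimal: it is smaller than every standard positive real.
```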
So it does not help you to consult Wikipedia, which seems to create more and more simple-minded "scientists" who think they understand. Most of the people who contribute there are idiots with big egos; advanced mathematicians do not bother.
You should have visited the Warwick Mathematics Institute. It used to have excellent mathematicians and was one of the top mathematics departments in the world. Of course, after the Thatcher revolution Britain lost her real academics, as The Manchester Guardian warned about a decade ago.
Do you know Prof. Takeuti? From him I learned nonstandard analysis.
- Any notes on stepped foundations?
I have been working on a project on stress distribution in stepped footings: how to make them economical, and how to design the number of steps according to the bearing capacity of the soil.
- How large is the calcium concentration in skeletal muscle under normal condition? How about that shortly after slaughter?
As mentioned in the title. Could anybody help to provide some references about these questions?
I assume that you are interested in muscle fiber free cytosolic calcium concentration and not skeletal muscle calcium concentration. If you are interested in the latter, be aware that it varies greatly within skeletal muscle, i.e., from 10-300 nM levels in the muscle fiber cytosol at rest to mM levels in sarcoplasmic reticulum, mitochondria, and extracellular space. Following slaughter, one would expect the free cytosolic calcium level to increase as the level of ATP decreases; this in turn activates a series of calcium-sensitive degradative enzymes. However, I wouldn't expect the overall skeletal muscle calcium concentration to increase following death.
- Can the forward reaction rate of a chemical reaction be calculated from the Gibbs energies of the substrates?
The idea is to split the equation for an equilibrated chemical reaction into forward and backward parts: dG = -T R ln(kf/kb) becomes dGp - dGs = T R ln(kb) - T R ln(kf), where dG is the Gibbs energy of reaction, dGs the Gibbs energy of the substrates, dGp the Gibbs energy of the products, T the temperature, R the gas constant, kf the forward rate coefficient, and kb the backward rate coefficient.
Looking at this form I suggest the following equation:
B + dGs = T R ln(kf), where B is some constant (possibly dependent on the type of chemical bond).
Is this equation correct? What is the meaning of the parameter B? For which chemical reactions does B have the same value?
Yes, thank you. It is starting to become clear to me.
B is not constant: it depends on the transition state of the reaction, which together with the substrates is "the driving force of the forward kinetics" and together with the products "the driving force of the backward kinetics". As a result the reaction rate coefficient is as in transition-state theory: kf = (kB*T)/h * exp((Gs-Gt)/(R*T)), where Gs is the Gibbs energy of the substrates, Gt the Gibbs energy of the transition state, kB the Boltzmann constant, T the temperature, h the Planck constant, and R the gas constant.
The enzyme creates a different chemical path that is faster than the original reaction because of the lower Gibbs energies of its transition states. The equilibrium of both paths (the original reaction and the enzymatic reaction) will be the same because of the principle of detailed balance.
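The transition-state (Eyring) expression above is easy to evaluate numerically. A minimal sketch, assuming an activation Gibbs energy Gt - Gs of 80 kJ/mol at 298 K (an illustrative value, not taken from the discussion):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_act, temperature):
    """Forward rate coefficient kf = (kB*T/h) * exp(-(Gt - Gs)/(R*T))."""
    prefactor = k_B * temperature / h
    return prefactor * math.exp(-delta_g_act / (R * temperature))

# Illustrative barrier of 80 kJ/mol at room temperature.
kf = eyring_rate(80e3, 298.0)
print(f"kf = {kf:.3g} 1/s")
```

Lowering Gt, which is what an enzyme does, raises kf exponentially, consistent with the enzymatic path being faster.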
- Are the right to social inclusion and the right to work included in the prescriptions of the labor law?
It is necessary to found a new branch of law to ensure these human rights.
Is it possible to conceive of social inclusion without access to work?
- Synergisms between plant pathogens and disease occurrence?
Plant diseases are often thought to be caused by one species or even by a specific strain. Microbes in nature however mostly occur as part of complex communities. What is your opinion about interspecies and/or interkingdom interactions and plant disease occurrence?
There are definitely synergistic effects when it comes to plant viruses. There are cases where a plant infected with just one virus will not express any symptoms until a second virus also infects it. I know of such interactions between otherwise 'latent' viruses in potato and blueberry.
- Could you give me an explanation about the historical background of 'peer pressure' theory?
The first theorist, a definition, how the theory developed, and its dimensions/components.
- Does anyone have a WMC verbal and visual test?
working memory capacity
- Which is the most efficient algorithm/package to solve delay differential equations?
I wish to know the methods of solving such a system of equations, both those that require computer algebra and those that do not. Is there a method that is both theoretically and computationally efficient?
System R is friendly to me :)
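For a quick start without any special package, a delay differential equation can be integrated with a fixed-step method that looks up the delayed state from the stored history. A minimal sketch for dy/dt = -y(t - tau) with constant history y = 1 for t <= 0 (the equation and step size are illustrative; dedicated DDE solvers handle stiffness and discontinuity propagation far better):

```python
def solve_dde(tau=1.0, dt=0.01, t_end=10.0):
    """Forward-Euler integration of dy/dt = -y(t - tau)."""
    lag = int(round(tau / dt))       # number of steps in one delay
    n_steps = int(round(t_end / dt))
    y = [1.0]                        # y(0); history y(t) = 1 for t <= 0
    for n in range(n_steps):
        # Delayed value: from the computed solution, or from the history.
        y_delayed = y[n - lag] if n >= lag else 1.0
        y.append(y[n] - dt * y_delayed)
    return y

y = solve_dde()
print(y[100], y[-1])  # y(1) and y(10)
```

Production solvers (for example MATLAB's dde23, or small Python packages built on scipy) follow the same idea with adaptive stepping and interpolation of the stored history.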
- What is the physical meaning of negative kinetic energy as we often consider in QM?
The simplest application of the SE is a particle incident on a step barrier. In that case we also consider regions where the particle's kinetic energy would be negative. What does it mean to have negative kinetic energy?
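One way to see it: inside the classically forbidden region (E < V) the wavefunction does not vanish but decays as exp(-kappa x), with kappa = sqrt(2m(V - E))/hbar. The "negative kinetic energy" never appears as a measured quantity; it shows up only as this evanescent decay. A small sketch computing the 1/e penetration depth for an electron with V - E = 1 eV (illustrative numbers):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
eV   = 1.602176634e-19   # joules per electron volt

def penetration_depth(barrier_excess, mass=m_e):
    """1/kappa, where psi ~ exp(-kappa x) and kappa = sqrt(2m(V-E))/hbar."""
    kappa = math.sqrt(2.0 * mass * barrier_excess) / hbar
    return 1.0 / kappa

depth = penetration_depth(1.0 * eV)
print(f"penetration depth ~ {depth * 1e9:.2f} nm")
```

The depth comes out on the order of a few angstroms, which is why tunneling matters only across very thin barriers.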
- Which normality test is more appropriate on residuals with sample size 1000? Is it the Jarque-Bera test or the Shapiro Wilk Test?
Which provides the best results?
A Q-Q (quantile-quantile) plot can help you see deviations from symmetry.
But remember the Central Limit Theorem (CLT), which states that for any variable with finite variance (even discrete ones, such as Binomial or Poisson distributions), as n tends to infinity (in practice n > 30) the distribution of the sample mean tends to a Normal (Gaussian) distribution.
So there is often no need to test normality at this sample size; you can cite the CLT.
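If you do want to run both tests on the residuals, SciPy exposes them directly; at n = 1000 both are applicable (SciPy's documentation cautions about Shapiro-Wilk p-values only above roughly n = 5000). A minimal sketch on simulated residuals (the data are illustrative stand-ins for your model's residuals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.normal(loc=0.0, scale=1.0, size=1000)  # stand-in residuals

jb_stat, jb_p = stats.jarque_bera(residuals)
sw_stat, sw_p = stats.shapiro(residuals)

print(f"Jarque-Bera:  stat={jb_stat:.3f}, p={jb_p:.3f}")
print(f"Shapiro-Wilk: stat={sw_stat:.3f}, p={sw_p:.3f}")
```

Note that with n = 1000 both tests become very sensitive: they can reject normality for deviations too small to matter in practice, which is another reason to pair them with a Q-Q plot.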
Fischer, Hans (2011), A History of the Central Limit Theorem: From Classical to Modern Probability Theory, Sources and Studies in the History of Mathematics and Physical Sciences, New York: Springer, doi:10.1007/978-0-387-87857-7, ISBN 978-0-387-87856-0, MR 2743162, Zbl 1226.60004 (Chapter 2: The Central Limit Theorem from Laplace to Cauchy: Changes in Stochastic Objectives and in Analytical Methods, Chapter 5.2: The Central Limit Theorem in the Twenties)
Zabell, S.L. (2005) Symmetry and its Discontents: Essays on the History of Inductive Probability, Cambridge University Press. ISBN 0-521-44470-5. (pp. 199 ff.)
- Are there any reliable techniques for processing remotely sensed images acquired at low sun elevation angles?
I study plant cover dynamics and work with Landsat 1-8 images, always preferring to skip images whose metadata report low sun elevation angles. But some images are critical for me because of their acquisition dates: I need to analyse them because no analogous images were acquired under more appropriate environmental conditions. However, all the techniques I know warn against low sun elevation because of the significant influence of shadow on land cover reflectances.
Can anybody tell if I can apply any specific methods for such problematic images to obtain appropriate data on plant cover?
I am assuming that you have certain hyperspectral images which you want to analyze, but they are corrupted by shadows cast on the object of interest. Links to some papers that you may find helpful are given below. One of the papers is about image in-painting for cloud removal and employs a special class of wavelets called bandlets. Although this may not be specific to your problem, I think wavelet-based techniques may be helpful in shadow removal.
- What are the best methods for resource allocation in cloud computing?
What are the latest methods and techniques for resource allocation in cloud computing?
I think you can't really say which method is "best"; that depends entirely on your objectives. The type of task you are dealing with matters too: if the tasks are data-intensive, you have to consider QoS metrics, whereas for compute-optimized or memory-optimized tasks the story is different. Meanwhile, cost plays a key role as well: users always look for the lowest monetary cost for executing their tasks, while service providers want to maximize profit.
Hence, based on the objectives you define, you may improve the algorithms in terms of QoS metrics such as delay, efficiency, and reliability, or you may focus on SLA systems or bidding models. But I don't think we can flatly say that, e.g., SLA-based systems are better than agent-based systems, or that bidding models are better than dynamic resource allocation models.
- What is the mechanism by which a helper phage (hyperphage) uses the pIII minor coat protein gene located on the phagemid for packaging?
How does the helper phage detect this vital sequence on the vector?
- Are there any metals structurally similar to silver, that have a similar toxic effect on bacteria?
I am investigating the antimicrobial effects of silver, and am struggling to find information about any chemically similar metals that may also exert similar toxic effects. If you could point me in the direction of any papers that may cover this, or even just a search term, I would be very grateful.
Copper is chemically similar to silver (they are in the same group, group 11, of the periodic table) and has antimicrobial activity.
- What is the date of publication?
What is the date of publication, please?
- How can I plot alpha vs time graph in a multiphase solver of openFOAM?
Hi to all,
I am using interPhaseChangeFoam to simulate phase change inside a nozzle flow in OpenFOAM. Plotting alpha vs. time using foamCalcEx with volIntegrate is one way, and I have done that. But I would like to know: does anyone know how to plot this graph directly inside ParaView when I view the case?
Thanks in advance.
Cut and paste into Gnumeric.
It produces far better quality (publication-quality!) plots than Excel.
In the long run, gnuplot is the way to go.
Gnumeric probably uses gnuplot (present in most open source systems with some kind of visualization).
I used gnuplot in my paper below (second link), and will in the improved draft of the first one.
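If you export the integrated alpha values to a plain text file (e.g. via foamCalcEx), a few lines of Python are enough to pull out the time and alpha columns for any plotting tool. A sketch assuming a simple two-column, '#'-commented file; the file name and layout are assumptions, so adjust to whatever your post-processing actually writes:

```python
def read_time_series(path):
    """Parse a whitespace-separated 'time value' file, skipping # comments."""
    times, values = [], []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            t, v = line.split()[:2]   # keep the first two columns
            times.append(float(t))
            values.append(float(v))
    return times, values

# Hypothetical output file from the volume integration step:
# times, alphas = read_time_series("alphaIntegral.dat")
```

Inside ParaView itself, applying the "Plot Data Over Time" filter to the integrated quantity gives the same curve interactively, without leaving the GUI.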
- Is Chalmers' so-called "hard problem" in consciousness real?
In his 2014 book "Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts" Stanislas Dehaene wrote "Chalmers, a philosopher of the University of Arizona, is famous for introducing a distinction between the easy and the hard problems. The easy problem of consciousness, he argues, consists in explaining the many functions of the brain: how do we recognize a face, a word, or a landscape? How do we extract information from the senses and use it to guide our behavior? How do we generate sentences to describe what we feel?
“Although all these questions are associated with consciousness,” Chalmers argues, “they all concern the objective mechanisms of the cognitive system, and consequently, we have every reason to expect that continued work in cognitive psychology and neuroscience will answer them. By contrast the hard problem is the “question of how physical processes in the brain give rise to subjective experience … the way things feel for the subject. When we see, for example, we experience visual sensations, such as that of vivid blue. Or think of the ineffable sound of a distant oboe, the agony of an intense pain, the sparkle of happiness or the meditative quality of a moment lost in thought … It is these phenomena that pose the real mystery of the mind”."
Stanislas Dehaene's opinion is "that Chalmers swapped the labels: it is the “easy” problem that is hard, while the “hard” problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’ “hard problem” will evaporate".
Personally, I agree with Stanislas Dehaene's opinion.
Toward a Standard/Working Model of Consciousness
Tausif: ".... mathematical terms that people always associate with accurate physical location that is precisely defined (x=0, y=0, z=0, in other word the mathematical origin of a space that itself has no physical dimensions)."
The 0,0,0 coordinate is precisely defined as the neuronal coordinate of origin in the theoretical model that I propose as the 3D topological structure of our brain's retinoid space. This is a precise biophysical location in the retinoid model. Moreover, retinoid space does have physical dimensions, otherwise it wouldn't exist.
- What is the importance of the PIC value in SSR marker studies?
Please suggest an easier method too.
Thank you, Sharmaji.
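For reference, the polymorphism information content (PIC) of a marker is computed directly from its allele frequencies (Botstein et al., 1980): PIC = 1 - sum(p_i^2) - sum over i<j of 2*p_i^2*p_j^2. A small sketch; the example frequencies are illustrative:

```python
def pic(freqs):
    """PIC from a locus's allele frequencies (Botstein et al., 1980)."""
    assert abs(sum(freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    het = 1.0 - sum(p * p for p in freqs)            # expected heterozygosity
    correction = sum(                                # pairwise 2*p_i^2*p_j^2 term
        2.0 * freqs[i] ** 2 * freqs[j] ** 2
        for i in range(len(freqs))
        for j in range(i + 1, len(freqs))
    )
    return het - correction

print(pic([0.5, 0.5]))   # 0.375 for a two-allele marker at equal frequency
```

More alleles at more even frequencies give a higher PIC, which is what makes the value a quick ranking of marker informativeness in an SSR panel.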
- Are there theoretical models or scientific hypotheses explaining why a human population would produce more male children than female children?
I am currently examining the socio-demographic data from a small indigenous population, and I see that there are about 5% more men than women, even in younger age groups (e.g. 18-25, 26-35). I wish to explain this phenomenon with a theory.
- Is a normality test needed for small data sets (n = 2) before doing Student's t test?
For example, I have measured enzyme production in a bacterium under normal and stress conditions. Do I need to do a normality test before running Student's t test?
There are small datasets and very small datasets... However, the t-test is robust enough even for very small datasets (de Winter, 2013).
de Winter, J. (2013). Using the Student's t-test with extremely small sample sizes. Practical Assessment, Research & Evaluation, 18(10), 1-12.
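Note also that with only a handful of replicates, a normality test such as Shapiro-Wilk has essentially no power (and SciPy requires at least 3 observations to run it at all), so applying the t-test directly, as de Winter (2013) supports, is the usual practice. A minimal sketch with SciPy; the measurements are illustrative:

```python
from scipy import stats

# Illustrative triplicate measurements of enzyme production.
control = [10.2, 9.8, 10.5]
stress  = [12.9, 13.4, 12.6]

t_stat, p_value = stats.ttest_ind(control, stress)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```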
- Does anyone have a fortran code for melting with natural convection by using lattice Boltzmann method?
I'm trying to simulate melting with natural convection.
Hmm. Try one of the symbolic systems.
Maxima and Octave are free...
- Why do some bacteria grow on TSA (solid) medium but not in TSB (liquid) medium?
We have grown some plant endophytic bacteria on TSA medium. Now we are trying to grow them in liquid medium (TSB) for genomic DNA isolation; some are growing and some are not. What can be the reason?
Try checking the redox potential in the TSB. How much sediment do you see? Did you incubate with shaking?
- How can I create periodic textures on flexible substrates like polyimide (PI) and PET?
I would like to create a periodic texture structure on flexible substrates such as polyimide (PI) and polyethylene terephthalate (PET) for multiple reflections of light. Can anyone suggest how to create it?
- Why does the color of a Cd(1-x)ZnxS thin film deposited using CBD not change to yellow as expected?
I deposited the film using CBD, but its color does not change to yellow as expected. I changed the ratio of the chemicals, but I got only a white color, not the expected change from white to yellow.
- Could anyone provide some reasons why after a ligation-transformation experiment I get colonies on ampicillin resistant plates that are not correct?
I am doing a ligation with an 8 kb vector and inserts that are 727 and 767 bp. The vector is ampicillin resistant. After ligation I transformed into lab-made competent cells. I got 1 large colony with visible satellite colonies, which to me indicated that the large colony was expressing the plasmid and degrading the surrounding ampicillin. As negative controls I transformed the cut vector alone and the cut vector treated with ligase, and got no colonies. As a positive control I transformed the uncut plasmid the vector originates from and got many colonies. I inoculated the positive control and the one colony in 100 ug/ml ampicillin and grew them overnight. I miniprepped the samples and did a restriction digest. The positive control yielded the two desired bands and one unexpected band. The colony from the ligation yielded two bright bands of the incorrect size and one weak band that could be correct (the original vector size). Is it possible that along with the colony I picked some satellite colonies, and they took over once the bacteria with the proper plasmid had degraded the ampicillin? To make sure it wasn't degraded ampicillin that allowed the colony to grow in the first place, I re-streaked the colony on a fresh LB amp plate and got a lot of growth, along with large, well-isolated colonies! How can there be growth without the competent cells taking up the plasmid with the resistance? Or is it there, but non-resistant bacteria took over after the ampicillin was degraded? Thanks in advance.
Did you go straight from glycerol stock to overnight broth culture for the original plasmid prep (template DNA), or did you streak a plate from glycerol stock and then use a colony for the original plasmid prep? What plasmid prep kit are you using? We always run T4 DNA ligase reactions for at least 30 min and then leave them untransformed overnight just in case. But your overnight ligation didn't work either? It sounds like you are covering all your bases. Let me know how you prepped your template DNA; hopefully we can figure this out :)
- Can quantum entanglement be explained in terms of Einstein-Rosen bridge?
What do you think about quantum entanglement?
What is the distinction between mathematical entanglement and physical entanglement?
What is the whole point you are making? It has been shown that QM, through the uncertainty principle, predicts that there are no trajectories. Then what you call physical entanglement, which must be experimentally detected entanglement, would have to come from analyzing trajectories. There is no such thing according to the theory of QM. Are you out of your mind?!
- Does anyone have a solution?
I am facing a problem solving the model and need a solution urgently. Please suggest how I can solve it.
Which software package are you using?