Science topic

# Classics - Science topic

Explore the latest questions and answers in Classics, and find Classics experts.

Questions related to Classics

I assume it is true that physics can generate statistical (stochastic) integration methods that replace the classic finite-difference (FDM) techniques used as the basis for Simpson's, trapezoidal, and similar integration rules.

Additionally, there are one-step physical statistical integration formulas for any arbitrary number of free nodes, n = 3, 5, 7, etc., whereas Simpson's rule is limited to n = 3 and its multiples in the repeated (composite) steps.
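As a minimal illustration of the contrast, the sketch below (assuming plain Monte Carlo as one flavor of "statistical integration") compares composite Simpson's rule, whose node count is fixed by the rule, with a Monte Carlo estimate that accepts any number of nodes:

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even (n + 1 nodes)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

def monte_carlo(f, a, b, n, rng):
    """Plain Monte Carlo estimate with n uniform random nodes -- any n works."""
    x = rng.uniform(a, b, n)
    return (b - a) * f(x).mean()

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x**2)          # no elementary antiderivative
exact = 0.7468241328124271           # erf-based value of the integral on [0, 1]

print(simpson(f, 0.0, 1.0, 10))      # node count dictated by the rule
print(monte_carlo(f, 0.0, 1.0, 7, rng))  # works for an arbitrary node count, e.g. n = 7
```

The Monte Carlo error shrinks only as 1/√n, so the flexibility in node count is traded against accuracy per node.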

Consider the quantum field theory (QFT) operator (an operator for each space-time point) that the field amplitude becomes when making the transition from classical field quantities to QFT operators. We will call this the field-amplitude operator. The type of field considered is one in which the classical field amplitude evaluated at a given space-time point is a complex number instead of a real number. In the QFT description, the field amplitude is not an observable and the field-amplitude operator is not Hermitian. Can we still say that an eigenstate of this operator has a definite value of field amplitude (equal to the eigenvalue) even when the field amplitude is not an observable and the eigenvalue is not a real number?

I have a protein that seems to relocate from the cell surface to an intracellular compartment when phosphorylated. When not phosphorylated, the same protein appears to be associated with the cell membrane, i.e., the outermost plasma membrane.

A classical membrane fractionation western blot experiment shows that this protein is always associated with the membrane pellet and not with the cytosolic fraction. The fraction markers are working perfectly.

I think that fusion to the outer membrane results in filopodial extensions that are enriched with my protein.

The attached image shows the distribution of a GFP fusion of my protein in 2 different conditions.

What should I do to try to test or demonstrate my idea that this protein is associated with membrane regions that can endocytose and become intracellular or can fuse with the surface membrane?

There is no transmembrane region.

In the elementary quantum mechanics (QM) of a single particle responding to a given environment, the state of the particle can be specified by specifying a set of commuting (i.e., simultaneously knowable) observables. Examples of observables include energy and angular momentum. Although not simultaneously knowable, other examples include the three rectangular spatial coordinates and the three components of linear momentum. Each observable in QM is a real number and is an eigenvalue of some Hermitian operator. Now consider quantum field theory (QFT) which considers a field instead of a particle. First consider the classical (before introducing QFT operators) description of the state of the field at a selected point in time. This is the field amplitude at every spatial location at the selected time point. For at least some kinds of fields, the field amplitude at a given space-time point is a complex number. Now consider the QFT corresponding to the selected classical example of a field. Is the field amplitude an observable even when it is not a real number? It is not an eigenvalue of any Hermitian operator when not real. So if the field amplitude is an observable, there is no Hermitian operator associated with this observable. My guess (and my question is whether this guess is correct) is that the real and imaginary parts of the field amplitude are simultaneously knowable observables, with a Hermitian operator (assigned to each space-time point) for each. This would at least explain how the field amplitude can be an observable but not real and not have any associated Hermitian operator. Is my guess correct?

It is a classical philosophical book.

I am facing an error while giving a river input to Visual MODFLOW Classic. The input is a shapefile. The error is:
'00-1' is not a valid integer.
Please find attached a screenshot of the error for reference.
I would be highly obliged if anyone could help in this regard.

I often encounter in research papers the terms "top-down" approach and "bottom-up" approach for Human Action/Activity Recognition. These two terms are usually used in papers that rely on classic approaches and classic ML methods. What does each one mean, and what is the difference between the two?

Dear colleagues in System Engineering and Automatic Control:

I used to apply classic controllers, such as PID controllers, to industrial processes.

Do you have an idea of the recent and modern techniques that scientists and researchers are using?
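For reference, the classic baseline the question starts from can be sketched as a discrete PID loop on a toy first-order plant; the gains and the plant model below are illustrative assumptions, not a tuned design:

```python
# A minimal discrete PID controller driving a first-order plant
# (dx/dt = (-x + u) / tau), the baseline against which "modern"
# techniques (MPC, adaptive, learning-based control) are compared.

def simulate_pid(kp=2.0, ki=1.0, kd=0.1, setpoint=1.0,
                 tau=1.0, dt=0.01, steps=2000):
    x = 0.0              # plant state
    integral = 0.0
    prev_err = setpoint - x
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        x += dt * (-x + u) / tau                    # Euler step of the plant
    return x

print(simulate_pid())    # settles near the setpoint thanks to the integral term
```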

Being inspired and encouraged by the quantum science revolution in the early 1990s up until our present time, lately I've been following the research by Roger Penrose and Stuart Hameroff on consciousness and mind-matter duality where the extent of knowledge gained thus far concerns how neurons and neural networks in the brain give rise to the phenomenon of human consciousness. My research seeks to go further by proposing that it is non-material information external to the body that interacts with neurons in the brain that gives rise to the mind-body duality. So my specific question is this: What is the mechanism or process through which information interacts with neurons and neural networks in the brain so that a quantum moment becomes a classical action or behavior?

Hello,

We just got, from a collaboration, some culture supernatant from a hybridoma cell line containing a monoclonal antibody that we are supposed to use for immunostaining. I have experience with classical purified, commercially available antibodies for staining, but I haven't used conditioned media before. How does the protocol change? Any tips?

Thank you,

Lluís.

I see several studies on optimal tax audits that assume the revelation principle (Myerson, 1981) in a tax game. The conventional tax game refers to a taxpayer who has a true income (private information) and reports a taxable income to a tax agency. The taxpayer has incentives to report a taxable income lower than the true income, so as to pay less tax. If the tax agency catches the taxpayer in an audit (which occurs with some probability), the tax agency observes the true income and requires payment of the full tax plus a fine on the evaded taxes. In this case, the taxpayer's problem is to maximise expected net income after taxes. The tax agency's problem is to design a policy that maximises tax revenues subject to costly audits, over the full population of taxpayers. Everything is fine up to now.

Nonetheless, for the analysis of the tax agency's problem, several studies declare that they assume the classical revelation principle of Myerson (1981) in order to simplify the analysis. For example, in the study by Border and Sobel (1987), entitled

*Samurai Accountant: A Theory of Auditing and Plunder*, they state (page 526):

*Without loss of generality we can restrict attention to incentive compatible direct revelation schemes, i.e. those in which the agent truthfully announces his wealth and makes a payment (which for convenience we will call a tax) based on his announcement to the principal and the principal chooses the probability of auditing based on reported wealth.*

This sentence does not make much sense to me: if the tax policy is equivalent to a direct revelation scheme that makes the taxpayer report truthfully, then true income and reported income are the same (in Nash equilibrium), so tax audits would be unnecessary. At the same time, we know that tax audits are necessary, because taxpayers would have strong incentives to evade if there were no audits. In this case, I understand that the tax policy is not equivalent to a direct revelation scheme, so the revelation principle would not be applicable here.

So, what am I missing here?
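A minimal numeric sketch of the incentive constraint in the standard evasion game (assuming a linear tax rate t, audit probability p, and fine rate f on evaded tax; these are illustrative parameters, not Border and Sobel's exact setup) shows why the audit probability is part of what enforces truth-telling rather than being made redundant by it:

```python
def expected_gain_from_evasion(u, t=0.3, p=0.25, f=1.5):
    """Expected gain from underreporting income by u, given tax rate t,
    audit probability p, and fine rate f charged on the evaded tax."""
    saved = t * u                        # tax saved if not audited
    penalty = p * (t * u + f * t * u)    # expected repayment plus fine if audited
    return saved - penalty

# Truth-telling is a best response exactly when p * (1 + f) >= 1:
print(expected_gain_from_evasion(100, p=0.25, f=1.5))  # p(1+f) = 0.625 < 1: evasion pays
print(expected_gain_from_evasion(100, p=0.45, f=1.5))  # p(1+f) = 1.125 > 1: evasion costs
```

In other words, in the direct revelation scheme the taxpayer reports truthfully *because* the mechanism still commits to auditing with positive probability off the equilibrium path; the audits do not disappear, they support the truthful equilibrium.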

Besides one of the classical readings on multiple literacies from a Deleuzian perspective (Mansy & Cole, 2009), I am interested in knowing what else is worth reading in this area.

I am a Chinese scholar majoring in metonymy studies in the Chinese classics. If you are interested, let's share some points.

Genetic engineering should be seen as one of the many tools available for use by plant breeders to improve crop varieties so that we increase food production, control pests, and improve farm profits.

At the point taxonomy and systematics have reached today, we seem to be in a situation where details and extremes (in the popular sense) crowd out the basics (in the classical sense). As a result, many researchers, especially amateurs, publish articles without adequate knowledge of the scientific foundations, or ignore those foundations even when they have them. From this point of view, I think we should recall the scientific foundations and be clear about what these studies serve and how.

In this sense, what is taxonomy, essentially and clearly? From what need, and how, did it arise? What is its main subject and approach? And again, what is systematics, essentially and clearly? From what need, and how, did it arise? What is its main subject and approach?

I think these questions should be answered clearly.

Can a systematic study be done without knowing taxonomy, and a taxonomic study without knowing systematics? Concisely and clearly, what is a taxonomic study and what does it encompass? What does it serve, and how? Also concisely and clearly, what is a systematic study and what does it encompass? What does it serve, and how?

I would appreciate if you could share your valuable ideas...

Looking for information on Microscopic Thermodynamics? Check out this repository of information and examples made available free on the Web by Professor Pohl, derived from his classic textbook,

*Microscopic Thermodynamics*, by Irey, Ansari and Pohl, John Wiley and Sons, 1976, ISBN 0-471-42847-7. http://thermospokenhere.altervista.org/

Will the installation of quantum technology at global scale prove to be eco-friendly? Will it be able to solve all the environmental issues that classical/current technology has created?

Hello everyone,

Because of the various differences between the mechanisms of TNT and hydrogen explosions, I've been wondering whether JWL parameters exist for hydrogen, to replace the classical TNT mass-equivalence approach. If not, have equations of state been developed specifically for hydrogen cloud explosions?

Thanks in advance!
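For concreteness, the JWL equation of state referred to above has the standard three-term form sketched below. The A, B, R1, R2, ω values are illustrative TNT-like numbers only; consult a vetted source before using them quantitatively:

```python
import math

def jwl_pressure(v, e0, a=371.2, b=3.231, r1=4.15, r2=0.95, omega=0.30):
    """JWL equation of state: pressure (GPa) of detonation products as a
    function of relative volume v = V/V0 and detonation energy e0 (GPa).
    The default A, B, R1, R2, omega are illustrative TNT-like parameters."""
    return (a * (1 - omega / (r1 * v)) * math.exp(-r1 * v)
            + b * (1 - omega / (r2 * v)) * math.exp(-r2 * v)
            + omega * e0 / v)

# Pressure decays as the detonation products expand:
for v in (1.0, 2.0, 5.0):
    print(v, jwl_pressure(v, e0=7.0))
```

The open question above would then amount to whether a fitted (A, B, R1, R2, ω) set exists for hydrogen-air detonation products, or whether a different functional form is needed for unconfined cloud explosions.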

The Canny algorithm is a classical edge detection model. **I want to use an automatic thresholding algorithm for Canny to identify water in Sentinel-1 images.** The problem is that I can hardly find a suitable automatic thresholding algorithm. I have tested Otsu's method, but not all histograms are bimodal, so I wonder whether Otsu is suitable here. Is there any other model that can be used with Canny?

Ideally, the code should be implementable in Google Earth Engine.
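For reference, a NumPy-only sketch of Otsu's method is below (to be ported to Earth Engine); setting Canny's low threshold to half the Otsu value is a common heuristic, not a standard. When the histogram is not bimodal, minimum-error thresholding (Kittler–Illingworth) is one frequently cited alternative:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * centers)          # class-0 cumulative mean
    mu_t = mu[-1]
    # Between-class variance for every candidate split point.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

# Synthetic bimodal intensities: a dark "water" mode and a brighter "land" mode.
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(30, 5, 5000),     # water
                         rng.normal(120, 15, 5000)])  # land
t_high = otsu_threshold(values)
t_low = 0.5 * t_high       # common heuristic for Canny's low threshold
print(t_high)              # falls between the two modes
```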

[This classic Lenski paper][1] computes the effective population size of an evolving *E. coli* population subjected to daily population bottlenecks as $N_e = N_0 * g$, where $N_e$ is the effective population size, $N_0$ is the population size directly after the bottleneck and $g$ is the number of generations between bottlenecks.

Unfortunately, the formula was not derived in the referenced paper and the referenced articles appear to not describe the formula directly, but only provide the fundamentals for deriving it.

Can someone explain how this formula comes about?

Could you please suggest any articles or book chapters where I could start learning the concept of Total Variation in classical signal processing? I would like to relate it to Graph Signal Processing for understanding the Fourier basis.
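For orientation, the discrete total variation these readings build on is essentially one line; on a graph the same sum runs over edges, which is exactly the bridge to GSP:

```python
import numpy as np

def total_variation(x):
    """Discrete total variation of a 1-D signal: sum of |x[i+1] - x[i]|.
    On a graph, the analogous quantity sums |x[u] - x[v]| over edges, and
    graph Fourier modes can be ordered by how much variation they carry."""
    x = np.asarray(x, dtype=float)
    return np.abs(np.diff(x)).sum()

print(total_variation([0, 0, 1, 1, 0]))   # two unit jumps -> TV = 2
print(total_variation([5, 5, 5]))          # constant signal -> TV = 0
```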

Dear all,

We have performed classical trypan blue staining. We need to destain it, but using non-toxic products. Do you know if alternatives to chloral hydrate destaining exist?

Thank you very much.

When a group of motions is possible in a many-body problem, it is said that the motion that actually occurs is the one with the least classical action S.

Looking at the integral of the Lagrangian used to calculate S, it appears that the motion that occurs is the one that flattens space-time as much as possible.

One possibility is that space-time is in some way elastic with respect to curvature strains, with neutral stress in the case of flat space-time; this is not surprising when considering the special case of gravity.

But this doesn't seem to explain why many-body problems would always tend to decrease stress-energy.

Why Does Classical Action S Trend Toward Flat Space Time?
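A small numeric sketch (free particle, discretized action, illustrative units) shows the least-action machinery the question is about: minimizing S over paths with fixed endpoints returns the straight line, the one-dimensional analogue of the "flattest" motion:

```python
import numpy as np
from scipy.optimize import minimize

m, T, n = 1.0, 1.0, 20          # mass, total time, number of time steps
dt = T / n
x0, x1 = 0.0, 1.0               # fixed endpoints of the path

def action(interior):
    """Discretized action of a free particle: sum of (1/2) m v^2 dt."""
    path = np.concatenate([[x0], interior, [x1]])
    v = np.diff(path) / dt
    return 0.5 * m * np.sum(v**2) * dt

# Start from a wiggly trial path and minimize the action directly.
rng = np.random.default_rng(0)
guess = np.linspace(x0, x1, n + 1)[1:-1] + rng.normal(0, 0.2, n - 1)
res = minimize(action, guess)
straight = np.linspace(x0, x1, n + 1)[1:-1]
print(np.max(np.abs(res.x - straight)))   # ~0: the least-action path is straight
```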

We know a classical prospective cohort study is an analytical study. But can there be a cohort study without any comparison group, and what would the measure of strength of association be in this case?

Software for plotting Geochemical data

We note that more and more hikers are using Apps, social networks or other technologies to support their hiking activities. Are there differences between technological hikers and classic hikers? What do they use these technologies for? Are they useful?

It seems that sustainability could be the main goal of future agriculture: increasing yield with a lower impact on the environment while also enhancing social benefits for people. Nevertheless, this idea involves so many metrics and factors that we could easily lose the point and fail to move toward the goal. I am sure that many institutions and companies announce that they are working towards sustainable agriculture, but how can we be sure of this? So far, no standardized metric that compares this topic among companies or institutions seems to work. For instance, I found the SAFA framework proposed by FAO very interesting. However, it seems so complex and difficult to follow that, as far as I know, it has not been used as widely as was perhaps expected.

Continuous recording and working with classical economic measures appear to be the most common and typical way to assess sustainable agriculture. But could we develop better measures towards this goal?

If we use Qiskit, what we get is probabilistic data. However, if we want to do quantum image processing, the first thing we need is a quantum image representation (e.g., prepared in MATLAB). So what is the logic behind the representation of a quantum image when we take a classical image as the base?
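One common starting point, sketched below under the assumption of amplitude encoding (FRQI and NEQR are the usual alternatives, which store intensities in basis states instead), is to flatten and normalize the classical image so that pixel values become the amplitudes of a state vector:

```python
import numpy as np

def amplitude_encode(image):
    """Flatten a classical grayscale image and normalize it so the pixel
    values become the amplitudes of a quantum state (amplitude encoding).
    A 2x2 image maps onto the 4 basis states of a 2-qubit register."""
    v = np.asarray(image, dtype=float).ravel()
    norm = np.linalg.norm(v)
    if norm == 0:
        raise ValueError("image must contain at least one nonzero pixel")
    return v / norm

img = [[0, 128], [128, 255]]      # 2x2 classical image -> 2-qubit state
state = amplitude_encode(img)
print(state)
print(np.sum(state**2))           # measurement probabilities sum to 1
```

The probabilistic data Qiskit returns is then the sampled squared amplitudes, which is why many repetitions (shots) are needed to reconstruct the image statistics.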

Dear researchers,

I hope you are well and in good health

Concerning fractional derivatives and analysis:

If the notion of the classical derivative gives us information about variations (Df positive implies f is increasing), what information is provided by a fractional derivative such as D^{1/2}?

On the other hand, if one wants to compute it numerically, what do you think is a good approximation of D^{1/2}?

All help is appreciated. Best regards
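On the numerical side, one standard answer is the Grünwald-Letnikov approximation, sketched below and checked against the known closed form D^{1/2} x = 2√(x/π) (here with lower terminal 0 and step h as illustrative choices):

```python
from math import sqrt, pi

def gl_fractional_derivative(f, x, alpha=0.5, h=1e-3):
    """Grünwald-Letnikov approximation of the fractional derivative
    D^alpha f at x (lower terminal 0):  h^(-alpha) * sum_k w_k f(x - k h),
    with weights w_0 = 1 and w_k = w_{k-1} * (k - 1 - alpha) / k."""
    n = int(x / h)
    w = 1.0
    total = w * f(x)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        total += w * f(x - k * h)
    return total / h**alpha

# Known closed form: D^{1/2} of f(t) = t is 2*sqrt(t/pi); at t = 1 this is ~1.1284
approx = gl_fractional_derivative(lambda t: t, 1.0, alpha=0.5)
print(approx)
```

The scheme is first-order accurate in h, so halving the step roughly halves the error.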

A working GAN makes use of deep learning (neural networks) to create synthetic data. Does anyone have any ideas or material on using classical machine learning algorithms instead of DL for this purpose? Is this possible in the first instance?
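It is possible in principle: classical generative modeling amounts to explicit density estimation followed by sampling. A minimal sketch (fitting a single multivariate Gaussian; a Gaussian mixture or kernel density estimate would be the natural next step up) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 500 samples from a correlated 2-D distribution.
real = rng.multivariate_normal([2.0, -1.0], [[1.0, 0.6], [0.6, 2.0]], 500)

# Classical generative model: estimate the density (here a single Gaussian
# via sample mean and covariance), then sample from it -- no neural network.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, 500)

print(mu, synthetic.shape)   # synthetic rows mimic the real distribution
```

The trade-off versus a GAN is expressiveness: explicit classical densities struggle with high-dimensional structured data such as images, which is where adversarial training earns its complexity.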

What is the fastest and most accurate way to solve an MINLP model with nonlinear constraints, including triangle and quadratic terms? Classic solvers, metaheuristics, or something else?

Any tips are appreciated.

What are some good books (both new or classics) on community ecology and restoration ecology, which can be helpful to build concepts for a PhD? (My specific field of study is ornithology)

Please, does anyone have a recipe for determining, developing, or constructing a classical force field for a new molecule for which there is no experimental data? These classical force fields are required for the classical modelling of a refrigerant (MEA); that is, for using classical force fields to simulate the equilibrium composition of a bulk reacting system. Also, how would one simulate an equilibrium adsorption isotherm when the adsorbent undergoes a chemical reaction with the adsorbate particles? I am looking for information on classical thermodynamics, particularly as pertains to phase and chemical-reaction equilibria. Any information (including suggestions, articles, books or links) on these questions is welcome.
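For orientation, most classical force fields share the non-bonded functional form sketched below (Lennard-Jones 12-6 plus Coulomb); the ε, σ, and charge values here are placeholders, since the "recipe" being asked for is precisely the fitting of these parameters, e.g. to quantum-chemical calculations when no experimental data exist:

```python
def nonbonded_energy(r, eps=0.65, sigma=0.32, q1=0.0, q2=0.0):
    """Typical classical force-field non-bonded pair energy:
    Lennard-Jones 12-6 plus Coulomb.  eps (kJ/mol), sigma (nm) and the
    partial charges q1, q2 (e) are placeholder values -- in practice they
    are fitted to quantum-chemical or experimental reference data."""
    ke = 138.935                     # Coulomb constant, kJ mol^-1 nm e^-2
    lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = ke * q1 * q2 / r
    return lj + coulomb

# Sanity check: the LJ minimum sits at r = 2^(1/6) * sigma with depth -eps.
r_min = 2 ** (1 / 6) * 0.32
print(nonbonded_energy(r_min))       # ~ -0.65 kJ/mol
```

Note that a fixed-charge form like this cannot describe bond breaking; simulating a *reacting* bulk system classically requires a reactive force field (e.g. a ReaxFF-style parameterization) or a reaction-ensemble Monte Carlo treatment on top of it.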

I am trying to do a classical Molecular Dynamics simulation of the crystal structure of ethanethiol (ethyl mercaptan), C2H5SH. To do this, I need the experimental crystal structure of the molecule to use as the starting configuration for my simulation.

Just for some context, I'm isolating non-parenchymal cells with Pronase + Collagenase D, which is a well-known protocol in my lab. I use DAPI as a viability dye in my FACS panel, but I encounter some issues, as it also stains cells with high DNA content or mitotic cells, representing around 20% of the cells (see picture). No solution in the protocol contains detergent that could permeabilize the cells, and the mouse used is a classic healthy C57, so mitotic cells are not expected, at least not to that extent. Has anyone encountered the same issue in the past? Thank you

I have project to determine the action spectrum for phototherapy of neonatal jaundice. Since 1958 this has been widely assumed to overlap the in-vitro absorption spectrum which has a maximum at 460 nm. There is now convincing evidence for an action spectrum centred on about 480 nm. I want to illustrate my report to CIE with graphs of the "classic" absorption spectrum and the spectrum recommended by authors such as Lamola and Ebbesen.

Although there are plenty of papers showing the in-vitro absorption spectrum centred on about 460 nm, no one seems to give a reference showing where it came from. I can only refer to it as the "classic" spectrum, as if it has the status of "E = mc²" and everyone knows that came from Einstein in 1905!

Has anyone got an original reference for the often quoted absorption spectrum centred on about 458 nm?

Was Heisenberg a Third-Rate Natural Philosopher because he denied the reality of micro-objects that cannot be tracked by humans? Has this misled physics for 100+ years?

Surely, because we have created a whole civilization from the **manipulation of electrons**, especially digital electronics, "looking" at an object is NOT a requirement for existence? Electrons interact with everything, so surely that is quite sufficient, eh?

Heisenberg was educated in the Classical Philosophy of Aristotelian Classicism in the archaic German education system. He failed to think for himself, substituting a Platonic idealist view of mathematics as superior to our imaginative/operational view of reality.

Hello,

I wanted to ask if someone can summarize which physical processes can and cannot be explained by the classical versus the quantum formalism of light-matter interaction.

I know that single-photon experiments need a full quantum description; spontaneous emission is the same, needing the quantization of light to explain.

I know that absorption can be explained fully classically (electrons as harmonic oscillators and light as an EM field that drives them).

So can someone summarize the most well-known cases (and even the lesser-known ones, if possible)?

It would be interesting to know how successful the semiclassical approach is (I have heard that it explains nearly all processes).

thanks

A **Fresnel imager** is a proposed ultra-lightweight design for a space telescope that uses a Fresnel array as primary optics instead of a typical lens. It focuses light with a thin opaque foil sheet punched with specially shaped holes, thus focusing light on a certain point by using the phenomenon of diffraction. Such patterned sheets, called Fresnel zone plates, have long been used for focusing laser beams, but have so far not been used for astronomy. No optical material is involved in the focusing process as in traditional telescopes. Rather, the light collected by the Fresnel array is concentrated onto smaller classical optics (e.g. 1/20 of the array size) to form a final image.

I am planning to conduct a study that examines the behaviors of people who "defect" or behave non-cooperatively in an online social dilemma game. Thus, I am looking for a game in which a large proportion of players tend to defect or behave non-cooperatively. Does anybody have ideas about which games are best for this? The classic Prisoner's Dilemma Game? The Investment Game? Another?

Recently, I got a revision on one of my papers in which the reflection and transmission phenomenon of waves has been studied in a piezoelectric medium with the consideration of a flexoelectric effect.

In the said article I used the classical method for finding the amplitude ratios of the waves.

However, the reviewer suggests:

*It would have been better if the solution methodology was based on Lamé displacement potentials, where the dilatational and the distortional character of the waves are more easily distinguished.*

But as far as I know this methodology, the Lamé potentials are best suited to isotropic media.

So, can the Lamé displacement potential method be used for transversely isotropic media?

We are working with experimental data in which the dependent variable of interest (a count) has many zeros and a few 1's across the treatments under investigation. We therefore need direction on the best way to analyze the data aside from the classical ANOVA technique.
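One standard alternative for zero-heavy counts is a zero-inflated Poisson model. The sketch below fits one by maximum likelihood with SciPy on simulated data (statsmodels' `ZeroInflatedPoisson` would be the packaged route; the simulation parameters are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(0)

# Simulated zero-heavy counts: structural zeros with prob 0.6, else Poisson(0.8).
y = np.where(rng.random(2000) < 0.6, 0, rng.poisson(0.8, 2000))

def zip_nll(params):
    """Negative log-likelihood of a zero-inflated Poisson model."""
    pi, lam = expit(params[0]), np.exp(params[1])   # keep pi in (0,1), lam > 0
    log_p0 = np.log(pi + (1 - pi) * np.exp(-lam))   # zeros: structural or Poisson
    log_pk = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.where(y == 0, log_p0, log_pk).sum()

res = minimize(zip_nll, x0=[0.0, 0.0])
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
print(pi_hat, lam_hat)   # should land near the true 0.6 and 0.8
```

Treatment effects can then enter through a log-link on λ (and optionally a logit-link on π), giving a zero-inflated regression in place of ANOVA; a hurdle model is the main competing choice.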

In classical times there was a concept of pure science that was produced entirely in the intellect.

More recently sciences were developed by testing the intellectual product with empirical data. These sciences are not regarded as being pure.

Mathematics, often regarded as pure science, has for most of history been based on postulates of geometry that could not be proven. Then came relativity and other geometries. In the past century there was considerable effort to reformulate mathematics on a firmer basis of conditional sets. Math is now regarded as being somewhat more pure than before, while producing two generations of graduating students in some countries who are not able to do simple arithmetic.

Fortunately I had some excellent teachers who explained the two systems and why they were both needed. Other teachers displayed the Gödel's incompleteness theorems.

In academic settings there seems to be a difference of opinions about whether or not math is a science, and whether or not it is pure.

Is Mathematics A Science?

In accordance with the classical limit, the probability per attempt of tunneling decreases towards zero as the mass m of a particle, the deficit V − E between its energy E and the barrier height V (E < V), and/or the width W of a barrier become large. The probability P per attempt of tunneling of a particle of mass m and energy E through a barrier of height V (E < V) and width W in the classical limit (in the limit of ever-smaller P) is

(1) P = 16[E(V − E)/V²] exp{−[8m(V − E)]^{1/2}W/ħ}.

In accordance with the classical limit, P approaches zero as m, V − E, and/or W become large.

But, by contrast, there seems to be a paradox if E > V. For the probability per attempt of traversing the barrier is then

(2) P = {1 + V² sin²{[2m(E − V)]^{1/2}W/ħ}/[4E(E − V)]}^{−1}.

The average of sin²x over one or more complete oscillations of any argument x is 1/2, so if E > V a typical value of the smoothed-out probability per attempt corresponding to a given E is

(3) P = {1 + V²/[8E(E − V)]}^{−1}.

Letting E = NV (N > 1), Eq. (3) can be rewritten as

(4) P = [1 + (8N² − 8N)^{−1}]^{−1}.

Inverting and solving Eq. (4) for N in terms of P (with the help of the quadratic formula),

(5) N = {1 + {1 + [2(P^{−1} − 1)]^{−1}}^{1/2}}/2.

Corresponding to P = 1/2, N ≈ 1.11. Corresponding to P = 0.99, N ≈ 4.05. This is reasonable for an electron traversing a barrier of atomic dimensions. But this is not reasonable for a baseball thrown over a fence of width W = 1 cm and height H = 10 meters in Earth's gravitational field g, which has P = 1 of clearing the fence if its energy E even marginally exceeds V = mgH, not merely P = 1/2 if E ≈ 1.11 mgH and not merely P = 0.99 if E ≈ 4.05 mgH. But if E > V the formulas (2) through (5) take no account of a classical limit: they are identical for an electron and a baseball. [Although g acts vertically, E and V for the baseball can still be construed as one-dimensional, as functions of its horizontal position directly below its path. The same lack of taking into account of a classical limit by the formulas (2) through (5) obtains for any system, however macroscopic.]

Perhaps this paradox is resolved because a PERFECTLY square potential barrier is physically UNrealistic. At the edges of any REAL, PHYSICAL barrier, the potential increases from zero to V over a FINITE distance greater than zero. If this is taken into account, the probability of a baseball thrown over a fence with E > V = mgH being reflected, i.e., not traversing the fence, is reduced to zero. See Quantum Theory by David Bohm, Sections 3.9, 11.3, 11.4, and 12.1−12.4.
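A short numeric sketch of Eq. (1) shows how the sub-barrier classical limit emerges: the exponent scales as √m·W, so it is of order 10 for an electron at atomic scales but of order 10³² for a baseball (the barrier parameters below are illustrative; the exponent is computed separately to avoid floating-point underflow):

```python
from math import sqrt, exp

hbar = 1.0545718e-34     # J s
eV = 1.6021766e-19       # J

def tunnel_exponent(m, v_minus_e, w):
    """The exponent [8 m (V - E)]^(1/2) W / hbar from Eq. (1)."""
    return sqrt(8 * m * v_minus_e) * w / hbar

def tunnel_probability(m, e, v, w):
    """Eq. (1): sub-barrier transmission per attempt, valid for small P."""
    prefactor = 16 * e * (v - e) / v**2
    return prefactor * exp(-tunnel_exponent(m, v - e, w))

# Electron: E = 0.5 eV against a 1 eV, 1 nm barrier -> small but finite P.
p_electron = tunnel_probability(9.109e-31, 0.5 * eV, 1.0 * eV, 1e-9)
print(p_electron)

# Baseball: 0.145 kg with a 1 J deficit and a 1 cm barrier -> exponent ~ 1e32,
# so P = exp(-1e32) underflows to zero: the classical limit below the barrier.
print(tunnel_exponent(0.145, 1.0, 0.01))
```

This makes the asymmetry in the question concrete: below the barrier the mass enters through an exponential, while the over-barrier formulas (2)–(5) contain no comparable mass-dependent suppression.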

Dear Researcher

I want to use the Bayesian estimation method for more than two continuous dependent variables. I prefer the Bayesian estimation approach to the classical approach. Is there anyone who can help me?

I am by no means an expert on this subject, but a few papers sparked my interest in whether instantons give rise to a non-zero vacuum expectation value or could be involved in the generation of the Higgs field.

Instantons in mathematical physics arise as solutions to a set of non-linear differential equations that minimize the Yang-Mills functional for a non-abelian gauge theory. This is part of the differential-geometric way of writing classical fields in terms of a connection and the curvature of a connection. The classical electromagnetic field is a U(1) connection, and the curvature form of this connection is an anti-symmetric matrix whose entries are the electric and magnetic fields. For non-abelian groups such as SU(2) and SU(3), the connection-and-curvature formalism gives rise to the weak force of the Z, W−, and W+ bosons and to the 8 gluons of the SU(3) strong force. The instanton number can be thought of as describing the number of instantons present and is an expression of how "twisted" or topologically non-trivial the vector bundle or underlying space is.

The Higgs field is what gives spin 1/2 particles mass as well as giving mass to the Z and W-, W+ particles. The masses of spin 1/2 particles are determined by something called the Yukawa coupling. My question is how can instantons contribute to a non-zero vacuum expectation value and are there theories that say the Higgs field is built up in this way?

Our answer is a consistent YES. A qubit (or quantum bit) is today the quantum mechanical analogue of a classical bit. In classical computing the information is supposed to be encoded in bits, where each bit can have the value zero or one. In quantum computing the information is, then, also encoded in qubits.

This is inconsistent at (1), a most basic point, because even classically, information is no longer understood to be encoded in bits. That was true 50 years ago, but no longer holds.

Today, one uses SystemVerilog with tri-state chips, as opposed to Shannon's theory with binary states, and two-state chips as relays.

Information is encoded in three logical states, 0, 1, and Z, where Z is an open circuit standing for indeterminacy, with a coherent semantics for interconnects.

The qubit view is inconsistent in (2), another basic way, because one needs to move from the macroscopic, from a classical Boolean analogy of relays or switches, valid for the Law of the Excluded Middle (LEM). Then, in a formless and classical “fluid” model for particles, information was seen in the double-slit experiment as GF(2^m), and now must change to a more complex microscopic structure, with a quantum tri-state+, not qubit in two-state. The photon (e.g., a particle) is now modeled by an algebraic approach with ternary object symmetry, modeled by GF(3^n).

Comparatively, the current two-state quantum theory of qubits is linked, however, to the classical two-state “bit”, following Boolean or classical logic laws, such as the LEM, which carry only two possible values, “0” and “1”. This emulates the workings of a relay circuit, and uses the formless “fluid” analogy of classical information, that can only be blocked (relay open), routed or replicated (relay closed). However, information can also be encoded, in analogy to network encoding as announced in 2000, and not covered by Shannon's theory.

What is your qualified opinion?

I would like to model the results of a symptom severity questionnaire for a certain disorder using a GLM. However, I have a big problem deciding what distribution to adopt for the questionnaire results. The tool consists of 47 questions on a 1-5 scale, so the score is always positive, takes only discrete values, and has a finite range (47 to 235, i.e., 47×5). The empirical distribution is additionally strongly right-skewed. Is there a "classical" probability distribution that I can use to model such a variable?
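One classical candidate, assuming the score is shifted by its minimum (y = score − 47, so y ranges over 0…188), is the beta-binomial: bounded, discrete, and flexible enough for strong right skew. A sketch of a maximum-likelihood fit with SciPy on simulated data (the shape parameters are illustrative):

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

n_max = 188                        # shifted score range: 0 .. 47*5 - 47

# Simulated right-skewed questionnaire scores, already shifted by the minimum.
y = betabinom.rvs(n_max, 1.3, 6.0, size=1000, random_state=0)

def nll(params):
    """Negative log-likelihood of a beta-binomial with positive shapes a, b."""
    a, b = np.exp(params)          # exponentiate to keep both shapes positive
    return -betabinom.logpmf(y, n_max, a, b).sum()

res = minimize(nll, x0=[0.0, 0.0])
a_hat, b_hat = np.exp(res.x)
print(a_hat, b_hat)                # should land near the true 1.3 and 6.0
```

In a regression setting the same idea appears as beta-binomial GLMs (or ordinal models fitted item-wise); a plain binomial would understate the overdispersion such questionnaires usually show.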

Dear Scientific Community:

I am looking for an effective alternative readout of glycolysis inhibition in cancer cells.

In addition to monitoring the production of lactate or glucose uptake (which are the classic experiments),

**do you know if there are genes that are down-regulated or up-regulated after glycolysis inhibition?** (For example, some OXPHOS genes.) Thank you for the help.

Marco

Beginning with Keynes's General Theory, through the Phillips curve interpretation of the policy trade-off, to the New Classical and New Keynesian schools of thought, the subject of macroeconomics has travelled a long path. From ad hoc macroeconomic models to the representative household in a general equilibrium framework, the subject has evolved tremendously in its methodology. It would be great to find books or articles that capture this journey from the perspective of the history of thought.

What makes a scientific article a classic?

First, and most commonly used, is the number of citations.

A second definition of a classic scientific article is inherent to the article itself. These articles are so completely innovative and disruptive that new protocols have been implemented and new results achieved.

I would appreciate it very much if you could answer a quick online questionnaire about historical articles:

And please send it to your peers, globally if possible.

Best regards

The integral development of children is intimately linked with play. For a bibliographic review of this area of scientific knowledge (children's games): which authors should not be omitted?

Let ABC be a triangle with angle BAC < 90°. Let M on AC and N on AB be two points such that MA = MC and CN is orthogonal to AB. Let P be the intersection point of BM and CN. Suppose that BM = CN. Prove that BP = 2PN.

I am reading the paper of Jeremy Avigad, Edward Dean and John Mumma, entitled "A formal system for Euclid's Elements".

Could this approach be extended to the books of Apollonius of Perga dealing with conics ?

I am interested in complexity questions and the completeness/incompleteness of axiomatic systems for Greek geometry.

Maybe there is more to the late ideas of Frege about basing Arithmetic on Geometry than is generally believed...

Also, inspired by the well-known correspondence between the elementary theory of field extensions and the classical constructions with ruler and compass, one can ask: what kind of field extensions correspond to constructions in which we can draw conics as well as circles and lines?

Quantum computers have an advantage over classical computers because they are based on quantum states, which can be 0, 1, or a superposition of 0 and 1. Until now, classical transistors have had only the 0 and 1 states, but if a third state of infinite current is introduced, will it improve them? If we set aside the engineering difficulty of realizing it, in principle, will it reach quantum-computer-like computational levels?

Dear Colleagues,

Is the classical form of the speed distribution of particles, A c² exp(−b c²) dc, valid for particles in a 1D box or not?

Please discuss.

Thanks

N Das
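The dimension dependence at stake can be seen in a quick Monte Carlo sketch (my own): for Maxwell–Boltzmann velocity components, the speed distribution in d dimensions carries a factor c^(d−1), so the c² prefactor in A c² exp(−b c²) is specific to d = 3.

```python
import math
import random

random.seed(0)
b = 1.0
sigma = math.sqrt(1 / (2 * b))    # component st.dev. matching exp(-b*c^2)

def speed(d):
    """Speed of one particle with d Gaussian velocity components."""
    return math.sqrt(sum(random.gauss(0, sigma) ** 2 for _ in range(d)))

samples_1d = [speed(1) for _ in range(100_000)]
samples_3d = [speed(3) for _ in range(100_000)]

# Near c = 0 the 3D density is suppressed by the c^2 factor; the 1D
# density (proportional to exp(-b*c^2) alone) is not:
frac_small_1d = sum(c < 0.1 for c in samples_1d) / len(samples_1d)
frac_small_3d = sum(c < 0.1 for c in samples_3d) / len(samples_3d)
print(frac_small_1d, frac_small_3d)   # 1D fraction far exceeds 3D fraction
```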

The main result of decoherence theory is that the non-diagonal elements of a quantum object's density matrix become zero due to uncontrolled interactions with the environment. For me, that only means that there will be no more interference effects between the superposed states. But the diagonal elements of the density matrix still remain, so there is still a superposition of classical alternatives left. How does that solve the measurement problem?

Moreover, doesn't the mathematical derivation of the decoherence effect involve an ensemble average over all possible environmental disturbances ? How does this help when we are interested in the behavior of a specific system in a specific environment ?
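A toy sketch (my own) of exactly the ensemble-averaged dephasing the question describes: a qubit starts in (|0⟩ + |1⟩)/√2, each environmental interaction kicks the relative phase by a random angle, and averaging over histories damps only the off-diagonal element.

```python
import cmath
import math
import random

random.seed(1)
n_env = 10_000

rho00, rho11 = 0.5, 0.5                        # phase-independent diagonals
rho01 = sum(0.5 * cmath.exp(1j * random.uniform(0, 2 * math.pi))
            for _ in range(n_env)) / n_env     # ensemble-averaged coherence

# |rho01| -> 0: interference terms vanish, yet the classical mixture
# diag(0.5, 0.5) survives -- precisely the residue the question points to.
print(abs(rho01))                              # small, of order 1/sqrt(n_env)
```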

Heterogeneity in meta-analysis refers to the variation in study outcomes between studies. The I² statistic describes the percentage of variation across studies that is due to heterogeneity rather than chance, while Cochran's Q is calculated as the weighted sum of squared differences between individual study effects and the pooled effect. Which is the classical measure of heterogeneity for meta-analysis (Cochran's Q or Higgins' I²), and why?
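Both statistics are cheap to compute, and I² is derived directly from Q. A sketch (my own, with toy numbers) for an inverse-variance fixed-effect meta-analysis:

```python
yi = [0.10, 0.30, 0.50, 0.90]          # study effect estimates
vi = [0.01, 0.02, 0.02, 0.01]          # their sampling variances

wi = [1.0 / v for v in vi]             # inverse-variance weights
pooled = sum(w * y for w, y in zip(wi, yi)) / sum(wi)

# Cochran's Q: weighted sum of squared deviations from the pooled effect,
# referred to a chi-square distribution with k - 1 degrees of freedom.
Q = sum(w * (y - pooled) ** 2 for w, y in zip(wi, yi))
df = len(yi) - 1

# Higgins' I^2: the share of Q exceeding its no-heterogeneity
# expectation df, expressed as a percentage (floored at 0).
I2 = max(0.0, (Q - df) / Q) * 100
print(round(Q, 2), round(I2, 1))       # 33.67 91.1
```

Note that Q depends on the number of studies while I² is scaled to be comparable across meta-analyses, which is one common argument in the debate the question raises.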

If you were to analyse metabolic pathways in T cells (murine and human), how would you prepare the culture medium? And, apart from classical activation, would you add IL-2 for a 3-5 day cell culture?

In classical biotechnology, three stages are distinguished: pre-fermentation, fermentation, and post-fermentation. The typical equipment used in biotechnological industries is well known. Are there any special features in the hardware design for biopolymer production, such as proteins (enzymes), polysaccharides (starch, cellulose), and lipids (e.g. microalgal lipids)?

Along my research I have focused to a large extent on corporate **culture, organizational values, leadership and management**. I looked at the connection of these soft factors to financial performance and success measurements. I realized that this field of research is less explored than others in the area, and that many researchers mainly borrow approaches from classical cultural research on nations. That approach is, from my perspective, a bit thin and too easy, and hence might not capture an organization's unique approach. I invite all to **look at my work** and **share their work** in the field, to **start a discussion** on the direction research should take, because we know that simply continuing as we have so far will not bring us any further.

Is there any book that explains what kinds of classical assumptions or diagnostic checks need to be tested on a panel data model?

I am facing difficulty in properly finding the gynoecium of Mimosa pudica L., which has a capitulum (head) inflorescence. I found the androecium after many tries, but the pistil was too hard to locate. How can one become efficient in classical, morphology-based taxonomic study? How did they describe it in the literature? I wonder!

Hi,

I am currently working on a project that requires identifying the two monocyte subpopulations in mice using flow cytometry, and I am having difficulties with my gating strategy.

I am staining with CD45, CD11b, Ly6G, CD11c, F4/80, Ly6C, CCR2, and CD43. Can someone please help me devise a gating strategy to identify both classical and non-classical monocytes? Thank you.

Close binary stars have been reported to produce a red nova at times while they are merging.

Some classical observers reported Sirius as a red star; others reported a white star. Now it is definitely white. Popular explanations try to discount the red claims, not entirely satisfactorily.

Sirius B is reported to be one of the most massive known white dwarfs. The possibility of a merger of close binaries in historical times might be considered; then the claims of red color could have a scientific basis. That leaves the problem of what such a binary pair might have been like while on the main sequence.

Was Sirius B a Binary Pair Of White Dwarfs In Classical Times?

Dear all,

I have a naive question. When we want to study a Type 1 response, we infect mice with influenza, for instance; if we want to study a Type 2 response, we use for instance T. brucei. Does anybody know which type of response we induce when we immunize with our classical protocols, such as SRBC or NPCGG/NPOVA/NPKLH?

Thank you very much

I am trying to simulate the permeation of gases in polymers. For this I normally use classic models like the dual-mode sorption model, but recently I found the non-equilibrium lattice fluid (NELF) model. It seems very promising, but it is not easy to understand and use. So I would like to know more about the advantages of this model compared to the classic models. Any ideas? Thanks in advance.
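For comparison purposes, the classic baseline is easy to reproduce. A sketch (my own, with illustrative parameter values) of the dual-mode sorption isotherm, C(p) = kD·p + CH·b·p/(1 + b·p), which combines Henry's-law dissolution with Langmuir "hole filling"; the NELF model replaces this empirical form with a non-equilibrium equation of state for the glassy polymer.

```python
def dual_mode_sorption(p, kD=0.5, CH=10.0, b=0.2):
    """Sorbed gas concentration at pressure p (units set by kD, CH, b)."""
    return kD * p + CH * b * p / (1 + b * p)

# Low pressure: both terms are linear, total slope kD + CH*b.
# High pressure: Langmuir sites saturate and C(p) -> kD*p + CH.
low  = dual_mode_sorption(0.01)
high = dual_mode_sorption(1e6)
print(low, high)
```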

I am currently getting into liquid crystals and have no related background. Can someone tell me which book is the most instructive for learning the basics of liquid crystals? I am most interested in the different phases/textures of liquid crystals.

Dear all,

We are looking for people interested in a postdoctoral position on ab initio and/or classical modeling of graphene-based systems.

I kindly ask you to pass this information on to all potential interested persons and to contact me. Location Pisa, Italy.

Thank you.