Questions related to Wikis
Earlier today I read a discussion on ResearchGate about how to determine which variables are endogenous and which are exogenous. The discussion concerned the random error term in a regression.
However, time is essential. When a peak in time series x1 occurs earlier than a similar peak in time series x2, then x1 and x2 may well be exogenous and endogenous, respectively.
Granger defined the causality relationship based on two principles:
1. The cause happens prior to its effect.
2. The cause has unique information about the future values of its effect.
What are the best hydropower plant locations in Georgia?
What are all the alternative locations, and which among them are the best?
I am following this guide to calculate the band gap of Si using HSE06 in VASP: https://www.vasp.at/wiki/index.php/Si_bandstructure
In step 1, I ran an SCF calculation using PBE, for which I used this INCAR:
IBRION = -1
NSW = 0
ISMEAR = 0
SIGMA = 0.01
ENCUT = 520
ALGO = Accurate
EDIFF = 1E-6
In the second step, along with POSCAR, POTCAR and KPOINTS, I used the WAVECAR file from the previous step as input, and the INCAR file is:
LHFCALC = .TRUE.
HFSCREEN = 0.2
ALGO = D
TIME = 0.4
ENCUT = 520
ISMEAR = 0
SIGMA = 0.01
GGA = PE
I then used the script gap.sh, which is also provided in the VASP wiki examples, to calculate the HOMO and LUMO.
But the HOMO now comes out at approximately 0.13 eV and the LUMO at -0.78 eV.
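As a sanity check independent of gap.sh, the HOMO/LUMO extraction itself is simple once eigenvalues and occupancies are parsed (e.g. from EIGENVAL); a sketch assuming that parsing is already done:

```python
def homo_lumo(states):
    """states: list of (energy_eV, occupancy) pairs over all bands/k-points.
    HOMO = highest occupied energy, LUMO = lowest unoccupied energy."""
    occupied = [e for e, occ in states if occ > 0.5]
    empty = [e for e, occ in states if occ <= 0.5]
    homo, lumo = max(occupied), min(empty)
    return homo, lumo, lumo - homo  # the gap should come out positive
```

A LUMO below the HOMO, as reported above, typically indicates that the wrong set of eigenvalues or occupancies is being read, rather than a physical result.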
Please tell me what I have done wrong.
Thanks in advance.
Good day fellows,
I am currently doing a 2D pushover analysis of a simple RC frame (fiber section), following the example on the OpenSees wiki. The model worked fine under gravity loads. However, upon running the pushover analysis, this error came up:
Large trial compressive strain
UniaxialMaterial::setTrial() - material failed in setTrialStrain()
What does this mean, and how can I solve this issue? Attached herewith are my tcl model and a screenshot of the errors.
- According to the definition of a heavy-tailed distribution at https://en.wikipedia.org/wiki/Heavy-tailed_distribution, can a Kumaraswamy distribution (or a beta distribution) be heavy-tailed?
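The defining condition on that page (e^{tx} Pr(X > x) → ∞ for every t > 0) can be probed numerically; a scipy sketch contrasting a bounded-support beta with a known heavy-tailed Pareto:

```python
import numpy as np
from scipy import stats

t = 0.1
x = np.linspace(1.0, 80.0, 80)
# Pareto(alpha=2): survival function (1/x)^2, so exp(t*x)*sf(x) eventually
# grows without bound, the signature of a heavy tail.
pareto_tilted = np.exp(t * x) * stats.pareto(b=2).sf(x)
# Beta(2, 5) lives on [0, 1]: sf(x) = 0 for all x >= 1, so the tilted
# tail vanishes identically beyond the support.
beta_tilted = np.exp(t * x) * stats.beta(2, 5).sf(x)
```

Any distribution with bounded support (beta, Kumaraswamy) behaves like the second case, which bears directly on the question.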
Just wondering whether a rejected paper can be deposited in bioRxiv?
As I see, Wikipedia says:
"In general, most publishers that permit preprints require that:
- the authors disclose the existence of the preprint at submission (e.g. in the cover letter)
- once an article is published, the preprint should link to the published version (typically via DOI)
- the preprint should not have been formally peer reviewed"
My concern is the last point.
Linux version: 16.04 / 18.04
The instructions given at the URL mentioned above are very unclear and ambiguous. Your assistance will be highly appreciated. Thank you.
Is membership in Sigma Xi, The Scientific Research Honor Society, an honor or not?
What exactly is it, who is eligible for it, and what is your opinion about it?
The Global Warming Petition Project, also known as the Oregon Petition, is a political petition designed to disinform and confuse the public about the scientific results and the consensus of climate change research. It is framed as a petition urging the United States government to reject the 1997 Kyoto Protocol on global warming and similar policies (Wikipedia).
For more details, check the following link.
Share your ideas.
Engineers are trained to think of stress limits and effective stress in strength of materials. Do you know strain limits and theories for effective strain? The idea to ask for strain limits came to me because for soft materials with large deformation there are material models with limiting chain extensibility, which lead to strain limits (or better stretch limits because of large deformation). The Gent rubber material model is a prominent example, https://en.wikipedia.org/wiki/Gent_(hyperelastic_model). The Strain Invariant Failure Theory (SIFT) for composites is a different example.
The uniaxial linearly elastic case is trivial because stress and strain are related by the Young's modulus of the brittle material. Nevertheless, strain-based criteria have not yet found proper recognition. I am concerned with general states of stress and strain in concrete, rock, bone and similar brittle (or quasi-brittle) materials.
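The limiting-chain-extensibility idea behind the Gent model can be made concrete numerically; a sketch of the standard incompressible uniaxial Gent stress (shear modulus mu and limit parameter Jm are illustrative values; the stress diverges as I1 - 3 approaches Jm, which is exactly what defines the stretch limit):

```python
def gent_uniaxial_stress(lam, mu=1.0, Jm=30.0):
    """Cauchy stress for incompressible uniaxial tension, Gent model:
    sigma = mu * Jm * (lam^2 - 1/lam) / (Jm - (I1 - 3)), I1 = lam^2 + 2/lam."""
    I1 = lam ** 2 + 2.0 / lam
    return mu * Jm * (lam ** 2 - 1.0 / lam) / (Jm - (I1 - 3.0))

def stretch_limit(Jm=30.0):
    """Largest admissible uniaxial stretch: solve lam^2 + 2/lam - 3 = Jm
    by bisection (the left side is increasing for lam > 1)."""
    lo, hi = 1.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid ** 2 + 2.0 / mid - 3.0 < Jm:
            lo = mid
        else:
            hi = mid
    return lo
```

For Jm = 30 the uniaxial stretch limit comes out near 5.7, and the stress blows up as that value is approached, i.e. a strain (stretch) limit rather than a stress limit.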
I’m trying to prepare 20 mM PBS.
The recipe that I saw (https://en.m.wikipedia.org/wiki/Phosphate-buffered_saline) mentions the following amounts of salts to get 1x PBS:
NaCl 8 g/L
KCl 0.2 g/L
Na2HPO4 1.42 g/L
KH2PO4 0.24 g/L
Following this recipe, what will the final PBS concentration be? What should I do to prepare a 20 mM buffer?
Many thanks in advance.
kind regards :)
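For what it's worth, the molarity implied by the recipe can be computed directly from the gram amounts; a sketch (standard molar masses; reading "20 mM PBS" as total phosphate, which is one common convention):

```python
# Molar masses in g/mol (standard values)
masses = {"NaCl": 58.44, "KCl": 74.55, "Na2HPO4": 141.96, "KH2PO4": 136.09}
grams_per_L = {"NaCl": 8.0, "KCl": 0.2, "Na2HPO4": 1.42, "KH2PO4": 0.24}

# Convert g/L to mM for each salt
mM = {s: grams_per_L[s] / masses[s] * 1000.0 for s in masses}

phosphate_mM = mM["Na2HPO4"] + mM["KH2PO4"]  # total phosphate in 1x PBS
scale = 20.0 / phosphate_mM                  # factor to reach 20 mM phosphate
```

On this reading, 1x PBS is roughly 11.8 mM in total phosphate, so scaling the salt amounts by about 1.7x would give a 20 mM phosphate buffer.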
How should one properly work through the development of an integral like the Abel-Plana integral shown in this image:
I am interested in having a set of steps for attacking the problem of developing the integral, and in determining a criterion of convergence for any complex value s; that is, the integral could have some specific behavior at, for example, s = 1/2 + it, which I am interested in studying.
I am interested in the proper evaluation of that integral using only formal steps in complex analysis.
The Abel-Plana formula also appears at https://en.wikipedia.org/wiki/Abel%E2%80%93Plana_formula
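For reference, the standard form of the Abel-Plana formula (for f analytic and suitably decaying in the right half-plane), as on the linked page, is:

∑_{n=0}^{∞} f(n) = ∫_0^∞ f(x) dx + f(0)/2 + i ∫_0^∞ [f(ix) − f(−ix)] / (e^{2πx} − 1) dx

Convergence of the last integral at a point such as s = 1/2 + it then comes down to how the growth of f(ix) − f(−ix) along the imaginary directions compares with the e^{2πx} − 1 denominator, which is what any convergence criterion has to control.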
In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For an example, see https://en.wikipedia.org/wiki/Lag_operator
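The definition can be sketched in a few lines (numpy; the leading entries, which have no predecessor, are marked NaN):

```python
import numpy as np

def lag(x, k=1):
    """Apply the lag operator L^k: (L^k x)_t = x_{t-k}.
    The first k entries have no predecessor and are set to NaN."""
    out = np.full(len(x), np.nan)
    out[k:] = x[:-k]
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
lx = lag(x)  # first entry NaN, then the series shifted back by one
```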
Are there any articles or projects about the interaction of the Schumann resonance with the brain's alpha or theta waves?
- The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency portion of the Earth's electromagnetic field spectrum; the fundamental Schumann resonance frequency is 7.83 Hz.
- Alpha waves are neural oscillations in the frequency range of 8–12 Hz
- The Chu construction allows one to obtain a *-autonomous category from the data of a closed symmetric monoidal category and a dualizing object.
- The Cayley-Dickson construction builds an algebra B = A + A with involution from the data of an algebra A with involution *. Applied to the field of real numbers, it gives successively the field of complex numbers, then the skew-field of quaternions, then the non-associative algebra of octonions, etc.
Given the closeness of the expressions for the multiplication m: B \otimes B -> B and for the multiplicative unit of B, we believe that there is an intimate link between the two notions.
Has such a link been described in a reference text?
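For concreteness, the Cayley-Dickson doubling can be written down recursively; a sketch using one common convention, (a, b)(c, d) = (ac - conj(d)b, da + b conj(c)) with conjugation (a, b)* = (conj(a), -b), and elements stored as flat coefficient lists of length 2^n:

```python
def conj(x):
    # Cayley-Dickson conjugation: (a, b)* = (conj(a), -b); reals are fixed.
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-v for v in x[h:]]

def add(x, y):
    return [a + b for a, b in zip(x, y)]

def mul(x, y):
    # Doubling formula: (a, b)(c, d) = (a c - conj(d) b, d a + b conj(c))
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = add(mul(a, c), [-v for v in mul(conj(d), b)])
    right = add(mul(d, a), mul(b, conj(c)))
    return left + right
```

Starting from the reals, this reproduces the complex numbers, the quaternions (i·j = k = -j·i) and the octonions (where associativity fails), so the multiplication map m whose shape is being compared to the Chu construction is exactly the recursive formula above.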
Are there any tutorials which demonstrate the usage of C++ linear algebra libraries like Eigen ( http://eigen.tuxfamily.org/index.php?title=Main_Page ) or Blaze ( https://bitbucket.org/blaze-lib/blaze/wiki/Getting_Started ) to build CFD applications?
In short: how to initialize sparse matrices and use the built-in iterative solvers.
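The core pattern is the same across such libraries: assemble the matrix from (row, col, value) triplets, then hand it to an iterative solver. A sketch of that pattern in scipy, with the corresponding Eigen calls noted in comments (Eigen::Triplet, setFromTriplets, BiCGSTAB):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import bicgstab

# Assemble a 1D Poisson (tridiagonal) matrix from (row, col, value) triplets.
# Eigen equivalent: std::vector<Eigen::Triplet<double>> + A.setFromTriplets(...)
n = 100
rows, cols, vals = [], [], []
for i in range(n):
    rows += [i]; cols += [i]; vals += [2.0]
    if i > 0:
        rows += [i]; cols += [i - 1]; vals += [-1.0]
    if i < n - 1:
        rows += [i]; cols += [i + 1]; vals += [-1.0]
A = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

# Iterative solve. Eigen equivalent:
#   Eigen::BiCGSTAB<Eigen::SparseMatrix<double>> solver;
#   solver.compute(A);  x = solver.solve(b);
b = np.ones(n)
x, info = bicgstab(A, b)  # info == 0 means the solver converged
```

The Eigen "Sparse matrix manipulations" and "Solving Sparse Linear Systems" tutorial pages walk through exactly this triplet-assembly / iterative-solve workflow in C++.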
I'm looking for morphological models that could be used for stemming (NLP) in Python for the following languages: Croatian, Czech, Estonian, Slovak.
I am new to Wannier tools and am trying to plot a normal band structure with them. I am using the link below to do the calculations.
When I set the tag LWANNIER90 = .TRUE.
I got only the wannier.wout file as output, with an error:
Error: Problem opening input file wannier.mmn
Kindly find my input files, and please suggest where I am going wrong.
Any help would be highly appreciated!
I am now working on the evaluation of an innovative renewable energy generation system. For the evaluation of the economic aspects I was planning to use the LCOE (levelized cost of energy), and I have some doubts/questions about it.
In particular, I am not sure how to incorporate into the usual definition (the one found on Wikipedia, https://en.wikipedia.org/wiki/Levelized_cost_of_energy, and normally found in articles) terms corresponding to:
- Selling excess energy that cannot be used or stored on site. I believe I can add a negative term, Si, to the costs sum, but I am not sure.
- Residual value at the end of the life cycle. I think I can add a negative term, RV/(1+r)^n, to the numerator, where r is the interest rate and n the useful lifetime. This term arises from the fact that at the end of the lifetime some materials, in particular metals, have a value for recycling.
- Disposal cost. I believe I can add a term DC/(1+r)^n, as in certain systems it is necessary to spend some money to properly dispose of parts of the system.
I have some doubts about whether I am thinking about this correctly. Can someone give an opinion, or a reference that would help me with these issues?
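The three proposed adjustments can be written down explicitly to check their effect; a sketch under the assumptions described above (constant annual figures for simplicity, sales Si credited yearly, RV credited and DC charged at end of life, all discounted at rate r):

```python
def lcoe(capex, annual_cost, annual_energy, years, r,
         annual_sales=0.0, residual_value=0.0, disposal_cost=0.0):
    """Levelized cost of energy with the extra terms proposed above:
    annual_sales (Si) reduces the yearly cost, residual_value (RV) is
    credited and disposal_cost (DC) charged at end of life, discounted."""
    costs = capex
    energy = 0.0
    for t in range(1, years + 1):
        costs += (annual_cost - annual_sales) / (1 + r) ** t
        energy += annual_energy / (1 + r) ** t
    costs += (disposal_cost - residual_value) / (1 + r) ** years
    return costs / energy
```

Written this way, the signs behave as expected: sales and residual value lower the LCOE, disposal cost raises it, which matches the intuition in the bullet points.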
Why aren't drugs that activate the cholinergic anti-inflammatory pathway (n-cholinomimetics, α7nAChR agonists, etc.) used in the treatment of COVID-19?
It has been found that activation of the cholinergic anti-inflammatory pathway reduces the concentration of pro-inflammatory cytokines in the blood and organs during sepsis and various infectious diseases, while significantly reducing mortality.
I have two proteins of interest which I docked using ClusPro, and I am interested in finding the interacting residues at the interface. I ran the script available at the PyMOL Wiki (https://pymolwiki.org/index.php/InterfaceResidues) and entered the command, but it does not recognise the interface. I tried merging both proteins into a single PDB file, running the script and entering the command again, but this did not solve the issue. Does anybody know how to approach this?
I'm new to all things spectroscopy, so be gentle :-). I'm puzzled about the difference between fluorescence and most variations of Raman scattering. Both involve exciting a system from its ground state to some excited state, which then relaxes back down to a vibrational level of the ground state, emitting radiation of lesser energy than the incident radiation (at least for Stokes Raman).
1. What am I missing?
2. Should I assume the difference in energy between initial and final radiation is lost in the vibrational state, so as to conserve energy?
3. Yeah... and the virtual energy states involved in Raman scattering? That's just too weird for now.
So, in simple English, what are they really? The Wiki explanations are too over the top to be readily understood by a newbie like me.
I have a list of social media data points (i.e. latitude and longitude). Let's say that the points represent people, and I am trying to identify population centers based on location alone. Classic algorithms like k-means may not work: the approach needs to be unsupervised, and the number of clusters is not known beforehand. Therefore I am considering DBSCAN (https://en.wikipedia.org/wiki/DBSCAN) as a good choice. However, I came across studies which have extracted spatial clusters using Moran's I (https://en.wikipedia.org/wiki/Moran's_I). It is said that Moran's I detects statistically significant clusters. Though DBSCAN is widely used and has huge practical value, I am not sure whether DBSCAN clusters are statistically significant (I have not seen this addressed in any of the literature I have read).
How can we say that using DBSCAN is justifiable even though the clusters it identifies are not tested for statistical significance (as with Moran's I)? Or do we need to use Moran's I for this?
I would really appreciate your kind guidance.
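For the clustering step itself, a minimal DBSCAN sketch on geographic points (scikit-learn's haversine metric expects [lat, lon] in radians; the coordinates below are made up for illustration):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative points (lat, lon in degrees): two tight groups plus one outlier.
pts = np.array([[6.90, 79.85], [6.91, 79.86], [6.90, 79.86],
                [7.29, 80.63], [7.30, 80.64],
                [8.50, 81.20]])
coords = np.radians(pts)

eps_km = 5.0
db = DBSCAN(eps=eps_km / 6371.0,  # eps in radians = distance / Earth radius
            min_samples=2, metric="haversine").fit(coords)
labels = db.labels_               # -1 marks noise points
```

Note that DBSCAN by itself returns labels only; a significance statement of the Moran's I kind would have to come from a separate test (e.g. comparing against clusterings of spatially randomized points).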
We know that heating happens due to the mid-infrared region of solar irradiation.
Ice melts due to absorption of which specific range of wavelengths of electromagnetic spectrum?
Can we relate this to the vibrational states of the hydrogen bonds inside ice crystals and of the water molecules?
And is all the energy absorbed in the UV, infrared and microwave regions used for heating the ice mass, or could it be used for breaking O-H bonds and just lead to ionisation?
Could someone please provide a working example (or point me to a good resource) of how 'lambda local' (along with lambda BG, 1k, 5k and 10k) is calculated for peak calling in MACS?
λ_local = max(λ_BG, [λ_region, λ_1k], λ_5k, λ_10k)
Does 'max' here mean the upper limit, i.e. whichever of the variables λ_BG, λ_region, λ_1k, λ_5k, λ_10k is highest, with the p-value then calculated using this 'max' value for λ? Or is λ_local a product/average/sum of all the other λs?
I have been unsuccessful in trying to understand it by referring to the script and the tutorial given in the link below:
Also, is the 'mfold' (10-30 fold enrichment) parameter estimated w.r.t. lambda BG?
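One plausible reading of the formula, taken at face value: 'max' is the ordinary maximum over the candidate λ values (not a product, average or sum), with the bracketed terms included only in some calling modes; a sketch of that reading:

```python
def lambda_local(lambda_bg, lambda_1k, lambda_5k, lambda_10k,
                 lambda_region=None):
    """MACS-style local lambda as the plain maximum of the candidate rates.
    lambda_region is optional, mirroring the brackets in the formula above."""
    candidates = [lambda_bg, lambda_1k, lambda_5k, lambda_10k]
    if lambda_region is not None:
        candidates.append(lambda_region)
    return max(candidates)
```

The p-value at a candidate peak would then be computed from a Poisson model using this single λ_local as the rate.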
THE LONG VARIANT OF MY “COMPOSITE” DOUBLE-QUESTION:
Can elementary neutral massless fermions, aka “elementary fermionic (neutral) luxons” (EFLs) (whose true existence isn’t rejected in principle by mainstream physics, and not to be confused with Weyl fermions, which are not elementary particles but quasi-particles), be valid candidates for dark matter and dark energy? And if so, do you have any suggestions for possible experiments that may confirm or refute the existence of these EFLs?
My zero-energy hypothesis (ZEH), launched in my recent article (“On a Possible Logarithmic Connection between Einstein's Constant and the Fine-Structure Constant, in Relation to a Zero-energy Hypothesis”, Physical Science International Journal [PSIJ], ISSN: 2348-0130, Vol. 24, Issue 5, pages 22-40: https://www.researchgate.net/publication/342530363 and https://www.journalpsij.com/index.php/PSIJ/article/view/30191), PREDICTS that all EPs may be “conjugated” in boson-fermion pairs of “mass-conjugates” (a new type of physical symmetry proposed by ZEH, produced by a balance between the strengths of the electromagnetic and gravitational fields at Planck scales), with the rest masses of all known (and unknown!) elementary particles (EPs) being the conjugated solutions of a simple quadratic equation (proposed by ZEH) which allows all neutral EPs to have zero rest masses. ZEH also predicts that spacetime is probably granular (and very viscous!) at the Planck scale, allowing G/r and k_e/r ratios with only discrete values in the predicted length interval [r_min, 5*10^3*r_min]. If the quantum vacuum is ever proved to actually be a “fluid”-like entity, my ZEH predicts that the vacuum may be granular and very viscous at scales close to Planck scales, and that its movement and/or deformations may therefore be governed by an equation similar to that of viscous flow (https://en.wikipedia.org/wiki/Lambert_W_function#Viscous_flows), which is solvable using a Lambert W function.
Furthermore, my ZEH predicts two elementary massless fermions (the here-called “Higgs-fermion” [Hf] and “Z-fermion” [Zf], which can be regarded as elementary fermionic luxons [EFLs] [https://en.wikipedia.org/wiki/Massless_particle], NOT to be confused with Weyl fermions [which aren’t EPs but quasi-particles]) as the “mass-conjugates” of the Higgs and Z bosons, and as potentially viable candidates for both dark matter and dark energy. Being zero-mass fermions, they are also predicted by ZEH to move at the speed of light and thus to have been spread by the Big Bang in all directions of space at this speed. Mainstream physics DOESN’T reject, in principle, the true existence of EFLs.
Do you have any suggestions for possible experiments that may confirm or refute the existence of my ZEH-predicted EFLs Hf and Zf?
It would also be interesting to know (at least theoretically) whether these Hf and Zf have a weak charge or not, and thus whether they couple with the weak nuclear field (WNF)/participate in the weak interaction (https://en.wikipedia.org/wiki/Weak_interaction) (as all the other known fermions of the Standard Model have been proved to do) or NOT. What do you think?
As ASLERD (https://en.wikipedia.org/wiki/ASLERD) we have conducted surveys in Italy at both the national level (university teachers, school teachers and parents) and the local level (university students, high school teachers, students and parents). Just to provide an idea, on ResearchGate you can find a preprint on
and a paper on
Conference Paper Effect induced by the Covid-19 pandemic on students’ percept...
We wish to get in contact with colleagues that have conducted similar studies in other countries and/or that wish to use the same questionnaires to support comparative studies and create a much wider dataset.
If interested, write an email to
aslerd [dot] org [at] gmail [dot] com
in which you
1) describe your (and your research group's) interests in education (just one sentence);
2) say whether you are interested in investigating distance learning during the Covid-19 emergency or after the universities/schools re-open;
3) say whether you are interested in an investigation at university level, school level, or both;
4) say whether, apart from the standard localization, you wish to translate the questionnaires into your local language (at present the questionnaires are available in Italian, English [to be validated] and Arabic);
5) say whether you intend to carry out the investigation in the whole country or to consider only a local case history (in the latter case please describe it, just one sentence);
6) explain how you plan to involve your target group.
Thanks in advance.
You may understand that motivation and well-planned research are very important to establish a successful alliance and collect meaningful datasets (as we are already doing with some of you).
These are all very specific questions related to some lab questions I am supposed to fill out. I was able to do the first part, and I know the equation for the Landé factor for the second, but I do not know exactly what the terms are going to be. Secondly, the Doppler question is very confusing to me because the reference material I have does not discuss it at all. Would the equation I want to use be the "full width at half maximum" equation I found on Wikipedia here: https://en.wikipedia.org/wiki/Doppler_broadening? It just seems like the only thing I could find whose terms I could look up and use from the problem. I have attached the questions for more details. Thank you.
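If the Doppler-broadening FWHM from that Wikipedia page is what's needed, it is straightforward to evaluate; a sketch of the standard thermal-broadening formula Δf = f0 · sqrt(8 kT ln2 / (m c²)) (the hydrogen line and temperature below are illustrative choices, not taken from the lab sheet):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e8   # speed of light, m/s

def doppler_fwhm(f0, T, m):
    """FWHM of a thermally Doppler-broadened line at rest frequency f0 [Hz],
    temperature T [K], emitter mass m [kg]."""
    return f0 * math.sqrt(8.0 * k * T * math.log(2) / (m * c * c))

# Illustrative numbers: hydrogen Balmer-alpha (656.3 nm) at 300 K
m_H = 1.6735e-27                 # mass of a hydrogen atom, kg
f0 = c / 656.3e-9                # rest frequency, Hz
```

For these illustrative numbers the width comes out on the order of a few GHz, which shows how the terms (line frequency, temperature, emitter mass) enter.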
I am trying to design a controller using a control Lyapunov function (CLF) for a particular problem, and I want to find the settling time of the controller for convergence near a desired equilibrium point that I define myself. It would be very nice if you could provide some intuition for this.
Also, in the example section of the Wikipedia page I could not find any reference, but the controller appears to converge to a desired equilibrium point, and according to the authors varying alpha and kappa would do. I need some light on this topic; can you help me?
Why do inverted organic solar cells typically have lower efficiencies than the standard architecture?
Wikipedia says this, as do a few other places, but I cannot find an explanation; the references always state it without explanation as well.
"Inverted cells can utilize cathodes out of a more suitable material; inverted OPVs enjoy longer lifetimes than regularly structured OPVs, but they typically don't reach efficiencies as high as regular OPVs "
An MDP is a discrete-time stochastic control process, providing a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision-maker [Source: Wiki]. MDPs are used in areas of optimal response such as reinforcement learning.
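The "partly random, partly controlled" structure can be made concrete with value iteration on a toy MDP (the transition and reward numbers below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Toy 2-state, 2-action MDP.
# P[a][s][s'] = probability of moving s -> s' under action a (random part)
# R[s][a]     = expected reward for taking action a in state s (controlled part)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

gamma, V = 0.9, np.zeros(2)
for _ in range(500):                      # value iteration to the fixed point
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)                 # greedy policy from converged Q
```

This is the same Bellman backup that reinforcement-learning methods approximate when P and R are unknown.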
I ran into an interesting mathematical problem that results from the use of infinitesimal vector calculus in relation to the Helmholtz theorem and the vector Laplace operator.
I've also posted this question here, where I've edited it a bit further and added some things:
What is very interesting is that the Helmholtz decomposition is hidden within the vector LaPlace operator and this can be used to define potential fields. For the general case:
The terms in the definition of the vector Laplacian can be negated and set equal to zero to obtain the vector Laplace equation:
-∇²𝐅 = -∇(∇·𝐅) + ∇×(∇×𝐅) = 0,
and then the terms in this identity can be written out to define a vector field for each of them:
𝐀 = ∇×𝐅
Φ = ∇⋅𝐅
𝐁 = ∇×𝐀 = ∇×(∇×𝐅)
𝗘 = −∇Φ = −∇(∇⋅𝐅)
And, since the curl of the gradient of any twice-differentiable scalar field Φ is always the zero vector (∇×(∇Φ) = 0), and the divergence of the curl of any vector field 𝐀 is always zero as well (∇⋅(∇×𝐀) = 0), we can establish that 𝗘 is curl-free and 𝐁 is divergence-free, and we can write: −∇²𝐅 = 𝗘 + 𝐁.
As can be seen from this, the vector Laplacian establishes a Helmholtz decomposition of the vector field 𝐅 into an irrotational or curl free component 𝗘 and a divergenceless component 𝐁, along with associated potential fields Φ and 𝐀, all from a single equation c.q. operator.
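The underlying identity can be sanity-checked numerically; a finite-difference sketch (numpy central differences on a smooth periodic sample field; the check is restricted to interior points, where the discrete difference operators commute):

```python
import numpy as np

# Check lap(F) = grad(div F) - curl(curl F) on a sample field.
n = 32
ax = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = ax[1] - ax[0]
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
F = np.array([np.sin(Y), np.sin(Z), np.sin(X)])  # smooth sample vector field

def grad(f):
    return np.array(np.gradient(f, h))

def div(v):
    return sum(np.gradient(v[i], h, axis=i) for i in range(3))

def curl(v):
    d = [np.gradient(v[i], h) for i in range(3)]  # d[i][j] = d v_i / d x_j
    return np.array([d[2][1] - d[1][2],
                     d[0][2] - d[2][0],
                     d[1][0] - d[0][1]])

lhs = np.array([div(grad(F[i])) for i in range(3)])  # componentwise Laplacian
rhs = grad(div(F)) - curl(curl(F))
```

This is the discrete setting the argument below appeals to: with a fixed grid spacing h the decomposition is perfectly well-behaved, and the issue only appears in the limit h → 0.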
For fluid dynamics, we can use this decomposition to define a vector and a scalar potential for the velocity field, analogous to the electrodynamic domain like this:
v_fd = -∇Φ_fd + ∇×𝐀_fd
𝗘_fd = −∇Φ_fd
𝐁_fd = ∇×𝐀_fd
ω = ∇×𝐁_fd
From this, we can do an analysis of the units of measurement, since the curl, grad and div operators all have a unit of measurement of per meter [1/m]. Since v, 𝗘 and 𝐁 all have a unit of measurement of velocity in meters per second [m/s], we obtain a unit of measurement of cubic meters per second [m³/s] for the primary field 𝐅, thus describing a volumetric flow field, similar to the volumetric flow rate:
"Volumetric flow rate is defined by [...] the flow of volume of fluid V through a surface per unit time t."
It seems this can also be defined as the flow velocity vector field v times an area A perpendicular to v, with a surface proportional to h² square meters [m²], where h is the physical length scale in meters [m].
For finite difference or discrete vector calculus methods, such as used in FDTD simulation software, h denotes the spacing of the discretization grid, which may be variable or constant.
This leads to the conclusion that 𝐅 ≠ 0 for any v ≠ 0 and any h > 0, and therefore, when using discrete mathematics, 𝐅 exists and, according to the Helmholtz theorem, is uniquely defined by the two potential fields.
Now here's the problem: when we take the limit h → 0, which we do with infinitesimal notation, we obtain 𝐅 = 0, which cannot be correct for any field where v ≠ 0. So what we find is that there is a limit to the applicability of the Helmholtz decomposition when using infinitesimal calculus, and that needs to be worked around.
However, if v is known and 𝐅 can be defined as v times an area A perpendicular to v, it seems it should be possible to compute the curl and divergence of 𝐅 from this definition and thus arrive at a completely closed system of potential theory, whereby all fields are uniquely defined and can be analytically solved, except the volumetric flow field 𝐅 itself.
So, the question is: how do we do that?
Hopefully, some mathematician finds this problem interesting enough to think about, because it has quite a lot of consequences for the actual applicability of the Helmholtz decomposition in the general case as well. Now that we have argued that the Helmholtz decomposition does not actually hold in this case, it is an interesting question for mathematicians to figure out when this is the case and what consequences it has.
I've been working on Cohen-class distribution functions and wanted to write code that computes the TF distribution based on the general function. With this I can swap in different kernel functions and try different TF distributions.
The equation from Wiki: https://en.wikipedia.org/wiki/Choi%E2%80%93Williams_distribution_function
Can someone please tell me what mistake I have made in the computation? I imagine there is some mistake in the integration, but I don't know where.
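For debugging, it can help to start from the kernel-free member of the class; a minimal discrete Wigner-Ville sketch (the Choi-Williams case then only changes the kernel applied to the instantaneous autocorrelation before the transform; conventions for the frequency axis vary between references):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution: FFT over the lag index m of the
    instantaneous autocorrelation K[n, m] = x[n+m] * conj(x[n-m]).
    Cohen-class members multiply K by a kernel before transforming
    (kernel = 1 recovers the WVD; Choi-Williams uses an exponential kernel)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # lags available at this time index
        K = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            K[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(K).real
    return W  # row n = time; column k maps to frequency f = k / (2N)

# A complex tone at f0 = 0.125 cycles/sample: since K oscillates at 2*f0,
# the energy should concentrate at bin k = 2 * f0 * N.
N = 64
tone = np.exp(2j * np.pi * 0.125 * np.arange(N))
W = wigner_ville(tone)
```

Comparing intermediate K arrays from this reference against the general-kernel code is a quick way to localize whether the error is in the autocorrelation, the kernel, or the final transform.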
I am conducting GWAS QC and would like to exclude regions of high linkage disequilibrium (LD) during pruning of my data (which is in GRCh38). Does anyone know where I may find a list of high LD positions in build 38?
... or how may I map the positions listed in https://genome.sph.umich.edu/wiki/Regions_of_high_linkage_disequilibrium_(LD) to build 38?
I also found a list in build 37 in De Vlaming et al., 2017 (Table 5) (https://www.biorxiv.org/content/10.1101/211821v1 ), but am not sure what would be the best approach.
I tried to use liftOver to map the list of high-LD regions from De Vlaming et al., 2017 to build 38 using the commands below, but there were plenty of unlifted positions in both cases (in brackets are the row counts in the output and unlifted BED files).
liftover input.bed hg18ToHg19.over.chain.gz outputA.bed unliftedA.bed
(16 positions lifted, 18 positions unlifted)
liftover input.bed GRCh37_to_GRCh38.chain outputB.bed unliftedB.bed
(15 positions lifted, 20 positions unlifted)
Does liftOver prefer hg18ToHg19.over.chain.gz or GRCh37_to_GRCh38.chain?
Would it be okay for me to use the output BED files even though many of the high-LD regions were not lifted from the original b37 list?
I have a polymer end-capped with a polysilsesquioxane (POSS) like the one here: https://en.wikipedia.org/wiki/Silsesquioxane#/media/File:Silsesquioxane_T8_Cube.png where R = i-butyl.
I haven't found much info on what types of reactions these POSS groups can undergo.
I would like to know if there are some procedures to modify that POSS moiety and convert it into an amino, carboxyl or triethoxysilane group, for example.
Also, are those R groups easy to replace? Or are the Si-(i-butyl) bonds very stable?
Here are the figure and description of a “do it yourself” face mask. You can prepare it in 10 minutes.
1. You need a textile head scarf. Its size may vary; the one indicated in the figure can encircle a normal human head.
2. The scarf should be folded diagonally in half.
3. Refold the lower peak of the scarf and sew it onto one layer of the scarf. Put a filter between the two scarf layers. The filter may be a paper towel, paper handkerchief, toilet paper, a paper napkin or nonwoven fabric (https://en.wikipedia.org/wiki/Nonwoven_fabric).
The filter should be changed every 2 hours.
Tie the scarf behind your head tightly, but so that you are still able to untie it. Of course, it is not as efficient as an FFP2 or FFP3 mask, but it is better than a normal surgical mask. The scarf can be washed every day.
All the best,
The brachistochrone is a well-known problem in the calculus of variations and optimal control. I ask whether there is any explicit solution to the problem. In the existing texts, x and y are usually parameterized in a dummy variable such as t, phi or theta. I want to know whether any explicit solution is available, i.e. y(x) in terms of x, not in terms of an intermediary variable.
In the PDF attached to this question, there is some hint of an explicit solution (but it seems to be computational and numerical).
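For completeness, the cycloid solution can still be evaluated as y(x) by numerically inverting the parameterization, even though no elementary closed form for y(x) exists; a sketch:

```python
import math
from scipy.optimize import brentq

# The standard brachistochrone solution is the cycloid
#   x(t) = k (t - sin t),  y(t) = k (1 - cos t),  t in [0, 2*pi],
# with y measured downward. x(t) is strictly increasing on this interval,
# so y(x) is well-defined and can be obtained by root-finding on t.
def brachistochrone_y(x, k):
    t = brentq(lambda t: k * (t - math.sin(t)) - x, 0.0, 2.0 * math.pi)
    return k * (1.0 - math.cos(t))
```

This is presumably the kind of "computational and numerical" explicit solution hinted at in the attached PDF: exact up to root-finding tolerance, but not a formula in elementary functions.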
So, around January last year I got back to using the Concerto platform, but apparently the installation guide no longer recommends using AWS. So I went through this very small tutorial until the end, but when doing the last part, going to http://localhost/admin, nothing happens. That's because the tutorial looks kind of incomplete. I'm missing the steps between step 5 and logging in.
Another point that I don't get: is AWS still needed? The new tutorial is really small and shallow for whoever isn't familiar with the extra hidden steps...
Collaboration is essential for ensuring learning for all learners in one classroom. Blackboard collaboration tools like blogs and wikis can be considered among the best platforms for increasing interaction and motivation to accomplish a task in a group. Selection of the tasks is the most important factor in achieving the learning outcomes. So, for reading-skill development, what types of tasks can language teachers design and assign for learners in a blog?
In chemistry and physics what are "atomic orbitals" actually USED for?
"In atomic theory and quantum mechanics, an atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus." (en.wikipedia.org/wiki/Atomic_orbital)
"In chemistry, a molecular orbital (MO) is a mathematical function describing the wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region." https://en.wikipedia.org/wiki/Molecular_orbital
These articles are about the probability of finding an electron. There have impressive diagrams illustrating spherical harmonics, drum membrane oscillation modes, shapes of orbitals, subshell filling rules, etc. There are wave equations, hydrogen-like single electron atoms, Hartree-Fock approximations for multi-electron atoms, etc. . . .
We are told (variously) that the:
electronic structure of neon is: 1s² 2s² 2px² 2py² 2pz², or 1s² 2s² 2p⁶
electronic structure of potassium is: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s¹
electronic structure of barium is: 1s² 2s² 2p⁶ 3s² 3p⁶ 3d¹⁰ 4s² 4p⁶ 4d¹⁰ 5s² 5p⁶ 6s².
Why should I care? Why is this taught to freshmen in college chemistry?
After being impressed by all the pictures of orbital shapes, I then read this:
"Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number l of the same shell n (e.g. all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same l) is spherical. This is known as Unsöld's theorem." (https://en.wikipedia.org/wiki/Atomic_orbital )
Does any of this have any practical use other than to pass a test or impress your professor or girlfriend? This "knowledge" just does not seem to connect with anything. The concern over "finding an electron" seems like a pathological obsessive-compulsive disorder.
In my view, "electron energy levels" could simply be "energy levels within the atom"--and having nothing fundamentally to do with badly misbehaving electrons.
It all reminds me of a joke I heard on TV: "Does Superman have a super sense of humor or just an ordinary sense of humor?" Of course, we need a grant to study the issues here. The sense of humor might be related to diet. What does Superman eat? How do you define "super sense of humor"? etc., etc., etc. But after all the studies are done and all the money is spent, WHO CARES? This is useless "knowledge".
Two engineering firms were trying to "out spec" each other on their quality programs. One was bragging about 8 sigma quality, the other, 6 sigma quality. But the bottom line proved to be: "The quality that we don't have is BETTER than the quality that you don't have either." This is a distinction without a difference--nothing meaningful.
Is all this s,p,d,f orbital stuff simply mathematical circumlocution for our amusement? A horror movie based on Schrodinger's mathematical "maps of Hell" ? Could we learn just as much by getting drunk and watching a really stupid grade B movie on TV?
Of what USE is this so-called "knowledge"? If someone knows, PLEASE speak up! Please enlighten us!
Is it alright if I run an ISIF = 3 and ISPIN = 2 calculation for 2 hours, after which the job terminates (limited computing resources), and then simply copy CONTCAR to POSCAR and restart it for 2 more hours, and so on? I can only run a calculation for 2 hours at a time on the computing facility I use. Is this approach alright, or will there be issues?
ISTART seems like an appropriate option here, but I fail to understand its VASP wiki page, so I only use ISTART = 0. Any suggestions in this scenario are welcome.
Prince Prisdang became ambassador to eleven European countries and the United States. Later, the prince became the Buddhist patriarch of Colombo, Sri Lanka, but returned to Bangkok in 1911, where he was obliged to disrobe.
Rudolf Virchow was a German physician, anthropologist, pathologist, prehistorian, biologist, writer, editor, and politician. He is known as "the father of modern pathology" and as the founder of social medicine, and to his colleagues, the "Pope of medicine". He wrote in 1848 “Medicine is a social science, and politics is nothing else but medicine on a large scale.” What is your opinion?
You may follow the link:
This fake case report has been plagiarized from previously published works and contains images that were edited and copied from different sources, including Wikipedia. Figure 3 of this article is presented as a histopathology image of an ERCP tissue sample, but it is actually an image from the Wikipedia page on tubulovillous adenoma, published in 2009, showing a surgically resected tumor.
The original case report was published in 2015 in a local journal.
The authors are affiliated with the University of Arizona, and this reflects poorly on the institute and the individuals involved in this phony research.
Does this suggest that electronic circuits naturally produce harmonics of the input frequency? That electrons themselves break up into lower-order components?
Does the sin-cubed (and cos-cubed) identity, sin³x = (3 sin x − sin 3x)/4 (and cos³x = (3 cos x + cos 3x)/4),
suggest that spatially transmitted frequencies ought to be detected by some sort of composite (or complementary) Fourier transform, where instead of detecting one frequency at a time for a set binary signal, we could subtract out known pieces of noise (these natural harmonics) and thereby improve the signal-to-noise ratio?
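The triple-angle identities behind the question can be spot-checked numerically; a minimal sketch:

```python
import math

# Numerically verify sin^3(x) = (3*sin(x) - sin(3x)) / 4 and
# cos^3(x) = (3*cos(x) + cos(3x)) / 4, the identities that make a
# cubic nonlinearity generate a third-harmonic component.
def sin_cubed_identity_error(x):
    return abs(math.sin(x) ** 3 - (3 * math.sin(x) - math.sin(3 * x)) / 4)

def cos_cubed_identity_error(x):
    return abs(math.cos(x) ** 3 - (3 * math.cos(x) + math.cos(3 * x)) / 4)

max_err = max(
    max(sin_cubed_identity_error(x), cos_cubed_identity_error(x))
    for x in [0.0, 0.1, 1.0, 2.5, -3.0]
)
```

The error is at floating-point rounding level for any x, which is why a cubed sinusoid contains exactly the fundamental plus the third harmonic and nothing else.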
I have trained word embeddings on a "clean" corpus with fastText, and I want to compare their quality against the pre-trained multilingual embeddings in BERT, which I understand to have been trained on a much noisier corpus (Wikipedia).
Any suggestions or ideas on how to go about evaluating/comparing the performance would be appreciated.
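One standard intrinsic evaluation is to correlate each model's cosine similarities with human word-similarity judgments (benchmarks such as WordSim-353 or SimLex-999) and compare the Spearman correlations. A minimal sketch, with made-up toy vectors and scores standing in for real embeddings and a real benchmark:

```python
import math

# Intrinsic evaluation sketch: Spearman correlation between model cosine
# similarities and human similarity scores. Embeddings and scores below
# are invented for illustration only.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    # Spearman rho = Pearson correlation of the ranks (no tie handling here).
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical embeddings and human scores for three word pairs.
emb = {"cat": [1.0, 0.1], "dog": [0.8, 0.3], "car": [0.1, 1.0], "truck": [0.2, 0.9]}
pairs = [("cat", "dog", 8.5), ("car", "truck", 9.0), ("cat", "truck", 2.0)]
model_scores = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
human_scores = [s for _, _, s in pairs]
rho = spearman(model_scores, human_scores)
```

Running both embedding sets through the same benchmark pairs gives one rho per model; the higher rho better matches human judgments. An extrinsic alternative is to plug each embedding into the same downstream classifier and compare task accuracy.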
I was reading about the multivariate Cox proportional hazards model at this website: http://www.sthda.com/english/wiki/cox-proportional-hazards-model, which uses the survival package for Cox regression. The summary of a Cox regression object outputs a bunch of information about the model, including a concordance index.
Is all of the data used to train the Cox regression model? If so, is the concordance index computed on that same training data? Does this cause overfitting?
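For reference, the concordance index itself is simple enough to compute by hand, which makes the train/test distinction concrete. A minimal sketch of Harrell's C on made-up data (not from the website):

```python
# Harrell's concordance index: among comparable pairs (subject i has an
# observed event before subject j's observed time), count how often the
# model assigns i the higher risk.
def concordance_index(times, events, risks):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0      # higher risk failed earlier: concordant
                elif risks[i] == risks[j]:
                    concordant += 0.5      # tied risks count as half
    return concordant / comparable

# Toy data: risks perfectly ordered against survival time, so c = 1.0.
c = concordance_index(times=[1, 2, 3, 4], events=[1, 1, 1, 0], risks=[4.0, 3.0, 2.0, 1.0])
```

To answer the overfitting concern: yes, the concordance printed by summary() is computed on the same data used to fit the model, so it is optimistic; computing this index on a held-out set (or via cross-validation) gives the honest estimate.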
The Role of Salt Glands In Marine Vertebrates for Navigation and Migration (Crystalline Electromagnetic Conductors to Sense the Direction of Travel) By: Maryellen Elizabeth Hart November 8, 2018
Goal: I was studying my biology and read about the salt gland of marine animals (birds, turtles, etc.). Before reading further, I hypothesized that the salt glands were being used for navigation (crystalline electromagnetic conductors to sense the direction of travel, the north and south poles, latitude and longitude), which is necessary for navigation in the ocean without the visual aid of land, sun, or stars. I researched everything I could find and have decided I am on the right track AND A NEW PATH. Supportive sources document the electromagnetic capability of crystalline structures contained within the salt glands of marine animals; however, no marine biologists have yet associated the salt gland with the navigation of migratory paths by marine animals, an ability long held as a great mystery of science. I would like to research this further and would like to be connected with anyone currently interested in or simultaneously researching this topic. I am reserving the copyright to my hypothesis if it turns out right. Please allow me the much-needed credit. Please, my friends, connect me with marine biologists who would be interested in my research. Thank you so very much. Blessings. Maryellen Elizabeth Hart, November 8, 2018. Sources: https://www.eidon.com/the-crystalline-electromagnetic-body/ and https://en.wikipedia.org/wiki/Salt_gland. "The geomagnetic field is relatively stable over biological time scales and is axial, with the magnetic field lines roughly directed north-south and symmetric in both hemispheres. This provides a reliable, static reference system for orientation and navigation. Alternatively, magnetic anomalies within the Earth’s crust can also be recognized and used as reference features. 
Taking advantage of these properties of the geomagnetic field, some groups of animals have developed a biological magnetic compass, similar to the magnetic compass used by humans to locate the north magnetic pole. The magnetic compass has been described as an axial compass (also known as Inclination compass) for migratory birds and homing pigeons (Wiltschko and Wiltschko 1972; Walcott and Green 1974), and is based on the axial course of the geomagnetic field lines on the Earth’s surface. The magnetic compass is used as a reference system and as a mechanism to maintain steady courses during homing and migrations. Therefore animals able to discriminate the minute but steady changes of the inclination angle and the intensity of the geomagnetic field can potentially establish their latitudinal position. To date, several models for position determination based on magnetic field parameters have been proposed (Davila 2005). Lohmann et al. (1999) proposed that sea turtles use a combination of intensity and inclination, as independent coordinates for map information. Contours of equal magnetic intensity and inclination form a grid that can potentially be used as a bi-coordinate position-finding system over areas of the Atlantic Ocean, where sea turtles spend most of their life cycle. This model cannot, however, be generalized since isolines of magnetic inclination and intensity intersect each other at high angles only over local regions of the Earth’s surface (Davila 2005). In regions where the isolines are near-parallel to each other, or where the magnetic landscape is dominated by crustal magnetic anomalies, the bi-coordinate model is not viable for position determination..." (Walker et al. 
2002). Date: 23 November 2018. https://www.researchgate.net/project/The-Role-of-Salt-Glands-In-Marine-Vertebrates-for-Navigation-and-Migration-Crystalline-Electromagnetic-Conductors-to-Sense-the-Direction-of-Travel-By-Maryellen-Elizabeth-Hart-November-8-2018
A paper simulates a four-leg traffic intersection with a traffic light on it. It proposes a dynamic traffic-light scheduling algorithm and considers the presence of emergency vehicles.
It uses the SUMO simulator to generate several mobility scenarios and NS-2 to implement the proposed algorithm. Finally, it measures the throughput and delay of vehicles at the intersection when the scheduling algorithm is used.
I am a beginner in traffic simulation and SUMO and NS2. I studied the traci_tls example in SUMO docs tutorials (https://sumo.dlr.de/wiki/Tutorials/TraCI4Traffic_Lights) which used just SUMO to control the traffic light for the emergency vehicles crossing.
Now, why did that paper use SUMO and NS2 together? Can the traffic light be controlled from NS2?
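For context, the decision logic of such a scheduler can be sketched independently of either simulator. The function and field names below are hypothetical, not from the paper; in a coupled setup, SUMO moves the vehicles and applies the chosen phase via TraCI, while NS-2 simulates the V2I messages that tell the controller where the emergency vehicle is:

```python
# Standalone sketch of an emergency-aware traffic-light scheduling rule
# (names are invented; this is not the paper's algorithm).
def pick_green_approach(queue_lengths, emergency_approach=None):
    """Give green to the approach carrying an emergency vehicle; otherwise
    serve the longest queue (a simple max-pressure-style rule)."""
    if emergency_approach is not None:
        return emergency_approach
    return max(queue_lengths, key=queue_lengths.get)

normal = pick_green_approach({"N": 7, "S": 2, "E": 4, "W": 1})
preempt = pick_green_approach({"N": 7, "S": 2, "E": 4, "W": 1}, emergency_approach="S")
```

NS-2 alone has no road-traffic model, which is the usual reason the two are coupled: the communication delay and packet loss simulated in NS-2 determine when (or whether) the controller in SUMO learns about the emergency vehicle.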
I am considering ferrofluids (https://en.wikipedia.org/wiki/Ferrofluid) as a target material for a detector that I am designing (conceptually). I want a liquid that can be magnetized, up to a tesla say, and one that has a long radiation length (so as to minimize the effects of multiple Coulomb scattering). Liquid argon and water have reasonably long radiation lengths, but I do not know whether they can be magnetized. I have seen a paper, https://iopscience.iop.org/article/10.1088/1367-2630/7/1/063/meta, but I suspect that magnetizing a large neutrino detector is impractical. It would be great if a liquid as cheap as water could be used for this purpose.
Here comes the question about ferrofluids. I read somewhere that it is possible to make this colloid at home by putting iron filings in water. This may be too simplistic, but if a ferrofluid has a long radiation length (I did not find any measurement) and can be magnetized, then perhaps it is not a bad option.
Does anyone have any suggestions or comments in this direction?
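One can estimate the radiation length of such a mixture without a measurement, by Bragg additivity of 1/X0 over the mass fractions. The X0 values below are the standard PDG numbers for water and iron; treating the suspended magnetite particles as pure iron is a rough simplifying assumption:

```python
# Rough Bragg-additivity estimate of a water-based ferrofluid's radiation
# length. PDG values: X0(water) = 36.08 g/cm^2, X0(Fe) = 13.84 g/cm^2.
# Approximating the magnetite particles as iron is an assumption.
X0_WATER, X0_IRON = 36.08, 13.84      # g/cm^2
RHO_WATER, RHO_IRON = 1.00, 7.87      # g/cm^3

def ferrofluid_x0_cm(w_iron):
    """Radiation length in cm for iron mass fraction w_iron suspended in water."""
    inv_x0_mass = w_iron / X0_IRON + (1.0 - w_iron) / X0_WATER   # (g/cm^2)^-1
    x0_mass = 1.0 / inv_x0_mass
    # Ideal-mixing density: component volumes assumed additive.
    density = 1.0 / (w_iron / RHO_IRON + (1.0 - w_iron) / RHO_WATER)
    return x0_mass / density

pure_water = ferrofluid_x0_cm(0.0)    # recovers water's ~36 cm
loaded = ferrofluid_x0_cm(0.3)        # 30% iron by mass: noticeably shorter
```

Under these assumptions, even a modest iron loading roughly halves the radiation length relative to pure water, so there is a direct trade-off between magnetization (more particles) and multiple-scattering performance (longer X0).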
What is your idea about the next generation of AI?
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip in Tesler's Theorem, "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomously operating cars, and intelligent routing in content delivery networks and military simulations.
Turbulence is said to be one of the unsolved problems in physics;
though it is actually the associated mathematical problem (Navier–Stokes existence and smoothness) that is designated a "Millennium Prize Problem";
my own thought is that a surface is created inside the fluid, even when it is a single fluid, and not only when there are two different fluids;
What we see (based on Earth) as a lunar eclipse is a solar eclipse if based on the Moon. Has this been observed already directly? The page https://en.wikipedia.org/wiki/Solar_eclipses_on_the_Moon does not say anything on Moon-based observations and shows only an artistic view of such an eclipse.
Dear Respectful Researchers,
Here come several "strange" and probably unique questions about deep-space detection, motivated by an important recent scientific discovery published in the journal Nature about Fast Radio Bursts, "A second source of repeating fast radio bursts":
Researchers at UBC, Canada, have discovered the second so-called “repeating fast radio burst” (FRB), recorded six times coming from the same location, 1.5 billion light-years away. It seems that CHIME was able to record some of the bursts at frequencies as low as 400 MHz. My quick first question is: can we statistically exclude an origin from extraterrestrial (ET) civilizations?
Here come my rudimentary thoughts and reasoning as an electronics/telecommunications engineer, just out of strong curiosity.
I’ve conducted a quick and simplistic calculation of the link budget from a communications perspective. Assume that the 400 MHz radio signals (taken as constant, without frequency shifting, although in reality they shift) do not significantly suffer from attenuation (of any type) introduced by planets/galaxies, and are thus treated in an ideal free-space propagation model.
A distance of 1.5 billion light-years is equal to around 1.419e22 kilometers, which introduces a Free-Space Path Loss (FSPL) of about 527.5 dB at 400 MHz. On the other hand, assume that the receiver at the ground station achieves an ultra-low sensitivity of -160 dBm thanks to very large, high-gain phased-array antennas; this means the power at the origin (output) is at least 367.5 dBm, or 5.62e33 W.
For comparison, the Sun releases an estimated 384.6 yottawatts (3.846e26 W) of power, while the output of all the power plants of the world in 2008 was only 2.31e12 W. Therefore, we may be talking about a source (1.5 billion light-years away) radiating at least 14.6 million times the power of the Sun!
This comparison makes one feel that such an energy source can hardly belong to an ET civilization, unless it is a so-called Type III civilization on the Kardashev scale. Could it be?
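The back-of-envelope above can be reproduced in a few lines, under the same assumptions (pure isotropic free-space propagation, no cosmological expansion or redshift effects, and the stated -160 dBm sensitivity):

```python
import math

# Free-space link-budget sanity check for the FRB estimate above.
LY_M = 9.4607e15                      # metres per light-year
d_m = 1.5e9 * LY_M                    # 1.5 billion light-years in metres
f_hz = 400e6                          # lowest recorded burst frequency

# FSPL(dB) = 20 * log10(4 * pi * d * f / c)
fspl_db = 20 * math.log10(4 * math.pi * d_m * f_hz / 3.0e8)

rx_dbm = -160.0                       # assumed receiver sensitivity
eirp_dbm = fspl_db + rx_dbm           # minimum isotropic power at the source
eirp_w = 10 ** ((eirp_dbm - 30.0) / 10.0)

SUN_W = 3.846e26                      # solar luminosity in watts
suns = eirp_w / SUN_W                 # how many Suns' worth of power
```

This yields an FSPL of roughly 527.5 dB, a required isotropic source power near 5.6e33 W, and a ratio of about 1.5e7 Suns; a highly directional (beamed) transmitter would of course reduce the required total power by its antenna gain.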
Another thing I am concerned with and would like to ask is, what could the "super-macro galaxy-level" propagation channel models look like? Would there be some multi-path fading effects? Would there be some time delay/frequency shifting among different bursts, and in the order of months/years?
Thanks for your correction, discussion, and suggestions.
Yiming Huo (Jimmy), Ph.D.,
Jan. 12, 2019
Extrapolating Einstein's spacetime concept, the Great Attractor is the most massive object humankind could discover. It is so massive that it exerts a great gravitational force on all galaxies, and as we all know, supermassive objects create a warp in the spacetime structure. This is a discovery capable of explaining a couple of paradoxes in science (I will talk about one aspect here). If Einstein was right about different time zones within the universe, and an alien were riding toward Earth from a distant galaxy at the speed of light, then our future would become his present NOW. In this case, Einstein is also proposing that our future is deterministic and already written; only then does the spacetime concept hold true. If the future is not deterministic, the spacetime concept loses its ground. That means whatever events took place in the first clock of the universal cycle will replicate exactly in the next cycle as well. However, this raises another question about the sustenance of life: if the Big Bang destroys life, how would it originate again? Secondly, even scientists are of the opinion that we will eventually reach a cold death, and the Big Bang will start again; that means spacetime is cyclic and we are looped into infinite cycles. This theory, referred to as conformal cyclic cosmology (CCC), is a cosmological model in the framework of general relativity advanced by the theoretical physicists Roger Penrose and Vahe Gurzadyan. In CCC, the universe iterates through infinite cycles, with the future timelike infinity of each previous iteration being identified with the Big Bang singularity of the next. Penrose popularized this theory in his 2010 book.
Source 0: https://en.wikipedia.org/wiki/Conformal_cyclic_cosmology
With the Big Bang come a few paradoxes:
If we started from the Big Bang, and the whole universe exploded and is receding or contracting, then it would exert a linear force on Earth's orbit and deviate it from its course, but that has not happened.
Moreover, when the universe was singular, how can one crush the most fundamental unit of matter (to attain singularity, we must break the electron's basic mass)?
Secondly, it does not tell you how matter acquired gravity (if matter formed later). If the whole universe was singular, then the electrons or quasiparticles (or whatever the most fundamental unit of matter is) must have been crushed beyond their absolute mass. This raises a very basic question: how can we crush the most fundamental unit of matter?
Even if I assume the universe started from some other fundamental particles (finer than electrons or quasiparticles), then how did they all become electrons, leaving no trace? Even on a further stretch of imagination, suppose there were super-fine particles that changed into the current electrons; then how and when did matter acquire the property of gravity?
In this article, I would like to postulate that the universe operates through infinite cycles, not through the Big Bang, but by orbiting around the Great Attractor.
Assertion 1. By the theory of relativity, if Earth has orbited the Sun for millions of years without any change in orbit, it means the overall movement of the universe supports Earth's orbit. With the discovery of the Great Attractor, it is found that the Great Attractor (many billion times the size of Earth, or of the whole observable universe) imposes angular motion. It simply means that our universe/galaxy/Milky Way is orbiting around the Great Attractor. Since there is a gravitational force at play here (between the Great Attractor and other celestial objects), referring to Source 2, they must be exerting a centripetal force. Even scientists themselves claim that it exerts an angular deviation.
Source 1: https://en.wikipedia.org/wiki/Great_Attractor - "The variations in their redshifts are known as peculiar velocities, and cover a range from about +700 km/s to −700 km/s, depending on the ANGULAR deviation from the direction to the Great Attractor." "The Great Attractor is an apparent gravitational anomaly in intergalactic space at the CENTER of the local Laniakea Supercluster."
Source 2: https://en.wikipedia.org/wiki/Centripetal_force - "In Newtonian mechanics, gravity provides the CENTRIPETAL force responsible for astronomical orbits."
Assertion 2. As per a study done by the University of California, there is growing evidence that the whole universe follows a common pattern or structure. The golden ratio (1.61) and the magic figure (1/137) both suggest that the universe follows a particular design. If we look at the structure of the universe, it follows a certain pattern: the electron is cyclic, natural cycles are cyclic, Earth's orbit is cyclic, the Sun orbits a black hole, the Milky Way is round, and galaxies are circular.
If all the subsets of a larger set are cyclic/circular, then we have to believe that the parent force (the angular force exerted by the Great Attractor on celestial bodies) and the motion of our universe are also cyclic. In layman's language, just as in a hurricane all the particles have circular trajectories because the overall shape of the hurricane is circular (while the eye remains calm), we can consider the whole universe a big hurricane whose eye (i.e., the Great Attractor) remains calm. Moreover, just as atoms have electrons orbiting the nucleus (with the bulk of the mass concentrated at the nucleus), the Great Attractor holds the bulk of the mass, while the galaxies and other celestial bodies hold only approximately 15% of it.
Assertion 3. Recent studies (in 1998) showed that the universe's expansion is accelerating; no theory could justify this. As per this theory, since the universe is also orbiting a prime focus, the Great Attractor, it is now entering or exiting a perihelion zone. As we know, when Earth enters perihelion there is a change in rotational speed; that might be the prime reason why natural calamities are increasing, because it changes the gravity of the Earth and hence the kinetic and potential energy of nature. While returning to an equilibrium state, nature creates earthquakes and tsunamis. No theory in the modern age can answer all such questions: why natural calamities have increased, why the universe is accelerating, changes in Earth's gravity, etc.
Source 3: https://www.forbes.com/sites/trevornace/2017/11/20/earths-rotation-is-mysteriously-slowing-down-experts-predict-uptick-in-2018-earthquakes/#470b50226f24
Assertion 4. In the 1920s, Milankovitch hypothesized that variations in the eccentricity, axial tilt, and precession of Earth's orbit resulted in cyclical variation in the solar radiation reaching the Earth.
Milankovitch and others hypothesized that variations in the eccentricity, axial tilt, and precession of Earth's orbit are cyclical, with periodicities of thousands of years. This theory can only be valid if we believe that time is cyclic; otherwise it loses its ground.
Source 4: https://en.wikipedia.org/wiki/Milankovitch_cycles
Assertion 5. According to the conservation law, the total mass/energy of the universe is constant. That means only two things exist in the universe: matter (which has mass) and energy. Since matter is also a form of energy, and by the law of conservation of energy, energy cannot be created or destroyed, the universe has finite energy. For any finite object, we can afford to treat the universe as a closed environment, and in a closed environment all motions become periodic. In layman's language, if you take a billiard game, reduce the friction to zero, and cover all the pockets, then no matter what shot we play, it will always be periodic; all events will have their periodicity.
Assertion 6. A recent study shows that Earth's magnetic field also changed around 200 BC, and that it repeatedly flips its North and South Poles, which means it is also cyclic.
Source 6: https://www.nytimes.com/2017/02/14/science/magnetic-field-earth-jars.html
This means that time is cyclic, with the universe orbiting the Great Attractor, not with the Big Bang. Einstein's spacetime concept was right, with only a tweak: like time, we are only moving forward (yet cyclically), and so with space.
New Answer to the Paradoxes: even in the case of the Big Bang, we never talked about the origin of the cosmos (we always believed the hydrogen gas was already there, among other assumptions, whereas origin means an irreversible process). In this theory, we have to believe that this peculiar skeleton of the universe cannot be created and was always there. This structure is independent of time.
Rumination 3# - The lost goal of education?
Professor Stephen Dobson
Dean of the Faculty of Education
Victoria University of Wellington, New Zealand
Since arriving in the Faculty of Education at Victoria University in mid-2018 I have had many remarkable experiences. Not all, however, were the source of a strong catharsis, an ‘a-ha’ experience, as was the following: I was lucky enough to attend the 50th anniversary of the cohort of pre-service teachers who attended Wellington Teacher's College in 1968. In the course of the pleasurable evening, a common experience remembered was the staff philosophy at the College, “that they should try to develop the person and then the teacher would emerge from that”, expressed through cultural activities such as art, reading, music, talking and dialogue during their training. Sam Hunt, the poet and New Zealand treasure, was a student to whom they made constant and well-deserved reference. Sam never graduated – apparently the Dean told him to concentrate on his poetry skills.
We have come a long way from such a perspective, or so we are apt to think – our teachers are arguably more professional, the national curriculum is detailed and teachers meet clearly defined standards on graduation. Yet, when I talk with my Islamic educational colleagues I learn that the point of education in their culture for over a millennium has been to develop the character of the child. We must remember one of the oldest universities in the world is Islamic; it is over 1200 years old and found in Tunisia  (Ez-Zitouna University جامعة الزيتونة); not in Europe in the Middle Ages as we commonly think. In Chinese inspired education a version of this is the importance of developing a good moral character, inspired by the views of Confucius and a deep respect for others. We find a strong other-directed morality rather than an ego-oriented morality in China. In Scandinavian educational culture this is called bildung or dannelse and means the formation of a shared centred-ness and a shared cultural identity. In my limited, but growing understanding of Māori and Pasifika culture I have noted many of these same points.
Current global-speak around the world, and in many Anglo-Saxon countries, professes a different view on these matters, if we exclude for a moment the point on curriculum knowledge. It is the desire to grow students who are resilient and possess ample funds of ‘grit’ to master set-backs, who are cognitively aware of their own thought processes, and who possess the so-called 21st-century skills of teamwork and sociability.
Sometimes I wonder if the pendulum of education has swung too far and we have lost ourselves in the science of education, in particular that international pastime of measuring literacy and numeracy scores. We want to perform well and to manage ourselves; to be the cleverest in the local, national and international class. This is very much to the exclusion of the other side of the pendulum, where character, morality, bildung and other-directedness rest. A phrase I often quote from a Swedish child activist of the early 1900s rings in my ears: ‘the formation of one's identity is based upon what remains after we have forgotten everything we have learnt’ (my translation). As with life, we need a pendulum that swings both ways – or do we?
 Georgia Morgan (2007). A Short History of the Victoria University College of Education Art Collection. Unpublished manuscript.
 Established in the year 859, the University of al-Qarawiyyin in Fez, Morocco, was the first degree-granting educational institute in the world (as recognised by UNESCO and Guinness World Records).
The physical meaning of flattening is that it measures the compression of a circle or sphere along a diameter to form an ellipse or an ellipsoid of revolution, respectively (https://en.m.wikipedia.org/wiki/Flattening).
Outside the field of geometry, when we are dealing with simple quantities, does it convey anything about normalization?
For example, the third flattening, n = (a − b)/(a + b), fits the value within (−1, 1).
What advantage do we get from this combination of quantities?
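As a concrete illustration, here are the ordinary and third flattening for the WGS84 ellipsoid (the standard semi-axes), together with the algebraic link between the two normalizations:

```python
# Flattening f and third flattening n for the WGS84 reference ellipsoid.
a = 6378137.0          # semi-major axis, metres (WGS84)
b = 6356752.314245     # semi-minor axis, metres (WGS84)

f = (a - b) / a        # ordinary flattening, ~1/298.257
n = (a - b) / (a + b)  # third flattening, always bounded in (-1, 1)

# The two are related exactly by n = f / (2 - f): the third flattening is a
# renormalization of f that stays symmetric and bounded even for extreme
# (oblate or prolate) shapes, which is one practical advantage.
link_error = abs(n - f / (2.0 - f))
```

The bounded range (−1, 1) is what makes n behave like a normalized quantity: series expansions of geodetic formulas in n converge faster than the same expansions in f, which is a common reason for preferring it.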
The Theory Wiki at IS.TheorizeIt.Org gets over 200,000 visits annually but is due for a bit of an update. If you publish on this theory, we would love to have your updates.