Science topic

# Wikis - Science topic

Explore the latest questions and answers in Wikis, and find Wikis experts.
Questions related to Wikis
• asked a question related to Wikis
Question
Earlier today I read a discussion on ResearchGate about how to determine which variables are endogenous and which are exogenous. The discussion was about the random error term in a regression.
However, timing is essential. When a peak in time series x1 occurs earlier than a similar peak in time series x2, then x1 and x2 may well be exogenous and endogenous, respectively.
Granger defined the causality relationship based on two principles:
1. The cause happens prior to its effect.
2. The cause has unique information about the future values of its effect.
I found the discussion which triggered my "endogenous or exogenous" question above; it is all about time series.
This is:
So I have to apologize for my wrong statement that time series are required. There are none in cross-sectional data. I had forgotten that.
Sorry, folks.
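The lead-lag intuition above (a peak in x1 preceding a similar peak in x2 suggests x1 leads) can be checked with a simple cross-correlation. A minimal sketch on synthetic impulse series (the series and the 3-sample lag are invented for illustration; real series should be demeaned and detrended first):

```python
import numpy as np

def estimate_lag(x1, x2):
    """Return the lag (in samples) by which x2 trails x1.

    A positive result means features in x2 occur *after* the
    corresponding features in x1 (x1 leads, x2 lags).
    """
    c = np.correlate(x2, x1, mode="full")
    return int(np.argmax(c)) - (len(x1) - 1)

# synthetic example: a peak at t=10 in x1 reappears at t=13 in x2
x1 = np.zeros(30); x1[10] = 1.0
x2 = np.zeros(30); x2[13] = 1.0
print(estimate_lag(x1, x2))  # → 3
```

This only establishes temporal precedence (Granger's first principle); his second principle, unique predictive information, needs a proper Granger causality test on top of it.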
• asked a question related to Wikis
Question
Burak Omer Saracoglu The Lloyd Shoals Project, which includes the Lloyd Shoals Dam and Lake Jackson, is one of Georgia Power's oldest hydroelectric producing stations.
• asked a question related to Wikis
Question
Hi all,
I am following this VASP wiki example to calculate the band gap of Si using HSE06: https://www.vasp.at/wiki/index.php/Si_bandstructure
In step 1, I run an SCF calculation using PBE, for which I used this INCAR:
```
SYSTEM = Si
IBRION = -1
NSW = 0
ISMEAR = 0
SIGMA = 0.01
ENCUT = 520
ALGO = Accurate
EDIFF = 1E-6
PREC = H
```
In the second step, along with POSCAR, POTCAR, and KPOINTS, I used the WAVECAR file from the previous step as input, and the INCAR file is:
```
SYSTEM = Si
LHFCALC = .TRUE.
HFSCREEN = 0.2
ALGO = D
TIME = 0.4
ENCUT = 520
ISMEAR = 0
SIGMA = 0.01
GGA = PE
```
and then I used the script gap.sh, also provided in the VASP wiki examples, to calculate the HOMO and LUMO.
But the HOMO now comes out at approximately 0.13 eV and the LUMO at -0.78 eV.
Please tell me what I have done wrong.
Although I have never tried this script, you can do a test:
Export the band data from vasprun.xml in .dat format using p4vasp,
and identify the k-points at which your actual bands start.
In an HSE band structure, the bands at the initial k-points are arbitrary; they are taken from the IBZKPT file. The real band structure starts from the point where you provided the high-symmetry path, like in this example:
```
Explicit k-points list
18                                                        <--- CHANGE TOTAL NUMBER OF K-POINTS !!
Reciprocal lattice
 0.00000000000000  0.00000000000000  0.00000000000000   1
 0.25000000000000  0.00000000000000  0.00000000000000   8
 0.50000000000000  0.00000000000000  0.00000000000000   4
 0.25000000000000  0.25000000000000  0.00000000000000   6
 0.50000000000000  0.25000000000000  0.00000000000000  24
-0.25000000000000  0.25000000000000  0.00000000000000  12
 0.50000000000000  0.50000000000000  0.00000000000000   3
-0.25000000000000  0.50000000000000  0.25000000000000   6
 0.00000000  0.00000000  0.00000000  0.000               <--- ZERO WEIGHT !!
 0.00000000  0.05555556  0.05555556  0.000
 0.00000000  0.11111111  0.11111111  0.000
 0.00000000  0.16666667  0.16666667  0.000
 0.00000000  0.22222222  0.22222222  0.000
 0.00000000  0.27777778  0.27777778  0.000
 0.00000000  0.33333333  0.33333333  0.000
 0.00000000  0.38888889  0.38888889  0.000
 0.00000000  0.44444444  0.44444444  0.000
 0.00000000  0.50000000  0.50000000  0.000
```
Your high-symmetry path starts here:
0.00000000 0.00000000 0.00000000 0.000 <--- ZERO WEIGHT !!
so we need to skip the initial k-points for the HSE band structure, and
then you can calculate the band gap as the difference between the valence band maximum and the conduction band minimum.
You can take a hint from the PBE band structure as to which k-points need to be skipped, so that your HSE band structure looks similar to the PBE one.
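To automate that selection, a small script can keep only the zero-weight k-points (the explicit band path) and take the VBM/CBM from those. This is a sketch assuming you have already parsed per-k-point weights, eigenvalues, and occupancies (e.g. from vasprun.xml); it is not the actual gap.sh logic:

```python
def hse_band_gap(kpoints):
    """kpoints: list of (weight, bands), bands = list of (energy_eV, occupancy).

    Weighted k-points come from the SCF grid (IBZKPT) and are skipped;
    only zero-weight k-points belong to the high-symmetry path.
    """
    path = [bands for w, bands in kpoints if w == 0.0]
    vbm = max(e for bands in path for e, occ in bands if occ > 0.5)
    cbm = min(e for bands in path for e, occ in bands if occ < 0.5)
    return cbm - vbm

# toy data: one weighted SCF point (ignored) and one band-path point
kpts = [
    (0.25, [(-0.5, 1.0), (1.0, 0.0)]),  # SCF grid point, skipped
    (0.0,  [(-1.0, 1.0), (2.0, 0.0)]),  # zero-weight band-path point
]
print(hse_band_gap(kpts))  # → 3.0
```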
Hope this helps!
• asked a question related to Wikis
Question
Good day fellows,
I am currently doing a 2D pushover analysis of a simple RC frame (fiber section). I am just following the example in the OpenSees wiki. The model worked fine under gravity loads. However, upon running the pushover analysis, this error came up:
Large trial compressive strain
UniaxialMaterial::setTrial() - material failed in setTrialStrain()
What does this mean and how can I solve this issue? attached herewith are my tcl model and a screenshot of the errors.
Hi, I tried to debug the program you uploaded. It shows the same error.
I found that the error is mainly caused by the tcl file Ex5.Frame2D.analyze.Static.Push. Your model is a 2D model; however, the "push" loading pattern is written the 3D way:
"load $nodeID $Fi 0.0 0.0 0.0 0.0 0.0"
That is why the error occurred.
Maybe you can try to debug the programs one by one: first run ThreeStorey_CIP_Rev.tcl, then run the whole program.
Alternatively, you can debug the programs in Visual Studio, as in the figure below. That way, a clearer explanation of the error or warning is given.
• asked a question related to Wikis
Question
Interested
• asked a question related to Wikis
Question
Hi,
I just wonder whether a rejected paper can be deposited in bioRxiv?
As the Wiki says:
"In general, most publishers that permit preprints require that:
• the authors disclose the existence of the preprint at submission (e.g. in the cover letter)
• once an article is published, the preprint should link to the published version (typically via DOI)
• the preprint should not have been formally peer reviewed"
My concern is the last point.
Thank you
Xun Wen Chen no, because your data could be stolen. You also don't need other people's opinions, since they will be critical. Keep your information confidential until it is formally published. If you submit to a high-impact journal, the review period will not be long, since they impose time limits on reviewers :)
• asked a question related to Wikis
Question
Linux Version - 16.04/ 18.04
The instructions given at the URL mentioned above are very unclear and ambiguous. Your assistance would be highly appreciated. Thank you.
Mohammad Shafayet Hossain , Could you send me the errors you encounter during the installation, just to get an idea of the problems?
• asked a question related to Wikis
Question
The set of Beall's criteria is attached. Are they fair?
Beall's list was discontinued on 31 December 2016, almost 5 years ago.
Further information on https://beallslist.net/
In my opinion, Beall's list is flawed and will not stop the open access publishing movement. Some of the criteria are too subjective to be considered valid. Most importantly, Beall's list does not change: are journals labeled predatory therefore condemned to a life sentence? In his words: "The list itself will not be changed, I may, however, add notes to the list." Why? The list was discontinued in 2016, and the University of Colorado removed it from their website. There should be decent ways to evaluate journals, not ones based on such prejudiced criteria. I hope Mr. Beall graduates with a Ph.D. first. The criteria are available; check for yourself and say no to prejudice. Let the academic community decide where to publish. Every scientist has the right not to be biased by such a prejudiced list. Furthermore, the classification is static: it offers no room for improvement, only imprisonment.
• asked a question related to Wikis
Question
Is membership in Sigma Xi, The Scientific Research Honor Society, an honor or not?
Or what exactly is it, who is eligible for it, and what is your opinion about it?
That is something you have to decide: whether the networking you can do as a member is worth the dues.
• asked a question related to Wikis
Question
The Global Warming Petition Project, also known as the Oregon Petition, is a political petition designed to disinform and confuse the public about the scientific results and consensus of climate change research. It is framed as a petition urging the United States government to reject the global warming Kyoto Protocol of 1997 and similar policies (Wikipedia).
For more details, check the following link.
Sincerely yours,
People continue to claim that since "over 30,000 scientists" have signed it, it is proof that there is no scientific consensus on climate. However, among the signatories, there is hardly any scientist who has published a relevant study: more than a third have an engineering degree, and several thousand are in medicine, aerospace sciences, biology… There were also many fanciful or fictitious signatories.
Regards
• asked a question related to Wikis
Question
Engineers are trained to think of stress limits and effective stress in strength of materials. Do you know strain limits and theories for effective strain? The idea to ask for strain limits came to me because for soft materials with large deformation there are material models with limiting chain extensibility, which lead to strain limits (or better stretch limits because of large deformation). The Gent rubber material model is a prominent example, https://en.wikipedia.org/wiki/Gent_(hyperelastic_model). The Strain Invariant Failure Theory (SIFT) for composites is a different example.
The uniaxial linearly elastic case is trivial because stress and strain are related by the Young’s modulus of the brittle material. Nevertheless, strain based criteria have not yet found proper recognition. I am concerned about general states of stress and strain in concrete, rock, bone and similar brittle (or quasi-brittle) materials.
I have no answer myself, but I can present a strain-based failure criterion which is equivalent to the most widely used classical stress-based failure criterion for brittle material. This is not an answer, but a first example:
• asked a question related to Wikis
Question
Hi everyone,
I'm trying to prepare 20 mM PBS.
The recipe that I found (https://en.m.wikipedia.org/wiki/Phosphate-buffered_saline) mentions the following amounts of salts for 1x PBS:
NaCl 8 g/L
KCl 0.2 g/L
Na2HPO4 1.42 g/L
KH2PO4 0.24 g/L
Following this recipe, what will the final PBS concentration be? What should I do to prepare a 20 mM buffer?
kind regards :)
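To answer the concentration question, one can convert each salt's g/L to molarity. A quick sketch (molar masses are standard values; "phosphate concentration" here means Na2HPO4 + KH2PO4 together):

```python
# grams per litre from the 1x recipe above, and molar masses in g/mol
recipe = {
    "NaCl":    (8.00, 58.44),
    "KCl":     (0.20, 74.55),
    "Na2HPO4": (1.42, 141.96),
    "KH2PO4":  (0.24, 136.09),
}

molar = {salt: g_per_l / mw for salt, (g_per_l, mw) in recipe.items()}  # mol/L
phosphate_mM = 1000 * (molar["Na2HPO4"] + molar["KH2PO4"])
print(f"total phosphate ≈ {phosphate_mM:.1f} mM")  # ≈ 11.8 mM for 1x PBS
print(f"scale recipe by ≈ {20 / phosphate_mM:.2f}x for a 20 mM buffer")
```

So 1x PBS by this recipe is roughly a 12 mM phosphate buffer; scaling the two phosphate salts by about 1.7 (and checking pH and osmolarity afterwards) would bring the phosphate to 20 mM.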
• asked a question related to Wikis
Question
How does one properly work through the development of an integral like the Abel-Plana integral defined in this image?
I am interested in having a set of steps for attacking the problem of developing the integral and determining a criterion of convergence for any complex value s; I mean, whether the integral could have some specific behavior at, for example, s = 1/2 + it, where I am interested in studying it.
I am interested in the proper evaluation of that integral using only formal steps of complex analysis.
The Abel-Plana formula also appears at https://en.wikipedia.org/wiki/Abel%E2%80%93Plana_formula
Best regards
Carlos López
Carlos, there is a wealth of information about the Riemann zeta function. It started out as a Dirichlet series which converged and was holomorphic for abs(z) > 1. However, through the process of analytic continuation the zeta function can be defined as a meromorphic function on the entire complex plane, holomorphic on C - {1} with a simple pole at z = 1.
Often, when one extends a convergent series representation holomorphically to a larger region, as is done to generate the zeta function, one finds the function is no longer single-valued.
For example, when one considers the simple sqrt(z), one runs into the problem that it is multivalued and hence cannot be extended holomorphically to C. This gave rise to the concepts of Riemann surfaces, covering spaces, branch points, and branch cuts to address this issue, which can often arise in analytic continuation. http://www1.spms.ntu.edu.sg/~ydchong/teaching/07_branch_cuts.pdf
The zeta function, when extended to the p-adic number field, however, is not single-valued and has branch points.
There are many books at all levels on the zeta function. It is one of the most important special functions of mathematics and foundational to analytic number theory. Just go on Amazon, search for "Riemann zeta function" in Books, and see what pops up.
• asked a question related to Wikis
Question
In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For an example, see https://en.wikipedia.org/wiki/Lag_operator
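As a concrete illustration, pandas' `shift` implements exactly this operator; here L y_t = y_{t-1}, and the first difference is (1 - L) y_t:

```python
import pandas as pd

y = pd.Series([2.0, 4.0, 7.0, 11.0])

lagged = y.shift(1)    # L y_t = y_{t-1}; the first value becomes NaN
diff = y - y.shift(1)  # (1 - L) y_t, identical to y.diff()

print(lagged.tolist())  # [nan, 2.0, 4.0, 7.0]
print(diff.tolist())    # [nan, 2.0, 3.0, 4.0]
```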
• asked a question related to Wikis
Question
Is there any article or project about interaction of the "Schumann Resonance" on the brain alpha or theta waves?
• The Schumann resonances (SR) are a set of spectrum peaks in the extremely low frequency portion of the Earth's electromagnetic field spectrum :: Schumann Resonance Freq. : 7.83 Hz
• Alpha waves are neural oscillations in the frequency range of 8–12 Hz
More:
Best Regards
I suppose one possibility that should always be considered is that perhaps what we call alpha waves are in fact a recording of the Schumann phenomenon when the brain is not working! Since we cannot really pick a spot on the planet where the Schumann signal is not present, your best bet would be to look at brainwave studies on astronauts. There's a good chance this information is classified.
• asked a question related to Wikis
Question
• The Chu construction allows one to obtain a *-autonomous category from the data of a closed symmetric monoidal category and a dualizing object.
• The Cayley-Dickson construction builds an algebra B = A + A with involution from the data of an algebra A with involution *. Applied to the field of real numbers, it gives successively the field of complex numbers, then the skew field of quaternions, then the non-associative algebra of octonions, etc.
Due to the closeness of the expressions for the multiplication m: B \otimes B -> B and for the multiplicative unit of B, we believe that there is an intimate link between the two notions.
Has such a link been described in a reference text ?
Bibliography:
Due to the close connection between the Chu construction and the Cayley-Dickson construction, as I found in the question that the two notions are
• asked a question related to Wikis
Question
Are there any tutorials which demonstrate the usage of C++ linear algebra libraries like Eigen ( http://eigen.tuxfamily.org/index.php?title=Main_Page ) or Blaze ( https://bitbucket.org/blaze-lib/blaze/wiki/Getting_Started ) to build CFD applications?
In short, how does one initialize sparse matrices and use the built-in iterative solvers?
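Eigen and Blaze are C++, but the workflow their tutorials cover (assemble a sparse matrix, hand it to a built-in iterative solver such as Eigen's `SparseMatrix` plus `ConjugateGradient`) is the same as in this language-neutral SciPy sketch of a 1D Poisson problem, which may help as a starting point:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 50
# 1D Poisson (tridiagonal) matrix in compressed sparse row format
A = diags([[-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

x, info = cg(A, b)  # conjugate gradient iterative solver
assert info == 0    # 0 means the solver converged
print(np.linalg.norm(A @ x - b))  # small residual
```

The C++ version follows the same three steps: fill the sparse matrix (e.g. via triplets in Eigen), pick a solver, call `solve`.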
• asked a question related to Wikis
Question
I'm looking for morphological models that could be used for stemming (NLP) in Python for the following languages: Croatian, Czech, Estonian, Slovak.
Hi.
I don't know whether this question is still relevant; however, I'd like to recommend the Universal Dependencies tool UDPipe: http://lindat.mff.cuni.cz/services/udpipe/. There you can find models for all the languages you need. These models allow performing both morphological and syntactic analysis. I've used them for different Slavic languages in Python, and they work pretty well.
• asked a question related to Wikis
Question
Dear all,
I am new to Wannier tools and trying to plot a normal band structure with them. I am using the link below to do the calculations.
When I set the tag LWANNIER90 = .TRUE.
I got only the wannier.wout file as output, with an error:
Error: Problem opening input file wannier.mmn
Kindly look at my input files and please suggest where I am going wrong.
Any help would be highly appreciated!
Thanks!
Hi Swapnil,
you should set write_hr = .true. in wannier90.win to get hr.dat
• asked a question related to Wikis
Question
Hi all,
I am working on the evaluation of an innovative renewable energy generation system. For the evaluation of the economic aspects I was planning to use the LCOE (levelized cost of energy), and I have some doubts/questions about it.
In particular, I am not sure how to incorporate into the usual definition, the one found on Wikipedia (https://en.wikipedia.org/wiki/Levelized_cost_of_energy) and normally found in articles, terms corresponding to:
- Selling excess energy that cannot be used or stored on site. I believe I can add a negative term, Si, to the cost sum, but I am not sure.
- Residual value at the end of the life cycle. I think I can add a negative term, RV/(1+r)^n, to the numerator, where r is the interest rate and n the useful lifetime. This term arises from the fact that at the end of the lifetime some materials, in particular metals, have a value for recycling.
- Disposal cost. I believe I can add a term DC/(1+r)^n, as in certain systems it is necessary to spend money to properly dispose of parts of the system.
I am not sure whether I am thinking about this correctly. Can someone give their opinion or a reference that would help me with these issues?
Regards,
António Martins
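The three proposed terms can be checked with a small numerical sketch of the extended LCOE. The term names (sales S, residual value RV, disposal cost DC) follow the question; the structure is one reading of the standard discounted-cost-over-discounted-energy formula, not an authoritative definition:

```python
def lcoe(capex, annual_om, annual_sales, annual_energy,
         residual_value, disposal_cost, r, n):
    """Levelized cost of energy with sales revenue, residual value,
    and disposal cost added to the usual textbook definition."""
    # yearly net costs, discounted; sales revenue enters with a negative sign
    costs = capex + sum((annual_om - annual_sales) / (1 + r) ** t
                        for t in range(1, n + 1))
    # end-of-life terms: disposal adds cost, residual value subtracts
    costs += (disposal_cost - residual_value) / (1 + r) ** n
    energy = sum(annual_energy / (1 + r) ** t for t in range(1, n + 1))
    return costs / energy

# sanity check with r = 0: (1000 + 10*10) / (10*100) = 1.1 per unit energy
print(lcoe(1000, 10, 0, 100, 0, 0, r=0.0, n=10))  # → 1.1
```

As expected, a positive residual value or sales revenue lowers the LCOE, and a disposal cost raises it, which matches the signs proposed in the question.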
Cory, K, and Schwabe, P. Wind Levelized Cost of Energy: A Comparison of Technical and Financing Input Variables. United States: N. p., 2009. Web. doi:10.2172/966296.
• asked a question related to Wikis
Question
Why aren't drugs implementing activation of cholinergic anti-inflammatory pathway (n-cholinomimetics, α7nAChR agonists, etc.) used in treatment of COVID-19?
It was found that activation of cholinergic anti-inflammatory pathway reduces the concentration of pro-inflammatory cytokines in the blood and organs during sepsis, and various infectious diseases, while significantly reducing mortality.
Dear Prakhar Garg !
""""Along with other drugs! """"
• asked a question related to Wikis
Question
Is MDPI a predatory journal publisher from China?
MDPI was included on Jeffrey Beall's list of predatory open access publishing companies in 2014.
Very close to it, unfortunately.
My fellow researchers and I have had very bad experiences with MDPI journals. Several critical reviews by very able and respected reviewers were ignored, just to get the money from publishing the paper. They often advertise as being from Switzerland (e.g., Geosciences [Switzerland]). I am Swiss, but the MDPI journals are from China.
If I have to select scientific staff and a candidate has published a lot in MDPI journals, that is a reason for me not to select them. Furthermore, my colleagues and I no longer publish in or review papers for MDPI journals.
The science community has to stand together to prevent such publishers from destroying the scientific system. This is particularly important for the young scientists!
• asked a question related to Wikis
Question
I have my two proteins of interest which I docked using ClusPro and I am interested in finding the interacting residues at the interface. I ran the script available at PyMOL Wiki (https://pymolwiki.org/index.php/InterfaceResidues) and input the command, but it does not recognise the interface. I tried merging both proteins into a single PDB file, ran the script and input the command again, but this did not solve the issue. Does anybody know how to approach it?
Annemarie Honegger That fixed the problem. Thank you very much for your guidance.
• asked a question related to Wikis
Question
I'm new to all things spectroscopy, so be gentle :-). I'm puzzled about the difference between fluorescence and most variations of Raman scattering. Both involve exciting atoms from their ground state to some excited state, which then relax back down to a ground vibrational state, emitting radiation of lesser energy than the incident radiation (at least for Stokes Raman).
1. What am I missing?
2. Should I assume the difference in energy between initial and final radiation is lost in the vibrational state, so as to conserve energy?
3. Yeah... and the virtual energy states involved in Raman scattering? That's just too weird for now.
So, in simple English, what are they really? Wiki explanations are so over the top to be readily understood by a newbie like me.
Thanks,
David
Raman spectroscopy is all about inelastic scattering. If the collision between a photon and a molecule is perfectly elastic, there will be no exchange of energy, giving the Rayleigh line. But inelastic scattering is associated with a collision which causes some exchange of energy, equal to the difference between two allowed states of the molecule, and thus it produces a frequency shift of the scattered photon.
In the fluorescence emission process, a molecule is excited from the ground state (E0) to one of the vibrational states in the electronic excited state (E1) according to the Franck-Condon principle. Through vibrational relaxation, the molecule relaxes back to the lowest excited state by losing excess energy as heat. Then, the molecule transits from the lowest excited state to one of the vibrational states in E0 with the emission of a photon. The energy of the emitted photon is lower than that of the incident photon because of the energy loss during vibrational relaxation. The fluorescence emission spectrum is a broad band covering the same wavelength range as the Stokes Raman signal. Since the cross section for fluorescence is larger than for Raman scattering, detection of Raman scattering is very difficult when strong fluorescence emission is present.
• asked a question related to Wikis
Question
I have a list of social media data points (i.e., latitude and longitude). Let's say that the points represent people, and I am trying to identify population centers based on location alone. Classic algorithms like k-means may not work. The approach needs to be unsupervised, and the number of clusters is not known beforehand. Therefore I am considering DBSCAN (https://en.wikipedia.org/wiki/DBSCAN) as a good choice. However, I came across studies which have extracted spatial clusters using Moran's I (https://en.wikipedia.org/wiki/Moran's_I). It is said that Moran's I detects statistically significant clusters. Though DBSCAN is widely used and has a huge practical following, I am not sure whether DBSCAN clusters are statistically significant (I have not seen this addressed in any of the literature I have read).
My question:
How can we justify using DBSCAN even though the clusters it identifies are not statistically significant (unlike with Moran's I)? Or do we need to use Moran's I for this?
I would really appreciate your kind guidance.
No offense, but in my opinion there is a misconception about Moran's I and DBSCAN here. Moran's I is a geostatistical measure to test the degree of spatial autocorrelation in spatial datasets; it cannot give us the locations of clusters. After rejecting the null hypothesis (random distribution), you can use spatial clustering such as ST-DBSCAN or even k-means to explore the locations of the clusters.
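For reference, this is how DBSCAN behaves on toy coordinates (scikit-learn, with made-up points): it finds the two dense groups and flags the isolated point as noise (label -1), but it attaches no significance test to the result, which is exactly the gap Moran's I fills:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# two tight groups of points plus one isolated "noise" point
pts = np.array([
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],          # group A
    [10.0, 10.0], [10.1, 10.0], [10.0, 10.1], [10.1, 10.1],  # group B
    [5.0, 5.0],                                              # isolated point
])

labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(pts)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(labels)      # the last point is labelled -1 (noise)
print(n_clusters)  # → 2
```

For real latitude/longitude data, project the coordinates or use a haversine distance metric before clustering, since eps in degrees is not a uniform ground distance.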
• asked a question related to Wikis
Question
We know that heating happens mainly due to the mid-infrared region of solar irradiation.
Ice melts due to absorption of which specific range of wavelengths of the electromagnetic spectrum?
Can we relate this to the vibrational states of hydrogen bonds inside ice crystals and of the water molecules?
And is all the energy absorbed in the UV, infrared, and microwave regions used for heating the ice mass, or could it be used for breaking O-H bonds and just lead to ionisation?
You can use, for example, measurements of spectral albedo from Perovich et al. (2002) for sea ice:
• asked a question related to Wikis
Question
Hello everyone,
Could someone please provide a working example (or point me to a good resource) of how 'lambda local' (along with lambda BG, 1k , 5k and 10k) is calculated for peak calling in MACS?
λlocal = max(λBG, [λregion, λ1k], λ5k, λ10k)
Does 'max' here mean the largest of the variables λBG, λregion, λ1k, λ5k, λ10k, with the p-value then calculated using this 'max' value for λ? Or is λlocal a product/average/sum of all the other λs?
I have been unsuccessful in trying to understand it by referring to the script and the tutorial given in the link below:
Also, is the 'mfold' (10-30 fold enrichment) parameter estimated w.r.t lambda BG?
Thanks!
Simple: assume you want to randomly pick a number between 1 and 100. If you do it many times (say 1000), how many times will you have picked a number between, say, 1 and 5?
(5/100) * 1000 = 50
where 5 is the interval length [1-5], 100 is the range of numbers, and 1000 is the number of trials.
By analogy with MACS, the interval length is the estimated fragment length d, 100 is the estimated effective genome size, 1000 is the total number of reads from the input sample, and 50 is λBG.
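To make the 'max' question explicit: it is just the element-wise maximum of the candidate λ values, not a product or average, and the winning λ is what goes into the Poisson p-value. A one-line sketch with invented rates:

```python
def lambda_local(lam_bg, lam_region, lam_1k, lam_5k, lam_10k):
    # MACS picks the most conservative (largest) local background rate;
    # the Poisson p-value for the peak is then computed with this lambda
    return max(lam_bg, lam_region, lam_1k, lam_5k, lam_10k)

# invented example rates (expected reads per window, scaled to peak width)
print(lambda_local(1.2, 0.8, 2.5, 1.9, 1.4))  # → 2.5, here driven by λ1k

# the background analogy from the answer above:
interval, total, trials = 5, 100, 1000
print((interval / total) * trials)  # → 50.0
```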
• asked a question related to Wikis
Question
THE LONG VARIANT OF MY “COMPOSITE” DOUBLE-QUESTION:
Can elementary neutral massless fermions, aka "elementary fermionic (neutral) luxons" (EFLs) (whose true existence isn't rejected in principle by mainstream physics) (not to be confused with Weyl fermions, which are not elementary particles but quasi-particles), be valid candidates for dark matter and energy? And if so, do you have any suggestions on possible experiments that may confirm or refute the existence of these EFLs?
My zero-energy hypothesis (ZEH) launched in my recent article (“On a Possible Logarithmic Connection between Einstein's Constant and the Fine-Structure Constant, in Relation to a Zero-energy Hypothesis”, Physical Science International Journal [PSIJ], ISSN: 2348-0130, Vol.: 24, Issue.: 5, pages 22-40: https://www.researchgate.net/publication/342530363 and https://www.journalpsij.com/index.php/PSIJ/article/view/30191) PREDICTS that all EPs may be “conjugated” in boson-fermion pairs of “mass-conjugates” (which is a new type of physical symmetry proposed by ZEH and produced by a balance between the strengths of electromagnetic and gravitational fields at Planck scales) with the rest masses of all known (and unknown!) elementary particles (EPs) being the conjugated solutions of a simple quadratic equation (proposed by ZEH) which allows all neutral EPs to have zero rest masses. ZEH also predicts that spacetime is probably granular (and very viscous!) at Planck scale allowing G/r and k_e/r ratios with only discrete values in the predicted length-interval [r_min, 5*10^3*r_min]. If the quantum vacuum will ever be proved to be actually a “fluid”-like entity, my ZEH predicts that vacuum may be granular and very viscous at scales close to Planck scales and that is why its movement and/or deformations may be governed by an equation similar to that of viscous flow (https://en.wikipedia.org/wiki/Lambert_W_function#Viscous_flows), which equation (of viscous flow) is solvable by using a Lambert W function.
Furthermore, my ZEH predicts two elementary massless fermions (the here-called “Higgs-fermion” [Hf] and “Z-fermion” [Zf] which can be regarded as elementary fermionic luxons [EFLs] [https://en.wikipedia.org/wiki/Massless_particle], NOT to be confused with Weyl fermions [which aren’t EPs but quasi-particles]) as being the “mass-conjugates” of the Higgs and Z bosons potentially viable candidates for both dark matter and dark energy. Being zero-mass fermions, they are also predicted by ZEH to move with the speed of light and thus to have been spread by the Big Bang in all directions of space with this speed of light. Mainstream physics DOESN’T reject, in principle, the true existence of EFLs.
Do you have any suggestions on possible experiments that may confirm or refute the existence of my ZEH-predicted EFLs Hf and Zf?
It would be also interesting to (at least theoretically) know if these Hf and Zf have a weak charge or not, thus if they couple with the weak nuclear field (WNF)/participate to the weak interaction (https://en.wikipedia.org/wiki/Weak_interaction) (like all the other known fermions from the Standard model were proved to couple with WNF) or NOT. What do you think?
Actually, I'm now preparing another paper in which I try to demonstrate that these two types of massless neutral fermions (predicted by my ZEH) are good candidates for a hypothetical superfluid vacuum, which may at least partially explain dark matter and dark energy and even establish a profound connection between the two.
Regards!
• asked a question related to Wikis
Question
As ASLERD (https://en.wikipedia.org/wiki/ASLERD) we have conducted surveys in Italy at both the national level (university teachers, school teachers, and parents) and the local level (university students, high school teachers, students, and parents). Just to give an idea, on ResearchGate you can find a preprint on
We wish to get in contact with colleagues that have conducted similar studies in other countries and/or that wish to use the same questionnaires to support comparative studies and create a much wider dataset.
If interested write an email to
aslerd [dot] org [at] gmail [dot] com
in which you
1) describe your (and your research group) interests in education (just one sentence);
and indicate
2) whether you are interested in investigating distance learning during the Covid-19 emergency or after the universities/schools re-open;
3) whether you are interested in an investigation at the university level, the school level, or both;
4) whether, apart from the standard localization, you wish to translate the questionnaires into your local language (at present the questionnaires are available in Italian, English [to be validated], and Arabic);
5) whether you intend to carry out the investigation in the whole country or to consider only a local case history (in the latter case please describe it, just one sentence);
6) how you plan to involve your target group.
You may understand that motivation and well planned research are very important to establish a successful alliance and collect meaningful datasets (as we are already doing with some of you).
yes, I think so
• asked a question related to Wikis
Question
These are all very specific questions related to some lab questions I am supposed to fill out. I was able to do the first part, and I know the equation for the Landé factor for the second, but I do not know exactly what the terms are going to be. Secondly, the Doppler question is very confusing to me because the reference material I have does not talk about this at all. Would the equation I want to use be the "full width at half maximum" equation I found on Wikipedia here: https://en.wikipedia.org/wiki/Doppler_broadening? This just seems like the only thing I could find whose terms I could look up and use from the problem. I have attached the questions for more details. Thank you.
It is possible experimentally with ESR or EPR setups.
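The FWHM formula from the Wikipedia page is indeed the standard one for thermal Doppler broadening; evaluated numerically (the sodium D line at 500 K is just an illustrative choice, not from the attached problem):

```python
import math

def doppler_fwhm(wavelength_m, temperature_K, mass_kg):
    """FWHM of a thermally Doppler-broadened line:
    Δλ = λ * sqrt(8 * kB * T * ln2 / (m * c^2))."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    c = 2.99792458e8   # speed of light, m/s
    return wavelength_m * math.sqrt(
        8 * kB * temperature_K * math.log(2) / (mass_kg * c**2))

# sodium D line (589 nm) in a vapour at 500 K; Na mass = 22.99 u
u = 1.66053906660e-27  # atomic mass unit, kg
dl = doppler_fwhm(589e-9, 500.0, 22.99 * u)
print(dl)  # on the order of 2e-12 m (about 2 pm)
```

The width grows with the square root of temperature and shrinks with the square root of the emitter's mass, which is why heavy species at low temperature give the narrowest lines.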
• asked a question related to Wikis
Question
I am trying to design a controller using a control Lyapunov function (CLF) for a particular problem, and I want to find the settling time of the controller I would design for convergence near a desired equilibrium point that I define myself. It would be very nice if you could provide some intuition about this.
Also, in the example section of the Wikipedia page I could not find any reference, but the controller appears to converge to a desired equilibrium point, and according to the authors varying alpha and kappa would do. I need some light shed on this topic; can you help me?
Calculating the settling time of a multi-state nonlinear system is not difficult; however, I think tuning the settling time, even for a linear system in the Laplace domain, is sometimes a demanding task. In nonlinear systems, particularly multi-dimensional ones, the control designer is usually content to just find the control law (analytic or numerical) and leaves the performance characteristics unaddressed, due to some challenges of nonlinear control already known to the control systems community.
• asked a question related to Wikis
Question
Why do inverted organic solar cells typically have lower efficiencies than the standard architecture?
Wikipedia says this as do a few other places but I cannot find an explanation, the references always also state it without explanation.
"Inverted cells can utilize cathodes out of a more suitable material; inverted OPVs enjoy longer lifetimes than regularly structured OPVs, but they typically don't reach efficiencies as high as regular OPVs "
Dear Ken Johnson,
It is not always true that inverted organic cells have lower efficiencies than the standard architecture. It depends very much on the device structure and layer thicknesses. The inverted device structure of solution-processed OPVs is often limited by the layer-by-layer deposition, which can be incompatible with the processing even when the energy alignment is good. This problem can be solved by vacuum deposition instead of solution processing. There are some publications on this from the group of Prof. Martin Heeney; their inverted OPVs have higher efficiency than normal structures, and I witnessed their device testing.
• asked a question related to Wikis
Question
MDP is a discrete-time stochastic control process, providing a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision-maker [Source: Wiki]. They are used in the areas of optimal response such as Reinforcement Learning.
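As a concrete illustration of the formalism, here is a value-iteration sketch on a toy two-state, two-action MDP (the transition probabilities and rewards are invented for illustration, not taken from any source):

```python
import numpy as np

# P[a][s][s'] = transition probability, R[a][s] = expected reward (toy numbers).
P = np.array([[[0.9, 0.1], [0.4, 0.6]],    # action 0
              [[0.2, 0.8], [0.1, 0.9]]])   # action 1
R = np.array([[1.0, 0.0],                  # action 0
              [0.5, 2.0]])                 # action 1
gamma = 0.9                                # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator to a fixed point and
    return the optimal values V[s] and a greedy policy pi[s]."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V              # Q[a, s] = R[a, s] + gamma*sum_s' P*V
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

This convergence to the Bellman fixed point is exactly what reinforcement-learning methods approximate when P and R are unknown.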
Dear Dr. Abhijeet Sahu ,
I am not a specialist, but this link will be very useful:
• asked a question related to Wikis
Question
I ran into an interesting mathematical problem that results from the use of infinitesimal vector calculus in relation to the Helmholtz theorem and the vector Laplace operator.
I've also posted this question here, where I've edited it a bit further and added some things:
What is very interesting is that the Helmholtz decomposition is hidden within the vector Laplace operator, and this can be used to define potential fields. For the general case:
The terms in the definition for the vector Laplacian can be negated and equaled to zero to obtain the vector Laplace equation:
−∇²𝐅 = −∇(∇·𝐅) + ∇×(∇×𝐅) = 0,
and then the terms in this identity can be written out to define a vector field for each of these
𝐀 = ∇×𝐅
Φ = ∇⋅𝐅
𝐁 = ∇×𝐀 = ∇×(∇×𝐅)
𝗘 = −∇Φ = −∇(∇⋅𝐅)
And, since the curl of the gradient of any twice-differentiable scalar field Φ is always the zero vector (∇×(∇Φ)=0), and the divergence of the curl of any vector field A is always zero as well (∇⋅(∇×A)=0), we can establish that E is curl-free and B is divergence-free, and we can write:
∇×𝗘= 0
∇⋅𝐁= 0
As can be seen, the vector Laplacian establishes a Helmholtz decomposition of the vector field 𝐅 into an irrotational (curl-free) component 𝗘 and a divergence-free component 𝐁, along with associated potential fields Φ and 𝐀, all from a single equation, i.e. a single operator.
For fluid dynamics, we can use this decomposition to define a vector and a scalar potential for the velocity field, analogous to the electrodynamic domain like this:
vfd = -∇Φfd + ∇×𝐀fd
𝗘fd = −∇Φfd
𝐁fd = ∇×𝐀fd
ω = ∇×𝐁fd
From this, we can do an analysis of the units of measurement, since the curl, grad and div operators all have a unit of measurement of per meter [1/m]. Since v, E and B all have a unit of measurement of velocity in meters per second [m/s], we obtain a unit of measurement of cubic meters per second [m³/s] for the primary field F, thus describing a volumetric flow field, similar to the volumetric flow rate:
"Volumetric flow rate is defined by [...] the flow of volume of fluid V through a surface per unit time t."
It seems this can also be defined as the flow velocity vector field v times an area A perpendicular to v, with a surface proportional to h² square meters [m²], with h the physical length scale in meters [m].
For finite difference or discrete vector calculus methods, such as used in FDTD simulation software, h denotes the spacing of the discretization grid, which may be variable or constant.
This leads to the conclusion that F ≠ 0 for any v ≠ 0 and any h > 0; therefore, when using discrete mathematics, F exists, and according to the Helmholtz theorem it is uniquely defined by the two potential fields.
Now here's the problem: when we take the limit h → 0, as we do with infinitesimal notation, we obtain F = 0, which cannot be correct for any field with v ≠ 0. So what we find is that there is a limit to the applicability of the Helmholtz decomposition when using infinitesimal calculus, and that needs to be worked around.
However, if v is known and F can be defined as v times an area A perpendicular to v, it seems it should be possible to compute the curl and divergence of F from this definition and thus arrive at a completely closed system of potential theory, in which all fields are uniquely defined and can be analytically solved, except the volumetric flow field F itself.
So, the question is: how do we do that?
Hopefully, some mathematician finds this problem interesting enough to think about, because it has quite a lot of consequences for the actual applicability of the Helmholtz decomposition in the general case as well. Now that we have shown that the Helmholtz decomposition does not actually hold in this case, it is an interesting question for mathematicians to figure out when this is the case and what consequences this has.
Hi Arend, If you take a look at this section from the Wikipedia article on the Biot-Savart Law, https://en.wikipedia.org/wiki/Biot%E2%80%93Savart_law#Aerodynamics_applications
you will find that it relates to what you are talking about. I fully agree with you that there is an analogy between B = μH and J = ρv.
But you can't combine the two. They each have their own meaning, and only B = μH relates to magnetic field lines. Helmholtz once attempted to explain magnetic field lines based on J = ρv, but eventually he conceded that Maxwell's model was correct.
• asked a question related to Wikis
Question
I've been working on Cohen Class Distribution functions and wanted to write a code that computes the TF-distribution based on the general function. With this I can replace the kernel function and try different TF distributions.
Can someone please tell me what mistake I have made in the computation? I imagine there is some mistake in the integration, but I don't know where.
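Since the code in question is not shown, here is a from-scratch reference sketch of the Wigner-Ville distribution, the Cohen-class member whose kernel is identically 1; other members of the class follow by filtering the local autocorrelation before the FFT (the function and variable names are my own, not from the question's code):

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal x.
    Row t of the result is the FFT over lag tau of x[t+tau]*conj(x[t-tau]);
    a Cohen-class kernel would multiply this local autocorrelation first."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)          # lags that stay inside the signal
        tau = np.arange(-taumax, taumax + 1)
        acf = np.zeros(n, dtype=complex)
        acf[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[t] = np.fft.fft(acf).real
    return W
```

A quick sanity check: for a pure tone at bin f0, the instantaneous autocorrelation oscillates at twice that rate, so each row peaks at bin 2·f0, the well-known frequency-doubling convention of the discrete WVD.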
A very interesting subject, but not within my competence ... good luck.
• asked a question related to Wikis
Question
I am conducting GWAS QC and would like to exclude regions of high linkage disequilibrium (LD) during pruning of my data (which is in GRCh38). Does anyone know where I may find a list of high LD positions in build 38?
... or how may I map the positions listed in https://genome.sph.umich.edu/wiki/Regions_of_high_linkage_disequilibrium_(LD) to build 38?
I also found a list in build 37 in De Vlaming et al., 2017 (Table 5) (https://www.biorxiv.org/content/10.1101/211821v1 ), but am not sure what would be the best approach.
Thank you.
EDIT:
I tried to use liftOver to map the list of high LD regions from De Vlaming et al., 2017 to Build 38 using the commands below, but there were plenty of unlifted positions in both (in brackets are the row counts in the output and unlifted bed files).
liftover input.bed hg18ToHg19.over.chain.gz outputA.bed unliftedA.bed
(16 positions lifted, 18 positions unlifted)
liftover input.bed GRCh37_to_GRCh38.chain outputB.bed unliftedB.bed
(15 positions lifted, 20 positions unlifted)
Does liftOver prefer hg18ToHg19.over.chain.gz or GRCh37_to_GRCh38.chain?
Would it be okay for me to use the output.bed files even though many of the high LD regions were not lifted from the original b37 list?
Thank you.
After reading the paper, I found Table 1 with the remaining positions attached at the end, but the paper does not state the genome-build version either.
I found another R package (bigsnpr) and asked its author; this is his answer: https://github.com/privefl/bigsnpr/issues/84
Maybe we could use the NCBI36 (hg18) build as the source assembly and GRCh38.p13 as the target assembly in the NCBI Remap tool, because there is only one version of NCBI36 in the Remap tool, while there are 13 versions of GRCh37.
• asked a question related to Wikis
Question
I have a polymer end-capped with a Polysilsesquioxane (POSS) like the one here: https://en.wikipedia.org/wiki/Silsesquioxane#/media/File:Silsesquioxane_T8_Cube.png where *R=i-butyl*.
I haven't found much information on what types of reactions these POSS groups can undergo.
I would like to know if there are procedures to modify that POSS moiety and convert it into an amino, carboxyl or triethoxysilane group, for example.
Also, are those R groups easy to replace, or are the Si-(i-butyl) bonds very stable?
Hi Alfonso Brenlla, this is an interesting question. We have done a lot of silsesquioxane chemistry in the past. Just search our Research Items for the keyword "silsesquioxane" to get more detailed information, including review articles. A typical article is mentioned below. In general, it can be said that the organic R groups attached directly to Si (e.g. R = iBu, Ph, cyclo-C5H9, cyclo-C6H11) are rather stable and cannot be easily replaced. Thus chemistry involving silsesquioxanes is normally done with derivatives which have reactive functional groups such as Si–OH or Si–Cl.
• asked a question related to Wikis
Question
Dear All,
Here are the figure and description of a "do it yourself" face mask. You can prepare it in about 10 minutes.
1. You need a textile head scarf. Its size may vary; the one indicated in the figure can encircle a normal human head.
2. The scarf should be folded diagonally in half.
3. Refold the lower peak of the scarf and sew it onto one layer of the scarf. Put a filter between the two scarf layers. The filter may be a paper towel, paper handkerchief, toilet paper, paper napkin or nonwoven fabric https://en.wikipedia.org/wiki/Nonwoven_fabric .
The filter should be changed in every 2 hours.
Tie the scarf behind your head tightly, but so that you can still untie it. Of course, it is not as efficient as an FFP2 or FFP3 mask, but it is better than a normal surgical mask. The scarf can be washed every day.
All the best,
Polymeric materials are among the insulating materials that are not penetrated by air.
• asked a question related to Wikis
Question
The brachistochrone is a well-known problem in the calculus of variations and optimal control. Is there any explicit solution to the problem? In existing texts, x and y are usually parameterized by a dummy variable such as t, phi or theta. I want to know whether an explicit solution y(x), in terms of x rather than an intermediary variable, is available.
In the pdf, attached to this question, there is some hint on the explicit solution (but it seems it is computational and numerical).
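For reference, the standard solution is the cycloid in parametric form; one direction of elimination is explicit, the other is not, which is presumably why the texts keep the parameter:

```latex
x(\theta) = a(\theta - \sin\theta), \qquad y(\theta) = a(1 - \cos\theta)
\quad\Longrightarrow\quad
x(y) = a\arccos\!\left(1 - \frac{y}{a}\right) - \sqrt{y(2a - y)}
```

So x is an elementary explicit function of y (substitute cos θ = 1 − y/a and sin θ = √(y(2a − y))/a), but the inverse y(x) requires solving this transcendental, Kepler-like relation and has no elementary closed form, which matches the computational/numerical flavor of the attached pdf.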
It seems somewhat explicit analytic solution is presented in this book:
Introduction to the calculus of variations, by William Elwood Byerly.
Equation (2) of chapter II.
• asked a question related to Wikis
Question
So, around January last year I got back to using the Concerto platform, but apparently the installation instructions no longer recommend using AWS. I went through this very small tutorial to the end, but on the last part, going to http://localhost/admin , nothing happens. That's because the tutorial looks incomplete: I'm missing the steps between step 5 and logging in.
Another point that I don't get: is AWS still needed? The new tutorial is really small and shallow for anyone who isn't familiar with the extra hidden steps...
Hello everyone, can anybody help me to run the Concerto-Platform at the AWS EC2? I did follow the instructions at the GitHub of the project, but it didn't work out. I never had contact with this platform or anything similar, then I'm kind of lost. I would like to know if there is any tutorial how to run and use the platform using the AWS EC2.
• asked a question related to Wikis
Question
Collaboration is essential for ensuring learning for all learners in one classroom. Blackboard collaboration tools like blogs and wikis can be considered the best platform for increasing interaction and motivation to accomplish a task in a group. Task selection is the most important factor in achieving the learning outcomes. So, for reading-skill development, what types of tasks can language teachers design and assign to learners in a blog?
Encouraging students to contribute to Wikipedia entries, critically evaluate articles and take part in the editing process is a useful project that can have several educational values.
• asked a question related to Wikis
Question
In chemistry and physics what are "atomic orbitals" actually USED for?
"In atomic theory and quantum mechanics, an atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus." (en.wikipedia.org/wiki/Atomic_orbital)
"In chemistry, a molecular orbital (MO) is a mathematical function describing the wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region." https://en.wikipedia.org/wiki/Molecular_orbital
These articles are about the probability of finding an electron. They have impressive diagrams illustrating spherical harmonics, drum-membrane oscillation modes, shapes of orbitals, subshell filling rules, etc. There are wave equations, hydrogen-like single-electron atoms, Hartree-Fock approximations for multi-electron atoms, etc. . . .
We are told (variously) that the:
electronic structure of neon is: 1s² 2s² 2px² 2py² 2pz², or 1s² 2s² 2p⁶
electronic structure of potassium is: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s¹
electronic structure of barium is: 1s² 2s² 2p⁶ 3s² 3p⁶ 3d¹⁰ 4s² 4p⁶ 4d¹⁰ 5s² 5p⁶ 6s².
Why should I care? Why is this taught to freshmen in college chemistry?
After being impressed by all the pictures of orbital shapes, I then read this:
"Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number l of the same shell n (e.g. all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same l) is spherical. This is known as Unsöld's theorem." (https://en.wikipedia.org/wiki/Atomic_orbital )
Does any of this have any practical use other than to pass a test or impress your professor or girl friend? This "knowledge" just does not seem to connect with anything. The concern over "finding an electron" seems like a pathological compulsive/obsessive disorder.
In my view, "electron energy levels" could simply be "energy levels within the atom"--and having nothing fundamentally to do with badly misbehaving electrons.
It all reminds me of a joke I heard on TV: "Does Superman have a super sense of humor or just an ordinary sense of humor?" Of course, we need a grant to study the issues here. The sense of humor might be related to diet. What does Superman eat? How do you define "super sense of humor"? etc., etc., etc. But after all the studies are done and all the money is spent, WHO CARES? This is useless "knowledge".
Two engineering firms were trying to "out spec" each other on their quality programs. One was bragging about 8 sigma quality, the other, 6 sigma quality. But the bottom line proved to be: "The quality that we don't have is BETTER than the quality that you don't have either." This is a distinction without a difference--nothing meaningful.
Is all this s,p,d,f orbital stuff simply mathematical circumlocution for our amusement? A horror movie based on Schrodinger's mathematical "maps of Hell" ? Could we learn just as much by getting drunk and watching a really stupid grade B movie on TV?
Of what USE is this so called "knowledge". If someone knows, PLEASE let him speak up! Please enlighten us!
Dear Brian,
I'm not an expert in chemistry... I can just mention a pair of possible applications in my field (electronics - molecular electronics), hoping that they might be useful.
Very often in molecules the chemical bonds take place in the direction of maximum overlap between single atom orbitals. Thus from the knowledge of valence atomic orbital shapes it is possible to predict the shape of a molecule (useful also for synthesis of new molecules).
Moreover, steric effects can be linked to the shape of molecular orbitals. A simple example of a steric effect is the repulsive force between orbitals. As a consequence, the actual shape of a molecule can be modified (a torsion might appear).
For example, cyclohexane (C6H12) is not planar but bent (the so-called "chair" configuration) because of that.
This influences the chemical-physical properties of the molecule. In particular it affects transport through the molecule (electrical current), which is useful in molecular electronics. Chemical reactions can also be slowed (steric hindrance) because of that.
The electronic structure of the elements can be used for various purposes, for example knowing the number of valence electrons (the only ones that strongly affect the chemical-physical properties of a material) and thus understanding the nature of chemical bonds. Knowledge of the kind of valence orbitals (and their angular momentum), s, p, d..., is useful for understanding chemical and physical properties of a material (possibly a compound or alloy), such as the band structure of solids, from which electrical current, interaction with light, mechanical waves (vibrations), etc. can be derived.
Hope it helps,
regards.
• asked a question related to Wikis
Question
Is it alright if I run an ISIF = 3 and ISPIN = 2 calculation for 2 hours, after which the job terminates (limited computing resources), and then simply copy the CONTCAR to POSCAR and start it again for 2 more hours and so on? I can only run a calculation for 2 hours at a time with the computing facility I use. Is this approach alright or will there be issues?
ISTART seems like an appropriate option here but I fail to understand its VASP wiki page, so I only use ISTART = 0. Any suggestions in this scenario are welcome.
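A common workaround for walltime limits is a thin wrapper that promotes CONTCAR to POSCAR before each resubmission; a hedged Python sketch (the launch command below is a placeholder for however your cluster actually starts VASP):

```python
import os
import shutil
import subprocess

def resume_relaxation(workdir=".", vasp_cmd=("mpirun", "vasp_std")):
    """Continue an interrupted ISIF=3 relaxation: if a non-empty CONTCAR
    is left over from the previous run, promote it to POSCAR, then launch
    VASP again. vasp_cmd is a placeholder, not a universal invocation."""
    contcar = os.path.join(workdir, "CONTCAR")
    poscar = os.path.join(workdir, "POSCAR")
    if os.path.isfile(contcar) and os.path.getsize(contcar) > 0:
        shutil.copyfile(contcar, poscar)   # restart from the last geometry
    subprocess.run(vasp_cmd, cwd=workdir, check=True)
```

Called once per 2-hour job, this reproduces the manual copy-and-resubmit loop; geometry is carried over through POSCAR regardless of the ISTART setting.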
What I did to understand this is to do ISIF=4 with 5 different volumes, then fit to the minimum, and compare this to ISIF=3.
In my tests (all done with nickel-base alloys or nickel phases like delta/eta), ISIF=3 usually gives a smaller final cell volume and errors of up to 1-2meV/atom.
Note that the volume variation can be made a lot cheaper if you always use the final CONTCAR of one calculation as the starting point for the next (with changed scale-factor, of course).
In systems like nickel, you may also need to watch out for changes in magnetic moments which may change abruptly if you change the volume.
If you want very precise results (or need DOS etc.), after doing the volume variation, it is best to add a single static simulation with ISMEAR=-5 at the minimum position (copy the CONTCAR of the run that is closest to the minimum). Usually, the minimum energy calculated this way is close to that predicted by the EOS; deviations are a few meV for a 32-atom cell, so this is only needed if you really need very precise numbers or if you want to do a static calculation anyway.
As I said, all this was done with nickel (and a few things with titanium) alloys; if you have a different system, I suspect that effect size may be different.
• asked a question related to Wikis
Question
Is anyone informed about the subject? I would like to talk to someone who does this kind of therapy.
Yes, everyone has vital energy that a psychotherapist can draw on in treating individual mental disorders, but the therapist must study each case individually. Like Dr. Richard Kensinger, I also wonder whether sleep disturbances can play a role in mental disorders. A good question, and blessed efforts.
• asked a question related to Wikis
Question
Prince Prisdang became ambassador to eleven European countries and the United States. Later, the prince became the Buddhist patriarch of Colombo, Sri Lanka, but returned to Bangkok in 1911, where he was obliged to disrobe.
You are welcome my brother
• asked a question related to Wikis
Question
Rudolf Virchow was a German physician, anthropologist, pathologist, prehistorian, biologist, writer, editor, and politician. He is known as "the father of modern pathology" and as the founder of social medicine, and to his colleagues, the "Pope of medicine". He wrote in 1848 “Medicine is a social science, and politics is nothing else but medicine on a large scale.” What is your opinion?
To take up Virchow's metaphor POLITICS IS MEDICINE from the perspective of cognitive linguistics: judging by today's political realities worldwide, most, if not all, politicians are failed doctors, because they fail in their diagnosis and prognosis of social realities. Perhaps Virchow meant by this metaphor that, like medicine, politics is not an exact science; this does not entail that medicine does not make use of scientific methodologies. From the perspective of Critical Discourse Analysis, this metaphor constitutes a kind of legitimization of political errors (by analogy to medical errors). In medicine and in politics, fatalities are not a rare occurrence worldwide.
• asked a question related to Wikis
Question
This fake case report has been plagiarized from previously published works and contains images that are edited and copied from different sources, such as Wikipedia. Figure 3 of this article is presented as a histopathology image of an ERCP tissue sample, but it is actually an image from the Wikipedia page on tubulovillous adenoma, published in 2009, showing a surgically resected tumor.
The original case report was published in 2015 in a local journal.
The authors are affiliated with the University of Arizona, and this reflects poorly on the institution and the individuals involved in this phony research.
Thank you, Dr. Keller. Yes, I raised the issue with Mr. John Adler, editor of Cureus, and he retracted the article but was reluctant to label it plagiarism; hence the retraction note is extremely vague and too forgiving.
I got in touch with retraction watch and they endorsed my view that this was plagiarism.
The same author has another paper that is also retracted due to plagiarism with the same inaccurate retraction notice.
What I find despicable about all this is that these authors plagiarized from a local journal of my university that is not indexed in PubMed but still publishes quality work. I don't know how the University of Arizona, where most of the authors are based, let this happen.
• asked a question related to Wikis
Question
Does this suggest that electronic circuits naturally produce harmonics of the input frequency? That electrons themselves break up into lower-order components?
Does the sin-cubed (and cos-cubed) identity, sin³(x) = (3 sin(x) − sin(3x))/4,
suggest that spatially transmitted frequencies ought to be detected by some sort of composite (or complementary) Fourier transform, where instead of detecting one transmitted frequency at a time for a set binary signal, we can subtract known pieces of noise, these natural harmonics, and thereby improve the signal-to-noise ratio?
Emmanuel Orban de Xivry , yes, but if sin^3(x) = (3sin(x) - sin(3x))/4
Then 9 sin(x) + sin(9x) = 12 sin³(x) − 4 sin³(3x) + 6 sin(3x)
So a 3 dimensional spherical wave needs a difference of three times the frequency and a third of the amplitude.
With nine times the frequency you can have a difference between two spherical waves of a third the amplitude and the frequency with a difference of a single dimensional wave at half the amplitude and a third of the frequency.
The shapes and the components go together, all you need are thirds of frequencies.
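These triple-angle rearrangements are easy to sanity-check numerically; a quick sketch verifying the power-reduction identity and the combination 9 sin x + sin 9x = 12 sin³x − 4 sin³(3x) + 6 sin 3x (which follows by applying the identity at x and at 3x):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
s, s3, s9 = np.sin(x), np.sin(3 * x), np.sin(9 * x)

# power-reduction identity: sin^3(x) = (3 sin x - sin 3x) / 4
assert np.allclose(s ** 3, (3 * s - s3) / 4)

# combined form: 9 sin x + sin 9x = 12 sin^3 x - 4 sin^3(3x) + 6 sin 3x
assert np.allclose(9 * s + s9, 12 * s ** 3 - 4 * s3 ** 3 + 6 * s3)
```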
• asked a question related to Wikis
Question
I have trained word embeddings on a "clean" corpus with fastText, and I want to compare their quality against the pre-trained multilingual embeddings in BERT, which I understand to be trained on a much noisier corpus (Wikipedia).
Any Suggestions or Ideas on how to go about evaluating/comparing the performance would be appreciated.
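One common intrinsic comparison is the Spearman correlation between the model's cosine similarities and human similarity judgements (e.g. a WordSim-353-style word-pair list). A self-contained sketch; the dict-of-vectors format and the function names are my own assumptions, not an API of fastText or BERT:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    """Spearman rank correlation without SciPy (no tie handling)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def evaluate(emb, pairs_with_gold):
    """Score an embedding dict {word: vector} against human similarity
    judgements given as [(w1, w2, gold_score), ...]."""
    model, gold = [], []
    for w1, w2, g in pairs_with_gold:
        if w1 in emb and w2 in emb:          # skip out-of-vocabulary pairs
            model.append(cosine(emb[w1], emb[w2]))
            gold.append(g)
    return spearman(np.array(model), np.array(gold))
```

Running the same pair list through both embedding sets gives directly comparable scores; downstream-task evaluation (classification, NER) remains the stronger test, as the answer below notes.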
It is best to test on your own task. If you are doing text classification, I would recommend starting with an AUC assessment; if entity recognition, with F1.
• asked a question related to Wikis
Question
I was reading about using the multivariate cox proportional hazards model at this website: http://www.sthda.com/english/wiki/cox-proportional-hazards-model, which uses the Survival package for cox regression. The summary of a cox regression object outputs a bunch of information about the model, including a concordance index.
Is all of the data used to train the cox regression model? If so, is the concordance index found on that same training data? Does this cause overfitting?
Yes, the entire dataset is used in model fitting, as there is nothing to be tuned, unless you are doing penalized Cox regression and need to tune the penalty.
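For intuition, and for recomputing the index on held-out data rather than the training data, Harrell's concordance index can be written from scratch; a minimal O(n²) sketch (my own illustration, not the survival package's implementation):

```python
def concordance_index(time, event, risk):
    """Harrell's C: the fraction of comparable pairs in which the
    higher-risk subject fails first. A pair (i, j) is comparable when
    the earlier time is an observed event. Ties in risk count 1/2."""
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:   # comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

Computing this on a held-out split (or bootstrap-correcting it) avoids the optimism of the in-sample value printed by the summary, which is the overfitting concern the question raises.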
• asked a question related to Wikis
Question
The Role of Salt Glands In Marine Vertebrates for Navigation and Migration (Crystalline Electromagnetic Conductors to Sense the Direction of Travel) By: Maryellen Elizabeth Hart November 8, 2018
Goal: I was studying my biology and read about the salt gland of marine animals (birds, turtles, etc.). Before reading further, I hypothesized that the salt glands were being used for navigation (crystalline electromagnetic conductors to sense the direction of travel, the north and south poles, latitude and longitude), necessary for navigation in the ocean without the visual aid of land, sun or stars. I researched everything I could find and have decided I am on the right track AND A NEW PATH. Supportive sources document the electromagnetic capability of crystalline structures contained within the salt glands of marine animals; however, no marine biologists have yet associated the salt gland with navigation of migratory paths by marine animals, an ability long held as a great mystery of science. I would like to research this more and would like to be connected with anyone currently interested in or simultaneously researching this topic. I am reserving the copyright to my hypothesis if it turns out to be right; please allow me the much-needed credit. Please, my friends, connect me with marine biologists who would be interested in my research. Thank you so very much. Blessings. Maryellen Elizabeth Hart, November 8, 2018.
Sources: https://en.wikipedia.org/wiki/Salt_gland and https://www.eidon.com/the-crystalline-electromagnetic-body/
"The geomagnetic field is relatively stable over biological time scales and is axial, with the magnetic field lines roughly directed north-south and symmetric in both hemispheres. This provides a reliable, static reference system for orientation and navigation. Alternatively, magnetic anomalies within the Earth's crust can also be recognized and used as reference features. Taking advantage of these properties of the geomagnetic field, some groups of animals have developed a biological magnetic compass, similar to the magnetic compass used by humans to locate the north magnetic pole. The magnetic compass has been described as an axial compass (also known as an inclination compass) for migratory birds and homing pigeons (Wiltschko and Wiltschko 1972; Walcott and Green 1974), and is based on the axial course of the geomagnetic field lines on the Earth's surface. The magnetic compass is used as a reference system and as a mechanism to maintain steady courses during homing and migrations. Therefore animals able to discriminate the minute but steady changes of the inclination angle and the intensity of the geomagnetic field can potentially establish their latitudinal position. To date, several models for position determination based on magnetic field parameters have been proposed (Davila 2005). Lohmann et al. (1999) proposed that sea turtles use a combination of intensity and inclination as independent coordinates for map information. Contours of equal magnetic intensity and inclination form a grid that can potentially be used as a bi-coordinate position-finding system over areas of the Atlantic Ocean, where sea turtles spend most of their life cycle. This model cannot, however, be generalized, since isolines of magnetic inclination and intensity intersect each other at high angles only over local regions of the Earth's surface (Davila 2005). In regions where the isolines are near-parallel to each other, or where the magnetic landscape is dominated by crustal magnetic anomalies, the bi-coordinate model is not viable for position determination..." (Walker et al. 2002)
Date: 23 November 2018. Project: https://www.researchgate.net/project/The-Role-of-Salt-Glands-In-Marine-Vertebrates-for-Navigation-and-Migration-Crystalline-Electromagnetic-Conductors-to-Sense-the-Direction-of-Travel-By-Maryellen-Elizabeth-Hart-November-8-2018
Thank you. Please connect me with peers who may work with me to research and find an answer. Thank you.
• asked a question related to Wikis
Question
A technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view...
If companies like Apple and Facebook release AR glasses with a wide field of view at a reasonable price, it could create a paradigm shift in product design. Each object will have a visual interface in AR.
• asked a question related to Wikis
Question
Hello,
In a paper, a 4-leg traffic intersection with a traffic light is simulated. A dynamic traffic-light scheduling algorithm is proposed, taking the presence of emergency vehicles into account.
The SUMO simulator is used to generate several mobility scenarios, and NS-2 to implement the proposed algorithm. Finally, the throughput and delay of vehicles at the intersection are measured when the scheduling algorithm is used.
I am a beginner in traffic simulation and SUMO and NS2. I studied the traci_tls example in SUMO docs tutorials (https://sumo.dlr.de/wiki/Tutorials/TraCI4Traffic_Lights) which used just SUMO to control the traffic light for the emergency vehicles crossing.
Now, why did that paper use SUMO and NS-2 together? Can the traffic light be controlled from NS-2?
Thank you
Hi,
Ns-2 has many applications. For example, you can combine the network simulator ns-2 with SUMO to evaluate VANETs, using TraCI so that SUMO and ns-2 communicate over a Transmission Control Protocol (TCP) connection to simulate V2V communication. The integrated platform that links SUMO and ns-2 via the TraCI interface is known as Traffic and Network Simulation (TraNS). TraNS can mimic traffic congestion and road collisions at a specific location or vehicle.
• asked a question related to Wikis
Question
Hello everyone,
I am considering ferrofluids (https://en.wikipedia.org/wiki/Ferrofluid) as a target material for a detector that I am designing (conceptually). I want a liquid which can be magnetized, up to a tesla say, and one that has a long radiation length (so as to minimize the effects of multiple Coulomb scattering). Liquid argon and water have reasonably long radiation lengths, but I do not know if they can be magnetized. I have seen a paper: https://iopscience.iop.org/article/10.1088/1367-2630/7/1/063/meta, but I suspect that magnetizing a large neutrino detector is impractical. It would be great if a liquid as cheap as water could be used for this purpose.
Here comes the question about ferrofluid. I read somewhere that it is possible to make this colloid at home by putting iron filings in water. This may be too simplistic, but if it has a long radiation length (I did not find any measurement) and can be magnetized, then perhaps it is not a bad option.
Does anyone have any suggestions or comments in this direction?
Regards,
Kolahal
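One way to quantify the trade-off is the usual mixture rule for radiation lengths, 1/X₀ = Σⱼ wⱼ/X₀ⱼ with mass fractions wⱼ; a small sketch for a water/iron colloid (the X₀ and density values below are standard PDG/textbook numbers for pure water and iron; treat them, and the ideal-mixing density, as assumptions to re-check):

```python
# Effective radiation length of a water/iron colloid by iron mass fraction.
X0 = {"water": 36.08, "iron": 13.84}   # radiation lengths in g/cm^2 (PDG values)
rho = {"water": 1.00, "iron": 7.87}    # densities in g/cm^3

def mixture_x0_cm(w_iron):
    """Radiation length in cm of a water/iron mixture with iron mass
    fraction w_iron, via 1/X0 = sum(w_j / X0_j) and ideal-mixing density."""
    w = {"iron": w_iron, "water": 1.0 - w_iron}
    inv_x0 = sum(w[m] / X0[m] for m in w)      # mixture rule, g/cm^2 units
    inv_rho = sum(w[m] / rho[m] for m in w)    # 1/density of the mixture
    return (1.0 / inv_x0) * inv_rho            # convert g/cm^2 to cm
```

For instance, a 10% iron loading by mass already shortens the radiation length from about 36 cm (pure water) to roughly 28 cm, so the magnetizability of a ferrofluid is bought at a real cost in X₀.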
Dear Sir,
Thank you for the detailed response. I need a magnetizable liquid with high radiation length. When I said "magnetized up to 1 tesla", I meant that when a charged particle passes through such a liquid, it will feel the force due to 1 tesla magnetic field. So, this 1 T is not the "applied external field".
I am exploring such possibility and ferrofluid seemed to be an option. But if that is not a good option due to maintenance, what else can be done? Forcing water to be magnetized by putting coils around etc., seems to be inconvenient.
• asked a question related to Wikis
Question
Thanks. My e-mail is: xylanase@gmail.com
My group has worked on a fungal exocellular beta-glucan over the last 25 years and details on our published works are available at: https://en.wikipedia.org/wiki/Botryosphaeran
Regards
Dr R Dekker
I'd like to help you, but my research is about data mining. I have never been exposed to fungal extracellular dextran, so I don't have any literature on it.
• asked a question related to Wikis
Question
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip in Tesler's Theorem, "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomously operating cars, and intelligent routing in content delivery networks and military simulations.
Next-generation artificial intelligence aims to go beyond the structure and limitations of current AI technology, toward a new science and technology of human and machine reasoning. Such AI could genuinely benefit people, shaping future society, industry, economy, culture and research; it could also propose and realize a vision or framework for the future society and contribute to educating the next generation of leaders.
• asked a question related to Wikis
Question
Dear all, I am seeking a PhD or MS Research Assistant in FEM of traumatic brain injury: http://niml.org/wiki/images/d/d8/FEM_AT.pdf.
I'm interested. I finished my PhD recently, working with FEA in the field of TBI in the paediatric population.
• asked a question related to Wikis
Question
Turbulence is said to be one of the unsolved problems in physics;
though it is actually the mathematical problem that is considered a "millennium problem".
My own thought is that a surface is created inside the fluid, even when the fluid is the same, and not only when there are two different fluids.
Dear Mr. Jokela,
in Fig. 8a in paper:
Чашечкин Ю.Д. Капли: короны, всплески, звуки… // Природа. 2016. № 11. С. 13-23.
Chashechkin Yu. D. Drops: crowns, splashes, sounds // Nature (Priroda, Russia). 2016. No.11. P. 13-23. (In Russian).
one can see a bright transverse reflecting surface separating the upper cylindrical part of the splash jet (dark) from the lower conical part of the jet, which is covered by a banded envelope.
I did not describe this flow component in detail in the paper, but deliberately chose to print this photo, intending to inspect the phenomenon more carefully later; however, "the hands did not reach it." Perhaps circumstances and your interest will help bring about further study of this flow component... The original photo is attached.
With best regards,
Yuli D Chashechkin
• asked a question related to Wikis
Question
What we see from Earth as a lunar eclipse is, viewed from the Moon, a solar eclipse. Has this already been observed directly? The page https://en.wikipedia.org/wiki/Solar_eclipses_on_the_Moon does not say anything about Moon-based observations and shows only an artistic view of such an eclipse.
During the lunar eclipse of April 24 1967, Surveyor III took photos of the eclipsing Earth while sitting on the eclipsed lunar surface. See for example photos 67-H-483 and 67-H-484 on this Surveyor III webpage:
The quality is not exactly great, but the "ring of light" is clearly visible.
Regards,
Thomas
• asked a question related to Wikis
Question
Dear Respectful Researchers,
Here come several "strange" and probably unique questions about deep-space detection, motivated by an important recent scientific discovery published in the journal Nature about Fast Radio Bursts, "A second source of repeating fast radio bursts"; see the following links:
Researchers at UBC, Canada, have discovered the second so-called "repeating fast radio burst" (FRB), recorded six times coming from the same location 1.5 billion light-years away. It seems that CHIME was able to record some of the bursts at frequencies as low as 400 MHz. My first, quick question is: can we statistically exclude an origin in extraterrestrial (ET) civilizations?
Here come my rudimentary thoughts and reasoning as an electronics/telecommunications engineer, just out of strong curiosity.
I've conducted a quick and simplistic link-budget calculation from a communication perspective. Assume that the 400 MHz radio signals (taken as constant, without frequency shifting, although in reality they do shift) do not significantly suffer from attenuation of any type introduced by planets or galaxies, and thus propagate under an ideal free-space model.
A distance of 1.5 billion light-years is about 1.419e22 kilometers, which introduces a Free Space Path Loss (FSPL) of roughly 527.5 dB at 400 MHz. On the other hand, assume the receiver at the ground station achieves an ultra-low sensitivity of -160 dBm thanks to very large, high-gain phased-array antennas; the power at the origin must then be at least about 367.5 dBm, or 5.62e33 watts (about 5.6 decillion watts).
For comparison, the Sun releases an estimated 3.846e26 watts (384.6 yottawatts) of power [1], while the combined output of all the world's power plants in 2008 was only 2.31e12 watts. We may therefore be talking about a source, 1.5 billion light-years away, radiating at least 14.6 million times the Sun's output!
This comparison suggests the energy source can hardly be an ET civilization, unless it is a so-called Type III civilization on the Kardashev scale [3]. Could it be?
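For readers who want to reproduce the arithmetic, here is a small sketch of the link budget using the standard free-space path loss formula FSPL = 20 log10(4πd/λ). The distance, frequency and the assumed -160 dBm sensitivity are taken from the question; the exact dB figure depends slightly on the constants used:

```python
import math

C = 2.998e8       # speed of light, m/s
LY_M = 9.4607e15  # one light-year in metres

f_hz = 400e6            # burst frequency
d_m = 1.5e9 * LY_M      # 1.5 billion light-years in metres

# Free-space path loss: FSPL(dB) = 20*log10(4*pi*d/lambda)
wavelength = C / f_hz
fspl_db = 20 * math.log10(4 * math.pi * d_m / wavelength)

rx_dbm = -160.0              # assumed receiver sensitivity
tx_dbm = rx_dbm + fspl_db    # required isotropic transmit power
tx_w = 10 ** ((tx_dbm - 30) / 10)

sun_w = 3.846e26             # solar luminosity, W
print(f"FSPL = {fspl_db:.1f} dB")
print(f"Required transmit power = {tx_dbm:.1f} dBm = {tx_w:.2e} W")
print(f"= {tx_w / sun_w:.1e} times the Sun's output")
```

This reproduces an FSPL of about 527.5 dB and a required isotropic source power in the region of 10^7 solar luminosities, which is the order of magnitude the comparison below relies on.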
Another thing I am concerned with and would like to ask is, what could the "super-macro galaxy-level" propagation channel models look like? Would there be some multi-path fading effects? Would there be some time delay/frequency shifting among different bursts, and in the order of months/years?
Thanks for your correction, discussion, and suggestions.
References
Yours Sincerely,
Yiming Huo (Jimmy), Ph.D.,
Jan. 12, 2019
Dear Yiming,
It is good to think about what is happening in our surrounding universe.
The frequency of 400 MHz lies in the UHF range. Stars normally radiate an approximately blackbody spectrum, and since stellar temperatures are very high, only a tiny fraction of a star's power is emitted at this frequency. This means the power you estimated from the 400 MHz signal alone may be far below the source's true output, which points to a very large object, possibly much, much greater than the Sun. So it is even less probable that the bursts are due to the activity of other creatures.
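The point about the blackbody tail can be made quantitative. The sketch below numerically integrates the Planck function in dimensionless form to estimate what fraction of a 5800 K (Sun-like) blackbody's power is emitted below 400 MHz; the temperature is an illustrative assumption, not a claim about the FRB source:

```python
import math

# Fraction of blackbody power emitted below frequency nu0, via the
# dimensionless Planck integral F = (15/pi^4) * int_0^{x0} x^3/(e^x - 1) dx,
# with x0 = h*nu0/(k*T). For a 5800 K star and nu0 = 400 MHz, x0 << 1,
# so the fraction is roughly x0^3/3 normalized by pi^4/15, i.e. tiny.
H = 6.62607e-34   # Planck constant, J s
K = 1.38065e-23   # Boltzmann constant, J/K

def power_fraction_below(nu0, T, steps=10000):
    x0 = H * nu0 / (K * T)
    dx = x0 / steps
    # midpoint rule; math.expm1 keeps x^3/(e^x - 1) accurate for x << 1
    integral = sum(
        ((i + 0.5) * dx) ** 3 / math.expm1((i + 0.5) * dx) * dx
        for i in range(steps)
    )
    return integral / (math.pi ** 4 / 15)

frac = power_fraction_below(400e6, 5800.0)
print(f"Fraction of a 5800 K blackbody's power below 400 MHz: {frac:.1e}")
```

The fraction comes out around 10^-18, so if the emission were thermal, the total luminosity implied would be enormously larger than the 400 MHz estimate, which is the answerer's point.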
Best wishes
• asked a question related to Wikis
Question
Extrapolating Einstein's spacetime concept, the Great Attractor is the most massive object humankind has discovered. It is so massive that it exerts a great gravitational force on all galaxies, and as we all know, supermassive objects create a warp in the spacetime structure. This is a discovery capable of explaining a couple of paradoxes in science (I will address one aspect here). If Einstein was right about different time zones within the universe, and if an alien were riding toward Earth from a distant galaxy at the speed of light, then our future would become his present NOW. In that case Einstein is also proposing that our future is deterministic and already written; only then does the spacetime concept hold true. If the future is not deterministic, the spacetime concept loses its ground. That means the events that took place in the first clock of the universal cycle are going to replicate exactly in the next cycle as well. However, this raises another question about the sustenance of life: if the Big Bang destroys life, how would it originate again? Secondly, scientists are of the opinion that we will eventually reach a cold death, after which a Big Bang will start again; that means spacetime is cyclic and we are looped into infinite circles. This theory is referred to as conformal cyclic cosmology (CCC), a cosmological model in the framework of general relativity advanced by the theoretical physicists Roger Penrose and Vahe Gurzadyan. In CCC, the universe iterates through infinite cycles, with the future timelike infinity of each previous iteration identified with the Big Bang singularity of the next. Penrose popularized this theory in 2010.
Source 0: https://en.wikipedia.org/wiki/Conformal_cyclic_cosmology
With the Big Bang come a few paradoxes:
If we started from a Big Bang, with the whole universe exploding and now receding or contracting, this would exert a linear force on Earth's orbit and deviate it from its course; but that has not happened.
Moreover, when the universe was singular, how could the most fundamental unit of matter be crushed (to attain singularity, we must break the electron's basic mass)?
Secondly, it does not tell you how matter got gravity (if matter formed later). If the whole universe was singular, then the electrons or quasiparticles (whichever is the most fundamental unit of matter) must have been crushed beyond their absolute mass. This raises a very basic question: how can we crush the most fundamental unit of matter?
Even if I assume the universe started from some other fundamental particles (finer than electrons or quasiparticles), how did they all become electrons, leaving no trace behind? And even on a further stretch of imagination, supposing there were super-quasiparticles or finer-than-electron particles that changed into the current electrons, how and when did matter acquire the property of gravity?
In this article, I would like to postulate that the universe operates through infinite cycles, not through a Big Bang but by orbiting around the Great Attractor.
Assertion 1. By the theory of relativity, if Earth has orbited the Sun for the last million years without any change in its orbit, then the overall movement of the universe must be supporting Earth's orbit. With the discovery of the Great Attractor, it is clearly found that the Great Attractor (many billions of times the size of Earth, or of the whole observable universe) imposes angular motion. This simply means that our universe/galaxy/Milky Way is orbiting around the Great Attractor. Since a gravitational force is at play here (between the Great Attractor and other celestial objects), referring to Source 2, they must be exerting a centripetal force; even scientists themselves claim that it exerts an angular deviation.
Source 1: https://en.wikipedia.org/wiki/Great_Attractor "The variations in their redshifts are known as peculiar velocities, and cover a range from about +700 km/s to −700 km/s, depending on the ANGULAR deviation from the direction to the Great Attractor." "The Great Attractor is an apparent gravitational anomaly in intergalactic space at the CENTER of the local Laniakea Supercluster."
Source 2: https://en.wikipedia.org/wiki/Centripetal_force "In Newtonian mechanics, gravity provides the CENTRIPETAL force responsible for astronomical orbits."
Assertion 2. As per a study done at the University of California, there is growing evidence that the whole universe follows a common pattern or structure. The golden ratio (1.61) and the magic figure (1/137) both suggest that the universe follows a particular design. If we start looking at the structure of the universe, it follows a certain pattern: the electron is cyclic, natural cycles are cyclic, Earth's orbit is cyclic, the Sun orbits a black hole, the Milky Way is round, galaxies are circular.
If all the subsets of a larger set are cyclic/circular, then we have to believe that the parent force (the angular force exerted by the Great Attractor on celestial bodies) and the motion of our universe are also cyclic. In layman's language, just as in a hurricane all particles follow circular trajectories because the overall shape of the hurricane is circular (while the eye remains calm), we can consider the whole universe a big hurricane whose eye, the Great Attractor, remains at calm. Moreover, just as atoms have electrons orbiting the nucleus (with the bulk of the mass concentrated at the nucleus), the Great Attractor holds the bulk of the mass while the galaxies and other celestial bodies hold only approximately 15% of it.
Assertion 3. Recent studies (in 1998) showed that the universe's expansion is accelerating, and no theory could justify this. As per this theory, since the universe is orbiting around a prime focus, the Great Attractor, it is now entering or exiting a perihelion zone. As we all know, when Earth enters perihelion there is a change in rotational speed. That might be the prime reason natural calamities are increasing: the change alters the gravity of the Earth and hence the kinetic and potential energy of nature, and while returning to an equilibrium state, nature creates earthquakes and tsunamis. No existing theory in the modern age can answer all such questions: why natural calamities have increased, why the universe is accelerating, why Earth's gravity changes, and so on.
Source 3: https://www.forbes.com/sites/trevornace/2017/11/20/earths-rotation-is-mysteriously-slowing-down-experts-predict-uptick-in-2018-earthquakes/#470b50226f24
Assertion 4. In the 1920s, Milankovitch hypothesized that variations in the eccentricity, axial tilt, and precession of the Earth's orbit resulted in cyclical variation in the solar radiation reaching the Earth.
Milankovitch and others hypothesized that the variations in eccentricity, axial tilt, and precession of Earth's orbit are cyclical, with periodicities of thousands of years. This theory can only be valid if we believe that time is cyclic; otherwise it loses its ground.
Source 4: https://en.wikipedia.org/wiki/Milankovitch_cycles
Assertion 5. According to law, the total mass/energy of the universe is constant. That means only two things exist in the universe: matter (which has mass) and energy. Since matter is also a form of energy and, by the law of conservation of energy, energy can neither be created nor destroyed, the universe has finite energy. For any finite object, we can treat the universe as a closed environment, and in a closed environment all motions become periodic. In layman's language: take a billiard game, reduce the friction to zero and cover all the pockets; then no matter what shot we play, the motion is always going to be periodic, and all events will have their periodicity.
Assertion 6. A recent study shows that Earth's magnetic field also changed around 200 BC, and that it flips its north and south poles, which means it too is cyclic.
Source 6: https://www.nytimes.com/2017/02/14/science/magnetic-field-earth-jars.html
This means that time is cyclic, with the universe orbiting around the Great Attractor, not with a Big Bang. Einstein's spacetime concept was right, with only one tweak: like time, space also only moves forward (yet cyclically).
New answers to the paradoxes: even in the case of the Big Bang, we never talked about the origin of the cosmos (we always believed the hydrogen gas was already there, among other assumptions, whereas origin implies an irreversible process). In this theory, we have to believe that this peculiar skeleton of the universe cannot be created; it was always there. This structure is independent of time.
Yes. As per my thesis, it is the same: this universe is cyclic and always will be. If any system is about to break the cyclic order of the universe, the immune system of the universe activates and destroys the disturbance from an upper energy cycle. From the atomic model to gigantic structures and planetary systems, all undergo the same cyclic behaviour. A wide range of void space exists between the planetary systems. The total energy of a system, at a given time and at a given location in any planetary system, is always constant. When imbalance occurs, massive destruction happens to recover the balanced state of that system. So spacetime is cyclic.
• asked a question related to Wikis
Question
Rumination 3# - The lost goal of education?
Professor Stephen Dobson
Dean of the Faculty of Education
Victoria University of Wellington, New Zealand
Since arriving in the Faculty of Education at Victoria University in mid-2018 I have had many remarkable experiences. Not all, however, were the source of a strong catharsis, an 'a-ha' experience, as was the following: I was lucky enough to attend the 50th anniversary of the cohort of pre-service teachers who attended Wellington Teachers College in 1968. In the course of a pleasurable evening, a commonly remembered experience was the staff philosophy at the College, "that they should try to develop the person and then the teacher would emerge from that."[1] This happened through cultural activities such as art, reading, music, talking and dialogue during their training. Sam Hunt, the poet and New Zealand treasure, was a student to whom they made constant and well-deserved reference. Sam never graduated; apparently the Dean told him to concentrate on his poetry skills.
We have come a long way from such a perspective, or so we are apt to think – our teachers are arguably more professional, the national curriculum is detailed and teachers meet clearly defined standards on graduation. Yet, when I talk with my Islamic educational colleagues I learn that the point of education in their culture for over a millennium has been to develop the character of the child. We must remember one of the oldest universities[2] in the world is Islamic; it is over 1200 years old and found in Tunisia [3] (Ez-Zitouna University جامعة الزيتونة‎); not in Europe in the Middle Ages as we commonly think. In Chinese inspired education a version of this is the importance of developing a good moral character, inspired by the views of Confucius and a deep respect for others. We find a strong other-directed morality rather than an ego-oriented morality in China. In Scandinavian educational culture this is called bildung or dannelse and means the formation of a shared centred-ness and a shared cultural identity. In my limited, but growing understanding of Māori and Pasifika culture I have noted many of these same points.
Current global-speak around the world, and in many Anglo-Saxon countries, professes a different view on these matters, if we exclude for a moment the point on curriculum knowledge. The desire is to grow students who are resilient and possess ample funds of 'grit' to master set-backs, who are cognitively aware of their own thought processes, and who possess the so-called 21st-century skills of teamwork and sociability.
Sometimes I wonder if the pendulum of education has swung too far and we have lost ourselves in the science of education, in particular that international pastime of measuring literacy and numeracy scores. We want to perform well and to manage ourselves; to be the cleverest in the local, national and international class. This comes very much at the exclusion of the other side of the pendulum, where character, morality, bildung and other-directedness rest. A phrase I often quote from a Swedish child activist of the early 1900s rings in my ears: 'the formation of one's identity is based upon what remains after we have forgotten everything we have learnt' (my translation). As with life, we need a pendulum that swings both ways. Or do we?
[1] Georgia Morgan (2007). A Short History of the Victoria University College of Education Art Collection. Unpublished manuscript.
[2] Established in the year 859, the University of al-Qarawiyyin in Fez, Morocco, was the first degree-granting educational institute in the world (as recognised by UNESCO and Guinness World Records).
As I see it, the goal of education is to bring about students who come to be creators and innovators, not conformist people.
The goal of teacher education should be to form teachers who are mentors and organizers of learning experiences and situations, so that their students come to understand, reinvent and reconstruct everything they learn, rather than simple transmitters of ready-made, established truths imposed on students from outside.
Best regards,
Orlando
• asked a question related to Wikis
Question
Physically, flattening is a measure of the compression of a circle or sphere along a diameter to form an ellipse or an ellipsoid of revolution, respectively (https://en.m.wikipedia.org/wiki/Flattening).
Outside the field of geometry, when dealing with simple quantities, does it convey anything about normalization?
For example, the third flattening (a − b)/(a + b) confines the value to the interval (−1, 1).
what is the advantage we get from this combination of quantities?
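The boundedness property is easy to check numerically. The sketch below compares the first flattening f = (a − b)/a with the third flattening n = (a − b)/(a + b), using the WGS84 semi-axes as an illustrative example; for any positive a and b, n stays strictly inside (−1, 1):

```python
# Comparing flattening variants for an ellipse with semi-axes a, b.
# The third flattening n = (a - b)/(a + b) is bounded in (-1, 1) for any
# positive a, b, which is what makes it useful as a normalized quantity.
def first_flattening(a, b):
    return (a - b) / a

def third_flattening(a, b):
    return (a - b) / (a + b)

# WGS84 ellipsoid semi-axes (metres), a standard illustrative example
a, b = 6378137.0, 6356752.3142
f = first_flattening(a, b)
n = third_flattening(a, b)
print(f"f = {f:.9f}, n = {n:.9f}")

# Even for extreme axis ratios, n stays inside (-1, 1)
for a_, b_ in [(1.0, 1e-9), (1e-9, 1.0), (1.0, 1.0)]:
    assert -1.0 < third_flattening(a_, b_) < 1.0
```

Note that n is also antisymmetric: swapping a and b flips its sign, so it encodes which axis is longer as well as how unequal they are.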
It has been observed that rank methods (under the name of flattenings) are also in wide use in algebraic geometry for proving tensor rank and symmetric tensor rank lower bounds by
Klim Efremenko, Ankit Garg, Rafael Oliveira, and Avi Wigderson,
Barriers for Rank Methods in Arithmetic Complexity
More to the point, it has also been observed by K. Efremenko et al. (p. 5):
The possibly familiar names including partial derivatives, shifted partial derivatives, evaluation dimension, coefficient dimension which are used e.g. in these lower bounds for monotone, non-commutative, homogeneous, multilinear, bounded-depth and other models [Nis91, Smo93, Raz, NW96, Kay12, GKKS14, KLSS14, FSS14, FLMS15, KS14, KS15] are all rank methods....
In other words, a main advantage of flattening two surface shapes is that it facilitates evaluating the dimensionality of the flattened shapes. Another important advantage of flattening a pair of shapes (not mentioned by K. Efremenko et al.) is that flattening leads to insights into surface structures, as well as into more general problems concerning surfaces.
• asked a question related to Wikis
Question