Question
Asked 18th Apr, 2017

Comparing Hubble calculated distances...with Pan Theory?

I wonder about the source of a formula found on the internet:
d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0
H0 - Hubble's constant, c - speed of light. It appears under the title "Comparing Hubble calculated distances and brightnesses with Pan Theory calculations of distances and brightnesses."
I have checked the formula against 100 galaxies with 0 < z <= 1. The correlation was ca. 99%. Does anybody know where the formula stems from? JM


All Answers (36)

21st Apr, 2017
Hans van Leunen
Eindhoven University of Technology
Hubble's theory depends on the behavior of photons. The notion of a photon is not well comprehended. It is NOT A WAVE and its carrier is NOT THE ELECTROMAGNETIC FIELD. Waves cannot travel billions of light years through empty space and stay detectable. The EM field does not cover such huge ranges.
21st Apr, 2017
Joseph Emmanuel Mullat
Independent Researcher
Dear Sir,
I am a novice in these matters and I am still confused by this formula, as I found it in a different context, i.e. as tanh(ln(1+z)); that was the question. Hubble's law (an experimental result) means for me a law of how the EM wave behaves when traveling huge distances. Hubble interpreted the EM (light) redshift as a result of the Doppler effect. Therefore this formula, when following in Hubble's steps, can be used only for z <= 1. However, nothing prevents using it for higher z values, even though nobody has yet discovered galaxies with z > 12. On the other hand, I am convinced that the weak gravitational redshift is a more reliable interpretation of the redshift than the Doppler effect. As a consequence we must drop the dynamic nature of the universe. So, what is then left of the BB? It might be that the BB is not a BB, but a phase transition of matter of unknown origin, like dark energy. Water can remain liquid below zero, so-called supercooled water. Why not suppose that dark energy is matter below 0 K? Are there some contradictions with the laws of physics?
21st Apr, 2017
Hans van Leunen
Eindhoven University of Technology
@Joseph,
Physics theories are geared toward pragmatic applications and not so much toward explanations of origins. For that reason, most physical theories are descriptions of observations rather than explanations. If a description appears to fit, then it is accepted as a valid theory. In this way, all kinds of "theories" exist that claim to describe the behavior of the universe. Our problem is that only a small part of reality is accessible to observation, while all observations are affected by changes in the format of the perceived information. Google for the Hilbert Book Model.
25th Apr, 2017
Jonathan Doolin
Carl Sandburg College
I've never heard of Pan theory, but that formula is
Distance = velocity * time,
where T = 1/H0 ~ 13.7 billion years.
The expression [((z+1)^2 - 1) / ((z+1)^2 + 1)] = v/c
is the inverse of the relativistic redshift relation 1 + z = sqrt((1 + v/c)/(1 - v/c)).
26th Apr, 2017
Joseph Emmanuel Mullat
Independent Researcher
Hi Jonathan,
Many thanks. I also received an answer that matches yours. Yes, you are right. The equation to convert the redshift (z) to distance is based on recessional velocity (v). Indeed, the relativistically correct equation for velocity is:
v = c [(1+z)^2 - 1]/[(1+z)^2 + 1] = c tanh(ln(1+z))
The Hubble distance equals d = v/H0. Thus, dividing the expression by H0 = 67.15 km/s per Mpc, the distance for a redshift of, e.g., 0.138 is: [(1+0.138)^2 - 1]/[(1+0.138)^2 + 1] c/H0 = 573.9 Mpc.
However, I am still in doubt. The function f(z) = [(1+z)^2 - 1]/[(1+z)^2 + 1], or f(z) = tanh(ln(1+z)) in more elegant form, is convex in the sense that f''(z) < 0:
f''(z) = ((tanh(ln(1+z))^2 - 1)/(1+z)^2)(2 tanh(ln(1+z)) + 1) < 0, z in [0.03, 10.86].
f''(0.138) = -0.954669
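These numbers are easy to reproduce; here is a minimal sketch in Python, assuming only the constants quoted above (H0 = 67.15 km/s/Mpc, z = 0.138):

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 67.15            # Hubble constant, km/s per Mpc

def beta(z):
    # v/c = tanh(ln(1+z)) = ((1+z)^2 - 1) / ((1+z)^2 + 1)
    return math.tanh(math.log(1.0 + z))

def hubble_distance_mpc(z):
    # d = v / H0, in Mpc
    return beta(z) * C_KM_S / H0

def f_second(z):
    # f''(z) = ((tanh(ln(1+z))^2 - 1) / (1+z)^2) * (2 tanh(ln(1+z)) + 1)
    t = beta(z)
    return ((t * t - 1.0) / (1.0 + z) ** 2) * (2.0 * t + 1.0)

print(hubble_distance_mpc(0.138))  # ~574.0 Mpc, matching the 573.9 Mpc above
print(f_second(0.138))             # ~-0.9547, matching -0.954669 above
```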
I recently picked up data from NED-D in Table 3, Astronomical Journal, 153:37 (20pp). Observing the data at a glance, I noticed that the distribution of distances in the universe is approximately concave.
In the authors' words the data highlights:
“Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics. Researchers use them to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion.”
So, I am totally confused, because a convex function cannot underpin a concave function.
Best JM
Ps. The Pan Theory – Alternative to the Big Bang Theory may be found at
26th Apr, 2017
Jonathan Doolin
Carl Sandburg College
"Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics"
Such independent indicators would be magnitudes of standard candles.  I have recently heard the word "luptitudes" but I haven't yet learned what that means exactly--something to do with accounting for signal-to-noise ratios in very faint and distant galaxies.
"So, I am totally confused because convex function cannot underpin concave function."
I'm not sure what this means.  
But, I can offer you another variable that might help.
Let's name this other variable "rapidity" denoted as curlyphi or varphi.  This represents the hyperbolic rotation angle between our worldline and the distant galaxy's worldline.
varphi= ln(1+z)
beta = v/c = tanh(varphi) 
gamma = 1/sqrt(1-beta^2) = cosh(varphi)
beta*gamma = beta/sqrt(1-beta^2)=sinh(varphi)
My grasp of what you mean by "underpin" might be a bit hazy, but if my usage of the word matches yours, you will understand when I say  that z is not what underpins the space, nor does beta, gamma, or phi.  But what underpins the space is a Minkowski coordinate system.  
In this framework, distant objects are not required by principle to move according to a Hubble flow.  They just happen to move according to a Hubble flow because of the event or events which set them into motion.
These events would tend to create an environment of equipartition in rapidity space, which in turn produces a local appearance of equipartition in velocity space (small-angle approximation: sinh(varphi) ~ tanh(varphi) ~ varphi), which in turn produces an apparent homogeneity over regions where the small-angle approximation holds.
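These relations are easy to check numerically; a small sketch (using z = 0.138 from earlier in the thread as an example input):

```python
import math

def rapidity(z):
    # Hyperbolic rotation angle between worldlines: varphi = ln(1+z)
    return math.log(1.0 + z)

phi = rapidity(0.138)
beta = math.tanh(phi)                # v/c
gamma = math.cosh(phi)               # Lorentz factor
print(beta * gamma, math.sinh(phi))  # equal: tanh(phi)*cosh(phi) = sinh(phi)

# Small-angle regime: sinh(phi) ~ tanh(phi) ~ phi for small phi.
phi_small = 0.01
print(math.sinh(phi_small), math.tanh(phi_small), phi_small)
```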
27th Apr, 2017
Joseph Emmanuel Mullat
Independent Researcher
Hi Jonathan,
 I used the term “underpin” in its ordinary meaning like “reinforce, back, sustain, corroborate, confirm,” etc.
I am not familiar with all these funny names: varphi, beta, gamma, etc. For me it is just hyperbolic trigonometry, where a pail is called a pail. Hyperbolic geometry, to my knowledge, is connected to geometries with negative curvature. Minkowski geometry represents a positive-curvature geometry. Minkowski geometry, let us agree on the word, "underpins" the NED-D data much better than negative-curvature geometries. Hyperbolic geometry better underpins the Doppler effect.
Two functions, as you already noticed, represent a so-called diffeomorphism, where the direct mapping and its inverse are both differentiable functions:
1+z = sqrt[(1 + v/c)/(1 - v/c)] and v = c tanh[ln(1+z)]
represent such an example of a diffeomorphism. Unfortunately this diffeomorphism does not underpin the NED-D data. On the contrary, I found a diffeomorphism which does:
g(r) = 4 pi mu { arctan(r) + r [(r^2 - 1)/(r^2 + 1)] } / r^lambda
  • mu - an average density of matter in the Universe,
  • lambda - a calibration parameter, as in the MOND model,
  • mu = 27.567341, lambda = 0.8375102,
  • r - the distance to the extragalactic entity.
I call the g-function a weak gravitational potential function, whose inverse, in contrast to tanh[ln(1+z)], is not convex but concave. One can check that the inverse r-values of the g-function, taken in ascending order on the interval g in [0.03, 10.86], will underpin the NED-D data with a correlation coefficient of at least 0.988238.
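A sketch of this g-function and a numerical inverse (the grouping of terms is my reading of the formula as typed, so treat it as an assumption; with the quoted mu and lambda, g is increasing for r > 0, so bisection recovers r from g):

```python
import math

MU = 27.567341    # average density of matter in the Universe (as quoted)
LAM = 0.8375102   # calibration parameter (as quoted)

def g(r):
    # g(r) = 4*pi*mu * ( arctan(r) + r*(r^2 - 1)/(r^2 + 1) ) / r^lambda
    braces = math.atan(r) + r * (r * r - 1.0) / (r * r + 1.0)
    return 4.0 * math.pi * MU * braces / r ** LAM

def g_inverse(target, lo=1e-9, hi=100.0, tol=1e-12):
    # Bisection, valid because g is monotonically increasing on r > 0.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Recover r for a few g values on the quoted interval [0.03, 10.86].
for val in (0.03, 1.0, 10.86):
    print(val, g_inverse(val))
```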
27th Apr, 2017
Jonathan Doolin
Carl Sandburg College
beta, varphi, and gamma are just words used to represent LaTeX renderings of Greek and Roman symbols.
I could have written:
Define: rapidity = ln(1 + redshift)
Define: beta = velocity/speed of light = tanh(rapidity)
Define: Lorentz factor: gamma = 1/sqrt(1 - beta^2) = cosh(rapidity)
Derive: beta*gamma = tanh(rapidity)*cosh(rapidity) = sinh(rapidity)
As for your use of underpin:
JEM: I used the term “underpin” in its ordinary meaning like “reinforce, back, sustain, corroborate, confirm,” etc.
I am wondering, now, if you might use the word "fit", as in, "this function fits the data", or "perform an ordinary least-squares regression to fit the data"
1+z = sqrt[(1 + v/c)/(1 - v/c)] and v = c tanh[ln(1+z)]
represent such en example of a diffeomorphism. Unfortunately this diffeomorphism does not underpin the NED-D data.
Hmmm... Sorry, I'm not familiar with the NED-D data tables, but I'm looking at a table of something called NED-D data now, at
This particular table doesn't even show a value for z.  It just has a column for V helio(z).  Nor does it show a value for weak gravitational potential function, that I could see, unless that's the GLON or the GLAT.  
So, I am totally confused because convex function cannot underpin concave function.
Is your concern here, really related to whether various functions are concave/convex, or does it have more to do with whether they are one-to-one, and onto?   So long as two functions are one-to-one, and onto over their respective domains and ranges, you can do functions of functions of functions, and not lose any information.
28th Apr, 2017
Joseph Emmanuel Mullat
Independent Researcher
To conclude the discussion:
It was impossible, given my level and knowledge of data analysis, to fit the data from NED-D, The Astronomical Journal, 153:37 (20pp), 2017 January, Table 3, column Mean (Mpc) (compiled, in the authors' words, from 94,958 extragalactic entities collected from 34,389 galaxies, reported by 13,745 authors), with Hubble's distances tanh[ln(1+z)] c/H0, c - speed of light, H0 - Hubble constant.
28th Apr, 2017
Jonathan Doolin
Carl Sandburg College
Which was greater? The values in column 3, or the Hubble distances?
In most situations, the calculation of the distance comes from estimations based on the object's magnitude, rather than its redshift.
28th Apr, 2017
Joseph Emmanuel Mullat
Independent Researcher
Hi,
As said, the observations of the Astronomical Journal, 153:37 (20pp), 2017 January, Table 3, column Mean (Mpc), convinced me at first glance that Hubble's distances at lower z perhaps give overly high values while, in contrast, at higher z Hubble's law underestimates the longer distances of extragalactic objects. I need to think through this preliminary analysis result in detail. In the attached illustration, more correct estimates might be useful.
JM
Ps. Table 3 highlights 94,958 extragalactic entities collected from 34,389 galaxies and reported by 13,745 researchers.
10th May, 2018
Forrest Noble
Pantheory Research Org.
Well, the last comment was a year ago. I just found this query in the last week or so, so I will answer it. I was the one who derived the reference equation used on the referenced website, which is my organization's website. There is a JavaScript program, referenced above, that calculates galactic distances contrary to the Hubble formula. The formula is (10) in the above paper, and the brightness formula variation, (11), differs from the inverse square law of light, since larger matter in the past (relatively speaking) would appear brighter for its distance.
This model proposes that dark energy and non-baryonic dark matter do not exist, as well as proposing a "simple" beginning to the universe without a Big Bang or Inflation. It is a mechanical "Theory of Everything", proposing to be able to unify all of physics under a single all-encompassing theory. Galactic redshifts are explained by a diminution-of-matter process rather than by the expansion of space. Space would appear to be expanding, but instead matter would be very slowly getting smaller: a type of scale-changing theory. New matter would be steadily created from the matter decrement, maintaining a constant density of matter and a steady-state condition conserving matter and energy. The universe accordingly would be far older, but not infinite in size or age. It is also an aether theory, a single fundamental particle theory, and a single matter-innate physical-force theory. A related scientific peer-reviewed paper was published in 2014.
The paper in which I explained how I derived this formula is shown in this link:
The derivation was made in 2013. It involved several hundred type 1a supernovae, much more data than was available when dark energy was proclaimed. The model used to derive the formula is a diminution-of-matter model, contrary to the expansion of space. The model is called the Pan Theory, which can be found on any search engine. The basis of the model does not take too long to explain, but a complete description of it is a 370-page book, available on the internet without cost at pantheory.org
10th May, 2018
Hans van Leunen
Eindhoven University of Technology
Dear Forrest Noble,
You might be pleased by the fact that another theory is also based on a field excitation that forms the base of all matter. This excitation is a spherical pulse response, and it is a solution of the wave equation. It integrates into the Green's function of the field. I apply quaternionic field theory, and there the spherical pulse response injects volume into the affected field. That volume locally deforms the field, then spreads over the field and expands it. See: https://www.researchgate.net/project/The-Hilbert-Book-Model-Project/update/5af2df924cde260d15dd9529
10th May, 2018
Forrest Noble
Pantheory Research Org.
Hans,
I dislike the something-from-nothing ideas of Hawking, but I also propose the generation of new matter from a background field. The background field I prefer is a particulate version of the Zero Point Field. Since this is off topic I will e-mail you, and we could discuss our ideas further if you are interested.
Forrest
10th May, 2018
Hans van Leunen
Eindhoven University of Technology
The zero point field is not part of the Hilbert Book Model, but stochastic processes that control the universe are. You might send me a private message at LinkedIn.
12th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest Noble,
You have dispelled my doubts about the formula expressing distance by means of the Hubble constant through a redshift z in an elegant form. I found an article where the same formula was proposed under the name of another author. Thanks for the answer. Joseph Mullat
12th May, 2018
Forrest Noble
Pantheory Research Org.
Thank you, Joseph.
The formula you noted above is one form of the Hubble formula, but the page you listed above at http://www.pantheory.org/HF.htm is the JavaScript program where distances are calculated by my own derived equation. Distances are calculated from redshift input data based upon a different cosmology model called the Pan Theory, which is my own model.
The Pan Theory distance equation that you referenced by the bolded link directly above is:
r1 = 21.2946 log10[0.5((z+1)^0.5 - 1) + 1] (z+1)^0.5 P0
Where r1 is the calculated distance, z is the observed redshift,
and P0 is a constant equal to 1,958.0
Below is the version of the Hubble formula that you also noted above:
d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0,
Where d is the calculated distance, z is the observed redshift, c is the speed of light, and H0 is the designated Hubble constant. There are presently two contrary and competing Hubble constants: one is about 68 km/s/Mpc and the other about 72 km/s/Mpc, using two different methods of determination. This is presently a big controversy in mainstream cosmology.
On the other hand, it is believed that the Hubble constant, the expansion rate of the universe, is not constant at all. Present theory/hypotheses propose that the initial expansion of the universe was instead caused by a hypothetical Inflation process, and that the expansion is presently controlled by a hypothetical Dark Energy process.
Comparing the Hubble formula (Big Bang cosmology) with the Pan Theory formula:
Pan Theory cosmology (reference any search engine) is based upon a different explanation for observed galactic redshifts.
According to this model, instead of space expanding, matter (relatively speaking) would be getting smaller, and the rate at which time passes would be getting quicker. As such, it would appear to us that space was expanding. Such models have been called scale-changing theories. Of course the reasons for this change are part of the theory.
Where redshifts z < 1, there is no more than an 8% difference between the distances calculated by the two formulas above. Where redshifts z >= 1, the Pan Theory calculated distances progressively increase to multiples of the distances calculated by the Hubble formula, with no ultimate distance limit other than the limits of our equipment. Of course such great distances are contrary to Big Bang cosmology. The reason these great distances are not recognized by mainstream astronomers would accordingly be that larger matter in the past would have produced more light per galaxy, so the greater distances would be compensated by greater luminosity. This compensation still would not match the inverse square law of light, but would match the determined observation angle of a galaxy or cluster. Instead of the universe expanding, the observable universe would be in a steady-state condition, with new matter eventually being created from a physical background field as a result of the decrement of the diminution-of-matter process.
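In code, the comparison looks like this (a sketch in Python using the constants quoted in this thread; H0 = 68 km/s/Mpc is one of the two competing values mentioned above, and the low-z agreement shifts with the adopted H0):

```python
import math

C_KM_S = 299792.458
H0 = 68.0      # km/s/Mpc (assumed; the ratio below shifts with this choice)
P0 = 1958.0

def hubble_distance_mpc(z):
    # d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] * c / H0 = tanh(ln(1+z)) * c / H0
    return math.tanh(math.log(1.0 + z)) * C_KM_S / H0

def pan_distance_mpc(z):
    # r1 = 21.2946 * log10(0.5*((z+1)^0.5 - 1) + 1) * (z+1)^0.5 * P0
    s = math.sqrt(z + 1.0)
    return 21.2946 * math.log10(0.5 * (s - 1.0) + 1.0) * s * P0

for z in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    d_h = hubble_distance_mpc(z)
    d_p = pan_distance_mpc(z)
    print(f"z={z:5.1f}  Hubble {d_h:9.1f} Mpc  Pan {d_p:9.1f} Mpc  ratio {d_p / d_h:5.2f}")
```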
Here is the link to the paper that explains how the Pan Theory formula above was derived.
Here is a link to our paper explaining the major problems with the Big Bang model, and the reasoning to favor the Pan Theory model.
best regards, Forrest Noble
12th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
I really appreciate your prompt response.
For your information, I am not an astronomer; in my life I have never looked through a telescope. I wrote several articles that I published privately. All these articles are summed up under one roof, which I call "Monotonous Phenomena of Issues". Only some of them are published, but not in prestigious publishing houses. For example, one article, on which I worked for about 20 years, turned out to be the most complicated non-linear economic model, with 8 parameters.
My interest in cosmology was awakened in 1973 in connection with the use of the same "Monotonous Phenomena of Issues". These were just some of my funny ideas, which at that time I illustrated by the same monotonous phenomena. Now I am getting close to what caused my interest in cosmology.
It was the Planck Mission from 2013, whose measurement results I entered, just for fun, into my cosmological model. To my surprise, without any problems, my model reproduces the results with almost 100% accuracy. Working over the last 2-3 years, I published this article, unfortunately in the very contradictory Journal of Cosmology. However, the mathematical quality of this article should be at the standard academic level. You can download my book with my illustrations and comments:
http://www.datalaundering.com/download/Experiment.pdf , total 572 pages. My cosmological efforts begin on page 455.
I am seeking contradictions in my model. Therefore, the PAN theory suits me well. I downloaded your articles and looked through the list of problems. I can immediately say that the "horizon problem" can be explained in my model. As for the "flatness problem", I do not use GR at all. The density problem is transferred to one parameter, which takes into account the relativistic density, which allows all kinds of motions of matter. In my model, matter is supposedly standing still. The galaxy-formation problem is relevant only if a timeline is used. I do not use the time parameter and, therefore, this problem does not exist.
Dear Forrest, I already told you that I am not an astronomer. I do not understand, e.g., how an observer, unknown to me, came to the conclusion on page 561 in my book. However, if page 561 is true, I can explain his observations in my model. "The Anachronistic Galaxy Problem" is confusing. I could at least somehow explain this problem, but for that I would have to become an astronomer, which is impossible. My model is much simpler than the LCDM model with its 24 parameters.
By the way, I recently discovered an article claiming that the BB theory violates causality. As I understand it, the PAN theory suggests that matter is shrinking but moving with acceleration, and that the reduction due to acceleration manifests as an expansion with acceleration. Yes, if matter accelerates in the Universe despite shrinking, then the relativistic energy of the Universe increases, even though the average relativistic energy can decrease, which is necessary in my theory. This is my problem, because of which I have not yet abandoned my attempts to explain my mathematical findings to astronomers. This can take many years.
Regards
Joseph
13th May, 2018
Forrest Noble
Pantheory Research Org.
Dear Joseph,
I am a big fan of simplicity in physics, which you said is one of your goals in your own proposed model(s). In today's physics there seems to be little effort spent on attempting to simplify theory. Almost all physicists believe that understanding the universe is very complicated. Although most equations are necessarily complicated, explanations of theory often lack logic, because the theories being explained are often partly or wholly wrong, IMO.
Because I often have contrary views and theory to mainstream models, my contrarian papers are often difficult to get published, maybe somewhat like the paper you ended up sending to the Journal of Cosmology. Our current paper might be called a "no dark matter" paper. It is contrary to the existence of dark matter. What we think is a better proposal has been presented with much data, calculations, and evidence to support our proposal. This paper was finished more than a year and half ago and is still in the process of our own editing since the paper has been turned down by some of the major publishers who have published at least one previous paper of ours. Some rejections were expected because contrarian papers may never be published in a mainstream journal. In this case I will keep changing the way the paper has been written to eventually get it published, hopefully in a higher profile journal. In another month or so I expect to submit this many-times-amended paper again to still another mainstream journal. We spent more than 2 years on our now published "no dark energy" paper.
Although I think the Big Bang model is an incorrect cosmological model, somebody's claim that the BB model violates the principle of causality probably has flaws in its logic. In my opinion the beginning of the BB model has been adequately explained by one of the mainstream versions of the model. Where time can be explained as meaningless without change, there could have been no cause for an original Big Bang; otherwise, if there were, what would be the cause of that cause, etc.? So one ends up with an infinite series and an infinite universe concerning time. There has to have been an original cause for a finite universe model. If the universe were infinite in time there also could not have been any cause for it, since time would be infinite in the past. Where the universe is defined as everything in existence, whether finite or infinite, it could not logically have had an external beginning cause.
I will look at your book and comments posted above, and hopefully come up with good suggestions :)
best regards, Forrest
13th May, 2018
Forrest Noble
Pantheory Research Org.
As an afterthought I thought I would give an additional explanation for the Anachronistic Galaxy Problem with the BB model.
As I said in our related paper, this may be the biggest and most obvious problem with the Big Bang model. This problem is well known to astronomers but seldom discussed except by those making such ongoing observations.
A great many galaxies at the furthest distances appear to be red, fully mature, large galaxies, some similar to the Milky Way, and some even older-looking. At these great distances only small, blue, young-appearing galaxies should exist according to the BB model. This is probably the biggest problem with the Big Bang model, and the problem that will eventually result in its demise, IMO, after the James Webb has been up for a while and can see no limit to the extent of galaxies and galaxy clusters. Below are links to a few of the many such contrary observations at the presently furthest observable distances.
14th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
In connection with the problem of anachronistic galaxies, I will try to describe the situation very simply, without going into details. In the LCDM theory there are claims to a very precise and voluminous description of the formation of galaxies and all this soup of matter (rather, quantum particle effects) accompanying the formation process. First some formal scheme is used, and then the scheme turns into a verbal description. It seems to me that we need to do the opposite: first we need to understand, in simple words, what an explosion is. Indeed, an explosion performed in time is an instantaneous transition of denser material into sparser material.
Let us describe this process by one parameter, the relativistic energy density mu. For me it is like the description of a proton in the Large Hadron Collider moving at nearly light speed, which is equivalent to about 400 tons of mass moving at a speed of 150 km per hour. It is clear that the energy density enclosed in the volume occupied by the proton will be extremely high. In my model, for purely speculative purposes, I took the energy density to be approximately 10^15 in the inflation phase of the BB. This density corresponds to a globe 1 cm in diameter. Now it would be interesting to calculate what the density of the universe was at the time the anachronistic galaxies formed, that is, when somewhere around 400,000,000 light years had passed after the initial inflation. I used the NED database for this purpose, from Stark's article. The farthest galaxy in this sample is at a distance of 7,700 Mpc. The linear transformation that maps the distance to the galaxy onto the relativistic energy density gives me the result mu = 2.79 - indeed, there was a huge leap in density. If we now move to the present state of density, then, in fact, mu = 0.12457. Actually, during the 16.7 billion years that passed there was, in fact, only a small leap in density. According to my calculations, there exists, by analogy with the LCDM model, a theoretical critical value mu = kappa = 0.087267 at which the phase process of energy transition to mass ends. In other words, this marks the death of the universe. Hence it is clear that our universe has almost finished its evolution.
What I said in words is a fairy tale about the dynamics of our universe. A fairy tale is a fairy tale, but maybe there is something instructive here.
Yours sincerely
Joseph
14th May, 2018
Forrest Noble
Pantheory Research Org.
Yes, it seems likely that your analysis may be a harbinger of something important because of its apparent accuracy. I will continue to investigate your material to give my humble input into what kind of nuggets your related material might contain. Since my related views will probably not be in accord with mainstream theory, my input will just be personal opinion, probably not palatable to mainstream astronomers and theorists at the present time.
best regards, Forrest
15th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
My duty is to help you read my stuff. Don't hesitate to disturb me.
I hope my math does not cause you any trouble. I intend to put your distance formula
r1 = 21.2946 log10[0.5((z+1)^0.5 - 1) + 1] (z+1)^0.5 P0
to the test, to fit it with my theory. I must find an interval of z's mapping the z's into my average energy density interval. I already tried to do that with
d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0, but I must refresh my findings.
By the way, d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0 can be rewritten in the more elegant form
d = tanh(ln(1+z)) c / H0. Best JM
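Ps. The equivalence is immediate: with x = ln(1+z) we have e^x = 1+z, so
tanh(ln(1+z)) = (e^x - e^(-x)) / (e^x + e^(-x)) = [(1+z) - 1/(1+z)] / [(1+z) + 1/(1+z)] = [((z+1)^2 - 1)] / [((z+1)^2 + 1)],
after multiplying the numerator and denominator by (1+z).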
23rd May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
I do not quite understand your distance formula.
Why do you have the constant 21.2946 and the constant P0 - why not merge these two constants into one? Next, is the log10 just a constant, or does it represent a log10 function, e.g. log10[0.5((z+1)^0.5 - 1) + 1]? Please make it crystal clear to me how to put z into your formula. I will really try to match your formula, in my model, with the calculus of distances via the average energy density parameter.
Best JM
24th May, 2018
Forrest Noble
Pantheory Research Org.
Joseph,
Yes, for practical applications you could merge the two constants. But rather than calculating anything yourself you could use the programmed calculator at the link that you first posted: http://www.pantheory.org/HF.htm
The reason the two constants are separate is that the formula was first written in terms of the natural log, since the foundation Pan Theory has its basis in the natural log; in that formulation the first constant is different. The formula was converted into log base 10 for ease of calculation. The form of the equation r1 = ..., above, is the way the program in the above link was written. Also, the first constant was calculated based upon theory and the observational accuracy of spectral lines, while the second was determined from the combined type 1a supernova data available in 2013, which could vary by maybe as much as 1 significant figure with hundreds more observations.
When using the programmed equation in the link, if you have a determined redshift z of .52, for instance, you would put the value of z+1 into the z input field; therefore you would put in 1.52, entering the .52 following the 1. This is the relative spectral length increase over its normal beginning length, indicated as "1." From this input the calculations of my formula are determined with no other input. To compare these results with the Hubble formula, the additional input of a Hubble constant is needed. The average constant expansion rate that seemed to be used for the all-sources supernova data was 68 km/s/Mpc, although I could find none specified, since the Hubble constant can vary up to 10% depending upon the method of determination.
I haven't had time as yet but I still hope to be able to make possibly helpful suggestions concerning your related writings.
best regards, Forrest
24th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
I am very sorry to disturb you.
I used z = 0.52 and got 2459.374986 using your PAN distance formula.
Is it correct? Is your formula in Mpc distances?
Best JM
25th May, 2018
Forrest Noble
Pantheory Research Org.
Forgot to answer your other question.
"Is the log10 just a constant or it represents by itself a log10 function, e.g. log10[of .5((z +1).5 - 1) +1 ] "
One could consider it as the log of the function [ .5((z +1).5 - 1) +1 ], and (z +1).5 as a separate function, or they could be considered a single combined function of z when multiplied times the constants.
26th May, 2018
Forrest Noble
Pantheory Research Org.
Yes, that is correct. Although the distance is greater than the Hubble-calculated distance, the apparent distance (observation angle) is very close to the same. I had a much longer answer written, which was somehow lost. I'll elaborate more on this when I get back from traveling this weekend, beginning 5/25/18.
26th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
Thanks for your efforts. Check that your formula can be rewritten in a more elegant form:
Q ln{[(1 + sqrt(1+z))/2]^sqrt(1+z)}, where Q = 18110.607641.
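The two forms are algebraically equivalent, since 0.5((z+1)^0.5 - 1) + 1 = (1 + sqrt(1+z))/2 and log10(x) = ln(x)/ln(10); merging the constants gives Q = 21.2946 * P0 / ln(10) ~ 18108, within about 0.02% of the value above. A quick numerical check (a sketch with the constants from this thread):

```python
import math

P0 = 1958.0
Q_DERIVED = 21.2946 * P0 / math.log(10.0)   # ~18107.8
Q_QUOTED = 18110.607641

def pan_log10_form(z):
    # r1 = 21.2946 * log10(0.5*((z+1)^0.5 - 1) + 1) * (z+1)^0.5 * P0
    s = math.sqrt(1.0 + z)
    return 21.2946 * math.log10(0.5 * (s - 1.0) + 1.0) * s * P0

def pan_ln_form(z, q=Q_QUOTED):
    # Q * ln( ((1 + sqrt(1+z))/2)^sqrt(1+z) )
    s = math.sqrt(1.0 + z)
    return q * math.log(((1.0 + s) / 2.0) ** s)

print(Q_DERIVED)                                # ~18107.8 vs 18110.607641
print(pan_log10_form(0.52), pan_ln_form(0.52))  # both ~2459 Mpc
```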
I will come back with some positive statements about this formula in connection with my Relativistic Energy Average Density Scale.
Best JM
27th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Once again, as JPG image
29th May, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
It is very important for me to evaluate my own efforts in the field that now goes under the nomenclature of cosmology. It was not by accident that I drew attention to the PAN theory, because here, in addition to the Hubble scale, it was possible for me to compare your formula for distances to galaxies with my scale of relativistic matter density, which I mentioned earlier. I found that the Hubble scale has a "convex" character while the PAN scale is practically linear. The Hubble scale, in simple terms, exaggerates (overestimates) the distances for small z while, as it seems to me, it underestimates the distances for large z >= 5. Which of these two scales is closer to reality is hard to say. A similar character applies to my scale of relativistic density. However, there is one advantage to my scale.
Indeed, when we talk about some kind of measurement scale, usually one point is selected on the scale corresponding to the state of our environment. So 0 degrees Celsius corresponds to the transition of water to ice. Now it is appropriate to ask what the temperature is outside the window; it is very simple now, for example, 24 degrees Celsius. Similarly, on my density scale there is a density point corresponding to the death of the universe, the critical density kappa = 0.087267. Now it is also appropriate to ask where our universe is at the moment - the answer: at the point mu = 0.12457.
Finally, we are now approaching the advantage of the density scale. Namely, it is easy to embed any scale, e.g. the scale of redshifts, the Hubble scale or the PAN scale, into the density scale. The mapping will characterize the embedded scale by the location of its image on the density scale. For example, the recently discovered galaxy GN-z11 with z = 10.2 corresponds on the PAN scale to a density mu ~ 379.64788, at which the dimensions of the universe are negligible, ~ 0. According to Hubble, my mu density at z = 10.2 is equal to 52.119401. The size of the universe at the moment is ~ 3.065505. I can also note that the PAN scale at z = 10.2 corresponds to a distance of ~153.54 billion light years, while the Hubble distance is ~14.33 billion. The difference is a factor of more than 10.
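These distances can be checked from the formulas quoted earlier, together with the conversion 1 Mpc ~ 3.2616 million light years (a sketch; H0 = 67.15 km/s/Mpc is assumed for the Hubble figure):

```python
import math

MLY_PER_MPC = 3.2616   # million light years per megaparsec
C_KM_S = 299792.458
H0 = 67.15             # km/s/Mpc (assumed)

z = 10.2
s = math.sqrt(1.0 + z)
pan_mpc = 21.2946 * math.log10(0.5 * (s - 1.0) + 1.0) * s * 1958.0
hubble_mpc = math.tanh(math.log(1.0 + z)) * C_KM_S / H0

print(pan_mpc * MLY_PER_MPC / 1000.0)     # ~153.5 billion light years
print(hubble_mpc * MLY_PER_MPC / 1000.0)  # ~14.3 billion light years
```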
Sincerely your JM
1st Jun, 2018
Forrest Noble
Pantheory Research Org.
Yes, your comparison of distances calculated by the two different distance scales is correct. The Pan theory model is a scale-changing theory whereby, relatively speaking, matter becomes smaller as time progresses. From this perspective it would appear to us that the universe or space was expanding, whereby instead neither would be happening.
In astronomy one indicator of galactic distances is called the observation angle. Given a galaxy of a particular diameter, the farther away one is from the galaxy, the smaller the angle needed to visually traverse the galaxy from side to side with a telescope. The average observation angle should progressively decrease as distances increase. If the measurement scale is wrong, and distances miscalculated, then this is not what will be observed.
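In the small-angle approximation the observation angle falls off as 1/distance for a fixed physical diameter; a quick sketch (the 30 kpc galaxy diameter is an illustrative assumption, not a number from this thread):

```python
ARCSEC_PER_RAD = 206264.8  # arcseconds per radian

def observation_angle_arcsec(diameter_kpc, distance_mpc):
    # Small-angle approximation: angle = physical diameter / distance.
    return diameter_kpc / (distance_mpc * 1000.0) * ARCSEC_PER_RAD

# A 30 kpc galaxy at increasing distances subtends a shrinking angle.
for d_mpc in (100.0, 1000.0, 10000.0):
    print(d_mpc, observation_angle_arcsec(30.0, d_mpc))
```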
In the Pan Theory model, observation angles match distances. Based upon Hubble-formula calculated distances, observation angles do not correlate with distances at all. The rationale for this, according to the Big Bang model, is that galaxies in the past were progressively smaller in size, even though they appear brighter than they should at the calculated distances.
In an expanding universe, the universe would have been denser in the past. The opposite has been observed, consistent with the Pan Theory model. After the James Webb goes up and is successfully operating, I believe it will become obvious to astronomers that something is wrong with the Big Bang model. Just before the James Webb goes up we intend to write a paper explaining what we believe the James Webb will observe that will be contrary to mainstream cosmology.
That being said, it would seem that your studies and conclusions fit better with standard cosmology. I would hope that my ideas, equations and conclusions could help you with your research, but I expect that they may not. Maybe the primary value of the Pan Theory is that contrary future observations may not come as a complete surprise to those familiar with this model.
respectfully, Forrest Noble
3rd Jun, 2018
Forrest Noble
Pantheory Research Org.
Joseph,
I gave an answer to your reworked version of my equation, but again my answer was dropped. I must be occasionally making a mistake of some kind in my postings. I will check out your form of this equation, but if they are the same, I much prefer your version to the form that I posted. The log base 10 form was easier to program in JavaScript than the natural log format. But the natural log format is analogous to the theory behind the equation, so it is the preferable format. Thanks again for your effort on my behalf. I will get back to you concerning this equation and your own cosmological ideas from your book as time permits.
with best regards, Forrest Noble
3rd Jun, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
A quick answer. I checked carefully the ln form of your equation - it should be correct. It appears to me that the constant 21.2946*P0 is somewhat of a calibration constant, which can be adjusted to best fit real data (the ~18110 value in the ln form). I have downloaded your article, where the PAN theory is introduced. The idea of the diminution of matter, perhaps, suits me well. I will come back with some questions. I really appreciate our conversations. Regards Joseph
7th Jun, 2018
Joseph Emmanuel Mullat
Independent Researcher
Dear Forrest,
Despite the fact that the "adjacent views" of the PAN theory do not agree with the main line of astronomers and theorists, it is important for me to understand the meaning of the discrepancies. Every author knows his weakest points best. I have two points that can cause objections. The first, which for me is extremely important, is to explain the decrease in the average energy density parameter. The second concerns the interpretation of the same parameter.
The PAN theory states that "The alternative cosmological model proposes that matter becomes smaller in size but proportionally greater in quantity as time progresses. In the past there would have been accordingly fewer individual units of matter than there is now, but over time the density of matter in space would remain the same; as these individual units halve in size, they double in their numbers. These matter units in the future will accordingly be smaller but there will be more of them. For this reason this model is also a type of steady-state model."
I understand the first half of the statement, although the second half remains unclear to me. Does this mean that the new atoms of matter reconnect or fill the old spatial atomic volume that becomes available because the matter is compressed? In my model, as well, the volume of the total matter-energy increases despite the decrease in density. Even if my model supposedly corresponds to reality, I need to explain why the energy of atoms decreases together with the decrease in atomic volume, as required by the PAN theory; e.g., should the energy of the hydrogen atom decrease together with its atomic volume? I also consider the creation of matter, but the new matter connects space "on the edge of the universe" as a result of a phase transition of dark energy. Dark energy in this context has nothing to do with repulsive gravity. I look at dark energy as a source for matter creation that can have some primary anomalies leading to secondary anomalies, i.e. to gigantic voids without matter or, vice versa, to threads of galaxies formed during the phase transition.
It seems that in the past the density of matter was higher. However, there is evidence that this is not so. Apparently, it is necessary to rephrase the density parameter in a different way. "In physics, mass-energy equivalence states that anything having mass has an equivalent amount of energy and vice versa." The latter emphasizes, e.g., the duality of an electron as a particle and a wave, the duality of a light beam, etc. It appears that in the past energy dominated matter, and at the moment matter dominates energy. Therefore, we can say that the average energy density in the universe reflects some synthesis of matter and energy. In this sense it is convenient to introduce the scale of matter density and to reveal in this way the dynamics of the Universe. Therefore, my point is to map distances, or any cosmological indicators, such as the brightness of galaxies, or their visible angles, etc., onto the average energy density scale.
Yours sincerely
Joseph
11th Jun, 2018
Forrest Noble
Pantheory Research Org.
With respect to the density of the universe, according to the Pan Theory model, matter very slowly becomes smaller over very large time spans. All matter very slowly would reduce its diameter by roughly 20%, which equates to a loss of volume and mass of about one half (0.8^3 ~ 0.51). It does this by an unwinding process which we can observe as Fermion spin. About every 6 billion years all matter would reduce its mass by about 1/2. While doing this, matter gives off this decrement to a background physical field. Such a physical field could be called the Zero-Point Field, a Higgs field, an aether, etc. This decrement of mass in time reforms into new matter particulates surrounding very large gravitational influences such as theoretical black holes. The hypothesized process would involve new electrons, positrons, protons, and anti-protons.
Continuous creation of matter
When the universe was first thought to be expanding, theorists like James Jeans, Paul Dirac, Fred Hoyle, Bondi, Gold, Narlikar, and many others made such proposals. In the early 1930s, before his endorsement of the Big Bang model, Einstein developed his own steady-state density model involving the new creation of matter to maintain a constant matter density in an expanding universe. The new-creation mechanism(s) discussed were similar to prior proposals by Paul Dirac and James Jeans. He wrote a related paper on this proposal but decided to set it aside in his archives rather than publish it, once he began to believe the Big Bang model was on a solid footing and began to endorse it.
The Pan Theory process has similarities to these other proposals, but because of the diminution-of-matter process it generally does not involve an expansion of space or of the universe.
A present-day mainstream proposal of the continuous creation of matter involves Hawking radiation and the idea that near the event horizon of black holes, particles can be created by vacuum fluctuations that could create new matter in this universe. This accordingly could be in the form of electrons, positrons, and possibly protons and anti-protons. This idea seems to have kinship with the steady-state creation of matter and the 'C'-field matter creation of quasi-steady-state cosmology, but on a much smaller scale. By new-matter creation and energy radiation, black holes accordingly could evaporate out of existence.
The Pan Theory calculation of redshifts vs. distances (which you reduced more elegantly :) ) involves primarily three unstated variables concerning its equation. First, matter in the past would have been relatively larger, radiating relatively longer wavelengths of radiation. Secondly, our yardsticks would now be smaller, so that matter would appear to be moving away from itself, producing a Doppler-like redshift. Thirdly, time would be relative to the size of matter. Relatively larger matter and space in the past would be compatible with time being relatively slower. Longer distances traveled, from our time-frame perspective, would have taken a longer period of time while maintaining a generally constant speed of light, which also explains the dilation of time from past events such as supernovae. Longer time periods would mean that the time between wavelengths would be greater, equivalent to a redshift from today's perspective of these waves.
So the question becomes: how does this compare with the Big Bang picture of cosmic density? According to the Big Bang model, the density of matter in the past would have to have been greater in an expanding universe: 8 times denser 7 billion years ago, and 64 times denser roughly 10.5 billion years ago. This is exactly the opposite of what has been observed. This is the universe observation-density problem, which we explained in our previous papers.
Why does the apparent density of galaxies drop off at larger distances? (Advanced question)
A generally standard mainstream answer to this question can be seen in this link:
Look up my e-mail address at pantheory.org. I wish to make you a proposal concerning our present research and related paper -- in regard to your insights.
with best regards, Forrest Noble