11th Jun, 2018

Pantheory Research Org.

Question

Asked 18th Apr, 2017

I wonder about the source of this formula, found on the internet:

[((z+1)^{2}-1) / ((z+1)^{2}+1)] c / H_{0}

H_{0} – Hubble’s constant, c – speed of light.

"*Comparing Hubble calculated distances and brightnesses with Pan Theory calculations of distances and brightnesses.*"

I have checked the formula against 100 galaxies with 0 < z <= 1. The correlation was ca. 99%. Does somebody know where the formula stems from? JM
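For readers who want to reproduce such a check, here is a minimal sketch (my own, not part of the question; it assumes H_0 = 67.15 km/s/Mpc, the value quoted later in this thread) that evaluates the quoted formula for a few redshifts:

```python
import math

C = 299792.458   # speed of light, km/s
H0 = 67.15       # Hubble constant, km/s/Mpc (value quoted later in this thread)

def hubble_distance(z):
    """Distance in Mpc from [((z+1)^2 - 1) / ((z+1)^2 + 1)] * c / H0."""
    a = (z + 1.0) ** 2
    return (a - 1.0) / (a + 1.0) * C / H0

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: d = {hubble_distance(z):.1f} Mpc")
```

A different H_0 simply rescales all the distances by a constant factor, so the 99% correlation reported above would be unaffected by the choice of Hubble constant.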

With respect to the density of the universe, according to the Pan Theory model, matter very slowly becomes smaller over very large time spans. About every 6 billion years, all matter would reduce its diameter by roughly 20%, which equates to a loss of volume and mass of about one half. It does this by an unwinding process which we can observe as fermion spin. While doing this, matter gives off the decrement to a background physical field. Such a physical field could be called the Zero-Point Field, a Higgs field, an aether, etc. This decrement of mass in time reforms into new matter particulates surrounding very large gravitational influences such as theoretical black holes. The hypothesized process would involve new electrons, positrons, protons, and anti-protons.

Continuous creation of matter

When the universe was first thought to be expanding, theorists such as James Jeans, Paul Dirac, Fred Hoyle, Bondi, Gold, Narlikar, and many others made such proposals. In the early 1930s, before his endorsement of the Big Bang model, Einstein developed his own steady-state density model involving the new creation of matter to maintain a constant matter density in an expanding universe. The new-creation mechanisms discussed were similar to prior proposals by Paul Dirac and James Jeans. Einstein wrote a related paper on this proposal but set it aside in his archives rather than publishing it, once he came to believe the Big Bang model was on solid footing and began to endorse it.

The Pan Theory process has similarities to these other proposals, but because of its diminution-of-matter process it generally does not involve an expansion of space or of the universe.

A present-day mainstream proposal of the continuous creation of matter involves Hawking radiation and the idea that, near the event horizon of black holes, particles can be created by vacuum fluctuations, creating new matter in this universe. This accordingly could be in the form of electrons, positrons, and possibly protons and anti-protons. The idea seems to have kinship with the steady-state creation of matter and the 'C'-field matter creation of quasi-steady-state cosmology, but on a much smaller scale. By new-matter creation and energy radiation, black holes accordingly could evaporate out of existence.

The Pan Theory calculation of redshifts vs. distances (which you reduced more elegantly :) ) involves primarily three unstated variables concerning its equation. First, matter in the past would have been relatively larger, radiating relatively longer wavelengths of radiation. Secondly, our yardsticks would now be smaller, so that matter would appear to be moving away from itself, producing a Doppler-like redshift. Thirdly, time would be relative to the size of matter. Relatively larger matter and space in the past would be compatible with time being relatively slower. Longer distances traveled, from our time-frame perspective, would have taken a longer period of time while maintaining a generally constant speed of light, explaining the dilation of time from past events such as supernovae. Longer time periods would mean that the time between wavelengths would be greater, equivalent to a redshift from today's perspective of these waves.

So the question becomes: what does this diminution-of-matter process, with new matter continuously reforming from the background field, imply for the observed density of the universe?

According to the Big Bang model, the density of matter in the past would have to have been greater in an expanding universe: 8 times denser 7 billion years ago, and 64 times denser roughly 10.5 billion years ago. This is exactly the opposite of what has been observed. This is the universe observation-density problem which we explained in our previous papers.

Why does the apparent density of galaxies drop off at larger distances? (Advanced question)

A generally standard mainstream answer to this question can be seen in this link:

Look up my e-mail address at pantheory.org. I wish to make you a proposal concerning our present research and related paper -- in regard to your insights.

with best regards, Forrest Noble

-------------------


Hubble's theory depends on the behavior of photons. The notion of a photon is not well comprehended. It is NOT A WAVE and its carrier is NOT THE ELECTROMAGNETIC FIELD. Waves cannot travel billions of light years through empty space and stay detectable. The EM field does not cover such huge ranges.

Dear Sir,

I am a novice in these matters and I am still confused by this formula, as I found it in a different context, i.e., as **tanh(ln(1+z))**; that was the question. Hubble's law (an experimental result) means for me a law of how the EM wave behaves when traveling huge distances. Hubble interpreted the EM (light) redshift as a result of the Doppler effect. Therefore this formula, when following in the steps of Hubble, can be used only for z <= 1. However, nothing prevents using it for higher z values, even though nobody has yet discovered galaxies with z > 12. On the other hand, I am convinced that the weak **gravitational redshift** is more reliable for interpreting the redshift than the Doppler effect. As a consequence, we must drop the dynamic nature of the universe. So, what is then left of the BB? It might be that the BB is not a BB, but a phase transition of matter from an unknown origin, like dark energy. Water can remain liquid below zero, so-called supercooled water. Why not suppose that dark energy is matter below 0 K? Are there some contradictions with the laws of physics?

@Joseph,

Physics theories are aimed at pragmatic applications and not so much at explanations of origins. For that reason, most physical theories are descriptions of observations rather than explanations. If a description appears to fit, then it is accepted as a valid theory. In this way, all kinds of "theories" exist that claim to describe the behavior of the universe. Our problem is that only a small part of reality is accessible to observation, while all observations are affected by changes in the format of the perceived information. Google for the Hilbert Book Model.

I've never heard of Pan theory, but that formula is

Distance = velocity * Time.

where T = 1/H_0 ≈ 13.7 billion years.

The expression [((z+1)^{2} -1) / ((z+1)^{2} +1)] = v/c.

It is the inverse of the relativistic Doppler relation 1 + z = sqrt((1+v/c)/(1-v/c)).
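As a quick numerical sanity check (my own sketch, not part of the original answer): solving the relativistic Doppler relation 1 + z = sqrt((1 + v/c)/(1 - v/c)) for v/c does give ((z+1)^2 - 1)/((z+1)^2 + 1), and the round trip recovers z:

```python
import math

def beta_from_z(z):
    # v/c obtained by inverting 1 + z = sqrt((1 + beta) / (1 - beta))
    a = (1.0 + z) ** 2
    return (a - 1.0) / (a + 1.0)

def z_from_beta(beta):
    # relativistic Doppler redshift for recession speed beta = v/c
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

# round trip: recover the redshift we started from
for z in (0.1, 1.0, 5.0):
    assert abs(z_from_beta(beta_from_z(z)) - z) < 1e-12
print("inversion verified")
```

Algebraically, with a = (1+z)^2 and beta = (a-1)/(a+1), one finds (1+beta)/(1-beta) = a, so the square root is exactly 1+z.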

Hi Jonathan,

Many thanks. I also received an answer that matches yours. Yes, you are right: the equation converting redshift (z) to distance is based on recessional velocity (v). Indeed, the relativistically correct equation for velocity is:

The Hubble distance is **d = v/H_0**. Thus, multiplying the expression by c and dividing by **H_0 = 67.15 km/s** per **Mpc**, the distance for a redshift of, e.g., **0.138** is: [(1+0.138)^{2} -1]/[(1+0.138)^{2} +1] c/H_{0} = 573.9 Mpc.

However, I am still in doubt. The function **f(z) = [(1+z)^{2} -1]/[(1+z)^{2} +1]**, or **f(z) = tanh(ln(1+z))** in more elegant form, is convex, i.e., **f''(z) < 0**:

I recently picked up data taken from **NED-D**, Table 3 of **The Astronomical Journal, 153:37** (20pp). Observing the data at a glance, I noticed that the distance distribution in the universe is approximately concave.

In the authors' words, the data highlights:

So, I am totally confused, because a convex function cannot underpin a concave function.

Best JM

Ps. The Pan Theory – Alternative to the Big Bang Theory may be found at

Such independent indicators would be magnitudes of standard candles. I have recently heard the word "luptitudes" but I haven't yet learned what that means exactly--something to do with accounting for signal-to-noise ratios in very faint and distant galaxies.

I'm not sure what this means.

But, I can offer you another variable that might help.

Let's name this other variable "rapidity", denoted as varphi (φ). This represents the hyperbolic rotation angle between our worldline and the distant galaxy's worldline.

varphi= ln(1+z)

beta = v/c = tanh(varphi)

gamma = 1/sqrt(1-beta^2) = cosh(varphi)

beta*gamma = beta/sqrt(1-beta^2)=sinh(varphi)
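These definitions can be verified numerically; the sketch below (mine, not part of the post) also confirms that tanh(ln(1+z)) reproduces the ((z+1)^2 - 1)/((z+1)^2 + 1) expression discussed earlier:

```python
import math

def rapidity(z):
    # varphi = ln(1 + z)
    return math.log(1.0 + z)

for z in (0.138, 1.0, 3.0):
    phi = rapidity(z)
    beta = math.tanh(phi)
    gamma = math.cosh(phi)
    # identity checks from the definitions above
    assert abs(gamma - 1.0 / math.sqrt(1.0 - beta**2)) < 1e-12
    assert abs(beta * gamma - math.sinh(phi)) < 1e-12
    # tanh(ln(1+z)) equals ((1+z)^2 - 1) / ((1+z)^2 + 1)
    a = (1.0 + z) ** 2
    assert abs(beta - (a - 1.0) / (a + 1.0)) < 1e-12
print("all identities hold")
```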

My grasp of what you mean by "underpin" might be a bit hazy, but if my usage of the word matches yours, you will understand when I say that z is not what underpins the space, nor do beta, gamma, or varphi. What underpins the space is a Minkowski coordinate system.

In this framework, distant objects are not required by principle to move according to a Hubble flow. They just happen to move according to a Hubble flow because of the event or events which set them into motion.

These events would tend to create an environment of equipartition in rapidity space, which in turn produces a local appearance of equipartition in velocity space (small-angle approximation: sinh(varphi) ≈ tanh(varphi) ≈ varphi), which in turn produces an apparent homogeneity over regions where the small-angle approximation holds.

Hi Jonathan,

I used the term “**underpin**” in its ordinary meaning like “**reinforce, back, sustain, corroborate, confirm,**” etc.

I am not familiar with all these funny names: **varphi, beta, gamma**, etc. For me it is just hyperbolic trigonometry, where a pail is called a pail. Hyperbolic geometry, to my knowledge, is connected to geometries with negative curvature. Minkowski geometry represents a positive-curvature geometry. Minkowski geometry, let us agree on the word, "underpins" the **NED-D** data much better than negative-curvature geometries. Hyperbolic geometry underpins the Doppler effect better.

Two functions, as you already noticed, represent a so-called diffeomorphism, where the direct mapping and its inverse are both differentiable functions:

represent such an example of a diffeomorphism. Unfortunately, this diffeomorphism does not underpin the **NED-D** data. On the contrary, I found a diffeomorphism which does:

**mu** – an average density of matter in the Universe; **lambda** – a calibration parameter, as in the **MOND** model; **mu = 27.567341, lambda = 0.8375102**; **r** – the distance to the extragalactic entity.

I call the **g**-function a weak gravitational potential function, whose inverse, in contrast to **tanh[ln(1+z)]**, is not convex but concave. One can check that the inverse **r**-values of the **g**-function, taken in ascending order on the interval **g** in **[0.03; 10.86]**, will underpin the **NED-D** data, at least with a correlation coefficient of **0.988238**.

beta, varphi, and gamma are just words used to represent Greek and Roman symbols as they are written in LaTeX.

I could have written:

Define: rapidity= ln(1+redshift)

Define: celerity = beta = velocity/speed of light = tanh(rapidity)

Define: Lorentz Factor: gamma = 1/sqrt(1-beta^2) = cosh(rapidity)

Derive: beta*gamma = tanh(rapidity)*cosh(rapidity)=sinh(rapidity)

As for your use of underpin:

I am wondering now if you might use the word "fit", as in "this function fits the data", or "perform an ordinary least-squares regression to fit the data".

Hmmm... Sorry, I'm not familiar with the NED-D data tables, but I'm looking at a table of something called NED-D data now, at

This particular table doesn't even show a value for z; it just has a column for V helio(z). Nor does it show a value for a weak gravitational potential function that I could see, unless that's the GLON or the GLAT.

Is your concern here really related to whether various functions are concave/convex, or does it have more to do with whether they are one-to-one and onto? So long as two functions are one-to-one and onto over their respective domains and ranges, you can compose functions of functions of functions and not lose any information.

To conclude the discussion.

It was **impossible**, according to my level and knowledge of data analysis, to fit the data from **NED-D** (**The Astronomical Journal, 153:37 (20pp), 2017 January, Table 3**, column **Mean (Mpc)**; compiled, in the authors' words, from **94,958 extragalactic entities** collected from **34,389 galaxies** in the reports of **13,745 authors**) by Hubble's distances **tanh[ln(1+z)] c/H_0**, **c** – speed of light, **H_0** – Hubble constant.
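To make the "fit" question concrete, here is a sketch of how one might correlate catalog distances against tanh[ln(1+z)]·c/H_0. The (z, distance) pairs below are hypothetical placeholders, not actual NED-D values, and H_0 = 67.15 km/s/Mpc is taken from earlier in the thread:

```python
import math

C, H0 = 299792.458, 67.15  # km/s and km/s/Mpc (H0 as quoted earlier in the thread)

def model_distance(z):
    # Hubble distance in Mpc: tanh(ln(1+z)) * c / H0
    return math.tanh(math.log(1.0 + z)) * C / H0

def pearson_r(xs, ys):
    # Pearson correlation coefficient of two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (z, catalog distance in Mpc) pairs -- placeholders, not NED-D data.
sample = [(0.01, 44.0), (0.05, 215.0), (0.10, 420.0), (0.20, 800.0), (0.50, 1750.0)]
catalog = [d for _, d in sample]
model = [model_distance(z) for z, _ in sample]
print(f"r = {pearson_r(catalog, model):.4f}")
```

Note that a high correlation coefficient only measures the strength of a linear relationship; it does not by itself settle whether the model curve is systematically above or below the catalog distances, which is the concave/convex question raised above.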

Which was greater: the values in column 3, or the Hubble distances?

In most situations, the calculation of the distance comes from estimations based on the object's magnitude, rather than its redshift.

Hi,

As said, the observation of The Astronomical Journal, 153:37 (20pp), 2017 January, Table 3, column Mean (Mpc), convinced me at first glance that Hubble's distances at lower z perhaps give overly high values, while at higher z Hubble's law underestimates the longer distances of extragalactic objects. I need to think through this preliminary analysis result in detail. In the attached illustration, more correct estimates might be useful.

JM

Ps. Tab. 3 highlights 94,958 extragalactic entities collected from 34,389 galaxies and reported by 13,745 researchers.

- zMpc.pdf (376.51 KB)

Well, the last comment was a year ago. I just found this query in the last week or so, so I will answer it. I was the one who derived the reference equation used on the referenced website, which is my organization's website. It is the JavaScript program referenced above that calculates galactic distances contrary to the Hubble formula. The formula is (10) in the above paper, and the brightness formula variation (11) differs from the inverse-square law of light, since larger matter in the past (relatively speaking) would appear brighter based upon its distance.

This model proposes that dark energy and non-baryonic dark matter do not exist, as well as proposing a "simple" beginning to the universe without a Big Bang or Inflation. It is a mechanical "Theory of Everything", proposing to unify all of physics under a single all-encompassing theory. Galactic redshifts are explained by a diminution-of-matter process rather than by the expansion of space. Space would appear to be expanding, but instead matter would be very slowly getting smaller, a type of scale-changing theory. New matter would be steadily created from the matter decrement, maintaining a constant density of matter and a steady-state condition conserving matter and energy. The universe accordingly would be far older, but not infinite in size or age. It is also an aether theory, a single-fundamental-particle theory, and a single matter-innate physical-force theory. A related scientific peer-reviewed paper was published in 2014.

Our website is http://www.pantheory.org/

The paper in which I explained how I derived this formula is at this link:

The derivation was made in 2013. It involved several hundred type 1a supernovae, much more data than was available when dark energy was proclaimed. The model used to derive the formula is a diminution-of-matter model, contrary to the expansion of space. The model is called the Pan Theory, which can be found on any search engine. The basis of the model does not take too long to explain, but a complete description of it is a 370-page book, available on the internet without cost at pantheory.org

Dear Forrest Noble,

You might be pleased by the fact that another theory is also based on a field excitation that forms the basis of all matter. This excitation is a spherical pulse response, and it is a solution of the wave equation. It integrates into the Green's function of the field. I apply quaternionic field theory, and there the spherical pulse response injects volume into the affected field. That volume locally deforms the field and then spreads over the field and expands it. See:

Article Generating Mass from Nothing

and https://www.researchgate.net/project/The-Hilbert-Book-Model-Project/update/5af2df924cde260d15dd9529

Hans,

I dislike the something-from-nothing ideas of Hawking, but I also propose the generation of new matter from a background field. The background field I prefer is a particulate version of the Zero-Point Field. Since this is off topic, I will e-mail you and we could discuss our ideas further, if you are interested.

Forrest


The zero-point field is not part of the Hilbert Book Model, but stochastic processes that control the universe are. You might send me a private message at LinkedIn.

Dear Forrest Noble,

You have dispelled my doubts about the formula expressing distance by means of the Hubble constant through a redshift z in an elegant form. I found an article where the same formula was proposed under another author's name. Thanks for the answer. Joseph Mullat

Thank you, Joseph. Dear Sir,

The formula you noted above is one form of the Hubble formula, but the formula on the website you listed above at **http://www.pantheory.org/HF.htm** is the JavaScript program, where distances are calculated by my own derived equation. Distances are calculated according to redshift input data based upon a different cosmology model called the Pan Theory, which is my own model.

The Pan Theory distance equation that you referenced by the emboldened link directly above is:

Where *r*_{1} is the calculated distance, z is the observed redshift,

and *P*_{0} is a constant equal to 1,958.0

Below is the version of the Hubble formula that you also noted above:

d = [((z+1)^{2} -1) / ((z+1)^{2} +1)] c / H_{0},

where d is the calculated distance, z is the observed redshift, c is the speed of light, and H_{0} is the designated Hubble constant. There are presently two contrary and competing Hubble constants: one about 68 km/s/Mpc and the other about 72 km/s/Mpc, using two different methods of determination. This is presently a big controversy in mainstream cosmology.

On the other hand, it is believed that the Hubble constant, the expansion rate of the universe, is not constant at all. Present hypotheses propose instead that the initial expansion of the universe was caused by a hypothetical Inflation process, and that the present expansion is controlled by a hypothetical dark energy process.

Comparing the Hubble formula (Big Bang cosmology) with the Pan Theory formula:

Pan Theory cosmology (reference any search engine) is based upon a different explanation for observed galactic redshifts.

According to this model, instead of space expanding, matter (relatively speaking) would be getting smaller, and the rate at which time passes would be getting quicker. As such, it would appear to us that space was expanding. Such models have been called scale-changing theories. Of course, the reasons for this change are part of the theory.

For redshifts z < 1, there is no more than an 8% difference between the distances calculated by the two formulas above. For redshifts z >= 1, the Pan Theory calculated distances progressively increase to multiples of the distances calculated by the Hubble formula, with no ultimate distance limit other than the limits of our equipment. Of course, such great distances are contrary to Big Bang cosmology. The reason why these great distances are not recognized by mainstream astronomers would accordingly be that larger matter in the past would have produced more light per galaxy, so the greater distances would be compensated by greater luminosity. This compensation still would not match the inverse-square law of light, but would match the determined observation angle of a galaxy or cluster. Instead of the universe expanding, the observable universe would be in a steady-state condition, with new matter eventually being created from a physical background field as a result of the decrement of the diminution-of-matter process.

Here is the link to the paper that explains how the Pan Theory formula above was derived.

Here is a link to our paper explaining the major problems with the Big Bang model, and the reasoning to favor the Pan Theory model.

best regards, Forrest Noble

Dear Forrest,

I really appreciate your prompt response.

For your information, I'm not an astronomer; in my life I have never looked into a telescope. I wrote several articles that I published privately. All these articles are summed up under one roof, which I call "Monotonous Phenomena of Issues". Only some of them are published, but not by prestigious publishing houses. For example, one article, on which I worked for about 20 years, turned out to be a most complicated non-linear economic model with 8 parameters.

My interest in cosmology was awakened in 1973, in connection with the use of the same "Monotonous Phenomena of Issues". These were just some of my funny ideas, which at that time I illustrated by the same monotonous phenomena. Now I'm getting close to what caused my interest in cosmology.

It was the Planck Mission from 2013, the measurement results of which I entered, just for fun, into my cosmological model. To my surprise, without any problems, my model reproduces the results with almost 100% accuracy. Working over the last 2-3 years, I published this article, unfortunately, in the very contradictory Journal of Cosmology. However, the mathematical quality of this article should be at the standard academic level. You can download my book with my illustrations and comments:

I am seeking contradictions in my model. Therefore, the Pan Theory suits it well. I downloaded your articles and looked through the list of problems. I can immediately say that the "horizon problem" can be explained in my model. As for the "flatness problem", I do not use GR at all. The density problem is transferred to one parameter, which takes into account the relativistic density, allowing all kinds of motions of matter. In my model, matter is supposedly standing still. The problem of galaxy formation is relevant only if a timeline is used. I do not use the time parameter and, therefore, this problem does not exist.

Dear Forrest, I already told you that I'm not an astronomer. I do not understand, e.g., how an observer unknown to me came to the conclusion on page 561 in my book. However, if page 561 is true, I can explain his observations in my model. "The Anachronistic Galaxy Problem" is confusing. I could at least somehow explain this problem, but for this I would have to become an astronomer, which is impossible. My model is much simpler than the LCDM model with its 24 parameters.

By the way, I recently discovered an article claiming that GR in the theory of the BB violates causality. As I understand it, the Pan Theory suggests that matter is shrinking but moving with acceleration, and that the reduction due to acceleration is manifested as an expansion with acceleration. Yes, if matter accelerates in the Universe despite shrinking, then in this case the relativistic energy of the Universe increases, even though the average relativistic energy can decrease, which is necessary in my theory. This is my problem, because of which I have not yet abandoned attempts to explain my mathematical findings to astronomers. This can take many years.

Regards

Joseph

Dear Joseph,

I am a big fan of simplicity in physics, which you said is one of your goals in your own proposed model(s). In today's physics there seems to be little effort spent attempting to simplify theory. Almost all physicists believe that understanding the universe is very complicated. Although most equations are necessarily complicated, explanations of theory often lack logic, because the theories being explained are often partly or wholly wrong, IMO.

Because I often hold views and theory contrary to mainstream models, my contrarian papers are often difficult to get published, maybe somewhat like the paper you ended up sending to the Journal of Cosmology. Our current paper might be called a "no dark matter" paper; it argues against the existence of dark matter. What we think is a better proposal has been presented with much data, calculations, and evidence in support. This paper was finished more than a year and a half ago and is still in the process of our own editing, since it has been turned down by some of the major publishers who have published at least one previous paper of ours. Some rejections were expected, because contrarian papers may never be published in a mainstream journal. In this case I will keep changing the way the paper is written to eventually get it published, hopefully in a higher-profile journal. In another month or so I expect to submit this many-times-amended paper to yet another mainstream journal. We spent more than 2 years on our now-published "no dark energy" paper.

Although I think the Big Bang model is an incorrect cosmological model, somebody's claim that the BB model violates the principle of causality probably has flaws in its logic. In my opinion, the beginning of the BB model has been adequately explained by one of the mainstream versions of the model. Where time can be explained as meaningless without change, there could have been no cause for an original Big Bang; otherwise, if there were, what would be the cause of that cause, etc.? So one ends in an infinite series and an infinite universe concerning time. There has to have been an original cause for a finite universe model. If the universe were infinite in time, there also could not have been any cause for it, since time would be infinite in the past. Where the universe is defined as everything in existence, whether finite or infinite, it could not logically have had an external cause for its beginning.

I will look at your book and comments posted above, and hopefully come up with good suggestions :)

best regards, Forrest

As an afterthought, I would like to give an additional explanation of the Anachronistic Galaxy Problem with the BB model.

As I said in our related paper, this may be the biggest and most obvious problem with the Big Bang model. This problem is well known to astronomers but seldom discussed, except by those making such on-going observations.

A great many galaxies at the furthest distances appear to be red, fully mature, large galaxies, some similar to the Milky Way, and some even older-looking. At these great distances, only small, blue, young-appearing galaxies should exist according to the BB model. This is probably the biggest problem with the Big Bang model, and the problem that will eventually result in its demise, IMO, after the James Webb has been up for a while and can see no limit to the extent of galaxies and galaxy clusters. Below are links to a few of the many such contrary observations at the presently furthest observable distances.

Dear Forrest,

In connection with the problem of anachronistic galaxies, I will try to describe the situation very simply, without going into details. The **LCDM** theory makes claims to a very precise and voluminous description of the formation of galaxies and all the soup of matter (rather, quantum-particle effects) accompanying the formation process. First some formal scheme is used, and then the scheme turns into a verbal description. It seems to me that we need to do the opposite: first we need to understand, in simple words, what an explosion is. Indeed, an explosion performed in time is an instantaneous transition of denser material into sparser material.

Let us describe this process by one parameter: the density of the relativistic energy, **mu**. For me it is like the description of protons in the Large Hadron Collider moving at nearly light speed, which is equivalent to about **400 tons** of mass moving at a speed of **150 km** per hour. It is clear that the energy density enclosed in the volume occupied by a proton will be extremely high. In my model, for purely speculative purposes, I took the energy density as approximately **10**^{15} in the inflation phase of the **BB**. This density corresponds to a globe 1 cm in diameter. Now it would be interesting to calculate what the density of the universe would be at the time of the anachronistic galaxies' formation, that is, somewhere **400,000,000** years after the initial inflation. I used the NED database for this purpose, from Stark's article. The farthest galaxy in this sample is at a distance of **7,700 Mpc**. The linear transformation that maps the distance to the galaxy onto the density of relativistic energy gives me the result **mu = 2.79** – indeed, there was a huge leap in density. If we now move to the present state of density, then, in fact, **mu = 0.12457**. Actually, during the **16.7** billion years that have passed, there was only a small leap in density. According to my calculations there exists, by analogy with the **LCDM** model, a theoretical critical value **mu = kappa = 0.087267** at which the phase process of energy transition to mass ends. This highlights, in other words, that the death of the universe will come. Hence it is clear that our universe has almost finished its evolution.

What I said in words is a fairy tale about the dynamics of our universe. A fairy tale is a fairy tale, but maybe there is something instructive here.

Yours sincerely

Joseph

Yes, it seems likely that your analysis may be a harbinger of something important because of its apparent accuracy. I will continue to investigate your material, to give my humble input into what kind of nuggets your related material might contain. Since my related views will probably not be in accord with mainstream theory, my input will just be personal opinion, probably not palatable to mainstream astronomers and theorists at the present time.

best regards, Forrest

Dear Forrest,

My duty is to help you read my stuff. Don't hesitate to disturb me.

I am worried that my math might do you some harm. I am thinking of putting your distance formula

in question, to fit it with my theory. I must find an interval of **z** values mapping the **z** values into my average energy density interval. I already tried to do that with

By the way, **d = [((z+1)^{2} -1) / ((z+1)^{2} +1)] c / H_{0}** can be rewritten in a more elegant form

Dear Forrest,

I do not quite understand your distance formula.

Why do you have the **21.2946** constant and the **P**_{0} constant? Why not merge these two constants into one? Next, is **log10** just a constant, or does it represent a **log10** function, e.g. **log10[ .5((z+1)^{.5} - 1) + 1 ]**? Please make it crystal clear to me how to put **z** into your formula. I really will try to match your formula, in my model, with the calculus of distances via the average energy density parameter.

Best JM

Joseph,

Yes, for practical applications you could merge the two constants. But rather than calculating anything yourself you could use the programmed calculator at the link that you first posted: **http://www.pantheory.org/HF.htm**

The reason the two constants are separate is that the formula was first written in terms of the natural log 'e', since the foundation of the Pan Theory has its basis in the natural log; in that formulation the first constant is different. The formula was converted to log base 10 for ease of calculation. The form of the equation **r1 = ......**, above, is the way the program in the above link was written. Also, the first constant was calculated based upon theory and the observational accuracy of spectral lines, while the second was determined from the combined type 1a supernova data available in 2013, which could vary by perhaps as much as 1 significant figure given hundreds more observations.

When using the programmed equation in the link, if you have a determined redshift of z = .52, for instance, you would put the value z+1 into the z input field; therefore you would enter 1.52. This is the relative spectral-length increase over its normal beginning length, indicated as "1." From this input alone the calculations of my formula are determined. To compare these results with the Hubble formula, the additional input of a Hubble constant is needed. The average constant expansion rate that seemed to be used for the all-sources supernova data was 68 km/s/Mpc, although I could find none specified, since the Hubble constant can vary up to 10% depending upon the method of determination.

I haven't had time as yet but I still hope to be able to make possibly helpful suggestions concerning your related writings.

best regards, Forrest

Dear Forrest,

I am very sorry to disturb You.

I used z = 0.52 and got 2459.374986 using your PAN distance formula.

Is that correct? Is your formula in Mpc distances?

Best JM

Forgot to answer your other question.

"Is the **log10** just a constant, or does it represent a **log10** function, e.g. **log10[ .5((z+1)^{.5} - 1) + 1 ]**?"

One could consider it as the log of the function **[ .5((z+1)^{.5} - 1) + 1 ]**, with **(z+1)^{.5}** as a separate function; or they could be considered a single combined function of z when multiplied by the constants.
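Read literally, the bracketed expression can be coded as follows. Only the inner function is shown; how the constants and any further factor of **(z+1)^{.5}** enter is left to the original formula:

```python
import math

def pan_log_term(z):
    """log10( 0.5*((z+1)**0.5 - 1) + 1 ), the bracketed function
    discussed in the thread, taken at face value."""
    return math.log10(0.5 * ((z + 1.0) ** 0.5 - 1.0) + 1.0)
```

At z = 0 the inner expression is exactly 1, so the log term vanishes, which is at least consistent with a distance of zero at zero redshift.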

Yes, that is correct. Although the distance is greater than the Hubble calculated distance, the apparent distance (observation angle) is very close to the same. I had a much longer answer written which was somehow lost. I'll elaborate more on this when I get back from traveling this weekend beginning 5/25/18.

Dear Forrest,

Thanks for your efforts. You can check that your formula can be rewritten in a more elegant form:

I will come back with some positive statements about this formula in connection with my Relativistic Energy Average Density Scale.

Best JM

Dear Forrest,

It is very important for me to evaluate my own efforts in the field that is now rising under the cosmology nomenclature. I did not accidentally draw attention to the **PAN** theory: here it was possible for me, in addition to the Hubble scale, to compare your formula for distances to galaxies with my scale of relativistic matter density, which I mentioned earlier. I found that the Hubble scale has a "convex" character while the **PAN** scale is practically linear. The Hubble scale, in simple terms, exaggerates (overestimates) the distances for small **z** while, as it seems to me, it underestimates the distances for large **z >= 5**. Which of these two scales is closer to reality is hard to say. A similar character applies to my scale of relativistic density. However, there is one advantage to my scale.

Indeed, when we talk about some kind of measurement scale, usually one point on the scale is selected to correspond to the state of our environment. So **0°** Celsius corresponds to the transition of water to ice. Now it is appropriate to ask what the temperature is outside the window; it is very simple now, for example **24°** Celsius. Similarly, on my density scale there is a density point corresponding to the death of the universe, the critical density **kappa = 0.087267**. Now it is also appropriate to ask where our universe is at the moment; the answer: at the point **mu = 0.12457**.

Finally, we now approach the advantage of the density scale. Namely, it is easy to embed any scale into the density scale, e.g. the scale of redshifts, the Hubble scale, or the **PAN** scale. The mapping characterizes a scale by the location of its image on the density scale. For example, the recently discovered galaxy **GN-z11** with **z = 10.2** corresponds on the **PAN** scale to a density **mu ~ 379.64788**, at which the dimensions of the universe are negligible, **~ 0**. According to Hubble, my mu density at **z = 10.2** equals **52.119401**; the size of the universe at that moment is **~ 3.065505**. I can also note that the PAN scale at **z = 10.2** corresponds to a distance of **~153.54** billion light years, while the Hubble distance is **~14.33 billion**. They differ by a factor of more than **10**.
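The Hubble-side distance quoted for z = 10.2 can be sanity-checked against the Doppler-form formula from earlier in the thread. This sketch assumes H0 = 68 km/s/Mpc and the standard Mpc-to-light-year conversion:

```python
C_KM_S = 299792.458     # speed of light, km/s
MPC_TO_GLY = 3.2616e-3  # 1 Mpc ~ 3.2616 million light years

def doppler_distance_gly(z, h0=68.0):
    """Doppler-form distance, converted to billions of light years."""
    s = (z + 1.0) ** 2
    return (s - 1.0) / (s + 1.0) * (C_KM_S / h0) * MPC_TO_GLY
```

For z = 10.2 this gives about 14.2 billion light years, close to the ~14.33 quoted above; the small difference is absorbed by the exact choice of H0.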

Sincerely your JM

Yes, your comparison of distances calculated by the two different distance scales is correct. The Pan Theory model is a scale-changing theory whereby, relatively speaking, matter becomes smaller as time progresses. From this perspective it would appear to us that the universe or space was expanding, when in fact neither would be happening.

In astronomy one indicator of galactic distances is called the observation angle. Given a galaxy of a particular diameter, the farther away one is from the galaxy, the smaller the angle needed to visually traverse the galaxy from side to side with a telescope. The average observation angle should progressively decrease as distances increase. If the measurement scale is wrong, and distances miscalculated, then this is not what will be observed.

In the Pan theory model, observation angles match distances. Based upon Hubble formula calculated distances, observation angles do not correlate with distances at all. The rationale for this, according to the Big Bang model, is that galaxies in the past were progressively smaller in size even though they appear to be brighter than they should at calculated distances.
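The observation-angle idea above is one line of geometry. A minimal sketch under the naive Euclidean small-angle assumption (no cosmological corrections), with purely illustrative numbers:

```python
import math

def observation_angle_arcsec(diameter_kpc, distance_mpc):
    """Small-angle size theta = D/d, converted to arcseconds, for a
    galaxy of physical diameter D (kpc) at Euclidean distance d (Mpc)."""
    theta_rad = (diameter_kpc / 1000.0) / distance_mpc
    return math.degrees(theta_rad) * 3600.0

# A 30 kpc galaxy at 100 Mpc subtends ~62 arcsec; ten times farther,
# one tenth the angle. Any departure from this 1/d trend is what the
# paragraph above is arguing about.
```

In standard cosmology the actual relation (angular diameter distance) deviates from 1/d at high z, which is precisely where the two models part company.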

In an expanding universe, the universe would have been denser in the past. The opposite has been observed consistent with the Pan Theory model. After the James Webb goes up and is successfully operating, I believe it will become obvious to astronomers that something is wrong with the Big Bang model. Just before the James Webb goes up we intend to write a paper explaining what we believe the James Webb will observe that will be contrary to mainstream cosmology.

That being said, it would seem that your studies and conclusions better fit standard cosmology. I would hope that my ideas, equations, and conclusions could help you with your research, but I expect they may not. Maybe the primary value of the Pan Theory is that contrary future observations may not come as a complete surprise to those familiar with this model.

respectfully, Forrest Noble

Joseph,

I gave an answer to your reworked version of my equation but again my answer was dropped. I must be occasionally making a mistake of some kind in my postings. I will check out your form of this equation, and if they are the same, I much prefer your version to the form that I posted. The log 10 form was easier to program in JavaScript than the natural log format, but the natural log format is the analog of the theory behind the equation, so it is the preferable format. Thanks again for your effort on my behalf. I will get back to you concerning this equation and the cosmological ideas of your book as time permits.

with best regards, Forrest Noble

Dear Forrest,

A quick answer. I carefully checked the **LN form** of your equation; it should be correct. It appears to me that the constant **21.2946·P**_{0} is somewhat of a calibration constant, which can be changed to fit real data best **(18...)** in the **LN form**. I have downloaded your article, where the **PAN** theory is introduced. The idea of diminishing matter, perhaps, suits me well. I will come back with some questions. I really appreciate our conversations. Regards, Joseph

Dear Forrest,

Despite the fact that the "*adjacent views*" of the PAN theory do not agree with the main line of astronomers and theorists, it is important for me to understand the meaning of the discrepancies. Every author knows his own weakest points best. I have two points that can cause objections. The first, which for me is extremely important, is explaining the decrease in the average energy density parameter. The second concerns the interpretation of the same parameter.

The PAN theory states that “*The alternative cosmological model proposes that matter becomes smaller in size but proportionally greater in quantity as time progresses. In the past there would have been accordingly fewer individual units of matter than there is now, but over time the density of matter in space would remain the same; as these individual units halve in size, they double in their numbers. These matter units in the future will accordingly be smaller but there will be more of them. For this reason this model is also a type of steady-state model.*”

I understand the first half of the statement, although the second half remains obscure to me. Does it mean that the new atoms of matter reconnect with, or fill, the old spatial atomic volume that becomes available because matter is compressed? In my model, as well, the volume of the total matter-energy increases despite the decrease in density. Even if my model supposedly corresponds to reality, I need to explain why the energy of atoms decreases together with the decrease in atomic volume, as the PAN theory requires; e.g., should the energy of the hydrogen atom decrease together with its atomic volume? I also consider the creation of matter, but the new matter joins space "*on the edge of the universe*" as a result of a phase transition of dark energy. Dark energy in this context has nothing to do with repulsive gravity. I look at dark energy as a source for matter creation that can have some primary anomalies leading to secondary anomalies, i.e. to gigantic voids without matter or, vice versa, to the threads of galaxies formed during the phase transition.

It seems that in the past the density of matter was higher. However, there is evidence that this is not so. Apparently, it is necessary to rephrase the density parameter in a different way. "*In physics, mass-energy equivalence states that anything having mass has an equivalent amount of energy and vice versa.*" The latter underlies, e.g., the duality of an electron as a particle and a wave, the duality of a light beam, etc. It appears that in the past energy dominated matter, while at the moment matter dominates energy. Therefore, we can say that the average energy density in the universe reflects a synthesis of matter and energy. In this sense it is convenient to introduce the scale of matter density and reveal in this way the dynamics of the Universe. Hence my point: to map distances, or any cosmological indicators, such as the brightness of galaxies or their visible angles, into the average energy density scale.

Yours sincerely

Joseph

With respect to the density of the universe, according to the Pan Theory model, matter very slowly becomes smaller over very large time spans. All matter very slowly would reduce its diameter by roughly 20%. This equates to a loss of volume and mass equivalent to about 1/2. It does this by an unwinding process which we can observe as Fermion spin. About every 6 billion years all matter would reduce its mass by about 1/2 . While doing this matter gives off this decrement to a background physical field. Such a physical field could be called the Zero-Point-Field, a Higgs field, an aether, etc. This decrement of mass in time reforms into new matter particulates surrounding very large gravitational influences such as theoretical black holes. The hypothesized process would involve new electrons, positrons, protons, and anti-protons.

Continuous creation of matter

When the universe was first thought to be expanding, theorists like James Jeans, Paul Dirac, Fred Hoyle, Bondi, Gold, Narlikar, and many others made such proposals. In the early 1930s, before his endorsement of the Big Bang model, Einstein developed his own Steady-State density model involving the new creation of matter to maintain a constant matter density in an expanding universe. The new-creation mechanism(s) discussed were similar to prior proposals made by Paul Dirac and James Jeans. He wrote a related paper of this proposal but decided to set it aside in his archives rather than publishing it once he began to believe the Big Bang model was on a solid footing and began to endorse it.

The Pan Theory process has similarities to these other proposals, but because of the diminution-of-matter process it generally does not involve an expansion of space or of the universe.

A present-day mainstream proposal of the 'Continuous creation of matter' involves Hawking radiation and the idea that near the event horizon of black holes, particles can be created by vacuum fluctuations that could create new matter in this universe. This could accordingly be in the form of electrons, positrons, and possibly protons and anti-protons. This idea seems to have kinship with the steady-state creation of matter and the 'C'-field matter creation of quasi-steady-state cosmology, but on a much smaller scale. By new-matter creation and energy radiation, black holes could accordingly evaporate out of existence.

The Pan Theory calculation of redshifts vs. distances (which you reduced more elegantly :) ) primarily involves 3 unstated variables behind its equation. First, matter in the past would have been relatively larger, radiating relatively longer wavelengths of radiation. Secondly, our yardsticks would now be smaller, so that matter would appear to be moving away from itself, producing a Doppler-like redshift. Thirdly, time would be relative to the size of matter. Relatively larger matter and space in the past would be compatible with time being relatively slower. Longer distances traveled, from our time-frame perspective, would have taken a longer period of time while maintaining a generally constant speed of light, explaining the dilation of time of past events such as supernovae. Longer time periods would mean that the time between wavelengths would be greater, equivalent to a redshift from today's perspective of these waves.

So the question becomes:

According to the Big Bang model, the density of matter would have to have been greater in the past in an expanding universe: 8 times denser 7 billion years ago, and 64 times denser roughly 10.5 billion years ago. This is exactly the opposite of what has been observed. This is the universe observation-density problem, which we explained in our previous papers.
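The "8 times" and "64 times" figures follow from matter density scaling as (1+z)^3 in an expanding universe; the quoted lookback times correspond roughly to z = 1 and z = 3:

```python
def density_factor(z):
    """Matter density relative to today, rho(z)/rho_0 = (1+z)**3,
    in the standard expanding-universe picture."""
    return (1.0 + z) ** 3
```

So `density_factor(1)` gives 8 and `density_factor(3)` gives 64, matching the numbers in the paragraph above.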

Why does the apparent density of galaxies drop off at larger distances? (Advanced question)

A generally standard mainstream answer to this question can be seen in this link:

Look up my e-mail address at pantheory.org. I wish to make you a proposal concerning our present research and related paper -- in regard to your insights.

with best regards, Forrest Noble
