Discussion
Started 27th Apr, 2021

Are Dr. Hans-Otto Carmesin's Models supported by the Supernova Observations?

Dr. Hans-Otto Carmesin is a prolific theoretician who wrote, among other things, these two books:
Modeling SN1a data:
That said, he leads a field where a lot of unsupported claims are tossed around without anything to support them. That is why they are unsupported. :)
As Dr. Carmesin professed, scientists should follow the teachings of Aristotle and always use the simplest possible model that is consistent with Reality.
Dr. Carmesin's model has nonlocality, dimensional transitions, the usual suspects (Dark Matter and Dark Energy), and an epoch-dependent Dark Energy (figure 8.15 in the first book above).
It is a fantastic work and, from my point of view, unnecessary and incorrect.
Unnecessary because there is HU, which is capable of explaining everything Dr. Carmesin explained without the need for a Big Bang, Dark Energy, Dark Matter, epoch-dependent Dark Energy, or a Polychromatic Vacuum. Because of that, Aristotle and Occam's Razor would support HU and rebut Dr. Carmesin's work.
Attached is my summary of the problems I found in Dr. Carmesin's claims that SN1a distances support his work.
#########################################
This is an ongoing discussion.
Dr. Carmesin provided a reply to my objections and confirmed that he is not sure if his model can predict the SN1a distances.
In fact, he said: "My theory does not fail to predict these distances. I just did not calculate these distances yet for a good reason: I tested my full theory by calculating the measured Hubble constants of the Hubble tension."
First, that is not a good reason. Second, I calculated the distances according to his model and the model failed. See the plot and the attached python script.
#########################################
My plot of his model shows that it fails to predict the observed distances.
I also drive home the fact that Dr. Carmesin's model modifies the meaning of H0 (the Hubble Constant). Because of that, comparing results is not straightforward, and this point seems not to have been considered before.
The plots also show that the HU model predicts the observed distances without any parameters.
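For readers who want to try this check themselves, below is a minimal sketch of the kind of calculation the attached script performs: compute distance moduli from a model's E(z) and overlay them on the SN1a data. It is not the attached script; the data file name and its columns are hypothetical placeholders, and the flat-LCDM E(z) is only a stand-in for whichever model is being tested.
```python
# Minimal sketch (not the attached script) of the comparison described above.
# The file "sn1a_data.csv" and its columns (z, mu, mu_err) are hypothetical
# placeholders; E(z) must be replaced by the model actually being tested.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light in km/s
H0 = 67.36            # km/(s*Mpc), the value used in this discussion

def E_lcdm(z, om=0.315, ol=0.685):
    """Flat-LCDM E(z), used here only as a stand-in model."""
    return np.sqrt(om * (1.0 + z) ** 3 + ol)

def mu_model(z, E=E_lcdm):
    """Distance modulus mu = 5*log10(d_L / 10 pc) for a flat universe."""
    dc = (C_KM_S / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]  # comoving distance, Mpc
    dl = (1.0 + z) * dc                                           # luminosity distance, Mpc
    return 5.0 * np.log10(dl * 1.0e6 / 10.0)

z_obs, mu_obs, mu_err = np.loadtxt("sn1a_data.csv", delimiter=",", unpack=True)
z_grid = np.linspace(0.01, z_obs.max(), 200)

plt.errorbar(z_obs, mu_obs, yerr=mu_err, fmt=".", label="SN1a data")
plt.plot(z_grid, [mu_model(z) for z in z_grid], label="model")
plt.xlabel("z"); plt.ylabel("distance modulus mu"); plt.legend(); plt.show()
```
Swapping in a different model only requires passing a different E(z), which is exactly why the definition of E(z), and of H0 with it, matters for the comparison.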


All replies (4)

Hans-Otto Carmesin
Universität Bremen
Dear Marco,
I reply to your three objections:
In your FIRST objection you criticize that I use the value H0 = 67.36 km/(s*Mpc) of the Hubble constant at z = 1090 (see e. g. Planck Collaboration 2020).
My reply to it is as follows: In my theory I derive properties of the universe from the beginning at the Planck scale as a function of time.
In order to compare with observation, my theory must describe the present state of the universe. Correspondingly, I need an indicator for the present time after the Big Bang. For this I use that value of the Hubble constant, H_0(z=1090). (Equivalently, I could use the age of the universe as an input, of course.)
In your FIRST objection you also criticize that I use a definition of the Hubble constant that can describe the OBSERVATIONAL FACT of the HUBBLE TENSION.
(For that fact see e. g. Riess, Adam G. and others (2018): Type Ia Supernova Distances at Redshift > 1.5 from the Hubble Space Telescope Multi-Cycle Treasury Programs: The Early Expansion Rate. ApJ, 853, 126. Another reference is: Scolnic, D. M. and others (2018): The Complete Light-Curve Sample of Spectroscopically Confirmed Type Ia Supernovae from Pan-STARRS1 and Cosmological Constraints from the Combined Pantheon Sample. ApJ, 859, 101.)
The definition that I use is in full agreement with the usual theory.
(See e. g. Riess, Adam G. and others (2018) above. Another reference is Amendola, Luca (2021): Lecture notes: Cosmology. University of Heidelberg, http://www.thphys.uni-heidelberg.de/~amendola/teaching.html (downloaded 2021). Moreover, I explained it in my books; see below and see https://www.researchgate.net/publication/351093848_H0_20210424pdf).
In your SECOND objection you suggest my full quantum mechanical and relativistic theory would fail to predict SN1a distances.
My reply is as follows: My theory does not fail to predict these distances. I just did not calculate these distances yet for a good reason:
I tested my full theory by calculating the measured Hubble constants of the Hubble tension.
(See my new book https://www.researchgate.net/publication/350373240_Quanta_of_Spacetime_Explain_Observations_Dark_Energy_Graviton_and_Nonlocality, see the cover for instance. See also my books below and my H0_20210424 – material above.)
As a result, I find precise agreement with recent (2017-2021) observations of H0, including the SN1a values of H0.
I emphasize that this test is much more critical and meaningful than the test with the SN1a distance moduli (from 2011) that you suggested to me. Indeed, I already showed, for the case of my semiclassical theory, that it is easier to obtain agreement with the distance modulus values you suggest than with the Hubble tension values.
My reply to your THIRD objection is as follows:
Indeed, my theory predicts essential parameters. Everybody can read it in my DIGITAL OPEN SOURCE book (English Part):
You mentioned sigma_8. I did not publish any derivation of sigma_8 yet, as I clearly wrote in our discussion on April 17th, 2021.
If you are interested in more details of my theory, please read my books or papers.
Marco Pereira
Rutgers, The State University of New Jersey
Dear Hans-Otto,
Please let me make this point. When reviewing a theory, one should, if possible, take a step back (preferably outside the theory). That is necessary to properly evaluate the hypotheses used and the quality of the predictions or fits.
###########################################
PARAMETER INTERCHANGEABILITY
###########################################
You said: The definition that I use is in full agreement with the usual theory.
My answer: NO, if you consider L-CDM as the usual theory
###########################################
You, Amendola, and the gazillion theorists working on this problem wove a wide web of connected models and might have created a House of Cards. From my point of view, the data does not warrant that level of speculation, since I provided BETTER predictions or fits with just inertial motion and a better topology.
I am an outsider, and I can see that H0 is a short-distance construct (a linearization for small z). It comes directly from
$d(z) = \frac{c}{H_0}\int_0^z \frac{dz'}{E(z')}$
with
$E(z) = \sqrt{\Omega_r (1+z)^4 + \Omega_m (1+z)^3 + \Omega_K (1+z)^2 + \Omega_{\Lambda_c}}$
For the Friedmann model, E(z=0) is ONE.
That is a requirement; otherwise, instead of v = H0*d, you would have v = E(0)*H0*d.
Your model has E(0) = 1.12 (if I am correct).
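To make the point concrete, here is a small numerical illustration (my own sketch, not code from either theory): for small z the integral above gives d(z) ≈ c z / (E(0) H0), so an E(0) different from 1 rescales the slope of the Hubble Law. The E(z) shape below is a generic placeholder whose only relevant feature is E(0) = 1.12, the value quoted above.
```python
# Numerical illustration of the E(0) = 1 requirement: if E(0) = 1.12, the
# effective low-z Hubble slope is E(0)*H0, not H0. The E(z) shape is a generic
# placeholder; only E(0) = 1.12 (the value quoted above) matters here.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458
H0 = 67.36  # km/(s*Mpc)

def E_scaled(z, E0=1.12):
    """Placeholder E(z) with E(0) = E0 instead of 1."""
    return E0 * np.sqrt(0.315 * (1.0 + z) ** 3 + 0.685)

def distance(z):
    return (C_KM_S / H0) * quad(lambda zp: 1.0 / E_scaled(zp), 0.0, z)[0]  # Mpc

z = 0.01  # a nearby supernova
print(C_KM_S * z / distance(z))  # effective slope ~ E(0)*H0 ~ 75 km/(s*Mpc), not 67.36
```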
I still need to know where Omega_{Lambda,corr}(z) is defined. I tried to find it and could not.
I don't know who lifted the requirement that E(0) = 1, or when, and that is not important.
What is important is that it changes the definition of the Hubble parameter. Hence, you cannot compare H0 from Friedmann (L-CDM) fits with your predictions or fits.
If you insist that, because two models have parameters with the same name, those parameters can be used interchangeably, then you have to say that your model will always fail at short distances.
Compare that with my theory, which has only two parameters (H0 and a G-factor that expresses the epoch dependence of G).
Even when you consider Aristotle's precept, a model that has a single factor (the G-factor is predicted to within 11%) is preferable to a model that depends upon Omega_m, Omega_L, sigma_8, H0, and a very large number of assumptions (including epoch-dependent Dark Energy, dimensional transitions, a False Vacuum, etc.).
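One standard way to make this parameter-counting argument quantitative, which neither of us has used in this thread, is an information criterion such as the AIC. The sketch below uses invented chi-square values and parameter counts purely to show how the penalty works; these are not results from HU or from Dr. Carmesin's model.
```python
# Hedged illustration of parameter counting with the Akaike Information
# Criterion, AIC = chi^2 + 2k (lower is better). All numbers are invented
# placeholders, not fit results from either model.
def aic(chi2, n_params):
    return chi2 + 2 * n_params

print(aic(580.0, 2))  # hypothetical 2-parameter fit -> 584.0
print(aic(572.0, 7))  # hypothetical 7-parameter fit -> 586.0 (worse despite lower chi^2)
```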
###########################################
WHY HU IS BETTER
The theory also explains the creation of the Universe with just the Heisenberg Principle; no explosion is required. The Universe's expansion does not care about its contents.
In my theory, all Dark Matter observations are explained just with the topology (Lightspeed Expanding Hyperspherical Universe topology) and the receding galaxies are explained as INERTIAL MOTION in 4D (no forces are required).
In addition, having an epoch-dependent G is consistent with astronomical observations.
###########################################
WHY THE H0 TENSION TELLS YOU THAT THE MODELS ARE JUST BAD
In addition, the fact that different observations (SN1a, CMB, BAO) require different parameters and different values of H0 should be enough to tell you that the model is wrong.
###########################################
WHY CARMESIN MODEL IS PROBLEMATIC
Expanding space, a Multichromatic False Vacuum, and Epoch-dependent Dark Energy are not supported by anything other than the model itself.
###########################################
QUESTION:
I asked more than once whether your main model fails (as shown in my plot). I really need to know the answer to this question. That is important. If the model fails to predict distances, your whole theory has no support of any kind.
If I made a mistake, I should correct it, so please answer this question.
###########################################
###########################################
WHAT WOULD ARISTOTLE DO?
In other words, HU is cleaner, simpler, and consistent with observations and makes better and broader predictions.
###########################################
###########################################
With respect to your second answer: "In your SECOND objection you suggest my full quantum mechanical and relativistic theory would fail to predict SN1a distances. My reply is as follows: My theory does not fail to predict these distances. I just did not calculate these distances yet for a good reason: I tested my full theory by calculating the measured Hubble constants of the Hubble tension."
MP Answer: I calculated the distances and plotted the results, and the theory fails.
Your measured Hubble Constants are not properly comparable, since they do not share the same definition (the Hubble constant as defined by the Hubble Law).
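As a back-of-the-envelope illustration of why this matters: taking the E(0) = 1.12 figure quoted above at face value, the low-z relation v = E(0)*H0*d has an effective slope of about 1.12 * 67.36 ≈ 75.4 km/(s*Mpc) rather than the 67.36 km/(s*Mpc) that was put in, so quoting both numbers as "H0" compares two differently defined quantities.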
###########################################
With respect to the answer to the third question: You provided the wide web you wove. Instead of reading n books and m papers, I would rather focus on what supports the theory and on whether there is a simpler theory that can explain what your theory explains.
Best Regards,
Marco
PS - Where is Omega_{Lambda,corr}(z) defined? Please provide me with that answer. It is crucial for me to know whether I have properly implemented your full model.
Here are the Python notebooks:
Using your H0 of 67.36:
Showcasing my theory (HU):
Hans-Otto Carmesin
Universität Bremen
Dear Marco,
I remind you of what I offer to you and to each interested reader:
Based on quantum physics and general relativity, I derived a theory of quantum gravity.
As a test, I applied that theory to observation: for observed values such as mu_observed and H0_observed, I derived, using my theory, the corresponding values mu_theoretical and H0_theoretical. The comparison showed that these theoretical values are within the measurement errors of the corresponding observed values. I executed similar tests for many other observations, of course.
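For concreteness, here is a minimal sketch of the kind of within-measurement-error comparison described above; the observed values, errors, and theoretical predictions in it are hypothetical placeholders, not numbers taken from the books or papers.
```python
# Minimal sketch of a "theoretical value within the measurement error" test.
# All numbers are hypothetical placeholders, not values from the books.
import numpy as np

H0_observed = np.array([73.5, 67.4])   # e.g. a local and a CMB-inferred value
H0_error    = np.array([1.4, 0.5])     # corresponding 1-sigma measurement errors
H0_theory   = np.array([73.2, 67.5])   # model predictions for the same probes

print(np.abs(H0_theory - H0_observed) <= H0_error)  # True where prediction is within 1 sigma
```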
(I documented the whole procedure in my papers and books including supplementary resources, such as graphs and program codes. For instance see the following:
My OPEN SOURCE book
with the supplementary resources for distance moduli
and the program code for these distance moduli
moreover with the supplementary resources for H0 values
and the program code for these H0 values
See also my OPEN SOURCE book (English Part)
You can find further material in my profile at ResearchGate and in additional books.)
Thank you for your interest in my research.
Kind regards,
Hans-Otto
Marco Pereira
Rutgers, The State University of New Jersey
Dear Hans-Otto,
I thank you, on behalf of all the readers, for your books and wisdom.
I also derived Quantum Gravity and offered everyone these articles.
I remind you that my work has no parameters and that my prediction for the G-dependence of the Absolute Luminosity yielded a G-factor that was off by just 11% from the observed value.
My Quantum Gravity theory predicts the maximum density inside a Black Hole and creates Matter directly from deformed space.
Here is the maximum density inside a Black Hole:
I also predicted the position of Earth in the Hyperspherical Universe and replicated the CMB observations (together with the spherical harmonic spectral decomposition). I did that using interdimensional hyperspherical harmonic spectral decomposition, after a grid search for the best location. Here is the grid search:
Here is Planck's CMB observation:
and here is the hyperspherical harmonic acoustic spectral simulation of the same:
at Earth's position:
χ= 339.46 degrees
θ = 341.1 degrees
ϕ= 104.08 degrees
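For readers curious what a grid search over these three angles can look like in practice, here is a generic sketch. The scoring function is a dummy stand-in (it simply peaks at the position quoted above so that the sketch runs); in the real analysis it would be whatever comparison is made between the simulated hyperspherical harmonic spectrum and Planck's observed spectrum.
```python
# Generic grid-search sketch over the three position angles quoted above.
# `score` is a dummy stand-in so the sketch runs; the real scoring function
# would compare the simulated spectrum at (chi, theta, phi) with Planck's.
import itertools
import numpy as np

TARGET = np.array([339.46, 341.1, 104.08])  # the position quoted above, in degrees

def score(chi, theta, phi):
    # Dummy placeholder: peaks at TARGET. Replace with the real comparison.
    return -float(np.sum((np.array([chi, theta, phi]) - TARGET) ** 2))

def grid_search(step_deg=5.0):
    angles = np.arange(0.0, 360.0, step_deg)
    best_pos, best_score = None, -np.inf
    for chi, theta, phi in itertools.product(angles, repeat=3):
        s = score(chi, theta, phi)
        if s > best_score:
            best_pos, best_score = (chi, theta, phi), s
    return best_pos

print(grid_search())  # -> (340.0, 340.0, 105.0) with a 5-degree grid
```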
More details here:
Here is the Equation of State of the Universe:
Here is the 3D Map of the Observable Universe:
CENSORSHIP
My theory has been published since 2007 and it has been censored at Los Alamos archives and mainstream journals (including the one where Dr. Amendola is the editor)!
You have your voice. You are allowed to publish your work. I am not.
I have a story to tell, one that is distinct from the story you tell and that everyone wants to hear.
Can Scientists handle that? Science should be able to do so.
I would like you to offer to be my endorser at Los Alamos Archives.
Best Regards,
Marco Pereira
PS- Please confirm that your theory failed to predict the SN1a distances and please provide me with its E(z).
When reading my work, please disregard the SDSS data analysis. I retracted that part. I have no problem accepting my mistakes when I recognize them as mistakes.

Similar questions and discussions

Why does the academic community keep censoring a better explanation for the Supernova Cosmology Data?
Question
1 answer
  • Marco Pereira
I created a simple model for the Universe. A theory that replaces Relativity and explains the Universe without the need for Dark Energy, Dark Matter, the Higgs Mechanism for Mass creation, the Big Bang, etc.
The Hypergeometrical Universe Theory (HU) has three hypotheses:
  • The Universe is a Lightspeed Expanding Hyperspherical Hypersurface
  • Particles are polymers of the Fundamental Dilator (FD). FDs are coherences between stationary states of deformation of space. In other words, HU's Universe contains only space, deformed space, and time. HU also provides a replacement for the Big Bang Model called The Big Pop Cosmogenesis (don't confuse it with a copycat - the Big Flash...:) Plagiarizers, copycats abound... but they are always crummy copies since the plagiarizers don't copy the whole theory.
  • FDs obey the Quantum Lagrangian Principle (QLP). QLP states that FDs travel in a 4D spatial manifold without doing work. This requires them to dilate space in phase with the local dilaton field. In other words, they add their contribution to traveling metric fluctuations coherently, and that permits Gravitation and Electromagnetism to be extensive properties of matter.
QLP is the basis for Quantum Mechanics of Material Systems. I qualified "Material Systems" because SPACE itself is quantized. The two phases involved in the FD coherence are the proton and the electron phases. FDs are actually 4D constructs, shapeshifting deformations of space that also spin in 4D space while traveling along the radial direction at the speed of light.
I make it tempting to plagiarizers by giving them a link to a spreadsheet where the Supernova Data is modeled with only two parameters (and a topology, of course).
There, you can find the SN1a data and their distances calculated from the Distance Modulus. You will also find the correction of the distances by the G-dependence of the Absolute Luminosity of SN1a (I calculated it). That results in a correction of the SN1a Distances by G^{-1.66}.
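For readers following along without the spreadsheet, here is a minimal sketch of the two steps just described: converting a distance modulus to a distance and then rescaling that distance by the stated G^{-1.66} factor. The example moduli and G ratios below are invented placeholders, not values from the spreadsheet or from HU's actual epoch dependence of G.
```python
# Minimal sketch of the two steps described above. The moduli and G ratios are
# invented placeholders; only the mu -> distance conversion and the stated
# G^(-1.66) rescaling come from the text.
import numpy as np

def mu_to_distance_mpc(mu):
    """Distance modulus to luminosity distance in Mpc: mu = 5*log10(d / 10 pc)."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

mu      = np.array([35.0, 40.0, 44.0])  # example SN1a distance moduli
g_ratio = np.array([1.02, 1.10, 1.30])  # placeholder G(emission) / G(today) values

d_raw       = mu_to_distance_mpc(mu)
d_corrected = d_raw * g_ratio ** -1.66   # the stated correction of the distances
print(d_raw)
print(d_corrected)
```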
If you have trouble understanding, please feel free to ask questions.
Here is a Quora answer with some details
So, this is a model that explains the Big Bang:
The Big Pop Cosmogenesis - replacement to the Big Bang
Propagate the Universe equation of state up to the current epoch:
Big Pop Article
I also modeled the Neutronium Acoustic Oscillations to recover the "4D Sound" created by the Blackholium-Neutronium phase transition. I was able to find the location of Earth within a Hyperspherical Universe and predict the Galaxy Density Distribution across the Observable Universe:
Here, I created a map for the observable and unobservable Universe and located Earth on it:
Here, is how I created the map of the Hyperspherical Universe from the knowledge obtained by the Planck Satellite:
3D galaxy density map of the current universe:
This is not a small amount of information, and it has been blocked by the community since 2006. By that I mean that not a single scientist reached out and offered to fight censorship at the Los Alamos Archives or to be a reviewer at a visible journal.
If you disagree, you are welcome to be my endorser, reviewer... etc.
Of course, my theory also explains the early formation of galaxies, the spiral galaxy rotation curve conundrum, weak gravitational lensing results, and predicts the Universe Dimensionality Probability Distribution:
Check the spreadsheet. Learn that a two-parameter model is better than a 7-parameter model.
NO.4 How do light and particles know that they are choosing the shortest path?
Discussion
19 replies
  • Chian Fan
Mach said [1] of the principles of minimum xxxx: are they the purpose of nature?
Born said in his "Physics in My Generation" [2] that, while it is understandable that a particle chooses the straightest path to travel at a given moment, we cannot understand how it can quickly compare all possible motions to reach a point and pick the shortest path, a question that strikes one as too metaphysical.
Speaking of the Hamiltonian principle and the minimum light path, Schrödinger recognized the wonder of this problem [3]: Admittedly, the Hamilton principle does not say exactly that the mass point chooses the quickest way, but it does say something so similar - the analogy with the principle of the shortest travelling time of light is so close, that one was faced with a puzzle. It seemed as if Nature had realized one and the same law twice by entirely different means: first in the case of light, by means of a fairly obvious play of rays; and again in the case of the mass points, which was anything but obvious, unless somehow wave nature were to be attributed to them also. And this, it seemed impossible to do. Because the "mass points" on which the laws of mechanics had really been confirmed experimentally at that time were only the large, visible, sometimes very large bodies, the planets, for which a thing like "wave nature" appeared to be out of the question.
Feynman devoted a chapter to the principle of least action in his "Lectures on Physics" [4]. It discusses how the motion of particles in optics, classical mechanics, and quantum mechanics can follow the shortest path. He argues that light "detects" the shortest path by phase superposition, but when a baffle with a slit is placed on the path, the light cannot check all the paths and therefore cannot calculate which path to take, and the phenomenon of diffraction of light occurs. Here, Feynman treated the path of light in two parts, before and after the diffraction occurs. If we take a single photon as an example, then before diffraction he considered that the photon travels along the normal geometric-optics path, choosing the shortest path. After diffraction occurs, the photon loses its ability to "find" the shortest path and takes different paths to the diffraction screen, with different possibilities. This leads to the concept of probability amplitude in quantum mechanics.
To explain why light and particles can choose the "shortest path", the only logical point of view should be that light and particles do not look for the shortest path, but create it and define it, whether in flat or curved spacetime. Therefore, we should think about what light and particles must be based on, or what they must be, in order to be able to define the shortest paths directly through themselves in accordance with physics.
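A small numerical sketch (mine, not Feynman's own calculation) of the phase-superposition picture in the paragraphs above: summing exp(i*phase) over a one-parameter family of kinked paths between two fixed points shows that paths close to the straight line carry essentially all of the amplitude, while contributions from paths far from it oscillate rapidly and cancel among themselves. All numbers (wavelength, geometry, offsets) are illustrative choices.
```python
# Sum exp(i*phase) over paths that go from A to B via a sideways kink of size x.
# Paths far from the straight line have rapidly varying phase and cancel; the
# first Fresnel zone (|x| up to ~sqrt(lambda*L)/2) dominates the total amplitude.
import numpy as np

wavelength = 0.5e-6                            # e.g. 500 nm light
leg = 0.5                                      # A and B are 1 m apart, kink at midpoint
offsets = np.linspace(-0.01, 0.01, 200001)     # sideways kink offsets, in metres
lengths = 2.0 * np.hypot(leg, offsets)         # total path length A -> kink -> B
amplitudes = np.exp(1j * 2.0 * np.pi * lengths / wavelength)

total = amplitudes.sum()
near  = amplitudes[np.abs(offsets) < 3.5e-4].sum()  # paths within ~0.35 mm of straight
print(abs(near) / abs(total))  # order 1: the near-straight paths carry the amplitude
```
Widening the cutoff used for `near` barely changes the ratio, because the additional, more distant paths largely cancel in pairs; that is the sense in which the straight path dominates without any path being "inspected" individually.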
[1] Mach, E. Popular Scientific Lectures.
[2] Born, M. (1968). Physics in My Generation. Springer.
[3] Schrödinger, E. (1933). "The Fundamental Idea of Wave Mechanics." Nobel lecture.
[4] Feynman, R. P. (2005). The Feynman Lectures on Physics (II), Chinese edition.
Keywords: light, Fermat's principle of least time, Hamilton's principle, Feynman path integral, Axiomatic
