Science topic
Model Theory - Science topic
Explore the latest questions and answers in Model Theory, and find Model Theory experts.
Questions related to Model Theory
The SPIRAL cosmological redshift hypothesis and cosmology model does not, in theory, require hyper-dense proto-stellar formation, only that the proto-stars existed by the cosmic inflation expansion event.
However, from my limited perspective, the natural observations align best with proto-stellar formation being hyper-dense, with a proportional expansion during a/the cosmic inflation expansion event, as hypothesized in SPIRAL.
So assume SPIRAL and hyper-dense proto-stellar formation.
How could the scientific process have worked if this is the actuality?
What factual natural observations appear to align with this?
What, if any, factual natural observations might preclude this?

This is a question about Gödel numbering. As I understand it, the axioms of a system are mapped to a set of composite numbers. Is this really the case, so that, for example, the 5 axioms of Euclidean plane geometry are mapped to 5 composite numbers? Does this also imply that theorems of the system are now composite numbers that depend on the composite numbers that were the targets of the map from the set of axioms, PLUS the elementary numbers that encode the logical operations, such as +, if..then, "there exists", etc.?
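In the usual arithmetization, each symbol of the formal language is assigned a code number, and a formula (a symbol string) s1…sn is encoded as 2^c(s1) · 3^c(s2) · 5^c(s3) · …, so every formula of length at least 2 — axiom or theorem alike — becomes a composite number. Theorems are not built arithmetically out of the axioms' numbers; rather, a proof (a sequence of formulas) is encoded by applying the same prime-power construction again to the formulas' numbers. A minimal sketch (the symbol table and sample formula are hypothetical choices, not Gödel's original assignment):

```python
def primes(n):
    """First n primes by trial division (fine for short formulas)."""
    ps = []
    cand = 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

# Hypothetical symbol codes -- any injective assignment works.
SYMBOLS = {'0': 1, 'S': 3, '=': 5, '+': 7, '(': 9, ')': 11, 'x': 13}

def godel_number(formula):
    """Encode a symbol string as 2^c1 * 3^c2 * 5^c3 * ... (composite for length >= 2)."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** SYMBOLS[sym]
    return n

def decode(n):
    """Recover the symbol string by reading off the prime-power exponents."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out, p = [], 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(inv[e])
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return ''.join(out)

g = godel_number('S0=x')
print(g, '->', decode(g))
```

The key point is that the encoding is invertible by unique factorization, so statements about formulas and proofs become statements about numbers.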
In the 21st century we can safely say that virtually all modern achievements in science rest on the successes of modelling theory, from which practical recommendations useful in physics, technology, biology, sociology, etc. are extracted. Moreover, applying the principles of measurement theory to the determination of the fundamental constants allows us to check the consistency and correctness of the basic physical theories. In addition, the quantitative predictions of the main physical theories depend on the numerical values of the constants entering those theories: each new significant digit can reveal a previously unknown inconsistency or, conversely, eliminate an existing inconsistency in our description of the physical world. At the same time, scientists have come to a clear understanding of the limits on how high a measurement accuracy can be achieved.
The very act of measurement already presupposes a physical and mathematical model of the phenomenon under study. Measurement theory focuses on the experimental determination of values using special equipment called measuring instruments. It covers only the analysis of data and the procedure for measuring the observed quantity — i.e., the stages after a mathematical model has been formulated. Thus the uncertainty that exists before experimental or computer simulation, caused by the limited number of quantities included in the mathematical model, is usually ignored in measurement theory.
I was wondering whether there is any
- model theory of number theory; hence, are there model theorists working in number theory?
- connection between the development of arithmetic geometry and questions in logic, and is there any group studying this interaction?
- Anyone is welcome and up for collaboration.
- I am interested in finding interactions between algebraic and arithmetic number theory and logic, and in studying them to answer logical questions about arithmetic.
Hi, is anyone familiar with a model/theory of memory recall and recognition in math education? Thank you
I would appreciate it if you could suggest a model/theory that would best fit an investigation of the availability of government information and e-services and their impact on users' economic benefits/growth, other than SERVQUAL/TAM. Your suggestions and any relevant resources would be a great contribution.
I want to investigate whether the delivery of government information and e-services has any effect on rural people's economic growth/benefits in the context of least developed/developing countries. Any models/theories/papers to recommend, please?
I have behavioral data (feeding latency) which is the dependent variable. There are 4 populations from which the behavioral data is collected. So population becomes a random effect. I have various environmental parameters like dissolved oxygen, water velocity, temperature, fish diversity index, habitat complexity etc. as the independent variables (continuous). I want to see which of these variables or combination of variables will have significant effect on the behavior.
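The standard tool for this design is a linear mixed model — e.g. lme4::lmer in R or statsmodels MixedLM in Python — with population as a random intercept and the environmental variables as fixed effects, comparing candidate models by AIC/BIC or likelihood-ratio tests. As a rough, self-contained sketch of the variable-selection step only (synthetic data, hypothetical variable names, and no random effect — a real analysis must keep population in the model):

```python
import itertools
import math
import random

random.seed(0)
n = 200
# Synthetic environmental covariates (all names and values hypothetical).
X = {
    "dissolved_oxygen": [random.gauss(8, 1) for _ in range(n)],
    "water_velocity":   [random.gauss(0.5, 0.1) for _ in range(n)],
    "temperature":      [random.gauss(20, 3) for _ in range(n)],
}
# In this toy data set, latency truly depends on oxygen and temperature.
y = [10 - 0.8 * X["dissolved_oxygen"][i] + 0.3 * X["temperature"][i]
     + random.gauss(0, 0.5) for i in range(n)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (normal equations)."""
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, m + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][m] / M[i][i] for i in range(m)]

def aic_ols(cols):
    """AIC of a Gaussian OLS fit with intercept + the given predictors."""
    rows = [[1.0] + [X[c][i] for c in cols] for i in range(n)]
    p = len(cols) + 1
    XtX = [[sum(r[a] * r[b] for r in rows) for b in range(p)] for a in range(p)]
    Xty = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    rss = sum((y[i] - sum(beta[a] * rows[i][a] for a in range(p))) ** 2
              for i in range(n))
    k = p + 1                       # coefficients + error variance
    loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * k - 2 * loglik

best = min(
    (s for r in range(1, len(X) + 1) for s in itertools.combinations(X, r)),
    key=aic_ols,
)
print("lowest-AIC predictor set:", best)
```

With the population random effect included, the same comparison would be done on the mixed-model likelihoods (fit by maximum likelihood, not REML, when comparing fixed-effect structures).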
What are the key models and theories used to assess CPD for teachers?
Dear all,
I am looking for a model or theory which deals with general dimensions of technology (for example a functional dimension, a technical dimension, an economic dimension).
Does anyone have recommendations?
Thank you very much for all suggestions and help!
Best regards from Germany,
Katharina Dröge
Hi,
I've heard that TA (thematic analysis) can be used to generate a model/theory from the data, similar to grounded theory, but none of the TA sources I've gone through states anything related to this claim. Does anyone know whether the claim is true? If so, are there any sources that explain the process in detail or to any degree?
Thanks!
- Tez
The independence property was defined by Shelah. I am looking for other versions or generalizations of the definition in which some of the conditions are changed.
Main reference (p.316, Def. 4.1):
Shelah, S. "Stability, the f.c.p. and superstability." Ann. Math. Logic 3, 271–362 (1971)
Another reference: (First Def. in introduction)
Gurevich, Yuri, and Peter H. Schmitt. "The theory of ordered abelian groups does not have the independence property." Transactions of the American Mathematical Society 284.1 (1984): 171-182.
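For orientation, here is the definition in the form it is usually quoted nowadays (equivalent to Shelah's Def. 4.1): a formula has the independence property in a theory T when a single family of parameters realizes every trace on a countable index set.

```latex
\varphi(\bar{x};\bar{y}) \text{ has IP in } T \iff
\text{in some } M \models T \text{ there are } (\bar{a}_i)_{i<\omega}
\text{ such that for every } S \subseteq \omega \text{ there is } \bar{b}_S
\text{ with } M \models \varphi(\bar{a}_i;\bar{b}_S) \iff i \in S.
```

T has the independence property if some formula does; variants in the literature change the index structure or the cardinality of the realized traces.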
I am interested in published guidance or worked examples on how to analyse and synthesize existing theories, models and frameworks.
From the literature review, a model has been developed that offers a new mechanism for poverty alleviation. I want to use a grounded theory methodology to establish a general theory by interviewing the stakeholders of the model.
I would appreciate it if anyone could explain which grounded theory methods I should adopt in this case.
Psihologija (www.psihologijajournal.org.rs) is a scholarly open access, no fee, peer-reviewed journal published quarterly. It is currently referenced in the Social Sciences Citation Index (SSCI).
As a journal mainly focusing on psychology, neuroscience and psychiatry, Psihologija calls for papers related to all aspects of Internet, digital media, smartphone and other technology use that could lead to potentially detrimental mental health effects. We welcome original research and review articles about specific models and theories, definition, classification, assessment, epidemiology, co-morbidity and treatment options, focusing mainly on, although not limited to:
· Internet gaming
· Internet gambling
· Excessive social media/networks use
· Online dating, cyber-relationships/sex and pornography
· Excessive online information collection
· Cyberbullying
· Smartphones, tablets and other technology use.
The Theory Wiki at IS.TheorizeIt.Org gets over 200,000 visits annually, but is due for a bit of an update. If you publish on this theory, we would love your updates.
Kai :-)
A model or theory that will guide my MPhil thesis. The topic is "lived experiences of parents of children with cancer".
The basic theory of this model is the theory of social behavior.
"I don't know about the innovation, so I don't use it; if I knew about it, I would use the innovation."
I've been getting that quite a lot when doing my research. My question: is there any theory about lack of information in technology adoption? Please let me know if there is any journal or book that I can read.
thanks
Population Simulation Problem: I need someone to collaborate in writing a population simulation program incorporating kinship rules of endogamy/exogamy, mating rules, territory and carrying capacity.
I am an evolutionary sociologist and author of a new multi-species population theory, which I can demonstrate in diagrams (also published in books). I want to have it expressed mathematically but need help for this.
Then I want help to model this theory, simulating populations, to show how different kinship and marriage rules, responsive to local environmental feedback, produce different fertility opportunities, beyond the predator/prey/density models, but I cannot write programs.
So, I need someone to add to an extant population simulation program or write a new one, so that I will be able to run variations myself. I would acknowledge the program writer as a contributor to the theory.
This could be a good new thesis basis for a program writer and life-sciences grad. Our work would need to be confidential prior to publication. Maybe more than one person could be involved. I welcome any advice as to what would be necessary.
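Not a substitute for a collaborator, but as a sketch of what such a program might minimally look like: a toy generation loop with a territory carrying capacity and a clan-based exogamy rule. Every parameter, rule, and name below is an illustrative placeholder, not the published theory:

```python
import random
from collections import Counter

random.seed(1)

# Illustrative placeholders -- the real model's rules would replace these.
K = 500          # carrying capacity of the territory
N_CLANS = 4      # kinship groups
EXOGAMY = True   # if True, a mate must belong to a different clan
BIRTH_P = 0.5    # per-capita reproduction chance at zero crowding
DEATH_P = 0.1    # baseline per-capita death chance

def step(pop):
    """One generation of a toy clan-structured population (clan ids only)."""
    crowding = len(pop) / K
    # Density-dependent mortality.
    pop = [clan for clan in pop
           if random.random() > DEATH_P * (1 + 3 * crowding)]
    counts = Counter(pop)
    births = []
    for clan in pop:
        # Under exogamy a partner must exist outside one's own clan.
        partners = len(pop) - counts[clan] if EXOGAMY else len(pop) - 1
        if partners > 0 and random.random() < BIRTH_P * (1 - crowding):
            births.append(clan)   # offspring inherits the clan (toy rule)
    return pop + births

pop = [i % N_CLANS for i in range(40)]
for _ in range(100):
    pop = step(pop)
print("final size:", len(pop), "by clan:", Counter(pop))
```

Swapping the exogamy flag, clan count, or feedback terms lets you compare equilibrium sizes and clan compositions across rule sets, which is the kind of variation run the question describes.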
I am looking for recent subjects in the area of using Markov chains in queueing models or theory for the thesis of a master student in mathematics.
Thanks a lot in advance.
Mohamed I Riffi
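One concrete entry point at master's level: the M/M/1 queue, whose queue length is a continuous-time Markov chain with birth rate λ and death rate μ, and whose stationary mean is L = ρ/(1−ρ) with ρ = λ/μ < 1. A quick simulation sketch (rates chosen arbitrarily) comparing the time-average queue length to the formula:

```python
import random

random.seed(42)

lam, mu = 1.0, 2.0            # arrival and service rates; rho = 0.5
rho = lam / mu

# Simulate the queue-length CTMC directly: in state n, jump up at rate
# lam, down (if n > 0) at rate mu; time-average the state over holding times.
n, t_total, area = 0, 0.0, 0.0
for _ in range(200_000):
    rate = lam + (mu if n > 0 else 0.0)
    dt = random.expovariate(rate)  # exponential holding time in state n
    area += n * dt
    t_total += dt
    if random.random() < lam / rate:
        n += 1
    else:
        n -= 1

L_sim = area / t_total
L_theory = rho / (1 - rho)    # stationary mean queue length
print(f"simulated L = {L_sim:.3f}, theoretical L = {L_theory:.3f}")
```

Thesis-sized extensions of exactly this chain include M/M/c, queues with vacations or catastrophes, and matrix-analytic (quasi-birth-death) models.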
What do you think is the best way of managing toxic employees in an organisation? Kindly share your thoughts, recommended approaches, models, theories, etc.
I am trying to simulate diesel spray injection in a constant-volume chamber, using a 2D axisymmetric model with DPM. I understand from the atomizer model theory that the injection pressure is needed to calculate the injection velocity, but I am not able to specify it. Additional information about defining the conditions of the continuous phase in DPM would also be helpful. Thanks in advance.
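If it helps while you locate the setting: plain-orifice atomizer models commonly obtain the injection velocity from the pressure drop via a Bernoulli-type relation, v = C_d · sqrt(2Δp/ρ_l); please check the Fluent DPM theory guide for the exact form your atomizer model uses. A sketch with hypothetical diesel numbers:

```python
from math import sqrt

# Hypothetical diesel injection conditions (placeholders, not your case).
delta_p = 100e6      # injection pressure drop [Pa] (~1000 bar rail)
rho_l = 830.0        # liquid diesel density [kg/m^3]
Cd = 0.7             # discharge coefficient (assumed)

# Bernoulli-type estimate of injection velocity at the nozzle exit.
v_inj = Cd * sqrt(2 * delta_p / rho_l)
print(f"injection velocity ~ {v_inj:.0f} m/s")
```

Computing the velocity this way and supplying it directly as the particle injection velocity is a common workaround when the pressure field cannot be entered in the injection panel.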
Please suggest a model that will direct me in writing this project more persistently and clearly.
When we talk about human comfort, we usually think of thermal comfort, and many good models for individual thermal comfort have been proposed.
So my confusion is: "How can individual olfactory comfort be measured?" Could you point me to models or theories related to it?
Or what factors can affect individual olfactory comfort?
To start into the topic, let us consider two alternatives. If we think that the photon dies out when absorbed, then there is not much to talk about. However, if we consider the second alternative, let us point out that in losing its energy the photon becomes unobservable, since our senses, as well as all our apparatus, need an energy transfer to achieve any detection.
So, if photons do survive after being absorbed, they thus become ghost photons, i.e. invisible. Evidently this is problematic. But let us not dismiss it so fast.
Let us make an imperfect analogy between a photon and a spring. If the spring vibrates it has an oscillatory energy. If it transfers its oscillatory energy to an external material it loses its energy, but the spring is still there; it has not disappeared. Well, if you see the photon as an oscillator, then the analogy makes some sense.
Let us now address a still more controversial issue. Suppose that if the spring is not stressed it has no strain mass. But if it is vibrating, it then has just energy without having mass, and this analogically applies to the photon.
Now let us consider the case of a stressed spring that is vibrating. It then has both mass and energy. Again, analogically, this applies to massive elementary particles.
Why should we appeal to very complicated models and theories? Is it really worth it?
Those interested in this viewpoint and willing to go deeper into this issue may read the paper: “Space, this great unknown”, available at: https://www.researchgate.net/publication/301585930_Space_this_great_unknown
I am trying to implement different MPPT algorithms for a PV model using Simulink. I am somewhat lost, even though the whole system is running; I am trying to test different algorithms and compare the results. Could anyone help by showing a Simulink model or the theory behind it?
thanks
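As a sketch of the theory behind the most common algorithm: perturb and observe (P&O) steps the operating voltage, keeps the direction while power rises, and reverses it when power falls. Below is a minimal Python version against a toy single-diode PV curve (all constants are hypothetical); the same logic would go into a MATLAB Function block in Simulink:

```python
from math import exp

# Toy PV source: I = Isc - I0*(exp(V/Vt) - 1) (single diode, no series R;
# all parameter values are illustrative, not a real panel).
Isc, I0, Vt = 8.0, 1e-9, 1.0

def power(v):
    i = Isc - I0 * (exp(v / Vt) - 1)
    return v * max(i, 0.0)

# Perturb & Observe: keep stepping in the same direction while power
# increases; reverse the perturbation when power decreases.
v, dv = 10.0, 0.05
p_prev = power(v)
for _ in range(2000):
    v += dv
    p = power(v)
    if p < p_prev:
        dv = -dv          # overshot the peak: reverse direction
    p_prev = p

# Brute-force maximum power point for comparison.
v_true = max((0.01 * k for k in range(1, 3000)), key=power)
print(f"P&O settled at V = {v:.2f} V, brute force V = {v_true:.2f} V")
```

The characteristic steady-state oscillation around the maximum power point (amplitude set by the perturbation step dv) is visible here, and is the main trade-off P&O variants such as incremental conductance try to reduce.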
Study Related Material and some related research papers.
I have to give a presentation on a model that is governed by distributed theory, emphasizing the real-world implications of the model and the theory or theories that govern it.
Are there any models or theories used to identify the factors influencing counseling theoretical orientation?
I was looking for examples of first-order sentences written in the language of fields that are true in Q (the field of rational numbers) and in C (the field of complex numbers) but false in R (the field of real numbers). I found the following recipe for constructing such sentences. Let a be a statement true in C but false in R, and let b be a statement true in Q but false in R. Then the statement z = a \/ b is of course true in Q and C, but false in R.
Using this method, I found the following z:=
(Ex x^2 = 2) ---> (Au Ev v^2 = u)
which formulated in English reads: "If 2 has a square root in the field, then every element of the field has a square root in the field." Of course, in Q the premise is false, so the implication is true. In C both premise and conclusion are true, so the implication is true. In R the premise is true and the conclusion false, so the implication is false. Bingo.
However, this example is just constructed and does not really contain much mathematical enlightenment. Do you know more interesting and more substantial (natural) examples, from both the logical and the algebraic points of view?
The Baier and Katoen textbook references this paper
- E. M. Clarke and I. A. Draghicescu. Expressibility results for linear-time and branching-time logics, pages 428–437. Springer Berlin Heidelberg, Berlin, Heidelberg, 1989.
to say that, given a CTL formula ϕ, if there exists an equivalent LTL one, it can be obtained by dropping all branch quantifiers (i.e. A and E) from ϕ.
The equivalence definition (from Baier & Katoen): CTL formula Φ and LTL formula φ (both over AP) are equivalent, denoted Φ ≡ φ, if for any transition system TS over AP:
TS |= Φ if and only if TS |= φ.
(Satisfaction |= is CTL or LTL respectively.)
Is there a syntactic criterion that provides a guarantee that if a CTL formula passes the test, then an equivalent LTL formula does exist?
Please note: just dropping all branch quantifiers is not enough. For an example, consider 'AF AG p', where p is an atomic predicate. The LTL formula 'F G p' obtained by dropping the branch quantifiers is NOT equivalent to 'AF AG p', since 'F G p' is known not to be expressible in CTL. The question is whether there is a way (sufficient, but not necessary, is OK) of looking at a CTL formula and deciding that it does have an equivalent LTL one.
I am emphasizing the need for a syntactic criterion, as opposed to the semantic one: drop the branch quantifiers and check the equivalence with the resulting formula. Something along the lines of: if, after pushing negations down to atomic predicates, all branch quantifiers are universal (A) and <some additional requirement>, then the formula has an equivalent LTL one (which, necessarily, can be obtained by dropping the branch quantifiers).
An additional requirement (or a totally different criterion) is necessary -- see the `AF AG p`.
Same question on CS Theory Stack Exchange (see the link)
What is the scientific basis, and which accepted modelling theories exist, for scientifically validating projections of climate change (CC) data and analysing their uncertainty?
Can you propose a model (level of theory) and basis set to be used in computational studies of cyclic carbenes (cyclopropene carbene)? Is it okay to apply the same one to both singlet and triplet states?
Thanks in advance.
Regards
Renjith Thomas
I would like to know a good text on non-standard models of Peano arithmetic, and also any articles about them. Thanks.
1. A surfactant molecule is made of a water-loving head and a grease-loving tail (Figure 1). My question: how do we measure the cross-sectional area of the alkyl chain of a surfactant? Do we measure it vertically (refer to the GREEN DOUBLE ARROWS of Figure 1) or horizontally (refer to the RED DOUBLE ARROWS of Figure 1)? Or do we just take the "theoretical value" of the alkyl chain from the literature (estimated 20-25 Å²) [1]?
2. I have read a paper entitled "New Adsorption Model - Theory, Phenomena and New Concept" by Shibata et al. [2]. One of the sentences in para 3, page 2, states (I quote): "The important finding is that molecular surface area is less than the cross-sectional area of the alkyl chain for C16E8 and C18E8. Such small molecular surface areas strongly suggest that Gibbs adsorption just at air/water interface in an adequate. This is one of contradictions for the Gibbs adsorption."
Why is it said that, when the molecular surface area is less than the cross-sectional area of the alkyl chain, this contradicts the Gibbs adsorption?
Could contradiction play a role in quantum systems, as part of the mechanism of measurement, forcing a single random outcome from the spectrum of possibilities?
All ideas are welcome, including outrageous ones.
In arithmetics or algebras that cannot be completed, if a statement is logically independent of the axioms, is it also mathematically undecidable? Are these concepts identical?
Hello,
I am trying to write a chapter for my thesis about the most important models of economic growth, but somehow I can not figure out how to make a proper classification.
So far, I have this structure:
1. The classical theory of economic growth
a. Adam Smith theories
b. David Ricardo theories
c. Robert Malthus theories
2. Keynesian theory of economic growth
3. Post-Keynesian theories of economic growth
a. Harrod-Domar model
4. Neoclassical theories of economic growth
a. Solow-Swan model
b. Ramsey-Cass-Koopmans model
5. New theories of economic growth (endogenous models)
a. Romer
b. Lucas
I fear that this classification is wrong and that I am not looking at the primary models/theories of economic growth. Can anybody guide me?
Kind regards,
Stefan
I know that there is a model M of ZF such that, for an uncountable set S in this model and for every collection $\{ (X_s, d_s): s\in S\}$ of metric spaces in this model, their product $\prod_{s\in S} X_s$ in M is metrizable in M. In particular, for an uncountable set S in M, the product $\mathbb{R}^S$ is metrizable; however, I have not found this result in the literature so far. I would be grateful if you could tell me whether you have located it in the literature. If your answer is YES, please tell me where I can find it. I know how to prove the result.
Due to the specific way in which higher education is organized, any course of studies promotes the development of competences which might seem, at first sight, beyond the grasp of the didactics of higher education.
I'm working on a model which depicts the development of students' competency at the intersection of the didactics and the organisation of higher education (see attached file).
There are many different theories for modelling, and I'm not sure which one is best to choose. Thank you!

In the semialgebraic context, Delfs and Knebusch defined in 1985 their "locally semialgebraic spaces" and later (Knebusch alone) "weakly semialgebraic spaces" as certain infinite gluings of semialgebraic spaces. But the majority of model theory seems to be carried out in M^n, where (M, ...) is a structure (a kind of "affine" situation).
Do model theorists need to pass to infinite gluings from time to time?
Please, can anyone help me with theories, or suggest articles, that would help me understand why most countries used a merger (amalgamation) system of banks as a suitable banking-sector reform? I am trying to synthesise the models used by my country's former Central Bank governor (Charles C. Soludo) in the banking reform.
I. On Fri Oct 5 23:35:03 EDT 2007 Finnur Larusson wrote
I confirm that Larusson's "proof", under corrections, can be formalized in ZFC.
II. On Sun Oct 7 14:12:37 EDT 2007 Timothy Y. Chow wrote:
"In order to deduce "ZFC is inconsistent" from "ZFC |- ~con(ZFC)" one needs
something more than the consistency of ZFC, e.g., that ZFC has an
omega-model (i.e., a model in which the integers are the standard
integers).
To put it another way, why should we "believe" a statement just because
there's a ZFC-proof of it? It's clear that if ZFC is inconsistent, then
we *won't* "believe" ZFC-proofs. What's slightly more subtle is that the
mere consistency of ZFC isn't quite enough to get us to believe
arithmetical theorems of ZFC; we must also believe that these arithmetical
theorems are asserting something about the standard naturals. It is
"conceivable" that ZFC might be consistent but that the only models it has
are those in which the integers are nonstandard, in which case we might
not "believe" an arithmetical statement such as "ZFC is inconsistent" even
if there is a ZFC-proof of it.
So you need to replace your initial statement that "we assume throughout
that ZFC is consistent" to "we assume throughout that ZFC has an
omega-model"; then you should see that your "paradox" dissipates.".
J. Foukzon, Remark 1: Let Mst be an omega-model of ZFC and let ZFC[Mst] be ZFC with quantifiers bounded to the model Mst. Then it is easy to see that Larusson's "paradox" is valid inside ZFC[Mst].
III. On Wed Oct 10 14:12:46 EDT 2007 Richard Heck wrote:
Or, more directly, what you need is reflection for ZFC: Bew_{ZFC}(A) -->
A. And that of course is not available in ZFC, by Löb's theorem.
J. Foukzon, Remark 2: However, such reflection is available in ZFC[Mst] by the standard interpretation of Bew_{ZFC}(A) in the omega-model Mst.
Is it possible to include repeated measures (a permanent environment effect) inside the age (time) classes when using random regression models?
The idea is to maintain the length of the "time class" intervals, so that I won't lose too much information in not-so-big data files.
Does anyone have experience with that?
Does it even have any support in random regression model theory?
Thanks in advance.
Looking for a model/theory/framework/ classic paper or systematic review that provides an overview on what factors influence a patient's decisions, in general and specific to treatment decisions?
I am looking for a proof of the following:
Given a set S of Horn clauses, show that there is a unit refutation from S if and only if S is unsatisfiable.
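The standard proof goes through the least-model construction: propagating unit (fact) clauses mirrors the chain of unit resolutions, and the empty clause is derivable exactly when the set is unsatisfiable. As an illustration of that correspondence (not the proof itself), here is a minimal decision procedure:

```python
# Horn clauses as (head, body): head is an atom, or None for an
# all-negative "goal" clause; body is a set of atoms. (head, body)
# represents the implication body -> head.
def horn_unsat(clauses):
    """True iff the Horn set is unsatisfiable, by unit propagation.

    Facts (empty-body clauses) are propagated forward; the set is
    unsatisfiable exactly when some headless clause has its whole body
    derived -- each propagation step corresponds to a unit resolution,
    and triggering a goal clause yields the empty clause.
    """
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= facts:
                if head is None:
                    return True        # empty clause derivable
                if head not in facts:
                    facts.add(head)
                    changed = True
    return False

# p; p -> q; q -> r; not(p and r)  -- unsatisfiable
unsat = [("p", set()), ("q", {"p"}), ("r", {"q"}), (None, {"p", "r"})]
# p; p -> q; not r                 -- satisfiable
sat = [("p", set()), ("q", {"p"}), (None, {"r"})]
print(horn_unsat(unsat), horn_unsat(sat))
```

For the proof itself: soundness is immediate (resolution preserves satisfiability), and for completeness one shows that if no unit refutation exists, the set of atoms derivable by the propagation above is a model of S.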
Besides Nursing Science Quarterly.
I need to find established definitions of green marketing and its antecedents.
Is there a model that can be used to understand the sustainability of green marketing?
Thank you
In a formal arithmetical system axiomatised under the field axioms, the square root of minus one is logically independent of the axioms. This is proved using the Soundness and Completeness Theorems together. This arithmetic is incomplete and is therefore subject to Gödel's Incompleteness Theorems. But can it be said that the logical independence of the square root of minus one is a consequence of incompleteness?
Nowadays the consistency of theories is not demanded; instead we search for relative consistency. In the future things may change. In particular, in answer 70, and more easily in answer 76, it was proved that set theory is consistent as a result of a relative consistency. Several datasets proving the consistency of set theory were published. Recently a paper was submitted to a journal, without success, since there is some inertia concerning the acceptance of the consistency of NFU set theory. It can be said that NFU set theory is consistent as the result of a relative consistency: since Peano arithmetic is consistent, NFU is consistent too. By a similar argument it can be proved that set theory is consistent as well: since NFU set theory is consistent, then set theory is consistent. Thus set theory is consistent, and since the related proof can be made finitary, we have also proved Hilbert's Program, which is referred to in many books on proof theory. There is an extension of set theory, MK set theory, which is a joint foundation of set theory and category theory, two well-known foundations of mathematics. Once again a paper of mine, entitled "Consistency of Set Theory", was rejected without a valid reason. This agrees with an answer given by me 26 days ago. With set theory consistent, we can replace the use of models to prove the independence of axioms (as done by Gödel and Cohen) by deduction in set theory.
I want to scale down a 7 m blade to a 0.30 m scaled-down model, and then study the dynamics and deflections under normal and tangential forces.
I am working with similitude theory, such as the Buckingham Pi theorem, and have some problems using it.
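A hedged sketch of the static part of the dimensional analysis, assuming geometric similarity, elastic behaviour, and matched Poisson's ratio: the dimensionless deflection depends on the load group F/(E L²), which fixes how the applied forces must scale.

```latex
\frac{\delta}{L} = f\!\left(\frac{F}{E L^{2}},\,\nu\right),
\qquad
\lambda = \frac{L_m}{L_p} = \frac{0.30}{7} \approx \frac{1}{23.3},
\qquad
\frac{F_m}{E_m L_m^{2}} = \frac{F_p}{E_p L_p^{2}}
\;\Rightarrow\;
F_m = F_p\,\frac{E_m}{E_p}\,\lambda^{2}.
```

Dynamic similarity adds further Pi groups (frequency ratios, and aeroelastic numbers if air loads matter), which typically cannot all be matched simultaneously at 1:23 scale; that conflict is usually where the practical difficulties with Buckingham Pi arise.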
By Gödel's incompleteness theorem, it is impossible to prove the consistency of ZFC, the currently widely accepted foundation of mathematics, within ZFC. But this theorem says nothing about the existence or non-existence of a possible formal proof of the inconsistency of ZFC within ZFC; that means it is possible that some day set theorists or other working mathematicians will find an inconsistency between two mathematical theorems.
My first question is about any possible option which could be chosen by set theorists, logicians and mathematicians in this imaginary situation.
Another question is about possible impacts of discovering an inconsistency in mathematics on philosophy of mathematics and some fields of human knowledge like theoretical physics which use mathematics extensively.
------------------------
It seems that weakening the current axiomatic foundation of mathematics in any sense (including removing a particular axiom or moving to another, weaker axiomatic system) causes an expected problem. In fact, avoiding the contradiction by weakening our axiomatic system (which seems the only accessible choice) sends some accepted parts of current mathematics into the realm of "non-mathematics". Thus, in this case, we need to choose between different parts of mathematics: which parts are good and useful, and which are not. This could be the matter of many discussions. For example, if the Axiom of Choice (AC) is part of the contradiction, then by removing it from the foundation we lose many useful tools of mathematics, including essential theorems such as "every vector space has a basis", which harms linear algebra extensively.
------------------------
I've proved the following theorem using model-theoretic techniques, namely ultraproducts: A continuum is locally connected if every semi-monotone mapping onto it (from another continuum) is monotone. Monotone means the usual thing; semi-monotone means that every subcontinuum K of the range space is the image of a subcontinuum in the domain space, which contains the pre-image of the interior of K. The part that uses ultraproducts is where we want to prove that non-locally connected implies being the image under a semi-monotone map that isn't monotone. Basically, I'm wondering if someone has any insights into obtaining a new proof more palatable to a continuum theorist. (E.g.: start with a non-locally connected metric continuum Y and directly construct a metric continuum X and a semi-monotone f:X->Y which is not monotone.)