
# Logic - Science topic

An open group for the discussion of various logics and their applications

Questions related to Logic

It is common to affirm that "One can never perform any measurement whose result is an irrational number."

This is equivalent to saying the contrapositive: that anything that can be measured or produced is a rational number.

But the irrational number √2 can be produced, to its full infinite expansion, in finitely many steps: it is 2×sin(45°). It also exists like that in nature, as the diagonal of a unit square.
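For concreteness, the numeric identity in question can be checked directly (a floating-point check, which of course only approximates the infinite expansion):

```python
import math

# Check that 2*sin(45 degrees) equals sqrt(2): both give the diagonal of a unit square.
diagonal = 2 * math.sin(math.radians(45))

# Floating-point arithmetic is finite-precision, so compare within a tolerance.
assert math.isclose(diagonal, math.sqrt(2), rel_tol=1e-12)
print(diagonal)
```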

There is no logical mystery in these two apparently opposing views. Nature is not Boolean; a new logic is needed.

In this new logic, the statements 'p' and 'not p' can coexist. In the US, Peirce already said as much; in Russia, the Setun computer used it.

This opens quantum mechanics to being logical, and sheds new light on quantum computation.

One can no longer expect that a mythical quantum "analog computer" will magically solve problems by annealing. Nature also solves problems algebraically, where there is no such limitation.

Gödel’s undecidability is Boolean, and does not apply. The LEM (Law of the Excluded Middle) falls.

What is your qualified opinion?


Right now, in 2022, we can read with perfect understanding mathematical articles and books written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.

The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.

Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put into formalising all areas of mathematics within the framework of programming-language implementations (for instance Coq, Agda) of the univalent extension of dependent type theory (homotopy type theory).

But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (for instance operating systems and C libraries) to function. In the future, if we have new CPU architectures, then Coq and Agda would have to be compiled again, and so would OCaml and Haskell.

For instance suppose a formal mathematics Agda file started with

{-# OPTIONS --without-K --exact-split #-}

Both software and operating systems are rapidly changing and have always been so. What is here today is deprecated tomorrow.

My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now?

Ten years from now, will Agda be backwards-compatible enough to still recognise current Agda files?

Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq? Or of OCaml and Haskell?

Perhaps the formal mathematics project should be carried out within a meta-programming language: a simpler, more abstract framework (with a uniform syntax) comprehensible at once to logicians, mathematicians and programmers, and which can be converted automatically into the latest version of Agda or Coq?

Quarries play an important economic and social role, but they also affect the natural environment. This calls for proposing alternative solutions that keep the construction and public-works sector supplied with basic materials while taking into account the environmental dimension and the sustainability of resources, in accordance with the logic of good governance.

For example, once an instrument's content validity has been established logically by experts, and the instrument is then to be validated empirically (e.g., scale-item validity) on a number of respondents, should the researchers define a specific population (e.g., a certain district) and attend to the sampling method? Or can the instrument be distributed randomly to anyone, without reference to a specific population?

If the instrument is empirically valid with respect to the population it was distributed to, can the researcher claim that the instrument is valid in that population, and possibly not valid in other populations?

On the graph of changes in air temperature, one can see that after an increase in air temperature of about 1 degree since 1997 and a decrease in 1999, the temperature trend changed. Trend changes were recorded immediately after the 1998 El Niño. This is logical, since El Niño affects the planet's climate. But for the level changes in the Atlantic and Pacific Oceans, after the rise and fall of sea level during the 1998 El Niño, the earlier trend continued; the trend change occurred with a delay of several years, around 2004. The graphs were taken from publicly available sources on the Internet.

If you consider the evolutionary law "the fittest survive", it means that whatever survives is labelled fittest just because it did survive (an empirical basis, not a logical one). Similarly, the law of dynamics says,

"what accelerates is forced", e.g. by the Earth (gravity) or by a human (muscle).

In this case, care was taken by Newton to avoid this objection. Force is action, and motion was considered action by Aristotle (and by common sense). But Newton made motion passive (the property of inertia) and devoted a descriptive field to it (kinematics), so that it would not appear as action.

I do not find many astrophysicists and cosmologists engaging with the question of the probable origin or origins of the cosmos. Even when a work is purportedly on this theme, the authors beat around the bush so much that they finally have time only to conclude with some suggestions. These suggestions are based only on the kind of reason that experimental astrophysicists use, and not on the general logic emerging from and beyond the cosmological theories! What would be the philosophical and cosmological reasons behind this diffidence of scientists?

Raphael Neelamkavil

How can I generate the rules matrix of a fuzzy system that has two outputs, where the outputs are logically combined using the "and" operator (in the MATLAB environment)?

p ---> q and r

The fuzzy system has three rules of the form above. The membership functions are triangular, and the linguistic variables are "Low", "Medium" and "High".
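For reference, the logic behind such a rules matrix can be sketched outside MATLAB as well. The Python sketch below uses illustrative term placements and an illustrative antecedent-to-consequent mapping (neither taken from the question): each row of the matrix carries one antecedent and the two consequents that fire together.

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three linguistic terms on a normalised universe (illustrative placement).
terms = {"Low": (-0.5, 0.0, 0.5), "Medium": (0.0, 0.5, 1.0), "High": (0.5, 1.0, 1.5)}

# Rule matrix: (antecedent p, consequent q for output 1, consequent r for output 2).
# "p ---> q and r" means a single rule drives both outputs at once.
rule_matrix = [("Low", "Low", "Low"),
               ("Medium", "Medium", "Medium"),
               ("High", "High", "High")]

def firing_strengths(x):
    # With one antecedent per rule, the firing strength is just mu_p(x); both
    # consequents share it, since they are AND-ed inside the same rule.
    return [trimf(x, *terms[p]) for p, _, _ in rule_matrix]

print(firing_strengths(0.25))  # [0.5, 0.5, 0.0]
```

In MATLAB's Fuzzy Logic Toolbox the corresponding rule list has, as far as I recall, one consequent column per output variable; check the toolbox documentation for the exact column layout.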

This is a question about Gödel numbering. As I understand it, the axioms of a system are mapped to a set of composite numbers. Is this really the case? For example, are the 5 axioms of Euclidean plane geometry mapped to 5 composite numbers? Does this also imply that theorems of the system are then composite numbers that depend on the composite numbers assigned to the axioms, plus the elementary numbers that encode the logical operations, such as +, if...then, "there exists", etc.?
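For concreteness, the standard prime-power encoding can be sketched as follows. The symbol-to-code assignment below is hypothetical (Gödel's own assignment differs), but any injective choice yields unique numbers, by unique prime factorisation:

```python
from math import prod

def first_primes(n):
    """First n primes by trial division (fine for short formulas)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Hypothetical symbol codes; any injective assignment of odd codes works.
codes = {"0": 1, "s": 3, "=": 5, "(": 7, ")": 9, "+": 11}

def godel_number(formula):
    """Encode a symbol sequence as p1^c1 * p2^c2 * ..., one prime per position."""
    cs = [codes[sym] for sym in formula]
    return prod(p ** c for p, c in zip(first_primes(len(cs)), cs))

print(godel_number(["0", "=", "0"]))  # 2^1 * 3^5 * 5^1 = 2430
```

Since every formula of length > 1 is a product of at least two prime powers, its Gödel number is composite, which matches the description in the question.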

Good afternoon Everyone.

I hope this message finds you well.

I am writing this message to ask for your help.

The LS-DYNA software has been installed on my desktop. However, I could not use the maximum capacity of this desktop. This new desktop has 16 cores with 24 logical processors (see attached picture 1). However, when I chose 8 or 16 NCPU to run the model, I received the error shown in attached picture 2.

Could you show me how to increase "OMP_NUM_THREADS" on my desktop?

If you need further information, please let me know.

Many thanks!!!
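For reference, OMP_NUM_THREADS is a standard OpenMP environment variable, and one common pattern is to set it in the environment before launching the solver. A minimal Python sketch; the solver executable name and command-line arguments below are placeholders for whatever your installation actually uses:

```python
import os
import subprocess

# Copy the current environment and set the OpenMP thread count in the copy.
env = os.environ.copy()
env["OMP_NUM_THREADS"] = "16"

# Placeholder launch command -- substitute your actual LS-DYNA executable
# and input-deck arguments here:
# subprocess.run(["lsdyna", "i=model.k", "ncpu=16"], env=env)

print(env["OMP_NUM_THREADS"])
```

On Windows the same variable can also be set system-wide under Environment Variables, or per console session with `set OMP_NUM_THREADS=16` before starting the job.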

Can we move from a quantitative methodology to a qualitative one? What kind of logic can we give in this scenario?

Putnam criticized logical positivists' accounts of the meaning of scientific terms and of the nature of scientific theories because they were incompatible with minimal scientific realism. However, before discounting antirealism as a valid stance, he leaned toward it in his internal-realist years: the view that an epistemic version of truth is the most valid.

QM stands for quantum mechanics, CS for computer science, CC for cellphones/computers, and Mathematics seems to have somehow missed all three.

Now is the time to catch up. We have come to a standoff. Students are ending up neurologically sick -- feeling math-averse (neurophobia and math trauma).

It is not a matter of selection. Only rote group work [1] seems to be infused by Mathematics as taught today, even in highly qualified students. Paradoxically, one finds that the more intellectually qualified, the more averse!

And yet current Mathematics still tries to "instruct" students in it, and in other disciplines, in the US and around the world.

This failure has been denied [1], and the “fault” has been put on the victims -- the students. Paradigm shifts are needed: first, QM.

According to QM, only integer numbers should be necessary to build all one can see in nature. Otherwise, all of Physics would be contradicted. This would also contradict all of CS and all of CC. Computers, for example, only work with integers, and are error-free.

Since this contradiction is not even imaginable, QM is ontologically correct. Therefore, all sciences must reflect it, according to a "holographic principle" (HP) in nature, including Mathematics.

The HP is often referred to as "the micro as in the macro" or vice-versa.

These two words macro and micro are commonly viewed as antonyms, meaning that they are the opposite of each other. Macro means on a large scale. Micro means on a very small scale. Both are important, though often complementary, views of nature. The view from the macro, while taking into account the micro, is called "universality" in Physics.

The main principle of QM was given by Niels Bohr as "all states at once". This needs to be understood as different from the following possibilities:

- "Copenhagen interpretation", or
- a "collapse" of the quantum function upon measurement, or
- waves, as a picture of QM, or
- wave-particle duality, or
- the Heisenberg principle, or
- a probabilistic view of QM.

In opening the "black box" of QM as viewed by Bohr, QM does not represent Natur (in the sense defined by the philosopher Kant), but Wirklichkeit (ditto). It is not how nature is, but how it seems to work. A story.

A well-known analogy is Plato's Cave. The shadows one can see on the wall are Wirklichkeit, and the open reality outside is Natur. After watching Wirklichkeit for some time, people can form an idea of what Natur looks like, even though they are imprisoned in the Cave.

QM is, ontologically, how one can describe Wirklichkeit -- which is the subject of Physics. Natur may not have QM behavior, and continuity may exist in Natur.

Mathematics is not concerned with Wirklichkeit only, but can include Natur. However, as no one can see Natur, one is led to treat any "pure" Mathematics as speculation.

Nevertheless, Mathematics must agree with Physics in the realm of Wirklichkeit.

This does not happen today, and creates a clash. The inconsistency is seen as an opening. What IS mathematics?

Today's calculus teaching can be seen as relying on outdated ideas, going back before QM, computer science (CS), and cellphones/computers (CC) were discovered. Three major paradigm shifts seem to have been missed.

But these three paradigm shifts represent how science is done, in our seemingly endless task in going from Wirklichkeit to Natur.

One then recognizes that science does not behave continuously, but by jumps -- which break from the past and open a new future -- in going from Wirklichkeit to Natur, and back: does the envisioned Wirklichkeit reproduce the actual Wirklichkeit?

This has been called a "paradigm shift" (PS) by T. S. Kuhn in Structure of Scientific Revolutions [2].

David Hilbert [3], who at the turn of the twentieth century proposed twenty-three problems intended to guide research in the dawning century, claims otherwise: “History teaches the continuity of the development of science.”

Michael Harris [4], a mathematician, writes that he, "would still be glad to lift the veil, but we no longer believe in continuity. And we may no longer be sure that it’s enough to lift a veil to make our goals clear to ourselves, much less to outsiders."

The outdated ideas currently in Mathematics date back to the time of Newton, Leibniz, and Cauchy, before the three PSs mentioned. They include (call them Fictions, or incorrect models): microscopic continuity, infinitesimals, hyper-reals, Cauchy epsilon-deltas, and Cauchy accumulation points.

Even in four-year universities one finds classes teaching Fictions today, such as at Caltech, MIT, CSU, and abroad [1].

This failure can be denied [1], and the “fault” put on the victims -- the students. But there is no cause for a feeling of inferiority. A PS [2] combats this, American style: by innovation.

A technical innovation can reduce this gap by a PS [2] -- already heralded and error-free, as one can see in the history of sciences (contradicting David Hilbert [3]).

This reduces risk by following an experimental model that works, even though no one has explained it mathematically, without Fictions.

By jump-starting to QM one has a solution, which we mark as a first paradigm shift.

Where Fictions interfere with known Physics, they must be mercilessly deprecated -- and yet current Mathematics still tries to "instruct" students in them, in the US and around the world.

The other paradigm shifts stand for computer science (CS) and cellphones/computers (CC).

We now introduce a coherent holographic principle (HP), a universe, and an understanding yet to be discovered, as two new paradigm shifts -- and gain a cosmic perspective: who is the Creator of all this marvelous scheme?

This final PS leads to a comment. One does not need to learn anything, as https://www.researchgate.net/profile/Robert-Fuchs confirms.

One just has to observe.

Mathematics seems, thus, to be discovered — as a hologram and coherent with other sciences, and not, somehow, invented by mere humans.

What is your qualified opinion?

REFERENCES

[2] T. S. Kuhn, Structure of Scientific Revolutions, 1962.

[3] David Hilbert, Paris International Congress of Mathematicians (ICM), 1900.

[4] Michael Harris, Mathematics Without Apologies, Princeton University Press, ISBN 978-0-691-17583-6, 2017.

Hello friends, I want to know: what does it mean if the lagged return has a positive coefficient in the mean equation and a negative coefficient in the variance equation? Can someone tell me the logic behind this?

Thank You

**Seriality** on certain modal frames - that for every world *w* there is some world *v* such that *w*R*v* - corresponds to the axiom ◻P→♢P, assuming a standard interpretation of the relevant operators. Consider what we might call **reverse seriality**: that for every world *w* there is some world *v* such that *v*R*w*.

Is there an axiom to which **reverse seriality** corresponds, assuming a standard interpretation of the operators? I've observed that any frame satisfying **reflexivity** (i.e. every *w* is such that *w*R*w*, which corresponds to the axiom ◻P→P) satisfies **reverse seriality**, but I've gotten no further on the answer. (P.S. If this is a 'good question', let me relay that I heard it from a student who took a course with Sean Ebel-Duggan, who posed it; on the other hand, if this is a 'bad question', then it's likely due to my or the student's misunderstanding of Sean's actual question.)

So A is a further version of B, and people say, "If you did B, why not A?"

Example: "If you decide to pay for her meal, why not just pay for everything for her?"

It sounds like a straw man or a slippery slope, but I don't think it is either.

Hello, I am new to R, so I'm a little stuck. I have 164 items as part of a scale-development project (n = 1271), and I want to set a cutoff of .40 for factor analysis. I tried using logical operators with this script:

loload <- tempdf1 %>% filter(Q1.0 < .40), which created the new data frame 'loload' but didn't put any data in it. I then tried the same script with all 164 items separated by commas, which returned an error message.

I'm quite stuck; numerous google searches don't offer a lot unless I want to do things to one specific variable.

Any help is appreciated.

Is it time to update the terminology so that it logically reflects what the thing already simply is?

Could there be a better term that reflects it better?

Booking: to arrange for and reserve in advance; to engage a service or employment for a limited time period.

If we somehow transform a binary search tree (BST) into a form where no node other than the root may have both a left and a right child, where the nodes in the right subtree of the root may have only right children, and vice versa, then such a configuration of the BST is inherently sorted, with its root approximately in the middle (in the case of a nearly complete BST). To do this we need to do reverse rotations: unlike AVL and red-black trees, where rotations are done to balance the tree, here we would do rotations in reverse.

I would like to explain the pseudocode and logical implementation of the algorithm through the images in the following PDF. The algorithm first sorts the left subtree with respect to the root, and then the right subtree. These two subparts are mirror images of each other; that is, left interchanges with right. For simplicity I have taken a BST whose right subtree, with respect to the root, is already sorted.

To improve the complexity compared with tree sort, we can augment the above algorithm. We can add a flag to each node: 0 for a normal node, 1 when the node had a non-null right child in the original unsorted BST. Nodes with flag 1 get an entry in a hash table whose key is their pointer and whose value is the rightmost node; for example, node 23's pointer would map to node 30.5's pointer. Then we would not have to traverse all the nodes in between during the iteration: given 23's pointer and 30.5's pointer, we can do the required operation in O(1). This brings the time complexity down compared with tree sort.

Please review the algorithm and suggest whether it is of any use and whether I should write a research paper on it.
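If I'm reading the description right, the flattening step is close to the tree-to-vine phase of the Day-Stout-Warren algorithm, which uses exactly such rotations. A minimal Python sketch (my own node layout, not the questioner's):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def tree_to_vine(root):
    """Flatten a BST into a right-leaning chain (sorted order) via right rotations."""
    pseudo = Node(None, right=root)   # pseudo-root simplifies rotating at the top
    tail, rest = pseudo, pseudo.right
    while rest is not None:
        if rest.left is None:         # already vine-shaped here: advance
            tail, rest = rest, rest.right
        else:                         # right-rotate `rest` around its left child
            temp = rest.left
            rest.left = temp.right
            temp.right = rest
            rest = temp
            tail.right = temp
    return pseudo.right

def chain_keys(node):
    out = []
    while node:
        out.append(node.key)
        node = node.right
    return out

root = Node(23, Node(10, Node(5), Node(15)), Node(40, Node(30)))
print(chain_keys(tree_to_vine(root)))  # [5, 10, 15, 23, 30, 40]
```

Each rotation is O(1) and runs once per edge, so the whole flattening is O(n); the hash-table augmentation described above would be aiming to skip runs of these rotations.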

For various reasons it can sometimes become necessary to change our mindset, to change our attitude to something, e.g. following trauma or illness. We can re-examine our beliefs with reasonable logic and succeed in turning a negative mindset into a positive one. However, how do we do that without our emotions and misinterpretations of the world getting in the way?

Which beams will yield and form plastic hinges first: beams on the lower half of the building, or beams on the upper part? The top storey is subjected to greater lateral force than the bottom storey. Logically speaking, however, it seems that bending of the structure will start from the bottom (as if the whole structure were a cantilever), and the upper portion just moves along with it, hence lower moment and less likelihood of yielding. Is that correct?

I wrote this week's blog post with one of our PhD students, Christina Baker. Christina is conducting a study of school nurses to find out the most commonly reported barriers to COVID-19 vaccination for children. Many of them are linked to Intuitive-mind thinking rather than to Narrative logic: https://sites.google.com/view/two-minds/blog

I would like to ask: what is the best (i.e., the simplest) way to implement a deductive system based on non-monotonic logic? For example, can we do this using PROLOG?

Thanks in advance!
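Prolog is a natural fit here, since negation as failure is itself non-monotonic. As a language-neutral illustration of the behaviour such a system needs, here is a toy default-rule engine in Python (my own sketch, not a standard formalism):

```python
def derive(facts, defaults):
    """Toy default reasoning: apply 'premise implies conclusion unless blocked'
    rules to a fixed point. Adding facts can retract earlier conclusions,
    which is exactly what makes the system non-monotonic."""
    conclusions = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion, blocker in defaults:
            if premise in conclusions and blocker not in conclusions \
                    and conclusion not in conclusions:
                conclusions.add(conclusion)
                changed = True
    return conclusions

# Classic example: birds fly by default, unless known to be a penguin.
defaults = [("bird", "flies", "penguin")]
print(sorted(derive({"bird"}, defaults)))             # ['bird', 'flies']
print(sorted(derive({"bird", "penguin"}, defaults)))  # ['bird', 'penguin']
```

Learning the "penguin" fact removes the earlier conclusion "flies"; in Prolog the same effect comes from `flies(X) :- bird(X), \+ penguin(X).`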

Is there any theory or logic for fixing a second-order construct as formative or reflective? Since discriminant validity has been established at the first order, can that be taken as a clue that the second order is formative? If not, what can guide setting the second order as reflective or formative? I have seen many scales, such as organisational commitment, with three dimensions (AC, CC, NC). Are these causes or reflections of OC at the higher order? Further, if we want to calculate utility, are expectancy and value causes of utility, or reflections of utility at the second order?

How can I find a list of open problems in Homotopy Type Theory and Univalent Foundations ?

Can someone tell me about your experience with the book "Introduction to Logic: and to the Methodology of Deductive Sciences", or a related one? I would appreciate it.

Could you suggest some essays that deal, whether in an interpretive, critical, or merely informative manner, with Jacques Lacan's paper "Logical Time and the Assertion of Anticipated Certainty"?

I need to know the logic behind it.

Metamathematics -- the fundamental logical paradigm of maths -- was never fully defined by Hilbert (nor anyone else), causing severe yet commonly ignored consequences for all branches of maths and maths theory ever since. So, it is difficult to find relevant papers and anybody interested in investigating or discussing the subject.

In the early era of digital design, binary logic was used in all industrial applications. Binary logic uses two states, 0 and 1, to represent values; because there are only two states, it needs more digits to represent a number.

Ternary logic needs fewer digits to represent a number than binary logic, which also reduces the area and the power of the circuit. Ternary logic uses three states, 0, 1 and 2, where 0 is the low state, 1 the middle state and 2 the high state.

Quaternary logic uses four states: 0, 1, 2 and 3. It further reduces the number of digits, and the power, compared with binary and ternary logic.
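The digit-count claim can be made precise: an integer n needs roughly log base-b of n digits in radix b, so higher radices need fewer digits. A quick check:

```python
import math

def digits_needed(n, base):
    """Digits needed to write the integer n >= 1 in the given base."""
    # ceil(log_base(n + 1)) equals floor(log_base(n)) + 1 for integers n >= 1.
    return max(1, math.ceil(math.log(n + 1, base)))

# 1000 takes 10 binary digits, 7 ternary digits, 5 quaternary digits.
for base, name in [(2, "binary"), (3, "ternary"), (4, "quaternary")]:
    print(name, digits_needed(1000, base))
```

(Floating-point logarithms can wobble right at exact powers of the base; an integer loop dividing by `base` avoids that if exactness matters.)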

Re: ARTICLE: "Should Type Theory replace Set Theory as the Foundation of Mathematics?" BY

Thorsten Altenkirch

Type Theory is indicated (by the author) to be a sometimes better alternative to, and sometimes a replacement for, regular set theory, and thus sometimes a better replacement for the logical foundations of math (and science). It seems to allow turning what is qualitative, and not amenable to regular set theory, into things that can be clear, particular objects of logical reasoning. Is this the case? (<-- REALLY, I am asking you.)

It is very rare, if ever, that I have addressed anything I did not have a good understanding of; BUT here is the exception (and a BIG one). I have VERY, VERY little understanding of this article -- even from the crudest qualitative standpoint. You would say I should have researched this more, but it is not my bailiwick; only more confusion on my part would likely result, "shedding no light". My sincere apologies. ANYHOW:

If things are indeed as the author, Thorsten Altenkirch, says, it seems that other things (beyond those related to standard propositions in regular set theory) could widen the use of set theory itself while retaining (including) all of regular set theory (with all of its virtues, as needed). BUT, in addition, it is indicated that it could be applied to areas (PERHAPS like biological and behavioral science) where present set theory (and the math founded on it) cannot now be applied.

"[ The ] type theoretic axiom of choice hardly corresponds to the axiom of
choice as it is used in set theory. Indeed, it is not an axiom but just a derivable
fact."

More quoting of the author: "Mathematicians would normally avoid non-structural properties, because they entail that results may not be transferable between different representations of the same concept. However, frequently non-structural properties are exploited to prove structural properties and then it is not clear whether the result is transferable." ... "And because we cannot talk about elements in isolation it is not possible to even state non-structural properties of the natural numbers. Indeed, we cannot distinguish different representations, for example using binary numbers instead." ... "we can actually play the same trick as in set theory and define our number classes as subsets of the largest number class we want to consider and we have indeed the subset relations we may expect. ... Hence Type Theory allows us to do basically the same things as set theory ... as far as numbers are concerned (modulo the question of constructivity) but in a more disciplined fashion limiting the statements we can express and prove to purely structural ones."

"we cannot talk about elements in isolation. This means that we cannot observe
intensional properties of our constructions. This already applies to Intensional
Type Theory, so for example we cannot observe any difference between two
functions which are pointwise equal." ...

"...Hence in ITT (regular set theory) while we cannot distinguish extensionally equal functions we do not identify them either. This seems to be a rather inconvenient incomplete-
ness of ITT, [ (common set theory)] which is overcome by Type Theory (HoTT)"

"[It] reflects mathematical practice to view isomorphic structures as equal.
However, this is certainly not supported by set theory which can distinguish
isomorphic structures. Yes, indeed all structural properties are preserved but
what exactly are those. In HoTT all properties are structural, hence the problem
disappears. ..."

"While not all developments
can be done constructively it is worthwhile to know the difference and the difference shouldn’t be relegated to prose but should be a mathematical statement." [AND}: ...

"Mathematicians think and they often implicitly assume that
isomorphic representations are interchangeable, which at closer inspection isn’t
correct when working in set theory. Modern Type Theory goes one step further
by stating that isomorphic representations are actually equal, indeed because
they are always interchangeable."...

..."The two main features that distinguish set theory and type theory: con-
structive reasoning and univalence are not independent of each other. Indeed
by being more explicit about choices we have made we can frequently avoid
using the axiom of choice which is used to resurrect choices hidden in a proposition. Replacing propositions by types shows that that the axiom of choice in many cases is only needed because conventional logic limits us to think about propositions when we should have used more general types."

Oh, here's the link to THE ARTICLE:

Sudoku is not just pleasure/pain, a morning wake-up or a diversion, but also an exercise in logic. When people make the same logical mistake every time, what can the type of error reveal about the solver's brain?

I have built a task-scheduling algorithm in MATLAB, and I am currently working on task-scheduling optimization in fog computing. I have created the physical components and the logical components (application modules), and have mapped the modules to the physical devices in iFogSim. But how do I actually run these modules and call the scheduling algorithm from MATLAB into iFogSim?

Any help is really appreciated.

As per the attachment: there are three sets of students, and each set is evaluated by a different judge, so the marks vary widely across the three sets. How can I give rational marks to all? I wish to make the highest and lowest marks of all three sets equal, and let the other marks follow. Is this possible?
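Yes: what is described is a linear (min-max) rescaling of each judge's set onto a common range. Once each set's minimum and maximum map to the same two values, the marks in between follow proportionally. A sketch with made-up marks (the numbers are illustrative, not from the attachment):

```python
def rescale(marks, lo, hi):
    """Linearly map one judge's marks onto the common range [lo, hi],
    so each set's lowest and highest marks line up across sets."""
    mn, mx = min(marks), max(marks)
    return [lo + (m - mn) * (hi - lo) / (mx - mn) for m in marks]

# Three sets from three judges (illustrative numbers):
sets = {"A": [55, 60, 72, 80], "B": [30, 45, 50, 65], "C": [70, 75, 88, 95]}

# Map every set onto the overall observed range:
lo = min(min(s) for s in sets.values())
hi = max(max(s) for s in sets.values())
for name, marks in sets.items():
    print(name, [round(m, 1) for m in rescale(marks, lo, hi)])
```

One caveat: this equalises only the endpoints; it assumes each judge's severity is a purely linear effect, which may or may not hold for the middle marks.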

Hello

I want to run energy minimization for my protein structure with Amber.

I decided to run more minimization steps for the whole system than in the first round of minimization, in which the solute is frozen. I don't know whether that is logical or not.

Do you have any ideas about this?

Regards

The original meaning of the word "theory" comes close to "view", or even "world view". As such it was already used by the ancient Greek philosophers, e.g. Aristotle or Plato. Over the centuries, its meaning has become more and more precise, culminating in a **well-defined logical notion of the correspondence** between a part of the (outer) real world and the (inner) symbolic world we use to think about or describe it.

In more popular parlance, **Wikipedia** summarizes it in the statement: "*A theory is a **rational** type of **abstract** thinking about a **phenomenon** or the results of such thinking*."^{*)} Of course, what is meant by "*phenomenon*" (also an ancient Greek word) is typically left unspecified: it may be a very specific class of objects or events, or it may be something as big as our universe (as in "cosmological theory").

Over the years, I have observed a **gradual inflation of the technical term "theory"** as defined and used in scientific methodology. The (dualistic) notion of a correspondence between the real world on the one hand and the media we use to reflect about the latter (thought, language, ...) on the other hand seems to have been lost during the rise of empirical research, with its strong emphasis on "phenomena" instead of "thoughts".

The result is that the technical term "theory" appears to have lost its well-defined meaning of a **bridge between our outer world "as we observe it" and our inner world "as we reason about it"**. For instance:

- In a recent paper (2021), the author (a well-known expert in a subfield of social science) promises to offer a theory (sic!) of a particular "phenomenon" in his subfield. As I am also much interested in the kind of phenomena he is doing research about, I of course hoped to find - at least - a worked-out theoretical model of those phenomena. *Far out*! Besides a **simple flow chart** of (some of) the processes involved, what he presented was a large collection of more or less confirmed "empirical facts", together with simple "interpretations" (mostly re-wordings) and pointers to possible or plausible relationships.
- I didn't find any sign of the **hallmarks of a good theory**: a worked-out theoretical model of those phenomena, on the basis of which I (or someone else) could **reason about those phenomena**, look for inconsistencies between assumptions and facts, derive crucial hypotheses to be tested, etc.!

My questions to you:

- What are your experiences with this type of inflated use of the word "theory" in scientific research?
- Do you believe that there is a difference in this respect between social sciences and natural sciences?
- How can we bring the "empirical approach" and the "theoretical approach" together, again?

________________________________________

^{*) https://en.wikipedia.org/wiki/Theory}

Russell’s paradox is the most famous of the logical or set-theoretical paradoxes. Also known as the Russell-Zermelo paradox, it arises within naïve set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself. Hence the paradox.
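Formally, with unrestricted comprehension the construction and the contradiction fit on one line:

```latex
R \;=\; \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \;\Longleftrightarrow\; R \notin R
```

Instantiating the membership condition at x = R yields the biconditional; Zermelo-style set theory blocks the construction by allowing comprehension only relative to an already existing set (the axiom schema of separation).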

I have been working with ontologies (RDF/OWL) for a long time, using them mostly as an engineer, essentially because they permit SPARQL and rules.

It is only recently, this year, that I started to really pay attention to the theoretical grounding of OWL. This led me to dive into the zoo of the many Description Logics (DLs) and their desirable or undesirable properties.

I think there are some serious issues in the multiplication of work on DLs, which is almost never considered from the perspective of actual usefulness: their ability to describe the specific structures that are at the core of many domains (law, clinical science, computer science...).

Quite a lot of the theoretical work in DL and logic formally studies and proves properties of languages (DLs are languages) that nobody speaks or will ever speak. This is quite salient when considering the very small number of working reasoners (which cover only a small fragment of the DLs described formally).

It seems to me that, after the incredibly fecund period that started with Frege, Russell, Tarski, Hilbert, Gödel, Carnap..., the theoretical work was considered more or less done, and less attention was paid to formal languages for domain description.

On the other hand, questions related to problem solving (planning) came to be treated only as SAT problems needing optimisation, with almost no reference to first-order logic and thus a poor link with DL.

Finally, on the third hand, modal logic, which clearly has deep links with first-order logic (the box operator/diamond operator and the universal/existential quantifiers in particular), has been abandoned by computer scientists and has become, more or less explicitly, a field of philosophy.

I think this state of affairs isn't satisfying and that there is work to be done on conceptual clarification and on a revision of the foundations of mathematics that would integrate these developments.

To that end, something that does seem absolutely essential is to give each other easy access to reasoners. By easy access, I don't mean a program written in some obscure language whose source must be compiled on a specific Linux distribution.

I mean access to the reasoning service through a (loosely standardized) REST API. These services should be accompanied by websites giving relevant examples of using the reasoner, with an "online playground".

I think this could be done for classic DLs such as EL or SHOIQ, but also for modal logic in its various kinds (epistemic, deontic), and it could also be done for planning based on first-order logic.
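To make the REST contract concrete, here is a minimal in-process sketch of what a `/reason` endpoint could accept and return. The payload fields (`logic`, `task`, `axioms`, `query`) and the toy propositional entailment check are assumptions of mine, not part of any existing service; a real zoo would route such requests to Dockerised reasoners behind the same API.

```python
import itertools
import json
import re

# Hypothetical request payload for a "/reason" endpoint of the proposed logic zoo.
request = {
    "logic": "propositional",   # a full zoo would also accept "EL", "SHOIQ", "S5", ...
    "task": "entailment",
    "axioms": ["p", "p -> q"],
    "query": "q",
}

def evaluate(formula, valuation):
    # On booleans, material implication "p -> q" coincides with "p <= q".
    return bool(eval(formula.replace("->", "<="), {}, dict(valuation)))

def entails(axioms, query, variables):
    # Brute-force truth tables: the query must hold in every model of the axioms.
    for bits in itertools.product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(evaluate(a, model) for a in axioms) and not evaluate(query, model):
            return False
    return True

def handle_request(req):
    # Dispatch on the declared logic; a real service would route to a
    # containerised reasoner here instead of computing in-process.
    if req["logic"] == "propositional" and req["task"] == "entailment":
        variables = sorted({v for f in req["axioms"] + [req["query"]]
                            for v in re.findall(r"[a-z]\w*", f)})
        return {"status": "ok",
                "entailed": entails(req["axioms"], req["query"], variables)}
    return {"status": "error", "reason": "unsupported logic/task"}

print(json.dumps(handle_request(request)))  # → {"status": "ok", "entailed": true}
```

The point of the loose standardisation is exactly this shape: a declared logic, a task, and a problem description, with the response format shared across every reasoner in the zoo.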

I'm currently cogitating about the engineering questions that would arise from such a logic zoo, and about a grammar that would be usable for every reasoning-problem description involving these kinds of logic.

If you are interested in the question and/or have skills in modern full-stack architecture and Dockerisation, I'd be interested in your opinion about the current situation and the feasibility of such a logic zoo, which would be a useful tool for clarifying the domain.

I have a paper that received a minor-revision decision, and the reviewers asked me to provide the control variables for my model. But I don't have the model's data anymore, so I can't add control variables to my results. How can I respond to these reviewer comments? Any idea? Any good logical response?

I am interested in the incompleteness of Boolean logic. Logic consistently constructed from 1 and 0 is always incomplete with respect to infinity, but it is also incomplete with respect to i. If I plot y against x, then there is nowhere to plot √-1, so I must add a third dimension to my graph. We can invent as many dimensions as we like, but I am not aware of any well-defined value which cannot be plotted in three dimensions. Is there one? If so, how is it defined in terms of 0 and 1?

Hello.

I have recently been doing research on the Argument-Based Validation model of Kane (1990, 2011) and Bachman and Palmer (2010).

In one of Kane's articles (2004, p. 145) it is stated that argument-based validation uses informal logic and practical reasoning rather than scientific theories. My question is: what is the advantage of this approach? Also, I would be thankful if you could provide examples, since I cannot understand how it is really practiced.

Thank you

Logical explanation required with publications.

Logical suggestions required with publications.

Statistically, taking the absolute value makes all values positive, but here I am confused about whether absolute discretionary accruals are only the positive values of DA. Is there any difference between ADA and DA? If not, what is the logic of using the keyword "absolute" with discretionary accruals?

Please don't simply provide formulae. Provide the logic behind the origination of differential and integral calculus.

I "heard" people saying that Feynman was of the opinion that nobody understands QM. But what does that mean?

Does it mean that we are not smart enough? Or, alternatively, does it mean that QM disobeys the rules of human logic?

Profound biochemical analysis of the structure of a certain protein can explain and even predict the mechanistic properties of that protein's function, and even of many more proteins that have a similar fold. What other type of biological molecule (not a protein) is an example of that same logic?

I was thinking that maybe a phospholipid's structure or phosphorylation state can say something about its function. Does that sound right?

I would appreciate it if you have another example

I have in mind that logic is mainly about thinking, abstract thinking, particularly reasoning. Reasoning is a process, structured in steps, where one conclusion is usually based on a previous one, and at the same time it can be the base, the foundation, of further conclusions. Despite the mostly intuitive character of the algorithm as a concept (even without taking into account Turing and Markov theories/machines), it has a step-by-step structure, and the steps are connected, one would even say logically connected (when the algorithms are correct). The difference is, of course, the formal character of the logical proof.

Hello,

I am searching for automated tools that can be used to count the actually consumed cycles, or the number of logical and mathematical operations, for a code base of thousands of lines.

Regards
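One lightweight approximation of operation counting, sketched in Python: tally the logical and arithmetic bytecode instructions the interpreter emits for a function using the standard `dis` module. This counts abstract operations, not hardware cycles; for real cycle counts, profilers such as valgrind/callgrind or perf are the usual route.

```python
import dis

# Opcode names for arithmetic/logical work (BINARY_OP covers Python 3.11+;
# the BINARY_ADD-style names cover older interpreters).
ARITH_LOGIC = {"BINARY_OP", "COMPARE_OP", "UNARY_NOT", "UNARY_NEGATIVE",
               "BINARY_ADD", "BINARY_SUBTRACT", "BINARY_MULTIPLY",
               "BINARY_TRUE_DIVIDE"}

def count_ops(func):
    """Count the arithmetic/logical bytecode instructions in a function."""
    return sum(1 for ins in dis.get_instructions(func)
               if ins.opname in ARITH_LOGIC)

def example(a, b):
    return (a + b) * a - b

print(count_ops(example))  # → 3 (one add, one multiply, one subtract)
```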

What would be a good journal to submit the paper above ?

I have found 5-point Likert scale items for my mediator and 3 of my IVs, while I am unable to find 5-point Likert scale items for my DV and one IV. Can I change the scale from 7-point to 5-point if I am unable to find the same scale for my DV and one of my IVs?

Is there any particular technique to convert the scale from 7-point to 5-point? Does it require any rationale/logic?
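One commonly cited rationale, if you do rescale, is a simple linear mapping that preserves the endpoints and the relative spacing of responses. The sketch below is only the arithmetic; whether such a conversion is psychometrically defensible for your constructs is a separate question for your methods justification.

```python
def rescale_likert(x, old_min=1, old_max=7, new_min=1, new_max=5):
    """Linearly map a response from a 1..7 scale onto a 1..5 scale."""
    return (x - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

# Each 7-point response maps into the 5-point range, endpoints preserved:
print([round(rescale_likert(v), 2) for v in range(1, 8)])
# → [1.0, 1.67, 2.33, 3.0, 3.67, 4.33, 5.0]
```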

I have designed a digital circuit using an all-optical logic gate in the R-Soft CAD layout tool, and the output achieved for logic 1 is only 6% of the maximum normalized power. I need to improve it.

Is there any possibility in the R-Soft CAD layout tool to design an amplifier?

Can anyone say how to design an amplifier?

I've built a multi-valued logic model containing 64 nodes. Does anyone here know of a tool able to run 1,000 initial states for this simulation with synchronous updates?
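In case it helps to prototype before settling on a tool, the synchronous-update loop over many random initial states is small enough to script directly. The 3-node, 3-level network below is a toy stand-in (its rules are invented for illustration), not your 64-node model.

```python
import random

LEVELS = 3  # multi-valued: each node takes values 0, 1 or 2

# Toy update rules; a real model would have 64 of these.
rules = [
    lambda s: min(s[1] + 1, LEVELS - 1),  # node 0 is activated by node 1
    lambda s: max(s[2] - 1, 0),           # node 1 is inhibited by node 2
    lambda s: s[0],                       # node 2 copies node 0
]

def step(state):
    # Synchronous update: every node reads the state from the previous step.
    return tuple(rule(state) for rule in rules)

def run(state, steps=50):
    for _ in range(steps):
        state = step(state)
    return state

random.seed(0)
initial_states = [tuple(random.randrange(LEVELS) for _ in rules)
                  for _ in range(1000)]
final_states = {run(s) for s in initial_states}
print(final_states)  # the states reached from 1,000 random starts
```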

I have always wondered, what Trinity is?

but got no good answers. Today a thought snapped into my mind: why does there exist a logical problem of the Trinity, and why is there no such thing called the Logical Problem of the Unity of God?

Let's see what the specific defenders reply.

Dear,

I have a dataset of observed temperature data for a small region that I want to use in order to bias-correct the RCM data provided by the website **esgf-data.dkrz**. However, in the experiment tab I found "evaluation / historical / RCP4.5 / RCP8.5".

- I know these RCPs are for future scenarios (not bias-corrected), but what are the other two used for?
- In addition, I don't know which software to use to correct the data.

Logically, I should correct the "historical" experiment data based on my observed data and then do a projection for RCP4.5.
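For the correction step itself, one widely used approach is empirical quantile mapping calibrated on the historical run against the observations. The sketch below uses synthetic data to show the mechanics only; in practice, packages such as `xclim` implement this (and more careful variants) for climate series.

```python
import numpy as np

rng = np.random.default_rng(42)
obs_hist = rng.normal(15.0, 3.0, 1000)   # observed temperatures, reference period
mod_hist = rng.normal(17.0, 4.0, 1000)   # RCM "historical" run (biased warm/wide)
mod_fut  = rng.normal(19.0, 4.0, 1000)   # RCM scenario run (e.g. RCP4.5)

def quantile_map(series, model_hist, observed, n_quantiles=101):
    """Map each value through the model-historical quantiles onto observed ones."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.interp(series, np.quantile(model_hist, q), np.quantile(observed, q))

corrected_fut = quantile_map(mod_fut, mod_hist, obs_hist)
# The historical bias is removed while the future warming signal is retained.
print(corrected_fut.mean() - obs_hist.mean())
```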

In case I have no observed temperature data:

- Are there any available sources providing reliable historical temperature data?

Any guidance or explanation is appreciated.

Regards

I want to simulate an air-source heat pump with a thermal storage tank, but I get the following message:

''350 : Unable to open the file associated with an ASSIGNED logical unit number. Please check the ASSIGN statement and make sure that the file exists at the specified location''

I also want to know how I can create an external file, if necessary.

thank you

The idea is to introduce students to the topic from a more general subject: introduce the different structures of, say, propositional logic as a formal system, and from them deduce Boolean algebra (while introducing the interpretation of formulas) and further logical laws.
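As a small classroom illustration of that bridge, a truth-table check of De Morgan's law shows how a propositional tautology becomes a Boolean-algebra identity (a minimal sketch in Python):

```python
from itertools import product

def de_morgan_holds():
    # Check ¬(p ∧ q) ≡ ¬p ∨ ¬q over all four Boolean valuations.
    return all((not (p and q)) == ((not p) or (not q))
               for p, q in product([False, True], repeat=2))

print(de_morgan_holds())  # → True
```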

Hi guys, I want to test a wind turbine's performance at different rotation speeds and a fixed wind velocity. But when I used a TSR of 6, 7, 8, or 9, I found a high torque coefficient, so, multiplied by the high rotation speed, a high power coefficient results (0.78?!). What must I do to obtain physically plausible results? Thank you.
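One quick sanity check: the power coefficient is the torque coefficient times the tip-speed ratio, and for a real turbine it can never exceed the Betz limit (16/27 ≈ 0.593). The numbers below are hypothetical, chosen only to reproduce a value like 0.78:

```python
BETZ_LIMIT = 16 / 27  # ≈ 0.593, the theoretical maximum power coefficient

def power_coefficient(torque_coefficient, tsr):
    # Cp = Cq * lambda: power = torque * angular speed, non-dimensionalised.
    return torque_coefficient * tsr

cp = power_coefficient(0.13, 6)  # hypothetical Cq at TSR = 6
# A Cp above Betz signals an error in reference area, density or normalisation.
print(cp, cp > BETZ_LIMIT)
```

If the computed Cp lands above the Betz limit, the usual suspects are the reference swept area, the air density, or a unit mismatch in the torque/speed normalisation.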

I am working on some bactericide coatings which contain various amounts of copper. The goal is to observe the bacteria-killing efficiency of the coatings with copper content and exposure time for different gram-negative and gram-positive bacteria.

For some bacteria we see the expected behavior, but for some others the fluctuation of the data is very large and no clear trend is observed (with copper content or exposure time). Do you think the different roughness of the coatings could influence the results a great deal? I have read that roughness can definitely influence the antibacterial properties of a surface, but the roughness Ra of my samples differs only within the few-nanometre range: 8, 10, 15 nm, and so on.

I appreciate any helpful answer in advance.

Conditions to be accepted as an editor require the ability to generate new concepts and a certain level of experience, measured by the h-index. Journals are also expected to arrange for subject matter experts (SMEs) to review their articles. In light of the collaborative processes that cross several fields, is the h-index still the best measure of an editor? Is the SME still the best peer reviewer when we know that there is a limit on the reviewing burden SMEs can carry? Arguably, there are some topics that require specific knowledge (e.g., molecular biology, neurosurgery). Does the current trend toward multi-collaborative projects make it better to focus on the editor's ability to read the paper for understanding, adherence to protocols, structure, content, and the logical progression of supported thoughts leading to findings, recommendations, and conclusions?

Ultimately, authors have been blind to their reviewers. But if the present course is changing toward open review, then perhaps the foundation and credentials of the author/editor peer-review process may have to change, too. I welcome your thoughts on the necessity of author/editor shared subject-matter expertise.

Hello everyone,

I hope you are all well.

I am currently working on the simulation of flexible MOFs for carbon capture using Aspen Adsorption. I am trying to insert the LJMY-Langmuir isotherm using the user-model approach, as shown in the image below. However, the code could not compile because the ERF function is not available in the flowsheet environment. I would be grateful if someone could help with this issue or guide me to a better approach. Also, I am not sure if the code is logically correct.

Regards,

Yarub,

- Breaker interlock scheme

- Under-frequency relay (81) scheme

- Programmable logic controller-based load shedding

- Fast intelligent load shedding (ILS)

Asking a circular question about asking questions and providing answers as well as further questions: the process could go on ad infinitum, meaning that the process itself could circumvent (and not circumnavigate) the discovery of the genuine knowledge that we need more than ever, now that it is a matter of apocalyptic global warming.

Hi,

I would like to know how I can calculate the shielding effectiveness (SE) and split it into its components (SE_{R} and SE_{A}) from the S-parameters (S_{11} and S_{21}). I am using a VNA to obtain these two S-parameters (in dB), and now I would like to calculate the SE. I came across some equations that calculate R, T and A:

R = S_{11}^{2}; T = S_{21}^{2}; A = 1 - R - T; SE_{R} = -10log_{10}(1-R); SE_{A} = -10log_{10}(1-A); SE = SE_{R} + SE_{A}

However, I am not obtaining logical results by doing this, and I don't know if these equations are correct.
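One common pitfall with these equations is squaring the dB values directly; the S-parameters must first be converted to linear magnitude. The sketch below applies the equations as quoted in the question after that conversion. Note that whether SE_A = -10·log10(1-A) is the right absorption term is exactly what is being asked; some references instead use SE_A = 10·log10((1-R)/T).

```python
import math

def shielding_effectiveness(s11_db, s21_db):
    """SE from dB S-parameters, using the equations quoted in the question."""
    s11 = 10 ** (s11_db / 20)          # dB -> linear magnitude
    s21 = 10 ** (s21_db / 20)
    R, T = s11 ** 2, s21 ** 2          # reflected and transmitted power fractions
    A = 1 - R - T                      # absorbed power fraction
    se_r = -10 * math.log10(1 - R)
    se_a = -10 * math.log10(1 - A)
    return se_r, se_a, se_r + se_a

# Hypothetical measurement: S11 = -10 dB, S21 = -30 dB.
se_r, se_a, se = shielding_effectiveness(-10.0, -30.0)
print(round(se, 2))  # → 10.41
```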

Thank you so much for your attention and help!

Ana