# Vladik Kreinovich, PhD

University of Texas at El Paso | UTEP · Department of Computer Science

## About

- Publications: 1,378
- Reads: 68,952 (a 'read' is counted each time someone views a publication summary, such as the title, abstract, and list of authors, clicks on a figure, or views or downloads the full text)
- Citations: 8,755

## Publications

Publications (1,378)

The fact that \(\infty \) is actively used as a symbol for infinity shows that this symbol is probably reasonable in this role, but why? In this paper, we provide a possible explanation for why this is indeed a reasonable symbol for infinity.

To elicit people’s opinions, we usually ask them to mark their degree of satisfaction on a scale—e.g., from 0 to 5 or from 0 to 10. Often, people are unsure about the exact degree: 7 or 8? To cover such situations, it is desirable to elicit not a single value but an interval of possible values. However, it turns out that most people are not comfort...

Why are we using the decimal system to describe numbers? Why, all over the world, do communities with more than 150 people tend to split? In this paper, we show that both phenomena—as well as some other phenomena—can be explained if we take into account the seven plus or minus two law, according to which a person can keep in immediate memory from 5 to 9 it...

In most practical applications, we approximate the spatial dependence by smooth functions. The main exception is geosciences, where, to describe, e.g., how the density depends on depth and/or on spatial location, geophysicists divide the area into regions on each of which the corresponding quantity is approximately constant. In this paper, we provi...

The paper describes and explains the teaching strategy of Iosif Yakovlevich Verebeichik, a successful mathematics teacher at special mathematical high schools—schools for students interested in and skilled in mathematics. The resulting strategy seems counterintuitive and contrary to all the pedagogical advice. Our explanation is not complete: it wo...

To simplify the design of compilers, Noam Chomsky proposed to first transform a description of a programming language—which is usually given in the form of a context-free grammar—into a simplified “normal” form. A natural question is: why this specific normal form? In this paper, we provide an answer to this question.

In many practical situations, we have a large number of objects, too many to be able to thoroughly analyze each of them. To get a general understanding, we need to select a representative sample. For us, this problem was motivated by the need to analyze the possible effect of an earthquake on buildings in El Paso, Texas. In this paper, we provide a...

The usual formulas for gauging the quality of a classification method assume that we know the ground truth, i.e., that for several objects, we know for sure to which class they belong. In practice, we often only know this with some degree of certainty. In this paper, we explain how to take this uncertainty into account when gauging the quality of a...

According to the general idea of quantization, all physical dependencies are only approximately deterministic, and all physical “constants” are actually varying. A natural conclusion—that some physicists made—is that Planck’s constant (that determines the magnitude of quantum effects) can also vary. In this paper, we use another general physics ide...

Reasonably recent experiments show that unhappiness is strongly correlated with the excessive interaction between two parts of the brain—amygdala and hippocampus. At first glance, in situations when outside signals are positive, additional interaction between two parts of the brain that get signals from different sensors should only reinforce the p...

Why do people become addicted, e.g., to gambling? Experiments have shown that simple lotteries, in which we can win a small prize with a certain probability, are not addictive. However, if we add a second possibility—of winning a large prize with a small probability—the lottery becomes highly addictive to many participants. In this paper, we provide...

Historically, to describe numbers, some cultures used bases much larger than our usual base 10, namely, bases 20, 40, and 60. There are explanations for base 60, there is some explanation for base 20, but base 40—used in medieval Russia—remains largely a mystery. In this paper, we provide a possible explanation for all these three bases, an explana...

Since in the physical world, most dependencies are smooth (differentiable), traditionally, smooth functions were used to approximate these dependencies. In particular, neural networks used smooth activation functions such as the sigmoid function. However, the successes of deep learning showed that in many cases, non-smooth activation functions like...
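The contrast between smooth and non-smooth activation functions described above can be sketched in a few lines (a minimal illustration; the function definitions are standard, and the sample inputs are arbitrary):

```python
import math

def sigmoid(x):
    # Smooth activation: differentiable everywhere, saturates for large |x|.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Non-smooth activation: not differentiable at 0, cheap and non-saturating.
    return max(x, 0.0)

print(sigmoid(0.0))           # 0.5
print(relu(-2.0), relu(3.0))  # 0.0 3.0
```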

While we currently only observe 3 spatial dimensions, according to modern physics, our space is actually at least 10-dimensional. In this paper, on different versions of the multi-D spatial models, we analyze how the existence of the additional spatial dimensions can help computations. It turns out that in all the versions, there is some speed up—m...

The main idea behind semi-supervised learning is that when we do not have enough human-generated labels, we train a machine learning system based on what we have, and we add the resulting labels (called pseudo-labels) to the training sample. Interestingly, this idea works well, but why it works is somewhat of a mystery: we did not add any new information, so why...
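The pseudo-labeling loop described above can be sketched on a toy 1-D example (a minimal illustration using a nearest-centroid classifier; all data values are made up):

```python
# Minimal pseudo-labeling sketch: train on labeled data, label the
# unlabeled points, then retrain on the enlarged sample.

def centroids(points, labels):
    # Mean of the points in each class.
    out = {}
    for c in set(labels):
        vals = [p for p, l in zip(points, labels) if l == c]
        out[c] = sum(vals) / len(vals)
    return out

def classify(x, cents):
    # Assign x to the class with the nearest centroid.
    return min(cents, key=lambda c: abs(x - cents[c]))

labeled_x = [0.0, 1.0, 9.0, 10.0]
labeled_y = [0, 0, 1, 1]
unlabeled_x = [0.5, 9.5, 8.0]

# Step 1: train on the human-labeled sample only.
cents = centroids(labeled_x, labeled_y)

# Step 2: generate pseudo-labels for the unlabeled points.
pseudo_y = [classify(x, cents) for x in unlabeled_x]

# Step 3: retrain on the enlarged sample.
cents2 = centroids(labeled_x + unlabeled_x, labeled_y + pseudo_y)
print(pseudo_y)  # [0, 1, 1]
```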

The main idea behind a smart grid is to equip the grid with a dense lattice of sensors monitoring the state of the grid. If there is a fault, the sensors closer to the fault will detect larger deviations from the normal readings than sensors that are farther away. In this paper, we show that this fact can be used to locate the fault with high accur...
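A coarse version of this localization idea can be sketched as follows (sensor positions and deviation values here are hypothetical, and real smart-grid localization is more sophisticated than this sketch):

```python
# Sensors on a line report deviations from normal readings; deviations
# are assumed to decay with distance from the fault.

sensor_pos = [0.0, 1.0, 2.0, 3.0, 4.0]
deviation  = [0.1, 0.4, 0.9, 0.5, 0.2]  # largest near the fault

# Coarse estimate: the sensor with the largest deviation.
i = max(range(len(deviation)), key=lambda k: deviation[k])
coarse = sensor_pos[i]

# Refinement: deviation-weighted average over all sensor positions.
refined = sum(p * d for p, d in zip(sensor_pos, deviation)) / sum(deviation)

print(coarse)  # 2.0
print(round(refined, 3))
```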

In situations when we have a perfect knowledge about the outcomes of several situations, a natural idea is to select the best of these situations. For example, among different investments, we should select the one with the largest gain. In practice, however, we rarely know the exact consequences of each action. In some cases, we know the lower and...

AlphaZero and its extension MuZero are computer programs that use machine-learning techniques to play at a superhuman level in chess, go, and a few other games. They achieved this level of play solely with reinforcement learning from self-play, without any domain knowledge except the game rules. It is a natural idea to adapt the methods and techniq...

What is 1/0? Students are first taught—in elementary school—that it is undefined, then—in calculus—that it is infinity. In both cases, the answer is usually provided based on abstract reasoning. But what about the practical meaning? In this paper, we show that, depending on the specific practical problem, we can have different answers to this quest...

In his unpublished paper, the famous logician Kurt Gödel provided arguments in favor of the existence of God. These arguments are presented in a very formal way, which makes them difficult to understand for many interested readers. In this paper, we describe a simplifying modification of Gödel’s proof which will hopefully make it easier to understan...

According to the analysis by the French philosopher Jean-Paul Sartre, the famous French poet and essayist Charles Baudelaire described (and followed) two main—somewhat unusual—ideas about art: that art should be vague, and that to create an object of art, one needs to aim for uniqueness. In this paper, we provide an algorithm-based explanation for...

Nobelist physicist Lev Landau was known for applying mathematical and physical reasoning to human relations. His advice may have been somewhat controversial, but it was usually well motivated. However, there was one piece of advice for which no explanation remains—that a person should not marry his/her first and second true loves, and only start thinkin...

It is a known empirical fact that people overestimate small probabilities. This fact seems to be inconsistent with the fact that we humans are the product of billions of years of evolution—and that we therefore perceive the world as accurately as possible. In this paper, we provide a possible explanation for this seeming contradiction.

In this paper, we show that many seemingly irrational Biblical ideas can actually be rationally interpreted: that God is everywhere, that we can only say what God is not, that God’s name is holy, why you cannot bless as many people as you want, etc. We do not insist on our interpretations; there are probably many others; our sole objective was to s...

Sigmund Freud famously placed what he called the Oedipus complex at the center of his explanation of psychological and psychiatric problems. Freud’s analysis was based on anecdotal evidence and intuition, not on solid experiments—as a result, for a long time, many psychologists dismissed the universality of Freud’s findings. However, lately, experimen...

In many practical situations, we need to estimate our degree of belief in a statement \( A\, \& \,B\) when the only thing we know are the degrees of belief a and b in combined statements A and B. An algorithm for this estimation is known as an “and”-operation, or, for historical reasons, a t-norm. Usually, “and”-operations are selected in such a wa...
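For illustration, here are three standard "and"-operations (t-norms) that such a selection typically chooses among (the definitions are standard; the sample degrees of belief are arbitrary):

```python
# Three standard t-norms for combining degrees of belief a and b
# in statements A and B into an estimate for A & B.

def t_min(a, b):         # Zadeh t-norm: min(a, b)
    return min(a, b)

def t_product(a, b):     # product t-norm: a * b
    return a * b

def t_lukasiewicz(a, b): # Lukasiewicz t-norm: max(a + b - 1, 0)
    return max(a + b - 1.0, 0.0)

a, b = 0.7, 0.8
print(t_min(a, b),
      round(t_product(a, b), 2),
      round(t_lukasiewicz(a, b), 2))  # 0.7 0.56 0.5
```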

Among the most efficient characteristics of a probability distribution are its moments and, more generally, generalized moments. One of the most adequate numerical characteristics describing human behavior is expected utility. In both cases, the corresponding characteristic is the sum of the results of applying appropriate nonlinear functions t...
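The common form of both characteristics, an average of a nonlinear function applied to the observed values, can be sketched as follows (the sample values are made up):

```python
# A generalized moment is the sample mean of f(x) for some nonlinear f;
# ordinary moments and empirical expected utility both have this shape.

def generalized_moment(sample, f):
    return sum(f(x) for x in sample) / len(sample)

sample = [1.0, 2.0, 3.0, 4.0]

mean   = generalized_moment(sample, lambda x: x)       # first moment
second = generalized_moment(sample, lambda x: x ** 2)  # second moment
print(mean, second)  # 2.5 7.5
```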

Please send your abstracts (or copies of papers that you want to see reviewed here) to vladik@utep.edu, or by regular mail to Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA…

Somewhat surprisingly, several formulas of quantum physics—the physics of micro-world—provide a good first approximation to many social phenomena, in particular, to many economic phenomena, phenomena which are very far from micro-physics. In this paper, we provide three possible explanations for this surprising fact. First, we show that several for...

At present, the most successful machine learning technique is deep learning, which uses the rectified linear activation function (ReLU) \(s(x) = \max (x,0)\) as a non-linear data processing unit. While this selection was guided by general ideas (which were often imprecise), the selection itself was still largely empirical. This leads to a natural questi...

Predictions are usually based on what is called laws of nature: many times, we observe the same relation between the states at different moments of time, and we conclude that the same relation will occur in the future. The more times the relation repeats, the more confident we are that the same phenomenon will be repeated again. This is how Newton’...

At present, the most efficient deep learning technique is the use of deep neural networks. However, recent empirical results show that in some situations, it is even more efficient to use “localized” learning—i.e., to divide the domain of inputs into sub-domains, learn the desired dependence separately on each sub-domain, and then “smooth” the resu...

We consider a general model of financial flows and prices with multiple sectors and instruments. Each sector optimizes the composition of assets and liabilities in its portfolio, whose utility is given by a quadratic function constrained to satisfy the accounting identity that appears in flow-of-funds accounts and the equilibrium conditions that gu...

This work is devoted to the creation of effective optical logic systems based on the use of a light emitter of a certain color directly as a fuzzy variable—the carrier of logical information and the basis for building logical solutions by transforming the light emitter with appropriate light filters. Optical processing of color information, which refle...

Among many research areas to which Ron Yager contributed are decision making under uncertainty (in particular, under interval and fuzzy uncertainty) and aggregation—where he proposed, analyzed, and utilized ordered weighted averaging (OWA). The OWA algorithm itself provides only a specific type of data aggregation. However, it turns out that if we...

This book focuses on an overview of AI techniques, their foundations, their applications, and remaining challenges and open problems. Many artificial intelligence (AI) techniques do not explain their recommendations. Providing natural-language explanations for numerical AI recommendations is one of the main challenges of modern AI. To provide s...

Experts usually express their degrees of belief in their statements by the words of a natural language (like "maybe", "perhaps", etc.). If an expert system contains the degrees of beliefs t(A) and t(B) that correspond to the statements A and B, and a user asks this expert system whether "A & B" is true, then it is necessary to come up with a reason...

Multi-view techniques help us reconstruct a 3-D object and its properties from its 2-D (or even 1-D) projections. It turns out that similar techniques can be used in processing uncertainty—where many problems can be reduced to a similar task of reconstructing properties of a multi-D object from its 1-D projections. In this chapter, we provide an overv...

In practice, we often need to find regression parameters in situations when for some of the values, we have several results of measuring this same value. If we know the accuracy of each of these measurements, then we can use the usual statistical techniques to combine the measurement results into a single estimate for the corresponding value. In so...

It is known that, in general, people overestimate the probabilities of joint events. In this paper, we provide an explanation for this phenomenon – an explanation based on Laplace Indeterminacy Principle and Maximum Entropy approach. Keywords: Subjective probability, Interval uncertainty, Maximum Entropy approach, Laplace Indeterminacy Principle, Fuzzy logic

Joule’s Energy Conservation Law was the first “meta-law”: a general principle that all physical equations must satisfy. It has led to many important and useful physical discoveries. However, a recent analysis seems to indicate that this meta-law is inconsistent with other principles—such as the existence of free will. We show that this conclusion a...

While Deep Neural Networks (DNNs) have shown incredible performance on a variety of data, they are brittle and opaque: they are easily fooled by the presence of noise, and the underlying reasoning for their predictions or choices is difficult to understand. This focus on accuracy at the expense of interpretability and robustness caused little concern since, un...

As a system becomes more complex, at first, its description and analysis become more complicated. However, a further increase in the system's complexity often makes this analysis simpler. A classical example is the Central Limit Theorem: when we have a few independent sources of uncertainty, the resulting uncertainty is very difficult to describe, but...

One of the most effective image processing techniques is the use of convolutional neural networks, where we combine intensity values at grid points in the vicinity of each point. To speed up computations, researchers have developed a dilated version of this technique, in which only some points are processed. It turns out that the most efficient cas...
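A 1-D sketch of dilation (a minimal illustration, not the paper's construction): with dilation d, the kernel's taps are applied to inputs d steps apart, so a small kernel covers a wider receptive field without extra weights.

```python
# Valid-mode 1-D dilated convolution (correlation form, no padding).

def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]  # simple difference kernel

print(dilated_conv1d(x, k, 1))  # [-2, -2, -2, -2]
print(dilated_conv1d(x, k, 2))  # [-4, -4]
```

With dilation 2 the same 3-tap kernel spans 5 input points instead of 3, which is the speed/coverage trade-off the abstract refers to.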

In this paper, we give a general explanation of why there are 3 basic colors and 4 basic tastes. One of the advantages of having an explanation on a system level (without involving physiological details) is that a general explanation works not only for humans, but for potential extra-terrestrial intelligent beings as well.

The aim of the 19th World Congress of the International Fuzzy Systems Association and the 12th Conference of the European Society for Fuzzy Logic and Technology, IFSA-EUSFLAT 2021, is to bring together researchers dealing with the theory and applications of computational intelligence, fuzzy logic, fuzzy systems, soft computing and related areas. T...

This book presents innovative intelligent techniques, with an emphasis on their biomedical applications. Although many medical doctors are willing to share their knowledge – e.g. by incorporating it in computer-based advisory systems that can benefit other doctors – this knowledge is often expressed using imprecise (fuzzy) words from natural langua...

Mainly focusing on processing uncertainty, this book presents state-of-the-art techniques and demonstrates their use in applications to econometrics and other areas. Processing uncertainty is essential, considering that computers – which help us understand real-life processes and make better decisions based on that understanding – get their informa...

This book deals with the effect of public and semi-public companies on the economy. In traditional economic models, several private companies – interested in maximizing their profit – interact (e.g., compete) with each other. Such models help to avoid wild oscillation in production and prices (typical for uncontrolled competition), and to come up with...

This book constitutes the full research papers and short monographs developed on the basis of the refereed proceedings of the International Conference: Information and Communication Technologies for Research and Industry (ICIT 2020). The book brings accepted research papers which present mathematical modelling, innovative approaches and methods of s...

This book lists current and potential biomedical uses of computational intelligence methods. These methods are used in diagnostics and treatment of such diseases as cancer, cardiac diseases, pneumonia, stroke, and COVID-19. Many biomedical problems are difficult; so, often, the current methods are not sufficient, and new methods need to be developed. T...

Among the main fundamental challenges related to physics and human intelligence are: How can we reconcile free will with the deterministic character of physical equations? What is the physical meaning of the extra spatial dimensions needed to make quantum physics consistent? And why are we often smarter than brain-simulating neural networks? In thi...

Purpose
In real life, we only know the consequences of each possible action with some uncertainty. A typical example is interval uncertainty, when we only know the lower and upper bounds on the expected gain. A usual way to compare such interval-valued alternatives is to use the optimism–pessimism criterion developed by Nobelist Leo Hurwicz. In thi...
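The Hurwicz criterion mentioned above combines the two bounds as H_alpha([l, u]) = alpha * u + (1 - alpha) * l, where alpha in [0, 1] measures optimism. A minimal sketch (the two intervals are illustrative):

```python
# Hurwicz optimism-pessimism criterion for interval-valued gains.

def hurwicz(interval, alpha):
    lo, hi = interval
    return alpha * hi + (1.0 - alpha) * lo

alternatives = {"A": (0.0, 10.0), "B": (4.0, 5.0)}

# An optimist (alpha = 0.8) prefers the wide, risky interval A ...
best_optimist = max(alternatives, key=lambda k: hurwicz(alternatives[k], 0.8))
# ... while a pessimist (alpha = 0.2) prefers the safer B.
best_pessimist = max(alternatives, key=lambda k: hurwicz(alternatives[k], 0.2))

print(best_optimist, best_pessimist)  # A B
```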

Consider a semi-mixed duopoly with two producers, one a semi-public company and the other a private firm. The companies supply a homogeneous product, with expenditures estimated by cost functions of the output volume of each producer i. The market-clearing supply is specified by a demand (inverse price) function, whose argument p is the pr...

Consider an oligopoly of at least two producers of a homogeneous good, with cost functions depending on the supply of each producer i. Consumers’ demand is described by a demand function, whose argument p is the market price established by a cleared market.

As usual, we formulate the Tolls Optimization Problem (TOP) as a single-leader-multi-follower game that occurs in a multi-commodity highway network. The usual parameters for this formulation are the following.

In Chap. 2 we presented mathematically rigorous proofs of the conjectures (cf., [1]) concerning the behavior of the semi-public company and private firm of a semi-mixed duopoly of a homogeneous good. The main difference of this work from the classical duopoly models is in the presence of one producer who maximizes not its net profit, but the convex...

Purpose
In many real-life situations, we do not know the exact values of the expected gain corresponding to different possible actions, we only have lower and upper bounds on these gains – i.e., in effect, intervals of possible gain values. The purpose of this study is to describe all possible ways to make decisions under such interval uncertainty....

Purpose
In many real-life situations ranging from financial to volcanic data, growth is described either by a power law – which is linear in the log-log scale – or by a quadratic dependence in the log-log scale. The purpose of this paper is to explain this empirical fact.
Design/methodology/approach
The authors use natural scale invariance requirements....

Purpose
The current pandemic is difficult to model – and thus difficult to control. In contrast to the previous epidemics, whose dynamics were smooth and well described by the existing models, the statistics of the current pandemic are highly oscillating. The purpose of this paper is to explain these oscillations and to see how this explanation can...

Millions of lines of code are written every day, and it is not practically possible to thoroughly test all this code in all possible situations. In practice, we need to be able to separate code which is more likely to contain bugs – and which thus needs to be tested more thoroughly – from code which is less likely to contain flaws....

In this paper, we show that a practical need for synchronization of localization sensors leads to an interval-uncertainty problem. In principle, this problem can be solved by using the general linear programming algorithms, but this would take a long time - and this time is not easy to decrease, e.g., by parallelization since linear programming is...

In many practical applications ranging from self-driving cars to industrial application of mobile robots, it is important to take interval uncertainty into account when performing odometry, i.e., when estimating how our position and orientation (‘pose’) changes over time. In particular, one of the important aspects of this problem is detecting mism...

One of the biases potentially affecting systems engineers is the confirmation bias, when instead of selecting the best hypothesis based on the data, people stick to the previously-selected hypothesis until it is disproved. In this paper, on a simple example, we show how important it is to take care of this bias: namely, that because of this bias, w...

Changes in the elderly’s depression level result from a large number of small independent factors. Such situations are ubiquitous in applications. In most such cases, due to the Central Limit Theorem, the corresponding distribution is close to Gaussian. For changes in the elderly’s depression level, however, the empirical distribution is far fr...

One of the main motivations for designing computer models of complex systems is to come up with recommendations on how to best control these systems. Many complex real-life systems are so complicated that it is not computationally possible to use realistic nonlinear models to find the corresponding optimal control. Instead, researchers make recomme...

Once we have an adequate description of the users’ preferences and of the corresponding application domain, we need to come up with a system design which is the most appropriate for this setting. One of the challenges in searching for such a design is that we need to take into account many different aspects of the resulting system. In many practica...

Once we have an adequate description of the users’ preferences and of the corresponding application domain, we need to come up with a system design which is the most appropriate for this setting. One of the ways to come up with a good design is to use the experience of successful similar systems—engineering and even biological. Examples of such sys...

To properly design a system, we need to know what the current state is and what the dynamics of this application domain are. Often, an important part of this information comes from expert estimates. Dealing with expert estimates is challenging: while measurement results come with guaranteed bounds on the corresponding measurement inaccuracy, the on...

In the ideal world, we should be able to ask each user’s opinion about each alternative, but for large systems, with many possible alternatives, this is not realistic. Therefore, we need to extrapolate the user’s preferences based on partial information that we can elicit from the user.

One of the biases potentially affecting systems engineers is the confirmation bias, when instead of selecting the best hypothesis based on the data, people stick to the previously-selected hypothesis until it is disproved. In this chapter, on a simple example, we show how important it is to take care of this bias: namely, that because of this bias, we...

In analyzing information about the application domain, it is important to take into account that many real-world processes are probabilistic. In many cases, the corresponding probability distributions are Gaussian (normal)—which makes perfect sense, since such processes are affected by many independent factors and it is known that in such cases, th...

Once we have the information about the system, information coming from measurements and from expert estimates, we use this information to come up with a model describing the system. The usual way to come up with such a model is to formulate several different hypotheses and to select the one that best fits the data. Techniques for formulating hypoth...

Main objectives of systems engineering: a brief reminder. One of the main objectives of systems engineering is to design, maintain, and analyze systems that help the users.

## Projects

Projects (12)