Автоматизація технологічних і бізнес-процесів Volume 16, Issue 4 / 2024
http://www.atbp.ontu.edu.ua/
UDC 681.586
AN ASSESSMENT METHOD FOR THE CONTROL SYSTEMS QUALITY
Manko G. I.
Ukrainian State University of Science and Technologies, Dnipro, Ukraine
ORCID: https://orcid.org/0000-0002-2242-5064
E-mail: gen.iv.manko@gmail.com
Copyright © 2024 by author and the journal “Automation of technological and business processes”.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0
DOI:
Abstract. Possibilities and methods of applying the concept of uncertainty in order to assess the quality of
control are investigated. An analysis of the approaches currently used for uncertainty assessment is carried
out. The use of the informational approach for this purpose is substantiated. It is proposed to use informational
uncertainty as a criterion for the quality of control tools. For this, the amount of negative information
(misinformation) caused by the imperfection of management methods and devices is calculated. The method
of estimating the amount of misinformation is based on Bongard's concept of uncertainty. Misinformation is
considered as Bongard's negative useful information. The amount of misinformation is the difference between
the Shannon entropy and the Bongard’s uncertainty and is used as a criterion for absolute information
uncertainty. The criterion of relative information uncertainty is also proposed as the ratio of the amount of
misinformation introduced by the control tool to the maximum possible value of misinformation. The maximum
value is the amount of misinformation at zero Shannon entropy. Mathematical expressions for calculating the
absolute and relative uncertainty of control systems are given.
Formulas for calculating deterministic analogs of Shannon's entropy and Bongard's uncertainty are proposed
to assess the quality of control tools that are investigated by non-statistical methods. Appropriate expressions
for calculating criteria of absolute and relative uncertainty based on transient processes of control systems
are derived.
The practical use of the proposed method is shown. To demonstrate the use of the criterion of information
uncertainty, simulation of the PID controller was carried out using Scilab/Xcos tools. The vectors of input and
output values obtained as a result of modeling were processed using the formulas introduced in this article.
The criterion of relative information uncertainty was applied to compare the quality of PID controllers that
were discretized by different methods.
Key words: control system, criterion, quality of control, uncertainty measure, entropy, Bongard's uncertainty, misinformation, information saturation, Scilab/Xcos.
Introduction. During the analysis and synthesis of control systems, the task of assessing the effectiveness and
quality of systems arises. Today, there are many criteria of effectiveness and quality, which are often
contradictory. To assess the quality of control, the following are used: overshoot, oscillation, duration of the
transient process, settling time, time to reach the first maximum of the controlled value, stability margin,
response speed, frequency of natural oscillations of the system, etc. Improving one indicator leads to
deterioration of another. It would be more convenient to have a single universal criterion that provides a
comprehensive assessment of the quality of control tools.
Thus, there is a need to develop a universal criterion for assessing the accuracy of generating control actions, one that takes into account the various factors affecting the quality of the designed control system.
It is known that a high degree of generalization of patterns and phenomena in a wide variety of areas is achieved
by using methods and concepts of information theory. This allows one to abstract from specific physical
processes occurring in the designed system. Many researchers view information as a methodological basis for
generalization and simplification.
It is necessary to analyze information processes, estimate the amount of information circulating in the system,
and calculate the degree of its distortion. Such characteristics of control systems as complexity, orderliness,
organization and entropy are used. However, such approaches have not found wide application in the practice
of analysis and synthesis of control systems.
Recently, the Uncertainty Approach (UA) has become established in measurement theory. But uncertainty is not found only in measurements: to a large extent, this concept also applies to problems of control, so it is expedient to apply UA to the quality of control tools as well.
In paper [1], a similar problem was solved for assessing the quality of measuring instruments, where it was proposed to use an information criterion. That work introduced the concept of information uncertainty, estimated by the amount of negative useful information, that is, the misinformation introduced by the measuring instrument. This approach is also useful for solving problems of the analysis and synthesis of control systems.
Literature review. Since Shannon introduced the concept of information entropy as a quantitative measure of the uncertainty of a source of information, many researchers have proposed other approaches to estimating uncertainty.
The first push for this was made by Alfréd Rényi in his report at the 4th Berkeley Symposium [2]. He considered the problem of estimating the amount of uncertainty of the distribution ℘, that is, the amount of uncertainty concerning the outcome of an experiment whose possible results have the probabilities p1, p2, …, pn. Rényi pointed out that the Shannon entropy

H(℘) = -Σ_{i=1}^{n} p_i log2 p_i (1)

is characterized by the following postulates:
(a) H(p1, p2, …, pn) is a symmetric function of its variables for n = 2, 3, …
(b) H(p, 1 - p) is a continuous function of p for 0 ≤ p ≤ 1.
(c) H(1/2, 1/2) = 1.
(d) H[℘*℺] = H(℘) + H(℺) for two probability distributions ℘ = (p1, p2, …, pn) and ℺ = (q1, q2, …, qn).
Rényi indicated that there are many quantities other than (1) which satisfy the postulates above. He suggested using the following quantity as a measure of the entropy of the distribution ℘ = (p1, p2, …, pn):

H_α(℘) = (1 / (1 - α)) log2 (Σ_{i=1}^{n} p_i^α),

where α > 0 and α ≠ 1. He called it the entropy of order α of the distribution ℘.
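As a quick illustration, the order-α entropy can be computed alongside (1); the distribution below is an arbitrary example of ours, not one from the paper:

```python
import math

def shannon_entropy(p):
    # H(p) = -sum p_i log2 p_i, skipping zero-probability outcomes
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def renyi_entropy(p, alpha):
    # H_alpha(p) = (1 / (1 - alpha)) * log2(sum p_i^alpha), alpha > 0, alpha != 1
    return math.log2(sum(pi ** alpha for pi in p if pi > 0)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))          # 1.75
print(renyi_entropy(p, 2.0))       # collision entropy, never exceeds H(p)
print(renyi_entropy(p, 1.000001))  # tends to H(p) as alpha -> 1
```

The last line illustrates numerically that the Shannon entropy is recovered in the limit α → 1.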
The authors of the article [3] reviewed entropy measures for uncertainty quantification that appeared after Rényi, such as Tsallis entropy, Sample entropy, Permutation entropy, Approximate entropy, and Transfer entropy. It is shown there that information is the decline in disorder and ambiguity, uncertainty refers to the unlikelihood of logical reasoning, entropy is the expected information, and ignorance is the lack of knowledge regarding the uncertainty.
Classical information theory is based on the use of probabilistic characteristics. As shown in [4], evidence
theory is able to better handle unknown and imprecise information. Owing to its advantages, evidence theory
has more flexibility and effectiveness for modeling and processing uncertain information.
The essence of the evidence theory is described in sufficient detail in [5]. Evidence theory extends classical
probability theory. It is based on the basic probability assignment concept (b.p.a.), a generalization of the
concept of the probability distribution in probability theory. Each b.p.a. in evidence theory has a belief function
and a plausibility function associated with it. The belief (plausibility) value of a set is the minimum (maximum)
support of information represented by the b.p.a. on that set.
As evidence theory generalizes probability theory, there are more types of uncertainty in evidence theory than in probability theory. In particular, Yong Deng [6] proposed a new uncertainty measure named Deng entropy:

E_d(m) = -Σ_{A⊆Y} m(A) log2 (m(A) / (2^|A| - 1)),

where m is a mass function. If Y is a set of mutually exclusive and collectively exhaustive events denoted as Y = {θ1, θ2, …, θ|Y|}, then a mass function is a mapping

m: 2^Y → [0, 1],

satisfying the conditions m(∅) = 0 and Σ_{A⊆Y} m(A) = 1.
Deng entropy is a generalization of Shannon entropy, since the value of Deng entropy is identical to that of Shannon entropy when the b.p.a. defines a probability measure.
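A short sketch of the definition (with hypothetical b.p.a. values of our choosing) shows both the general formula and its reduction to Shannon entropy when every focal element is a singleton:

```python
import math

def deng_entropy(m):
    # E_d(m) = -sum over focal elements A of m(A) * log2( m(A) / (2^|A| - 1) )
    e = 0.0
    for A, mass in m.items():
        if mass > 0:
            e -= mass * math.log2(mass / (2 ** len(A) - 1))
    return e

# Singleton focal elements: the b.p.a. is an ordinary probability
# distribution and Deng entropy coincides with Shannon entropy.
m1 = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
print(deng_entropy(m1))  # 1.0

# A non-singleton focal element adds extra (evidential) uncertainty.
m2 = {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6}
print(deng_entropy(m2))
```

For m1 the result equals the Shannon entropy of a fair coin; for m2 the value is larger, reflecting the ambiguity carried by the composite focal element.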
Paper [7] deals with the assessment of the randomness of a certain variable X and with different criteria that can be used to evaluate randomness. Optimal bounds between entropic quantities and statistical quantities are established there, in an interplay between information theory and statistics.
Article [8] states that there are situations where probabilistic measures of entropy do not work. To deal with such situations, instead of taking the probability, the idea of fuzziness can be explored. In that paper, a two-parameter generalized measure of fuzzy entropy is proposed, in which the membership function µ(xi) of the fuzzy set is used instead of the probability p(xi). This value satisfies all the properties of fuzzy entropy; thus it is a valid measure of fuzzy entropy.
In study [9], a new intuitionistic fuzzy entropy-based algorithm is proposed for feature selection before classification tasks in an information system. For this purpose, a new intuitionistic fuzzy entropy has been developed to measure feature entropy (uncertainty) as a parameter for feature selection.
A certain contribution to the development of uncertainty assessment methods was made by Ukrainian researchers. Report [10] proposes an information model of the functioning of advertising. It introduces the ideas of useful and harmful (redundant) information, as well as the concept of the user's thesaurus. The effectiveness of advertising is determined by the mutual influence of useful and redundant information and by the perception of positive information by the recipient. Analysis of the information model of advertising leads to the conclusion that advertising functions as a two-stage process, due to the influence of redundant information on the useful information that should be perceived by the recipient. In the case of advertising, the redundant information is always harmful.
Let I1(t) and I2(t) be the functions of accumulation of positive and negative information. In [10] they are described by a pair of coupled linear differential equations involving T, the thesaurus of the recipients; β, the coefficient of influence of the negative information on the positive one; and β′, the coefficient of influence of the positive information on the negative one, where it is presumed that β, β′ < 1. Solving these equations gives I1(t) and I2(t) explicitly and, for large enough t, their limiting values; these results show that the accumulation of both kinds of information depends on their mutual influence. On this basis, a criterion of advertising effectiveness expressed through β and β′ is obtained.
The article [11] deals with the analysis of the reliability and objectivity of information that can be found on the internet, and the objectivity and reliability of such information is compared with the system's behavior. The terms "useful" and "useless" information are introduced. On the basis of Shannon's law of the connection between information and entropy as the measure of a system's organization, the notion of information chaos is analyzed, illustrating the growth of entropy in such a system.
A parameter is described which has to discern authentic useful information, available for analysis and for obtaining new knowledge, from false and biased information. A variant of the general scheme of a dynamic information system, reflecting the appearance of inaccurate information, is given.
The vast majority of the considered articles represent exclusively theoretical developments. As an example of
the practical application of information approaches, we can cite [12, 13].
The article [12] provides examples of the practical use of the entropy approach to statistical hypothesis testing problems. The author considers relative entropy as the difference between two hypotheses H0 and H1. The Kullback-Leibler divergence is used as a measure of the discrepancy between the two distributions ℘ = (p1, p2, …, pn) and ℺ = (q1, q2, …, qn):

D(℘ ‖ ℺) = Σ_{i=1}^{n} p_i log2 (p_i / q_i).

It is also argued that the relative entropy is a special case of the Rényi entropy of order α → 1.
Another option is the Jeffreys divergence, a symmetric version of the Kullback-Leibler divergence. For distributions ℘ = (p1, p2, …, pn) and ℺ = (q1, q2, …, qn):

J(℘, ℺) = D(℘ ‖ ℺) + D(℺ ‖ ℘) = Σ_{i=1}^{n} (p_i - q_i) log2 (p_i / q_i).
As for [13], structural reliability assessment is considered there for larger and more complex systems with implicit performance functions, also called black-box problems.
Kriging is a widely used and accurate interpolation method that involves constructing a meta-model to approximate an existing computer simulation model through the design of experiments and a small number of simulation-model evaluations. Using an active learning strategy, the Kriging model is iteratively updated until the desired level of accuracy is achieved.
That article aims at improving the efficiency of training the Kriging metamodel by a proposed learning function based on information entropy theory, namely IE-AK. By progressively exploring the important samples to achieve a trade-off between accuracy and efficiency, the new method greatly accelerates the convergence of the Kriging metamodel without loss of accuracy, which is validated by a series of numerical examples.
Purpose and Objectives. The purpose of this report is to justify the application of the information uncertainty
criterion as a generalized criterion for the quality of control systems. This necessitates solving the following
problems:
- to formulate a precise statement of the task;
- to develop a quality assessment method based on the use of information criteria;
- to demonstrate the practical use of the described criteria.
Methods. The methods of the theory of automatic control, mathematical modeling, and mathematical statistics were used during the research. The open-source numerical computation software Scilab was used as a tool. Modeling of systems was carried out with the Xcos simulation system.
Results and Discussion. The design and operation of control systems are carried out under conditions of uncertainty caused by incomplete knowledge of the processes occurring in the system, the unreliability of technical and software tools, etc. The control device generates control actions based on information about the state of the control object and/or about the external disturbances acting on this object.
Let us consider a typical structural diagram of the control process (Fig. 1).
Figure 1 Structural diagram of the control process
The control device generates the control action Y based on the input information, which is either data on the deviation of the state parameter of the control object from the specified value, or data on the disturbance Z acting on the object. For generalization, we denote the input of the control device as X. Thus, the control device performs the functional transformation Y = f(X).
Let us assume that P = {pi} is the probability distribution of input X, and Q = {qi} is the probability distribution
of output Y. For an ideal control device, P = Q. In reality, these distributions do not coincide due to the
imperfection of the device. The degree of mismatch can be estimated by the Bongard uncertainty value [14]:

N(p/q) = -Σ_{i=1}^{n} p_i log2 q_i.

According to Gibbs' theorem, the inequality

-Σ_{i=1}^{n} p_i log2 p_i ≤ -Σ_{i=1}^{n} p_i log2 q_i

is satisfied for all probability distributions (p_i, i ≤ n) and (q_i, i ≤ n) and for all n; the equality holds if and only if p_i = q_i for all i ≤ n. The proof of the theorem is given in [15].
In general,

N(p/q) = H(p) + D(P ‖ Q),

where H(p) is Shannon's information entropy and D(P ‖ Q) is the Kullback-Leibler divergence between the distributions P and Q.
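The relationship between H(p) and N(p/q) can be checked numerically; the distributions here are arbitrary examples of ours:

```python
import math

def shannon_entropy(p):
    # H(p) = -sum p_i log2 p_i
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def bongard_uncertainty(p, q):
    # N(p/q) = -sum p_i log2 q_i
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.6, 0.3, 0.1]   # distribution of the input X
q = [0.5, 0.3, 0.2]   # distribution of the output Y of an imperfect device
h = shannon_entropy(p)
n = bongard_uncertainty(p, q)
print(h <= n)                                      # True: Gibbs' inequality
print(abs(bongard_uncertainty(p, p) - h) < 1e-12)  # True: equality when p == q
```

The excess n - h is exactly the Kullback-Leibler divergence between the two distributions.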
It is known that the amount of information is a measure of the reduction of uncertainty regarding the object under study. Bongard introduced the concept of the useful information contained in the hypothesis q with respect to a problem with an answer probability distribution p. If it is assumed that the useful information is zero when qi = 1/n, the increment in useful information is given by the expression:

I = log2 n - N(p/q).
Since, due to the imperfection of the control device, the Bongard uncertainty increases relative to the entropy H(p), it can be argued that such a device introduces misinformation in the amount

ΔI = N(p/q) - H(p).
Following the findings of the article [1], we will use the last expression as a measure of the information uncertainty (IU) of control devices. We will also use the relative information uncertainty (RIU) as the ratio of IU to its maximum value, which occurs at H(p) = 0:

δ = (N(p/q) - H(p)) / N(p/q) = 1 - H(p) / N(p/q). (2)
For control systems in which inputs and outputs are continuous quantities, sums are replaced by integrals, and discrete probabilities are replaced by probability densities. Shannon entropy is replaced with differential entropy:

h(p) = -∫ p(x) log2 p(x) dx.

Instead of Bongard's uncertainty we will use the following expression:

N(p/q) = -∫ p(x) log2 q(x) dx.

So, the formula for calculating RIU will take the following form:

δ = 1 - h(p) / N(p/q).
The parameters of discrete and continuous distribution laws can be obtained through statistical experiments.
Relative information uncertainty has the following properties:
- for an ideal control device, RIU is zero;
- since H(p) ≤ N(p/q), RIU is a non-negative value;
- RIU approaches 1 when N(p/q) >> H(p).
Using RIU makes it possible to compare the quality of various tools and methods of control to select the
optimal option.
In addition, RIU has the following advantages:
- it provides an assessment of the quality of control using one generalized criterion;
- it considers the laws of distribution of the input and output signals, which also makes it possible to evaluate how suitable a particular device is for a specific input value X;
- it does not require complex mathematical calculations.
The quality of controllers' operation is usually assessed by observing transient processes, i.e. the change in the output value Y(t) that occurs when test signals X(t) are supplied to the controller. In this case, there is no possibility of directly applying the relationships described above. It becomes necessary to assess the amount of information (and misinformation) using non-statistical methods. To do this, we will introduce a number of analogs for the quantities used in calculating the information criteria. We will assess the information saturation of the signal Y(t) using the intensity of its changes:

π_y(t) = (1/ε_y) |dY(t)/dt|,

where ε_y is the resolution threshold of Y(t) values, such as the quantization step.
As an analogue of the Shannon entropy, we will use the integral characteristic of the variability of the value Y(t):

H_y = -∫_0^1 π_y(θ) log2 π_y(θ) dθ / ∫_0^1 π_y(θ) dθ,

where θ = t/T is relative time; T is the time interval of observation of the variable Y(t); ε_t is the time resolution threshold, for example, the sampling interval of the signal in time.
An ideal regulator would be able to instantaneously track changes in the test signal X(t). We denote the output of such an ideal regulator by Ў(t). Then, as an analogue of Bongard's uncertainty, we will use the following quantity:

N_y = -∫_0^1 π_y(θ) log2 π_ў(θ) dθ / ∫_0^1 π_y(θ) dθ.
For practical calculations, it is convenient to use discrete forms:

H_y = -Σ_i π_yi log2 π_yi / Σ_i π_yi,   N_y = -Σ_i π_yi log2 π_ўi / Σ_i π_yi,

where π_yi and π_ўi are the values of the intensity of change in the outputs of the real and ideal regulators, determined for discrete moments of time θ_i. The RIU criterion is then computed by formula (2), with H_y and N_y substituted for H(p) and N(p/q).
To demonstrate the use of the criterion of information uncertainty, we will perform a PID controller simulation using Scilab/Xcos. The model is presented in Fig. 2.
Figure 2 Xcos-model
In the upper part of the diagram, a model of a continuous-time PID controller is presented; in the lower part, a discrete one. The vectors of input and output values are transferred to the Scilab workspace. These data are processed according to formula (2) by the following Scilab scenario:
// Intensity of change of the discretized controller output (analog of pi_y)
pid = abs(diff(Yd.values));
// Intensity of change of the input signal (output of the ideal regulator)
piX = abs(diff(X.values));
// Deterministic analogs of Shannon entropy and Bongard uncertainty
Sh = -sum(pid.*log2(pid))/sum(pid);
Bo = -sum(pid.*log2(piX))/sum(pid);
// Relative information uncertainty, formula (2)
critd = 1 - Sh/Bo;
disp(critd)
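For readers without Scilab, the same computation can be sketched in Python on synthetic first-order transients; the signal definitions and the function name are our assumptions, not the article's Xcos data:

```python
import math

def misinformation_criterion(y, x):
    # Deterministic RIU from sampled transients.  The intensities of change
    # pi_y = |dY| and pi_x = |dX| play the roles of the real and the ideal
    # regulator outputs (an ideal regulator would track X exactly).
    pi_y = [abs(b - a) for a, b in zip(y, y[1:])]
    pi_x = [abs(b - a) for a, b in zip(x, x[1:])]
    total = sum(pi_y)
    sh = -sum(p * math.log2(p) for p in pi_y if p > 0) / total
    bo = -sum(p * math.log2(px) for p, px in zip(pi_y, pi_x)
              if p > 0 and px > 0) / total
    return 1 - sh / bo

# Synthetic smooth test signal and a lagged "controller" response
t = [0.01 * i for i in range(500)]
x = [1 - math.exp(-ti) for ti in t]
y = [1 - math.exp(-ti / 1.2) for ti in t]
print(misinformation_criterion(x, x))  # 0.0: an ideal response introduces no misinformation
print(misinformation_criterion(y, x))  # criterion for the lagged response
```

Like the Scilab scenario, this sketch skips zero increments; a full implementation would scale the intensities by the resolution thresholds ε_y and ε_t.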
Discretization of the PID controller was performed by two methods: Euler's and Tustin's. For each of these options, the simulation was run and the amount of misinformation was calculated. The results in Table 1 show the better quality of the controller discretized by Tustin's method, which corresponds to the theory.
Table 1. Research results

Controller                        The amount of misinformation
discretized by Euler's method     0.0467428
discretized by Tustin's method    0.0565175
Conclusion. The suggested method of control quality assessment, based on the application of an information criterion, allows for a simple and effective evaluation of the accuracy and quality of technical control devices.
The generalized criterion of control quality can be the information uncertainty of the control action, based on
the use of Bongard’s uncertainty. The uncertainty assessment criterion is defined as the ratio of the amount of
misinformation introduced by the control device to the maximum possible amount.
The proposed information criterion can be effectively used both for assessing the quality of existing control
systems and for selecting methods for increasing the accuracy of designed systems.
References
[1] G. Manko and E. Titova, “Informational uncertainty of measuring instruments”, Ukrainian Metrological
Journal, no. 4, pp. 15–19, Apr. 2021. DOI: 10.24027/2306-7039.4.2021.250399
[2] A. Renyi, “On Measures of Entropy and Information”, 4th Berkeley Symposium on Mathematical Statistics
and Probability, Vol. 1, Berkeley, 20 June-30 July 1961, pp. 547-561.
[3] A. Namdari and Z. Li (Steven). “A review of entropy measures for uncertainty quantification of stochastic
processes”, Advances in Mechanical Engineering, vol. 11, no. 6, 2019. DOI:10.1177/1687814019857350.
[4] Y. Deng, “Uncertainty measure in evidence theory”, Science China Information Sciences, vol. 63, no. 11,
Nov. 2020. DOI: 10.1007/s11432-020-3006-9.
[5] J. Abellán et al., “A Variation of the Algorithm to Achieve the Maximum Entropy for Belief Functions”, Entropy, no. 25, p. 867, May 2023. DOI: 10.3390/e25060867.
[6] Y. Deng, “Deng entropy”, Chaos, Solitons & Fractals, Elsevier, vol. 91(C), pp. 549–553, 2016. DOI: 10.1016/j.chaos.2016.07.014.
[7] O. Rioul, “What Is Randomness? The Interplay between Alpha Entropies, Total Variation and Guessing”.
Phys. Sci. Forum, vol. 5, no. 30, 2022. DOI: 10.3390/psf2022005030.
[8] S. Peerzada et al., “A New Generalized Fuzzy Information Measure and Its Properties”, International Journal of Advance Research in Science and Engineering, vol. 06, no. 12, pp. 1647–1654, Dec. 2017.
[9] K. Pandey et al., “Selecting features by utilizing intuitionistic fuzzy Entropy method”, Decision Making: Applications in Management and Engineering, vol. 6, issue 1, pp. 111–133, 2023. DOI: 10.31181/dmame07012023p.
[10] A. Vinkovska et al., “Information model of the economic efficiency of advertising”, SHS Web of
Conferences, vol. 65, article no. 04022, 6 p., May 2019. DOI: 10.1051/shsconf/20196504022
[11] V. Voloshyn et al., “The Analysis of Reliability and Objectivity of Information”, Information Modelling and Knowledge Bases XXXIV, vol. 364, pp. 183–194, Jan. 2023. DOI: 10.3233/FAIA220501.
[12] E. Ustaoglu and A. Evren, “On Some Relative Entropy Statistics”, Journal of Administrative Sciences and Policy Studies, vol. 3, no. 2, pp. 75–90, Dec. 2015. DOI: 10.15640/jasps.v3n2a5.
[13] K. Yuan et al., “AK-SYS-IE: A novel adaptive Kriging-based method for system reliability assessment
combining information entropy”, Reliability Engineering & System Safety, Elsevier, vol. 246, June 2024. DOI:
10.1016/j.ress.2024.110070.
[14] M. M. Bongard, Pattern Recognition, Rochelle Park, N.J.: Hayden Book Co., Spartan Books, 1970.
[15] G. J. Klir, Uncertainty and Information. Foundations of Generalized Information Theory, John Wiley &
Sons, Inc., Hoboken, New Jersey, 2006.