Solving the Lexicographic Multi-Objective Mixed-Integer
Linear Programming Problem Using
Branch-and-Bound and Grossone Methodology
Marco Cococcionia, Alessandro Cudazzoa, Massimo Pappalardoa, Yaroslav D. Sergeyevb,c,
aUniversity of Pisa, Pisa (Italy)
bUniversity of Calabria, Rende (Italy)
cLobachevsky State University of Nizhni Novgorod (Russia)
Abstract
In the previous work (see [1]) the authors have shown how to solve a Lexicographic Multi-Objective
Linear Programming (LMOLP) problem using the Grossone methodology described in [2]. That al-
gorithm, called GrossSimplex, was a generalization of the well-known simplex algorithm, able to deal
numerically with infinitesimal/infinite quantities.
The aim of this work is to provide an algorithm able to solve a similar problem, with the addition
of the constraint that some of the decision variables have to be integer. We have called this problem
LMOMILP (Lexicographic Multi-Objective Mixed-Integer Linear Programming).
This new problem is solved by introducing the GrossBB algorithm, which is a generalization of the
Branch-and-Bound (BB) algorithm. The new method is able to deal with lower-bound and upper-bound
estimates which involve infinite and infinitesimal numbers (namely, Grossone-based numbers). After
providing theoretical conditions for its correctness, it is shown how the new method can be coupled with
the GrossSimplex algorithm described in [1], to solve the original LMOMILP problem. To illustrate how
the proposed algorithm finds the optimal solution, a series of LMOMILP benchmarks having a known
solution is introduced. In particular, it is shown that the GrossBB combined with the GrossSimplex is
able to solve the proposed LMOMILP test problems with up to 200 objectives.
Keywords
Multi-Objective Optimization; Lexicographic Optimization; Mixed-Integer Linear Programming; Numer-
ical Infinitesimals; Grossone Methodology
1. Introduction
It is well known that Linear Programming, i.e., the optimization of a linear function over a domain
defined by the intersection of linear inequalities, has attracted a lot of attention since World War II. At the
end of the '90s, multi-objective optimization problems with conflicting objectives came under intense
investigation, especially using stochastic methods aiming at approximating the Pareto optimal frontier
(see [3, 4, 5] and references given therein). Recently, lexicographic multi-objective optimization problems
have been gaining popularity (see [1, 6, 7, 8, 9]). The solution of a lexicographic multi-objective problem is
also optimal in the Pareto sense, but, of course, not all Pareto optimal solutions are lexicographically
optimal. Thus, being in general unique (when the problem is not multi-modal), the lexicographic opti-
mum is particularly interesting to find. In a previous work [1], the lexicographic multi-objective linear
programming problem (LMOLP) has been solved by introducing the GrossSimplex algorithm being a gen-
eralization of the well-known simplex algorithm able to deal with infinitesimal/infinite quantities modeled
using Grossone methodology (see [2]). The main idea of that work was to transform the multi-objective
problem into a single-objective one, where the objectives were summed up with infinitesimal weights, and
the order of the infinitesimal weights decreased with the decrease of the importance of the objectives.
In the present work, we investigate the case where some of the decision variables are integer. We
have called this class of problems LMOMILP (Lexicographic Multi-Objective Mixed-Integer Linear Pro-
gramming). The integrality constraints hugely affect the problem, similarly to what happens in the
Corresponding author. Tel.: +39 (0)984 494855.
Email addresses: marco.cococcioni@unipi.it (Marco Cococcioni), alessandro@cudazzo.com (Alessandro Cudazzo),
massimo.pappalardo@unipi.it (Massimo Pappalardo), yaro@dimes.unical.it (Yaroslav D. Sergeyev)
Preprint submitted to Elsevier December 24, 2019
single-objective case. Thus, we have decided to resort to the branch and bound (BB) approach, typically
used to solve MILP (Mixed-Integer Linear Programming) problems. The key idea here is to call the
GrossSimplex algorithm at each node visited by the BB algorithm to solve the relaxed problem (i.e., the
one without integer constraints). However, the BB algorithm needs to be generalized in order to work
with the GrossSimplex algorithm, since the latter returns as its output a bound for the optimal solution
at the current node which can be a number not only with finite but also with infinitesimal components.
The BB algorithm able to manage Grossone-based numbers and called GrossBB is introduced here, its
pruning rules and the terminating conditions are described and studied. Finally, LMOMILP test prob-
lems having known solutions are proposed, and it is shown through a number of numerical experiments that the
GrossBB algorithm coupled with the GrossSimplex algorithm successfully finds the correct solution. A
preliminary version of the present work, significantly shorter and not containing the proof of the pruning
rules contained herein, has been presented at the NUMTA’19 conference (see [10]).
In order to start, let us recall the basics of Grossone, the methodology enabling this work. The
numeral ①, called Grossone, has been introduced (see a recent survey [2]) as a basic element of a pow-
erful numeral system allowing one to express not only finite but also different infinite and infinitesimal
quantities (analogously, the numeral 1 is a basic element allowing one to express a variety of finite quan-
tities). From the foundational point of view, Grossone has been introduced as an infinite unit of measure
equal to the number of elements of the set N of natural numbers (notice that the ①-based computational
methodology is not related to non-standard analysis (see [11]) and its non-contradictoriness has been studied
in depth in [12, 13, 14]). From the practical point of view, this methodology has given rise both to a
new supercomputer patented in several countries (see [15]) and called Infinity Computer and to a variety
of applications starting from optimization (see [1, 16, 17, 18, 19, 20, 21, 22]) and going through infinite
series (see [2, 23, 24, 25, 26]), fractals and cellular automata (see [23, 27, 28, 29, 30, 31]), hyperbolic ge-
ometry and percolation (see [32, 33, 34]), the first Hilbert problem and Turing machines (see [2, 35, 36]),
infinite decision making processes, game theory, and probability (see [37, 38, 39, 40, 41, 42]), numerical
differentiation and ordinary differential equations (see [43, 44, 45, 46, 47]), etc.
The remaining text in the paper is structured as follows. In Section 2, the mixed-integer linear pro-
gramming problem (MILP) is stated and the standard BB algorithm is presented briefly. In Section 3, the
Lexicographic Multi-Objective Mixed-Integer Linear Programming (LMOMILP) problem is formalized.
The Grossone methodology is briefly presented in Section 4, while Section 5 presents the GrossBB algo-
rithm and its pruning rules, terminating conditions and branching rule. Section 6 presents five LMOMILP
test problems and their solutions obtained using the proposed algorithm, whereas Section 7 is devoted to
conclusions.
2. Mixed-Integer Linear Programming: MILP
An integer programming problem is a mathematical optimization problem in which some or all of
the variables are restricted to be integers. When both the objective function and the constraints are
linear, the terminology is the following: integer linear programming (ILP) if all variables in the problem
statement must be integer; mixed-integer linear programming (MILP) if only a subset of them must be
integer.
2.1. The MILP problem
The MILP problem can be formalized as follows:

$$\min_{x}\; c^T x \quad \text{s.t.}\; Ax \le b,\; x = \begin{bmatrix} p \\ q \end{bmatrix},\; p \in \mathbb{Z}^k,\; q \in \mathbb{R}^{n-k}, \qquad (P)$$

where $c$ is a column vector $\in \mathbb{R}^n$, $x$ is a column vector $\in \mathbb{R}^n$ (but $k$ of its variables are constrained to be integer), $A$ is a full-rank matrix $\in \mathbb{R}^{m \times n}$, and $b$ is a column vector $\in \mathbb{R}^m$. Hereinafter we assume that the feasibility region of problem $P$ is bounded and non-empty. As in any MILP problem, from problem $P$ we can define the polyhedron given by the linear constraints:

$$S \equiv \{x \in \mathbb{R}^n : Ax \le b\}. \qquad (1)$$
Let us now introduce problem $R$, the relaxed version of problem $P$, obtained from $P$ by removing the integrality constraint:

$$\min_{x}\; c^T x \quad \text{s.t.}\; Ax \le b. \qquad (R)$$

There are different techniques to find the optimal value of a MILP problem, or an approximation of it; one of these is the BB algorithm which, as explained in the next subsection, solves the relaxed problems $R$ associated to a series of sub-problems derived from $P$.
2.2. MILP solved using the LP-based BB algorithm
Before introducing the LMOMILP problem and the GrossBB algorithm, let us recall the MILP prob-
lem and its solution based on the BB algorithm combined with an LP solver. When a MILP problem $P$
is bounded and non-empty (as we have assumed above), the total number of feasible solutions is finite.
The BB approach is based on the principle that the total set of feasible solutions can be partitioned into
smaller subsets of solutions. These smaller subsets can then be evaluated systematically until the best
solution is found. The BB approach is coupled with a Linear Programming (LP) solver when it is used to
solve a MILP problem.
This method employs a tree structure (generally binary), whose nodes and branches are used as the framework for the solution process. We define $P$ as the root problem and denote by $P_{ij}$ the problem at node $(i,j)$; the nodes are enumerated and visited with the Breadth-First-Search (BFS) approach ($P_{ij}$ refers to the $j$-th problem at level $i$ of the tree). First of all, we compute the lower bound $v_I(P)$ by solving the relaxation of $P$, and the upper bound $v_S(P)$ determined with a greedy algorithm (or we assign to it the value $+\infty$).

The optimal value $v(P)$ will thus always be between these two values (integrality gap):

$$v_I(P) \le v(P) \le v_S(P). \qquad (2)$$

Hereinafter, since we shall always use binary trees, we will indicate with $P_c$ the current problem to solve (the one at the leaf node $(P_c)$), and with $P_l$ and $P_r$ the corresponding sub-problems at its left and right, respectively.
The BB algorithm uses the following pruning rules, terminating conditions and branching rule:

Theorem 1 (Pruning rules). Let $x_{opt}$ be the best solution found so far for $P$ and let $v_S(P) = c^T x_{opt}$ be the current upper bound. Considering the current node $(P_c)$ and the associated problem $P_c$:

1. If the feasible region of $P_c$ is empty, the sub-tree with root $(P_c)$ has no feasible solutions with a value lower than $c^T x_{opt}$. So we can prune this node.
2. If $v_I(P_c) \ge v_S(P)$, then we can prune at node $(P_c)$, since the sub-tree with root $(P_c)$ cannot have feasible solutions having a value lower than $v_S(P)$.
3. If $v_I(P_c) < v_S(P)$ and the optimal solution $\bar{x}$ of the relaxed problem $R_c$ of problem $P_c$ is feasible for $P$, then $\bar{x}$ is a better candidate solution for $P$; thus we can update $x_{opt}$ ($x_{opt} = \bar{x}$) and the value of the upper bound ($v_S(P) = v_I(P_c)$). Finally, prune this node according to the second rule.

It can be proved that the three pruning rules above are correct (see [48]).
Terminating conditions for the BB

1. All the remaining leaves have been visited: if all the leaves have been visited, the BB algorithm stops.
2. Maximum number of iterations reached: when a given maximum number of iterations (provided by the user at the beginning) has been reached, the BB stops.
3. $\varepsilon$-optimality reached: when the normalized difference between the global upper bound and the global lower bound is close enough to zero, we can stop:

$$\Delta(P) = \frac{v_S(P) - v_I(P)}{|v_S(P)|} \le \varepsilon, \qquad (3)$$

where the global lower bound $v_I(P)$ can be computed at any step as the minimum of the lower bounds in the queue of the problems to be solved.
Branching rule for the BB

If $v_I(P_c) < v_S(P)$ and none of the pruning rules have been applied, take $\bar{x}$, the optimal solution of the relaxation at that node, and branch on the component having the highest fractional part among those variables having integer restrictions. In case of ties, branch on the first such component. Thus we create two distinct sub-problems of the current problem, denoted as $P_l$ and $P_r$. The pseudo-code for the BB algorithm is provided in Algorithm 1.
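The rule above is easy to state in code. The sketch below is our own illustration (the function names and the tolerance are not from the paper): it selects the branching variable and builds the two child constraints.

```python
import math

def branching_index(x, integer_vars, tol=1e-6):
    """Return the index, among the components of x that must be integer,
    with the largest fractional part (ties broken by the first index),
    or None when every such component is integer within tol."""
    best, best_frac = None, 0.0
    for i in integer_vars:
        frac = x[i] - math.floor(x[i])
        if min(frac, 1.0 - frac) <= tol:
            continue  # already (epsilon-)integer: not a branching candidate
        if frac > best_frac:
            best, best_frac = i, frac
    return best

def child_bounds(x, i):
    """Constraints for the left and right children:
    x_i <= floor(x_i) and x_i >= ceil(x_i)."""
    return ("<=", math.floor(x[i])), (">=", math.ceil(x[i]))
```

For instance, for a point like x = [28.25, 52.0] with both variables integer-constrained, the rule branches on the first component, producing the children x_0 ≤ 28 and x_0 ≥ 29.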
Algorithm 1 The LP-based BB Algorithm

Inputs: maxIter and a specific MILP problem $P$, to be put within the root node $(P)$
Outputs: $x_{opt}$ (the optimal solution), $f_{opt}$ (the optimal value)

Step 0. Insert $P$ into a queue of the sub-problems that must be solved. Put $v_S(P) = +\infty$, $x_{opt} = [\,]$, and $f_{opt} = +\infty$, or use a greedy algorithm to get an initial feasible solution.

Step 1a. If all the remaining leaves have been visited (empty queue), or the maximum number of iterations has been reached, or the $\varepsilon$-optimality condition holds, then go to Step 4. Otherwise extract from the head of the queue the next problem to solve and call it $P_c$ (current problem). Remark: this policy of inserting new problems at the tail of the queue and extracting from its head leads to a breadth-first visit of the binary tree of the generated problems.

Step 1b. Solve $R_c$, the relaxed version of the problem $P_c$ at hand, using an LP solver, and get $\bar{x}$ and $f_c$ ($= c^T \bar{x}$):

$$[\bar{x},\, f_c,\, \text{emptyPolyhedron}] \leftarrow \text{LPSolver}(R_c)$$

Step 2a. If the LP solver has found that the polyhedron is empty, then prune the sub-tree of $(P_c)$ (according to Pruning Rule 1) by going to Step 1a (without branching $(P_c)$). Otherwise, we have found a new lower bound for $P_c$:

$$v_I(P_c) = f_c$$

Step 2b. If $v_I(P_c) \ge v_S(P)$, then prune the sub-tree under $P_c$ (according to Pruning Rule 2), by going to Step 1a (without branching $P_c$).

Step 2c. If $v_I(P_c) < v_S(P)$ and all components of $\bar{x}$ that must be integer are actually $\varepsilon$-integer (i.e., $\bar{x}$ is feasible), then we have found a better upper bound estimate. Thus we can update the value of $v_S(P)$ as:

$$v_S(P) = v_I(P_c).$$

In addition we set $x_{opt} = \bar{x}$ and $f_{opt} = v_I(P_c)$. Then we also prune the sub-tree under $(P_c)$ (according to Pruning Rule 3) by going to Step 1a (without branching $(P_c)$).

Step 3. If $v_I(P_c) < v_S(P)$ but not all components of $\bar{x}$ that must be integer are actually $\varepsilon$-integer, we have to branch. Select the component $\bar{x}_t$ of $\bar{x}$ having the greatest fractional part, among all the components that must be integer. Create two new nodes (i.e., problems) with a new constraint for this variable: one with a new $\le$ constraint for the rounded-down value of $\bar{x}_t$ and another with a new $\ge$ constraint for the rounded-up value of $\bar{x}_t$. Let us call the two new problems $P_l$ and $P_r$ and put them at the tail of the queue of the problems to be solved, then go to Step 1a.

Step 4. End of the algorithm.
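To see the queue/bound/prune loop of Algorithm 1 in action without an LP solver at hand, the toy below applies the same control flow to a 0/1 knapsack instance, using the fractional (greedy) relaxation as a closed-form stand-in for the LP relaxation. This is our own self-contained illustration of the loop, not the paper's implementation; note that branching on a 0/1 variable with a fractional value reduces to fixing it to 0 (left child) or 1 (right child).

```python
from collections import deque

def relaxation_bound(values, weights, cap, fixed):
    """Closed-form relaxation: fractional knapsack over the items not yet
    fixed on the path to this node. We minimise the negative profit, so
    the returned value is a lower bound. Returns None if the node is
    infeasible (Pruning Rule 1)."""
    n = len(values)
    used = sum(weights[i] for i in range(n) if fixed.get(i) == 1)
    if used > cap:
        return None
    x = [float(fixed.get(i, 0)) for i in range(n)]
    free = sorted((i for i in range(n) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    room = cap - used
    for i in free:
        x[i] = min(1.0, room / weights[i])
        room -= x[i] * weights[i]
        if room <= 0:
            break
    return -sum(values[i] * x[i] for i in range(n)), x

def bb_toy(values, weights, cap):
    """The control flow of Algorithm 1: BFS queue, bound, prune, branch."""
    best_val, best_x = float("inf"), None   # v_S(P) = +infinity
    queue = deque([{}])                     # a node = the variables fixed so far
    while queue:
        fixed = queue.popleft()             # head of the queue (Step 1a)
        relaxed = relaxation_bound(values, weights, cap, fixed)
        if relaxed is None:
            continue                        # Pruning Rule 1
        lower, x = relaxed
        if lower >= best_val:
            continue                        # Pruning Rule 2
        t = next((i for i, v in enumerate(x) if 0.0 < v < 1.0), None)
        if t is None:
            best_val, best_x = lower, x     # Pruning Rule 3: new incumbent
            continue
        queue.append({**fixed, t: 0})       # left child  (x_t fixed to 0)
        queue.append({**fixed, t: 1})       # right child (x_t fixed to 1)
    return best_val, best_x
```

On the made-up instance values = [10, 7, 4], weights = [5, 4, 3], cap = 7, the loop returns the incumbent value −11 with x = [0, 1, 1].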
3. Lexicographic Multi-Objective Mixed-Integer Linear Programming: LMOMILP
In this section we introduce the LMOMILP problem, which is stated as follows:

$$\operatorname{lexmin}\; c^{1T}x,\; c^{2T}x,\; \ldots,\; c^{rT}x \quad \text{s.t.}\; Ax \le b,\; x = \begin{bmatrix} p \\ q \end{bmatrix},\; p \in \mathbb{Z}^k,\; q \in \mathbb{R}^{n-k}, \qquad (P)$$

where $c^i$, $i = 1, \ldots, r$, are column vectors $\in \mathbb{R}^n$, $x$ is a column vector $\in \mathbb{R}^n$, $A$ is a full-rank matrix $\in \mathbb{R}^{m \times n}$, and $b$ is a column vector $\in \mathbb{R}^m$. lexmin in $P$ denotes Lexicographic Minimum and means that the first objective is much more important than the second, which is, in its turn, much more important than the third one, and so on. Sometimes in the literature this is denoted as $c^{1T}x \gg c^{2T}x \gg \ldots \gg c^{rT}x$.

As in any MILP problem, from problem $P$ we can define the polyhedron given by the linear constraints alone:

$$S \equiv \{x \in \mathbb{R}^n : Ax \le b\}. \qquad (4)$$
Thus we can define problem $R$, the relaxation of the lexicographic mixed-integer linear problem, obtained from $P$ by removing the integrality constraint on each variable:

$$\operatorname{lexmin}\; c^{1T}x,\; c^{2T}x,\; \ldots,\; c^{rT}x \quad \text{s.t.}\; Ax \le b. \qquad (R)$$

Problem $R$ is called LMOLP (Lexicographic Multi-Objective Linear Problem), and can be solved, as shown in [1], using the Grossone-based methodology from [2]. In addition, everything we have said above for the MILP problem is still valid for the LMOMILP.

Notice that the formulation of $P$ makes no use of Gross-numbers or Gross-arrays involving ①, namely, it involves finite numbers only. Hereinafter we assume that $S$ is bounded and non-empty. In the next section we briefly introduce the Grossone methodology, which will be used in Section 5 to transform problem $P$ into an equivalent formulation, based on the use of Grossone, to be solved by the GrossBB algorithm also introduced in Section 5.
4. The Grossone-based Methodology
In [2, 49, 50, 51] a computational methodology working with an infinite unit of measure called Grossone
and indicated by the numeral ① has been introduced, where ① is defined as the number of elements of the
set of natural numbers N. On the one hand, this allows one to treat easily many problems related to the
traditional set theory operating with Cantor's cardinals. In the new framework, instead of using cardinals,
the number of elements of infinite sets can be computed using ①-based numerals. For instance, the
following sets, which the traditional cardinalities identify as countable, can be measured more precisely (see
[2, 35, 50]). In fact, it can be shown that the set of even numbers E has ①/2 elements, namely, two times
fewer than the set of natural numbers having ① elements. The set of integers Z has 2①+1 elements, and the
set G of square natural numbers

$$G = \{x : x = n^2,\; x \in \mathbb{N},\; n \in \mathbb{N}\}$$

has $\lfloor\sqrt{①}\rfloor$ elements, etc. Analogously, it becomes possible to discern, among sets having the traditional
cardinality of the continuum, infinite sets with different numbers of elements. For instance, it follows that the
set of numbers $x \in [0,1)$ expressed in the binary positional numeral system has $2^{①}$ elements and the set of
numbers $x \in [0,1)$ expressed in the decimal positional numeral system has $10^{①} > 2^{①}$ elements (for more
examples see [2, 35, 50, 51]).
On the other hand, the numeral system built upon Grossone offers the opportunity to treat
infinite and infinitesimal numbers in a unique framework and to work with all of them numerically, i.e., by
executing arithmetic operations with floating-point numbers, with the possibility to assign concrete infinite
and infinitesimal values to variables. This is one of the differences with Robinson's Non-Standard Analysis,
where non-standard infinite numbers are discussed but, if K is a non-standard infinite integer, there is
no possibility to assign a value to K: it always remains just a symbol without any concrete numerical
value, and only symbolic computations can be executed with it (see [11] for a detailed discussion).
The new numeral ① is introduced by describing its properties (following the same approach that
led to the introduction of zero in the past to switch from natural to integer numbers). To introduce
Grossone, three methodological postulates and The Infinite Unit Axiom are added to the axioms of real
numbers (see [2]). In particular, this axiom states that for any given finite integer $n$ the infinite number
$①/n$ is an integer larger than any finite number. Since the axiom is added to the standard axioms of
real numbers, all standard properties (commutative, associative, existence of inverse, etc.) also apply
to ① and Grossone-based numerals. Thanks to ①, different infinite and/or infinitesimal numerals can be used instead of the usual symbol $\infty$. Indeterminate forms are not present and, for example, the following relations hold for the infinite numbers ①, ①² and the infinitesimals ①⁻¹, ①⁻², as for any other (finite, infinite, or infinitesimal) number expressible in the new numeral system:

$$0 \cdot ① = ① \cdot 0 = 0, \quad ① - ① = 0, \quad \frac{①}{①} = 1, \quad ①^0 = 1, \quad 1^{①} = 1, \quad 0^{①} = 0,$$

$$0 \cdot ①^{-1} = ①^{-1} \cdot 0 = 0, \quad ①^{-1} > ①^{-2} > 0, \quad ①^{-1} - ①^{-1} = 0, \quad 2① - ① = ①,$$

$$\frac{①^{-1}}{①^{-1}} = 1, \quad (①^{-1})^0 = 1, \quad ① \cdot ①^{-1} = 1, \quad ① \cdot ①^{-2} = ①^{-1},$$

$$\frac{5①^{-2}}{①^{-2}} = 5, \quad \frac{60.1①^{2}}{①} = 60.1①, \quad \frac{①^{-1}}{2①^{-2}} = 0.5①, \quad ①^{2} \cdot ①^{-1} = ①, \quad ①^{2} \cdot ①^{-2} = 1.$$
A general way to express infinities and infinitesimals is also provided in [2, 49, 50, 51] by using records similar to traditional positional number systems, but with the radix ①. A number $\tilde{c}$ in this new numeral system ($\tilde{c}$ will be called a Gross-scalar from here on) can be constructed by subdividing it into groups of corresponding powers of ① and thus can be represented as

$$\tilde{c} = c_{p_m}①^{p_m} + \ldots + c_{p_1}①^{p_1} + c_{p_0}①^{p_0} + c_{p_{-1}}①^{p_{-1}} + \ldots + c_{p_{-k}}①^{p_{-k}},$$

where $m, k \in \mathbb{N}$, the exponents $p_i$ are called Gross-powers (they can be numbers of the same type as $\tilde{c}$), with $p_0 = 0$ and $i = m, \ldots, 1, 0, -1, \ldots, -k$. Then, the $c_{p_i} \ne 0$, called Gross-digits, are finite (positive or negative) numbers, $i = m, \ldots, 1, 0, -1, \ldots, -k$. In this numeral system, finite numbers are represented by numerals with the highest Gross-power equal to zero, e.g., $6.2 = 6.2①^0$. Infinitesimals are represented by numerals having only negative finite or infinite Gross-powers. The simplest infinitesimal is $①^{-1}$, for which $①^{-1} \cdot ① = 1$. We notice that all infinitesimals are not equal to zero, e.g., $①^{-1} > 0$. A number is infinite if it has at least one positive finite or infinite Gross-power. For instance, the number $43.6①^{4.56①} + 16.7①^{3.6} - 3.2①^{-2.1}$ is infinite: it consists of two infinite parts and one infinitesimal part.

In the context of this paper the following definition is important. A Gross-number (Gross-scalar) is said to be purely finite iff the coefficient associated with the zeroth power of Grossone is the only one different from zero. For instance, the number 3.4 is purely finite, while $3.4 - 3.2①^{-2.1}$ is finite but not purely finite, since it has an infinitesimal part.
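For experimentation, a Gross-scalar with finite Gross-powers can be modelled as a map from Gross-powers to Gross-digits. The sketch below is our own toy model (the class and function names are not from the paper, and it supports neither division nor infinite Gross-powers); it reproduces some of the identities listed above.

```python
class GrossScalar:
    """Toy Gross-scalar: {gross_power: gross_digit}, finite powers only."""

    def __init__(self, digits=None):
        # drop zero Gross-digits so that representations are canonical
        self.digits = {p: d for p, d in (digits or {}).items() if d != 0}

    def __add__(self, other):
        out = dict(self.digits)
        for p, d in other.digits.items():
            out[p] = out.get(p, 0) + d
        return GrossScalar(out)

    def __neg__(self):
        return GrossScalar({p: -d for p, d in self.digits.items()})

    def __sub__(self, other):
        return self + (-other)

    def __mul__(self, other):
        # multiply term by term, adding the Gross-powers
        out = {}
        for p1, d1 in self.digits.items():
            for p2, d2 in other.digits.items():
                out[p1 + p2] = out.get(p1 + p2, 0) + d1 * d2
        return GrossScalar(out)

    def __eq__(self, other):
        return self.digits == other.digits

    def is_purely_finite(self):
        # only the digit of the zeroth power of Grossone may be non-zero
        return set(self.digits) <= {0}

def gross(power, digit=1.0):
    """The single-term Gross-scalar digit * Grossone**power."""
    return GrossScalar({power: digit})

one = gross(0)   # the finite unit: 1 = 1 * Grossone**0
G = gross(1)     # Grossone itself
```

With this model, identities such as ①·①⁻¹ = 1 and 2① − ① = ① become one-line checks, and `is_purely_finite` distinguishes 3.4 from 3.4 − 3.2①^(−2.1).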
5. LMOMILP solved using the GrossSimplex-based GrossBB algorithm
First of all, let us introduce the new problem $\tilde{P}$, formulated using Gross-numbers:

$$\min_{x}\; \tilde{c}^T x \quad \text{s.t.}\; Ax \le b,\; x = \begin{bmatrix} p \\ q \end{bmatrix},\; p \in \mathbb{Z}^k,\; q \in \mathbb{R}^{n-k}, \qquad (\tilde{P})$$

where $\tilde{c}$ is a column Gross-vector having $n$ Gross-scalar components, built using the purely finite vectors $c^i$:

$$\tilde{c} = \sum_{i=1}^{r} c^i\,①^{-i+1}, \qquad (5)$$

and $\tilde{c}^T x$ is the Gross-scalar obtained by multiplying the Gross-vector $\tilde{c}$ by the purely finite vector $x$:

$$\tilde{c}^T x = (c^{1T}x)\,①^{0} + (c^{2T}x)\,①^{-1} + \ldots + (c^{rT}x)\,①^{-r+1}, \qquad (6)$$

where (6) can be equivalently written in the extended form as:

$$\tilde{c}^T x = (c^1_1 x_1 + \ldots + c^1_n x_n)\,①^{0} + (c^2_1 x_1 + \ldots + c^2_n x_n)\,①^{-1} + \ldots + (c^r_1 x_1 + \ldots + c^r_n x_n)\,①^{-r+1}.$$
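Equation (6) stacks the $r$ objective values as Gross-digits of strictly decreasing powers of ①; comparing two such Gross-scalars is therefore exactly a lexicographic comparison of the tuples of objective values. A minimal sketch (our own illustration, with made-up coefficients):

```python
def gross_objective(C, x):
    """˜c^T x as the tuple (c1·x, ..., cr·x): the i-th entry is the
    Gross-digit of Grossone**(-i+1), so Python's lexicographic tuple
    ordering compares two values exactly as the Grossone-based numerals
    would."""
    return tuple(sum(cj * xj for cj, xj in zip(c, x)) for c in C)

# a 3-objective example over two variables (made-up data)
C = [[1.0, 0.0],   # most important objective
     [0.0, 1.0],
     [1.0, 1.0]]   # least important objective
a = gross_objective(C, [2.0, 5.0])
b = gross_objective(C, [2.0, 3.0])
assert a == (2.0, 5.0, 7.0) and b == (2.0, 3.0, 5.0)
assert b < a   # equal on the 1st objective, better (smaller) on the 2nd
```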
What makes the new formulation $\tilde{P}$ attractive is the fact that its version relaxed from the integrality constraint is a Gross-LP problem (see [1]), which can be effectively solved using a single run of the GrossSimplex algorithm proposed in [1]. This means that the set of multiple objective functions is mapped into a single (Gross-)scalar function to be optimized. This opens the possibility to solve the integer-constrained variant of the problem using an adaptation of the BB algorithm (see Alg. 2), coupled with the GrossSimplex. Of course, the GrossSimplex will solve the relaxed version of $\tilde{P}$:

$$\min_{x}\; \tilde{c}^T x \quad \text{s.t.}\; Ax \le b. \qquad (\tilde{R})$$
The following Theorem 2 shows that problem $\tilde{P}$ is equivalent to problem $P$ defined in Section 3.

Theorem 2 (Equivalence of problem $\tilde{P}$ and problem $P$). Problem $\tilde{P}$ is equivalent to problem $P$, and both of them have the same solution.

Proof. The basic observation is that the integer relaxation $R$ of problem $P$ is an LMOLP problem, while the integer relaxation of problem $\tilde{P}$, the problem $\tilde{R}$ defined above, is a Gross-LP problem. In [1] we have already proved the equivalence of problems $R$ and $\tilde{R}$. Since problems $P$ and $\tilde{P}$ have equivalent relaxations, the two will also share the same solutions when the same integrality constraints are taken into account on both.
In the next subsections we will provide the pruning rules, the terminating conditions and the branching rule. Then we will introduce the GrossBB algorithm, a generalization of the BB algorithm able to work with Gross-numbers.
5.1. Pruning rules for the GrossBB

The pruning rules presented above can be adapted to the GrossBB algorithm as follows.

Theorem 3 (Pruning rules for the GrossBB). Let $x_{opt}$ be the best solution found so far for $\tilde{P}$, and let $\tilde{v}_S(\tilde{P}) = \tilde{c}^T x_{opt}$ be the current upper bound. Considering the current node $(\tilde{P}_c)$ and the associated problem $\tilde{P}_c$, the following assertions hold:

1. If the feasible region of problem $\tilde{P}_c$ is empty, the sub-tree with root $(\tilde{P}_c)$ has no feasible solutions having values lower than $\tilde{c}^T x_{opt}$. So we can prune this node.
2. If $\tilde{v}_I(\tilde{P}_c) \ge \tilde{v}_S(\tilde{P})$, then we can prune at node $(\tilde{P}_c)$, since the sub-tree with root $(\tilde{P}_c)$ cannot have feasible solutions having a value lower than $\tilde{v}_S(\tilde{P})$.
3. If $\tilde{v}_I(\tilde{P}_c) < \tilde{v}_S(\tilde{P})$ and the optimal solution $\bar{x}$ of the relaxed problem $\tilde{R}_c$ is feasible for $\tilde{P}$, then $\bar{x}$ is a better candidate solution for $\tilde{P}$, and thus we can update $x_{opt}$ ($x_{opt} = \bar{x}$) and the value of the upper bound ($\tilde{v}_S(\tilde{P}) = \tilde{v}_I(\tilde{P}_c)$). Finally, prune this node according to the second rule.
Proof. Let us prove the correctness of the pruning rules introduced above.

[Pruning Rule 1]. If the feasible region of the current problem $\tilde{R}_c$ is empty, then that of $\tilde{P}_c$ is empty too, since the domain of $\tilde{P}_c$ has additional constraints (the integrality constraints). Furthermore, all the domains of the problems in the leaves of the sub-tree having root in $\tilde{P}_c$ will be empty as well, since the domains of the leaves all have additional constraints with respect to $\tilde{R}_c$. This proves the correctness of the first pruning rule. Now let us prove the second pruning rule.

[Pruning Rule 2]. Let us consider a generic leaf $\tilde{P}_{leaf}$ of the sub-tree having root $\tilde{P}_c$. Let us indicate with $\tilde{v}(\tilde{P}_c)$ the optimal value of the current problem $\tilde{P}_c$ (the one with the integer constraints). Then the values at the leaves below $\tilde{P}_c$ must be greater than or equal to $\tilde{v}(\tilde{P}_c)$:

$$\tilde{v}(\tilde{P}_{leaf}) \ge \tilde{v}(\tilde{P}_c) \quad \forall\, leaf \in \text{SubTree}(\tilde{P}_c).$$

In fact, the domain of $\tilde{P}_c$ includes those of all the $\tilde{P}_{leaf}$, each problem $\tilde{P}_{leaf}$ being obtained by enriching $\tilde{P}_c$ with additional constraints. On the other hand, it follows that

$$\tilde{v}(\tilde{P}_c) \ge \tilde{v}_I(\tilde{P}_c),$$

since $\tilde{v}_I(\tilde{P}_c)$ is obtained as the optimal value of $\tilde{R}_c$, a problem having a domain which includes that of $\tilde{P}_c$. Thus, the following chain of inequalities always holds:

$$\tilde{v}(\tilde{P}_{leaf}) \ge \tilde{v}(\tilde{P}_c) \ge \tilde{v}_I(\tilde{P}_c) \quad \forall\, leaf \in \text{SubTree}(\tilde{P}_c).$$

Now, if $\tilde{v}_I(\tilde{P}_c) \ge \tilde{v}_S(\tilde{P})$, we can add an element to the chain:

$$\tilde{v}(\tilde{P}_{leaf}) \ge \tilde{v}(\tilde{P}_c) \ge \tilde{v}_I(\tilde{P}_c) \ge \tilde{v}_S(\tilde{P}) \quad \forall\, leaf \in \text{SubTree}(\tilde{P}_c),$$

from which we can conclude that

$$\tilde{v}(\tilde{P}_{leaf}) \ge \tilde{v}_S(\tilde{P}) \quad \forall\, leaf \in \text{SubTree}(\tilde{P}_c).$$

This means that all the leaves of the current node will contain solutions that are worse than (or equivalent to) the current upper bound. Thus the sub-tree rooted in $\tilde{P}_c$ can be pruned (i.e., not explicitly explored). This proves the correctness of the second pruning rule.
Before proving Pruning Rule 3, let us observe how the pruning rule above prevents the method from solving multi-modal problems, in the sense that with such a pruning rule we are only able to find a single optimum, not all the solutions that might have the same cost-function value. Indeed, we are deciding not to explore the sub-tree at a given node that could contain solutions having the same current optimal objective function value. To solve multi-modal problems, that rule must be applied only when

$$\tilde{v}_I(\tilde{P}_c) > \tilde{v}_S(\tilde{P}). \qquad (7)$$
[Pruning Rule 3]. If $\tilde{v}_I(\tilde{P}_c) < \tilde{v}_S(\tilde{P})$ and $\bar{x}$ is feasible for $\tilde{P}$ (i.e., if all the components of $\bar{x}$ that must be integer are actually $\varepsilon$-integer), we have found a better estimate for the upper bound of $\tilde{P}$, and thus we can update it:

$$\tilde{v}_S(\tilde{P}) = \tilde{v}_I(\tilde{P}_c).$$

As a result, now $\tilde{v}_I(\tilde{P}_c) = \tilde{v}_S(\tilde{P})$, and thus:

$$\tilde{v}(\tilde{P}_{leaf}) \ge \tilde{v}(\tilde{P}_c) \ge \tilde{v}_I(\tilde{P}_c) = \tilde{v}_S(\tilde{P}) \quad \forall\, leaf \in \text{SubTree}(\tilde{P}_c).$$

Then again we have that the sub-tree having root $\tilde{P}_c$ cannot contain better solutions than $\bar{x}$:

$$\tilde{v}(\tilde{P}_{leaf}) \ge \tilde{v}_S(\tilde{P}) \quad \forall\, leaf \in \text{SubTree}(\tilde{P}_c).$$

This proves the correctness of the third pruning rule.
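Since the Gross-scalar bounds carry the $r$ objective values as Gross-digits, the comparisons in the three rules reduce to lexicographic comparisons. The sketch below is our own condensed illustration of the rule-2/rule-3 decision, with each bound encoded as a tuple of the $r$ objective values (Gross-digit of ①⁰ first); it assumes the relaxation of the current node has already been solved and found non-empty, so Pruning Rule 1 does not apply.

```python
def node_decision(v_lower_c, v_upper, x_is_feasible):
    """Fate of the current node, mirroring Pruning Rules 2 and 3.

    v_lower_c: lower bound of the current node as a tuple of the r
    objective values, compared lexicographically;
    v_upper: current global upper bound in the same encoding;
    x_is_feasible: whether the relaxed optimum satisfies integrality.
    Returns "prune", "update" (new incumbent, then prune) or "branch"."""
    if v_lower_c >= v_upper:
        return "prune"       # Pruning Rule 2
    if x_is_feasible:
        return "update"      # Pruning Rule 3
    return "branch"          # no rule applies: the node must be branched
```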
5.2. Terminating conditions for the GrossBB

Let us now discuss the terminating conditions for the GrossBB algorithm. The first two are exactly the same as in the classical BB, while the third requires some attention.

The terminating conditions are:

1. All the remaining leaves have been visited: if all the leaves have been visited, the GrossBB algorithm stops.
2. Maximum number of iterations reached: when a given maximum number of iterations (provided by the user at the beginning) has been reached, the GrossBB stops.
3. $\tilde{\varepsilon}$-optimality reached: when the normalized difference between the global upper bound and the global lower bound at the $i$-th iteration is close enough to zero, the GrossBB stops:

$$\tilde{\Delta}_i(\tilde{P}) = \frac{\tilde{v}_S(\tilde{P}) - \tilde{v}_I(\tilde{P})}{|\tilde{v}_S(\tilde{P})|} \preceq \tilde{\varepsilon}, \qquad (8)$$
where is the component-wise less than or equal to operator defined among two Gross-scalars and
different from the usual operator 6defined for ¬-based numbers. In particular, equation (8) requires
that all the Gross-digits of ˜
i(˜
P) are less or equal to Gross-digits of ˜. Let us make more comments upon
computations executed in (8). It first involves the difference between two Gross-scalars. This intermediate
result must be divided by the absolute value of ˜vS(˜
P). While computing the absolute value of a Gross-
scalar is straightforward, division (as it happens also in the traditional floating point arithmetic) requires
more efforts (see [2]). The result of the Gross-division is a Gross-scalar that must be compared with the
Gross-scalar ˜, which has the form
˜=0+1¬1+2¬2+... +r1¬r+1 .
Obviously, it is possible to chose 0=1=2... =, to simplify the presentation.
In order to illustrate the situation, let us consider an example. Suppose that we have a problem
with three objectives (r = 3) and that ε = 10^−6 has been chosen. Given the following

˜∆i(˜P) = 1.1·10^−7 + 5·10^−3①^−1 + 1.7·10^−8①^−2,

it follows that ˜∆i(˜P) ≤ ˜ε but not ˜∆i(˜P) ⪯ ˜ε, because the first-order infinitesimal component of
˜∆i(˜P), namely 5·10^−3, is not less than or equal to ε. Thus in this case the GrossBB algorithm cannot
terminate: it will continue, trying to make all the components less than or equal to ε.
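The two order relations can be made concrete in a few lines of Python. The sketch below (our illustration: a Gross-scalar is encoded as the list of its Gross-digits) reproduces exactly the example above, where the usual lexicographic ≤ holds but the component-wise relation of equation (8) does not:

```python
def gross_leq(a, b):
    """Usual lexicographic <= between two Gross-scalars, each given as the
    list of its Gross-digits [g0, g1, ...] for g0*(1)^0 + g1*(1)^-1 + ..."""
    for da, db in zip(a, b):
        if da != db:
            return da < db
    return True

def componentwise_leq(a, b):
    """The relation of Eq. (8): every Gross-digit of a must not exceed the
    corresponding Gross-digit of b."""
    return all(da <= db for da, db in zip(a, b))

eps_tilde = [1e-6, 1e-6, 1e-6]   # eps_0 = eps_1 = eps_2 = 10^-6
delta = [1.1e-7, 5e-3, 1.7e-8]   # the three-objective example from the text

lex_ok = gross_leq(delta, eps_tilde)           # True: 1.1e-7 < 1e-6 decides
comp_ok = componentwise_leq(delta, eps_tilde)  # False: 5e-3 > 1e-6 at the
                                               # first infinitesimal digit
```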
5.3. Branching rule for the GrossBB
When the sub-tree below ˜Pc cannot be pruned (because it could contain better solutions), it must be
explored. Thus we have to branch the current node into ˜Pl and ˜Pr and to add these two new nodes to
the tail of the queue of the sub-problems to be analyzed and solved by the GrossSimplex.
Algorithm 2 provides a pseudo-code for the GrossBB algorithm. Thus, the GrossBB algorithm, using
internally the GrossSimplex algorithm and the rules (pruning, terminating, branching) provided above,
is able to solve a given LMOMILP problem ˜P.
5.4. Final considerations before testing the algorithm
Let us conclude this section by introducing the concept of “epsilon integrality” and by commenting
upon the usage of division among Gross-numbers in the next two subsections.
5.4.1. Epsilon integrality
The concept of ε-integrality used in pruning rule 3 can be formalized as follows. A vector x is
ε-integer when all its components are ε-integer. Its generic component xi is ε-integer when

xi − ⌊xi⌋ < ε  or  ⌈xi⌉ − xi < ε.
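The definition translates directly into code. The following sketch is ours, with an illustrative tolerance ε = 10^−6 (the paper leaves ε as a parameter):

```python
import math

EPS = 1e-6   # illustrative tolerance

def is_eps_integer(x, eps=EPS):
    """True when x is within eps of its floor or of its ceiling."""
    return (x - math.floor(x) < eps) or (math.ceil(x) - x < eps)

def all_eps_integer(xs, eps=EPS):
    """A vector is eps-integer when all its components are."""
    return all(is_eps_integer(x, eps) for x in xs)

ok = all_eps_integer([28.0, 52.0000004])   # True: within 1e-6 of integers
bad = is_eps_integer(51.6667)              # False: 0.3333 away from 52
```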
5.4.2. Division among Gross-numbers could be avoided
Division between Gross-numbers used in equation (8) can be avoided by multiplying both sides of
the inequality by |˜vS(˜P)|:

˜vS(˜P) − ˜vI(˜P) ⪯ ˜ε · |˜vS(˜P)|.
This multiplication allows us to create a division-free variant of the GrossBB algorithm. We thank
one of the anonymous reviewers for pointing this out. However, we believe that the use of division is
still interesting from the theoretical point of view, because it allowed us to better clarify the concept of
a Gross-number being "near zero". Furthermore, the impact of the division in equation (8) on the overall
computing time of the algorithm is not critical, since the overall computing time is mainly determined
by the time required to compute the solutions of the relaxed LMOLP problems.
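Under a list-of-digits encoding of Gross-scalars (an illustrative representation of ours), the division-free test reads as follows. The truncation in the product is a simplification of ours: it keeps only the digits down to ①^{−(r−1)}, which are the only ones taking part in the comparison, and the discarded lower-order digits of ˜ε·|˜vS(˜P)| are non-negative anyway:

```python
def gross_abs(a):
    """|x| for a Gross-scalar given as a Gross-digit list: negate all digits
    when the leading non-zero digit is negative."""
    for d in a:
        if d != 0:
            return list(a) if d > 0 else [-x for x in a]
    return list(a)

def gross_mul_trunc(a, b):
    """Product of two Gross-digit lists, truncated to the first r digits
    (powers (1)^0 ... (1)^-(r-1)), the only ones compared below."""
    r = len(a)
    out = [0.0] * r
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            if i + j < r:
                out[i + j] += da * db
    return out

def division_free_test(v_S, v_I, eps):
    lhs = [s - i for s, i in zip(v_S, v_I)]       # vS(P) - vI(P)
    rhs = gross_mul_trunc(eps, gross_abs(v_S))    # eps * |vS(P)|
    return all(l <= r for l, r in zip(lhs, rhs))  # the componentwise relation

eps = [1e-6, 1e-6, 1e-6]
far = division_free_test([-848, -912, -80], [-850, -919.167, -80.4167], eps)
near = division_free_test([-848, -912, -80],
                          [-848.0001, -912.0001, -80.0001], eps)
# far is False (the finite gap is still 2); near is True (all gaps are
# below the scaled threshold)
```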
Algorithm 2 The GrossBB Algorithm, using the GrossSimplex method internally

Inputs: maxIter and a specific LMOMILP problem ˜P, to be put within the root node
Outputs: xopt (the optimal solution, a purely finite vector), ˜fopt (the optimal value, a Gross-scalar)

Step 0. Insert ˜P into a queue of the sub-problems that must be solved. Put ˜vS(˜P) = ①, xopt = [ ],
and ˜fopt = ①, or use a greedy algorithm to get an initial feasible solution.

Step 1a. If all the remaining leaves have been visited (empty queue), or the maximum number of
iterations has been reached, or the ˜ε-optimality condition holds, then goto Step 4. Otherwise,
extract from the head of the queue the next problem to solve and call it ˜Pc (the current problem).
Remark: this policy of inserting new problems at the tail of the queue and extracting from its
head leads to a breadth-first visit of the binary tree of the generated problems.

Step 1b. Solve ˜Rc, the relaxed version of the problem ˜Pc at hand, using the GrossSimplex, and get
x̄ and ˜fc (= ˜cᵀx̄):

[x̄, ˜fc, emptyPolyhedron] ← GrossSimplex(˜Rc)

Step 2a. If the LP solver has found that the polyhedron is empty, then prune the sub-tree of ˜Pc
(according to Pruning Rule 1) by going to Step 1a (without branching ˜Pc). Otherwise, we have
found a new lower bound for ˜Pc:

˜vI(˜Pc) = ˜fc.

Step 2b. If ˜vI(˜Pc) ≥ ˜vS(˜P), then prune the sub-tree under ˜Pc (according to Pruning Rule 2), by
going to Step 1a (without branching ˜Pc).

Step 2c. If ˜vI(˜Pc) < ˜vS(˜P) and all the components of x̄ that must be integer are actually ε-integer
(i.e., x̄ is feasible), then we have found a better upper bound estimate. Thus we can update the
value of ˜vS(˜P) as:

˜vS(˜P) = ˜vI(˜Pc).

In addition, we set xopt = x̄ and ˜fopt = ˜vI(˜Pc). Then we also prune the sub-tree under ˜Pc
(according to Pruning Rule 3) by going to Step 1a (without branching ˜Pc).

Step 3. If ˜vI(˜Pc) < ˜vS(˜P) but not all the components of x̄ that must be integer are actually
ε-integer, we have to branch. Select the component x̄t of x̄ having the greatest fractional part,
among all the components that must be integer. Create two new nodes (i.e., problems) with a
new constraint on this variable: one with a new ≤ constraint equal to the rounded-down value of
x̄t, and another with a new ≥ constraint equal to the rounded-up value of x̄t. Let us call the two
new problems ˜Pl and ˜Pr, put them at the tail of the queue of the problems to be solved, then
goto Step 1a.

Step 4. End of the algorithm.
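The control flow of Algorithm 2 can be condensed into a short breadth-first loop. The Python skeleton below is a sketch of ours, not the authors' implementation: the relaxation solver, the branching callback, and the toy single-variable problem at the end (maximize x subject to 0 ≤ x ≤ 5.5 with x integer) all stand in for the GrossSimplex and the LMOMILP machinery, and a Gross-scalar is encoded as the list of its Gross-digits:

```python
import math
from collections import deque

def gross_lt(a, b):
    """Lexicographic '<' between Gross-scalars given as Gross-digit lists."""
    for da, db in zip(a, b):
        if da != db:
            return da < db
    return False

def gross_bb(root, solve_relaxation, branch, is_eps_integer, max_iter=1000):
    """Breadth-first skeleton of Algorithm 2.  Callbacks:
    solve_relaxation(P) -> (x, v_I, empty);  branch(P, x) -> (P_l, P_r);
    is_eps_integer(x) -> True when x is feasible for the integer problem."""
    v_S = [math.inf]              # Step 0: upper bound starts at "infinity"
    x_opt, f_opt = None, None
    queue = deque([root])
    for _ in range(max_iter):     # terminating condition 2
        if not queue:             # terminating condition 1
            break
        P_c = queue.popleft()     # Step 1a: FIFO queue -> breadth-first visit
        x, v_I, empty = solve_relaxation(P_c)   # Step 1b
        if empty:
            continue                            # Pruning Rule 1 (Step 2a)
        if not gross_lt(v_I, v_S):
            continue                            # Pruning Rule 2 (Step 2b)
        if is_eps_integer(x):                   # Pruning Rule 3 (Step 2c)
            v_S, x_opt, f_opt = v_I, x, v_I
            continue
        queue.extend(branch(P_c, x))            # Step 3
    return x_opt, f_opt

# Toy problem: maximize x  s.t.  0 <= x <= 5.5,  x integer  (min of -x).
def relax(P):
    lb, ub = P
    return (ub, [-ub], False) if lb <= ub else (None, None, True)

def branch(P, x):
    lb, ub = P
    return (lb, math.floor(x)), (math.ceil(x), ub)

x_opt, f_opt = gross_bb((0.0, 5.5), relax, branch,
                        lambda x: abs(x - round(x)) < 1e-6)
# x_opt == 5 and f_opt == [-5]
```

On the toy problem the loop visits the root, branches once, accepts x = 5 via Rule 3, and then prunes the empty right child via Rule 1, mirroring Steps 0-4 above.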
6. Experimental results
In this section, we first introduce five LMOMILP test problems having known solutions. Then we
verify that the GrossBB combined with the GrossSimplex is able to successfully solve these problems.
6.1. Test problem 1: the “kite” in 2D
This problem is a variation of the 2D problem with 3 objectives described in [8]:

lexmax  8x1 + 12x2,  14x1 + 10x2,  x1 + x2
s.t.    2x1 + 1x2 ≤ 120
        2x1 + 3x2 ≤ 210 + 2.5
        4x1 + 3x2 ≤ 270
        x1 + 2x2 ≥ 60
        −200 ≤ x1, x2 ≤ +200,  x ∈ Z²    (|T1|)
The polygon S associated with this problem is shown in Fig. 1 (left sub-figure). The integer points
(feasible solutions) are shown as black spots, whereas the domain of the relaxed problem (i.e., without
the integrality constraints) is shown in light grey.
It can be seen that the first objective vector c1 = [8, 12]ᵀ is orthogonal to the segment [α, β] (α =
(0, 70.83), β = (28.75, 51.67)) shown in the same figure. All the integer points nearest to this segment
are optimal for the first objective (see the right sub-figure in Fig. 1). Since the solution is not unique,
there is the chance to try to improve the second objective vector (c2 = [14, 10]ᵀ).
Figure 1: An example in two dimensions with three objectives. The black points in the left figure are all the feasible
solutions. All the integer points nearest to the segment [α, β] (there are many) are optimal for the first objective,
while point (28, 52) is the unique lexicographic optimum for the given problem (i.e., if the second objective is considered,
too). The third objective plays no role in this case. On the right, a zoom around point β is provided, with some optimal
solutions for the first objective highlighted (the ones with a bigger black spot).
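Since the integer feasible set of |T1| is small, the claimed lexicographic optimum can be cross-checked by brute force. The snippet below is an independent sanity check of ours, not part of the paper's method: it scans the whole integer box and takes the lexicographic maximum of the objective triples:

```python
from itertools import product

def objectives(x1, x2):
    """The three objectives of |T1|, in lexicographic order."""
    return (8*x1 + 12*x2, 14*x1 + 10*x2, x1 + x2)

def feasible(x1, x2):
    return (2*x1 + x2 <= 120 and 2*x1 + 3*x2 <= 210 + 2.5 and
            4*x1 + 3*x2 <= 270 and x1 + 2*x2 >= 60)

# Scan the box -200..200; tuples compare lexicographically in Python,
# so max with this key returns the lexicographic optimum.
best = max((p for p in product(range(-200, 201), repeat=2) if feasible(*p)),
           key=lambda p: objectives(*p))
# best == (28, 52) and objectives(*best) == (848, 912, 80)
```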
Let us now see what happens when we solve this problem using the GrossBB algorithm with the
GrossSimplex. Notice that, since the |T1| problem is lexmax-formulated, we have to provide −˜c to the
GrossBB algorithm.
Initialization. ˜vS(˜P) = ①, xopt = [ ], ˜fopt = ①, and insert |T1| into the queue of the sub-problems
that must be solved.
Iteration 1. The GrossBB extracts from the queue of the problems to be solved the only one present,
and denotes it as the current problem: ˜Pc ≡ |T1|. Then the algorithm solves its relaxed version: the
solution of ˜Rc is x̄ = [28.7500, 51.6667]ᵀ, with ˜vI(˜Pc) = −850①^0 − 919.167①^−1 − 80.4167①^−2.
As already seen in the MILP example, in this case we have to branch using the component having the
highest fractional part (among the variables with integrality restrictions, of course). In this case, it is
the first component, and thus the new sub-problem on the left, ˜Pl, will have the additional constraint
x1 ≤ 28, while the new one on the right, ˜Pr, will have the additional constraint x1 ≥ 29. This split
makes the current solution [28.7500, 51.6667]ᵀ optimal neither for problem ˜Pl nor for ˜Pr (see Fig. 2).
Figure 2: Situation at the end of iteration 1, for problem |T1|.
Iteration 2. At this step the queue is composed of [˜Pl, ˜Pr], the problems generated in the previous
iteration. The GrossBB now extracts the next problem from the top of the queue (breadth-first visit),
namely ˜Pl, and denotes it as ˜Pc, the current problem to solve. The optimal solution for the relaxed
problem ˜Rc is x̄ = [28.25, 52]ᵀ, with ˜vI(˜Pc) = −850①^0 − 915.5①^−1 − 80.25①^−2. We have to
branch again, as we did during iteration 1: new left and right problems will be generated and added to
the queue. The new left problem will have the additional constraint x1 ≤ 28, while the new right
problem will have the additional constraint x1 ≥ 29. The length of the queue is now 3.
Iteration 3. Extract the next problem from the top of the queue and denote it as ˜Pc. The optimal
solution of ˜Rc is x̄ = [29.25, 51]ᵀ and the associated ˜vI(˜Pc) = −846①^0 − 919.5①^−1 − 80.25①^−2.
We have to branch: new left and right problems will be generated, and both will be added to the queue.
The left problem will have the additional constraint x1 ≤ 29, while the new one on the right, ˜Pr, will
have the additional constraint x1 ≥ 30. The length of the queue is now 4.
Iteration 4. Extract the next problem from the queue and denote it as ˜Pc. Solve ˜Rc, the relaxation
of ˜Pc, using the GrossSimplex. Since ˜Rc has an empty feasible region, prune this node by applying the
first pruning rule. The length of the queue is now 3.
Iteration 5. Extract the next problem from the top of the queue and denote it as ˜Pc. The optimal
solution of ˜Rc is x̄ = [28, 52.1667]ᵀ and the associated ˜vI(˜Pc) = −850①^0 − 913.6667①^−1 −
80.1667①^−2. We have to branch: new left and right problems will be generated and added to the
queue. The left problem will have the additional constraint x2 ≤ 52, while the right one will have the
additional constraint x2 ≥ 53. The length of the queue is now 4.
Iteration 6. Extract the next problem from the queue and denote it as ˜Pc. This time the
GrossSimplex returns an integer solution, i.e., a feasible solution for the initial LMOMILP problem |T1|:

x̄ = [30, 50]ᵀ and ˜vI(˜Pc) = −840①^0 − 920①^−1 − 80①^−2.

Since ˜vI(˜Pc) < ˜vS(˜P), we can update ˜vS(˜P) = ˜vI(˜Pc) and xopt = x̄. Finally, we can prune this
node, according to the third pruning rule.
Iteration 7. Extract the next problem from the queue and denote it as ˜Pc. Again the GrossSimplex
returns an integer solution:

x̄ = [29, 51]ᵀ and ˜vI(˜Pc) = −844①^0 − 916①^−1 − 80①^−2.

Since ˜vI(˜Pc) < ˜vS(˜P), we can update ˜vS(˜P) = ˜vI(˜Pc) and xopt = x̄. Finally, we can prune this
node, according to the third pruning rule.
Iteration 8. Extract the next problem from the top of the queue and denote it as ˜Pc. The optimal
solution of ˜Rc is x̄ = [26.75, 53]ᵀ, with ˜vI(˜Pc) = −850①^0 − 904.5①^−1 − 79.75①^−2. We have to
branch: new left and right problems will be generated and added to the queue. The left problem will
have the additional constraint x1 ≤ 26, while the new one on the right, ˜Pr, will have the additional
constraint x1 ≥ 27. The length of the queue is now 4.
Iteration 9. Extract the next problem from the queue and denote it as the current problem ˜Pc.
Solve its relaxation using the GrossSimplex. In this case, the returned solution is feasible for the initial
LMOMILP problem |T1| because all its components are integral:

x̄ = [28, 52]ᵀ and ˜vI(˜Pc) = −848①^0 − 912①^−1 − 80①^−2.

Since ˜vI(˜Pc) < ˜vS(˜P), update both ˜vS(˜P) = ˜vI(˜Pc) and xopt = x̄. Finally, prune this node by
applying the third pruning rule.
Iteration 10. Extract the next problem from the queue and indicate it as ˜
Pc. Solve ˜
Rc, the relaxation
of ˜
Pc, using the GrossSimplex. Since ˜
Rchas an empty feasible region, prune this node by applying the
first pruning rule.
Iterations 11-79. The GrossBB algorithm is not able to find a better solution than the x̄ = [28, 52]ᵀ
already found, but it continues to branch and explore the tree, until only two nodes remain in the queue.
The processing of these last two nodes is discussed in iterations 80 and 81, below.
Iteration 80. Extract the next problem from the queue and indicate it as ˜
Pc. Solve ˜
Rcusing the
GrossSimplex. Since ˜
Rchas an empty feasible region, prune this node by applying the first pruning rule.
Iteration 81. At this point there is one last unsolved problem in the queue. Extract this problem
and denote it as ˜Pc. The optimal solution of ˜Rc is:

x̄ = [1, 70]ᵀ, with ˜vI(˜Pc) = −848①^0 − 714①^−1 − 71①^−2.

Since ˜vI(˜Pc) ≥ ˜vS(˜P), prune this last node according to the second pruning rule. The tree being
now empty, the GrossBB algorithm stops according to the first terminating condition and returns the
optimal solution found so far: xopt = [28, 52]ᵀ. The optimal value of the objective function is
˜cᵀxopt = 848①^0 + 912①^−1 + 80①^−2.
Table 1 provides a synthesis of the iterations performed by the GrossBB algorithm and described in
detail above.
6.2. Test problem 2: the unrotated “house” in 3D
This illustrative example is in three dimensions with three objectives:

lexmax  x1,  x2,  x3
s.t.    −10.2 ≤ x1 ≤ 10.2
        −10 ≤ x2 ≤ 10.2
        −10.2 ≤ x3 ≤ 10.2
        x1 − x2 ≤ 2
        −x1 + x2 ≤ 2
        −20 ≤ xi ≤ 20, i = 1, ..., 3,  x ∈ Z³    (|T2|)
with the domain being the cube shown in Fig. 3. It can be immediately seen that, by considering the
first objective alone (maximize x1), all the integer points nearest to the square having vertices α, β, γ,
δ are optimal for the first objective function (see Fig. 3). Since the optimum is not unique, the second
objective function can be considered in order to improve it without deteriorating the first objective.
Then, all the integer points close to the segment [β, γ] are optimal for the second objective, too (see
Fig. 4, which provides the section view of Fig. 3 at x3 = 10). Again, the optimum is not unique and,
therefore, the third objective is considered. This allows us to select the integer point nearest to γ as
the unique solution that maximizes all the three objectives. The point [10, 10, 10]ᵀ is the lexicographic
optimum of this problem.
The problem can be solved with the GrossBB algorithm, as shown in Tab. 2. The solution xopt =
[10, 10, 10]ᵀ is actually found after 5 iterations. The optimal value of the objective function is
computed in the form ˜cᵀxopt = 10①^0 + 10①^−1 + 10①^−2.
6.3. Test problem 3: the rotated “house” in 5D
Algorithm 3 shows how to add a small rotation along the axis perpendicular to the plane containing
the first two variables x1 and x2, for the "house" problem seen in the previous example, after
generalizing it to the n-dimensional case with n = 5.
Table 1: Iterations performed by the GrossBB Algorithm using the GrossSimplex during the solution of problem |T1|

Iteration: result at node(iteration)

Initialize:
- ˜vS(˜P) = ①
- Queue length: 1 (add the root problem to the queue)

1: ˜vI(˜Pc) = −850①^0 − 919.167①^−1 − 80.4167①^−2. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2

2: ˜vI(˜Pc) = −850①^0 − 915.5①^−1 − 80.25①^−2. Queue length: 1
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 3
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2

3: ˜vI(˜Pc) = −846①^0 − 919.5①^−1 − 80.25①^−2. Queue length: 2
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 4
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2

4: prune node: rule 1, empty feasible region. Queue length: 3

5: ˜vI(˜Pc) = −850①^0 − 913.667①^−1 − 80.1667①^−2. Queue length: 2
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 4
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2

6: ˜vI(˜Pc) = −840①^0 − 920①^−1 − 80①^−2. Queue length: 3
- A feasible solution has been found: xopt = [30, 50]ᵀ
- update ˜vS(˜P) = ˜vI(˜Pc), prune node: rule 3
- ˜∆ = 0.0119048①^0 − 0.00688406①^−1 + 0.00208333①^−2

7: ˜vI(˜Pc) = −844①^0 − 916①^−1 − 80①^−2. Queue length: 2
- A feasible solution has been found: xopt = [29, 51]ᵀ
- update ˜vS(˜P) = ˜vI(˜Pc), prune node: rule 3
- ˜∆ = 0.354191①^0 − 0.127528①^−1 + 0.104058①^−2

8: ˜vI(˜Pc) = −850①^0 − 904.5①^−1 − 79.75①^−2. Queue length: 1
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 3
- ˜∆ = 0.007109①^0 − 0.00254731①^−1 + 0.00208333①^−2

9: ˜vI(˜Pc) = −848①^0 − 912①^−1 − 80①^−2. Queue length: 2
- A feasible solution has been found: xopt = [28, 52]ᵀ
- update ˜vS(˜P) = ˜vI(˜Pc), prune node: rule 3
- ˜∆ = 0.00235849①^0 − 0.00822368①^−1 − 0.003125①^−2

10: prune node: rule 1, empty feasible region. Queue length: 1

...

80: prune node: rule 1, empty feasible region. Queue length: 1

81: ˜vI(˜Pc) = −848①^0 − 714①^−1 − 71①^−2. Queue length: 0
- ˜vI(˜Pc) ≥ ˜vS(˜P): prune node: rule 2

Result (iteration 81): Optimization ended. Optimal solution found:
xopt = [28, 52]ᵀ
˜fopt = −848①^0 − 912①^−1 − 80①^−2
˜∆ = 0①^0 + 0①^−1 + 0①^−2
The problem consists of the lexicographic maximization of x1, x2, ..., x5. The method used to
generate a randomly rotated benchmark is shown in Alg. 3 (the generated rotation matrix Q is reported
in Appendix A). After the rotation, the following lower and upper bounds were added:

−2ρ ≤ xi ≤ 2ρ, i = 1, ..., 5.

As a result, the following problem (A′, b′) has been generated:

lexmax  x1, x2, ..., x5
s.t.    x′ ∈ Z⁵ : A′x′ ≤ b′    (|T3|)

where C′, A′, and vector b′ are reported in Appendix A as well.
The lexicographic optimum for this problem is xopt = [1000, 999, 1000, 1000, 1000]ᵀ. The problem
can be solved with the GrossBB algorithm. After 11 iterations, the algorithm has found the correct
lexicographic optimum (see Tab. 3 in Appendix B).
Algorithm 3 Generation of a randomly rotated "house" problem in Rⁿ

Step 1. Let {Ax ≤ b, x ∈ Zⁿ} be the initial, unrotated problem in n dimensions. The problem is
formulated as follows (ρ is a parameter that controls the size of the house):

lexmax  x1, x2, ..., xn
s.t.    −ρ − 0.2 ≤ x1 ≤ ρ + 0.2
        −ρ ≤ x2 ≤ ρ + 0.2
        −ρ − 0.2 ≤ xi ≤ ρ + 0.2, i = 3, ..., n
        x1 − x2 ≤ 2
        −x1 + x2 ≤ 2
        x ∈ Zⁿ

Step 2. Use as rotation matrix Q one implementing a small random rotation:

rA = 0.0002;
rB = 0.0005;
φ = (rB − rA)·rand(1) + rA;

Q = [ cos(φ)  −sin(φ)  0  0  ...  0
      sin(φ)   cos(φ)  0  0  ...  0
      0        0       1  0  ...  0
      0        0       0  1  ...  0
      ...      ...             ...
      0        0       0  0  ...  1 ] ∈ R^{n×n}

Step 3. Rotate the polytope: A′ = AQ (b and C do not change under rotations: b′ = b and C′ = C)
and then add the following constraints to A′, as lower and upper bounds for every variable (they are
twice the size of the house, in order to fully contain it):

−2ρ ≤ xi ≤ 2ρ, i = 1, ..., n
Table 2: Iterations performed by the GrossBB algorithm on test problem |T2|

Iteration: result at node(iteration)

Initialize:
- ˜vS(˜P) = ①
- Queue length: 1 (add the root problem to the queue)

1: ˜vI(˜Pc) = −10.2①^0 − 10①^−1 − 10.2①^−2. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2

2: prune node: rule 1, empty feasible region. Queue length: 1

3: ˜vI(˜Pc) = −10①^0 − 10①^−1 − 10.2①^−2. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2

4: ˜vI(˜Pc) = −10①^0 − 10①^−1 − 10①^−2. Queue length: 1
- A feasible solution has been found: xopt = [10, 10, 10]ᵀ
- update ˜vS(˜P) = ˜vI(˜Pc), prune node: rule 3
- ˜∆ = 0①^0 + 0①^−1 + 0.02①^−2

5: prune node: rule 1, empty feasible region. Queue length: 0

Result (iteration 5): Optimization ended. Optimal solution found:
xopt = [10, 10, 10]ᵀ
˜fopt = −10①^0 − 10①^−1 − 10①^−2
˜∆ = 0①^0 + 0①^−1 + 0①^−2
Step 4. For the unrotated problem the optimal integer solution is xopt = [ρ, ρ, ρ, ρ, ..., ρ]ᵀ.
When a rotation is applied, if the rotation angle is within a sufficiently small range, the optimal
solution is: xopt = [ρ, ρ − 1, ρ, ρ, ..., ρ]ᵀ.
The optimal value is computed as ˜fopt = ˜cᵀxopt, where ˜c is derived from C.
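Step 2 of Algorithm 3 can be transcribed into plain Python (a transcription of ours, using the usual cos/−sin/sin/cos sign convention for the rotation block, which the printed matrix does not show):

```python
import math, random

def small_rotation(n, r_a=0.0002, r_b=0.0005, seed=42):
    """Identity matrix with a rotation of a small random angle
    phi in [r_a, r_b] embedded in the (x1, x2) plane (Step 2 of Alg. 3)."""
    random.seed(seed)
    phi = (r_b - r_a) * random.random() + r_a
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    Q[0][0] = Q[1][1] = math.cos(phi)
    Q[0][1] = -math.sin(phi)
    Q[1][0] = math.sin(phi)
    return Q

Q = small_rotation(5)
# rows 0 and 1 remain orthonormal, so Q is a rotation:
row_dot = sum(Q[0][j] * Q[1][j] for j in range(5))   # ~ 0.0
```

Rotating the polytope (Step 3) is then the matrix product A′ = AQ, while b and C are left unchanged.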
6.4. Test problem 4: the randomly rotated hypercube in 7D
Algorithm 4 describes how to generate a test problem in Rⁿ, based on a randomly rotated hypercube
having side 2000 and centered at the origin. The feasible region is further constrained to be within an
unrotated hypercube with smaller side (200.4), also centered at the origin. The randomly rotated
external hypercube does not play any role, since the feasible region is governed by the inner hypercube,
but it adds complexity to the problem (i.e., it challenges the GrossBB algorithm more).
As an example, let us consider a hypercube in seven dimensions. The corresponding rotation matrix
Q, matrix A′, and vector b′ are reported in Appendix A. The resulting problem (|T4|) can be written
as follows:

lexmax  c′1ᵀx′, c′2ᵀx′, ..., c′7ᵀx′
s.t.    x′ ∈ Z⁷ : A′x′ ≤ b′    (|T4|)

where c′1ᵀ is the first row of C′ reported in Appendix A, c′2ᵀ is its second row, and so on. The
lexicographic optimum for this problem is

xopt = [100, 100, 100, 100, 100, 100, 100]ᵀ.
The GrossBB algorithm has been applied and the optimum has been obtained after 15 iterations, as
shown in Tab. 4 (see Appendix C).
Algorithm 4 Generation of a randomly rotated hypercube problem in Rⁿ

Step 1. Let {Ax ≤ b, x ∈ Zⁿ} be the initial, unrotated hypercube problem in n dimensions. The
problem is formulated as follows:

lexmax  x1, x2, ..., xn
s.t.    −1000 ≤ xi ≤ 1000, i = 1, ..., n
        x ∈ Zⁿ

Step 2. Generate a random rotation matrix Q. It can be computed using a QR factorization utility
applied to a random matrix T. In particular, T must be an n-by-n matrix having entries randomly
generated according to the normal distribution (zero mean and unit variance). In Matlab the
matrix Q can be obtained in this way:

T = randn(n);
[Q, R] = qr(T);

Step 3. Rotate the polytope: A′ = AQ (b does not change under rotations: b′ = b, and C′ = C) and
then add the inner hypercube by adding these constraints to A′:

−(100 + 0.2) ≤ xi ≤ (100 + 0.2), i = 1, ..., n.

Step 4. Compute the LMOMILP optimum:
xopt = [100, 100, ..., 100]ᵀ
˜fopt = ˜cᵀxopt, where ˜c is derived from C.
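The Matlab snippet of Step 2 has a direct NumPy counterpart (a transcription of ours; `numpy.linalg.qr` plays the role of Matlab's `qr`). Strictly speaking, the Q so obtained is orthogonal but may have determinant −1; for the purpose of generating a benchmark this distinction is immaterial:

```python
import numpy as np

n = 7
rng = np.random.default_rng(0)
T = rng.standard_normal((n, n))   # random matrix, N(0, 1) entries (Step 2)
Q, R = np.linalg.qr(T)            # Q is orthogonal: Q @ Q.T == I

# Step 3: rotate the polytope of the unrotated hypercube |x_i| <= 1000.
A = np.vstack([np.eye(n), -np.eye(n)])
A_rot = A @ Q                     # A' = A Q; b' = b is unchanged

orthogonal = np.allclose(Q @ Q.T, np.eye(n))   # True
```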
6.5. Test problem 5: the randomly rotated hypercube in 200D
As a stress test for the GrossBB algorithm, we have applied it to a rotated hypercube problem with
200 objectives in R²⁰⁰, generated using Algorithm 4. The behaviour of the GrossBB is exactly the same
as in the previous test problem in R⁷, but this time the solution is found after 401 iterations (= 1 + 200·2)
instead of after 15 (= 1 + 7·2), where the added one is due to the fact that the problem is always solved
at the root. Each time a problem is solved, two new sub-problems are generated by branching (the left
and the right ones). Due to this fact, the number of iterations is twice the number of dimensions (apart
from the first iteration, of course). Indeed, for this particular problem, the left problems always have an
empty feasible region (once relaxed), while the right ones always have feasible solutions (when relaxed),
but these solutions are not epsilon-integer. By construction, however, the feasible solution of the 200th
right problem is epsilon-integer, i.e., feasible for the original integer-constrained problem, and thus the
algorithm stops (this solution is the vector xopt ∈ R²⁰⁰ equal to [100, 100, ..., 100]ᵀ). For these reasons,
the iterations of the GrossBB for this problem look very similar to those reported in Table 4 in
Appendix C, and thus we decided not to report them here due to space limitations. In this case, the
GrossBB needed 32 hours and 13 minutes to complete the 401 iterations on an Intel i7 920 with 4 cores
(8 threads) at 3.6 GHz.
7. A brief conclusion
In the previous work [1], the Lexicographic Multi-Objective Linear Programming (LMOLP) problem
has been considered. To solve it, the GrossSimplex algorithm, a generalization of the simplex algorithm
able to work with infinitesimals and infinities using the Grossone methodology, has been proposed.
In the present paper, the Lexicographic Multi-Objective Mixed-Integer Linear Programming problem
(called LMOMILP) has been considered. To solve it, the GrossBB algorithm has been introduced. This
method is a generalization of the Branch-and-Bound algorithm to the case where not only finite but also
infinitesimal and infinite numbers, expressible in the numeral system based on ①, can be treated. It has
been proved that the proposed pruning rules and terminating conditions ensure the correct functioning
of the GrossBB algorithm. Finally, five LMOMILP test problems having known solutions have been
proposed, and it was shown that the introduced GrossBB algorithm solves all of them successfully.
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful comments.
Appendix A - Additional information for the third and fourth test problems

This appendix reports information related to test problems 3 and 4. In order to construct the 5-dimensional problem
|T3|, the following rotation matrix Q, matrices A′, C′, and vector b′ have been used:
A′ =
0.9999999087 0.0004273220262 0 0 0
0.0004273220262 0.9999999087 0 0 0
0 0 1.0 0 0
0 0 0 1.0 0
0 0 0 0 1.0
0.9999999087 0.0004273220262 0 0 0
0.0004273220262 0.9999999087 0 0 0
0 0 1.0 0 0
0 0 0 1.0 0
0 0 0 0 1.0
0.9995725867 1.000427231 0 0 0
1.000427231 0.9995725867 0 0 0
1.0 0 0 0 0
01.0 0 0 0
0 0 1.0 0 0
0 0 0 1.0 0
0 0 0 0 1.0
1.0 0 0 0 0
0 1.0 0 0 0
0 0 1.0 0 0
0 0 0 1.0 0
0 0 0 0 1.0
b′ =
1000.2
1000.2
1000.2
1000.2
1000.2
1000.2
1000.0
1000.2
1000.2
1000.2
4.0
4.0
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
Q =
0.9999999087 0.0004273220262 0 0 0
0.0004273220262 0.9999999087 0 0 0
0 0 1.0 0 0
0 0 0 1.0 0
0 0 0 0 1.0
C′ = [c1ᵀ; c2ᵀ; ...; c5ᵀ] =
[ 1 0 ... 0
  0 1 ... 0
  ... ... ... ...
  0 0 ... 1 ] ∈ R^{5×5}
The following matrices and vector are related to test problem |T4| in 7 dimensions:
A′ =
0.3399 0.1993 0.1513 0.1745 0.1464 0.1863 0.8575
0.1419 0.7158 0.1318 0.6014 0.2768 0.1028 0.03592
0.1374 0.3492 0.2176 0.6205 0.5398 0.2576 0.2627
0.3681 0.08162 0.1831 0.26 0.2269 0.8112 0.2172
0.4154 0.1865 0.7852 0.02297 0.09166 0.407 0.03933
0.5363 0.3372 0.4952 0.3105 0.2907 0.2384 0.3401
0.4998 0.4133 0.1307 0.2414 0.6828 0.08847 0.1733
0.3399 0.1993 0.1513 0.1745 0.1464 0.1863 0.8575
0.1419 0.7158 0.1318 0.6014 0.2768 0.1028 0.03592
0.1374 0.3492 0.2176 0.6205 0.5398 0.2576 0.2627
0.3681 0.08162 0.1831 0.26 0.2269 0.8112 0.2172
0.4154 0.1865 0.7852 0.02297 0.09166 0.407 0.03933
0.5363 0.3372 0.4952 0.3105 0.2907 0.2384 0.3401
0.4998 0.4133 0.1307 0.2414 0.6828 0.08847 0.1733
1.0 0 0 0 0 0 0
01.0 0 0 0 0 0
0 0 1.0 0 0 0 0
0001.0 0 0 0
0 0 0 0 1.0 0 0
0 0 0 0 0 1.0 0
0 0 0 0 0 0 1.0
1.0 0 0 0 0 0 0
0 1.0 0 0 0 0 0
0 0 1.0 0 0 0 0
0 0 0 1.0 0 0 0
0 0 0 0 1.0 0 0
0 0 0 0 0 1.0 0
0 0 0 0 0 0 1.0
b′ =
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
1000.0
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
100.2
Q =
0.34 0.199 0.151 0.175 0.146 0.186 0.857
0.142 0.716 0.132 0.601 0.277 0.103 0.0359
0.137 0.349 0.218 0.621 0.54 0.258 0.263
0.368 0.0816 0.183 0.26 0.227 0.811 0.217
0.415 0.187 0.785 0.023 0.0917 0.407 0.0393
0.536 0.337 0.495 0.31 0.291 0.238 0.34
0.50.413 0.131 0.241 0.683 0.0885 0.173
C′ = [c1ᵀ; c2ᵀ; ...; c7ᵀ] =
[ 1 0 ... 0
  0 1 ... 0
  ... ... ... ...
  0 0 ... 1 ] ∈ R^{7×7}
Appendix B - Table 3 (GrossBB iterations on test problem |T3|)

Iteration: result at node(iteration)

Initialize:
- ˜vS(˜P) = ①
- Queue length: 1 (add the root problem to the queue)

1: ˜vI(˜Pc) = −1000.63①^0 − 999.573①^−1 − 1000.2①^−2 − 1000.2①^−3 − 1000.2①^−4. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4

2: ˜vI(˜Pc) = −1000.63①^0 − 999①^−1 − 1000.2①^−2 − 1000.2①^−3 − 1000.2①^−4. Queue length: 1
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 3
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4

3: prune node: rule 1, empty feasible region. Queue length: 2
4: prune node: rule 1, empty feasible region. Queue length: 1

5: ˜vI(˜Pc) = −1000①^0 − 999①^−1 − 1000.2①^−2 − 1000.2①^−3 − 1000.2①^−4. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4

6: ˜vI(˜Pc) = −1000①^0 − 999①^−1 − 1000①^−2 − 1000.2①^−3 − 1000.2①^−4. Queue length: 1
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 3
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4

7: prune node: rule 1, empty feasible region. Queue length: 2

8: ˜vI(˜Pc) = −1000①^0 − 999①^−1 − 1000①^−2 − 1000①^−3 − 1000.2①^−4. Queue length: 1
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 3
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4

9: prune node: rule 1, empty feasible region. Queue length: 2

10: ˜vI(˜Pc) = −1000①^0 − 999①^−1 − 1000①^−2 − 1000①^−3 − 1000①^−4. Queue length: 1
- A feasible solution has been found: xopt = [1000, 999, 1000, 1000, 1000]ᵀ
- update ˜vS(˜P) = ˜vI(˜Pc), prune node: rule 3
- ˜∆ = 0①^0 + 0①^−1 + 0①^−2 + 0①^−3 + 0.0002①^−4

11: prune node: rule 1, empty feasible region. Queue length: 2

Result (iteration 11): Optimization ended. Optimal solution found:
xopt = [1000, 999, 1000, 1000, 1000]ᵀ
˜fopt = −1000①^0 − 999①^−1 − 1000①^−2 − 1000①^−3 − 1000①^−4
˜∆ = 0①^0 + 0①^−1 + 0①^−2 + 0①^−3 + 0①^−4
Appendix C - Table 4 (GrossBB iterations on test problem |T4|)

Iteration: result at node(iteration)

Initialize:
- ˜vS(˜P) = ①
- Queue length: 1 (add the root problem to the queue)

1: ˜vI(˜Pc) = −100.2①^0 − 100.2①^−1 − 100.2①^−2 − 100.2①^−3 − 100.2①^−4 − 100.2①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

2: prune node: rule 1, empty feasible region. Queue length: 1

3: ˜vI(˜Pc) = −100①^0 − 100.2①^−1 − 100.2①^−2 − 100.2①^−3 − 100.2①^−4 − 100.2①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

4: prune node: rule 1, empty feasible region. Queue length: 1

5: ˜vI(˜Pc) = −100①^0 − 100①^−1 − 100.2①^−2 − 100.2①^−3 − 100.2①^−4 − 100.2①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

6: prune node: rule 1, empty feasible region. Queue length: 1

7: ˜vI(˜Pc) = −100①^0 − 100①^−1 − 100①^−2 − 100.2①^−3 − 100.2①^−4 − 100.2①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

8: prune node: rule 1, empty feasible region. Queue length: 1

9: ˜vI(˜Pc) = −100①^0 − 100①^−1 − 100①^−2 − 100①^−3 − 100.2①^−4 − 100.2①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

10: prune node: rule 1, empty feasible region. Queue length: 1

11: ˜vI(˜Pc) = −100①^0 − 100①^−1 − 100①^−2 − 100①^−3 − 100①^−4 − 100.2①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

12: prune node: rule 1, empty feasible region. Queue length: 1

13: ˜vI(˜Pc) = −100①^0 − 100①^−1 − 100①^−2 − 100①^−3 − 100①^−4 − 100①^−5 − 100.2①^−6. Queue length: 0
- no pruning rules applied, branch ˜Pc into two sub-problems. Queue length: 2
- ˜∆ = 100①^0 + 100①^−1 + 100①^−2 + 100①^−3 + 100①^−4 + 100①^−5 + 100①^−6

14: prune node: rule 1, empty feasible region. Queue length: 1

15: ˜vI(˜Pc) = −100①^0 − 100①^−1 − 100①^−2 − 100①^−3 − 100①^−4 − 100①^−5 − 100①^−6. Queue length: 0
- A feasible solution has been found: xopt = [100, 100, 100, 100, 100, 100, 100]ᵀ
- update ˜vS(˜P) = ˜vI(˜Pc), prune node: rule 3
- ˜∆ = 0①^0 + 0①^−1 + 0①^−2 + 0①^−3 + 0①^−4 + 0①^−5 + 0①^−6

Result (iteration 15): Optimization ended. Optimal solution found:
xopt = [100, 100, 100, 100, 100, 100, 100]ᵀ
˜fopt = −100①^0 − 100①^−1 − 100①^−2 − 100①^−3 − 100①^−4 − 100①^−5 − 100①^−6
˜∆ = 0①^0 + 0①^−1 + 0①^−2 + 0①^−3 + 0①^−4 + 0①^−5 + 0①^−6
[1] M. Cococcioni, M. Pappalardo, and Y. D. Sergeyev, “Lexicographic multi-objective linear programming using grossone
methodology: Theory and algorithm,” Applied Mathematics and Computation, vol. 318, pp. 298–311, 2018.
[2] Y. D. Sergeyev, “Numerical infinities and infinitesimals: Methodology, applications, and repercussions on two Hilbert
problems,” EMS Surveys in Mathematical Sciences, vol. 4, pp. 219–320, 2017.
[3] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons Inc, 2 ed., 2001.
[4] M. Cococcioni, P. Ducange, B. Lazzerini, and F. Marcelloni, “A new multi-objective evolutionary algorithm based on
convex hull for binary classifier optimization,” in Proc. 2007 IEEE Congress on Evolutionary Computation (IEEE-
CEC’07), pp. 3150–3156, 2007.
[5] P. M. Pardalos, A. Žilinskas, and J. Žilinskas, Non-Convex Multi-Objective Optimization. Springer International
Publishing, 2017.
[6] S. Khosravani, M. Jalali, A. Khajepour, A. Kasaiezadeh, S. K. Chen, and B. Litkouhi, “Application of lexicographic
optimization method to integrated vehicle control systems,” IEEE Transactions on Industrial Electronics, vol. 65,
no. 12, pp. 9677–9686, 2018.
[7] E. Weber, A. Rizzoli, R. Soncini-Sessa, and A. Castelletti, “A lexicographic optimization in water resource planning: the case of Lake Verbano, Italy,” in Proc. 1st Biennial Meeting of the International Environmental Modelling and Software Society (IEMSS), 2002.
[8] I. Stanimirovic, “Compendious lexicographic method for multi-objective optimization,” Facta universitatis - series:
Mathematics and Informatics, vol. 27, no. 1, pp. 55–66, 2012.
[9] J. Marques-Silva, J. Argelich, A. Graça, and I. Lynce, “Boolean lexicographic optimization: algorithms & applications,” Annals of Mathematics and Artificial Intelligence, vol. 62, no. 3, pp. 317–343, 2011.
[10] M. Cococcioni, A. Cudazzo, M. Pappalardo, and Y. D. Sergeyev, “Grossone methodology for lexicographic mixed-integer linear programming problems,” in Proc. of the 3rd International Conference and Summer School on Numerical Computations: Theory and Algorithms, June 2019.
[11] Y. D. Sergeyev, “Independence of the grossone-based infinity methodology from non-standard analysis and comments
upon logical fallacies in some texts asserting the opposite,” Foundations of Science, vol. 24, no. 1, pp. 153–170, 2019.
[12] G. Lolli, “Metamathematical investigations on the theory of grossone,” Applied Mathematics and Computation,
vol. 255, pp. 3–14, 2015.
[13] M. Margenstern, “Using grossone to count the number of elements of infinite sets and the connection with bijections,”
p-Adic Numbers, Ultrametric Analysis and Applications, vol. 3, no. 3, pp. 196–204, 2011.
[14] F. Montagna, G. Simi, and A. Sorbi, “Taking the Pirahã seriously,” Communications in Nonlinear Science and Numerical Simulation, vol. 21, no. 1–3, pp. 52–69, 2015.
[15] Y. D. Sergeyev, Computer system for storing infinite, infinitesimal, and finite quantities and executing arithmetical
operations with them. USA patent 7,860,914, 2010.
[16] M. Cococcioni, M. Pappalardo, and Y. D. Sergeyev, “Towards lexicographic multi-objective linear programming using grossone methodology,” in Proc. of the 2nd Intern. Conf. “Numerical Computations: Theory and Algorithms” (Y. D. Sergeyev, D. E. Kvasov, F. Dell’Accio, and M. S. Mukhametzhanov, eds.), vol. 1776, p. 090040, New York: AIP Publishing, 2016.
[17] L. Lai, L. Fiaschi, and M. Cococcioni, “Solving mixed Pareto-lexicographic many-objective optimization problems: The case of priority levels,” submitted to Swarm and Evolutionary Computation, 2019.
[18] R. De Leone, G. Fasano, and Y. D. Sergeyev, “Planar methods and grossone for the conjugate gradient breakdown in
nonlinear programming,” Computational Optimization and Applications, vol. 71, pp. 73–93, 2018.
[19] S. De Cosmis and R. De Leone, “The use of grossone in mathematical programming and operations research,” Applied
Mathematics and Computation, vol. 218, no. 16, pp. 8029–8038, 2012.
[20] R. De Leone, “Nonlinear programming and grossone: Quadratic programming and the role of constraint qualifications,”
Applied Mathematics and Computation, vol. 318, pp. 290–297, 2018.
[21] M. Gaudioso, G. Giallombardo, and M. S. Mukhametzhanov, “Numerical infinitesimals in a variable metric method
for convex nonsmooth optimization,” Applied Mathematics and Computation, vol. 318, pp. 312–320, 2018.
[22] Y. D. Sergeyev, D. E. Kvasov, and M. S. Mukhametzhanov, “On strong homogeneity of a class of global optimization algorithms working with infinite and infinitesimal scales,” Communications in Nonlinear Science and Numerical Simulation, vol. 59, pp. 319–330, 2018.
[23] F. Caldarola, “The Sierpinski curve viewed by numerical computations with infinities and infinitesimals,” Applied
Mathematics and Computation, vol. 318, pp. 321–328, 2018.
[24] Y. D. Sergeyev, “Numerical point of view on Calculus for functions assuming finite, infinite, and infinitesimal values over finite, infinite, and infinitesimal domains,” Nonlinear Analysis Series A: Theory, Methods & Applications, vol. 71, no. 12, pp. e1688–e1707, 2009.
[25] Y. D. Sergeyev, “Numerical infinities applied for studying Riemann series theorem and Ramanujan summation,” in
AIP Conference Proceedings of ICNAAM 2017, vol. 1978, p. 020004, New York: AIP Publishing, 2018.
[26] A. Zhigljavsky, “Computing sums of conditionally convergent and divergent series using the concept of grossone,”
Applied Mathematics and Computation, vol. 218, no. 16, pp. 8064–8076, 2012.
[27] F. Caldarola, “The exact measures of the Sierpinski d-dimensional tetrahedron in connection with a Diophantine nonlinear system,” Communications in Nonlinear Science and Numerical Simulation, vol. 63, pp. 228–238, 2018.
[28] L. D’Alotto, “A classification of two-dimensional cellular automata using infinite computations,” Indian Journal of
Mathematics, vol. 55, pp. 143–158, 2013.
[29] Y. D. Sergeyev, “Evaluating the exact infinitesimal values of area of Sierpinski’s carpet and volume of Menger’s sponge,” Chaos, Solitons & Fractals, vol. 42, no. 5, pp. 3042–3046, 2009.
[30] Y. D. Sergeyev, “Using blinking fractals for mathematical modelling of processes of growth in biological systems,”
Informatica, vol. 22, no. 4, pp. 559–576, 2011.
[31] Y. D. Sergeyev, “The exact (up to infinitesimals) infinite perimeter of the Koch snowflake and its finite area,” Communications in Nonlinear Science and Numerical Simulation, vol. 31, no. 1–3, pp. 21–29, 2016.
[32] D. Iudin, Y. D. Sergeyev, and M. Hayakawa, “Interpretation of percolation in terms of infinity computations,” Applied
Mathematics and Computation, vol. 218, no. 16, pp. 8099–8111, 2012.
[33] D. Iudin, Y. D. Sergeyev, and M. Hayakawa, “Infinity computations in cellular automaton forest-fire model,” Communications in Nonlinear Science and Numerical Simulation, vol. 20, no. 3, pp. 861–870, 2015.
[34] M. Margenstern, “Fibonacci words, hyperbolic tilings and grossone,” Communications in Nonlinear Science and
Numerical Simulation, vol. 21, no. 1–3, pp. 3–11, 2015.
[35] Y. D. Sergeyev, “Counting systems and the First Hilbert problem,” Nonlinear Analysis Series A: Theory, Methods &
Applications, vol. 72, no. 3-4, pp. 1701–1708, 2010.
[36] Y. D. Sergeyev and A. Garro, “Single-tape and multi-tape Turing machines through the lens of the Grossone methodology,” Journal of Supercomputing, vol. 65, no. 2, pp. 645–663, 2013.
[37] L. Fiaschi and M. Cococcioni, “Numerical asymptotic results in game theory using Sergeyev’s Infinity Computing,”
Int. Journal of Unconventional Computing, vol. 14, no. 1, pp. 1–25, 2018.
[38] L. Fiaschi and M. Cococcioni, “Generalizing pure and impure iterated prisoner’s dilemmas to the case of infinite and infinitesimal quantities,” in Proc. of the 3rd International Conference and Summer School on Numerical Computations: Theory and Algorithms, June 2019.
[39] L. Fiaschi and M. Cococcioni, “Non-archimedean game theory: a numerical approach,” in preparation, 2019.
[40] D. Rizza, “A study of mathematical determination through Bertrand’s Paradox,” Philosophia Mathematica, vol. 26,
no. 3, pp. 375–395, 2018.
[41] D. Rizza, “Numerical methods for infinite decision-making processes,” Int. Journal of Unconventional Computing,
vol. 14, no. 2, pp. 139–158, 2019.
[42] C. S. Calude and M. Dumitrescu, “Infinitesimal probabilities based on grossone,” SN Computer Science, 2020.
[43] P. Amodio, F. Iavernaro, F. Mazzia, M. S. Mukhametzhanov, and Y. D. Sergeyev, “A generalized Taylor method of
order three for the solution of initial value problems in standard and infinity floating-point arithmetic,” Mathematics
and Computers in Simulation, vol. 141, pp. 24–39, 2017.
[44] Y. D. Sergeyev, “Higher order numerical differentiation on the Infinity Computer,” Optimization Letters, vol. 5, no. 4,
pp. 575–585, 2011.
[45] Y. D. Sergeyev, “Solving ordinary differential equations by working with infinitesimals numerically on the Infinity
Computer,” Applied Mathematics and Computation, vol. 219, no. 22, pp. 10668–10681, 2013.
[46] Y. D. Sergeyev, M. S. Mukhametzhanov, F. Mazzia, F. Iavernaro, and P. Amodio, “Numerical methods for solving initial value problems on the Infinity Computer,” Int. Journal of Unconventional Computing, vol. 12, no. 1, pp. 3–23, 2016.
[47] F. Iavernaro, F. Mazzia, M. S. Mukhametzhanov, and Y. D. Sergeyev, “Conjugate-symplecticity properties of Euler-
Maclaurin methods and their implementation on the Infinity Computer,” Applied Numerical Mathematics, 2020.
[48] M. Pappalardo and M. Passacantando, Ricerca Operativa. Pisa University Press, 2012.
[49] Y. D. Sergeyev, “Lagrange Lecture: Methodology of numerical computations with infinities and infinitesimals,” Rendiconti del Seminario Matematico dell’Università e del Politecnico di Torino, vol. 68, no. 2, pp. 95–113, 2010.
[50] Y. D. Sergeyev, “Un semplice modo per trattare le grandezze infinite ed infinitesime,” Matematica nella Società e nella Cultura: Rivista della Unione Matematica Italiana, vol. 8, no. 1, pp. 111–147, 2015.
[51] Y. D. Sergeyev, Arithmetic of Infinity. CS: Edizioni Orizzonti Meridionali, 2003, 2nd ed. 2013.
... GM has found successful applications in a lot of research fields, such as cellular automata [15], fractals [42], ordinary differential equations [2,31,41,45], game theory [13,24], non-linear optimization [20,21,25], evolutionary optimization [34,35], and numerical simulation [23], among others. Also, linear programming enjoyed the advent of GM [11,12,14]. In particular, in [14] the authors proposed a Grossone-based extension of the Simplex algorithm able to deal with lexicographic multi-objective linear programming problems, namely the Gross-Simplex (G-Simplex) algorithm. ...
... Ax ≤b (11) or ...
... Ax >b as the two vectors are left-multiplied by the same vector y ≥ 0. However, such a result conflicts with (11). ...
Article
Full-text available
The goal of this work is to propose a new type of constraint for linear programs: inequalities having infinite, finite, and infinitesimal values in the right-hand side. Because of the nature of such constraints, the feasible region polyhedron becomes more complex, since its vertices can be represented by non-purely finite coordinates , and so is the optimum of the problem. The introduction of such constraints enlarges the class of linear programs, where those described by finite values only become a special case. To tackle optimization problems over such polyhedra, there is a need for an ad-hoc solving routine: this work proposes a generalization of the Simplex algorithm, which is able to solve common linear programs as corner cases. Finally, the study presents three relevant applications that can benefit from the use of these novel constraints, making the use of the extended Simplex algorithm essential. For each application, an exemplifying benchmark is solved, showing the effectiveness of the proposed routine.
... In the present study we consider the Lexicographic Multi-Objective Integer Linear Programming (LMOILP) problem. Differently from [39], where we have solved the problem by extending the Branchand-Bound method, in this work we aim at devising a solution based on a generalisation of the classical cutting plane method. Then, to efficiently manage the addition of new constraints generated by the cutting planes, we employed the duality theory. ...
... Superscript T indicates the transpose and the symbol lexmin in P denotes Lexicographic Minimum and means that the first objective is infinitely more important than the second, which is, on its turn, infinitely more important than the third one, and so on. As done in [39,41], we can reformulate problem P as a single-objective one, by using the Grossone Methodology to obtain a scalarization. In the following subsections, we provide a brief introduction on Grossone methodology, focusing on the key points used in this paper, and the LMOILP reformulation. ...
... Notice that c qT x = c q 1 x 1 + ... + c q n x n is the q-th Gross-digit, q = 1, ..., r. In [39] we have shown that problemP is equivalent to problem P and this new formulationP is attractive because the set of multiple objective functions is mapped into a single (Gross-) scalar function to be optimised. Moreover, the continuous relaxation version is the following Gross-LP problem [1]: ...
Article
Full-text available
This work presents a new cutting plane method for lexicographic multi-objective integerlinear programming (LMOILP). The method uses Grossone Methodology to reformulate a LMOILP problem into one having a single non-Archimedean scalar objective function, asdone in [1] (but in that case in the absence of the integer constraints). The problem, without the integer constraints, is solved using the GrossSimplex algorithm presented in [1] to find a candidate optimal solution. Here a novel cutting plane is introduced, named Gross-basedObjective Function Cutting Plane whenever the optimal value of the Gross-scalar objectivefunction is Gross-fractional. When it happens to be not Gross-fractional, cutting planesare generated using the Fractional Cutting Plane, derived from fractional components of theoptimal solution. Moreover, by combining them, we proposed an algorithm that we calledGross-based Cutting Plane (GCP) method. It has been proved that it finds the optimalsolution of a LMOILP problem and terminates after a finite number of iterations. To speed-up the GCP, at each iteration subsequent the first one, we re-use the optimal basis of the lastcontinuous relaxation, by running the GrossDualSimplex algorithm. This is the well-known warm-start technique, which, however, needs specific attention due to the need to solve a linear problem having a non-Archimedean right-hand side. In the experimental part, we show the efficacy of the proposed approach.
... The use of numerical infinite and infinitesimal numbers in concrete applications is a quite recent trend that seems to be particularly fruitful, and among the research fields positively affected the most by such a novelty there is multi-objective optimization. Cococcioni et al. (2018Cococcioni et al. ( , 2021 used approaches based on non-finite numbers to linear programming, Cococcioni et al. (2020Cococcioni et al. ( , 2023 to mixed-integer linear programming, Astorino and Fuduli (2020);De Leone et al. (2020); Fiaschi and Cococcioni (2022) to quadratic programming, De Cosmis and De Leone (2012); Lai et al. (2020Lai et al. ( , 2021 to non-linear optimization. ...
Preprint
Full-text available
This work proposes a novel approach to the deep hierarchical classification task, i.e., the problem of classifying data according to multiple labels organized in a rigid parent-child structure. It consists in a multi-output deep neural network equipped with specific projection operators placed before each output layer. The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields: lexicographic multi-objective optimization, non-standard analysis, and deep learning. To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks, on the CIFAR10, CIFAR100 (where it has been originally and recently proposed before being adopted and tuned for multiple real-world applications) and Fashion-MNIST benchmarks. Evidence states that an LH-DNN can achieve comparable if not superior performance, especially in the learning of the hierarchical relations, in the face of a drastic reduction of the learning parameters, training epochs, and computational time, without the need for ad-hoc loss functions weighting values.
... In Eqs. (19)- (23), the inequalities are linear constraints and can be solved using exact algorithms based on mathematical principles [30]. However, the equality constraints describing the power flow are second-order cone constraints (SOCCs), which cannot be solved directly by using exact algorithms. ...
Article
Full-text available
In recent times, the impact of typhoon disasters on integrated energy active distribution networks (IEADNs) has received increasing attention, particularly, in terms of effective cascading fault path prediction and enhanced fault recovery performance. In this study, we propose a modified ORNL-PSerc-Alaska (OPA) model based on optimal power flow (OPF) calculation to forecast IEADN cascading fault paths. We first established the topology and operational model of the IEADNs, and the typical fault scenario was chosen according to the component fault probability and information entropy. The modified OPA model consisted of two layers: An upper-layer model to determine the cascading fault location and a lower-layer model to calculate the OPF by using Yalmip and CPLEX and provide the data to update the upper-layer model. The approach was validated via the modified IEEE 33-node distribution system and two real IEADNs. Simulation results showed that the fault trend forecasted by the novel OPA model corresponded well with the development and movement of the typhoon above the IEADN. The proposed model also increased the load recovery rate by >24% compared to the traditional OPA model.
... This is a subset of all the "sub-platforms", which is conflict-free in the sense that there cannot be two "sub-platforms" in this set when using which two trains will collide. For example, such a set was used in [13,14] when studying the task of assigning locomotives. Further, this set is denoted by Z and is considered to be given a priori, i.e., it is an input parameter in the problem under consideration. ...
Article
Algorithmic and contributory assistance in resolving railway traffic control problems is being developed. It is founded on presenting practical issues in optimizing declarations using linear programming implements. A novel challenge of determining a pathway control a time interval during which some sectors of the railway network are blocked for reparation work is added to the previously proposed models. Applicable statements, aimed at the synchronized examination for pathway control and a train timetable for a detailed portion of the railway network A mathematical model and an optimized strategy are offered as solutions. The creative situation is simplified to a mixed integer linear problem. To explain potential computing challenges in resolving an issue, a method for obtaining an estimate is planned. It would be founded on creating a rudimentary movement timetable and its consequent revision to the reason for the necessity for railway control. Two algorithms are implemented to get an approximate solution. A basic and adapted train schedule is formed in stages of groupings of trains connected by another origin and target terminals in the first. Steps were performed single train once a time as per the time of preparation for leaving in the second. A mathematical experimentation's findings are provided.
... The proposed method has been tested on four examples with at most 10 objective functions. The same authors addressed (Cococcioni et al. 2020) the case in which some variables are integers. They solved the problem using a branch-and-bound algorithm in which the relaxation at each node of the search tree is solved using the GrossSimplex. ...
Article
Some airlines use the preferential bidding system to construct the schedules of their pilots. In this system, the pilots bid on the different activities and the schedules that lexicographically maximize the scores of the pilots according to their seniority are selected. A sequential approach to solve this maximization problem is natural: The problem is first solved with the bids of the most senior pilot, and then it is solved with those of the second most senior without decreasing the score of the most senior, and so on. The literature admits that the structure of the problem somehow imposes such an approach. The problem can be modeled as an integer linear lexicographic program. We propose a new efficient method, which relies on column generation for solving its continuous relaxation and returns proven optimality gaps. To design this column generation, we prove that bounded linear lexicographic programs admit “primal-dual” feasible bases, and we show how to compute such bases efficiently. Another contribution on which our method relies is the extension of standard tools for resource-constrained longest path problems to their lexicographic versions. This is useful in our context because the generation of new columns is modeled as a lexicographic resource-constrained longest path problem. Numerical experiments show that this new method is already able to solve to proven optimality industrial instances provided by Air France, with up to 150 pilots. By adding a last ingredient in the resolution of the longest path problems, which exploits the specificity of the preferential bidding system, the method achieves for these instances computational times that are compatible with operational constraints. Supplemental Material: The online appendix is available at https://doi.org/10.1287/trsc.2022.0372 .
... The introduction of numerical encodings allowed researchers to implement software simulators for algebraic operations between non-Archimedean numbers, paving the way for their use in numerical computations. Some of the applications of these numbers regard multi-objective optimization problems combined with linear programming [2][3][4][5], quadratic programming [6], evolutionary algorithms [7], game theory [8], artificial intelligence [9,10], etc. A computer architecture able to manipulate Grossone-based numbers was patented in 2009 (see [11]). ...
Chapter
Full-text available
This work presents the design and synthesis of a processing unit for numbers encoded according to the recently introduced BAN format. Such an encoding allows one to represent numbers which are not only finite (as the reals) but also infinitely large or infinitely small, i.e., non-Archimedean. The motivation behind this study is the significant burst the non-Archimedean numerical computations have received in the last 20 years and the applications that have been found. With a hardware support, this operations would significantly increase in speed, enlarging the spectrum of possible applications to industrial and real-time ones.KeywordsNon-Archimedean fieldsAlpha TheoryBounded Algorithmic Number (BAN)Arithmetic UnitFPGA
... Grossone Methodology has found successful applications in a lot of research fields, such as cellular automata [13], fractals [14], ordinary differential equations [15][16][17][18], game theory [19,20], non-linear optimization [21][22][23], evolutionary optimization [24,25], and numerical simulation [26], among others. Also, linear programming enjoyed the advent of Grossone Methodology [1,27,28]. In particular, in [1] the authors proposed a Grossone-based extension of the Simplex algorithm able to deal with lexicographic multi-objective linear programming problems, namely the Gross-Simplex (G-Simplex) algorithm. ...
Preprint
Full-text available
The goal of this work is to propose a new type of constraint for linear programs: inequalities having a non-Archimedean right-hand side. Here, the word non-Archimedean refers to values that can be infinite, finite, or infinitesimal. Because of the nature of such constraints, the polyhedron describing the feasible region becomes more complex, since its vertices are now represented by non-Archimedean coordinates, and so is the optimum of the problem. The introduction of such a constraint enlarges the class of linear programs, where the Archimedean ones become a special case. To tackle optimization problems over the resulting polyhedra, this work presents a solving routine, which consists in a generalization of the Simplex algorithm. This solver optimizes Archimedean linear programs as corner cases. Finally, the study presents three relevant applications which can benefit from the use of constraints with non-Archimedean right-hand side, making the use of the extended Simplex algorithm essential. For each application, an exemplifying benchmark is solved, showing the effectiveness of the solving routine.
Conference Paper
Full-text available
This paper studies a class of mixed Pareto-Lexicographic multi-objective optimization problems where the preference among the objectives is available in different priority levels (PLs) before the start of the optimization process-akin to many practical problems involving domain experts. Each priority level (PL) is a group of objectives having an identical importance in terms of optimization, so that they must be optimized in the standard Pareto sense. However, between two PLs, a lexicographic preference structure exists. Clearly, finding the entire set of Pareto optimal solutions first and then choosing the lexicographic solutions using the given PL structure is not computationally efficient. A new efficient algorithm is presented here using a recent mathematical breakthrough in handling infinite and infinitesimal quantities: the Grossone methodology. The proposal has been implemented within a popular multi-objective optimization algorithm (NSGA-II), thereby obtaining its generalized version named PL-NSGA-II, although other EMO or EMaO algorithms could have also been used instead. A quantitative comparison of PL-NSGA-II performance against existing algorithms is made. Results clearly show the advantage of the proposed Grossone-based methodology in solving such priority-level many-objective problems.
Article
Full-text available
In this paper we consider the Pure and Impure Prisoner's Dilemmas. Our purpose is to theoretically extend them when using non-Archimedean quantities and to work with them numerically, potentially on a computer. The recently introduced Sergeyev's Grossone Methodology proved to be effective in addressing our problem, because it is both a simple yet effective way to model non-Archimedean quantities and a framework which allows one to perform numerical computations between them. In addition, we could be able, in the future, to perform the same computations in hardware, resorting to the infinity computer patented by Sergeyev himself. After creating the theoretical model for Pure and Impure Prisoner's Dilemmas using Grossone Methodology, we have numerically reproduced the diagrams associated to our two new models, using a Matlab simulator of the Infinity Computer. Finally, we have proved some theoretical properties of the simulated diagrams. Our tool is thus ready to assist the modeler in all that problems for which a non-Archimedean Pure/Impure Prisoner's Dilemma model provides a good description of reality: energy market modeling, international trades modeling, political merging processes, etc.
Article
Full-text available
This paper introduces a new class of optimization problems, called Mixed Pareto-Lexicographic Multi-objective Optimization Problems (MPL-MOPs), to provide a suitable model for scenarios where some objectives have priority over some others. Specifically, this work focuses on a relevant subclass of MPL-MOPs, namely problems involving Pareto optimization of two or more priority chains. A priority chain (PC) is a sequence of objectives lexicographically ordered by importance. After examining the main features of those problems, named PC-MPL-MOPs, we propose an innovative approach to deal with them, built upon the Grossone Methodology, a recent theory which enables handling the priority in an elegant and powerful way. The most interesting aspect of this technique is the possibility to seamlessly embed it in any existing evolutionary algorithm, without altering its logical structure. In order to provide concrete examples, we implemented it on top of the well-known NSGA-II and MOEA/D algorithms, calling these new generalized versions PC-NSGA-II and PC-MOEA/D, respectively. In the second part of this article, we test the strength of our strategy in solving multi- and even many-objective problems with priority chains, comparing it against the results achieved by standard priority-based and non-priority-based approaches. Experiments show that our algorithms are generally able to produce more solutions and of higher quality.
Chapter
Full-text available
In this work, a generalization of both Pure and Impure iterated Prisoner’s Dilemmas is presented. More precisely, the generalization concerns the use of non-Archimedean quantities, i.e., payoffs that can be infinite, finite or infinitesimal and probabilities that can be finite or infinitesimal. This new approach allows to model situations that cannot be adequately addressed using iterated games with purely finite quantities. This novel class of models contains, as a special case, the classical known ones. This is an important feature of the proposed methodology, which assures that we are proposing a generalization of the already known games. The properties of the generalized models have also been validated numerically, by using a Matlab simulator of Sergeyev’s Infinity Computer.
Chapter
Full-text available
In this work we have addressed lexicographic multi-objective linear programming problems where some of the variables are constrained to be integer. We have called this class of problems LMILP, which stands for Lexicographic Mixed Integer Linear Programming. Following one of the approach used to solve mixed integer linear programming problems, the branch and bound technique, we have extended it to work with infinitesimal/infinite numbers, exploiting the Grossone Methodology. The new algorithm, called GrossBB, is able to solve this new class of problems, by using internally the GrossSimplex algorithm (a recently introduced Grossone extension of the well-known simplex algorithm, to solve lexicographic LP problems without integer constraints). Finally we have illustrated the working principles of the GrossBB on a test problem.
Article
Full-text available
In finite probability theory, the only probability zero event is the impossible one, but in standard Kolmogorov probability theory, probability zero events occur all the time. Prominent logicians, probability experts and philosophers of probability, including Carnap, Kemeny, Shimony, Savage, De Finetti, Jeffrey, have successfully argued that a sound probability should be regular, that is, only the impossible event should have zero probability. This intuition is shared by physicists too. Totality is another desideratum which means that every event should be assigned a probability. Regularity and totality are achievable in rigorous mathematical terms even for infinite events via hyper-reals valued probabilities. While the mathematics of these theories is not objectionable, some philosophical arguments purport to show that infinitesimal probabilities are inherently problematic. In this paper, we present a simpler and natural construction—based on Sergeyev’s calculus with Grossone (in a formalism inspired by Lolli) enriched with infinitesimals—of a regular, total, finitely additive, uniformly distributed probability on infinite sets of positive integers. These probability spaces—which are inspired by and parallels the construction of classical probability—will be briefly studied. In this framework, De Finetti fair lottery has the natural solution and Williamson’s objections against infinitesimal probabilities are mathematically refuted.
Article
Full-text available
Prisoner’s Dilemma (PD) is a widely studied game that plays an important role in Game Theory. This paper aims at extending PD Tournaments to the case of infinite, finite or infinitesimal payoffs using Sergeyev’s Infinity Computing (IC). By exploiting IC, we are able to show the limits of the classical approach to PD Tournaments analysis of the classical theory, extending both the sets of the feasible and numerically computable tournaments. In particular we provide a numerical computation of the exact outcome of a simple PD Tournament where one player meets every other an infinite number of times, for both its deterministic and stochastic formulations.
Article
Full-text available
Certain mathematical problems prove very hard to solve because some of their intuitive features have not been assimilated or cannot be assimilated by the available mathematical resources. This state of affairs triggers an interesting dynamic whereby the introduction of novel conceptual resources converts the intuitive features into further mathematical determinations in light of which a solution to the original problem is made accessible. I illustrate this phenomenon through a study of Bertrand’s paradox.
Conference Paper
Full-text available
A computational methodology called Grossone Infinity Computing, introduced with the intention to allow one to work with infinities and infinitesimals numerically, has been applied recently to a number of problems in numerical mathematics (optimization, numerical differentiation, numerical algorithms for solving ODEs, etc.). The possibility to use a specially developed computational device called the Infinity Computer (patented in the USA and EU) for working with infinite and infinitesimal numbers numerically gives an additional advantage to this approach in comparison with traditional methodologies studying infinities and infinitesimals only symbolically. The grossone methodology uses Euclid’s Common Notion no. 5, ‘The whole is greater than the part’, and applies it to finite, infinite, and infinitesimal quantities and to finite and infinite sets and processes. It does not contradict Cantor’s and non-standard analysis views on infinity and can be considered as an applied development of their ideas. In this paper we consider infinite series, and particular attention is dedicated to divergent series with alternating signs. The Riemann series theorem states that conditionally convergent series can be rearranged in such a way that they either diverge or converge to an arbitrary real number. It is shown here that Riemann’s result is a consequence of the fact that the symbol ∞, used traditionally, does not allow us to express quantitatively the number of addends in the series; in other words, it just shows that the number of summands is infinite and does not allow us to count them. The usage of the grossone methodology allows us to see that (as happens in the case where the number of addends is finite) rearrangements do not change the result for any sum with a fixed infinite number of summands.
Some traditional summation techniques, such as Ramanujan summation, are also considered; these assign negative results to divergent series containing infinitely many positive integers. It is shown that carefully counting the number of addends in infinite series allows us to avoid this kind of result if grossone-based numerals are used.
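The rearrangement phenomenon that the abstract attributes to the uncounted "∞" can be demonstrated numerically. The greedy scheme below reorders the alternating harmonic series 1 − 1/2 + 1/3 − ... (classical sum ln 2) so that its partial sums approach an arbitrary target; the target 0.5 is an illustrative choice of ours.

```python
# Greedy rearrangement of the conditionally convergent alternating harmonic
# series: keep adding positive terms 1, 1/3, 1/5, ... while below the target,
# and negative terms -1/2, -1/4, ... while above it. The partial sums then
# oscillate around the target -- the Riemann series theorem in action.

def rearranged_partial_sum(target, n_terms):
    s = 0.0
    pos, neg = 1, 2          # next odd / even denominators to use
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / pos   # below target: spend a positive term
            pos += 2
        else:
            s -= 1.0 / neg   # above target: spend a negative term
            neg += 2
    return s

print(rearranged_partial_sum(0.5, 100_000))   # close to 0.5, not ln 2
```

In the grossone view this is not a paradox: the rearranged partial sum uses a different split between positive and negative addends, i.e. a different (countable, grossone-expressible) number of summands of each sign.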
Article
Multi-derivative one-step methods based upon Euler–Maclaurin integration formulae are considered for the solution of canonical Hamiltonian dynamical systems. Despite the negative result that symplecticity cannot be attained by any multi-derivative Runge–Kutta method, we show that the Euler–Maclaurin method of order p is conjugate-symplectic up to order p+2. This feature entitles these methods to play a role in the context of geometric integration and, to make their implementation competitive with the existing integrators, we explore the possibility of computing the underlying higher-order derivatives with the aid of the Infinity Computer.
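As a concrete sketch (ours, not the paper's Infinity Computer implementation), consider the order-4 Euler–Maclaurin one-step method y1 = y0 + h/2·(f0 + f1) − h²/12·(f1′ − f0′) applied to the harmonic oscillator in complex form, y′ = i·y (q = Re y, p = Im y). For this linear problem the implicit step solves in closed form to multiplication by the (2,2) Padé approximant of exp(ih), and the good long-time behavior shows up as energy preservation up to roundoff.

```python
import cmath

# Order-4 Euler-Maclaurin one-step method for y' = i*y (harmonic
# oscillator in complex form). With f = i*y and f' = -y, the implicit step
#   y1 = y0 + h/2*(f0 + f1) - h^2/12*(f1' - f0')
# rearranges to multiplication by the (2,2) Pade approximant of exp(ih).
# Illustrative sketch only; the paper's methods use the Infinity Computer
# to obtain the higher derivatives for general Hamiltonian systems.

def euler_maclaurin_step(y, h):
    z = 1j * h
    return y * (1 + z / 2 + z * z / 12) / (1 - z / 2 + z * z / 12)

h, y = 0.1, 1.0 + 0.0j
for _ in range(10_000):                  # integrate up to t = 1000
    y = euler_maclaurin_step(y, h)

energy_drift = abs(abs(y) ** 2 - 1.0)    # 2H = q^2 + p^2 = |y|^2
phase_error = abs(y - cmath.exp(1j * h * 10_000))
```

Because the diagonal Padé approximant has modulus 1 on the imaginary axis, `energy_drift` stays at roundoff level over arbitrarily long integrations, while `phase_error` grows slowly as expected for an order-4 method.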