Figure 1 - uploaded by David E. Bernal Neira
k = 1, G(5, 0.25).

Source publication
Preprint
Full-text available
Quantum devices can be used to solve constrained combinatorial optimization (COPT) problems thanks to the use of penalization methods to embed the COPT problem's constraints in its objective to obtain a quadratic unconstrained binary optimization (QUBO) reformulation of the COPT. However, the particular way in which this penalization is carried out...

Contexts in source publication

Context 1
... particular, Figures 1 and 2 compare the ∆_min obtained from the L-QUBO (12) and the N-QUBO (20) for instances of the MkCS problem with k = 1, where the underlying graphs are randomly drawn G(5, 0.25) and G(5, 0.75) graphs. These bar plots, as well as the remaining ones in this section, provide information about the distribution of ∆_min. ...
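For reference, instances of this kind can be generated with standard tools. The following is a minimal sketch, assuming the Erdős–Rényi model G(n, p) and the networkx library (neither of which is prescribed by the source):

```python
import networkx as nx

# Draw a random G(n, p) graph: each of the n*(n-1)/2 possible edges
# is included independently with probability p (Erdos-Renyi model).
n, p = 5, 0.25
graph = nx.gnp_random_graph(n, p, seed=42)

print(graph.number_of_nodes(), graph.number_of_edges())
```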
Context 2
... Figures 1 and 2, it follows that the N-QUBO results in a higher ∆_min than the L-QUBO; therefore, in theory, the N-QUBO Hamiltonian should converge faster to a low-energy state than the L-QUBO Hamiltonian. We can state this more formally by performing a simple hypothesis test. ...
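Assuming ∆_min denotes the minimum spectral gap along the anneal, as is standard in quantum annealing, the sketch below shows how it can be computed by exact diagonalization for a toy Ising problem. The couplings are placeholders, not the MkCS QUBO from the source:

```python
import numpy as np

# Pauli matrices and identity.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_on(qubit_op, site, n):
    """Embed a single-qubit operator acting on `site` into an n-qubit space."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, qubit_op if i == site else I2)
    return out

# Hypothetical small Ising problem (placeholder fields and couplings).
n = 4
h = {0: 0.5, 1: -0.3, 2: 0.2, 3: -0.1}
J = {(0, 1): 1.0, (1, 2): -0.8, (2, 3): 0.6}

H_driver = -sum(op_on(X, i, n) for i in range(n))
H_problem = sum(h[i] * op_on(Z, i, n) for i in range(n)) + \
            sum(Jij * op_on(Z, i, n) @ op_on(Z, j, n) for (i, j), Jij in J.items())

# Minimum gap along the linear annealing path H(s) = (1-s)*H_driver + s*H_problem.
gaps = []
for s in np.linspace(0.0, 1.0, 101):
    eigvals = np.linalg.eigvalsh((1.0 - s) * H_driver + s * H_problem)
    gaps.append(eigvals[1] - eigvals[0])

print("Delta_min ~", min(gaps))
```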
Context 3
... δ ∈ [0, 100%]. That is, in (28) we are statistically comparing the left-most bars of the N-QUBO subplot and the L-QUBO subplot of Figures 1 and 2 (for brevity, the results for the case p = 0.50 have not been plotted), under the null hypothesis that the L-QUBO provides a higher mean ∆_min. Then, for any p ∈ {0.25, 0.50, 0.75} one gets that the null hypothesis H_0 in (28) can be rejected with 95% confidence for values of δ up to 2%. ...
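The snippet below illustrates the kind of one-sided test described here; it is a minimal sketch, not the exact procedure in (28). It assumes two hypothetical samples of ∆_min values (one per QUBO formulation), a margin δ, and a Welch t-test with the alternative that the N-QUBO mean exceeds the L-QUBO mean by more than δ:

```python
import numpy as np
from scipy import stats

# Hypothetical samples of minimum gaps Delta_min (placeholder data).
delta_min_nqubo = np.array([0.42, 0.47, 0.39, 0.45, 0.50, 0.44])
delta_min_lqubo = np.array([0.31, 0.35, 0.30, 0.33, 0.36, 0.32])

delta = 0.02  # relative margin, e.g. 2%

# H0: (1 + delta) * mean(Delta_min, L-QUBO) >= mean(Delta_min, N-QUBO)
# H1: the N-QUBO mean exceeds the (1 + delta)-scaled L-QUBO mean.
# Shifting one sample lets a standard one-sided Welch t-test check this.
t_stat, p_value = stats.ttest_ind(
    delta_min_nqubo,
    delta_min_lqubo * (1.0 + delta),
    equal_var=False,          # Welch's t-test (unequal variances)
    alternative="greater",    # one-sided alternative
)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0 at the 95% confidence level.")
```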
Context 4
... following results show how the increased number of binary variables required by the L-QUBO affects the difference between the qubits required to embed both QUBO formulations. In Figures 5-12, the average number of qubits required by the L-QUBO and the N-QUBO formulations is plotted for values of k ∈ {1, 2, 5}, graphs G(n, p) with n ∈ [5, 50] and p ∈ {0.25, 0.50, 0.75}, and D-Wave's 2000Q™ and Advantage 1.1™ processors. The average is computed over five (5) random graphs G(n, p) generated for each combination of n, p values, as well as ten (10) runs of D-Wave's embedding algorithm. ...
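As an illustration of how such an average could be computed, the sketch below counts the physical qubits used by a minor embedding of a QUBO's interaction graph. It assumes the minorminer and dwave-networkx packages and a generic interaction graph; it is not the authors' exact experimental script:

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

# Hypothetical QUBO interaction graph (one node per binary variable,
# one edge per quadratic term); here a small random graph stands in.
source = nx.gnp_random_graph(10, 0.5, seed=1)

# Target hardware graph: Chimera for a 2000Q-like topology;
# dnx.pegasus_graph(16) would give an Advantage-like topology.
target = dnx.chimera_graph(16)

qubit_counts = []
for run in range(10):  # several runs of the heuristic embedder
    embedding = minorminer.find_embedding(source.edges, target, random_seed=run)
    if embedding:  # an empty dict means no embedding was found
        qubit_counts.append(sum(len(chain) for chain in embedding.values()))

print("average physical qubits:", sum(qubit_counts) / len(qubit_counts))
```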
Context 5
... bars plotted with each point in the graph represent the values within one standard deviation of the average value. From Figures 5-12, it is clear that in terms of embedding requirements, the N-QUBO formulation is substantially better than the L-QUBO formulation. This is true not only in terms of the average number of qubits required to embed each QUBO, but also in terms of the variability of that number. ...
Context 6
... Figures 9 and 11 it is clear that as k and p increase, this trend of being able to embed larger problems (in terms of the number of nodes n) becomes even more pronounced. Even using the more powerful Advantage 1.1™ processor, Figure 12 shows that for sparse graphs (i.e., p = 0.25) and k = 5, the L-QUBO can only be embedded for graphs with up to n = 40 nodes, while the N-QUBO appears to be embeddable for graphs with up to n = 80 nodes (i.e., twice as many nodes). ...
Context 7
... our tests, the per-run success probability p is estimated by running the quantum annealer 1000 times. From Figure 17 it is clear that when considering non-sparse graphs (i.e., p = 0.75) on the 2000Q™ processor, the advantages of the N-QUBO over the L-QUBO in terms of TTS only increase. In particular, while a run time t_run of 20 µs is enough to find the optimal solution with some small probability for instances of the MkCS problem with underlying graphs of up to n = 50 nodes ...
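The time-to-solution (TTS) metric referenced here is commonly computed from the estimated per-run success probability. A minimal sketch under that common convention follows, with a 99% target confidence and hypothetical run time and success counts not taken from the source:

```python
import math

def time_to_solution(t_run_us, p_success, target=0.99):
    """Standard TTS estimate: runtime needed to observe the optimum at
    least once with probability `target`, given the per-run success
    probability `p_success` estimated from repeated anneals."""
    if p_success <= 0.0:
        return math.inf
    if p_success >= target:
        return t_run_us
    repeats = math.log(1.0 - target) / math.log(1.0 - p_success)
    return t_run_us * repeats

# Hypothetical values: 20 us anneals, optimum seen in 37 of 1000 reads.
print(time_to_solution(t_run_us=20.0, p_success=37 / 1000))
```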
Context 8
... that from Table 1 it was expected that for instances of the MkCS problem with k = 2 and underlying graphs G(n, 0.25), increasing the penalty constants from c_1 = c_2 = 1 to c_1 = c_2 = 5 would result in faster convergence. However, by comparing Figures 13 and 14, as well as Figures 15 and 16, it follows that increasing the penalty constants in this way is actually counterproductive for both quantum annealing processors in terms of TTS (i.e., in Figures 14 and 16, the "slope" at which the TTS increases with the number of nodes is higher). Also, from Table 1 it was expected that for instances of the MkCS problem with k = 2 and underlying graphs G(n, 0.75), increasing the penalty constants from c_1 = c_2 = 1 to c_1 = c_2 = 5 would yield an even slightly higher benefit in terms of speed of convergence (compared with G(n, 0.25) graphs). ...
Context 9
... from Table 1 it was expected that for instances of the MkCS problem with k = 2 and underlying graphs G(n, 0.75), increasing the penalty constants from c_1 = c_2 = 1 to c_1 = c_2 = 5 would yield an even slightly higher benefit in terms of speed of convergence (compared with G(n, 0.25) graphs). However, by comparing Figures 17 and 18, as well as Figures 19 and 20, it follows that increasing the penalty constants in this way does not produce discernible improvements for the quantum annealing processors in terms of TTS. Most likely, this means that any theoretical advantage in terms of convergence obtained by increasing the value of the penalty constants is offset by the precision problems that larger penalty parameters bring for the quantum annealing processors in practice. ...
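The precision issue mentioned here can be illustrated with a toy calculation. The sketch below uses hypothetical, generic QUBO coefficients and a generic coupler range (not the MkCS formulation from the source) to show how larger penalty constants shrink the relative resolution left for the objective coefficients once the problem is rescaled to the hardware's analog coefficient range:

```python
import numpy as np

# Hypothetical QUBO = objective + c * penalty (generic placeholder coefficients).
objective_coeffs = np.array([1.0, -2.0, 0.5, 1.5])
penalty_coeffs = np.array([4.0, 4.0, -8.0, 4.0])

coupler_range = 1.0  # analog coefficients are rescaled to [-1, 1]

for c in (1.0, 5.0):
    combined = objective_coeffs + c * penalty_coeffs
    scale = coupler_range / np.max(np.abs(combined))
    # Smallest objective coefficient after rescaling: a proxy for how
    # much analog resolution the objective retains on the hardware.
    min_obj = np.min(np.abs(objective_coeffs)) * scale
    print(f"c = {c}: smallest rescaled objective coefficient = {min_obj:.4f}")
```

With the larger penalty constant the objective's smallest rescaled coefficient shrinks toward the device's noise and quantization floor, which is consistent with the precision explanation given above.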
