Solving Nurikabe with Ant Colony Optimization
(Extended version)
Martyn Amos†1, Matthew Crossley‡2, and Huw Lloyd§2
1Department of Computer and Information Sciences, Northumbria University, UK.
2School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University,
UK.
Abstract
We present the rst nature-inspired algorithm for the NP-
complete Nurikabe pencil puzzle. Our method, based on
Ant Colony Optimization (ACO), oers competitive per-
formance with a direct logic-based solver, with improved
run-time performance on smaller instances, but poorer
performance on large instances. Importantly, our algo-
rithm is “problem agnostic”, and requires no heuristic
information. This suggests the possibility of a generic
ACO-based framework for the ecient solution of a wide
range of similar logic puzzles and games. We further sug-
gest that Nurikabe may provide a challenging benchmark
for nature-inspired optimization.
1 Introduction
Nurikabe is a Japanese pencil puzzle [2], the wider set
of which includes well-known problems such as Sudoku
[5] and Hashiwokakero [1] (for a comprehensive review of
similar puzzle games, see [18]). The puzzle is attributed
to the designer “Lenin”, and first appeared in 1991, pub-
lished by the Nikoli puzzle company. Its name is taken
from that of a spirit in Japanese folklore, which mani-
fests itself as an invisible barrier that impedes travellers.
This motivates the basic aim of the puzzle, which is to
construct a “wall” separating regions of the board.
The puzzle is played on a rectangular grid of white
cells, some of which initially contain numbers. A suc-
cessful solution to the puzzle requires the player to shade
in (colour black) non-numbered cells according to the fol-
lowing rules:
1. Black cells must form a single continuous region (the
“wall”).
2. Every numbered cell must occupy its own disjoint
white region (an “island”) whose size, in terms of the
number of cells it occupies, is the same as the number
label of that cell. The natural corollary of this rule is
that islands may not touch, horizontally or vertically
(immediate diagonal adjacency is allowed), as they
would therefore not be disjoint.
3. There must not exist any 2×2 black regions.

∗Extended version of a short paper presented at the Genetic and Evolutionary Computation Conference (GECCO), July 13-17 2019, Prague.
†martyn.amos@northumbria.ac.uk
‡m.crossley@mmu.ac.uk
§huw.lloyd@mmu.ac.uk
In Figure 1, we show an example Nurikabe puzzle and
a correct solution. Note that, in the solution, each island
contains a number of white squares that is equal to its
labelled value, the black wall occupies a single continuous
region (with no 2×2 regions), and no islands are touch-
ing. We also show, in Figure 2, an invalid attempt at a solution, with the following problems highlighted: (A) numerous 2×2 blocks of black squares; (B) an island containing more than one numbered cell (which might be interpreted as touching “4” and “3” islands); (C) an island containing the wrong number of white squares; and (D) a discontinuous wall.
Figure 1: The structure of a Nurikabe puzzle instance
(left), and its solution (right).
Figure 2: Incorrect solution, with various issues high-
lighted.
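To make the wall constraints concrete, the short Java sketch below shows one way of checking rule 1 (a single connected wall) and rule 3 (no 2×2 black block) on a boolean grid; checking rule 2 additionally requires identifying each white region and comparing its size with its numbered cell. This is an illustrative sketch only, not code from the paper's solver, and the class and method names are our own.

// Sketch: checks for the "wall" rules (1 and 3) of Nurikabe on a boolean grid,
// where black[r][c] == true means cell (r, c) is shaded.
import java.util.ArrayDeque;

public class WallRules {

    // Rule 3: no 2x2 block may be entirely black.
    static boolean noTwoByTwoBlock(boolean[][] black) {
        for (int r = 0; r + 1 < black.length; r++)
            for (int c = 0; c + 1 < black[r].length; c++)
                if (black[r][c] && black[r + 1][c] && black[r][c + 1] && black[r + 1][c + 1])
                    return false;
        return true;
    }

    // Rule 1: all black cells form a single orthogonally connected region.
    static boolean wallIsConnected(boolean[][] black) {
        int rows = black.length, cols = black[0].length, total = 0, seen = 0;
        int startR = -1, startC = -1;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (black[r][c]) { total++; startR = r; startC = c; }
        if (total == 0) return true;
        boolean[][] visited = new boolean[rows][cols];
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startR, startC});
        visited[startR][startC] = true;
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!stack.isEmpty()) {
            int[] cell = stack.pop();
            seen++;
            for (int[] m : moves) {
                int nr = cell[0] + m[0], nc = cell[1] + m[1];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && black[nr][nc] && !visited[nr][nc]) {
                    visited[nr][nc] = true;
                    stack.push(new int[]{nr, nc});
                }
            }
        }
        return seen == total; // connected iff the flood fill reached every black cell
    }
}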
The problem of solving Nurikabe is known to be NP-
complete [13, 23], even under the restriction that islands
may occupy no more than two cells. As such, it presents
a useful challenge for new algorithms. In this paper, we
present a novel method based on the well-established Ant
Colony Optimization (ACO) algorithm, and compare its
performance with an existing algorithm.
The rest of the paper is organized as follows: in Section
2 we briey review existing work on the problem, before
describing our own algorithm in Section 3. We present
our experimental results in Section 4, before concluding
in Section 5 with a consideration of their implications,
and a discussion of possible future work.
2 Related work
Surprisingly, Nurikabe has received relatively little atten-
tion in the literature, despite being a natural candidate
for automated solution. Existing theoretical results on
the game concern proofs of its NP-completeness, via re-
duction from planar 3SAT [13, 23], and a proof that the
problem remains NP-complete even if islands are num-
bered only 1 or 2 [14].
However, there does exist previous experimental work
on solving Nurikabe; in two papers, the same group
demonstrated how to solve a variety of pencil puzzles (in-
cluding Nurikabe) using both Answer Set Programming
and Constraint Programming [3, 4]. A subsequent solu-
tion method was also based on Constraint Programming
[32], and we use this code as the basis for our experi-
mental comparisons. To the best of our knowledge, the
work in this paper represents the rst attempt to solve
Nurikabe using a stochastic optimization algorithm.
There is a considerably more extensive literature on
the solution of the related Sudoku puzzle using stochas-
tic methods, including Evolutionary Algorithms [6, 22,
30, 33, 15, 27], Articial Bee Colony [26], Particle Swarm
Optimization [12, 24], Simulated Annealing [19, 17], It-
erated Local Search [25], Tabu Search [31] and Entropy
Minimization [11]. Ant Colony Optimization has been
applied to Sudoku [29, 28], and, most recently, by [20]
for problem instances up to 25×25. This latter work
shows that ACO can compete with the best stochastic
methods for the Sudoku problem, and it informs our ap-
proach to solving Nurikabe in the current paper.
3 Our ACO algorithm
The ACO metaheuristic [8, 10] is a well-established
nature-inspired algorithm that has been successfully ap-
plied to a wide range of combinatorial optimization prob-
lems [7, 21]. A common feature of ACO algorithms is
the use of a pheromone memory, often in addition to
local heuristic information, to guide the stochastic con-
struction of solutions on a graph by a population of
agents. Pheromone is reinforced on graph edges which
form part of a good solution, and gradually evaporates on
unfavourable edges. Our approach to solving Nurikabe is
inspired by recent work on solving Sudoku using ACO
[20], which uses a combination of constraint propagation
and ACO-based search. Here, our constraints are static
rather than dynamically propagated, but the basic two-
component approach still applies. Our ACO algorithm
closely follows the Ant Colony System (ACS) algorithm
[9], which was rst applied to the Travelling Salesman
Problem and Quadratic Assignment Problem. For both
these problems, ACS uses heuristic information to guide
the solution in combination with pheromone trails; for
our Nurikabe solution no heuristic information is avail-
able, and the solution relies entirely on pheromone. We
now describe the algorithm in detail.
3.1 Denitions and constraints
A Nurikabe puzzle is an n×mgrid of cells, each of which
is initially coloured white. A cell, c, might have a value,
vc, in which case it remains coloured white throughout.
We denote by Ithe set of all numbered cells, since they
represent the “seeds” of islands. Our algorithm proceeds
by gradually “growing” islands, repeatedly colouring cells
white, according to some constraints, until the size of the
island matches that island’s specied value, or no further
growth is possible under the constraints.
3.2 Algorithm overview
We present the pseudo-code of our method in Algorithm
1. Rather than “constructing” the wall around the is-
lands, we instead colour all cells black at the outset (apart
from numbered cells), and then individually grow the is-
lands, by repeatedly colouring selected cells white.
The basic ACS algorithm [9] models the foraging be-
haviour of ants, which lay pheromone to guide other ants
that follow them. The underlying principle is one of
positive feedback, in that “successful” ants have the op-
portunity to lay more pheromone, which biases future
search towards the solution they have found. The algo-
rithm works over a number of “generations” in which ants
work independently, but their behaviour is informed by
a global pheromone matrix. This is a data structure that
overlays the structure to be searched (e.g., graph, game
board), and which is used by each ant to make decisions
on movement. If we consider, as a concrete example, the
well-known Travelling Salesman Problem, then the un-
derlying network remains static, but the ants gradually
build up a dynamic overlying pheromone network. At
each generation, each ant moves across the network, grad-
ually building a tour; the choice of the next edge (and,
thus, the next node to visit) is informed by (a) the current
state of the global pheromone matrix, where edges with
stronger pheromone values are more likely to be selected,
and (b) the list of nodes that the ant has already vis-
ited (that is, it may not revisit a node). Ants may also
use a local pheromone update operator, which reduces
the pheromone value on edges as they are traversed, and
which essentially serves as a “repellent” to deter other
ants from producing the same solution, and hence en-
couraging diversity. At the end of each generation, the
ant with the shortest tour is selected, and the edges in
its tour are given additional pheromone (pheromone val-
ues are also subject to regular “evaporation” in order to
prevent premature convergence to suboptimal solutions).
In this way, over a number of generations the population
gradually converges on a good solution.
We now describe how the ACS algorithm is applied
to Nurikabe. At each iteration, a number of “ants” are
given their own local copy of the game board, and each
ant is placed on a randomly-selected numbered cell (that
is, the “seed” of an island). Each ant then moves around
the board, gradually “growing” the island by colouring
cells white, until either the island reaches the desired size,
or no more moves are possible. Movement is informed by
the global pheromone trail. The ant then moves to the
next numbered cell, and the process continues. At the
end of each generation, we therefore have a number of
possible solutions to the problem (one per ant); we then
select the best solution (the specic scoring function is
described below) and “reward” its white island cells with
additional pheromone. In this way, future generations of
ants are biased towards those cells.
Specically, at each iteration within a generation, each
ant constructs a set of possible candidate cells to which it
might move, based on both the game rule constraints and
the current state of the grid. The initial candidate set is
constructed by taking all cells that border the current
island, and is then pruned according to the following two
rules:
1. Remove any cell that is a cut cell for the black “wall”
region.
2. Remove any cell that is adjacent (horizontally or ver-
tically) to an existing island.
Remember that we are growing islands “into” a sea of
black “wall” blocks, so we need to ensure that (a) we do
not “fragment” the wall, and (b) that we do not grow an
island in such a way that it touches another island. The
first rule ensures that the black “wall” remains contin-
uous (i.e., as a single component, with no disconnected
regions); we identify articulation points (or “cut cells”,
i.e., those cells that would fragment the wall into more
than one component if they were coloured white) using a
standard Depth-First Search.
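As a hedged illustration of the first pruning rule, the sketch below tests whether a single black cell is a cut cell by tentatively treating it as white and flood-filling the remaining black cells. The paper's approach (identifying all articulation points with one Depth-First Search) is more efficient, since it finds every cut cell in a single pass; the per-cell check shown here is equivalent for one cell, and the names are illustrative rather than taken from the authors' code.

// Sketch: would colouring the black cell (r, c) white split the wall?
public class CutCellCheck {

    static boolean isCutCell(boolean[][] black, int r, int c) {
        int rows = black.length, cols = black[0].length;
        int others = 0, startR = -1, startC = -1;      // black cells other than (r, c)
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                if (black[i][j] && !(i == r && j == c)) { others++; startR = i; startC = j; }
        if (others <= 1) return false;                 // 0 or 1 remaining cells cannot be disconnected
        boolean[][] visited = new boolean[rows][cols];
        visited[r][c] = true;                          // treat (r, c) as if it were already white
        java.util.ArrayDeque<int[]> stack = new java.util.ArrayDeque<>();
        stack.push(new int[]{startR, startC});
        visited[startR][startC] = true;
        int seen = 0;
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!stack.isEmpty()) {
            int[] cell = stack.pop();
            seen++;
            for (int[] m : moves) {
                int nr = cell[0] + m[0], nc = cell[1] + m[1];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && black[nr][nc] && !visited[nr][nc]) {
                    visited[nr][nc] = true;
                    stack.push(new int[]{nr, nc});
                }
            }
        }
        return seen < others;                          // some other black cell became unreachable
    }
}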
The second rule ensures that every island remains dis-
joint, by removing from the candidate set any cell that
immediately borders an existing island region. Two of
the constraints on a valid Nurikabe solution are therefore satisfied by construction; a solution can then only be invalid if it contains one or more 2×2 blocks of black cells, or islands with fewer than v_c cells.
Once the candidate set has been pruned, the next cell
is selected according to ACS principles (see below), and
is added to the current island (this process includes a
local pheromone update). Once this island is completely
filled, the ant moves to the next island, and the process
repeats until all ants have completed their moves. The
best-performing ant is then selected (according to a cost
function which counts how many constraints are broken),
the global pheromone matrix is updated, and the “best
value evaporation” [20] operator is applied.
3.3 Algorithm specifics
We now specify, in more detail, various components of
our algorithm. The central data structure is the global
pheromone matrix, which stores a single pheromone value
for each cell in the grid. Cells in the candidate set, N,
are probabilistically selected based on relative pheromone
values. Depending on the “greediness” of the selection,
either the cell with the highest pheromone value is chosen,
or a weighted (roulette) selection is made.
Once a cell is selected, the standard ACS local
pheromone operator is applied to that cell’s pheromone
value, which reduces the probability of that cell being selected by a subsequent ant (thus preventing early convergence).

 1   read in puzzle grid
 2   initialize global pheromone matrix
 3   while puzzle is not solved do
 4       for each ant do
 5           give ant local copy of grid
 6           assign ant to random starting island
 7           colour black all non-numbered cells in local grid
 8       end
 9       while all ants not finished do
10           for each ant do
11               if all islands not filled to target size then
12                   create list, N, of all cells surrounding current island
13                   remove from N any cut cells
14                   remove from N any cells adjacent to another island
15                   if possible, select a cell from N, and add to current island (and update local pheromone)
16                   if current island is target size or N is empty then
17                       move to next island
18                   end
19               end
20               else
21                   mark ant as finished
22               end
23           end
24       end
25       find best ant
26       do global pheromone update
27       do best value evaporation
28   end

Algorithm 1: Our ACS algorithm for Nurikabe
Once all ants have completed the process of filling is-
lands, we then perform the global pheromone update,
which rewards only the best solution found (in line with
ACS principles). We characterise the “best” solution ac-
cording to a cost function which calculates the difference
between the sum of the numbered squares (i.e., the total
target area of all islands on the grid) and the number of
white squares in the solution (i.e., the total island area
achieved by this solution), and adds a penalty for each
2×2 block of black squares (thus, a perfect solution will
have a cost value of zero).
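A minimal Java sketch of this cost function is given below, assuming a boolean grid representation (black[r][c] is true for wall cells) and a penalty of one per 2×2 black block; the paper does not state the exact penalty weight, so that constant is an assumption, as are the names used here.

// Sketch of the cost function: uncovered island area plus a penalty per 2x2 black block.
public class NurikabeCost {
    static final int PENALTY = 1;   // assumed weight per 2x2 black block

    // targetArea = sum of all island numbers; black[r][c] == true means shaded.
    static int cost(boolean[][] black, int targetArea) {
        int rows = black.length, cols = black[0].length;
        int white = 0, blocks = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (!black[r][c]) white++;
        for (int r = 0; r + 1 < rows; r++)
            for (int c = 0; c + 1 < cols; c++)
                if (black[r][c] && black[r + 1][c] && black[r][c + 1] && black[r + 1][c + 1])
                    blocks++;
        // Islands never exceed their target size, so (targetArea - white) >= 0;
        // a perfect solution therefore has cost zero.
        return (targetArea - white) + PENALTY * blocks;
    }
}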
At this point, we introduce a variation to the standard
ACS algorithm, called best value evaporation (BVE) [20].
In this modification to the standard ACS algorithm, an
ant’s solution is taken as the new best if its pheromone
contribution (τ = 1/C) is greater than the current best
pheromone value (τ_best). The best pheromone value,
τ_best, is subject to evaporation. In this way, the system
can make uphill moves by accepting as the current best a
solution with a higher cost than the previous best. The
aim of BVE is to prevent stagnation and early conver-
gence to a local minimum.
We now specify components of the algorithm in more
detail:
Line 2: For a Nurikabe puzzle of size n × m we define a global pheromone matrix, τ, in which each element is denoted by τ_i, where i is the cell index (1 ≤ i ≤ n × m). τ_i represents the pheromone level associated with cell i. Each element of the matrix is initialised to some fixed value, τ_0 (we use a value of 1/c, where c = n × m is the total number of cells on the board).
Line 15: We define the candidate set, v_j, as the set of all available cells for ant j, from which we have to choose one. We have a choice of two methods to use when making a selection; we might make a greedy selection, in which case the member of v_j with the highest pheromone concentration is selected:

    g(v_j) = argmax_{n ∈ v_j} { τ_n }    (1)

or we might make a weighted (i.e., “roulette wheel”) selection, in which case the “weighted probability”, wp, of cell i being selected is

    wp(i) = τ_i / Σ_{n ∈ v_j} τ_n    (2)

The relative probabilities of each type of selection are determined by the greediness parameter, q_0 (0 ≤ q_0 ≤ 1), where 0 ≤ q ≤ 1 is a uniform random number. A cell selection, s, is therefore made as follows:

    s = { g            if q < q_0
        { Equation 2   otherwise    (3)

The local pheromone update is handled as follows: every time an ant selects a cell, i, its pheromone value in the matrix is updated according to

    τ_i ← (1 − ξ) τ_i + ξ τ_0    (4)

with ξ = 0.1 (the standard setting for ACS).
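The following Java sketch illustrates how Equations 1–4 (together with the initialisation of Line 2) might fit together; the pseudo-random-proportional choice between greedy and roulette selection follows standard ACS. Field and method names are ours, and this is a sketch under the stated assumptions rather than the authors' implementation.

// Sketch: pheromone initialisation (Line 2), cell selection (Eqs. 1-3) and local update (Eq. 4).
import java.util.List;
import java.util.Random;

public class PheromoneSelection {
    final double[] tau;          // one pheromone value per cell
    final double tau0;           // initial level, 1 / (n * m)
    final double q0 = 0.9;       // greediness parameter
    final double xi = 0.1;       // local evaporation rate
    final Random rng = new Random();

    PheromoneSelection(int n, int m) {
        tau0 = 1.0 / (n * m);
        tau = new double[n * m];
        java.util.Arrays.fill(tau, tau0);
    }

    // candidates = indices of the pruned candidate set N (assumed non-empty here).
    int selectCell(List<Integer> candidates) {
        int chosen;
        if (rng.nextDouble() < q0) {
            // Greedy choice: cell with the highest pheromone (Eq. 1).
            chosen = candidates.get(0);
            for (int cell : candidates)
                if (tau[cell] > tau[chosen]) chosen = cell;
        } else {
            // Roulette-wheel choice, weighted by pheromone (Eq. 2).
            double total = 0.0;
            for (int cell : candidates) total += tau[cell];
            double pick = rng.nextDouble() * total;
            chosen = candidates.get(candidates.size() - 1);
            for (int cell : candidates) {
                pick -= tau[cell];
                if (pick <= 0.0) { chosen = cell; break; }
            }
        }
        // Local pheromone update (Eq. 4): discourage other ants from repeating the move.
        tau[chosen] = (1.0 - xi) * tau[chosen] + xi * tau0;
        return chosen;
    }
}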
Line 17: It occurred to us that the order in which islands are selected might bias the search, so we implemented six different schemes for selecting the next island, all of which are evaluated in the experiments described in Section 4; an illustrative code sketch of these orderings follows the list. For the first four of these schemes, we initially sort the islands into some order and then traverse this list starting at the randomly chosen start cell, wrapping around at the end of the list. These are:

Scheme 0: Islands are sorted in raster-scan order on the puzzle grid,
Scheme 1: Islands are randomly shuffled,
Scheme 2: Islands are sorted by v_c (largest first), and
Scheme 3: Islands are sorted by v_c (smallest first).

The final two schemes select the next island based on the “Manhattan” distance of islands from the numbered cell of the current island:

Scheme 4: Next island is the furthest remaining island, and
Scheme 5: Next island is the nearest remaining island.
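The sketch below shows one way the six orderings could be expressed; the Island record and method names are illustrative assumptions, and schemes 4 and 5 are shown as a per-step greedy choice by Manhattan distance, as described above.

// Sketch (requires Java 16+): the six island-ordering schemes.
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class IslandOrdering {
    record Island(int row, int col, int value) {}   // seed cell position and its number

    // Schemes 0-3 fix a global order that is then traversed from a random start, wrapping around.
    static void applyScheme(List<Island> islands, int scheme, Random rng) {
        switch (scheme) {
            case 0 -> islands.sort(Comparator.comparingInt((Island i) -> i.row())
                                             .thenComparingInt(Island::col));          // raster-scan order
            case 1 -> Collections.shuffle(islands, rng);                                // random order
            case 2 -> islands.sort(Comparator.comparingInt(Island::value).reversed());  // largest first
            case 3 -> islands.sort(Comparator.comparingInt(Island::value));             // smallest first
            default -> { /* schemes 4 and 5 are chosen per step: see nextByDistance */ }
        }
    }

    // Schemes 4 (furthest = true) and 5 (furthest = false): Manhattan distance from the current seed.
    static Island nextByDistance(Island current, List<Island> remaining, boolean furthest) {
        Comparator<Island> byDistance = Comparator.comparingInt(
                i -> Math.abs(i.row() - current.row()) + Math.abs(i.col() - current.col()));
        return furthest ? Collections.max(remaining, byDistance)
                        : Collections.min(remaining, byDistance);
    }
}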
Line 26: In order to perform the global pheromone update, we must first find the best-performing ant. At each iteration, each of the a ants is given a cost value, f (as described above). We then calculate the amount of pheromone to add, τ, as follows:

    τ = 1 / f    (5)

If the value of τ exceeds the current “best pheromone to add” value, τ_best, then we set τ_best ← τ. We then update all pheromone values corresponding to cells in the current best solution, where ρ is the standard evaporation parameter (0 ≤ ρ ≤ 1):

    τ_i ← (1 − ρ) τ_i + ρ τ_best    (6)

Note that in ACS there is no global evaporation of pheromone; the global pheromone update (Equation 6) is applied only to pheromone values corresponding to cells in the best solution.

Line 27: In order to prevent “lock-in”, we then evaporate the current best pheromone value, where 0 ≤ f_BVE ≤ 1:

    τ_best ← τ_best × (1 − f_BVE)    (7)
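A compact sketch of Equations 5–7, combining the global update with best value evaporation, is given below; the field names and calling convention (one call per generation, passing the iteration-best ant's cost and white cells) are illustrative assumptions rather than the authors' code.

// Sketch: global pheromone update (Eqs. 5-6) with best value evaporation (Eq. 7).
import java.util.List;

public class GlobalUpdate {
    final double[] tau;            // global pheromone matrix, one value per cell
    final double rho = 0.2;        // global evaporation / learning rate
    final double fBVE = 0.001;     // best value evaporation rate
    double tauBest = 0.0;          // current "best pheromone to add"
    List<Integer> bestSolutionCells;

    GlobalUpdate(double[] tau) { this.tau = tau; }

    // Called once per generation with the iteration-best ant's cost and its white (island) cells.
    void update(int cost, List<Integer> whiteCells) {
        double deposit = 1.0 / cost;                 // Eq. 5 (cost > 0 until the puzzle is solved)
        if (deposit > tauBest) {                     // accept as the new best-so-far solution
            tauBest = deposit;
            bestSolutionCells = whiteCells;
        }
        for (int cell : bestSolutionCells)           // Eq. 6: reinforce the best solution only
            tau[cell] = (1.0 - rho) * tau[cell] + rho * tauBest;
        tauBest *= (1.0 - fBVE);                     // Eq. 7: best value evaporation
    }
}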
4 Experimental results
In this Section we present the results of experimental runs
of a Java implementation of our algorithm and, for com-
parison, the Copris Constraint Programming code [32].
All runs were carried out using a single core of a Xeon
E5-2640 v4 2.40 GHz processor, on a machine running
Ubuntu Linux. Copris is implemented in the Scala lan-
guage which, like our ACS code, runs on the Java Virtual
Machine, but uses a SAT (Boolean Satisfiability) solver
compiled from C code for the constraint satisfaction.
4.1 Problem Instances
We use the collection of 911 Nurikabe instances available
from [16]. We ignore three instances in which some is-
lands have an unspecified number, leaving a total of 908
instances. The instances range in size, defined as the to-
tal number of cells m×n, from 9 to 2500 cells, and all
have a unique solution.
4.2 Comparison of Selection Schemes
We ran the ACS code 100 times per instance for each of
the six selection schemes. In all runs we set a timeout
of one minute of wall-clock time. For all ACS runs, we
used the following parameters: m = 10, ρ = 0.2, ξ = 0.1,
q_0 = 0.9, and f_BVE = 0.001. Table 1 gives the results of
the runs, with the instances grouped by the number of
cells.
We see that selection scheme 1 (random selection) per-
forms the best, with the highest success rates in all cate-
gories. This is perhaps not surprising, as, intuitively, the
order in which ants build islands is likely to be important
and the random selection scheme improves the chances of
an ant visiting islands in a favourable order. With selec-
tion scheme 1, the system achieves an overall success rate
(over all 90,800 trials) of 69.9%, with 88.2% of instances
solved at least once. The success rate declines rapidly for
instance sizes >200: 97.7% of instances with fewer than
200 cells are solved, compared to only 42.6% of the larger
instances. None of the 21 instances with more than 400
cells were solved by the ACS algorithm. In Figure 3, we show the largest instance solved by our ACS solver.

Figure 3: The largest instance solved by ACS (336 cells).
4.3 Comparison with Constraint Programming

We ran the Copris solver [32] on each of the instances,
using the same machine (and Java Virtual Machine) as
for the ACS solver. Since the SAT solver is deterministic,
we performed only one run per instance, again with a one
minute wall-clock time limit. Table 2 shows a comparison
of results obtained from Copris with the results from
our best-performing solver (scheme 1, random island se-
lection).
Figure 4 (left) shows a scatter plot of average solu-
tion times for all instances that were solved by both al-
gorithms; for ACS this measure includes the timeouts,
and is the total run time divided by the number of suc-
cessful runs. The upper half of the plot shows instances
where our ACS method performs worse, and the lower
half shows instances where our ACS method performs
better than the Copris solver. Figure 4 (right) shows the
success rates as a function of instance size. For ACS, this
shows the fraction of instances for which any run found
a solution.
Figure 4: Scatter plot of solution times for all instances
solved by both solvers (left), and solution rates as a func-
tion of instance size (right).
Our results (see Table 2 for a summary) show that the
ACS-based algorithm out-performs the Copris solver in
terms of runtime on the smallest instances, but the SAT
solver starts to win on larger (greater than 200 cells)
instances. On the smallest instances (0-99 cells), ACS
is generally quicker to achieve a solution, with 100 in-
stances solved in a shorter time by ACS, compared to 35
solved in shorter time by Copris. For the medium-sized
instances (100-199 cells) and larger, the Copris solver per-
forms better than ACS both in runtime and success rate.
It is worth noting that the runtime for the SAT solver is
always of the order of a second or longer, whereas ACS runs on
the small instances often complete in milliseconds. This
is potentially an important distinction, since real-time so-
lution of small puzzle instances may be important in the
context of an interactive game, in which case ACS would
possess a significant advantage. These results contrast
with the results on Sudoku [20], in which ACS signif-
icantly outperformed the best direct solvers on harder
instances. This may suggest that Nurikabe offers a far
more challenging benchmark for ACO algorithms than
Sudoku.
Instance Size  N_inst |  Scheme 0       |  Scheme 1       |  Scheme 2       |  Scheme 3       |  Scheme 4       |  Scheme 5
                      | N_any f_success | N_any f_success | N_any f_success | N_any f_success | N_any f_success | N_any f_success
0–99              136 |  135    0.857   |  135    0.942   |  134    0.843   |  135    0.856   |  134    0.900   |  135    0.842
100–199           615 |  558    0.575   |  599    0.787   |  562    0.588   |  553    0.576   |  582    0.676   |  549    0.527
200–299            64 |   25    0.095   |   39    0.267   |   28    0.108   |   26    0.094   |   30    0.158   |   17    0.062
300–399            72 |   11    0.023   |   28    0.081   |   11    0.033   |   11    0.018   |   12    0.023   |   10    0.025
>400               21 |    0    0.000   |    0    0.000   |    0    0.000   |    0    0.000   |    0    0.000   |    0    0.000
Total             908 |  729    0.526   |  801    0.700   |  735    0.534   |  725    0.526   |  758    0.606   |  711    0.489

Table 1: Success rates for the six island selection schemes. Within each instance size category, N_inst is the number of instances, and for each scheme N_any is the number of instances for which any of the 100 runs on the instance produced a solution. f_success is the fraction of the total number of runs which produced a solution.

Instance Size  N_inst  N_any (SAT)  N_any (ACS)  N_both  N_SAT  N_ACS  t_SAT < t_ACS  t_ACS < t_SAT
0–99              136          136          135     135      1      0             35            100
100–199           615          608          599     594     14      5            335            259
200–299            64           59           39      38     21      1             35              3
300–399            72           58           28      27     31      1             26              1
>400               21            2            0       0      2      0              -              -
Total             908          863          801     794     69      7            431            363

Table 2: Comparison of the Copris SAT solver with ACS (Scheme 1). N_any (SAT) and N_any (ACS) are the numbers of instances solved by Copris and ACS respectively; N_both is the number of instances solved by both. N_SAT is the number of instances solved by the Copris solver only and N_ACS is the number of instances solved by ACS only. The last two columns give the number of instances for which the solution time with Copris is less than the average solution time with ACS, and vice versa, for instances which were solved by both algorithms.

5 Conclusions

In this paper we have presented the first nature-inspired algorithm for the computationally hard Nurikabe pencil puzzle. We compared the performance of our method against that of an existing logic-based solver, and found that our algorithm was faster on smaller instances. Importantly, our method relies on next to no heuristic information about the puzzle (that is, “tips” for its solution), embedding only the game rules. We argue, therefore, that ACS offers a promising method for the rapid solution of
such puzzles. Future work will focus on the development
of a general-purpose pencil puzzle solver, incorporating
Nurikabe and other games, and investigation of their use
as benchmarks for nature-inspired optimization.
References
[1] Daniel Andersson. Hashiwokakero is NP-complete.
Information Processing Letters, 109(19):1145–1146,
2009.
[2] Alex Bellos. Puzzle Ninja. Guardian Books/Faber
& Faber, 2017.
[3] Merve Caylı, Ayşe Gül Karatop, Ahmet Emrah Kavlak, Hakan Kaynar, Ferhan Türe, and Esra Erdem. Solving challenging grid puzzles with answer set programming, 2007. Available at http://research.sabanciuniv.edu/5086/1/puzzles-final.pdf.
[4] Mehmet Celik, Halit Erdogan, Firat Tahaoglu,
Tansel Uras, and Esra Erdem. Comparing ASP
and CP on four grid puzzles. In Proceedings of the
16th International RCRA workshop (RCRA 2009):
Experimental Evaluation of Algorithms for Solv-
ing Problems with Combinatorial Explosion, Reggio
Emilia, Italy, 2009.
[5] Jean-Paul Delahaye. The science behind Sudoku.
Scientic American, 294(6):80–87, 2006.
[6] Xiu Qin Deng and Yong Da Li. A novel hybrid ge-
netic algorithm for solving Sudoku puzzles. Opti-
mization Letters, 7(2):241–257, 2013.
[7] Marco Dorigo and Mauro Birattari. Ant colony op-
timization. In Encyclopedia of Machine Learning,
pages 36–39. Springer, 2011.
[8] Marco Dorigo and Gianni Di Caro. Ant colony op-
timization: a new meta-heuristic. In Proceedings
of the 1999 Congress on Evolutionary Computation
(CEC), volume 2, pages 1470–1477. IEEE, 1999.
[9] Marco Dorigo and Luca Maria Gambardella. Ant
colony system: a cooperative learning approach to
the Traveling Salesman Problem. IEEE Trans-
actions on Evolutionary Computation, 1(1):53–66,
1997.
[10] Marco Dorigo, Vittorio Maniezzo, and Alberto Col-
orni. Ant system: optimization by a colony of
cooperating agents. IEEE Transactions on Sys-
tems, Man, and Cybernetics, Part B (Cybernetics),
26(1):29–41, 1996.
[11] Jake Gunther and Todd Moon. Entropy minimiza-
tion for solving Sudoku. IEEE Transactions on Sig-
nal Processing, 60(1):508–513, 2012.
[12] James M Hereford and Hunter Gerlach. Integer-
valued particle swarm optimization applied to Su-
doku puzzles. In IEEE Swarm Intelligence Sympo-
sium (SIS), pages 1–7. IEEE, 2008.
[13] Markus Holzer, Andreas Klein, and Martin Kutrib.
On the NP-completeness of the Nurikabe pencil puz-
zle and variants thereof. In Proceedings of the 3rd
International Conference on FUN with Algorithms,
pages 77–89, 2004.
[14] Markus Holzer, Andreas Klein, Martin Kutrib, and
Oliver Ruepp. Computational complexity of Nurik-
abe. Fundamenta Informaticae, 110(1-4):159–174,
2011.
[15] Jerey Horn. Solving a large sudoku by co-evolving
numerals. In Proceedings of the Genetic and
Evolutionary Computation Conference Companion,
GECCO ’17, pages 29–30, New York, NY, USA,
2017. ACM.
[16] Angela Janko and Otto Janko.
Nurikabe instances. Available at
https://www.janko.at/Raetsel/Nurikabe/.
[17] Zahra Karimi-Dehkordi, Kamran Zamanifar, Ah-
mad Baraani-Dastjerdi, and Nasser Ghasem-
Aghaee. Sudoku using parallel simulated annealing.
In International Conference in Swarm Intelligence
(ICSI), pages 461–467. Springer, 2010.
[18] Graham Kendall, Andrew Parkes, and Kristian Spo-
erer. A survey of NP-complete puzzles. ICGA Jour-
nal, 31(1):13–34, 2008.
[19] Rhyd Lewis. Metaheuristics can solve Sudoku puz-
zles. Journal of Heuristics, 13(4):387–401, 2007.
[20] Huw Lloyd and Martyn Amos. Solving sudoku
with ant colony optimisation. arXiv preprint
arXiv:1805.03545, 2018.
[21] Manuel López-Ibáñez, Thomas Stützle, and Marco
Dorigo. Ant colony optimization: A component-wise
overview. Handbook of Heuristics, pages 1–37, 2016.
[22] Timo Mantere. Improved ant colony genetic algo-
rithm hybrid for Sudoku solving. In Third World
Congress on Information and Communication Tech-
nologies (WICT), pages 274–279. IEEE, 2013.
[23] Brandon P McPhail. The complexity of puzzles: NP-
completeness results for Nurikabe and Minesweeper.
Senior Thesis, Reed College, 2003.
[24] Alberto Moraglio and Julian Togelius. Geometric
particle swarm optimization for the Sudoku puz-
zle. In Proceedings of the 9th Annual Conference on
Genetic and Evolutionary Computation (GECCO),
pages 118–125. ACM, 2007.
[25] Nysret Musliu and Felix Winter. A hybrid approach
for the Sudoku problem: using constraint program-
ming in iterated local search. IEEE Intelligent Sys-
tems, 32(2):52–62, 2017.
[26] Jaysonne A Pacurib, Glaiza Mae M Seno, and John
Paul T Yusiong. Solving Sudoku puzzles using im-
proved articial bee colony algorithm. In Fourth
International Conference on Innovative Computing,
Information and Control (ICICIC), pages 885–888.
IEEE, 2009.
[27] Katya Rodríguez-Vázquez. GA and entropy objective
function for solving Sudoku puzzle. In Proceedings
of the Genetic and Evolutionary Computation Con-
ference Companion, GECCO ’18, pages 67–68, New
York, NY, USA, 2018. ACM.
[28] Ibrahim Sabuncu. Work-in-progress: solving Sudoku
puzzles using hybrid ant colony optimization algo-
rithm. In 1st International Conference on Industrial
Networks and Intelligent Systems (INISCom), pages
181–184. IEEE, 2015.
[29] Krzysztof Schiff. An ant algorithm for the Sudoku
problem. Journal of Automation, Mobile Robotics
and Intelligent Systems, 9, 2015.
[30] Carlos Segura, S Ivvan Valdez Peña, Sal-
vador Botello Rionda, and Arturo Hernández
Aguirre. The importance of diversity in the applica-
tion of evolutionary algorithms to the Sudoku prob-
lem. In IEEE Congress on Evolutionary Computa-
tion (CEC), pages 919–926. IEEE, 2016.
[31] Ricardo Soto, Broderick Crawford, Cristian Gal-
leguillos, Eric Monfroy, and Fernando Paredes. A
hybrid ac3-tabu search algorithm for solving Su-
doku puzzles. Expert Systems with Applications,
40(15):5817–5821, 2013.
[32] Naoyuki Tamura. Nurikabe solver in Copris. Available at http://bach.istc.kobe-u.ac.jp/copris/puzzles/nurikabe/.
[33] Zhiwen Wang, Toshiyuki Yasuda, and Kazuhiro
Ohkura. An evolutionary approach to Sudoku puz-
zles with ltered mutations. In IEEE Congress on
Evolutionary Computation (CEC), pages 1732–1737.
IEEE, 2015.