
This paper is a preprint (IEEE “accepted” status).

IEEE copyright notice. © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Computer-Assisted Reconstruction of Virtual

Fragmented Cuneiform Tablets

Tim Collins

Sandra I. Woolley

Luis Hernandez Munoz

Electronic, Electrical and Systems Engineering,

University of Birmingham, Edgbaston,

Birmingham, B15 2TT, UK

Email: t.collins@bham.ac.uk

s.i.woolley@bham.ac.uk

LXH615@bham.ac.uk

Andrew Lewis

Digital Humanities Hub,

University of Birmingham, Edgbaston,

Birmingham, B15 2TT, UK

Email: AXL148@bham.ac.uk

Eugene Ch’ng

School of Computer Science

University of Nottingham Ningbo China

199 Taikang East Road, Zhejiang Ningbo,

315100 China

Centre for Creative Content and Digital Innovation,

University of Malaya,

50603 Kuala Lumpur, Malaysia

Email: eugene.chng@nottingham.edu.cn

Erlend Gehlken

Institut für Archäologische Wissenschaften

Goethe-Universität, Grüneburgplatz 1

60629 Frankfurt/Main, Germany

Email: gehlken@mailer.uni-marburg.de

Abstract—Cuneiform is one of the earliest known systems

of writing consisting of wedge-shaped strokes forming signs im-

pressed on clay tablets. Excavated cuneiform tablets are typically

fragmented and their reconstruction is, at best, tedious but more

often intractable given that fragments can be distributed within

and between different collections. Digital archives relevant to

cultural heritage, such as the Cuneiform Digital Palaeography

Project [1]–[4] and the Cuneiform Digital Library Initiative [5],

[6], now make richly annotated artefact media available to wide

populations. Similarly, developments in computer-aided three-

dimensional reconstruction methods offer the potential to assist

in the restoration of broken artefacts. In this paper, a system

for providing computer assistance for identifying and orientating

matching fragment pairs is described. Metrics to grade the quality

and likely signiﬁcance of potential matches are also described.

Finally, the results from experiments with scans of laboratory

fabricated tablet fragments and genuine fragmented cuneiform

tablets are reported.

Keywords—Computational geometry, Iterative methods, Cost

function, Search problems, Computer graphics

I. INTRODUCTION

Unlike jigsaw puzzles of thousands of pieces, which com-

puters can now easily solve [7], artefact fragments such as

broken pottery (potsherds) or cuneiform tablet fragments are

more complex three-dimensional objects and may belong to

an unknown number of complete or incomplete “puzzles”

whose surviving pieces may be eroded and may no longer ﬁt

well together. Computer-aided reconstruction of archaeological

fragments has been an active area of research in recent years

although most published work has been speciﬁc to the joining

of potsherds [8]. A survey of computational reconstruction

The authors gratefully acknowledge the support of The Leverhulme Trust

research grant (F000 94 BP), and the multidisciplinary support of the Univer-

sity of Birmingham’s Digital Humanities Hub.

methods [9] observes that while approaches are sophisticated

and computationally efﬁcient, none claim to be ready for

deployment as archaeological tools. Impediments to automated

reassembly, aside from the practical difﬁculties associated with

obtaining three-dimensional scan sets, include the extremely

difﬁcult search problems, the lack of surface information inclu-

sion with object geometry and, signiﬁcantly, the resolution of

issues associated with large numbers of false-positive matches.

More recently, a hybrid human-computer approach has been

proposed to reﬁne computer and archaeological knowledge

bases with geometry, texture and feature knowledge to resource

match performance [10]. Tested on sculptural fragments this

approach demonstrated good performance.

Adopting a hybrid human-computer approach, we have

designed a virtual collaborative environment for the recon-

struction of cuneiform tablets [11]. A complete description

of the environment is detailed in a separate publication [12].

The design involves computer assistance on a number of

levels from simple quantitative join-quality feedback and semi-

automated alignment of nearby fragments (a form of “snap to

best ﬁt”) through to fully automated searching and joining of

fragments. Even in the latter case, though, the computer merely assists

human effort; human intervention is necessary to validate

candidate matches and eliminate false-positives.

Cuneiform fragment matching differs from problems such

as mosaic or potsherd matching in that tablet fragments have

signiﬁcant free-form variation in shape in all three dimensions.

However, to reduce the search problem, some assumptions

can be made about the expected size, shape, texture and

surface curvature properties of the completed tablet. In making

such assumptions, we can create additional criteria for the

assessment of match quality beyond those generally applied

in free-form 3D reconstruction algorithms [13]–[15].

We propose several criteria for match quality assessment:

the distances between pairs of sampled points on the adjoining

surfaces, the interior area of the adjoining surfaces, the degree

of interlocking of the join, the continuity of the written surfaces

either side of the join, and the continuity of the slope of

the written surfaces either side of the join. These criteria are

used both for iteratively ﬁnding optimal matching fragment

orientations and also for ranking candidate matches; false

positives should be rejected and the most promising joins given

priority when presenting to users for validation. Typically, in

previously published reconstruction algorithms, match quality

is assessed by a more limited set of criteria such as measures of

inter-surface distances and size of overlapping area [14], [15]

but we have found, in this application, that the introduction

of additional criteria aids in the prioritisation process required

prior to human validation.

When a match is attempted between a pair of fragments,

an iterative process is used to ﬁnd the optimal translations

and rotations needed to minimise a cost function designed to

quantify the match quality. This process is repeated across

a search-grid of initial conditions in order to guarantee the

globally optimal match is identiﬁed. During the iterations,

the translations and rotations are applied to the model-view

matrix used by the graphical processing unit (GPU) of the

computer. This efﬁciently yields uniformly sampled depth

maps of the fragment from the irregularly sampled model data

of the archived scan [16]. The cost function to be minimised is

calculated through analysis of the depth maps of the two frag-

ment surfaces. Although the use of GPU-generated projective

depth maps has been reported in previously published related

work [14], [17], the iterative algorithm and the associated cost

function presented here are novel and, while developed for

cuneiform fragments, could be applicable for other fragmented

artefacts.

II. MATCHING PROCESSING

A. Pre-processing

Upon attempting to match a pair of fragments, six degrees

of freedom must be considered. If one of the fragments

is considered to be ﬁxed in space, the other may require

translation in three dimensions as well as rotation by the three

Euler angles in order to form a join. One of the degrees

of freedom can be eliminated if a restriction is imposed

that in any potential solution, the fragments must touch one

another but must not intersect. With ﬁve degrees of freedom

remaining, the search space is still prohibitively large for an

exhaustive search and an iterative approach is required. Due

to the intricate geometry of fragment surfaces, convergence

to a globally optimal solution cannot be easily guaranteed

so multiple iterations must be performed using a search grid

of different initial conditions. The ﬁrst part of our algorithm

involves some pre-processing applied to the fragment scans in

order to reduce the scale of this search.

When a tablet fragment is scanned, the ﬁrst part of our

process is to calculate the minimum-volume oriented bounding

box [18] of the vertices and to reorientate the vertex coordi-

nates so that the bounding box is aligned parallel with the x,

y and z axes. For most cuneiform fragments of interest, we

expect at least one of the surfaces to contain script and, after

Fig. 1. Two virtual cuneiform tablet fragments with oriented bounding boxes.

The fragments, W 18349 (above) and Wy 777 (below) [19], are part of a letter

written in the Neo-Babylonian empire (626-539 BC, around the time of King

Nebuchadnezzar) and were found in the ancient city of Uruk (near modern-day

Basra, Iraq).

reorientation, this surface, being relatively ﬂat, will be roughly

parallel with one side of the bounding box. This side can be

identiﬁed using the statistical properties of the surface textures

or, more reliably, manually at the time that the fragment is

scanned. After identiﬁcation the fragment is rotated so that

the inscribed surface faces forwards as illustrated by the two

fragment scans shown in ﬁgure 1.

After these rotations, only the non-script sides of the

fragment are considered for potential matches. Furthermore,

when considering a pair of fragments, only the orientations of

the boxes with the inscribed surfaces facing forwards will be

considered. This reduces the number of possible permutations

to consider from 144 (each of the six faces of one box

presented to each of the six faces of the other with four possible

rotations of 0°, 90°, 180° and 270°) to just 16 (four faces of

one box presented to four faces of the other with no rotations

allowed). Also, because it is usually possible to determine the

direction of writing on the fragments, the number of possible

combinations can be further reduced to just four.
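The counting argument above can be checked with a short enumeration. This is only an illustrative sketch, not part of the authors' system; the face names and the "opposite faces" pairing rule are hypothetical stand-ins for the orientation bookkeeping described in the text.

```python
from itertools import product

faces = ["front", "back", "left", "right", "top", "bottom"]
rotations = [0, 90, 180, 270]

# Unconstrained: any face of one box against any face of the other,
# at any of the four in-plane rotations (6 x 6 x 4 = 144).
unconstrained = list(product(faces, faces, rotations))

# With both inscribed ("front") surfaces facing forwards, only the four
# remaining sides of each fragment can form the join, and no in-plane
# rotation is allowed (4 x 4 = 16).
sides = [f for f in faces if f not in ("front", "back")]
constrained = list(product(sides, sides))

# Knowing the direction of writing pairs up opposite faces only,
# e.g. the top of one fragment can only meet the bottom of the other.
opposite = {"left": "right", "right": "left", "top": "bottom", "bottom": "top"}
directed = [(a, b) for a, b in constrained if opposite[a] == b]
```

Running the enumeration reproduces the reduction described above: 144 candidate pairings shrink to 16 once the inscribed surfaces are fixed facing forwards, and to 4 once the direction of writing is known.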

B. Pair-wise Matching

For each pair of faces, one of the fragments must be

translated and rotated to ﬁnd the best matching position. Three

rotations are allowed corresponding to the three Euler angles.

Translations are made in the left-right direction (x offset) and

in the front-back direction (y offset). An up-down translation (z

offset) is also made but this is calculated from the rotations and

the x and y offsets such that the restriction that the fragments

must touch but not intersect is satisﬁed. There are, therefore,

only ﬁve independent positional parameters (three angles and

two offsets) that require optimisation.

Fig. 2. Example illustration of the depth map calculation performed by the GPU. (left) Depth map of the top face of the ﬁrst fragment, (centre) depth map

of the bottom face of the second fragment, (right) joined fragments and their summed depth map, z, after min(z) has been subtracted. The uniform, near zero,

depth values of the summed depth map indicate a constant distance between point-pairs and, therefore, a good quality match.

The approach we have adopted to optimise these ﬁve pa-

rameters for a given pair of fragments is based on the analysis

of two-dimensional projective depth maps of the surfaces to

be joined. The calculation of depth maps (also known as

z-buffers) is a part of the hidden-surface removal algorithm

performed whenever the GPU of a computer renders a three-

dimensional object. GPUs contain specialised processing units

within their architecture to efﬁciently perform the geometric

calculations needed to generate a z-buffer and the results can

be accessed via OpenGL [16]. This allows us to generate

a uniformly sampled grid of depths of the surface of the

join from the irregularly sampled vertex data of the object

model. In operation, the vertex and face data of each fragment

are stored in array buffers in GPU memory. Depth maps are

then generated on each iteration by manipulating the model-

view matrix of one fragment before rendering the depth data

to an off-screen framebuffer object. Similar approaches have

been reported in previous work [14], [17] but the subsequent

analysis of the depth map data presented here is novel.

Due to the intricate geometry of the surfaces of the join,

convergence to the globally optimal solution cannot be guar-

anteed by iteratively optimising all ﬁve of the rotation and

translation parameters. However, we have found that if the

x and y offsets are fixed, the three rotations can be reliably

estimated iteratively. This is due to the orientation applied

during the pre-processing which ensures that at least two of

the three angles are relatively close (within a few degrees) to

the optimal solution because of the approximate alignment of

the planes of the inscribed surfaces.

Although it is still necessary to systematically search

through a two-dimensional grid of possible x and y offsets,

the reduction in computational complexity from the original

ﬁve-dimensional search space is signiﬁcant. For each potential

x, y offset in the search, the iterative estimation of the optimal

rotations uses a Nelder-Mead optimisation [20]. The optimi-

sation aims to minimise a cost function which is calculated

at each iteration using the algorithm outlined in the following

pseudo-code:

function cost(x, y, φ, θ, ψ)
    fragment0.rotate(φ, θ, ψ)
    fragment0.translate(x, y)
    z ← fragment0.depthmap() + fragment1.depthmap()
    // Bring fragments into contact without intersecting
    z ← z − min(z)
    // Calculate average cost function of all sampled points
    c ← 0
    for n = 0 to N − 1 do
        c ← c + f(zn)
    end for
    return c / N

x and y are the offsets which are fixed for each optimisation;

φ, θ and ψ are the Euler rotation angles which are to be opti-

mised. The GPU calculates the depth maps for each fragment

by rendering an orthographic projection forming a uniform

grid of sampled distances from a ﬁxed observation plane to

the surface of the fragment. The rotations and translations are

applied to the model-view matrix of one of the fragments prior

to generating its depth map. The two depth maps are summed

and the minimum summed depth value subtracted from all

depths, thus satisfying the condition that the fragments must

touch but not intersect. For each element of the summed depth

map, z, the residual offset, zn, is converted to a cost-per-

sample by the function f(zn) to assess the goodness of fit.

This function is averaged over the Ndepth samples to give

the overall cost function. Examples of the depth maps formed

from scanned laboratory fabricated tablet fragments are shown

in ﬁgure 2 and the geometry of this process is illustrated by the

side view in ﬁgure 3. Note that in both ﬁgures the fragments

form very good matches and the summed depth map shown in

ﬁgure 2 has a near-uniform value close to zero over the entire

surface of the join.
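The depth-map arithmetic in the pseudo-code above can be sketched in plain Python. This is a stand-in, not the authors' GPU implementation: the depth maps here are simple lists of sampled depths rather than rendered z-buffers, and the function name `summed_depth_cost` is illustrative.

```python
import random

def summed_depth_cost(depth0, depth1, f):
    """Cost of a candidate join from two uniformly sampled depth maps.

    depth0, depth1: sampled distances from a fixed observation plane to
    the two facing fragment surfaces (as a GPU z-buffer would supply).
    f: per-sample cost function applied to each residual gap.
    """
    # Summed depth for each point-pair on the uniform grid.
    z = [a + b for a, b in zip(depth0, depth1)]
    # Bring the fragments into contact without intersecting:
    # subtract the minimum so the smallest gap is exactly zero.
    zmin = min(z)
    gaps = [v - zmin for v in z]
    # Average the per-sample cost over all N points.
    return sum(f(g) for g in gaps) / len(gaps)

# A perfect join: the two surfaces are exact complements, so every
# point-pair sums to the same value and every residual gap is zero.
random.seed(0)
surface = [random.uniform(0.0, 5.0) for _ in range(64)]
top = surface
bottom = [10.0 - s for s in surface]
cost = summed_depth_cost(top, bottom, f=lambda z: z)  # linear f for the demo
```

As in figure 2, a good match yields a near-uniform summed depth map, so after subtracting min(z) the average cost is close to zero.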

The reliability of the offset search and of the iterative

rotation optimisation are highly dependent on the function,

f(z). Examples of functions used in similar applications

include the Mean Square (MS) of the distances [21] and the

Largest Common Pointset (LCP) metric [14]. Our initial testing

Fig. 3. Side view of the depths used to calculate the cost function. Note that

for a perfect match, all depth-pairs sum to the same value.

Fig. 4. The piecewise linear cost function used in our experiments. The

non-intersecting constraint means that z can never be negative, hence only

positive values are shown.

with laboratory fabricated tablet fragments gave unsatisfactory

results with these methods. The MS cost function did not

perform well when a signiﬁcant proportion of the points in

the depth maps cannot meet due to chips and erosion of the

surfaces. Such features are over-emphasised by the squaring

process and lead to poor convergence and large numbers of

false-positive matches. The LCP approach avoids this problem

but requires a relatively ﬁne depth map resolution to converge

reliably. A linear cost function was also employed to attempt

to suppress the problems encountered with the mean square

approach and gave improved results but still had a tendency to

produce false-positive matches. The cost function we propose

as an alternative is a Piecewise Linear (PL) function designed

as a compromise between a linear function and the LCP

approach. The function, f(z), for the PL function is of the

form:

f(z) = zfor z≤

γz +(1 −γ)for z > (1)

where is a threshold distance equivalent to the one used in

the LCP metric [14] and γ1is a constant. The form of

the function is illustrated in ﬁgure 4. In our experiments, the

values used were = 2 units (1 mm) and γ= 0.025.
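Equation (1) translates directly into code. The following sketch uses the experimental values quoted above (ε = 2 units, γ = 0.025); the function name `f_pl` is illustrative.

```python
EPSILON = 2.0   # threshold distance in scan units (2 units = 1 mm here)
GAMMA = 0.025   # small residual gradient beyond the threshold

def f_pl(z, eps=EPSILON, gamma=GAMMA):
    """Piecewise linear per-sample cost of equation (1)."""
    if z <= eps:
        return z
    return gamma * z + (1.0 - gamma) * eps
```

Note that at z = ε both pieces evaluate to ε, so the function is continuous there; the slope simply drops from 1 to γ, which is what limits the influence of point-pairs too far apart to be part of the matching surface.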

The advantage of the PL cost function over the MS and

linear cost functions is that the impact of point-pairs that are

greater than ε apart is reduced. The reasoning is that these

point-pairs are so far apart that they are not, actually, part of the

matching surface and should not count towards the matching

metric. The small gradient, γ, helps to ensure convergence

when large numbers of points begin the optimisation with a

distance greater than ε. The advantage of the PL approach

over LCP is that the linear portion of the function in the

region z ≤ ε helps to ensure convergence and suppresses false-

positives, especially when the depth map is coarsely sampled.

Figure 5 compares the four cost functions under consideration

using the pair of fragments shown in ﬁgure 1. All except the

proposed PL function exhibit signiﬁcant false-positive peaks.

All of the plots in ﬁgure 5 exhibit multiple local minima

making convergence to the optimal solution using an iterative

process difﬁcult to guarantee. Our approach is to locate the

optimal offset using a multi-resolution search algorithm. The

ﬁrst pass searches the entire range of possible offsets over a

uniform grid of sample points with a relatively coarse 0.5 mm

separation. Subsequent passes reduce the extent of the search

to one quarter of the previous area, centred on the best result

from the previous pass, and reﬁne the resolution by a factor

of two so the total number of sample points remains the same.

This process continues down to a resolution of approximately

10 microns. Subsequent reﬁnement has not demonstrated ob-

servable improvement in the join with any of the fragment pairs

tested to date. This search process is outlined in the following

pseudo-code:

function search(xrange, yrange)
    // Set initial centre of search and search resolution, d
    x0 ← 0; y0 ← 0; d ← dinit
    Nx ← xrange ÷ 2d
    Ny ← yrange ÷ 2d
    while d > dmin do
        for n = −Nx to Nx do
            x ← x0 + n·d
            for m = −Ny to Ny do
                y ← y0 + m·d
                costn,m ← solve(x, y)
            end for
        end for
        (nmin, mmin) ← arg minn,m costn,m
        // Set search centre for next pass
        x0 ← x0 + nmin·d
        y0 ← y0 + mmin·d
        d ← d ÷ 2
    end while
    return (x0, y0)

where xrange and yrange define the extent of the search area,

dinit is the search resolution for the ﬁrst pass (0.5 mm in our

case) and the function, solve(x, y), performs the Nelder-Mead

iterative rotational optimisation, using offsets x and y, and

returns the cost function for the optimal solution.
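The multi-resolution search can be sketched as follows. This is a simplified stand-in, not the authors' implementation: `solve` here is a toy analytic cost with a known minimum rather than the Nelder-Mead rotational optimisation, and the function name `multires_search` is illustrative. Because Nx and Ny stay fixed while d halves, each pass covers one quarter of the previous area with the same number of sample points, as described above.

```python
def multires_search(xrange_, yrange_, solve, d_init=0.5, d_min=0.01):
    """Multi-resolution grid search over the x, y offsets.

    solve(x, y) returns the cost of the best join at that offset.
    d_init = 0.5 mm and d_min = 0.01 mm (10 microns) mirror the
    resolutions quoted in the text.
    """
    x0 = y0 = 0.0
    d = d_init
    nx = int(xrange_ / (2 * d))
    ny = int(yrange_ / (2 * d))
    while d > d_min:
        best = None
        for n in range(-nx, nx + 1):
            x = x0 + n * d
            for m in range(-ny, ny + 1):
                y = y0 + m * d
                c = solve(x, y)
                if best is None or c < best[0]:
                    best = (c, x, y)
        # Centre the next, finer pass on the best offset found so far.
        _, x0, y0 = best
        d /= 2.0  # halve the extent each axis: quarter the area
    return x0, y0

# Toy cost surface with a known minimum at (1.23, -0.77).
x_opt, y_opt = multires_search(
    8.0, 8.0, solve=lambda x, y: (x - 1.23) ** 2 + (y + 0.77) ** 2)
```

On this smooth toy surface the search homes in on the true minimum to well within the finest grid spacing; on real cost surfaces with many local minima (figure 5), the coarse first pass is what keeps the refinement from locking onto the wrong basin.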

C. Match Ranking

The pair-wise matching technique described in the previous

section can be applied on a number of levels from a semi-

automated alignment of nearby fragments (a form of “snap to

best ﬁt”) through to fully automated searching and joining of

fragments. Regardless of the application, human intervention

will always be required following matching in order to validate

the computer generated match and eliminate false-positives.

Efforts to aid this process have already been described in

section II-B in the shape of the novel proposed piecewise-linear

cost-function. This has been shown to signiﬁcantly reduce the

incidence of false-positive matches.

Fig. 5. The four cost functions under consideration shown as normalised functions of x and y offset. Piecewise Linear (PL), Linear (L), Largest Common

Pointset (LCP) and Mean Squares (MS) functions are shown.

Fig. 6. (i) Small overlapping join area (only four points are closer than ε): a

low priority match. (ii) Larger join area: likely to be of greater signiﬁcance.

When the fully automated matching system is used, it is

desirable to prioritise the potential matches so that the human

validator can examine the most promising results ﬁrst. Ranking

different potential matches could be done simply on the basis

of the ﬁnal cost function of the join but this does not take

into account other factors that could indicate that one matched

pair is a more signiﬁcant candidate than another and is worth

investigating with a higher level of priority. In addition to the

measure of the distance between pairs of points on the depth

maps (section II-B), the quality or signiﬁcance of a match can

also be graded based on the following criteria:

•the overlapping area of the adjoining surfaces

•the degree of interlocking of the join

•the continuity of the written surfaces either side of the

join

•the continuity of the slope of the written surfaces

either side of the join

The overlapping area of the adjoining surfaces is estimated by

the number of points in the summed depth maps that have a

separation less than the threshold, ε. This metric is the same

as the common pointset size as used in the LCP cost function.

The greater the number of points, the larger the overlapping

area and, therefore, the more signiﬁcant the join is likely to

be. This estimation is illustrated in ﬁgure 6.

The degree of interlocking of a join is estimated by ﬁxing

the rotation angles, φ, θ, ψ, at their optimal values but moving

the offsets, xand y, by small amounts in each direction, i.e.

x ← x ± δx and y ← y ± δy (see figure 7). The average

increase in the cost function caused by these small translations

Fig. 7. (i) A join with tight interlocking before and after the translation by

δx. (ii) A loosely interlocking join; the average depth after translation is much

smaller in this case.

corresponds to the degree of interlocking; the tighter the join is

interlocked, the larger the average increase in the cost function

will be.
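A minimal sketch of this interlocking measure, under the assumption that the join cost is available as a function of the offsets (the names `interlock_score`, `tight` and `loose` are hypothetical, and the quadratic cost bowls merely stand in for real join cost surfaces):

```python
def interlock_score(cost, x, y, delta=0.05):
    """Average increase in the join cost when the optimal offset is
    perturbed by +/-delta in x and y, with the rotations held at their
    optimal values.  Tightly interlocking joins climb steeply; smooth,
    loosely interlocking joins barely change."""
    base = cost(x, y)
    perturbed = [cost(x + delta, y), cost(x - delta, y),
                 cost(x, y + delta), cost(x, y - delta)]
    return sum(p - base for p in perturbed) / len(perturbed)

def tight(x, y):
    # Steep cost bowl around the optimum: a tightly interlocked join.
    return 50.0 * (x * x + y * y)

def loose(x, y):
    # Shallow cost bowl: a smooth join, e.g. along a tablet edge.
    return 0.5 * (x * x + y * y)
```

The steeper bowl scores much higher, matching the intuition of figure 7: a tightly interlocked join cannot slide sideways without the surfaces separating.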

The last two criteria estimate the degree of any discon-

tinuity of the newly joined inscribed surface in terms of the

surface depth (ﬁgure 8i) and in the surface gradient (ﬁgure 8ii).

These discontinuities are jointly estimated by taking a second

depth map, this time of the joined tablets together viewed from

the front, and calculating the average value of the second-

derivative of depth with respect to distance. For a perfectly

smooth tablet and a perfect join, this will be zero across the

entire exterior surface.
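The continuity estimate can be sketched with a discrete second difference along one row of the front-view depth map. This is a one-dimensional simplification of the measure described above, and the function name `continuity_cost` is illustrative.

```python
def continuity_cost(row):
    """Mean absolute second difference of one row of the front-view
    depth map of the joined fragments, i.e. a discrete second
    derivative of depth with respect to distance.  Zero for a
    perfectly smooth surface and a perfect join; large across a step
    (depth discontinuity) or a crease (gradient discontinuity)."""
    second = [row[i - 1] - 2 * row[i] + row[i + 1]
              for i in range(1, len(row) - 1)]
    return sum(abs(s) for s in second) / len(second)

flat = [1.0] * 10                      # smooth surface, perfect join
step = [1.0] * 5 + [2.0] * 5           # depth discontinuity (figure 8i)
crease = ([float(i) for i in range(5)] +
          [4.0 - 0.5 * i for i in range(5)])  # gradient change (figure 8ii)
```

The flat profile scores exactly zero while both the step and the crease score above zero, so a single second-derivative statistic captures both kinds of discontinuity at once.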

The metrics described in this section have been tested using

scans of the real tablet fragments shown in ﬁgure 1 as well as

scans of laboratory fabricated fragments such as those shown

in ﬁgure 2. Unsurprisingly, the most signiﬁcant predictor of

match quality is the cost function used for the iterative rotation

optimisation. Even when a join is only partial due to damage to

the tablet fragments, the cost function still returns a low value

for correct joins by virtue of the piecewise-linear cost function.

The next most signiﬁcant metric is the overlapping area (the

common pointset size). This does not give extra information

about the quality of the join but does give an indication of

how signiﬁcant the match is likely to be.

The degree of interlocking does not provide information

about the quality or likely signiﬁcance of a match but does

provide information about the likelihood that a match is a false-

positive. Smoother surfaces with low degrees of interlocking

such as the edges of a tablet are more likely to ﬁnd relatively

well ﬁtting false-partners than more detailed surfaces that

interlock tightly. As such, when a tightly interlocking match

is discovered it is more likely to be a true-match and should

be given a higher level of priority.

Surface continuity has proved to be the least signiﬁcant of

the metrics under consideration. This is due, in part, to the

problems in reliably estimating this metric given the irregular

nature of the tablet surface. However, with further reﬁnement

to the algorithm used to calculate this metric, we believe it

Fig. 8. (i) A join exhibiting a discontinuity in surface depth. (ii) A join

exhibiting a discontinuity in surface gradient despite being continuous in

surface depth. (iii) A continuous join in terms of both depth and gradient.

will be possible to form a more robust estimator of surface

continuity, helping to identify false positive matches that have

escaped detection using the other criteria.

III. RESULTS

The method and cost function described have been de-

veloped and tested using laboratory fabricated clay tablet

fragments. Figure 9 shows an example of a test tablet that had

been broken into four fragments before 3D scanning and pro-

cessing using the method described. In this case, the matches

were chosen based on the cost function alone and the tablet

completely reconstructed without human intervention. Note

that even though the join between the top two fragments is

quite badly chipped, the optimal orientation was still estimated

correctly. This is possible because of the use of the piecewise

linear cost function which places a reduced emphasis on the

join area suffering from the chipping when compared with the

linear or mean square functions.

Scans of the real tablet fragments shown in ﬁgure 1 have

also been tested. The resulting matched fragment pair is shown

in ﬁgure 10. This is the ﬁrst example of a pair of 3D scanned

cuneiform tablet fragments to be successfully joined by an

automated algorithm with no human intervention beyond the

initial scanning and pre-processing stages.

IV. CONCLUSIONS

This paper presented a proposed 3D matching algorithm

designed speciﬁcally for fragmented cuneiform tablets. The

novel piecewise linear cost function employed by the algorithm

has proven successful in matching scans of both laboratory

fabricated test fragments as well as fragments of real cuneiform

Fig. 9. A laboratory fabricated clay tablet, broken into four fragments,

scanned and completely reconstructed using the proposed matching process.

tablets. This is the ﬁrst reported example of successful auto-

mated joining of scanned cuneiform tablet fragments.

Several metrics, in addition to the cost function, have

been proposed to aid in the ranking of potential matches.

Although the reconstruction shown in ﬁgure 9 was performed

using the cost function alone, this was only possible because

of the small number of fragments involved. With a larger

library of fragment scans to draw from, the likelihood of

false positives and the need for human validation becomes

increasingly important. This is when the additional match

quality criteria become valuable because they allow joins to

be ranked not only by the match quality, but also by the likely

signiﬁcance of the match.

Presently, although the fragment rotations are calculated

by an iterative optimisation process, the fragment offsets are

established by searching over a uniform grid. We plan to

improve the efﬁciency of the process by further optimisation

of the xy offset search using a second layer of iterative

optimisation (possibly employing some of the additional match

Fig. 10. The scanned tablet fragments, Wy 777 and W 18349 [19],

successfully joined using the proposed piecewise linear cost function.

quality criteria). We also plan to acquire further scans of real

tablet fragments in order to further reﬁne the algorithm and

validate the importance of the match quality metrics.

Although further reﬁnements are planned, even in its cur-

rent form the proposed matching algorithm has the potential

to beneﬁt the Assyriology research community. There are

huge numbers of unmatched fragments in museums around the

world and, without an automated approach, the vast majority

of these would probably never be joined.

ACKNOWLEDGMENTS

For the data acquisition with the 3D scanner of the

Heidelberg Graduate School of Mathematical and Computa-

tional Methods for the Sciences at the Interdisciplinary Center

for Scientiﬁc Computing, Heidelberg University, we would like

to thank Sonja Speck, Hubert Mara and Susanne Krömker.

The scanned tablets are published with the kind permission

of the German Archaeological Institute, Berlin.

The pre-processing computations described in this paper

were performed using the University of Birmingham’s

BlueBEAR HPC service, which provides a High Performance

Computing service to the University’s research community.

See http://www.birmingham.ac.uk/bear for more details.

For assistance and the ﬁring of the laboratory fabricated test

tablets, we would like to thank The Potters Barn, Sandbach,

Cheshire, UK (http://www.thepottersbarn.co.uk/).

REFERENCES

[1] The Cuneiform Digital Palaeography Project. [Online]. Available:

http://www.cdp.bham.ac.uk/

[2] S. I. Woolley, T. R. Davis, N. J. Flowers, J. Pinilla-Dutoit, A. Living-

stone, and T. N. Arvanitis, “Communicating cuneiform: The evolution

of a multimedia cuneiform database,” The Journal of Visible Language,

Special Edition on Research in Communication Design, vol. 36, no. 3,

pp. 308–324, 2002.

[3] S. I. Woolley, N. J. Flowers, T. N. Arvanitis, A. Livingstone, T. R.

Davis, and J. Ellison, “3D capture, representation and manipulation of

cuneiform tablets,” Proc SPIE (Three Dimensional Image Capture and

Applications IV), vol. 4298 (0277), pp. 103–110, 2001.

[4] T. N. Arvanitis, T. R. Davis, A. Livingstone, J. Pinilla-Dutoit, and S. I.

Woolley, “The digital classiﬁcation of ancient near eastern cuneiform

data,” British Archaeological Review, BAR International Series, p. 1075,

2002.

[5] CDLI (Cuneiform Digital Library Initiative). [Online]. Available:

http://cdli.ucla.edu/

[6] A. Lewis and E. Ch’ng, “A photogrammetric analysis of cuneiform

tablets for the purpose of digital reconstruction,” International Journal

of Heritage in the Digital Era, vol. 1, no. 1, pp. 49–53, 2012.

[7] A. Jacob, “Jigsaw-solving computer kicks puny humans aside,” New

Scientist, vol. 214, no. 2869, p. 24, 2012.

[8] M. Kampel and R. Sablatnig, “3D puzzling of archeological fragments,”

in 9th Computer Vision Winter Workshop Piran, Slovenia, 2004, pp. 31–

40.

[9] A. R. Willis and B. D. Cooper, “Computational reconstruction of ancient

artifacts,” IEEE Signal Process. Mag., vol. 25, no. 4, pp. 65–83, 2008.

[10] A. Adán, S. Salamanca, and P. Merchán, “A hybrid human-computer

approach for recovering incomplete cultural heritage pieces,” Computers

and Graphics, vol. 36, no. 1, pp. 1–15, 2012.

[11] E. Ch’ng, A. Lewis, R. E. Gehlken, and S. I. Woolley, “A theoretical

framework for stigmergetic reconstruction of ancient text,” in Visual

Heritage in the Digital Age. Springer, 2013, pp. 43–65.

[12] E. Ch’ng, S. I. Woolley, E. Gehlken, A. Lewis, L. H. Munoz, and

T. Collins, “The development of a collaborative virtual environment

for 3D reconstruction of cuneiform tablets,” in 20th International

Conference on Virtual Systems and Multimedia (VSMM), 2014.

[13] Q.-X. Huang, S. Flöry, N. Gelfand, M. Hofer, and H. Pottmann,

“Reassembling fractured objects by geometric matching,” ACM Trans-

actions on Graphics (TOG), vol. 25, no. 3, pp. 569–578, 2006.

[14] C. S. Belenguer and E. V. Vidal, “Archaeological fragment character-

ization and 3D reconstruction based on projective GPU depth maps,”

in 18th International Conference on Virtual Systems and Multimedia

(VSMM), 2012, pp. 275–282.

[15] S. Winkelbach and F. M. Wahl, “Pairwise matching of 3D fragments

using cluster trees,” International Journal of Computer Vision, vol. 78,

no. 1, pp. 1–13, 2008.

[16] OpenGL software development kit. [Online]. Available:

https://www.opengl.org/sdk/

[17] G. Papaioannou, E.-A. Karabassi, and T. Theoharis, “Virtual archaeolo-

gist: Assembling the past,” Computer Graphics and Applications, IEEE,

vol. 21, no. 2, pp. 53–59, 2001.

[18] J. O’Rourke, “Finding minimal enclosing boxes,” International journal

of computer & information sciences, vol. 14, no. 3, pp. 183–199, 1985.

[19] E. Gehlken, S. I. Woolley, T. Collins, and E. Ch’ng, “Automated joining

of cuneiform tablet fragments,” Nouvelles Assyriologiques Brèves et

Utilitaires [NABU], awaiting publication, 2014.

[20] J. A. Nelder and R. Mead, “A simplex method for function minimiza-

tion,” The Computer Journal, vol. 7, no. 4, pp. 308–313, 1965.

[21] G. Palmas, N. Pietroni, P. Cignoni, and R. Scopigno, “A computer-

assisted constraint-based system for assembling fragmented objects,” in

Digital Heritage International Congress. The Eurographics Associa-

tion, 2013, pp. 529–536.