IJDAR (1999) 2: 120–131
International Journal on Document Analysis and Recognition
© Springer-Verlag 1999
A complete system for the intelligent interpretation of engineering drawings
Pierre M. Devaux, Daniel B. Lysak, Rangachar Kasturi
Department of Computer Science and Engineering, The Pennsylvania State University, 220 Pond Laboratory, University Park, PA 16802, USA; e-mail: kasturi@cse.psu.edu
Received: December 2, 1998 / Revised: June 18, 1999
Correspondence to: R. Kasturi
Abstract. Converting paper-based engineering drawings into CAD model files is a tedious process. Therefore, automating the conversion of such drawings represents tremendous time and labor savings. We present a complete system which interprets such 2D paper-based engineering drawings, and outputs 3D models that can be displayed as wireframes. The system performs the detection of dimension sets, the extraction of object lines, and the assembly of 3D objects from the extracted object lines. A knowledge-based method is used to remove dimension sets and text from ANSI engineering drawings, a graphics recognition procedure is used to extract complete object lines, and an evidential rule-based method is utilized to identify view relationships. While these methods are the subject of several of our previous papers, this paper focuses on the 3D interpretation of the object. This is accomplished using a technique based on evidential reasoning and a wide range of rules and heuristics. The system is limited to the interpretation of objects composed of planar, spherical, and cylindrical surfaces. Experimental results are presented.
Key words: 3D drawing interpretation – Document image analysis – Graphics recognition – Image processing – Engineering drawings
1 Introduction
A number of researchers [9,18,15,13,12] have reported 3D object reconstruction from a given set of 2D views. Such 3D interpretation systems were not evaluated using scanned input data. They either require CAD-accurate vector data, or assume that the separation of object lines from dimension sets has already been completed by a preprocessor and that the layout of the views in the drawing is known. While some researchers have reported techniques that can interpret objects using only two views [8], most necessitate at least three views. The method described below requires only two views, and makes use of a series of novel rule-based techniques for the reconstruction of solid objects.
Since 3D interpretation from 2D paper-based engineering drawings is a problem with important industrial applications, we began developing techniques to recognize dimension set components [7], extract object lines [6], and assemble 3D objects from vectorized object lines. The method utilized to assemble 3D objects [10] can accommodate auxiliary views in addition to the standard 6-view orthogonal set, and the view layout need not be known a priori. Results of our efforts to achieve a complete, integrated system for the intelligent interpretation of engineering drawings are reported here.
This system assumes that the engineering drawings are prepared as per the ANSI [1] drafting standard. A typical engineering drawing created using the ANSI standard is shown in Fig. 1. The contents of such a drawing can be grouped into two major classes:

– Object lines representing orthographic projections of 3D objects.
– Dimensioning lines, geometric dimensioning and tolerancing (GD&T) information, area fills, and associated text (known as dimension sets), which provide exact definitions of object dimensions.

Since dimension sets play an important role in engineering drawings, their recognition is a key component in any machine drawing understanding system [4].
The first phase of the system is segmentation, which includes a specialized text/graphics separation algorithm. In the second phase, vectorization and feature extraction methods are used to detect text and graphics primitives. Critical-point features are extracted by means of the k-curvature method [14,17]. The final phase consists of assembling the 3D object from the extracted object lines. This includes the segmentation and identification of the views, matching of the significant features across views, and construction of the outer surfaces of the object. Since the focus of this paper is the recognition and creation of the 3D object, the first two phases of the system will be only briefly described in the following section.
Fig. 1. Scanned engineering drawing prepared in accordance with the ANSI drafting standards
2 Extraction of dimension sets and object lines
In order to efficiently interpret engineering drawings, it is necessary to separate text from graphics. The methods used in [7] are capable of detecting and extracting dimensioning lines, arrowheads and complete leaders, leader pairs, multi-segment leaders, and arrow tails. In addition, they remove witness lines and the text associated with dimensioning lines. An example engineering drawing is shown in Fig. 2a, along with the extracted text blocks and dimensioning lines in Fig. 2b.
After the removal of dimension sets, section lines, and centerlines, only dashed hidden lines and solid lines representing physical object lines remain. These two types of object lines are recognized and identified using the methods described in [7]. Figure 3a illustrates the result of hidden (dashed) line detection for the drawing in Fig. 2a. After the extraction of dashed lines, the only remaining lines represent visible object lines or extraneous noise. The extraneous segments are usually due to unclassified segments of dashed lines, and are discarded.
The result of object line extraction is shown in Fig. 3b. Continuous connected sets of lines may comprise several segments, such as L-shaped corners and T-junctions. Such lines are processed further to detect critical points using a modified k-curvature method [5]. After this final step, all lines are adequately segmented and labeled such that they cannot be broken down into smaller individual (non-colinear) segments.
The resulting line segments are then categorized according to their types (solid or dashed) and geometric properties. Straight lines, for example, are described solely by their starting and ending paper coordinates, while additional descriptors are used for arcs, circles, and ellipses. By using a very minimal set of descriptors for each line type, the exchange of data from the pre-processing phase of the system to the interpretation phases is kept to a minimum. In fact, any pre-processor that can extract the line segments can be used, as the segments are not required to be in any particular order.
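To make this hand-off concrete, one possible C layout for such minimal segment records is sketched below; the type and field names are our own illustration, not the system's actual data structures.

/* A minimal sketch of per-segment descriptors; all names are hypothetical. */
typedef enum { LINE_SOLID, LINE_DASHED } LineStyle;
typedef enum { SEG_STRAIGHT, SEG_ARC, SEG_CIRCLE, SEG_ELLIPSE } SegType;

typedef struct {
    SegType   type;
    LineStyle style;
    double    x0, y0;   /* start point, paper coordinates                 */
    double    x1, y1;   /* end point; unused for full circles             */
    double    cx, cy;   /* center, for arcs, circles, and ellipses        */
    double    r;        /* radius (arc/circle) or major axis (ellipse)    */
    double    r2;       /* minor axis, ellipses only                      */
    double    rot;      /* axis orientation, ellipses only                */
} Segment2D;

Because each record is self-describing, an unordered list of Segment2D values is all the interpretation phases would need from any pre-processor.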
Fig. 2. a Typical engineering drawing with dimensioning lines and text; b extracted arrows, text blocks, and leaders
3 Three-dimensional interpretation of objects
Once a scanned engineering drawing has been reduced to a set of vectorized line segments representing object lines, the actual drawing views must be reconstructed and their relationships must be determined. In this system's implementation, one of the goals is to establish a 3D framework upon which the object's surfaces can be 'attached', much the same way as a facade is attached to the steel framework of a building. To obtain this framework, 2D features are matched across the drawing's views, thereby producing a set of features in 3D space.
For purposes of clarity, certain terms will pertain to 2D while others will be reserved for 3D. Line will be used to describe any 2D straight line, arc, or circle, while edge will be used to identify a 3D line, circle, arc, or ellipse. Edges can be thought of as the steel girders in the building analogy. Nodes will be used to describe 2D line junctions, while a 3D edge junction will be referred to as a vertex. Using the building imagery once more, a vertex is where steel girders are bolted to each other. Finally, the term face will be used to describe a 3D bounded, uniform surface that is surrounded by edges on all sides, just as an outer wall of a building. A set of faces can be assembled into an enclosure, which defines the complete piecewise outer surface of the object.

Fig. 3. a Hidden/dashed lines detected in the drawing in Fig. 2a; b solid lines and critical points detected from the same drawing
The method used in this system [11] relies on the Dempster-Shafer theory of evidence to establish relationships between views (i.e., parent-child view-pairs, such as the front view and right-side view pair). This allows for auxiliary views in the drawing, along with slight misalignments between views in the drawing itself. This method also utilizes a set of 2.5D coordinate systems (one for each view) which provides an intermediate step during the conversion of 2D drawing-based coordinates to 3D object-based coordinates (Fig. 4). Once the view relationships are known, the conversion between the 2.5D coordinates and the 3D coordinates consists of a simple matrix multiplication. A complete discussion of this process is provided in [11].
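The exact per-view matrices are derived in [11]; the C sketch below only illustrates the form of that final computation, assuming (our assumption, not the paper's notation) a 3x3 rotation R and a translation t for each view.

/* Hypothetical per-view transform: p3d = R * p25 + t. */
typedef struct { double R[3][3]; double t[3]; } ViewTransform;

static void view_to_object(const ViewTransform *vt,
                           const double p25[3],  /* 2.5D coords: (u, v, depth) */
                           double p3d[3])        /* 3D object-based coords     */
{
    for (int i = 0; i < 3; ++i) {
        p3d[i] = vt->t[i];
        for (int j = 0; j < 3; ++j)
            p3d[i] += vt->R[i][j] * p25[j];     /* plain matrix multiply */
    }
}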
The creation of 3D vertices is a critical step in the interpretation of the complete object represented by the engineering drawing. With all of the view relationships, matching nodes, and line information established by the methods described in [11], it is then possible to fit edges between vertices and faces to these edges. Finally, a complete enclosure is generated which accurately represents the 3D object. The generation of edges between vertices is explained in detail in the following section.

Fig. 4. Intermediate 2.5D coordinates are used to simplify the conversion between drawing-based coordinates and 3D object-based coordinates. A minimum of two views is required to project matched nodes into 3D vertices

Fig. 5. Edges of an object with cylindrical faces
3.1 Generating edges
Once a set of candidate vertices has been generated, edges are created between them using the 2D lines as cues. As mentioned above, the faces are then fitted to this framework to construct an enclosure [16,3]. To avoid redundant edges, an edge must begin and end at a vertex; there cannot be any intermediate vertices between the start and end points of the edge.
For polyhedral objects, an edge indicates a discontinuity of the surface normal. For curved surfaces, an edge may be present although the surface normal is actually continuous. This concept is easily visualized by picturing the edges of a cylinder. The surface normal is continuous all around the cylinder (except at the flat ends), but edges are present in the cylinder's engineering drawing. These edges are represented as lines ab, cd, ef, and gh in the drawing in Fig. 5. Note that 3D vertices are identified by capital letters, while small letters denote 2D nodes. Edges are created along the above lines, and they are identified by AB, CD, EF, and GH.

If edge EF is examined, it is clear that the surface normal between the face enclosed by EFDC and the face enclosed by EFBA is not discontinuous. The edge is created anyway, since it is necessary for the generation of faces. Such edges are termed virtual edges, while edges at a surface discontinuity are called real edges. Edges cannot be identified as real or virtual at the time they are constructed. In fact, the same edge may be real in one trial enclosure and virtual in another, depending on the faces that are attached to it.
The generation of edges is performed using a set of rules outlined below. Since it is inefficient to examine all possible pairs of vertices, only pairs which belong to parent-child or sister-sister view-pairs are examined in the edge creation process. Parent-child view-pairs are views which are adjacent to each other (such as the front and right-side views of a typical drawing), while sister-sister view-pairs are made up of views which have the same parent (e.g., the right-side and top views). In the discussion below, the edge type is either straight, circular, or elliptical, while the term edge parameters refers to the mathematical descriptors necessary to represent a circular or elliptical edge in 3D space. The same concepts apply to 2D line types and parameters.
Rule E1: If, in each view of the pair, there exists a line (or a continuous sequence of lines of the same type and with the same parameters) connecting the 2D node that corresponds to vertex 1 to the node that corresponds to vertex 2, then a candidate edge from vertex 1 to vertex 2 is created. The edge type and parameters are determined from the corresponding line types. Straight, circular, or elliptical edges may be generated, as per the additional rules listed in the following section. Exception: No edge is generated for straight lines in a parent-child view-pair if both are parallel to the view-pair angle [11], or in sister-sister view-pairs if they are both perpendicular to the angle from the common parent view. This exception prevents the generation of edges that, for polyhedral objects, tend to traverse continuous planar surfaces diagonally (see Fig. 6).
Rule E2: If vertex 1 and vertex 2 map to the same node in one view and there is a straight line (or a continuous sequence of straight lines all with the same slope) connecting the corresponding nodes in the other view, then a straight edge is created from vertex 1 to vertex 2.
Rule E3: If there is a circular line (or a continuous sequence of circular lines all with the same parameters) from the node corresponding to vertex 1 to the node corresponding to vertex 2 in one view, and one of the vertices maps to a node in the other view which is adjacent to a circular line, then a circular edge is created from vertex 1 to vertex 2 in the plane perpendicular to the viewing direction of the first view.
The above rules are not sufficient to generate complete edges. Rule E1 needs to be expanded upon, and both rules E1 and E3 produce edges that require additional descriptors. Rule E2 is rather trivial since it always creates straight edges; a sketch of its test is given below.
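As an illustration, Rule E2 can be expressed as the following C fragment. The Vertex layout and the chain-test predicate are our own assumptions; the real system's representation is not described at this level of detail.

/* Sketch of Rule E2 over a view-pair; all names are hypothetical. */
typedef struct { int node_in[2]; } Vertex;   /* node id per view, -1 if none */

/* Supplied by the 2D analysis: is there a straight line (or colinear
   chain of straight lines) joining two nodes in the given view?       */
typedef int (*ChainTest)(int view, int node_a, int node_b);

static int rule_e2_makes_edge(const Vertex *v1, const Vertex *v2,
                              ChainTest straight_chain_between)
{
    for (int shared = 0; shared < 2; ++shared) {
        int other = 1 - shared;
        if (v1->node_in[shared] >= 0 &&
            v1->node_in[shared] == v2->node_in[shared] &&   /* same node...     */
            straight_chain_between(other,                   /* ...and a straight */
                                   v1->node_in[other],      /* chain in the      */
                                   v2->node_in[other]))     /* other view        */
            return 1;   /* create a straight edge from vertex 1 to vertex 2 */
    }
    return 0;
}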
Fig. 6. Sample drawing illustrating the generation of edges using Rule E1

3.1.1 Edge parameters

All edges that are either circular or elliptical require certain parameters. Straight edges only require the starting and ending vertices, but more complex edges necessitate additional descriptors to specify their exact position in three dimensions. Table 1 lists the needed parameters for these edge types. It is also necessary to state the means by which the edge types are determined when applying Rule E1. Five additional 'P' rules are therefore outlined below, and all rules assume that the viewing directions are orthogonal:
Rule P1: Both lines are straight:
– edge is straight.
Rule P2: Line in one view is curved, line in the other view is straight and perpendicular to the projection of the viewing direction of the first view (see Fig. 6 for the definition of a viewing direction projection):
– edge is of the same type as the curved line.
Rule P3: Line in one view is circular, the line in the other view is straight and not perpendicular to the viewing direction of the first view:
– edge is elliptical.
– minor axis is twice the radius of the circular line; major axis is determined by the projection of the viewing direction of the first view as seen in the second view.
– center is found from the center point in the first view, constrained along the straight line seen in the second view.
– unit normal to the plane containing the edge is perpendicular to the straight line in the second view and perpendicular to the viewing direction for that view.
– minor axis direction is the viewing direction of the second view.
Rule P4: Line in one view is elliptical, line in the other view is straight and not perpendicular to the viewing direction of the first view:
– a general elliptical edge is assumed, but if the major and minor axes are determined to be equal, the ellipse becomes a circle.
– center is found from the center point in the first view, constrained along the straight line seen in the second view.
– unit normal to the plane containing the edge is perpendicular to the straight line in the second view and perpendicular to the viewing direction for that view.
Table 1. Edge parameters

Edge type    Unit vector normal to    3D coordinate of center     Radius    Major and     Unit vector in direction
             plane containing edge    point of circle or ellipse            minor axes    of minor axis
Circular     ✓                        ✓                           ✓
Elliptical   ✓                        ✓                                     ✓             ✓
Rule P5: Both lines are circular (with the same radius):
– edge is elliptical.
– minor axis is equal to the radius of the circular lines; major axis is equal to 2 times the minor axis.
– center is determined from the centers of the two lines the same way a 2D point seen in both views is converted to a 3D vertex.
– unit normal to the plane containing the edge is the normalized sum or difference of the viewing directions of the two views (i.e., 45° from both). Whether the sum or difference is used depends on how the arc portions match up between the views.
– minor axis direction is perpendicular to both viewing directions and can be obtained as their cross product.
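A minimal C sketch of the Rule P5 computations follows, assuming d1 and d2 are the unit viewing directions of the two views; the function and parameter names are ours, not the paper's.

#include <math.h>

static void cross(const double a[3], const double b[3], double out[3]) {
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

static void normalize(double v[3]) {            /* assumes a non-zero vector */
    double n = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    for (int i = 0; i < 3; ++i) v[i] /= n;
}

/* use_sum selects the sum or the difference of the viewing directions,
   depending on how the arc portions match up between the views.        */
static void p5_ellipse_params(const double d1[3], const double d2[3],
                              double radius, int use_sum,
                              double normal[3], double minor_dir[3],
                              double *minor_axis, double *major_axis)
{
    for (int i = 0; i < 3; ++i)
        normal[i] = use_sum ? d1[i] + d2[i] : d1[i] - d2[i];
    normalize(normal);                /* 45 degrees from both views        */
    cross(d1, d2, minor_dir);         /* perpendicular to both directions  */
    normalize(minor_dir);
    *minor_axis = radius;             /* minor axis equals the 2D radius   */
    *major_axis = 2.0 * radius;       /* major axis is twice the minor     */
}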
Certain cases for elliptical and circular lines are not implemented, although additional rules could be constructed to handle them. For example, the case where an elliptical line in one view matches a circular or elliptical line in the other view cannot be interpreted with this current system. Similarly, the axes of any elliptical lines must be aligned with the orthogonal viewing angles. Finally, the rules for constructing curved edges assume that the viewing directions of the two views are orthogonal. This is always the case in parent-child view-pairs, but sister-sister view-pairs may not have orthogonal viewing directions if one of the views is an auxiliary view.
3.2 Generating faces
When generating edges, all possible candidate edges consistent with the 2D lines seen in the various views are constructed, and only those needed in the final 3D model are actually used. Faces, however, are constructed on an as-needed basis as an enclosure is assembled. This is due to the large number of candidate faces that would otherwise be generated, and also because many of those candidates would never be considered in the assembly of any enclosure.
As mentioned earlier, a face is a portion of a surface surrounded by a set of edges that lie in that surface and form a closed boundary. In the current system, only planar, cylindrical, and spherical surfaces are considered. To construct a face, two edges that meet at a common vertex are usually sufficient to define the surface containing that face. Note, however, that if the first two edges are of the same type and have the same parameters (for example, two colinear straight lines), additional edges may be necessary to define the surface. Once the surface has been defined, edges are added around the boundary until it is complete. To avoid redundancy, we construct only minimal faces which do not contain any smaller faces, except in the case of faces with holes. In those instances, the faces are made up of a number of minimal faces.
A more complete description of face construction is presented later, including the procedures for selecting the starting edges, as this is closely tied to the assembly of the enclosure. We note here that two different methods are used depending on whether or not the face is visible from any of the views.
Each face is uniquely identified by the edges that make up its boundary and by a set of parameters that define the surface that contains those edges. In addition, the orientation of the face with respect to the viewing directions (that is, whether the inside or the outside surface is seen) is important to assure consistency as the faces are assembled into an enclosure. The parameters for each face are as follows (a data-structure sketch follows the list):
1: A list of edges around the boundary of the face, in counterclockwise sequence as seen from the outside of the object. In addition, each edge is assigned a direction (clockwise or counterclockwise) with respect to the surface boundary in order to maintain consistency with its normal unit vector (see Table 1). The direction of the edge is also used later in the enclosure generation process.
2: The parameters that describe the surface in three-dimensional space that contains the face:
– surface type (planar, spherical, or cylindrical with circular cross-section),
– for planar surfaces, a unit vector normal to the surface in the direction of the exterior of the object, plus the perpendicular distance to the origin,
– for spherical surfaces, the radius and the coordinates of the center point,
– for cylindrical surfaces, the radius, the coordinates of a point on the axis, and a unit vector in the direction of the axis,
– for both spherical and cylindrical surfaces, an indication of whether or not the face is hollow (i.e., whether the inside of the object is outside the surface, as when the face is on the boundary of a hole through the object).
3: For each view of the drawing, an indicator of whether or not the outside of the face is in the viewing direction.

4: For each view in the drawing, an indicator of whether or not the surface is parallel to the viewing direction (planar and cylindrical surfaces only).
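One plausible C encoding of this per-face record is sketched below; the names, the view bound, and the fixed boundary-array size are assumptions made for illustration only.

typedef enum { SURF_PLANAR, SURF_SPHERICAL, SURF_CYLINDRICAL } SurfType;

#define MAX_VIEWS 8                     /* hypothetical bound on views  */

typedef struct {
    SurfType type;
    double   normal[3], dist;           /* planar: outward normal, distance */
    double   center[3], radius;         /* spherical: center and radius     */
    double   axis_pt[3], axis_dir[3];   /* cylindrical: point on axis, axis */
    int      hollow;                    /* spherical/cylindrical only       */
} Surface;

typedef struct {
    int      n_edges;
    int      edge_id[32];               /* boundary, counterclockwise       */
    int      edge_ccw[32];              /* per-edge direction flag          */
    Surface  surf;                      /* parameters of item 2             */
    int      outside_seen[MAX_VIEWS];   /* item 3: per-view orientation     */
    int      parallel_to_view[MAX_VIEWS]; /* item 4 (planar/cylindrical)    */
} Face;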
4 Generating the enclosure
The final stage in the interpretation of the drawing consists of fitting surfaces to the candidate edges which were found in the previous section. It was apparent from the node matching in [11] that the number of candidate vertices exceeded the number of actual vertices in the object; likewise, the same phenomenon exists for candidate edges. This implies that the surface fitting is not a straightforward process: there are usually many possible faces that can fit between different combinations of edges. The procedures described in this section are therefore dependent upon the use of evidence for the generation of the most plausible enclosure.
4.1 Enclosure description
The process of assembling the enclosure can be represented by a tree in which each node corresponds to a face. Each branch represents a generally incomplete enclosure, and the set of nodes from the leaf to the root represents the set of faces in that enclosure. The branch with the highest confirming evidence (i.e., the one that best conforms to the original 2D drawing) is further expanded by adding additional faces. When a complete enclosure is obtained that is consistent with the original drawing, a solution is declared.
For each possible enclosure, a list of edges used in that enclosure is maintained. Several parameters are associated with each edge that describe how the edge is used in that particular enclosure and how it will be seen in the completed wireframe model of the object. It is important to understand that, while an edge cannot physically be contained in more than two faces, an enclosure with an edge that is contained in only one face cannot enclose a finite volume. Therefore, each edge must be contained in exactly two faces for the enclosure to be valid. In the following discussion, the term hidden face refers to a face that is not directly visible in the views of the drawing (e.g., there are three hidden faces and three visible faces in a standard, three-view engineering drawing of a cube). The edge parameters are given below; a sketch of the corresponding record follows the list:
1: Status code to describe the current state of the edge in the enclosure construction process. The states are as follows:
– edge closed (the edge is contained in exactly two faces).
– edge open (the edge is contained in only one face), and no attempt has been made to add another face at this edge.
– edge open, and new enclosures have been attempted by adding visible faces at this edge but not hidden faces.
– edge open, and new enclosures have been attempted by adding both visible and hidden faces at this edge.
2: Flag to indicate whether a closed edge is real or virtual. In some cases, open edges can also be identified as real.
3: An indicator for each view describing how the edge is seen in that view. There are four possibilities:
– not seen (i.e., a virtual edge that is not seen as a silhouette, or an edge hidden behind some other edge).
– straight edge seen endwise as a point.
– seen as a solid line (i.e., a real edge or silhouette virtual edge that is directly seen from the viewing direction).
– seen as a dashed line, such as a real or virtual edge that is hidden behind some face given the current viewing direction.
4: A pointer to the face to the left of the edge, as seen from the outside of the object. Note that the edge is oriented counterclockwise in the boundary of that face (i.e., in the same direction as the boundary) as seen from outside the object.

5: A pointer to the face to the right of the edge, as seen from the outside of the object.
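The following C sketch bundles these five parameters into one record, reusing the hypothetical MAX_VIEWS bound from the face sketch above; again, every name is our own.

#ifndef MAX_VIEWS
#define MAX_VIEWS 8     /* same hypothetical bound as in the face sketch */
#endif

typedef enum {
    EDGE_CLOSED,              /* contained in exactly two faces              */
    EDGE_OPEN_UNTRIED,        /* one face, no expansion attempted yet        */
    EDGE_OPEN_VISIBLE_TRIED,  /* visible faces attempted, hidden not yet     */
    EDGE_OPEN_ALL_TRIED       /* both visible and hidden faces attempted     */
} EdgeStatus;

typedef enum { SEEN_NOT, SEEN_POINT, SEEN_SOLID, SEEN_DASHED } EdgeVisibility;

typedef struct {
    EdgeStatus     status;              /* parameter 1                   */
    int            is_real;             /* parameter 2: real vs. virtual */
    EdgeVisibility seen[MAX_VIEWS];     /* parameter 3: per-view         */
    int            left_face;           /* parameter 4: index, -1 if open */
    int            right_face;          /* parameter 5: index, -1 if open */
} EnclosureEdge;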
4.2 Enclosure construction
The construction of the enclosure tree is begun by arbitrarily choosing, in one of the views, a minimal area (ignoring any dashed lines) as a template for the construction of a possible face. If the area is minimal and does not contain any other smaller area(s), it must correspond to a face of the object that is directly visible from that viewing direction. Therefore, the inside/outside orientation of the face that corresponds to this visible area [11] can be established. The face then becomes the root node of the enclosure tree. In the general case, there may be several faces that correspond to the same area, and each face becomes a root node for a separate enclosure tree. At this point, each face is the foundation for a separate enclosure, and each of those enclosures contains only one face. In addition, every edge is attached to only one face and is therefore "open" for further expansion.
The best enclosure, according to the rules of evidence described below, is selected for further expansion. An open edge is selected, and an attempt is made to construct a set of new faces attached to that edge. This step takes us one level deeper in the tree, as each new face becomes a new leaf and represents a new (partial) enclosure. The process is repeated until a complete enclosure [2,19] is obtained, and this enclosure is declared as a solution to the drawing. Figure 7 depicts the algorithm which is followed for the addition of faces to the enclosure.
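Since Fig. 7 itself is not reproduced in this text, the fragment below is only a rough C sketch of the best-first loop it depicts; every helper and type named here is hypothetical, and the details of list management are omitted.

typedef struct Enclosure Enclosure;

extern Enclosure *pick_highest_certainty(void *open_list);
extern int  is_complete(const Enclosure *e);
extern int  pick_open_edge(Enclosure *e);
extern int  expand_with_new_faces(Enclosure *e, int edge, void *open_list);
extern void apply_evidence_rules(Enclosure *e, int n_children);

Enclosure *interpret(void *open_list)
{
    Enclosure *best = pick_highest_certainty(open_list);
    while (best && !is_complete(best)) {
        int e = pick_open_edge(best);               /* an open edge          */
        int n = expand_with_new_faces(best, e, open_list);  /* new leaves    */
        apply_evidence_rules(best, n);              /* re-rate (EV7 to EV9)  */
        best = pick_highest_certainty(open_list);   /* best-first expansion  */
    }
    return best;   /* a complete enclosure, declared the solution */
}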
4.3 Geometric constraints
Fig. 7. Algorithm for the construction of enclosures

For each open edge of an enclosure, several possible faces can be generated, especially in the case of complex drawings. Since each face essentially produces a new enclosure, the number of possible enclosures can become enormous. The search for a satisfactory enclosure is therefore constrained both by the evidential rating of the enclosures and by limiting the types of faces which are generated for open edges. The types of faces which can be generated are limited using the following geometric constraints. These constraints are checked at three points in the process of adding faces: when a face is being constructed, when the face is being added to the old enclosure to create a new one, and when the visibility of an edge in the enclosure is being determined.
Rule G1: If an edge of the face being constructed lies inside or pierces some previously existing face of the enclosure, reject the face being constructed.

Rule G2: If an edge of the face being constructed is already in the enclosure and closed (contained in two faces), reject that face.

Rule G3: If two edges of the face lie in the same surface and intersect each other, reject the face.

Rule G4: If a vertex (other than the starting vertex) is visited more than once by following a path around the boundary of the face, reject the face.

Rule G5: If a newly completed face is cylindrical, examine the face from the point of view along the axis, so that the axis projects to a point. Proceeding around the boundary of the face, if the 2D projection of the boundary (as seen from along the axis) encloses the point that is the projection of the axis, reject that face. See Fig. 8 for an illustration of this rule.
Fig. 8. Illustration of Rule G5

Fig. 9. Illustration of Rule G8. Edges that are assigned a direction in one face must have the opposite direction in the other adjacent face
Rule G6: If an edge of the enclosure lies inside (but not on the boundary of) or pierces the new face, then reject the new face.

Rule G7: If a new face intersects a previous face of the enclosure and the intersection is in the interior of both faces, then reject the new face.

Rule G8: If an edge of a new face is already in the enclosure and it has the same orientation (clockwise or counterclockwise) in the boundary of the new face as it does in the boundary of the face of the enclosure, then reject the new face. Remember that each boundary has a counterclockwise 'direction' when seen from outside the object. Therefore, an edge at the intersection of two faces must have opposite directions in each boundary. See Fig. 9 for an illustration of this rule.

Rule G9: If a new face has the same boundary edges and the same surface characteristics as a previous face in the enclosure, then reject that new face.

Rule G10: If, along the common edge between two adjacent faces, the unit vector perpendicular to the edge, tangent to the surface of the face, and pointing to the interior of the face from the boundary is the same for both faces, then the enclosure is rejected.

Rule G11: If an edge of the enclosure that must be seen as a solid line from certain viewing directions does not appear in any of the corresponding input views (or does not appear as a solid line), then the enclosure is rejected.

Rule G12: If a planar or cylindrical face projects to a line on the outer boundary of the view in a certain viewing direction, and the side of the face that represents the outside of the object projects into the interior of the view, then reject the enclosure.
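As a small illustration of how such checks can be coded, here is a sketch of Rule G2 (the simplest of the twelve), reusing the hypothetical Face and EnclosureEdge types sketched earlier; the real system's representation may differ.

/* Rule G2: reject a candidate face that reuses an already-closed edge. */
static int violates_g2(const Face *f, const EnclosureEdge *edges)
{
    for (int i = 0; i < f->n_edges; ++i)
        if (edges[f->edge_id[i]].status == EDGE_CLOSED)
            return 1;   /* edge already bounded by two faces: reject */
    return 0;
}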
4.4 Selecting the best enclosure
As stated earlier, only the most likely enclosures are expanded during the enclosure construction process. The enclosure certainty factors range from −1 to +1, and they are computed based upon how well the visibility of the enclosure edges, as seen from the various viewing directions, agrees with the input data. The Dempster-Shafer theory of evidence is used to rank the enclosures. There are three categories of evidence: confirming evidence, disconfirming evidence, and uncommitted evidence. The three sum to 1.0, and all of the evidence is initially uncommitted. A certainty factor is computed by calculating the difference between confirming and disconfirming evidence.
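The C sketch below shows one way to keep this three-way bookkeeping. It is a deliberate simplification (masses are simply shifted out of the uncommitted pool rather than combined by the full Dempster-Shafer rule), and all names are our own.

/* Hypothetical evidence record: the three masses always sum to 1.0. */
typedef struct { double conf, disc, uncom; } Evidence;

/* Commit a weight w of the uncommitted mass as confirming (sign > 0)
   or disconfirming (sign < 0) evidence.                              */
static void commit(Evidence *e, double w, int sign)
{
    if (w > e->uncom) w = e->uncom;   /* cannot commit more than remains */
    e->uncom -= w;
    if (sign > 0) e->conf += w; else e->disc += w;
}

static double certainty(const Evidence *e)
{
    return e->conf - e->disc;         /* ranges from -1 to +1 */
}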
The rules of evidence, along with their weights, are given below. While these heuristic rules are used to speed up the interpretation process and avoid a brute-force evaluation of all possible enclosures, they are by no means set in stone; their values could deviate substantially from those given without affecting the interpretation of many drawings. However, the weights given below have been tested experimentally and yield good results. Note that many of these rules are dependent upon whether or not an edge is real and can be seen from a particular direction, and whether or not it is directly visible from that direction.
Rule EV1: Each enclosure receives a moderate amount of confirming evidence, since it has at least passed the geometric constraints described in the previous section. (0.2)

Rule EV2: For each straight edge that is seen endwise from some viewing direction, where the edge maps to a node point in the corresponding view, add a very small amount of confirming evidence. (0.001)
Rule EV3: For each edge of the enclosure and each viewing direction, if that edge should be seen as a solid line in that viewing direction and there is no corresponding line in the input data for that view, add a moderate amount of disconfirming evidence. While it may seem that such a case should be rejected outright, this rule actually accommodates incorrectly drawn engineering drawings that can still be interpreted by the system. (0.3)
Rule EV4: For each edge of the enclosure and each viewing direction, if that edge should be seen as a dashed line in that viewing direction but there is no corresponding line in the input view, add a moderate amount of disconfirming evidence. (0.2)
Exception: If the edge of Rule EV4 is straight, maps to a node with two incident lines in some other view, and is seen as a line (either solid or dashed) in a third view, then the disconfirming evidence is reduced to a very small amount. (0.02)
Rule EV5: If there are n lines in a drawing (all views) and the edges of the enclosure correctly correspond to r of these, then add confirming evidence equal to 0.4 × r/n.
Rule EV6: If the enclosure is complete (all edges are closed) and if, of the n lines in all of the views, there are w that correspond to some edge but are of the wrong visibility and e for which there is no corresponding edge, then add disconfirming evidence equal to 5 × (e + w)/n (but not to exceed 1.0).
Rule EV7: When an enclosure is expanded at an open edge, add a very small amount of disconfirming evidence to the old enclosure (0.001), in order to allow the new enclosure to be expanded further.

Rule EV8: When an enclosure is expanded at an open edge and either visible or hidden faces have been added at that edge, check to see if there is a new enclosure with a greater certainty factor. If there is none, then add a moderate amount of disconfirming evidence to the current enclosure (0.2). Also add disconfirming evidence for each parent enclosure, reducing the amount by two-thirds at each upper level of the tree.

Rule EV9: If all the open edges of a partial enclosure have been expanded with visible and hidden faces, then set the certainty for this enclosure to −1.0 (i.e., all evidence disconfirming). The enclosure cannot be expanded further, which implies that it is either complete or incorrect. Figure 7 shows how a complete enclosure and solution is found.
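Rules EV5 and EV6 are the only quantitative formulas in the list; assuming n, r, w, and e are available as integers, they reduce to the following arithmetic (a direct transcription, with function names of our own choosing):

/* Rule EV5: confirming evidence for r of n lines correctly matched. */
static double ev5_confirming(int r, int n)
{
    return 0.4 * (double)r / (double)n;
}

/* Rule EV6: disconfirming evidence for w wrong-visibility lines and
   e unmatched lines, out of n, capped at 1.0.                        */
static double ev6_disconfirming(int w, int e, int n)
{
    double d = 5.0 * (double)(e + w) / (double)n;
    return d > 1.0 ? 1.0 : d;
}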
4.5 Constructing visible faces
There are two procedures for constructing faces that are added to an enclosure: one for faces which are visible in a view, and another for faces that are hidden. The difference between the two is simple: a visible area can be used as a sort of template for the generation of a visible face, while there is no template available for hidden faces. In addition, the inside/outside orientation of visible faces is already known, since the outside is visible from the viewing direction. The procedure for constructing hidden faces is described in detail in the next section.
It is important to note that when a visible area is chosen to construct a visible face, several visible faces may actually be generated in the process. Each of these spawns a new enclosure. In the case of non-polyhedral objects, a visible area may generate a set of visible faces (e.g., a half sphere viewed from above looks like only one visible area, but it actually consists of four visible faces separated by four virtual edges).
Consider the procedure for constructing a visible face given a designated starting edge. In each view where the edge corresponds to some line, a 2D area adjacent to that line can be identified. The other lines that bound the area can then be used to select edges for the boundary of the new faces, thus guaranteeing that the face is minimal. Once there are enough edges to define a surface, subsequent edges needed for the completion of the boundary must also lie in that surface. This condition can be used to prune the list of possible faces and avoid a combinatorial explosion. Recursive C pseudo-code for this routine is shown in Fig. 10.

Fig. 10. Pseudo-code routine for the generation of visible faces

Fig. 11. Pseudo-code routines for the generation of hidden faces
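Since the Fig. 10 pseudo-code itself is not reproduced in this text, the following is only a loose C sketch of the recursion described above; every type and helper name is our own assumption, not the paper's routine.

/* Hypothetical partial-face state; the helpers are declared, not defined. */
typedef struct FaceBuild FaceBuild;

extern int  surface_defined(const FaceBuild *fb);
extern int  boundary_closed(const FaceBuild *fb);
extern int  candidate_edges(const FaceBuild *fb, int at_vertex,
                            int *out, int max);   /* bounded by the area's lines */
extern int  lies_in_surface(const FaceBuild *fb, int edge);
extern int  other_end(const FaceBuild *fb, int edge);
extern void push_edge(FaceBuild *fb, int edge);
extern void pop_edge(FaceBuild *fb);
extern void emit_face(const FaceBuild *fb);       /* spawns a new enclosure */

static void grow_visible_face(FaceBuild *fb, int at_vertex)
{
    if (boundary_closed(fb)) { emit_face(fb); return; }
    int cand[16];
    int n = candidate_edges(fb, at_vertex, cand, 16);
    for (int i = 0; i < n; ++i) {
        if (surface_defined(fb) && !lies_in_surface(fb, cand[i]))
            continue;                  /* prune: must stay in the surface */
        push_edge(fb, cand[i]);
        grow_visible_face(fb, other_end(fb, cand[i]));
        pop_edge(fb);                  /* backtrack */
    }
}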
4.6 Generating hidden faces
While the process of generating hidden faces is conceptually simpler than that used to create visible faces, two differences remain. First, there is no visible area to act as a template when selecting edges for the boundary of the face. This means that all edges must be considered at each vertex, as long as the surface has not yet been uniquely defined. Second, the hidden face must be minimal and cannot contain smaller faces in the same surface. This is accomplished by further restricting the choice of edges which can be added to the boundary after the surface has been uniquely defined.
Given an open edge, the generation of hidden faces is accomplished in two steps. The first consists of defining a surface using a pair of edges, one being the open edge and the other being an incident edge at one of the vertices of the open edge. If a unique surface cannot be defined using only those two edges, additional edges are added until a unique surface is obtained. The face is then completed by recursively adding the leftmost edge, as seen from outside the object, that lies in the surface. Using the leftmost edge assures that the face which is constructed is minimal, as described above.

Figure 11 contains the C pseudo-code for the two separate steps of the hidden face generation process. The routine define_hidden_surface finds the two edges required to define a face, while complete_hidden_face generates the face from the two aforementioned edges.
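As Fig. 11 is likewise not reproduced, the fragment below continues the earlier sketch (same hypothetical FaceBuild type and helpers) to show the shape of the second step; define_hidden_surface is only declared, since its body depends on the surface-fitting details.

extern int define_hidden_surface(FaceBuild *fb, int open_edge);  /* step 1 */
extern int leftmost_edge_in_surface(const FaceBuild *fb, int at_vertex);

/* Step 2: close the boundary, always taking the leftmost in-surface
   edge as seen from outside the object, so the face is minimal.     */
static int complete_hidden_face(FaceBuild *fb, int at_vertex)
{
    while (!boundary_closed(fb)) {
        int e = leftmost_edge_in_surface(fb, at_vertex);
        if (e < 0) return 0;          /* dead end: no face can be built here */
        push_edge(fb, e);
        at_vertex = other_end(fb, e);
    }
    return 1;
}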
5 Experimental results
A wide variety of engineering drawings have been interpreted by this system, including drawings with odd view layouts and moderately complex features. The following figures consist of the scanned engineering drawings which were fed to the system and the 3D wireframe objects which were then generated by it. Figure 15 shows various additional objects which were successfully interpreted.
Fig. 12. a Engineering drawing and b correctly interpreted cube

Figure 12 contains a drawing of a very simple cube. However, the view layout is unconventional, since the root view is actually the rightmost view of the drawing. It is adjacent to the auxiliary view and the front view, even though it seems that the auxiliary view should be adjacent to the front view. This unconventional view layout is still recognized correctly by the system, and the complete cube's wireframe is generated. This example demonstrates that the system can correctly interpret drawings without any prior knowledge of the view arrangement or the number of views in the drawing.
Fig. 13. a Engineering drawing and b correctly interpreted object

Figure 13 consists of a moderately intricate drawing with four views. The middle view is obviously the root view, while the two rightmost views are auxiliary views. The 3D object consists of a block with angled sides and a large thru-slot cut across the front. This example illustrates the system's ability to make use of auxiliary views during the vertex and enclosure creation processes. Since dashed lines are missing from the left-side view, the system is required to use the information provided in the top-right auxiliary view to correctly interpret the object.
Fig. 14. a Engineering drawing and b correctly interpreted object

Figure 14 contains the engineering drawing of an object with both spherical and cylindrical surfaces. The lower-left portion of the object consists of half of a cylinder, while the lower-right portion consists of a quarter sphere. Both surface types are correctly interpreted.
CPU times were obtained for the above drawings using an Intel 486 50 MHz machine with 16 MB of memory. The drawings in Figs. 12 and 14 required approximately 20 seconds to process (interpretation phases only, excluding pre-processing of the scanned image), while the drawing in Fig. 13 required approximately 180 seconds. All objects in Fig. 15 required 30 seconds, plus or minus 15%.

Fig. 15. Other interpreted objects, including the object in Fig. 2 (top left)
6 Conclusion
The examples above demonstrate this system's ability to interpret a wide variety of engineering drawings with variable view layouts and different surface types. Drawings containing many hidden faces are also correctly interpreted. Minor modifications to this system could provide the capability to output multiple possible interpretations of the same drawing, instead of only the most likely one [12]. However, the system in its current state cannot interpret certain types of objects. These are listed below:
1. Objects with very thin sections, such as objects made of sheet metal. The system allows for slight misalignments between views, so incorrect node matches and vertices may be generated when line junctions are too close together.
2. Objects represented by inconsistent views. Drawings with missing lines in certain views may still be interpreted if the missing line junctions are present in another view.
3. Objects with complex surfaces that are neither spherical nor cylindrical. These objects cannot be interpreted, since only straight, circular, and elliptical line segments can be used as input to the system. (Filleted surfaces can be interpreted, since silhouette nodes are created.)
While the third limitation somewhat restricts the types of objects that can be interpreted, there is no inherent problem with expanding the system to accept more complex surfaces. Rules and geometric constraints for such surfaces simply have not been devised or implemented at this time.
References
1. ANSI Y14.5: Dimensioning and Tolerancing; ANSI Y14.2M: Line Conventions and Lettering. The American Society of Mechanical Engineers, New York, 1982
2. N. Badler, R. K. Bajcsy: Three dimensional representations for computer graphics and computer vision. Computer Graphics, 12(3):153–160, 1978
3. R. Bajcsy, F. Solina: Three-dimensional object representation revisited. Proc. International Conference on Computer Vision, June 1987, pp. 264–291
4. D. Dori: A syntactic/geometric approach to recognition of dimensions in engineering drawings. Computer Vision, Graphics, and Image Processing, 47:271–291, 1989
5. G. Gallus, P. W. Neurath: Improved computer chromosome analysis incorporating preprocessing and boundary analysis. Phys. Med. Biol., 15(3):435–445, 1970
6. C. P. Lai: Knowledge-based understanding of engineering drawings. Ph.D. Thesis, Dept. of Electrical and Computer Engineering, The Pennsylvania State University, 1993
7. C. P. Lai, R. Kasturi: Detection of dimension sets in engineering drawings. IEEE Trans. on Pattern Analysis and Machine Intelligence, 16(8):848–854, 1994
8. R. Lequette: Automatic construction of curvilinear solids from wireframe views. Computer-Aided Design, 20(4), 1988
9. D. B. Lysak, R. Kasturi: Interpretation of engineering drawings of polyhedral and non-polyhedral objects. Proc. of the 1st International Conference on Document Analysis and Recognition, Saint-Malo, France, pp. 79–87, 1991
10. D. B. Lysak, R. Kasturi: Interpreting the views in an engineering drawing. Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 598–599, 1993
11. D. B. Lysak, P. M. Devaux, R. Kasturi: View labeling for automated interpretation of engineering drawings. Pattern Recognition, 28(3):393–407, 1994
12. H. Masuda, M. Numao: A cell-based approach for generating solid objects from orthographic projections. Computer-Aided Design, 29(3):177–187, 1997
13. V. Nagendra, U. G. Gujar: Computers & Graphics, 12(1):111–114, 1988
14. L. O'Gorman: An analysis of feature detectability from curvature estimation. Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 235–240, 1988
15. K. Preiss: Constructing the solid representation of engineering projections. Computers and Graphics, 8(4):381–389, 1984
16. A. A. G. Requicha: Representations for rigid solids: theory, methods, and systems. ACM Computing Surveys, 12(4), 1980
17. C. H. Teh, R. T. Chin: On the detection of dominant points on digital curves. IEEE Trans. on Pattern Analysis and Machine Intelligence, 11(8):859–872, 1989
18. M. A. Wesley, G. Markowsky: Fleshing out projections. IBM J. Res. Develop., 25:934–953, 1981
19. S. Xie, T. W. Calvert: CSG-EESI: a new solid representation scheme and a conversion expert system. IEEE Trans. on Pattern Analysis and Machine Intelligence, 10(2):221–234, 1988
Pierre Devaux is currently a Senior Applications Engineer with Dassault Systèmes, a leading CAD/CAM/CAE software company based in Paris, France. He specializes in a 2.5D drafting package that enables the conversion of scanned paper drawings to parameterized, intelligent vector elements for further conversion to 3D solids. He attended Clarkson University in Potsdam, NY, where he received a BSc degree with distinction in Computer Engineering. He received his MSc in Electrical Engineering from the Pennsylvania State University in 1995. His research interests include 2D-to-3D conversion, image processing, and pattern recognition.

Daniel B. Lysak is with the Manufacturing Systems Department at the Penn State University Applied Research Lab, where he is involved with electro-optics metrology and manufacturing techniques. His research interests include image understanding, 3D interpretation, and automated process control, as well as spectroscopy and lidar remote sensing. He received the BSEE degree from Lehigh University in 1965, the MSEE degree from Syracuse University in 1972, and the PhD degree in computer engineering from Penn State in 1991.

Rangachar Kasturi is a Professor in the Computer Science and Engineering Department of the Pennsylvania State University. He joined Penn State in 1982 after completing his graduate studies at Texas Tech University (Ph.D., 1982 and M.S.E.E., 1980). He received a B.E. (Electrical) degree from Bangalore University in 1968. Before entering graduate school he worked as a research and development engineer with several companies in India for ten years. He was the Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995–98, and of the Machine Vision and Applications journal, 1993–94. He is a coauthor of the textbook Machine Vision, coeditor of the tutorial texts Computer Vision: Principles and Applications and Document Image Analysis, and a coeditor of the books Image Analysis Applications and Graphics Recognition: Methods and Applications. He has directed many projects in document image analysis, object detection, image sequence analysis, and video indexing. His earlier work in image restoration resulted in the design of many optimal filters for recovering images corrupted by signal-dependent film-grain noise. He is a Fellow of the IEEE and a Fellow of the International Association for Pattern Recognition.
... Notons que si l'information taille est effectivement exploitable dans tous les cas, le second point concernant le regroupement en chaînes peut, quant à lui, paraître discutable, en particulier sur les documents de notre étude puisque les caractères isolés sont fréquents. C'est pourquoi, comme , Fletcher 1988, Lu 1998, Shimotsuji 1992, Lai 1994, Devaux 1999, Langrana 1997 Le matching d'image : une telle approche consiste à comparer directement l'image inconnue à une base d'images préalablement étiquetées, en utilisant une distance parmi les différentes distances entre images ayant été développées dans la littérature [Di Gesù 1999] [Klette 1987]. Si cette approche est très fréquemment utilisée dans le cas de systèmes d'OCR dédiés à la reconnaissance de documents structurés, en s'affranchissant d'éventuels problèmes de taille par une normalisation, elle devient problématique lorsque la contrainte de multi-orientation intervient. ...
Article
This thesis tackles the problem of technical document interpretation applied to France Telecom Documentation. This subject is on the crossroad of different fields like signal or image processing, pattern recognition, artificial intelligence, man-machine interaction and knowledge engineering. Indeed, each of these different fields can contribute to build a reliable and efficient document interpretation device. In this interdisciplinary context, this thesis is divided in two main parts. The first part is considering an original method used to detect and recognise multi-scaled and multi-oriented patterns like symbols or characters. The theoretical basis of this method is given by the Fourier-Mellin transform. It allows recognising isolated patterns but also, in some cases, connected patterns. The approach also allows the estimation of shape's movement parameters. Tools that have been developed in this context are evaluated regarding the state of the art in optical characters recognition. Obtained results with this original method are really competitive. The second part is focusing the theme of technical document analysis under the point of view of knowledge engineering. The aim is to show the feasibility and relevance of a "knowledge based approach" in the context of technical document interpretation. An external and explicit knowledge model, a distributed agent-based software architecture and several user interfaces give the main concepts of this approach. A first implementation using these concepts is shown through a presentation of a system named "NATALI v2". This implementation has good reliability and adaptability properties.
... Graphical symbols are generally 2D-graphical shapes, including their composition in the highest level of conceptual information. Overall, it plays a crucial role in a variety of applications such as automatic interpretation and recognition of circuit diagrams (10; 11), engineering drawings and architectural drawings (12)(13)(14)(15), line drawings (16), musical notations (17), maps (18), mathematical expressions (19), and optical characters (20)(21)(22)(23). Graphics is often combined with text, illustration, and color. ...
Chapter
The chapter focuses on one of the key issues in document image processing i.e., graphical symbol recognition. Graphical symbol recognition is a sub-field of a larger research domain: pattern recognition. The domain covers several approaches (i.e., statistical, structural and syntactic) and specially designed symbol recognition techniques inspired by real-world industrial problems. The chapter, in general, contains research problems, state-of-the-art methods that convey basic steps as well as prominent concepts or techniques and research standpoints/directions that are associated with graphical symbol recognition.
... In addition to quantity surveying, the recognition is also useful for other applications, such as 4D modeling, virtual reality, and graphical retrieval system. We note that there has been extensive research on the recognition and 3D reconstruction of mechanical parts from engineering drawings [6] [7] [8] [9] [10] [11] [12] [13]. Due to the differences between engineering and architecture drawings, we will conclude these methods are not suitable for our intended problem [1] [2]. ...
Article
Current methods for recognition and interpretation of architectural drawings are limited to either low-level analysis of paper drawings or interpretation of electronic drawings that depicts only high-level design entities. In this paper, we propose a Self-Incremental Axis-Net-based Hierarchical Recognition (SINEHIR) model for automatic recognition and interpretation of real-life complex electronic construction structural drawings. We design and implement a series of integrated algorithms for recognizing dimensions, coordinate systems and structural components. We tested our approach on more than 200 real-life drawings. The results show that the average recognition rate of structural components is about 90%, and the computation time is significantly shorter than manual estimation time.
Article
Full-text available
Digital transformation is omnipresent in our daily lives and its impact is noticeable through new technologies, like smart devices, AI-Chatbots or the changing work environment. This digitalization also takes place in product development, with the integration of many technologies, such as Industry 4.0, digital twins or data-driven methods, to improve the quality of new products and to save time and costs during the development process. Therefore, the use of data-driven methods reusing existing data has great potential. However, data from product design are very diverse and strongly depend on the respective development phase. One of the first few product representations are sketches and drawings, which represent the product in a simplified and condensed way. But, to reuse the data, the existing sketches must be found with an automated approach, allowing the contained information to be utilized. One approach to solve this problem is presented in this paper, with the detection of principle sketches in the early phase of the development process. The aim is to recognize the symbols in these sketches automatically with object detection models. Therefore, existing approaches were analyzed and a new procedure developed, which uses synthetic training data generation. In the next step, a total of six different data generation types were analyzed and tested using six different one- and two-stage detection models. The entire procedure was then evaluated on two unknown test datasets, one focusing on different gearbox variants and a second dataset derived from CAD assemblies. In the last sections the findings are discussed and a procedure with high detection accuracy is determined.
Chapter
This chapter discusses a detailed study on several different (but, major) structural approaches for graphical symbol recognition, retrieval, and spotting. It first provides a quick review of the common methods used in both approaches. In this framework, a comprehensive idea on graph-based graphical symbol recognition techniques is explained, where the use of spatial relations is focused. In other words, effect of spatial relations (under the purview of graph-based pattern recognition) is analyzed by taking a series of tests on graphical symbol recognition, retrieval, and spotting.
Chapter
Following the discussion we have made in Chap. 1, in this chapter, graphics recognition: graphical symbol recognition, retrieval and spotting will be discussed. It first provides a clear definition of graphical symbols and tells us where does graphics recognition lie in the DIA. It also extends its discussion to graphics recognition contests (major competitions, organized by international scientific committee in cooperation with the international association for pattern recognition (IAPR)) that have been happening since 90s to see whether the real-world problems have been addressed. Not stopping there, it also quickly explains research standpoints from the author’s point of view.
Conference Paper
One of the key difficulties in the graphics recognition domain is working with complex and composite symbol recognition, retrieval, and spotting. This paper gives a quick overview of complex and composite symbol recognition, inspired by a real-world industrial problem. Treating it as a pattern recognition problem, three different approaches are considered: statistical, structural, and syntactic. The paper covers fundamental concepts and techniques as well as research standpoints and directions derived from a real-world application.
Chapter
This chapter is dedicated to the analysis and interpretation of graphical documents and, as such, builds upon many of the topics covered in other parts of this handbook. It therefore does not focus on the technical issues related to graphical documents, such as low-level filtering and binarization, primitive extraction and vectorization as developed in Chaps. 4 (Imaging Techniques in Document Analysis Processes) and 15 (Graphics Recognition Techniques), or symbol recognition as developed in Chap. 16 (An Overview of Symbol Recognition). Instead, these tools are put in a broader framework and threaded together in complex pipelines to solve interpretation questions. This chapter provides an overview of how analysis strategies have contributed to constructing these pipelines, how specific domain knowledge is integrated into these analyses, and which interpretation contexts have contributed to successful approaches.
Article
In this paper, the problem of 3D object model reconstruction from engineering drawing (ED) projections is analysed and its main stages are described. Image vectorisation and entity recognition are mentioned briefly; the main focus is on the editing, or parameterisation, of vectorised drawings and on 3D object model reconstruction from vectorised ED projections. Vectorised drawings, as a rule, do not exactly reproduce the dimensions and other features (touching, parallelism, perpendicularity, symmetry, collinearity, etc.) present in the initial drawing, so the ED vector model is not suitable for direct use in CAD systems. For this reason, the parameterisation stage is introduced and considered in detail. An algorithm for 3D object reconstruction from the vectorised and parameterised drawing is proposed. The algorithm is based on detecting volumetric solid-state object components (primitives) and performing set-theoretic operations on these components. Practical experience in implementing these stages is reported.
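As a rough illustration of the set-theoretic step, the following Python sketch models primitives as point-membership predicates and combines them with Boolean operations; this is a toy stand-in for the paper's solid components, with illustrative names throughout.

```python
def sphere(cx, cy, cz, r):
    """Point-membership test for a solid sphere."""
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

def box(x0, y0, z0, x1, y1, z1):
    """Point-membership test for an axis-aligned box."""
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def union(a, b):      return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersect(a, b):  return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b): return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A plate with a spherical pocket, probed at two points.
part = difference(box(0, 0, 0, 10, 10, 2), sphere(5, 5, 1, 1.5))
print(part(5.0, 5.0, 1.0))  # False: inside the pocket
print(part(1.0, 1.0, 1.0))  # True: solid material
```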
Article
Full-text available
Representing complex three-dimensional objects in a computer involves more than just evaluating display capabilities. Other factors are the uses and costs of the representation, what operations can be performed on it and, ultimately, how useful it is for computer recognition or description of three-dimensional objects. Many of the questions posed arise from the joint consideration of computer graphics and computer vision, and a specific representation hierarchy is proposed for complex objects which makes them amenable to display, manipulation, measurement, and analysis.
Article
Full-text available
In an earlier paper, the authors presented an algorithm for finding all polyhedral solid objects with a given set of vertices and straight-line edges (its wire frame). This paper extends the Wire Frame algorithm to find all solid polyhedral objects with a given set of two-dimensional projections. These projections may contain depth information in the form of dashed and solid lines, may represent cross sections, and may be overall or detail views. The choice of labeling conventions in the projections determines the difficulty of the problem. It is shown that with certain conventions and projections the problem of fleshing out projections essentially reduces to the problem of fleshing out wire frames. Even if no labeling is used, the Projections algorithm presented here finds all solutions, even though it is possible to construct simple examples with a very large number of solutions. Such examples have a large amount of symmetry and various accidental coincidences which typically do not occur in objects of practical interest. Because of its generality, the algorithm can handle such pathological cases if they arise. This Projections algorithm, which has applications in the conversion of engineering drawings in a computer-aided design / computer-aided manufacturing (CADCAM) system, has been implemented. It has successfully found solutions to problems that are rather complex in terms of either the number of possible solutions or the inherent complexity of projections of objects of engineering interest.
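The flavor of fleshing out projections can be suggested with a small sketch: the first step in this family of algorithms is to back-project matching 2D vertices into candidate 3D vertices, which later stages prune. The view conventions and tolerance below are assumptions, and the over-generation of candidates is deliberate.

```python
def candidate_vertices(front_view, top_view, tol=0.5):
    """front_view: (x, y) vertices; top_view: (x, z) vertices.
    Returns candidate 3D vertices (x, y, z) wherever the shared x
    coordinate agrees within tol; later stages must prune spurious ones."""
    candidates = []
    for fx, fy in front_view:
        for tx, tz in top_view:
            if abs(fx - tx) <= tol:
                candidates.append(((fx + tx) / 2.0, fy, tz))
    return candidates

# Two views of a segment from (0, 0, 2) to (10, 5, 7):
print(candidate_vertices([(0, 0), (10, 5)], [(0, 2), (10, 7)]))
```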
Article
An algorithm is presented that finds all solids consistent with a set of two or three two-dimensional drafting views, a problem with many applications in CADCAM. The views are orthogonal projections of the wireframe, with hidden parts not removed. The solids must be bounded by planar, cylindrical, conical, or toroidal surfaces. Axes of cylindrical, conical, or toroidal surfaces must be parallel to the directions of projection, and the surfaces must intersect in straight lines or circles. The algorithm first builds an intermediate three-dimensional wireframe; a heuristic is then used to find edges between tangent surfaces that are usually not drawn. Faces are built from this wireframe, and the algorithm sorts them to construct a solid.
Article
A method is presented for segmenting engineering drawings into views and identifying the corresponding viewpoints. A set of 2.5D view-based coordinate systems is introduced as an intermediate between the 2D drawing-based system and the 3D object-based coordinates, and a formal technique is developed for constructing transformation matrices between coordinates. The method accommodates auxiliary views in addition to the standard orthogonal set, and the number of views and their positions need not be known a priori. Drawings with moderate errors in line placement and view alignment can also be handled. A rule-based approach, using evidential reasoning, is applied for labeling the views.
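A minimal sketch of the evidential-reasoning machinery follows, assuming Dempster's rule of combination over a small set of view labels; the labels and mass values are illustrative and are not taken from the paper's actual rule base.

```python
def combine(m1, m2):
    """Dempster's rule over mass functions keyed by frozensets of labels."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    # Normalize out the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

labels = frozenset({"front", "top", "side"})
# Rule 1: alignment evidence favors "front"; rule 2 rules out "top".
m1 = {frozenset({"front"}): 0.6, labels: 0.4}
m2 = {frozenset({"front", "side"}): 0.7, labels: 0.3}
print(combine(m1, m2))  # mass concentrates on {"front"}
```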
Article
The generation of a solid 3D object from given 2D orthographic views has been studied for a number of years. In this paper we comment on the eleven papers published on this topic between 1973 and 1984. Relevant features of these algorithms are given in a comprehensive table, and a categorization tree based upon their various capabilities is included.
Article
One method of interest when creating a solid model in the computer is to construct the solid data from engineering projections. This paper presents that construction process as a problem of heuristic search for a set of consistent constraints on the solution space, and shows results of applying the method to bodies with planar and cylindrical faces.
Article
An algorithm for recognizing dimensions in engineering machine drawings that employs a syntactic/geometric approach along with a specific deterministic finite automaton (DFA) is presented and demonstrated. First, the problem of distinguishing object lines from interpretation lines is addressed. Then, dimension-sets, their components, kinds, and types are defined and illustrated. A DFA called the dimension-set profiler is used to determine the profile of a given dimension-set. The resulting profile is used to obtain a sketch which is parsed by the dimensioning grammar to yield a conceptual web. This web, in turn, is converted into a geometric web by substituting the components labeling its nodes with their line descriptions. These line descriptions are compared to the lines in the actual dimension-set; a certain degree of redundancy is introduced to ascertain valid recognition.
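To give the idea of the dimension-set profiler, here is a toy DFA in Python that accepts component sequences such as arrowhead, line, text, line, arrowhead; the state table and component alphabet are illustrative and far simpler than the actual profiler.

```python
# Transition table: (state, component) -> next state.
TRANSITIONS = {
    ("start", "arrowhead"): "head1",
    ("head1", "line"): "shaft1",
    ("shaft1", "text"): "text",
    ("text", "line"): "shaft2",
    ("shaft2", "arrowhead"): "accept",
}

def profile(components):
    """Run the DFA; True iff the component sequence is a valid profile."""
    state = "start"
    for c in components:
        state = TRANSITIONS.get((state, c))
        if state is None:
            return False
    return state == "accept"

print(profile(["arrowhead", "line", "text", "line", "arrowhead"]))  # True
print(profile(["arrowhead", "text", "line"]))                       # False
```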
Article
This paper describes an efficient method for converting orthographic projections into solid models based on non-manifold topology and an assumption-based truth maintenance system (ATMS), and then describes an error recovery method for incorrect orthographic projections. The combination of non-manifold modelling and the ATMS achieves excellent performance on conversion problems. In this method, all solid candidates are maintained in a cellular model using non-manifold topology. Since a combination of cells in the cellular model determines the shape of a solid, solid models that match the orthographic projections can be derived by solving constraints between the cells and the projections. A sufficient set of constraints can be expressed as Boolean equations, which the ATMS solves efficiently. The method can even be applied to incorrect draftings: in actual design, many draftings contain human errors that conventional methods cannot handle. Methods are discussed for detecting inconsistent lines that should not have been included, and missing lines that should have been included.
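The cell-selection idea can be caricatured in a few lines of Python: each cell is either in or out of the solid, and an assignment survives only if the union of chosen cells reproduces the projection. The brute-force enumeration below is only for illustration; the ATMS solves the corresponding Boolean equations far more efficiently, and the cells and projection here are toy stand-ins for the non-manifold model.

```python
from itertools import product

# Each cell of the (toy) cellular model maps to the set of projection
# pixels it would cover; the target is the drawing's actual projection.
cells = {"c1": {(0, 0), (0, 1)}, "c2": {(1, 0)}, "c3": {(1, 1)}}
target_projection = {(0, 0), (0, 1), (1, 1)}

def solutions(cells, target):
    """Brute-force stand-in for the ATMS: yield every subset of cells
    whose union of covered pixels exactly matches the projection."""
    names = list(cells)
    for bits in product([False, True], repeat=len(names)):
        chosen = [n for n, b in zip(names, bits) if b]
        covered = set().union(*(cells[n] for n in chosen))
        if covered == target:
            yield chosen

print(list(solutions(cells, target_projection)))  # [['c1', 'c3']]
```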
Article
At the heart of such systems are symbol structures (representations) designating "abstract solids" (subsets of Euclidean space) that model physical solids. Representations are the sources of data for procedures which compute useful properties of objects. The variety and uses of systems embodying representations of solids are growing rapidly, but so are the difficulties in assessing current designs, specifying the characteristics that future systems should exhibit, and designing systems to meet such specifications. This paper resolves many of these difficulties by providing a coherent view, based on sound theoretical principles, of what is presently known about the representation of solids. The paper is divided into three parts. The first introduces a simple mathematical framework for characterizing certain important aspects of representations, for example, their semantic (geometric) integrity. The second part uses the framework to describe and compare all of the major known schemes for representing solids. The third part briefly surveys extant geometric modeling systems and then applies the concepts developed in the paper to the high-level design of a multiple-representation geometric modeling system which exhibits a level of reliability and versatility superior to that of systems currently used in industrial computer-aided design and manufacturing. Keywords and Phrases: CAD/CAM, computational geometry, computer graphics, design