0-7803-8185-8/03/$17.00 © 2003 IEEE

A Novel Boolean Self-Organization Mapping Based on Fuzzy

Geometrical Expansion

Narendra S. Chaudhari1 and D. Wang2

School of Computing Engineering, Block N4-2a-32, Nanyang Technological University, Singapore 639798

Emails: 1: asnarendra@ntu.edu.sg; 2: wangdi_2001@hotmail.com

Abstract

We propose a novel Self-Organization Mapping algorithm

for Boolean neural networks (BSOM) based on geometrical

expansion. Our proposed BSOM algorithm possesses

generalization capability. Compared with traditional Self-Organization Mapping (SOM) algorithms, the BSOM algorithm is based on geometrical expansion, not gradient descent. The BSOM algorithm memorizes more vectors in a hidden neuron, rather than only an exemplar at the center of a SOM cell. Finally, the BSOM algorithm needs fewer iterations and simpler training equations. Test results are given on simple Boolean functions and on a randomly generated Boolean function with 10 variables.

Key words: Self-Organizing Map (SOM), Binary neural

network (BNN), geometrical learning, Expand and

Truncate Learning (ETL), Boolean SOM (BSOM).

1. Introduction

Binary neural networks (BNNs) have attracted great

attention in the recent past. One reason for the popularity of BNNs is that many BNN construction methods use integral weight values, which makes them attractive for VLSI designs. They have found applications in data mining and classification.

Many novel training algorithms have been proposed in the

last few years for binary neural nets. Donald L. Gray and

Anthony N. Michel devised Boolean-Like Training

Algorithm (BLTA) in 1992 [1] based on original principles

of Boolean algebra with extension. BLTA does well in

memorization and generalization, but too many hidden

neurons are needed. Jung Kim and Sung Kwon Park

proposed Expand-and-Truncate Learning (ETL) algorithm

based on geometrical concepts in 1995 [2]. Their

algorithm is mainly based on the concept of expanding the

“set of included true vertices” (SITV) using conversion of

false vertices to true vertices and vice versa, at appropriate

stages of their algorithm. When compared with the

traditional backpropagation method of constructing neural

networks, ETL uses less training time and guarantees

convergence. Atsushi Yamamoto and Toshimichi Saito

improved ETL by modifying some vertices in SITV as

“don’t care” [3]; they called their method improved ETL (IETL). Fewer neurons are needed in IETL. ETL and IETL begin with selecting a true vertex as the core vertex for SITV. In both these methods, the number of hidden neurons needed depends on the choice of the core vertex and the order in which vectors pass through the network during training. In addition, ETL and IETL need to search many training pairs to determine each neuron in the hidden layer. Ma Xiaomin introduced the idea of the weighted

Hamming distance hyper-sphere in 1999 [4], which

improved the representation ability of each hidden neuron,

hence improved the learning ability of binary neural

networks. In his later research in 2001 [5], based on the

idea of weighted Hamming distance hyper-sphere, he

proposed Constructive Set Covering Learning Algorithm

(CSCLA). In CSCLA, each input-output pair passes through the net once in the training process, so CSCLA can be used for on-line learning. However, CSCLA only includes vertices at Hamming distance one from the core in a hidden neuron, not vertices at Hamming distance greater than one, so the resulting neural networks are not simple and effective enough. D. Wang and N. S. Chaudhari

proposed Multi-Core Learning (MCL) algorithm in 2003

[6,7] which begins with several core vertices. MCL needs

fewer hidden neurons than BLTA, ETL, IETL and CSCLA,

and gives simpler equations to compute the values of

weights and thresholds.

In general, ETL, IETL and CSCLA have no generalization ability, while BLTA needs more hidden neurons. S.

Gazula and M.R. Kabuka [8,9] designed two supervised

pattern classifiers based on Boolean neural networks:

nearest-to-an-exemplar (NTE) classifier and Boolean

k-nearest neighbor (BKNN) classifier, both of which have

the ability of generalization. These two classifiers use the

idea of Radius Of Attraction (ROA) based on geometrical

concept. The training is implemented by memorization of

the training vectors (exemplars) and generalization is

implemented by the ROAs.

All of the above algorithms belong to supervised learning. Current unsupervised learning algorithms are based on gradient descent, which needs many iterations, resulting in long training times. Little work has been done on the design of efficient Self-Organization Mapping (SOM) algorithms, especially for Boolean mappings.

In this paper, we propose a novel Boolean SOM (BSOM)

algorithm based on geometrical expansion. Compared with

ETL, IETL and CSCLA, BSOM algorithm has the ability of

generalization; contrasting with traditional SOM algorithms,

BSOM algorithm is based on geometrical expansion, not

gradient descent. The BSOM algorithm memorizes more vectors in the training process, not only exemplars at the centers of SOM cells; finally, the BSOM algorithm needs fewer iterations and simpler training equations.

2. Preliminaries

A set of 2^n binary patterns in {0, 1}^n, each with n bits, can be considered as the vertices of an n-dimensional unit hypercube (the variable space). Each pattern is located on one vertex of the

2C3.7, ICICS-PCM 2003, 15-18 December 2003, Singapore


hypercube. We can obtain the circumscribing hypersphere of this unit hypercube, with radius √n/2. All patterns lie on the surface of this hypersphere, which is defined as the reference hypersphere (RHS) by Kim and Park [2]. We

can separate a subset of these 2n patterns by an

(n-1)-dimensional hyperplane if this subset is linearly

separable.

A subset that is linearly separable is defined as follows: if

there exists an (n-1)-dimensional hyperplane, such that any

vector in the subset lies on one side of the hyperplane

(including the hyperplane), and any vector outside the

subset lies on the other side of the hyperplane, this subset is

linearly separable. In addition, as an extension of this

definition, Kim and Park introduced the following theorem

[2]: a Boolean function f is linearly separable if and only if there exists a hypersphere such that all true vertices lie inside or on the hypersphere, and all false vertices lie outside (or vice versa). This theorem converts the criterion for judging linear separability from finding a separating hyperplane to finding a separating hypersphere. We visualize linear separability of 2-dimensional inputs in Fig. 1.

Fig. 1. Visualization of 2-Dimensional Inputs

SOM algorithms cluster similar vectors automatically by

unsupervised learning. The criterion to cluster vectors is

based on the similarities between vectors. To measure the

similarity between two vectors, Hamming distance is

popularly used for binary vectors. For example, NTE

classifier and BKNN classifier [8,9] use Hamming distance

as the criterion of similarity.

Hamming distance is defined as:

D_HM(a_i, a_j) = Σ_{k=1}^{n} (a_k^i ⊕ a_k^j),

where D_HM is the Hamming distance between a_i and a_j (a_k^i and a_k^j are the kth bits of vectors a_i and a_j, respectively).

To define the similarity measure for our use, we make use of the following measure (square of Euclidean distance):

D_E(a_i, a_j) = Σ_{k=1}^{n} (a_k^i − a_k^j)^2.

This measure allows us to geometrically visualize the situation in terms of hyperspheres (however, in our final algorithmic formulation, it does not have much role). Because each a_k^i is 0 or 1, D_HM(a_i, a_j) equals D_E(a_i, a_j).

Based on D_E (and hence, on Hamming distance), the similarity between any two vectors can be defined as:

s_ij = n − D_HM(a_i, a_j),

where s_ij is the similarity between the two vectors, and n is the dimension of the input vectors.
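To make the measures concrete, here is a short sketch in Python (the function names are ours, not from the paper):

```python
# Hamming distance, squared Euclidean distance, and similarity for binary
# vectors, following the definitions above.
def d_hm(a_i, a_j):
    """Hamming distance: number of differing bits."""
    return sum(x ^ y for x, y in zip(a_i, a_j))

def d_e2(a_i, a_j):
    """Squared Euclidean distance; equals d_hm for 0/1 bits."""
    return sum((x - y) ** 2 for x, y in zip(a_i, a_j))

def similarity(a_i, a_j):
    """s_ij = n - D_HM(a_i, a_j)."""
    return len(a_i) - d_hm(a_i, a_j)

a1 = (0, 1, 0, 0, 1, 1)
a2 = (0, 1, 1, 0, 1, 1)
assert d_hm(a1, a2) == d_e2(a1, a2) == 1   # identical on binary vectors
assert similarity(a1, a2) == 5
```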

The BSOM algorithm clusters vectors with high similarity s_ij. Vectors included in a hypersphere (which can be separated by the hyperplane at the intersection of the RHS and this hypersphere) have larger similarity values and can be represented by a single hidden neuron. BSOM replaces each cell of a normal SOM algorithm by a hypersphere (equivalently, a hyperplane, or a hidden neuron). Each hypersphere shifts its center and expands to include as many vectors as possible within the similarity restrictions in the training process. We give the BSOM algorithm and the restriction rules in the next section.

3. Binary Self-Organization Mapping

Based on Geometrical Expansion

3.1 BSOM algorithm

Suppose a set of vectors A = {a_k}, k = 1, 2, …, K, where K is the total number of training vectors. We further denote a_k = (a_1k, a_2k, …, a_nk), where a_ik is the ith bit of the kth vector a_k. We find the hypersphere (hidden neuron) including the cluster of vectors with the largest similarity to the current tested vector a_i (a_i stands for the tested vector, and a_j stands for vectors in the representing hypersphere): MAX{ s_ij = n − D_HM(a_i, a_j) }. Because n is constant, this corresponds to finding the existing hypersphere including the cluster of vectors with the minimal Hamming distance to the tested vector, i.e., MIN{ D_HM(a_i, a_j) }.
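The winner-selection rule above (maximal similarity, equivalently minimal Hamming distance) can be sketched as follows; `clusters` and `winner` are hypothetical names used only for illustration:

```python
# Select the cluster (hypersphere) closest to a test vector a_i, measuring
# the distance to a cluster as the minimum Hamming distance to its members.
def hamming(a, b):
    return sum(x ^ y for x, y in zip(a, b))

def winner(a_i, clusters):
    """Index of the cluster with the minimal Hamming distance to a_i."""
    return min(range(len(clusters)),
               key=lambda j: min(hamming(a_i, v) for v in clusters[j]))

clusters = [[(0, 1, 0, 0, 1, 1), (0, 1, 1, 0, 1, 1)],
            [(1, 0, 1, 1, 0, 0), (1, 0, 0, 1, 0, 0)]]
assert winner((0, 1, 1, 0, 1, 0), clusters) == 0
```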

Algorithm 1: Boolean SOM (BSOM) construction.
Notations: q stands for the current number of hidden neurons; W_j stands for the parameter vector of hidden neuron j; K is the total number of training vectors; p stands for the iteration count of the Boolean SOM algorithm.

q = 1;
for (p = 1, …, max iterations) & (sample set not empty), do
  for (k = 1, …, K) and (sample k not removed) do
    for (j = 1, …, q)
      test the jth hidden neuron:
      if f(W_j, a_k) ≥ threshold1:
        the sample is already covered; remove a_k from the sample set;
        (continue with the next sample – new k value)
      else if f(W_j, a_k) ≥ threshold2:
        expand hidden neuron j to include a_k; then remove a_k from the sample set;
      else if f(W_j, a_k) ≥ threshold3:
        store a_k for use in the next cycle; then remove a_k from the sample set;
    next j;
    if a_k matched no hidden neuron, set q = q + 1 and train a new hidden neuron q;
  next k;
next p.
End of Algorithm 1 – BSOM construction.
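The loop above can be sketched in Python. This is a minimal illustrative implementation, not the authors' code: the names (`train_bsom`, `neuron_params`) are ours, ξ2 and ξ3 are fixed constants here rather than fuzzy variables, and for simplicity covered vectors are kept as cluster members. The parameter equations follow Section 3: w_ij = 2·C_ij − C_0j, Threshold1_j taken over the neuron's member vectors, and Threshold2/3 = Threshold1 − ξ2/ξ3.

```python
# A minimal, runnable sketch of the BSOM construction loop (Algorithm 1).
def neuron_params(members):
    """Slope vector W_j and Threshold1_j for a cluster of binary vectors."""
    c0 = len(members)
    w = [2 * sum(v[i] for v in members) - c0 for i in range(len(members[0]))]
    th1 = min(sum(wi * vi for wi, vi in zip(w, v)) for v in members)
    return w, th1

def train_bsom(samples, xi2, xi3, max_iters=10):
    neurons = []                                   # lists of member vectors
    pending = [tuple(v) for v in samples]
    for _ in range(max_iters):
        if not pending:
            break
        deferred = []
        for a in pending:
            # Find the hidden neuron with the largest margin f - Threshold1.
            best_j, best_marg = None, None
            for j, members in enumerate(neurons):
                w, th1 = neuron_params(members)
                marg = sum(wi * ai for wi, ai in zip(w, a)) - th1
                if best_marg is None or marg > best_marg:
                    best_j, best_marg = j, marg
            if best_j is not None and best_marg >= -xi2:
                neurons[best_j].append(a)          # covered / match region
            elif best_j is not None and best_marg >= -xi3:
                deferred.append(a)                 # claim region: next cycle
            else:
                neurons.append([a])                # outside: new hidden neuron
        pending = deferred
    for a in pending:                              # leftovers get own neurons
        neurons.append([a])
    return [neuron_params(m) + (m,) for m in neurons]
```

Training on the 12 vectors of Example 1 with ξ2 = 1, ξ3 = 2 yields a set of neurons covering every training vector, in the sense that f(W_j, a) ≥ Threshold1_j for some j.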

Page 3

3

In BSOM, each hidden neuron represents a hypersphere; we define the gravity (center) of hypersphere j as:

gravity_j = (C_1j/C_0j, C_2j/C_0j, …, C_nj/C_0j),

where C_ij = Σ_{k=1}^{C_0j} a_ik (the sum of the ith bit over the vectors included in hypersphere j), and C_0j is the number of vectors included in hypersphere j.

The square of the radius of hypersphere j is the largest squared Euclidean distance D_E between the gravity and the vectors included in hypersphere j. The newly added vector is just outside hypersphere j before expansion, and on the surface of hypersphere j after expansion. Thus, the square of the radius equals D_E(gravity_j, the newly added vector): hypersphere j is the smallest hypersphere that includes the new vector after expansion. The corresponding separating hyperplane (hyperplane j) is the intersection (or with a very small offset) of the RHS and hypersphere j. This situation is explained in Fig. 3 below.

(a)

(b)

(c)

Fig. 3 The Process of Geometrical Expansion.

We express the n-dimensional hypersphere as a circle, and (n-1)-dimensional hyperplanes as lines. All vectors lie

on the surface of the RHS. Fig.3 (a)-(c) explain how a

hypersphere expands, and how a separating hyperplane

shifts. Fig. 3(a) shows a hypersphere which includes three vertices, with another vertex located in its claim region; this vertex causes the expansion of the hypersphere. The expansion result is shown in Fig. 3(b): after expansion, the hypersphere includes the new vertex. A further vertex is located outside the claim region, so a new hypersphere is generated to represent it, as shown in Fig. 3(c).

Hyperplane j is determined by its slope W_j and position Threshold1_j. The slope vector W_j is:

W_j = (w_1j, w_2j, …, w_nj), where w_ij = 2C_ij − C_0j.

The first threshold is defined over the vectors included in hidden neuron j:

Threshold1_j = min_{a_k ∈ neuron j} Σ_{i=1}^{n} w_ij a_ik.

In the BSOM algorithm, three thresholds are defined, with threshold1 > threshold2 > threshold3. Further, the activation function is

f(W_j, a_k) = Σ_{i=1}^{n} w_ij a_ik.
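As a quick consistency check, the equations above reproduce neuron 1 of Table 1 when applied to the cluster {010011, 011011, 011010} from Example 1 (a sketch; variable names are ours):

```python
# Gravity, slope w_ij = 2*C_ij - C_0j, and Threshold1_j = min_k f(W_j, a_k)
# for the cluster {010011, 011011, 011010} of Example 1.
cluster = [(0, 1, 0, 0, 1, 1), (0, 1, 1, 0, 1, 1), (0, 1, 1, 0, 1, 0)]
c0 = len(cluster)                                   # C_0j: cluster size
c = [sum(v[i] for v in cluster) for i in range(6)]  # C_ij: per-bit sums
gravity = [ci / c0 for ci in c]                     # center of hypersphere j
w = [2 * ci - c0 for ci in c]                       # slope vector W_j
th1 = min(sum(wi * vi for wi, vi in zip(w, v)) for v in cluster)
assert w == [-3, 3, 1, -3, 3, 1] and th1 == 7       # matches Table 1, neuron 1
```

Note that taking the minimum (not the maximum) of the member activations is what reproduces the threshold value 7 reported in Table 1, and it is what Theorem 1 requires (every included vector must satisfy f ≥ Threshold1_j).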

Theorem 1 [2].

For all vectors in hypersphere j, f(Wj, ak)≥threshold1j, and

for all vectors out of hypersphere j, f(Wj, ak)<threshold1j.

Threshold2_j defines the match region. Vectors within this region can be directly included by a slight expansion of the corresponding hypersphere (or a slight shifting of its center; we call this shifting movement, or excursion).

Threshold3_j defines the claim region. Vectors within this region are expected to be included in hidden neuron j in the near future. Vectors outside the claim region require new hidden neurons to represent them. Threshold2 and Threshold3 are computed by:

Threshold2 = Threshold1 − ξ2, Threshold3 = Threshold1 − ξ3,

where ξ2 and ξ3 are larger than zero. We now illustrate our method, with special emphasis on the use of the thresholds.

Select a vector ak∈A, and compute the value of f(Wj, ak). If

f(Wj, ak)≥threshold1, it means that ak has been covered by

hidden neuron j, and need not be trained again.

If threshold2 ≤ f(W_j, a_k) < threshold1, a_k is not yet covered by hidden neuron j, but can be included in this hidden neuron by an immediate small expansion. We then shift the center of hidden neuron j to a proper position and expand hidden neuron j just enough to include a_k.

If f(W_j, a_k) < threshold3, then a_k is too far (measured by Hamming distance) from the vector cluster to be included in hidden neuron j. If we expanded neuron j to include a_k, neuron j would also include many other vectors between a_k and hypersphere j, causing over-generalization. So, if f(W_j, a_k) < threshold3 for all current hidden neurons, a new hidden neuron is added to the network to represent a_k.

If threshold3 ≤ f(W_j, a_k) < threshold2, a_k is not near enough to be included in hidden neuron j immediately, but it is likely to be included after a small expansion that takes in some nearer vectors. If there exists a hypersphere (hidden neuron) which is promising to include a_k, then a_k is left to the next cycle to be tested. In the following part, we discuss in detail how to determine these parameters in the training process.

When f(W_j, a_k) < threshold3 for all hidden neurons, or threshold2 ≤ f(W_j, a_k) < threshold1, we need to revise the parameters W_j and the thresholds.

When f(W_j, a_k) < threshold3 for all hidden neurons, a new hidden neuron is added to the net to represent the newly arrived vector a_k. The generated hypersphere is centered at a_k = (a_1k, a_2k, …, a_nk), with radius zero. This hypersphere (in fact, the single vertex a_k) is just large enough to include a_k. The corresponding hyperplane passes through vertex a_k. The slope of this hyperplane is W_j = (2a_1k − 1, 2a_2k − 1, …, 2a_nk − 1), and Threshold1_j equals Σ_{i=1}^{n} a_ik.
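The new-neuron construction above can be checked in a few lines (a sketch with an arbitrary example vertex):

```python
# New hidden neuron centered at a new vertex a_k: slope W = (2*a_1k - 1, ...,
# 2*a_nk - 1) and Threshold1 = sum_i a_ik. The vertex itself then satisfies
# f(W, a_k) = Threshold1 exactly, i.e. it lies on the hyperplane.
a_k = (1, 0, 1, 1, 0, 1)
w = [2 * b - 1 for b in a_k]                      # slope through vertex a_k
th1 = sum(a_k)                                    # Threshold1_j
f = sum(wi * bi for wi, bi in zip(w, a_k))        # activation at a_k
assert f == th1 == 4
```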

If threshold2 ≤ f(W_j, a_k) < threshold1, we shift the center of hypersphere j to a proper position according to the position of a_k, and expand hidden neuron j just enough to include a_k. The center of a hypersphere is always the gravity of the vectors included in that hypersphere, and the radius of that hypersphere is the distance between the gravity and the furthest vector included in it. One can verify that the newly added vector a_k is the furthest vector.

The value of Threshold2 depends on the required precision and the degree of generalization: the higher the required precision, the larger Threshold2, and vice versa.

The value of Threshold3 depends on the required complexity of the neural net and the learning speed: a fast but more complex system calls for a high value of Threshold3, and vice versa.

Given a set of vectors (an existing hidden neuron), Threshold1 is fixed. Determining Threshold2 and Threshold3 amounts to determining ξ2 and ξ3, which in turn amounts to fixing the precision, the degree of generalization, the net complexity and the learning speed. The larger hypersphere j is, the less we expect it to expand, because the expansion of larger hyperspheres could introduce over-generalization. We observe that, if ξ2 and ξ3 are set to fixed values, the expansion possibility decreases as the expansion process goes on; the argument is omitted here. Here ξ2 and ξ3 are fuzzy variables which will be determined by fuzzy concepts.
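The paper does not give a formula for the fuzzy ξ2 and ξ3; purely for illustration, one hypothetical choice that shrinks ξ as a cluster grows (so that large hyperspheres expand less readily, curbing over-generalization) might look like:

```python
# A purely hypothetical, membership-style shrinkage of xi with cluster size.
# `base`, `decay`, and the floor of 1.0 are illustrative assumptions, not
# values from the paper.
def xi(base, n_members, decay=0.5):
    """Shrink xi as the cluster grows, but never below a floor of 1.0."""
    return max(1.0, base / (1.0 + decay * (n_members - 1)))

assert xi(2.0, 1) == 2.0            # a fresh one-vector cluster: full xi
assert xi(2.0, 9) == 1.0            # a large cluster: floor value
```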

4. Datasets and Experimental Results

Example 1. Suppose we wish to separate the following set of vertices: {010011, 011011, 011010, 101011, 100011, 100010, 011101, 011100, 010100, 101100, 100101, 100100}.

Fig 3. Neural Net Structure for Example 1.

Here we set ξ2 = 1, ξ3 = 2. The learning process needs one learning cycle, or at most two, depending on the order in which vectors pass through the learning system. The resulting network architecture and parameters are shown in Fig. 3 and Table 1. In this case, an exact result is obtained.

Table 1. Weights and Thresholds for Example 1 (ξ2=1, ξ3=2).

neuron j  w1j  w2j  w3j  w4j  w5j  w6j  threshold
   1      -3    3    1   -3    3    1      7
   2      -3    3    1    3   -3   -1      6
   3       3   -3   -1   -3    3    1      6
   4       3   -3   -1    3   -3   -1      5
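As a runnable check, every training vector of Example 1 activates at least one of the four hidden neurons of Table 1 at or above its threshold:

```python
# Verify coverage: for each training vector a of Example 1, some neuron j
# of Table 1 satisfies f(W_j, a) >= threshold_j.
neurons = [([-3, 3, 1, -3, 3, 1], 7),
           ([-3, 3, 1, 3, -3, -1], 6),
           ([3, -3, -1, -3, 3, 1], 6),
           ([3, -3, -1, 3, -3, -1], 5)]
train = ["010011", "011011", "011010", "101011", "100011", "100010",
         "011101", "011100", "010100", "101100", "100101", "100100"]
for s in train:
    a = [int(c) for c in s]
    assert any(sum(wi * ai for wi, ai in zip(w, a)) >= th
               for w, th in neurons)
```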

When ξ2 = 2, ξ3 = 3, two or three hidden neurons are needed, depending on the order in which vectors pass through the learning system; further, only one learning cycle is needed. The weights and thresholds are given in Tables 2 and 3 below.

Table 2. Weights and Thresholds for Example 1 (ξ2=2, ξ3=3), Sequence 1.

neuron j  w1j  w2j  w3j  w4j  w5j  w6j  threshold
   1      -6    6    2    0    0    0      6
   2       8   -8    0    0    0    0      8
   3      -1   -1   -1   -1   -1   -1      0

Table 3. Weights and Thresholds for Example 1 (ξ2=2, ξ3=3), Sequence 2.

neuron j  w1j  w2j  w3j  w4j  w5j  w6j  threshold
   1      -6    6    2    0    0    0      6
   2       6   -6   -2    0    0    0      6

Example 2. We also tested our method on 10-dimensional binary vectors with a random (uniform) distribution. We generated the following 100 vectors out of the 1024 possible combinations:
{0000101001, 0000100011, 0010111110, 1110000100, 1011100001, 0101101100, 0011010110, 1010101110, 0101010010, 1110010000, 1001001001, 0111110001, 1011110001, 0110111011, 1011101001, 0111101011, 1110110011, 1010100110, 1011011011, 0100111100, 1010000111, 0100001100, 1100111110, 0010011001, 0100100100, 0001011110, 0000001101, 0100011100, 0100000110, 0110110111, 0101000111, 0011011110, 0110110011, 0100010010, 1101001101, 0111001000, 0001000011, 1010111011, 1010001011, 1010100110, 0000011111, 0100000011, 1001011010, 1001111101, 0100001001, 1000111000, 1100100101, 1000011111, 1001011101, 1011010100, 1111001011, 1111111100, 1110010110, 1111110101, 1001000101, 1000111011, 1000010011, 1000001101, 1110001001, 1100001010, 0000011100, 1111011011, 1010101110, 1100110010, 0100100000, 0110011010, 1101010000, 1011101110, 1101000000, 0001111000, 1100110110, 0011111101, 1000010010, 1001001001, 1100110010, 1111110110, 1010011110, 0101111101, 1101001001, 0111011100, 0010101101, 0101001111, 1000010100, 0111110010, 0101000100, 1001000000, 1101100110, 0011010000, 1001101011, 1011000100, 1000110000, 1010110111, 0000110010, 0000111011, 0110100001, 0000100010, 1011110110, 0000100010, 0110010001, 0010011101}.
If ξ2 = 5, ξ3 = 6, fourteen hidden neurons are needed; the results are shown in Table 4. When ξ2 = 6, ξ3 = 7, eight hidden neurons are needed; the results are shown in Table 5. We tested the


results on the remaining vectors (924 out of 1024). We found that there are some uncovered vectors when ξ2 and ξ3 are small, and little overlap when ξ2 and ξ3 are large. Similar vectors always go to the same cluster. In addition, only a few learning iterations (normally fewer than ten) are needed, which is far fewer than traditional SOM algorithms require.

Table 4. Weights and Thresholds for the 10-dimensional binary vectors in Example 2, with parameter values ξ2=5, ξ3=6.

neuron j  w1j  w2j  w3j  w4j  w5j  w6j  w7j  w8j  w9j  w10j  threshold
    1     -4   -8  -10   -4   -4   -6    6   -4   -2   10       8
    2      1   -7    7   -1    3    1    1    9    7   -9      27
    3     -2    6    6    0    2    8   -8   -8    0    2      22
    4      6   -8    2   -6   -2    6    2   -2    6    8      24
    5     -5    7   -9   -7    1   -1   -1    7   -3   -9       7
    6      1    5   -3   -5   -5    3   -3   -3    7   -5      11
    7      0   -4   -4    4    4    6    6    2   -6    2      20
    8      4    2    4    4    2    4   -2    4   -2   -2      22
    9      3    5    3    5   -3   -5    7   -5   -3    5      20
   10      7   -3   -3    7   -1   -7   -5   -1   -3   -1      11
   11     -5    1    1   -5    5    1   -3   -3    3    1       9
   12     -1    1    3   -3   -1   -1    1    5   -5    1       9
   13      3   -3   -3    1   -3    5   -3   -3   -3   -5       5
   14     -2    0   -2    2   -2    0    2    2    2    0       8

Table 5. Weights and Thresholds for the 10-dimensional binary vectors in Example 2, with parameter values ξ2=6, ξ3=7.

neuron j  w1j  w2j  w3j  w4j  w5j  w6j  w7j  w8j  w9j  w10j  threshold
    1     -2  -12   -2   -6   -2   -6    6   -8   -6   16       6
    2      2   -2   10    2   -2    2    0   12    0  -12      27
    3     -3    9    5   -3    1    5   -3  -11    7    5      20
    4     -2    6  -10   -8   -2   -2    0    8   -4   -8       6
    5      0   -2   -6    2   -8    0    4    8    4    6      22
    6      4   -2   -6    6   -2    2    2   -6   -6   -6      10
    7     -1    1    1    3    3    3    1    3   -3    3      17
    8      1   -3   -1   -1    1    1   -5   -1    3   -3       4

5. Concluding remarks

We proposed a novel Boolean SOM (BSOM) algorithm

based on geometrical concepts. Three regions are defined

by geometrical concepts. The learning process is based on judging which region a newly arriving vector belongs to. These three regions (match, claim, and outside) have the same center but different radii. The match region and claim region shift and expand during training until all vectors are covered.

The main differences of BSOM from previous methods are:

(i) In our algorithm, there is a possibility of

generalization (this is to contrast it with the earlier

methods like, ETL, IETL and CSCLA).

(ii) In contrast to the traditional SOM, BSOM

algorithm memorizes more vectors in the training

process, not only exemplars in centers of SOM cells.

(iii) BSOM algorithm is based on geometrical expansion,

not gradient descent.

(iv) The BSOM algorithm needs fewer iterations, less training time, and simpler training equations.

(v) Due to its fast learning speed and its ability to model problem-specific information in terms of fuzzy parameters, we expect that BSOM will find many applications, including adaptation to on-line learning in the near future.

Our two parameters, Threshold2 and Threshold3, depend on

the required precision, the degree of generalization, the

requirement of neural net complexity and the learning

speed. To determine Threshold2 and Threshold3, we need

to evolve a systematic strategy to determine ξ2 and ξ3. They can be fuzzy variables, and their exact formulation needs further investigation.

REFERENCES

1. Donald L. Gray and Anthony N. Michel, "A Training Algorithm for Binary Feedforward Neural Networks," IEEE Trans. Neural Networks, Vol. 3, No. 2, pp. 176-194, Mar. 1992.
2. Jung H. Kim and Sung-Kwon Park, "The Geometrical Learning of Binary Neural Networks," IEEE Trans. Neural Networks, Vol. 6, No. 1, pp. 237-247, Jan. 1995.
3. Atsushi Yamamoto and Toshimichi Saito, "An Improved Expand-and-Truncate Learning," IEEE Trans. Neural Networks, Vol. 2, pp. 1111-1116, Jun. 1997.
4. Ma Xiaomin, Yang Yixian, and Z. Zhang, "Research on the learning algorithm of binary neural networks," Chinese Journal of Computers, Vol. 22, No. 9, pp. 931-935, Sept. 1999.
5. Ma Xiaomin, Yang Yixian, and Zhang Zhaozhi, "Constructive Learning of Binary Neural Networks and Its Application to Nonlinear Register Synthesis," in Proceedings of the International Conference on Neural Information Processing (ICONIP'01), Vol. 1, pp. 90-95, 2001.
6. D. Wang and N. S. Chaudhari, "A Multi-Core Learning Algorithm for Binary Neural Networks," in Proceedings of the International Joint Conference on Neural Networks (IJCNN'03), Vol. 1, pp. 450-455, Portland, USA, 21-24 July 2003.
7. D. Wang and Narendra S. Chaudhari, "Binary Neural Network Training Algorithms Based On Linear Sequential Learning," to appear in International Journal of Neural Systems (IJNS), 2003.
8. S. Gazula and M. R. Kabuka, "Design of supervised classifiers using Boolean neural networks," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 17, No. 12, pp. 1239-1246, Dec. 1995.
9. M. R. Kabuka, "Comments on 'Design of supervised classifiers using Boolean neural networks'," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 21, No. 9, pp. 957-958, Sept. 1999.
10. Georgios C. Anagnostopoulos and Michael Georgiopoulos, "Category regions as new geometrical concepts in Fuzzy-ART and Fuzzy-ARTMAP," Neural Networks, Vol. 15, pp. 1205-1221, 2002.
11. Sung-Kwon Park and Jung H. Kim, "A Liberalization Technique For Linearly Inseparable Patterns," in Proc. of the 1991 Twenty-Third Southeastern Symposium, pp. 207-211, Mar. 1991.
12. Sung-Kwon Park and Jung H. Kim, "Geometrical Learning Algorithm for Multilayer Neural Networks in a Binary Field," IEEE Trans. Computers, Vol. 42, No. 8, pp. 988-992, Aug. 1993.