
All content in this area was uploaded by Resconi Germano on Apr 09, 2014


Control of the Minimum Action Reasoning

Soft computing by multi dimension optic geometry

GERMANO RESCONI

Faculty of Mathematics and Physics, Catholic University, Brescia, Italy

resconi@numerica.it

Abstract

Given an $(n+1)$-dimensional space $S$ with $(y, x_1, x_2, \ldots, x_n) \in S$, we use the model (hyper-plane) in $S$

$$y = \beta_1 f_1(x_1, x_2, \ldots, x_n) + \ldots + \beta_q f_q(x_1, x_2, \ldots, x_n)$$

to control the transformation of the points

$$Y = \{(y_1, x_1, x_2, \ldots, x_n), (y_2, x_1, x_2, \ldots, x_n), \ldots, (y_k, x_1, x_2, \ldots, x_n)\}$$

into

$$Y' = \{(y_1', x_1, x_2, \ldots, x_n), (y_2', x_1, x_2, \ldots, x_n), \ldots, (y_k', x_1, x_2, \ldots, x_n)\}$$

where the values $y'$ have the minimum distance from $y$. The algorithm that computes the parameters $(\beta_1, \beta_2, \ldots, \beta_q)$ and the values $y'$ is denoted minimum action reasoning. The operation is the geometric projection of $y$ into the hyper-plane in $S$. With different models or hyper-planes we can control many different geometric transformations, such as reflection, rotation and refraction. With a chain of transformations we generate the minimum path in $S$ that joins one point to another (geodesic). A ray in the space $S$ is controlled by models, as by a special environment that guides the ray to accomplish its task with minimum distance. Minimum action reasoning can be used to create software by models for different applications. The coordinates of the space $S$ can be real numbers, logic values, fuzzy sets or any other set that we can define. Classical linear or non-linear regression is part of minimum action reasoning. Classical logic, many-valued logic and fuzzy logic are also included in minimum action reasoning.

Keywords

Minimum action, reasoning, geodesic, soft computing, extension of linear regression, projection operator, fuzzy number variables, geometry of fuzzy reasoning, reflection, rotation, refraction, geometrical optics of logic.

1. Introduction

The paper studies the possibility of implementing minimum action reasoning in a multidimensional space $S$. The first step is to create an algorithm that projects a random set of points $y$ in $S$ into a set of points $y'$ that lie on the best model within a family of models. The algorithm chooses, among all models in the family, the best model, for which $y$ and $y'$ have the minimum distance. The action of choosing the best model is named minimum action (geodesic). Any chain of minimum actions is denoted reasoning, and therefore the algorithm is denoted minimum action reasoning. More complex geometric transformations are possible in the space $S$, such as projection, reflection, rotation and refraction. For a family of linear models we can use the geometric projection to compute the best linear parameters of the linear regression. We can also extend the linear regression to non-linear regression or to fuzzy number transformations. The space $S$ can be a space of logic values, such as classical logic values, many-valued logic and fuzzy logic.


2. Linear regression, projection operator, minimum action reasoning

Specialized literature on regression analysis (Gujarati 2003 [2]) and more generally on linear and non-linear models (Ryan 1997 [2]) offers many solutions for studying the dependence between two types of variables, $y$ and $\{x_1, x_2, \ldots, x_p\}$, where $y$ is a quantitative dependent variable and $\{x_1, x_2, \ldots, x_p\}$ are independent variables. Regression analysis studies the dependence of $y$ with respect to the variables $\{x_1, x_2, \ldots, x_p\}$ when we have samples of $y$ and samples of $\{x_1, x_2, \ldots, x_p\}$. This requires the choice of a suitable model and the estimation of the related parameters. Given the generic model:

$$y = f(x_1, \ldots, x_p; \beta) + \varepsilon \qquad (1)$$

the statistical regression aims to find the set of unknown parameters $\hat{\beta}$ so that

$$\tilde{y} = f(x_1, x_2, \ldots, x_p; \hat{\beta}) \qquad (2)$$

where $\tilde{y}$ are the values of $y$ in agreement with the model and with the minimum error with respect to the given samples of $y$. The term $\varepsilon$ indicates the deviation of $y$ from the model. The most widely used regression model is the Multiple Linear Regression Model (MLRM), just as Least Squares (LS) is the most widespread estimation procedure. In the MLRM the dependent variable $y$ is expressed as the weighted sum of the independent variables $\{x_1, x_2, \ldots, x_p\}$, with the unknown parameters $\{\beta_1, \beta_2, \ldots, \beta_p\}$. Formally we have the hyper-plane

$$y_n = \beta_0 + \beta_1 x_{n,1} + \ldots + \beta_p x_{n,p} + \varepsilon_n \qquad (3)$$

where $\beta_0$ is the parameter related to the intercept term. In matrix form the model is expressed as

$$y = X\beta + \varepsilon$$

where

$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_q \end{pmatrix}, \qquad X = \begin{pmatrix} 1 & x_{1,1} & \ldots & x_{1,p} \\ 1 & x_{2,1} & \ldots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{q,1} & \ldots & x_{q,p} \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \qquad \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_q \end{pmatrix} \qquad (4)$$

LS is based on the minimization of the sum of squared deviations:

$$\min_\beta D(\beta) = (y - X\beta)^T (y - X\beta) \qquad (5)$$

where $(\cdot)^T$ is the matrix transpose.

The optimal solution $\beta$ of the minimization problem is obtained in this way:

$$D(\beta) = (y - X\beta)^T (y - X\beta) = y^T y - y^T X\beta - \beta^T X^T y + \beta^T X^T X \beta$$

To compute the minimum value we take the derivatives of the previous form:

$$\frac{\partial D}{\partial \beta_j} = -y^T X \frac{\partial \beta}{\partial \beta_j} - \frac{\partial \beta^T}{\partial \beta_j} X^T y + \frac{\partial \beta^T}{\partial \beta_j} X^T X \beta + \beta^T X^T X \frac{\partial \beta}{\partial \beta_j}$$

where

$$\beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_j \\ \vdots \\ \beta_p \end{pmatrix} \quad \text{and} \quad \frac{\partial \beta}{\partial \beta_j} = v_j = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \qquad \frac{\partial \beta^T}{\partial \beta_j} = v_j^T = (0 \; \ldots \; 1 \; \ldots \; 0)$$

We have

$$\frac{\partial D}{\partial \beta_j} = 0 \quad \text{for} \quad y^T X v_j + v_j^T X^T y = v_j^T X^T X \beta + \beta^T X^T X v_j$$

But because we have the following scalar property

$$P = A^T B = (A^T B)^T = B^T A$$

the previous expression can be written as follows:

$$(v_j^T X^T y)^T = y^T X v_j, \qquad (v_j^T X^T X \beta)^T = \beta^T X^T X v_j$$

We have

$$y^T X v_j + v_j^T X^T y = 2 v_j^T X^T y, \qquad v_j^T X^T X \beta + \beta^T X^T X v_j = 2 v_j^T X^T X \beta$$

and

$$v_j^T X^T y = v_j^T X^T X \beta \quad \text{for every } j$$

whose solution is

$$X^T y = X^T X \beta, \qquad \beta = (X^T X)^{-1} X^T y \qquad (6)$$

For the previous solution, at the optimal condition we obtain

$$y = X(X^T X)^{-1} X^T y + \varepsilon = Qy + \varepsilon \qquad (7)$$

We remark that the operator

$$Q = X(X^T X)^{-1} X^T$$

is a projection operator, for which

$$Q^2 = X(X^T X)^{-1} X^T X (X^T X)^{-1} X^T = X(X^T X)^{-1} X^T = Q$$

Geometric image of the projection operator

Figure 1 Projection of the vector y into the two-dimensional plane X = (x1, x2).

Example 1

$$X = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}$$

Parameters:

$$\beta = (X^T X)^{-1} X^T y = \begin{pmatrix} 2/3 \\ 1/2 \end{pmatrix}$$

Projection vector:

$$Qy = X(X^T X)^{-1} X^T y = \begin{pmatrix} \tfrac{2}{3} + \tfrac{1}{2}(1) \\ \tfrac{2}{3} + \tfrac{1}{2}(2) \\ \tfrac{2}{3} + \tfrac{1}{2}(3) \end{pmatrix} = \begin{pmatrix} 7/6 \\ 5/3 \\ 13/6 \end{pmatrix}$$

The values in the projection vector are samples of the best fit of the linear form $y = \tfrac{2}{3} + \tfrac{1}{2}x$. Figure 2 shows the points of the vector $y$ together with this best-fit line.

Figure 2 Best fit linear form with the original points y.
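The projection of Example 1 can be checked numerically. The following is a minimal sketch, assuming NumPy (the paper itself uses Mathcad-style worksheets):

```python
import numpy as np

# Data of Example 1: a line model with intercept and slope.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

# Least-squares parameters beta = (X^T X)^{-1} X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ y)   # (2/3, 1/2)

# Projection operator Q = X (X^T X)^{-1} X^T and projected samples Qy.
Q = X @ np.linalg.inv(X.T @ X) @ X.T
z = Q @ y                                  # (7/6, 5/3, 13/6)
```

The idempotence $Q^2 = Q$ of the projection operator can be verified on the computed matrix directly.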

The projection of $y$ into the plane $X$ is the minimum path by which the three values in $y$ (a cluster of points) move to their final points on the straight line. Now, for

$$z = X(X^T X)^{-1} X^T y = Qy \quad \text{and} \quad p = (I - Q)y$$

we have that $z$ is orthogonal to $p$. In fact

$$z^T p = (Qy)^T [(I - Q)y] = y^T Q^T (I - Q) y$$

but

$$Q^T = [X(X^T X)^{-1} X^T]^T = X(X^T X)^{-1} X^T = Q$$

so

$$Q^T (I - Q) = Q(I - Q) = Q - Q^2 = Q - Q = 0$$

Now the projection is a segment of line from $y$ to its projection $Qy$, which lies on the straight line (minimum distance). The movement from $y$ to $Qy$ is the minimum action, whose points are

$$y_k = z + \frac{p}{k}, \quad k \geq 1; \qquad \text{when } k = 1, \; y_1 = z + p = y; \qquad \text{when } k \to \infty, \; y_k \to Qy = z$$

Graphic images are given for $k = 1, 2, 3, 4$.

Figure 3 The set of points represented by squares are the initial values y. The black points are the minimum action from y to Qy, which is represented by rhombi.

The algorithm by which we move from the initial points $y$ to the projection $Qy$ through the intermediate values (the movement) is denoted minimum action reasoning.
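The movement above can be sketched numerically. A minimal illustration, assuming NumPy, on the data of Example 1:

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

Q = X @ np.linalg.inv(X.T @ X) @ X.T
z = Q @ y                    # final point of the movement (the projection)
p = (np.eye(3) - Q) @ y      # component orthogonal to the model plane

# Points of the minimum action path: y_k = z + p/k for k = 1, 2, 3, 4.
path = [z + p / k for k in (1, 2, 3, 4)]
```

At $k = 1$ the path starts exactly at $y$; as $k$ grows, the points approach the projection $z = Qy$.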


Figure 4 When we change the initial points y , the movement of the points changes as we can see in

this figure.

We show in figure 5 the projection $Qy$ and the movement along the segment $(I - Q)y$ by which we move from $y$ to $Qy$.

Figure 5 The minimum action reasoning movement from y to the projection Qy along a straight line or ray

Example 2

Given the non-linear family of models

$$y = \beta_1 f_1(x) + \beta_2 f_2(x), \qquad f_1(x) = 1, \quad f_2(x) = -(x^2 - 4x + 2)$$

we compute the basis vectors, one for $\beta_1$ and the other for $\beta_2$, so we have the two-dimensional plane

$$\begin{array}{c|cc} x & f_1(x) & f_2(x) \\ \hline 1 & 1 & 1 \\ 2 & 1 & 2 \\ 3 & 1 & 1 \end{array}$$

In short form

$$X = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 1 \end{pmatrix}$$

For the random value of $y$ given by the vector

$$y = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}$$

we have the parameters

$$\beta = (X^T X)^{-1} X^T y = \begin{pmatrix} 1 \\ 1/2 \end{pmatrix}$$

and the projection

$$z = X\beta = X(X^T X)^{-1} X^T y = Qy = \begin{pmatrix} 1 + \tfrac{1}{2}(1) \\ 1 + \tfrac{1}{2}(2) \\ 1 + \tfrac{1}{2}(1) \end{pmatrix} = \begin{pmatrix} 3/2 \\ 2 \\ 3/2 \end{pmatrix}$$

The vector $z$ is a linear combination of the column vectors in $X$:

$$y = \beta_1 + \beta_2(-x^2 + 4x - 2) = 1 + \tfrac{1}{2}(-x^2 + 4x - 2)$$

$$y = \begin{pmatrix} y(1) \\ y(2) \\ y(3) \end{pmatrix} = \begin{pmatrix} 1 + \tfrac{1}{2}(-1^2 + 4(1) - 2) \\ 1 + \tfrac{1}{2}(-2^2 + 4(2) - 2) \\ 1 + \tfrac{1}{2}(-3^2 + 4(3) - 2) \end{pmatrix} = \begin{pmatrix} 1.5 \\ 2 \\ 1.5 \end{pmatrix}$$
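The non-linear case of Example 2 uses the same projection machinery, only with a different basis. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Basis functions of Example 2.
def f1(x):
    return np.ones_like(x)

def f2(x):
    return -(x**2 - 4*x + 2)

xs = np.array([1.0, 2.0, 3.0])
X = np.column_stack([f1(xs), f2(xs)])   # [[1,1],[1,2],[1,1]]
y = np.array([1.0, 2.0, 2.0])

# Same least-squares projection as in the linear case.
beta = np.linalg.solve(X.T @ X, X.T @ y)   # (1, 1/2)
z = X @ beta                               # (3/2, 2, 3/2)
```

Note that the model is non-linear in $x$ but still linear in the parameters $\beta$, which is why the projection operator applies unchanged.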

Because

$$(I - y(y^T y)^{-1} y^T)^T = I - y(y^T y)^{-1} y^T$$

and

$$(I - y(y^T y)^{-1} y^T)\, y = y - y(y^T y)^{-1}(y^T y) = y - y = 0$$

the matrix $I - y(y^T y)^{-1} y^T$ is orthogonal to $y$. Numerically we have the three vectors orthogonal to $y$:

$$p = I - y(y^T y)^{-1} y^T = \begin{pmatrix} \tfrac{25}{34} & -\tfrac{6}{17} & -\tfrac{9}{34} \\[2pt] -\tfrac{6}{17} & \tfrac{9}{17} & -\tfrac{6}{17} \\[2pt] -\tfrac{9}{34} & -\tfrac{6}{17} & \tfrac{25}{34} \end{pmatrix} = (p_1 \; p_2 \; p_3)$$

The previous column vectors are orthogonal to $y$. In a graphic way we have

Figure 6 The three vectors $(I - y(y^T y)^{-1} y^T)$ orthogonal to the vector $y$

Moreover, we have

$$[(I - Q)y]^T (Qy) = y^T (I - Q)^T Q y = 0$$

because $Q^T = Q$ and $Q^2 = Q$, so

$$(I - Q)^T Q = (I - Q)Q = Q - Q^2 = 0$$


So $(I - Q)y$ is orthogonal to $Qy$. Numerically we have

$$(I - Q)y = \begin{pmatrix} -1/2 \\ 0 \\ 1/2 \end{pmatrix}, \qquad [(I - Q)y]^T (Qy) = 0$$

In a graphic way we have

Figure 7 We show in the three white triangles the input data y; the black points are Qy; the white squares are the points (I − Q)y orthogonal to the black points Qy.

For (7) we obtain

$$y - X(X^T X)^{-1} X^T y = y - Qy = (I - Q)y = \varepsilon$$

and

$$(Qy)^T \varepsilon = (Qy)^T (I - Q)y = y^T Q^T (I - Q) y = y^T (Q - Q^2) y = 0$$

The error $\varepsilon$ is perpendicular to the optimal condition $Qy$. We remark also that

$$Q\varepsilon = Q(y - Qy) = (Q - Q^2)y = 0$$

The projection of the error is equal to zero and therefore

$$Q(y + \varepsilon) = X(X^T X)^{-1} X^T (y + \varepsilon) = Qy + Q\varepsilon = Qy \qquad (8)$$

The projection operator $Q$ projects the variable $y + \varepsilon$ into $Qy$, where the error is eliminated.

3. Metric G in the parameter space

Let’s prove that the minimum action reasoning can be written as follows :

$$\begin{aligned} & y' = X\beta \\ & \min_\beta P = \beta^T X^T X \beta = \beta^T G \beta \\ & E = X^T y \end{aligned} \qquad (8)$$

where $P$ is the minimum distance (geodesic) in the space of the parameters $\beta$ with a metric $G$. The form $E = X^T y$ is the constraint in the minimum action reasoning, and it is invariant under the projection operator $Q$. In fact we have

$$X^T Q y = X^T y = E \qquad (9)$$

Proof:

To solve the minimum problem with constraints we use Lagrange multipliers, and thus

$$D(\beta) = \beta^T G \beta + \lambda^T (E - X^T X \beta) = \beta^T G \beta + \lambda^T (E - G\beta)$$

Now we compute the derivative in this way:

$$\frac{\partial D}{\partial \beta_j} = \frac{\partial \beta^T}{\partial \beta_j} G \beta + \beta^T G \frac{\partial \beta}{\partial \beta_j} - \lambda^T G \frac{\partial \beta}{\partial \beta_j} = 0$$

that can be written as follows:

$$v_j^T G \beta + \beta^T G v_j = 2\beta^T G v_j = \lambda^T G v_j$$

for which

$$\lambda = 2\beta$$

and

$$D(\beta) = \beta^T G \beta + 2\beta^T (E - G\beta) = 2\beta^T E - \beta^T G \beta$$

We have also that

$$\varepsilon^2 = (y - X\beta)^T (y - X\beta) = y^T y - y^T X\beta - \beta^T X^T y + \beta^T X^T X \beta = y^T y - 2\beta^T E + \beta^T G \beta = y^T y - D(\beta)$$

Since $\varepsilon^2 = y^T y - D(\beta)$, a stationary point of $D(\beta)$ is also a stationary point of the error $\varepsilon^2$, so the constrained problem selects the parameters with the minimum error.

Example

$$D = 2\beta_1^2 + 2\beta_2^2 + 2\beta_1\beta_2 - 2(E_1\beta_1 + E_2\beta_2)$$

$$\frac{\partial D}{\partial \beta_1} = 2(2\beta_1 + \beta_2 - E_1) = 0, \qquad \frac{\partial D}{\partial \beta_2} = 2(2\beta_2 + \beta_1 - E_2) = 0$$

So we have

$$E_1 = 2\beta_1 + \beta_2, \qquad E_2 = 2\beta_2 + \beta_1$$

or

$$E = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} = G\beta, \qquad \beta = G^{-1}E$$

$$y' = X\beta = X G^{-1} X^T y = Qy$$

In a general case we have that the minimum condition is

$$\frac{\partial D}{\partial \beta_j} = 2\frac{\partial \beta^T}{\partial \beta_j} E - \frac{\partial \beta^T}{\partial \beta_j} G\beta - \beta^T G \frac{\partial \beta}{\partial \beta_j} = 2v_j^T E - 2v_j^T G\beta = 0$$

The solution is

$$E = G\beta$$

for which we have

$$\beta = G^{-1} E = G^{-1} X^T y \qquad (10)$$

And for the minimum action reasoning for $\beta$ we have the projection operator

$$y' = X\beta = X G^{-1} X^T y = Qy \qquad (11)$$

Equation (11) is the minimum action reasoning by projection operator. Now for the error $\varepsilon$, for which $Q\varepsilon = 0$, we obtain

$$y = X G^{-1} X^T y + \varepsilon = Qy + \varepsilon, \qquad Q(Qy + \varepsilon) = Qy \qquad (12)$$

The projection operator separates the variable $y$ from its error $\varepsilon$. The elimination of the error $\varepsilon$ from the original variable $y$ in the projection operation gives the meaning of the optimal condition for $\beta$.

We remark that the minimum action reasoning is generated by a conditional minimum with constraint (8), without the computation of the variance.
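The equivalence between the constrained form $\beta = G^{-1}E$ and ordinary least squares can be checked numerically. A minimal sketch, assuming NumPy:

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

G = X.T @ X        # metric in the parameter space
E = X.T @ y        # constraint E = X^T y
beta = np.linalg.solve(G, E)   # beta = G^{-1} E

# The constraint is invariant under the projection operator Q (eq. 9).
Q = X @ np.linalg.inv(G) @ X.T
```

The solution coincides with the classical least-squares estimate, and $X^T Q y = X^T y = E$ holds by construction.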

Example 3

Given the column space

$$X = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{pmatrix}$$

We have

$$q = Qy = X\beta = \begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix} = \begin{pmatrix} \beta_1 \\ \beta_1 + \beta_2 \\ \beta_2 \end{pmatrix}$$

$$P = (Qy)^T (Qy) = q^T q = \beta^T X^T X \beta = \beta^T \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \beta$$

$$q_1^2 + q_2^2 + q_3^2 = \beta_1^2 + (\beta_1 + \beta_2)^2 + \beta_2^2 = 2\beta_1^2 + 2\beta_2^2 + 2\beta_1\beta_2$$

So in the space of the samples $q$ the form $P$ is a simple quadratic form and the geometry is the traditional Euclidean one. In the space of the parameters $\beta$ the form $P$ is a quadratic form with a cross term (non-Euclidean space) that expresses the dependence between the two vectors

$$x_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad x_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$$

In a graphic way we have

Figure 8 Non-orthogonal reference space in the two-dimensional plane generated by the vectors x1, x2 as a non-Euclidean space. The space (q1, q2, q3) is the sample space.

We remark that the unitary transformation $U$, for which we have

$$U^T = U^{-1} \quad \text{and} \quad P = (U\beta)^T (U G \beta) = \beta^T U^T U G \beta = \beta^T G \beta \qquad (13)$$

is a transformation for which $P$ is invariant. When $P$ assumes the minimum value, the change of the parameters $\beta$ and $G\beta$ by $U$ does not change the minimum value of $P$. The transformation $U$ is the unitary transformation that gives all the possible parameters for which we have the minimum value of $P$. This is similar to the least action in physics: any change of the reference does not change the least action property in mechanics.


4. Minimum action reasoning in the fuzzy number space.

In this chapter we suggest a new representation of the classical fuzzy inference process [1], [2], [3] by extending the linear regression to minimum action reasoning by projection operators and to variables or coordinates whose values are fuzzy numbers.

Let $(y, x_1, x_2, \ldots, x_n) \in S$ where

$$S = U_y \otimes U_{x_1} \otimes \ldots \otimes U_{x_n}$$

Each universe $U_j$ is a domain whose values are fuzzy numbers. The fuzzy model is

$$y = \beta_1 f_1(x_1, x_2, \ldots, x_n) + \ldots + \beta_q f_q(x_1, x_2, \ldots, x_n)$$

where the basis functions are functions whose independent and dependent variables are fuzzy numbers:

$$\begin{aligned} f_1(x_1, x_2, \ldots, x_n) &= y_1 \in U \\ f_2(x_1, x_2, \ldots, x_n) &= y_2 \in U \\ &\;\;\vdots \\ f_q(x_1, x_2, \ldots, x_n) &= y_q \in U \end{aligned}$$

where $U$ is a collection of fuzzy numbers. In the minimum action reasoning in the real numbers we have

$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_q \end{pmatrix}, \qquad X = \begin{pmatrix} 1 & x_{1,1} & \ldots & x_{1,p} \\ 1 & x_{2,1} & \ldots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{q,1} & \ldots & x_{q,p} \end{pmatrix}$$

When we substitute the ordinary numbers with fuzzy numbers $A_{i,j}$, we obtain for $X$ the matrix

$$X(x) = \begin{pmatrix} A_{1,1}(x) & A_{1,2}(x) & \ldots & A_{1,p}(x) \\ A_{2,1}(x) & A_{2,2}(x) & \ldots & A_{2,p}(x) \\ \vdots & \vdots & \ddots & \vdots \\ A_{q,1}(x) & A_{q,2}(x) & \ldots & A_{q,p}(x) \end{pmatrix}$$

where $X$ is the fuzzy connection matrix. We also have, for the random fuzzy number,

$$Y(y) = \begin{pmatrix} B_1(y) \\ B_2(y) \\ \vdots \\ B_q(y) \end{pmatrix}$$

In the same way in which we computed the parameters $\beta$ in chapter 2, we can compute the functional parameters $\beta$ in the functional space of the fuzzy numbers:

$$\beta(x, y) = (X(x)^T X(x))^{-1} X(x)^T Y(y) \qquad (14)$$

where

$$G(x) = X(x)^T X(x) = \begin{pmatrix} \sum_j A_{j,1}(x)^2 & \sum_j A_{j,1}(x) A_{j,2}(x) & \ldots & \sum_j A_{j,1}(x) A_{j,n}(x) \\ \sum_j A_{j,2}(x) A_{j,1}(x) & \sum_j A_{j,2}(x)^2 & \ldots & \sum_j A_{j,2}(x) A_{j,n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ \sum_j A_{j,n}(x) A_{j,1}(x) & \sum_j A_{j,n}(x) A_{j,2}(x) & \ldots & \sum_j A_{j,n}(x)^2 \end{pmatrix}$$

Given the functions (14) we can build the best output of fuzzy numbers with the expression

$$Y'(x, y) = X(x)\beta(x, y) = X(x)(X(x)^T X(x))^{-1} X(x)^T Y(y)$$

where

$$E(x, y) = X(x)^T Y(y)$$

is the functional input, which is connected with the coefficients (14) in the following way:

$$\beta(x, y) = (X(x)^T X(x))^{-1} E(x, y) = G(x)^{-1} E(x, y)$$

The functions $\beta(x, y)$ and $E(x, y)$ are dual functions, for which we have

$$P(x, y) = \beta(x, y)^T G(x)\, \beta(x, y) = E(x, y)^T G(x)^{-1} E(x, y)$$

And the (8) is

$$\begin{aligned} & Y'(x, y) = X(x)\beta(x, y) = Q(x, y)\, Y(y) \\ & \min_\beta P(x, y) = \beta^T (X(x)^T X(x)) \beta = \beta^T G(x) \beta \\ & E(x, y) = X(x)^T Y(y) \end{aligned}$$

Now, given $\beta(x, y)$, we have a fuzzy model by which, given a set of fuzzy sets as values for the input variables $(x_1, x_2, \ldots, x_n)$, we can compute the associated fuzzy set for the output variable $y$. In fact we have

$$Q(y, x) = B(y) = \beta_1(x, y) A_1(x) + \ldots + \beta_p(x, y) A_p(x)$$

5. Fuzzy logic and classical logic by projection operator

5.1 Classical Logic Models

For classical logic, the AND, OR, IF...THEN, NOT, ... rules can be represented by the model

$$z(x, y) = \beta_1 + \beta_2 x + \beta_3 y + \beta_4 xy$$

In the numerical way, for the truth values $y(0,0) = 1$, $y(0,1) = 0$, $y(1,0) = 0$, $y(1,1) = 1$ (the equivalence of $x$ and $y$), we have

$$X = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}$$

where the columns of $X$ correspond to the terms $1, x, y, xy$. From chapter 2 we have

$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix} = (X^T X)^{-1} X^T y = \begin{pmatrix} 1 \\ -1 \\ -1 \\ 2 \end{pmatrix}$$

The projection is

$$z = Qy = X\beta = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \qquad z = 1 - (x + y - 2xy) = 1 - x - y + 2xy$$

For the logic operation $y = A \to B$, "if $A$ then $B$", we have $y(0,0) = 1$, $y(0,1) = 1$, $y(1,0) = 0$ and $y(1,1) = 1$. So

$$X = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \end{pmatrix}$$

And the parameters are

$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix} = (X^T X)^{-1} X^T y = \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix}$$

So the model of the inferential rule "if $A$ then $B$" is the following:

$$z = 1 + x(y - 1) = 1 - x + xy$$
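Because the four classical valuations give a square, invertible design matrix, the parameters reproduce the truth table exactly. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Rows are the four classical valuations (x, y); columns are 1, x, y, xy.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
X = np.array([[1, x, y, x * y] for x, y in rows], dtype=float)

# Truth values of the implication "if A then B".
y_imp = np.array([1.0, 1.0, 0.0, 1.0])

# X is square and invertible, so the projection is exact interpolation.
beta = np.linalg.solve(X, y_imp)   # (1, -1, 0, 1): z = 1 - x + xy
```

Any of the sixteen binary connectives can be encoded the same way by changing the vector of truth values.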

5.2 Many Value Logic and Fuzzy Logic models

Given the possible values

$$\mu(x) = \frac{x_1 + x_2}{2}, \qquad \mu(y) = \frac{y_1 + y_2}{2}$$

where $x_1, x_2$ and $y_1, y_2$ are classical logic values, one and zero, we obtain the many-valued logic

$$\mu(x) \in \left\{0, \tfrac{1}{2}, 1\right\}, \qquad \mu(y) \in \left\{0, \tfrac{1}{2}, 1\right\}$$

The general composition rule is

$$z = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 y_1 + \beta_4 y_2 + \beta_5 x_1 y_1 + \beta_6 x_1 y_2 + \beta_7 x_2 y_1 + \beta_8 x_2 y_2$$

We can simplify and generalise the previous model in this way:

$$z = \beta_1 + \beta_2 \sum_j \frac{x_j}{n} + \beta_3 \sum_j \frac{y_j}{n} + \beta_4 \sum_j \frac{x_j y_j}{n} \qquad \text{or} \qquad z = \beta_1 + \beta_2 \bar{x} + \beta_3 \bar{y} + \beta_4 \overline{xy}$$

where the bar denotes the average value.

Example 4

For the AND operation in classical logic we have

$$\beta_1 = 0, \quad \beta_2 = 0, \quad \beta_3 = 0, \quad \beta_4 = 1$$

In the many-valued model we have

$$z = \mu(p \wedge q) = \frac{x_1 y_1 + x_2 y_2}{2}$$

So we have a composition rule whose values, for $\mu(x), \mu(y) \in \{0, \tfrac{1}{2}, 1\}$, again lie in $\{0, \tfrac{1}{2}, 1\}$, and which can be separated into two AND operations

$$q_{\wedge,1} = \min(x_1, y_1), \qquad q_{\wedge,2} = \min(x_2, y_2), \qquad q_{\wedge,1} \leq q_{\wedge,2}$$

6. Minimal action reasoning for fuzzy sets

Given the triangular basis of fuzzy sets

$$\mu_{A_j}(x_i) = A_j(x_i) = \begin{cases} 1 - \dfrac{|x_i - u_j|}{\Delta u} & |x_i - u_j| \leq \Delta u \\[4pt] 0 & |x_i - u_j| > \Delta u \end{cases}$$

in a graphic way we have

Figure 9 Set of triangular fuzzy sets

We have the connection matrix

$$X(x) = (A_1(x) \; A_2(x) \; \ldots \; A_5(x))$$

In a numerical way we have the five fuzzy sets $A$, one for each column of the matrix $A$, sampled at eleven points:

$$A = \begin{pmatrix}
0.1 & 0 & 0 & 0 & 0 \\
0.2 & 0 & 0 & 0 & 0 \\
0.6 & 0.2 & 0 & 0 & 0 \\
1 & 0.6 & 0.2 & 0 & 0 \\
0.6 & 1 & 0.6 & 0.2 & 0.1 \\
0.2 & 0.6 & 1 & 0.6 & 0.2 \\
0.1 & 0.2 & 0.6 & 1 & 0.6 \\
0 & 0 & 0.2 & 0.6 & 1 \\
0 & 0 & 0 & 0.2 & 0.6 \\
0 & 0 & 0 & 0 & 0.2 \\
0 & 0 & 0 & 0 & 0.1
\end{pmatrix}$$

Now, given the trapezoid fuzzy set $Y(x)$ whose values $Y_p$ are shown in figure 10,

Figure 10 Trapezoid fuzzy set

in a numerical way we have

$$Y = (0, 1, 2, 3, 3, 3, 3, 2, 1, 0, 0)^T$$

With the samples $A$ of the five fuzzy sets and the input trapezoid fuzzy set $Y$, we can project $Y$ into the five sets, viewed as a five-dimensional plane embedded in the eleven-dimensional sample space. So we have

$$QY = X(x)(X(x)^T X(x))^{-1} X(x)^T Y(y) = \begin{pmatrix} 0.303 \\ 0.605 \\ 1.785 \\ 3.232 \\ 2.911 \\ 2.973 \\ 3.044 \\ 2.039 \\ 0.836 \\ 0.192 \\ 0.096 \end{pmatrix}, \qquad \beta(x, y) = \begin{pmatrix} 3.026 \\ -0.151 \\ 1.482 \\ 1.307 \\ 0.958 \end{pmatrix}$$

In a graphic way, $QY$, whose numerical values are $QA_p$, is shown in figure 11.

Figure 11 Projection of the trapezoid fuzzy set into the subspace of the fuzzy sets A

We have $QY = 3.026\,A_1 - 0.151\,A_2 + 1.482\,A_3 + 1.307\,A_4 + 0.958\,A_5$, which is the best transformation of the trapezoid $Y$ into $Y'$, the linear composition of the original basis fuzzy sets. The parameters $\beta$ give the weight order of the fuzzy sets $A$ for the trapezoid input, for which we have

$$\text{order}(A) = \begin{pmatrix} A_1 \\ A_3 \\ A_4 \\ A_5 \\ A_2 \end{pmatrix} = \begin{pmatrix} 3.026 \\ 1.482 \\ 1.307 \\ 0.958 \\ -0.151 \end{pmatrix}$$
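The projection of a trapezoid onto a triangular basis can be sketched numerically. The centers and width below are illustrative assumptions (they are not guaranteed to match the exact samples used in the text), but the projection machinery is the same:

```python
import numpy as np

# Triangular membership functions A_j(x) = max(0, 1 - |x - u_j| / du).
xs = np.arange(11, dtype=float)
centers = np.array([3.0, 4.0, 5.0, 6.0, 7.0])   # assumed centers
du = 2.0                                        # assumed width
A = np.maximum(0.0, 1.0 - np.abs(xs[:, None] - centers[None, :]) / du)

# Trapezoid input fuzzy set Y.
Y = np.array([0, 1, 2, 3, 3, 3, 3, 2, 1, 0, 0], dtype=float)

# Project Y onto the five-dimensional subspace spanned by the fuzzy sets.
beta = np.linalg.solve(A.T @ A, A.T @ Y)
QY = A @ beta
```

The residual $Y - QY$ is orthogonal to every basis fuzzy set, which is exactly the minimum-distance property of the projection.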

7. Minimal action reasoning by oblique projection

In a graphic way we have the oblique projection

Figure 12 Oblique projection of y into B through A, with $Y' = B(A^T B)^{-1} A^T y$ and $K = A(A^T A)^{-1} A^T y$.

Given the projection operator

$$Qy = A(A^T A)^{-1} A^T y$$

we see that the oblique projection operator $P$ gives a vector whose projection on $A$ is $Qy$. From this remark we can compute the form of the oblique projection operator in this way:

$$Qy = A(A^T A)^{-1} A^T y = A(A^T A)^{-1} A^T P y = A(A^T A)^{-1} A^T [B(A^T B)^{-1} A^T] y$$

so

$$P = B(A^T B)^{-1} A^T$$

and $P$ is the oblique projection.

Example 5

Given the plane in three dimensions (column space)

$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}$$

the sample for $y$ is

$$y_1 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}$$

The orthogonal projection is

$$y' = A(A^T A)^{-1} A^T y = \begin{pmatrix} 1.167 \\ 1.667 \\ 2.167 \end{pmatrix}$$

Now for $B = ZA$ we have the oblique projection

$$y_2 = B(A^T B)^{-1} A^T y = ZA(A^T ZA)^{-1} A^T y$$

So the orthogonal projection of $y_2$ on the plane $A$ is equal to the projection of $y_1$ on the same plane. For

$$Z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

we have

$$y_2 = \begin{pmatrix} 0.714 \\ 2.571 \\ 1.714 \end{pmatrix}$$

In figure 13 we see the transformation from $y_1$ into $Y_R = Qy_1$ and also into $Py = y_2$.

Figure 13 The original samples y1 are the rhombus points; the model is the straight line; the points on the straight line are the projections of the rhombus points with minimum error. The squares are the oblique projections of the samples y1.
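The structural properties of the oblique projector $P = B(A^T B)^{-1} A^T$ can be verified numerically; the particular numbers depend on the chosen $Z$, but idempotence and the invariance of the constraint $A^T y$ always hold. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])
Z = np.diag([1.0, 3.0, 1.0])
B = Z @ A

# Oblique projector onto col(B) along the kernel of A^T.
P = B @ np.linalg.inv(A.T @ B) @ A.T
y2 = P @ y
```

Applying $P$ twice gives the same result as applying it once, and the oblique image $y_2$ has the same projection on the plane $A$ as the original $y$.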

Example 6

We show the fuzzy sets A and the fuzzy sets B in this way

$$A = \begin{pmatrix}
0.1 & 0 & 0 & 0 & 0 \\
0.2 & 0 & 0 & 0 & 0 \\
0.6 & 0.2 & 0 & 0 & 0 \\
1 & 0.6 & 0.2 & 0 & 0 \\
0.6 & 1 & 0.6 & 0.2 & 0.1 \\
0.2 & 0.6 & 1 & 0.6 & 0.2 \\
0.1 & 0.2 & 0.6 & 1 & 0.6 \\
0 & 0 & 0.2 & 0.6 & 1 \\
0 & 0 & 0 & 0.2 & 0.6 \\
0 & 0 & 0 & 0 & 0.2 \\
0 & 0 & 0 & 0 & 0.1
\end{pmatrix}, \qquad B = \begin{pmatrix}
0.1 & 0 & 0 & 0 & 0 \\
0.2 & 0 & 0 & 0 & 0 \\
0.6 & 0.2 & 0 & 0 & 0 \\
0.6 & 0.6 & 0.2 & 0 & 0 \\
0.6 & 0.6 & 0.6 & 0.2 & 0.1 \\
0.2 & 0.6 & 0.6 & 0.6 & 0.2 \\
0.1 & 0.2 & 0.6 & 0.6 & 0.6 \\
0 & 0 & 0.2 & 0.6 & 0.6 \\
0 & 0 & 0 & 0.2 & 0.6 \\
0 & 0 & 0 & 0 & 0.2 \\
0 & 0 & 0 & 0 & 0.1
\end{pmatrix}$$

In a graphic way, the basis fuzzy sets $A$ are shown in figure 14.

Figure 14 Set of fuzzy sets A

The basis of the fuzzy sets $B = ZA$ is shown in figure 15.

Figure 15 Fuzzy sets B

From the trapezoid distribution $y_p$ we obtain $QA_p$ by the orthogonal projection, and then by the oblique projection $Q$ (see figure 12) we obtain $Qy_p$, as we can see in figure 16.

Figure 16 Projection from y to B by A with the oblique projection operator.


8. Metric G for mixed spaces A and B in minimal action reasoning

For the oblique operator, where $B = ZA$, we have the minimal condition

$$\begin{aligned} & y' = B\beta = ZA\beta \\ & \min_\beta P = \beta^T A^T B \beta = \beta^T G \beta \\ & E = A^T y \end{aligned} \qquad (15)$$

where the metric of the parameter space is $G = A^T B$ and the transformation from $y$ to $y'$ is the oblique projection.

Proof:

To prove (15) we can repeat the same computation as in (8) with Lagrange multipliers and obtain

$$D(\beta) = \beta^T G \beta + \lambda^T (E - A^T B \beta) = \beta^T G \beta + \lambda^T (E - G\beta)$$

So

$$\frac{\partial D}{\partial \beta_j} = 0 \quad \text{for} \quad \lambda = 2\beta$$

$$D(\beta) = \beta^T G \beta + 2\beta^T (E - G\beta), \qquad \frac{\partial D}{\partial \beta_j} = 0 \quad \text{for} \quad E = G\beta$$

We remark that for the constraint $E = A^T y$ we have the same property as in the classical linear regression:

$$E = A^T y = (A^T B)(A^T B)^{-1} A^T y = A^T (B(A^T B)^{-1} A^T y) = A^T Q y$$

In conclusion we have the new type of projection operator to compute the parameters of the model:

$$y' = B\beta = B G^{-1} A^T y = Qy \qquad (16)$$

where $Q$ is a projection operator. In fact

$$Q = B G^{-1} A^T = B(A^T B)^{-1} A^T, \qquad Q^2 = B(A^T B)^{-1} A^T B (A^T B)^{-1} A^T = B(A^T B)^{-1} A^T = Q \qquad (17)$$

9. Optical Geometry of Fuzzy reasoning

9.1 Reflection by projection operator

Now we know that the reflection operator is represented by the graph in figure 17.

Figure 17 Reflection of y in Ref y

As we can see in figure 17, we have

$$y = Qy + (I - Q)y$$

$$\mathrm{Ref}(y) = Qy - (I - Q)y = Qy + Qy - y = (2Q - I)y$$

The reflection point of $y$ is a function of the projection operator $Q$. We remark that

$$\mathrm{Ref}(\mathrm{Ref}(y)) = (2Q - I)(2Q - I)y = (4Q^2 - 2Q - 2Q + I)y = y$$

and

$$Q\,\mathrm{Ref}(y) = Q(2Q - I)y = (2Q^2 - Q)y = Qy$$

Example 7

Given the plane in three dimensions (column space)

$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}$$

and given the sample

$$y_1 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}$$

for the reflection operator we have

$$\mathrm{Ref}(y_1) = (2Q - I)y_1 = (2A(A^T A)^{-1} A^T - I)y_1 = \begin{pmatrix} 4/3 \\ 4/3 \\ 7/3 \end{pmatrix}$$

Figure 18 The rhombus points are the original points, the dot points are the projection points, and the squares are the points generated by the reflection operator.
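The reflection of Example 7 and the algebraic identities of the reflection operator can be checked numerically. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

Q = A @ np.linalg.inv(A.T @ A) @ A.T
Ref = 2 * Q - np.eye(3)      # reflection operator 2Q - I
ref_y = Ref @ y              # (4/3, 4/3, 7/3)
```

Reflecting twice returns the original point, and the projection of the reflected point coincides with the projection of the original point, as derived above.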

Example 8

For the two-dimensional space of samples we have

$$y = \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}, \qquad A = \begin{pmatrix} \cos\beta \\ \sin\beta \end{pmatrix}$$

$$Q(\beta) = A(A^T A)^{-1} A^T = \begin{pmatrix} \cos\beta \\ \sin\beta \end{pmatrix} (\cos\beta \;\; \sin\beta) = \begin{pmatrix} \cos^2\beta & \dfrac{\sin 2\beta}{2} \\[6pt] \dfrac{\sin 2\beta}{2} & \sin^2\beta \end{pmatrix}$$

And the reflection operator is

$$\mathrm{Ref}(\beta) = 2Q(\beta) - I = \begin{pmatrix} \cos 2\beta & \sin 2\beta \\ \sin 2\beta & -\cos 2\beta \end{pmatrix}$$

Given $y$, we have the reflection

$$\mathrm{Ref}(\beta)\, y(\alpha) = (2Q - I)y = \begin{pmatrix} \cos\alpha \cos 2\beta + \sin\alpha \sin 2\beta \\ \cos\alpha \sin 2\beta - \sin\alpha \cos 2\beta \end{pmatrix} = \begin{pmatrix} \cos(2\beta - \alpha) \\ \sin(2\beta - \alpha) \end{pmatrix}$$

Numerical example 9

For $\alpha = 60°$, $\beta = 45°$ we have

$$\mathrm{Ref}(\beta)\, y(\alpha) = \begin{pmatrix} \cos(2 \cdot 45°) & \sin(2 \cdot 45°) \\ \sin(2 \cdot 45°) & -\cos(2 \cdot 45°) \end{pmatrix} \begin{pmatrix} \cos 60° \\ \sin 60° \end{pmatrix} = \begin{pmatrix} \cos 30° \\ \sin 30° \end{pmatrix} = \begin{pmatrix} \dfrac{\sqrt{3}}{2} \\[6pt] \dfrac{1}{2} \end{pmatrix}$$

9.2 Minimum path and reflections

Now, given two points $x$ and $y$, we want to find the minimum path between $x$ and $y$ that is reflected in $C$ by the model (a hyper-plane in $S$).

Figure 19 Reflection from x to y by C in A

Now, given the points

$$x = \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}, \qquad y = \rho \begin{pmatrix} \cos\beta \\ \sin\beta \end{pmatrix}, \qquad A = \begin{pmatrix} \cos\gamma \\ \sin\gamma \end{pmatrix}$$

we compute the reflection point:

$$\mathrm{Ref}(y) = (2Q - I)y = \rho \begin{pmatrix} \cos 2\gamma \cos\beta + \sin 2\gamma \sin\beta \\ \sin 2\gamma \cos\beta - \cos 2\gamma \sin\beta \end{pmatrix} = \rho \begin{pmatrix} \cos(2\gamma - \beta) \\ \sin(2\gamma - \beta) \end{pmatrix}$$

With the graph

Figure 20 Oblique projection and reflection

Now we compute the vector $F$ joining the point $\mathrm{Ref}(y)$ to the point $x$:

$$F = x - \mathrm{Ref}(y) = \begin{pmatrix} \cos\alpha - \rho\cos(2\gamma - \beta) \\ \sin\alpha - \rho\sin(2\gamma - \beta) \end{pmatrix}$$

The operator whose columns are orthogonal to the vector $F$ is

$$E = I - F(F^T F)^{-1} F^T$$

In fact we have

$$EF = (I - F(F^T F)^{-1} F^T)F = F - F = 0$$

With the expression of $F$, the entries of $E$ become explicit trigonometric functions of $\alpha$, $\beta$, $\gamma$ and $\rho$. The point $C$ is then given by the oblique projection whose operator was computed in the previous chapter:

$$C = A(E^T A)^{-1} E^T x$$

Example 10

Given

$$x = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} \qquad \text{and} \qquad A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}$$

we want to compute the minimum action from $x$ to $y$ through $C$. First we use the reflection operator to obtain $\mathrm{Ref}(y)$ (see figure 20):

$$\mathrm{Ref}(y) = (2Q - I)y = (2A(A^T A)^{-1} A^T - I)y = \begin{pmatrix} 4/3 \\ 4/3 \\ 7/3 \end{pmatrix}$$

Then we compute the segment that joins $x$ with $\mathrm{Ref}(y)$:

$$x - \mathrm{Ref}(y) = \begin{pmatrix} 2/3 \\ 2/3 \\ -4/3 \end{pmatrix}$$

Then we compute the operator $E$:

$$E = I - (x - \mathrm{Ref}(y))\big((x - \mathrm{Ref}(y))^T (x - \mathrm{Ref}(y))\big)^{-1} (x - \mathrm{Ref}(y))^T = \begin{pmatrix} \tfrac{5}{6} & -\tfrac{1}{6} & \tfrac{1}{3} \\[2pt] -\tfrac{1}{6} & \tfrac{5}{6} & \tfrac{1}{3} \\[2pt] \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \end{pmatrix}$$

Because the determinant of $E$ is equal to zero, we choose only two columns, which give us the plane perpendicular to $x - \mathrm{Ref}(y)$. So we have

$$E = \begin{pmatrix} \tfrac{5}{6} & -\tfrac{1}{6} \\[2pt] -\tfrac{1}{6} & \tfrac{5}{6} \\[2pt] \tfrac{1}{3} & \tfrac{1}{3} \end{pmatrix}$$

Now we are ready to project, in an oblique way, the point $x$ into the plane $A$ to obtain the point $C$. So we have

$$C = A(E^T A)^{-1} E^T x = \begin{pmatrix} 5/3 \\ 5/3 \\ 5/3 \end{pmatrix}$$

where $C$ is the wanted result. We show in figure 21 the vectors $x$, $C$, $y$.

Figure 21 Given the points x and y we compute the point C for which from x we can generate y as

we can see in figure 20
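The construction of Example 10 can be reproduced step by step. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
x = np.array([2.0, 2.0, 1.0])
y = np.array([1.0, 2.0, 2.0])

Q = A @ np.linalg.inv(A.T @ A) @ A.T
ref_y = (2 * Q - np.eye(3)) @ y            # reflection of y in the plane A
F = x - ref_y                              # segment joining x and Ref(y)

# Plane orthogonal to F: keep two independent columns of I - F F^T / (F^T F).
E = (np.eye(3) - np.outer(F, F) / (F @ F))[:, :2]

# Oblique projection of x into the plane A: the reflection point C.
C = A @ np.linalg.solve(E.T @ A, E.T @ x)  # (5/3, 5/3, 5/3)
```

The point $C$ lies on the segment joining $x$ with $\mathrm{Ref}(y)$, which is the geometric content of the minimum path.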

9.3 Rotation by reflection and projection

We know that a rotation is the composition of two reflections, as follows:

$$\mathrm{Ref}(\beta)\,\mathrm{Ref}(\alpha) = \mathrm{Rot}(2(\beta - \alpha))$$

When we fix the angle of rotation $\beta - \gamma$, we have

$$\beta - \gamma = 2(\beta - \alpha)$$

so

$$\alpha = \beta - \frac{\beta - \gamma}{2} = \frac{\beta + \gamma}{2}$$

We can decompose the rotation into two reflections. In fact we have

$$\mathrm{Ref}(\alpha) = \mathrm{Ref}\!\left(\frac{\beta + \gamma}{2}\right) = \begin{pmatrix} \cos(\beta + \gamma) & \sin(\beta + \gamma) \\ \sin(\beta + \gamma) & -\cos(\beta + \gamma) \end{pmatrix}$$

And now we have

$$\mathrm{Ref}(\beta)\,\mathrm{Ref}\!\left(\frac{\beta + \gamma}{2}\right) = \begin{pmatrix} \cos 2\beta & \sin 2\beta \\ \sin 2\beta & -\cos 2\beta \end{pmatrix} \begin{pmatrix} \cos(\beta + \gamma) & \sin(\beta + \gamma) \\ \sin(\beta + \gamma) & -\cos(\beta + \gamma) \end{pmatrix} = \begin{pmatrix} \cos(\beta - \gamma) & -\sin(\beta - \gamma) \\ \sin(\beta - \gamma) & \cos(\beta - \gamma) \end{pmatrix} = \mathrm{Rot}(\beta - \gamma)$$

In a graphic way we have

Figure 22 Rotation by reflection
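The two-reflection decomposition can be verified numerically for arbitrary angles. A minimal sketch, assuming NumPy; the angles are illustrative:

```python
import numpy as np

def Ref(t):
    # Reflection across the line at angle t: 2 Q(t) - I.
    return np.array([[np.cos(2 * t), np.sin(2 * t)],
                     [np.sin(2 * t), -np.cos(2 * t)]])

def Rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t), np.cos(t)]])

beta, gamma = 0.9, 0.3   # arbitrary angles in radians
R = Ref(beta) @ Ref((beta + gamma) / 2)
```

The product of the two reflections equals the rotation by the angle $\beta - \gamma$, exactly as in the derivation above.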

Remark

Each rotation can be written as a composition of projection operators

In fact we have

$$\mathrm{Ref}(\beta)\,\mathrm{Ref}\Bigl(\frac{\beta+\gamma}{2}\Bigr) = (2Q(\beta) - I)\Bigl(2Q\Bigl(\frac{\beta+\gamma}{2}\Bigr) - I\Bigr) = I + 4Q(\beta)\,Q\Bigl(\frac{\beta+\gamma}{2}\Bigr) - 2Q(\beta) - 2Q\Bigl(\frac{\beta+\gamma}{2}\Bigr)$$

$$= \begin{pmatrix} \cos(2\beta) & \sin(2\beta) \\ \sin(2\beta) & -\cos(2\beta) \end{pmatrix}\begin{pmatrix} \cos(\beta+\gamma) & \sin(\beta+\gamma) \\ \sin(\beta+\gamma) & -\cos(\beta+\gamma) \end{pmatrix} = \begin{pmatrix} \cos(\beta-\gamma) & -\sin(\beta-\gamma) \\ \sin(\beta-\gamma) & \cos(\beta-\gamma) \end{pmatrix} = \mathrm{Rot}(\beta-\gamma)$$
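The projection-operator form can also be verified numerically. A sketch, taking Q(θ) to be the orthogonal projector onto the line at angle θ:

```python
import numpy as np

def Q(t):
    """Orthogonal projector onto the line at angle t: Q = u u^T for the unit vector u."""
    u = np.array([np.cos(t), np.sin(t)])
    return np.outer(u, u)

def Rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

beta, gamma = 0.7, 0.3
I = np.eye(2)
# each reflection is 2Q - I, so the rotation is a product of projector expressions
lhs = (2*Q(beta) - I) @ (2*Q((beta + gamma)/2) - I)
expanded = I + 4*Q(beta) @ Q((beta + gamma)/2) - 2*Q(beta) - 2*Q((beta + gamma)/2)
```

The expanded form matches the product of the two reflections and equals Rot(β−γ); the projectors themselves are idempotent, Q² = Q.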


With reflection and rotation we cover all possible cases. All these operators can be decomposed into a chain of projection operators.

Because each complex rotation or orthogonal matrix in n dimensions can be decomposed as

$$\mathrm{Rot}(\theta_1, \theta_2, \ldots, \theta_n) = \mathrm{Rot}(\theta_1)\,\mathrm{Rot}(\theta_2) \cdots \mathrm{Rot}(\theta_n)$$

and because each rotation can be represented by two reflections, we can decompose each rotation into 2n reflections.

Example 11

In three dimensions we have

$$\mathrm{Rot}(\theta_1) = \begin{pmatrix} \cos\theta_1 & -\sin\theta_1 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad \mathrm{Rot}(\theta_2) = \begin{pmatrix} \cos\theta_2 & 0 & -\sin\theta_2 \\ 0 & 1 & 0 \\ \sin\theta_2 & 0 & \cos\theta_2 \end{pmatrix} \qquad \mathrm{Rot}(\theta_3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_3 & -\sin\theta_3 \\ 0 & \sin\theta_3 & \cos\theta_3 \end{pmatrix}$$

Each rotation can be decomposed into two reflections and four projection operators. Each three-dimensional complex rotation can be decomposed into six reflections and six projections.
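Example 11 can be checked numerically. A sketch, assuming the standard sign convention for the axis rotations above: each axis rotation is the product of two reflections acting in its own plane, and the composite of the three is orthogonal with determinant +1.

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1.0, 0], [s, 0, c]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0, 0], [0, c, -s], [0, s, c]])

def ref_z(t):
    """Reflection about the plane spanned by the z-axis and the line at angle t in xy."""
    c2, s2 = np.cos(2*t), np.sin(2*t)
    return np.array([[c2, s2, 0], [s2, -c2, 0], [0, 0, 1.0]])

b, a = 0.9, 0.2                      # two reflections give one axis rotation
R2refl = ref_z(b) @ ref_z(a)         # should equal rot_z(2(b - a))

t1, t2, t3 = 0.4, 1.1, -0.5          # arbitrary test angles
R = rot_z(t1) @ rot_y(t2) @ rot_x(t3)
```

Each `ref_z` has determinant −1, so every axis rotation uses two reflections and the three-angle composite uses six, matching the count in the text.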

9.4 Refraction as composition of projection operators

With

$$x = \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}, \qquad A = \begin{pmatrix} \cos\beta \\ \sin\beta \end{pmatrix}, \qquad B = \begin{pmatrix} \cos\gamma \\ \sin\gamma \end{pmatrix}$$

the first projection from x to y onto the space A is


$$Q_A = A(A^T A)^{-1}A^T = \begin{pmatrix} \cos^2\beta & \dfrac{\sin(2\beta)}{2} \\ \dfrac{\sin(2\beta)}{2} & \sin^2\beta \end{pmatrix}$$

So we have

$$y = Q_A x = \begin{pmatrix} \cos^2\beta & \dfrac{\sin(2\beta)}{2} \\ \dfrac{\sin(2\beta)}{2} & \sin^2\beta \end{pmatrix}\begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}$$

And with

$$Q_B = B(B^T B)^{-1}B^T = \begin{pmatrix} \cos^2\gamma & \dfrac{\sin(2\gamma)}{2} \\ \dfrac{\sin(2\gamma)}{2} & \sin^2\gamma \end{pmatrix}$$

we have

$$Q_B y = Q_B Q_A x = \begin{pmatrix} \cos^2\gamma & \dfrac{\sin(2\gamma)}{2} \\ \dfrac{\sin(2\gamma)}{2} & \sin^2\gamma \end{pmatrix}\begin{pmatrix} \cos^2\beta & \dfrac{\sin(2\beta)}{2} \\ \dfrac{\sin(2\beta)}{2} & \sin^2\beta \end{pmatrix}\begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}$$

In a formal way we have

$$Q_B Q_A = B(B^T B)^{-1}B^T \, A(A^T A)^{-1}A^T$$
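For unit vectors at angles α, β, γ the projection chain collapses to a closed form: the output is the direction of B scaled by cos(γ−β) cos(β−α). A short sketch verifying this:

```python
import numpy as np

def proj(u):
    """Orthogonal projector onto the line spanned by u: u (u^T u)^{-1} u^T."""
    return np.outer(u, u) / (u @ u)

alpha, beta, gamma = 0.2, 0.6, 1.1       # arbitrary test angles
x = np.array([np.cos(alpha), np.sin(alpha)])
A = np.array([np.cos(beta),  np.sin(beta)])
B = np.array([np.cos(gamma), np.sin(gamma)])

QA, QB = proj(A), proj(B)
y = QA @ x            # first projection, onto the line of A
z = QB @ y            # second projection, onto the line of B: the refracted ray
```

Each projection shortens the ray by the cosine of the angle between the two lines, which is exactly the geometric picture of the wave-front ray bending in figure 24.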

Now we know that for any wave in optics we have the propagation rule, or Eikonal (field), whose propagation ray is always orthogonal to the tangent of the wave front. In conclusion, when the form of the wave changes, the ray changes, but it is always perpendicular to the tangent, so the movement is a sequence of projection operators.


Figure 23 Propagation of the wave ray in geometric optics by projections into different tangent

planes in Y and Z.

In a graphic way we have

Figure 24 Wave front in the refraction as a chain of projection from x to A and from y to B = Z A

where Z is the operator by which we transform the fuzzy reference A into the fuzzy reference B.

In physical refraction we have a chain of projections in two-dimensional space. Now, with the morphogenetic projection in n-dimensional space, we can simulate refraction in n dimensions. Refraction is a transformation of the basis fuzzy set A into the basis fuzzy set B = Z A.

Example 12

Given the basis A with five fuzzy sets and the basis B, again with five fuzzy sets, as shown below:



$$A := \begin{pmatrix}
0.1 & 0 & 0 & 0 & 0 \\
0.2 & 0 & 0 & 0 & 0 \\
0.6 & 0.2 & 0 & 0 & 0 \\
1 & 0.6 & 0.2 & 0 & 0 \\
0.6 & 1 & 0.6 & 0.2 & 0.1 \\
0.2 & 0.6 & 1 & 0.6 & 0.2 \\
0.1 & 0.2 & 0.6 & 1 & 0.6 \\
0 & 0 & 0.2 & 0.6 & 1 \\
0 & 0 & 0 & 0.2 & 0.6 \\
0 & 0 & 0 & 0 & 0.2 \\
0 & 0 & 0 & 0 & 0.1
\end{pmatrix} \qquad B := \begin{pmatrix}
0.1 & 0 & 0 & 0 & 0 \\
0.2 & 0 & 0 & 0 & 0 \\
0.6 & 0.2 & 0 & 0 & 0 \\
0.6 & 0.6 & 0.2 & 0 & 0 \\
0.6 & 0.6 & 0.6 & 0.2 & 0.1 \\
0.2 & 0.6 & 0.6 & 0.6 & 0.2 \\
0.1 & 0.2 & 0.6 & 0.6 & 0.6 \\
0 & 0 & 0.2 & 0.6 & 0.6 \\
0 & 0 & 0 & 0.2 & 0.6 \\
0 & 0 & 0 & 0 & 0.2 \\
0 & 0 & 0 & 0 & 0.1
\end{pmatrix}$$

We have the refraction operator

$$\mathrm{Refraction} = Q_B Q_A = B(B^T B)^{-1}B^T \, A(A^T A)^{-1}A^T$$
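Example 12 can be reproduced numerically. A sketch, assuming the fuzzy bases A and B of Example 12 (each column is one fuzzy set over 11 points); the helper `tri` and the input signal `y_in` are my constructions, not the paper's notation:

```python
import numpy as np

def tri(peak):
    """Triangular fuzzy set over 11 points: 0.2, 0.6, 1, 0.6, 0.2 centred at index `peak`."""
    col = np.zeros(11)
    for offset, value in [(-2, 0.2), (-1, 0.6), (0, 1.0), (1, 0.6), (2, 0.2)]:
        col[peak + offset] = value
    return col

# basis A: five triangular fuzzy sets; the two outer sets carry extra 0.1 shoulders
A = np.column_stack([tri(3), tri(4), tri(5), tri(6), tri(7)])
A[0, 0], A[6, 0] = 0.1, 0.1
A[4, 4], A[10, 4] = 0.1, 0.1
# basis B: the same sets with the peak 1 clipped to a 0.6 plateau (trapezoidal)
B = np.minimum(A, 0.6)

def ortho_proj(M):
    """Q = M (M^T M)^{-1} M^T, the orthogonal projector onto the column space of M."""
    return M @ np.linalg.inv(M.T @ M) @ M.T

QA, QB = ortho_proj(A), ortho_proj(B)
refraction = QB @ QA                           # the refraction operator Q_B Q_A

y_in = 4 * np.sin(np.linspace(0, np.pi, 11))   # arbitrary input signal
y_refracted = refraction @ y_in
```

The output always lies in the span of the basis B, which is the fuzzy analogue of the refracted ray lying in the second medium.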

And we have the refraction result for fuzzy sets, where the input is $y_p$, the first orthogonal projection is $(Q_A y)_p$ and the final refraction result is $\mathrm{Refraction}_p$.


Figure 25 Fuzzy inference process by refraction in 11 dimensions

10. Conclusion

In this paper we present the minimum action reasoning, by which we can move in a multidimensional space with projections, reflections, rotations and refractions. For each operation we build a special multidimensional operator and a special control by models. Because the models are surfaces, we can build a set of surfaces by which we can project, reflect or orient rays to control the movement from the initial point to the final point. The idea is to simulate the best movement or action joining two points with a path, controlled by surfaces, of minimum distance (geodesic). The multidimensional space can be of any type, to cover different applications such as linear and non-linear regression, fuzzy transformations, inferential fuzzy logic and many other future applications.


References:

[1] Chongfu H., Yong S., Towards Efficient Fuzzy Information Processing, Physica-Verlag, a Springer-Verlag company, 2003.

[2] Diamond P., Fuzzy least squares, Information Sciences 46(3), 141-157, 1988.

[3] Fatmi A., Resconi G., A new computing principle, Il Nuovo Cimento, Vol. 101 B, N. 2, 239-242, February 1988.

[4] Klir G., Yuan B., Fuzzy Sets and Fuzzy Logic, Prentice Hall PTR, New Jersey, 1995.

[5] Nikravesh M., Intelligent computing techniques for complex systems, in Soft Computing and Intelligent Data Analysis in Oil Exploration, 651-672, Elsevier, 2003.

[6] Resconi G., Nikravesh M., Morphic computing, IFSA 2007 World Congress, Cancun, Mexico, June 18-21, 2007.

[7] Resconi G., The morphogenetic systems in risk analysis, Proceedings of the International Conference on Risk Analysis and Crisis Response, September 25-26, 2007, 161-165, Shanghai, China.

[8] Resconi G., van der Wal A.J., Morphogenetic neural networks encode abstract rules by data, Information Sciences 142, 249-273, 2002.

[9] Resconi G., Nikravesh M., Morphic computing: concepts and foundation, in Nikravesh M., Zadeh L.A., Kacprzyk J. (eds.), "Forging the New Frontiers: Fuzzy Pioneers I", Springer-Verlag, Studies in Fuzziness and Soft Computing, July 2007.

[10] Resconi G., Nikravesh M., Morphic computing: quantum and field, in Nikravesh M., Zadeh L.A., Kacprzyk J. (eds.), "Forging the New Frontiers: Fuzzy Pioneers II", Springer, Studies in Fuzziness and Soft Computing, July 2007.

[11] Resconi G., Jain L.C., Intelligent Agents, Springer, 2004.

[12] Zadeh L.A., Fu K.S., Tanaka K., Shimura M. (eds.), Fuzzy Sets and Their Applications to Cognitive and Decision Processes, Academic Press, New York, 1975.

[13] Zadeh L.A., Toward a generalized theory of uncertainty (GTU) - an outline, Information Sciences, Volume 172, Issues 1-2, 9 June 2005, 1-40.

[14] Zadeh L.A., Is there a need for fuzzy logic?, Information Sciences, Volume 178, Issue 13, 1 July 2008, 2751-2779.
