Article

A Lattice-Computing ensemble for reasoning based on formal fusion of disparate data types, and an industrial dispensing application


Abstract

By “fusion” this work means integration of disparate types of data including (intervals of) real numbers as well as possibility/probability distributions defined over the totally-ordered lattice (R,⩽) of real numbers. Such data may stem from different sources including (multiple/multimodal) electronic sensors and/or human judgement. The aforementioned types of data are presented here as different interpretations of a single data representation, namely Intervals’ Number (IN). It is shown that the set F of INs is a partially-ordered lattice (F,⪯) originating, hierarchically, from (R,⩽). Two sound, parametric inclusion measure functions σ: F^N × F^N → [0,1] result in the Cartesian product lattice (F^N,⪯) towards decision-making based on reasoning. In conclusion, the space (F^N,⪯) emerges as a formal framework for the development of hybrid intelligent fusion systems/schemes. A fuzzy lattice reasoning (FLR) ensemble scheme, namely FLR pairwise ensemble, or FLRpe for short, is introduced here for sound decision-making based on descriptive knowledge (rules). Advantages include the sensible employment of a sparse rule base, employment of granular input data (to cope with imprecision/uncertainty/vagueness), and employment of all-order data statistics. The advantages as well as the performance of our proposed techniques are demonstrated, comparatively, by computer simulation experiments regarding an industrial dispensing application.
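To make the abstract's central claim concrete, that real numbers, intervals, and distributions are all interpretations of a single representation, here is a minimal sketch assuming the common K-level interval representation of an IN; the function names and the choice K = 32 are illustrative, not taken from the paper.

```python
import numpy as np

K = 32  # number of levels h in (0, 1]; an illustrative choice

def in_from_number(x, K=K):
    # A real number x maps to the trivial IN: the degenerate interval [x, x] at every level h.
    return np.column_stack([np.full(K, float(x)), np.full(K, float(x))])

def in_from_interval(a, b, K=K):
    # An interval [a, b] maps to the IN whose level-h interval is [a, b] for every h.
    return np.column_stack([np.full(K, float(a)), np.full(K, float(b))])

# Disparate inputs now live in the same lattice F, so a single inclusion
# measure (or metric) applies uniformly to all of them.
sensor_reading = in_from_number(4.2)         # a real number from an electronic sensor
expert_range   = in_from_interval(3.5, 5.0)  # an interval from human judgement
```

A distribution induced from data samples fills the remaining case; see the CALCIN-style sketch further below.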


... We point out that due to (6) only half of the errors in Tables I, II and III should be considered as detailed in [23]. When all the data were used for training the average error was 10.39. [A table of per-prediction errors, e.g. (F9, F10) → F11 through (F11, F12) → F13 with grand averages, spilled into the text here and is omitted.] We repeated the aforementioned experiments using a conventional Back Propagation Neural Network (BPNN) [30], [31] having the same architecture as the INNN shown in Fig. 1. The latter was achieved by replacing an IN by a single number, namely the mean value of the IN's corresponding PDF. ...
... In addition, note that no ad hoc feature extraction was employed here. The latter is a remarkable advantage of IN-based techniques in a wide range of applications. [Interleaved table residue of per-prediction errors and grand averages omitted.] Furthermore, there are instruments for optimizing performance by parameter optimization of the real functions v(.) and θ(.) in equation (5). ...
... The discrete inclusion relation method estimates the class label of an unlabeled input from the discrete inclusion relation between an input with a determined class label and an input without one; it includes techniques such as random forest (RF) and granular computing (GrC). In this paper, we mainly study classification algorithms using GrC, especially GrC in the form of hyperbox granules, whose superiority and feasibility are shown in references [4][5][6][7][8][9][10][11]. ...
... Operations between two granules are expressed in the equivalent form of membership grades, which are produced by the two triangular norms [15]. Kaburlasos defined the join operation and the meet operation as inducing granules of different granularity in terms of the theory of lattice computing [5,6]. Kaburlasos also defined the fuzzy inclusion measure between two granules on the basis of the defined join and meet operations, and the fuzzy lattice reasoning classification algorithm was designed based on the distance between the beginning point and the endpoint of the hyperbox granule [7]. ...
... Since there are no data with the other class label lying in the join hyperbox granule [x2, x2] ∨ G1 = [4, 6, 7, 7], G1 is replaced by the join hyperbox granule, namely G1 = [4, 6, 7, 7], as shown in Figure 3b. The third datum x6 with the same class label as G1 is selected to generate the atomic hyperbox granule [x6, x6] = [5, 9, 5, 9], which is joined with G1 and forms the join hyperbox granule [x6, x6] ∨ G1 = [4, 6, 7, 9]. As there are no data with the other class label lying in the join hyperbox granule [x6, x6] ∨ G1 = [4, 6, 7, 9], G1 is replaced by [x6, x6] ∨ G1 = [4, 6, 7, 9], namely G1 = [4, 6, 7, 9], as shown in Figure 3c. ...
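The join-and-check training step walked through in the excerpt is easy to state in code. The following sketch assumes hyperbox granules stored as (beginning point, endpoint) pairs, matching the [begin1, begin2, end1, end2] notation above; it illustrates the excerpt's worked example, not the published algorithm.

```python
import numpy as np

def join(g, h):
    # Join of two hyperbox granules: componentwise min of beginning points, max of endpoints.
    (gb, ge), (hb, he) = g, h
    return (np.minimum(gb, hb), np.maximum(ge, he))

def contains(g, x):
    # True iff the point x lies inside hyperbox granule g.
    gb, ge = g
    return bool(np.all(gb <= x) and np.all(x <= ge))

G1 = (np.array([4, 6]), np.array([7, 7]))   # G1 = [4, 6, 7, 7] from the excerpt
x6 = np.array([5, 9])
candidate = join((x6, x6), G1)              # [x6, x6] ∨ G1 = [4, 6, 7, 9]

# The tentative join replaces G1 only if no datum of the other class falls inside it:
other_class_data = [np.array([8, 8])]       # hypothetical points of the other class
if not any(contains(candidate, x) for x in other_class_data):
    G1 = candidate                          # G1 becomes [4, 6, 7, 9], as in Figure 3c
```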
Article
Full-text available
Parametric granular computing classification algorithms suffer from difficulties in parameter selection, repeated runs of the algorithm, and increased algorithmic complexity in comparison with nonparametric algorithms. We present nonparametric hyperbox granular computing classification algorithms (NPHBGrCs). Firstly, a granule has a hyperbox form, with the beginning point and the endpoint induced by any two vectors in N-dimensional (N-D) space. Secondly, a novel distance between the atomic hyperbox and the hyperbox granule is defined to determine the joining process between them. Thirdly, classification problems are used to verify the designed NPHBGrC. The feasibility and superiority of NPHBGrC are demonstrated on benchmark datasets in comparison with parametric algorithms such as HBGrC.
... In general, the organization of objects, granules, and information granules readily yields granular computing algorithms; for example, Jamshidi and Kaburlasos,3 Kaburlasos and Pachidis,4 Papadakis et al.,5 and Kaburlasos and Kehagias6 represent a granule by a vector and obtain a granule set containing granules with different granularities through the partial-order relation between two granules. ...
... GrC has been proposed and studied in many fields, including machine learning and data analysis. [3][4][5][6][12][13][14][15][16][17][18] In general, GrC is an emerging computing paradigm of information processing based on lattice computing theory. ...
... 16,17 Fuzzy lattice reasoning was proposed by Kaburlasos and his colleagues; it generates granules with different granularity by the meet operation and the join operation between two granules. [3][4][5][6]18 The difference between the granular structure and fuzzy lattice reasoning lies in the fuzzy relations involved. The granular structure mainly discusses the relations between object and attribute, whereas fuzzy lattice reasoning mainly discusses the partial order relations between two objects. ...
Article
Full-text available
Bottom-up and top-down are the two main computing models in granular computing, by which the granule set, including granules with different granularities, is obtained. The top-down hyperbox granular computing classification algorithm based on isolation, or IHBGrC for short, is proposed in the framework of the top-down computing model. Algorithm IHBGrC defines a novel function to measure the distance between two hyperbox granules, which is used to judge the inclusion relation between two hyperbox granules; the meet operation is used to isolate the ith class data from the other class data, and the hyperbox granule is partitioned into some hyperbox granules which include the ith class data. We compare the performance of IHBGrC with support vector machines and HBGrC, for a number of two-class problems and multiclass problems. Our computational experiments showed that IHBGrC can both speed up training and achieve comparable generalization performance.
... Recall that an IN computed by algorithm CALCIN retains all-order data statistics (Kaburlasos et al., 2013a). In the aforementioned context, the capacity as well as the rich potential of INs, especially in industrial applications, has been demonstrated (Kaburlasos and Kehagias, 2014; Kaburlasos and Pachidis, 2014; Papadakis and Kaburlasos, 2010). ...
... INs have been used in an array of computational intelligence applications regarding clustering, classification and regression (Kaburlasos and Pachidis, 2014; Kaburlasos and Papadakis, 2006; Kaburlasos et al., 2012, 2013a; Papadakis and Kaburlasos, 2010; Papadakis et al., 2014). There is experimental evidence that a parametric, IN-based scheme can be optimized toward clearly improving performance. ...
... Previous works have frequently employed an inclusion measure (σ) function as an instrument for decision making in the lattice of INs (Kaburlasos and Pachidis, 2014; Kaburlasos et al., 2012, 2013a; Papadakis et al., 2014). The interest of this work is in (fuzzy) nearest neighbor classification (Derrac et al., 2014). ...
Article
This work proposes an effective synergy of the Intervals’ Number k-nearest neighbor (INknn) classifier, which is a granular extension of the conventional knn classifier in the metric lattice of Intervals’ Numbers (INs), with the gravitational search algorithm (GSA) for stochastic search and optimization. Hence the gsaINknn classifier emerges, whose effectiveness is demonstrated here on 12 benchmark classification datasets. The experimental results show that the gsaINknn classifier compares favorably with alternative classifiers from the literature. The far-reaching potential of the gsaINknn classifier in computing with words is also delineated.
... Recall also that an employment of inclusion measure function σ(., .) for decision-making is called fuzzy lattice reasoning, or FLR for short [35]. ...
... We remark that both inclusion measures σ V and σ V . have been presented elsewhere [30], [31], [35], [42] based on a positive valuation function V in the lattice of generalized intervals rather than based on the (different) length function V in the lattice I of intervals as shown in this work. ...
... This section demonstrates an employment of our proposed techniques in a preliminary industrial application regarding liquid dispensing. The industrial problem as well as a software application platform, namely XtraSP.v1, and algorithm CALCIN have been detailed elsewhere [35]. ...
Article
Full-text available
A fuzzy inference system (FIS) typically implements a function f: R^N → T, where the domain set R denotes the totally ordered set of real numbers, whereas the range set T may be either T = R^M (i.e., FIS regressor) or a set of labels (i.e., FIS classifier), etc. This study considers the complete lattice (F, ⪯) of Type-1 Intervals’ Numbers (INs), where an IN F can be interpreted as either a possibility distribution or a probability distribution. In particular, this study concerns the matching degree (or satisfaction degree, or firing degree) part of an FIS. Based on an inclusion measure function σ: F × F → [0,1] we extend the traditional FIS design toward implementing a function f: F^N → T with the following advantages: 1) accommodation of granular inputs; 2) employment of sparse rules; and 3) introduction of tunable (global, rather than solely local) nonlinearities as explained in the manuscript. New theorems establish that an inclusion measure σ is widely (though implicitly) used by traditional FISs typically with trivial (i.e., point) input vectors. A preliminary industrial application demonstrates the advantages of our proposed schemes. Far-reaching extensions of FISs are also discussed.
... Consider the set R of real numbers. It turns out that (R̄ = R ∪ {−∞, +∞}, ≤) under the inequality relation ≤ between a, b ∈ R̄ is a complete lattice with the least element −∞ and the greatest element +∞ [25]. ...
... For lattice (L, ≤), we define the set of (closed) intervals as τ(L) = {[a, b] | a, b ∈ L and a ≤ b}. We remark that (τ(L), ≤) is a lattice with the ordering relation, lattice join and meet defined as follows [25]: [a, b] ≤ [c, d] ⇔ (c ≤ a and b ≤ d), [a, b] ∨ [c, d] = [a ∧ c, b ∨ d], and [a, b] ∧ [c, d] = [a ∨ c, b ∧ d]. A map ϕ from poset P to poset Q is an isomorphic function if both “x ≤ y in P ⇔ ϕ(x) ≤ ϕ(y) in Q” and ϕ is onto Q. Based on the positive valuation function v of lattice (L, ≤) and an isomorphic function θ : ...
... As a consequence, the degree of inclusion of an interval in another one in the lattice (τO(L), ≤) is computed as follows [25]: ...
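The three excerpts above reference interval join, a positive valuation v(.), an isomorphic function θ(.), and a degree-of-inclusion computation. A minimal sketch of how these pieces combine is given below, using the inclusion measure form σ(x, u) = v(u)/v(x ∨ u) that appears in the FLR literature; the sigmoid valuation v0 and θ(t) = −t are illustrative tunable choices, not the paper's specific parameters.

```python
import math

def v0(t):
    # A strictly increasing positive function on R; the sigmoid is one tunable choice.
    return 1.0 / (1.0 + math.exp(-t))

def join(x, y):
    # Lattice join of intervals: [a, b] v [c, d] = [min(a, c), max(b, d)].
    (a, b), (c, d) = x, y
    return (min(a, c), max(b, d))

def v(x):
    # Valuation of an interval [a, b] from v0 and the dual isomorphism theta(t) = -t:
    # v([a, b]) = v0(theta(a)) + v0(b) = v0(-a) + v0(b) > 0.
    a, b = x
    return v0(-a) + v0(b)

def sigma(x, u):
    # Degree of inclusion of interval x in interval u; equals 1 iff x lies inside u.
    return v(u) / v(join(x, u))

print(sigma((2.0, 3.0), (1.0, 4.0)))  # 1.0, since [2, 3] lies inside [1, 4]
print(sigma((0.0, 5.0), (1.0, 4.0)))  # about 0.84, since [0, 5] exceeds [1, 4]
```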
Article
Full-text available
This paper describes an enhancement of the fuzzy lattice reasoning (FLR) classifier for pattern classification based on a positive valuation function. FLR was recently described as a lattice data domain extension of the fuzzy ARTMAP neural classifier based on a lattice inclusion measure function. In this work, we improve the performance of the FLR classifier by defining a new nonlinear positive valuation function. As a consequence, the modified algorithm achieves better classification results. The effectiveness of the modified FLR is demonstrated by examples on several well-known pattern recognition benchmarks.
... Based on this coding scheme the initial watermark information of 615-bit length (601-bit message plus zero-padding) was coded to a 1275-bit length message, which was embedded into the cover image. The induction of an IN from a population of real numbers was carried out as detailed in [19]. Typical genetic optimization techniques for image watermarking have been applied [4], [20]. ...
... Second, previous engagements of INs in a FIS have used an IN exclusively for representing a data distribution [5], [8], [9], [13], [15], [19]; moreover, any optimization regarded solely the parameters of the two functions v(.) and θ(.). In this work, by contrast, we parameterized the interval representation of an IN, resulting in a substantial increase in both the number of parameters to be optimized and the number of constraints to be satisfied. ...
... • Subsequently, the Euler angles φ, θ and ψ are calculated in Python 3.7 according to Eqs. (25)–(27). ...
... However, these calculations are complex and can be applied only if the configuration of the robot and the characteristics of the joint trajectories are known. In the latter context, the Lattice Computing (LC) paradigm [26], [27] can be introduced in future work for modeling the movement of a robotic arm, aiming to give the robotic arm the “intelligence” to adapt to the characteristics of unknown trajectories, e.g., in dynamically changing environments such as vineyards. ...
... They proposed an effective synergy of the Intervals’ Number k-nearest neighbor (INknn) classifier, which is a granular extension of the conventional knn classifier in the metric lattice of Intervals’ Numbers (INs), with the gravitational search algorithm (GSA) for stochastic search and optimization. Their proposed techniques are demonstrated, comparatively, by computer simulation experiments regarding an industrial dispensing application and benchmark classification datasets [15,16]. ...
... The relations between the threshold of granularity and the evaluation for different GrCC [tabulated values omitted]. … the distance between any two artificial clusters, and the diagonal elements are defined as infinity. ...
Article
Granular computing (GrC) is a computing paradigm that realizes the transformation between two granule spaces with different granularities. A comparative analysis of granular computing clustering is discussed in this paper. Firstly, a granule is defined in the form of a vector by its center and its granularity; in particular, an atomic granule is induced by a point and has granularity 0. Secondly, the join operator realizes the transformation from the granule space with smaller granularity to the granule space with larger granularity, and is used to form the granular computing clustering (GrCC) algorithms. Thirdly, the granular computing clustering algorithms are evaluated from the view of sets, using measures such as Global Consistency Error (GCE), Normalized Variation of Information (NVI), and Rand Index (RI). The superiority and feasibility of GrCC are demonstrated in comparison with K-means and FCM by experiments on benchmark data sets.
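The bottom-up procedure this abstract outlines (atomic granules of granularity 0 grown by the join operator) can be sketched as a greedy merge loop under a granularity threshold. The (center, granularity) bookkeeping below is an illustrative reconstruction, not the paper's exact formulas.

```python
import numpy as np

def join(g, h):
    # Join two granules in (center, granularity) form via the hyperboxes they span.
    (gc, gr), (hc, hr) = g, h
    lo = np.minimum(gc - gr, hc - hr)
    hi = np.maximum(gc + gr, hc + hr)
    return ((lo + hi) / 2, float(np.max(hi - lo)) / 2)

def grcc(points, threshold):
    # Start from atomic granules (granularity 0) and keep joining pairs
    # as long as the joined granule's granularity stays within the threshold.
    granules = [(np.asarray(p, dtype=float), 0.0) for p in points]
    merged = True
    while merged and len(granules) > 1:
        merged = False
        for i in range(len(granules)):
            for j in range(i + 1, len(granules)):
                g = join(granules[i], granules[j])
                if g[1] <= threshold:
                    granules[j] = g
                    del granules[i]
                    merged = True
                    break
            if merged:
                break
    return granules  # each surviving granule is one cluster

clusters = grcc([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1]], threshold=0.5)  # two clusters
```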
... An enabling technology toward making the robots smarter is Computational Intelligence (CI) (Kaburlasos, 2006; Kaburlasos & Papakostas, 2016). Within CI, lattice computing (LC) techniques (Kaburlasos & Pachidis, 2014; Kaburlasos & Papakostas, 2015; Kaburlasos, Papadakis & Papakostas, 2013) are especially promising as explained below. ...
... Lattice Computing (LC) is a novel information processing paradigm defined as “an evolving collection of tools and mathematical modeling methodologies with the capacity to process lattice-ordered data per se including logic values, numbers, sets, symbols, graphs, etc” (Kaburlasos & Papakostas, 2015). A number of LC models have already been reported, mainly in clustering, classification and regression applications (Kaburlasos & Pachidis, 2014; Kaburlasos, Papadakis & Papakostas, 2013). The capacity of LC to deal with disparate types of data is especially promising in educational robot applications. ...
Conference Paper
Full-text available
Humans use both symbols and signs to communicate with one another. In particular, humans with learning difficulties emphasize signs. In recent years, there are mounting expectations for a seamless engagement of anthropomorphic robots in education. Our interest here is in blended learning applications involving anthropomorphic robots as assistants to human teachers toward improving education delivery to students with learning difficulties. In such context there is a need to transform symbols to signs. This paper presents preliminary application results regarding a transformation of symbols (i.e., spoken words) to signs in either the Greek or the Bulgarian sign languages by a NAO robot. We reveal advantages and disadvantages of the approach. We describe extensions toward bi-directional transformations between different physical languages and sign languages.
... GrC is a rich framework for data analysis. A Human-centric Way [12,29] and Interval-based Evolving Modeling [4,30] are used to represent collections of information granules for spatiotemporal data and for heterogeneous data in time-varying systems, respectively. ...
... A Human-centric Way: a Human-centric Way of data analysis often deals with data established by the user and distributed in space and time [12,29]. This is considered a representation of the data in an interpretable way. ...
Article
Granular computing has attracted many researchers as a new and rapidly growing paradigm of information processing. In this paper, we apply a systematic mapping study to classify granular computing research, discover relative derivations, and assess its research strength and quality. Our search scope is limited to Science Direct and IEEE Transactions papers published between January 2012 and August 2014. We defined four perspectives of classification schemes to map the selected studies: focus area, contribution type, research type and framework. Results of mapping the selected studies show that almost half of the research focus areas belong to the category of data analysis. In addition, most of the selected papers propose solutions in the research type scheme. The distribution of papers between the tool, method and enhancement categories of contribution type is almost equal. Moreover, 39% of the relevant papers belong to the rough set framework. The results show that little attention has been paid to cluster analysis in existing frameworks to discover granules for classification. We applied five clustering algorithms on three datasets from the UCI repository to compare the forms of information granules, and then classified the patterns and assigned them to a specific class based on their geometry and belongings. The clustering algorithms are DBSCAN, c-means, k-means, GAk-means and Fuzzy-GrC, and the comparison of information granules is based on coverage, misclassification and accuracy. The survey of experimental results mostly shows the Fuzzy-GrC and GAk-means algorithms to be superior to the other clustering algorithms, while the c-means clustering algorithm is inferior to the others.
... Lattice Computing was initially defined as "the collection of Computational Intelligence tools and techniques that either make use of lattice operators inf (infimum) and sup (supremum) for the construction of the computational algorithms or exploit Lattice Theory for language representation and reasoning" [12]. Lattice computing techniques have been used successfully in a number of applications including, industrial dispensing [13], structure identification [14], human facial expression recognition [15], face recognition using thermal infrared images [16], etc. ...
... Based on a population of data samples (features), an IN is induced by algorithm CALCIN described in the following [13,16]. ...
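Algorithm CALCIN itself is detailed in the cited references; the sketch below conveys only its intent under an assumed quantile construction: the induced IN's interval at level h encloses the central portion of the data, so the K nested intervals jointly encode the empirical distribution (hence all-order data statistics).

```python
import numpy as np

def calcin_sketch(samples, K=32):
    # Induce an IN (interval representation) from a population of samples.
    # At level h the interval spans the quantile range [h/2, 1 - h/2], so the
    # intervals are nested and shrink toward the median as h grows toward 1.
    levels = np.arange(1, K + 1) / K                 # h = 1/K, 2/K, ..., 1
    lo = np.quantile(samples, levels / 2)            # left endpoints a(h)
    hi = np.quantile(samples, 1 - levels / 2)        # right endpoints b(h)
    return np.column_stack([lo, hi])                 # K x 2 array of intervals

F = calcin_sketch(np.random.default_rng(0).normal(10.0, 2.0, size=500))
```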
Article
Full-text available
We present a Computer Assisted Diagnosis (CAD) system for Alzheimer’s disease (AD). The proposed CAD system employs MRI data features and applies a Lattice Computing (LC) scheme. To this end feature extraction methods are adopted from the literature, toward distinguishing healthy people from Alzheimer diseased ones. Computer assisted diagnosis is pursued by a k-NN classifier in the LC context by handling this task from two different perspectives. First, it performs dimensionality reduction over the high dimensional feature vectors and, second it classifies the subjects inside the lattice space by generating adaptively class boundaries. Computational experiments using a benchmark MRI dataset regarding AD patients demonstrate that the proposed classifier performs well comparatively to state-of-the-art classification models.
... Consider the set R of real numbers. It turns out that (R̄ = R ∪ {−∞, +∞}, ≤) under the inequality relation ≤ between a, b ∈ R̄ is a complete lattice with the least element −∞ and the greatest element +∞ [17]. ...
... is a lattice with the ordering relation, lattice join and meet defined as below [17]: [a, b] ≤ [c, d] ⇔ (c ≤ a and b ≤ d), [a, b] ∨ [c, d] = [a ∧ c, b ∨ d], [a, b] ∧ [c, d] = [a ∨ c, b ∧ d] ...
... Since information granules are partially/lattice-ordered, lattice computing is proposed for dealing with them (Kaburlasos, 2010). Recent work has extended the meaning of lattice computing to denote “an evolving collection of tools and methodologies that process lattice ordered data including logic values, numbers, sets, symbols, graphs, etc” (Kaburlasos and Theodore, 2011; Kaburlasos et al., 2013). In this paper we introduce a new granular computing classification algorithm named LCA-GRTFN based on generalised trapezoidal fuzzy numbers (TFNs) and lattice theory. ...
... For example, consider the set R of real numbers. It turns out that (R̄ = R ∪ {−∞, +∞}, ≤) under the inequality relation ≤ between a, b ∈ R is a lattice with the least element −∞ and the greatest element +∞ (Kaburlasos and Theodore, 2011). Note that, in this work, we use ‘straight’ symbols ∨, ∧, < and ≤ for real numbers, whereas ‘curly’ symbols ⋎, ⋏, ≺ and ⪯ are employed for other lattice elements; for example, 0.2 ≤ 0.4, whereas (0.2, 0.3, 0.4, 0.5; 1) ⪯ (0.1, 0.2, 0.5, 0.6; 1). ...
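The ordering example in the excerpt, (0.2, 0.3, 0.4, 0.5; 1) ⪯ (0.1, 0.2, 0.5, 0.6; 1), is consistent with an enclosure-style partial order on generalized trapezoidal fuzzy numbers. A hedged reconstruction follows; in particular, the handling of the height h is an assumption here.

```python
def tfn_leq(p, q):
    # Partial order on generalized TFNs (a, b, c, d; h): p ⪯ q when q's support
    # and core enclose p's, i.e. the dual order on the left endpoints and the
    # usual order on the right ones; comparing heights by h <= h2 is an assumption.
    (a, b, c, d, h), (a2, b2, c2, d2, h2) = p, q
    return a2 <= a and b2 <= b and c <= c2 and d <= d2 and h <= h2

print(tfn_leq((0.2, 0.3, 0.4, 0.5, 1), (0.1, 0.2, 0.5, 0.6, 1)))  # True, as in the excerpt
```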
Article
Granular computing and lattice computing are two popular topics in computational intelligence. Granular reasoning is a powerful paradigm for decision making with partially ordered information, where the information may even be incomplete or uncertain. In order to implement this reasoning process, lattice theory provides the requirements for the operations that can be used to define a relation between granules and to compute ever-changing granules. In this regard, we describe a new algorithm named LCA-GRTFN for granular reasoning, capable of dealing with the lattice of generalised trapezoidal fuzzy numbers. To assess the effectiveness of the proposed model, eighteen benchmark datasets are tested. The results compare favourably with those from a number of state-of-the-art machine learning techniques published in the literature. The results obtained confirm the effectiveness of the proposed method.
... The pilot project SVtech of autonomous cooperative robots is focusing on the following basic viticultural operations: (i) cutting (see defoliation, pruning, and harvesting), (ii) spraying (precautionary), and (iii) tying. The aforementioned multiple operations and innovations are also supported using a new AI technology, called "lattice computing" [58][59][60], toward making the robots autonomous. ...
Article
Full-text available
The viticultural sector is facing a significant maturation phase, dealing with environmental challenges to reduce agrochemical application and energy consumption, while labor shortages are increasing throughout Europe and beyond. Autonomous collaborative robots are an emerging technology and an alternative to the scarcity of human labor in agriculture. Additionally, collaborative robots could provide sustainable solutions to the growing energy demand of the sector due to their skillful precision and continuous labor. This study presents an impact assessment regarding energy consumption and greenhouse gas emissions of collaborative robots in four Greek vineyards implementing a life cycle assessment approach. Eight scenarios were developed in order to assess the annual production of four Vitis vinifera L. cultivars, namely, Asyrtiko, Cabernet Sauvignon, Merlot, and Tempranillo, integrating data from two wineries for 3 consecutive years. For each conventional cultivation scenario, an alternative was developed, substituting conventional viticultural practices with collaborative robots. The results showed that collaborative robots’ scenarios could achieve a positive environmental and energy impact compared with conventional strategies. The major reason for lower impacts is fossil fuel consumption and the efficiency of the selected robots, though there are limitations regarding their functionality, lifetime, and production. The alternative scenarios have varying energy demand and environmental impact, potentially impacting agrochemical usage and requiring new policy adjustments, leading to increased complexity and potential controversy in farm management. In this context, this study shows the benefits of collaborative robots intended to replace conventional practices in a number of viticultural operations in order to cope with climate change impacts and excessive energy consumption.
... There are two, equivalent IN representations, namely the membership-function representation and the interval representation (Fig. 1). Applications of INs have been reported regarding neural networks, fuzzy inference systems as well as machine learning [13,[22][23][24][25]. An IN here is interpreted as a cumulative possibility distribution function according to the following rationale. ...
Chapter
Accurate prediction of agricultural yield is important also toward timely engaging the resources necessary for harvest. Even more informative, though more challenging, than predicting a single number is predicting a distribution of an agricultural yield (random) variable such as fruit weight. Cumulative distribution functions are often elusive in practice; moreover, they could be nonstationary. Nevertheless, estimates of cumulative distribution functions can be induced from data samples at a sampling time. This work interprets such an estimate as a cumulative possibility distribution, which is represented by an Intervals’ Number (IN) based on the resolution identity theorem of fuzzy set theory. The orientation of this work is toward real-world applications. Optimizable parametric difference equations, defined in the metric cone of lattice-ordered INs, are proposed toward predicting an IN from past INs. Computational experiments are carried out on data collected from vineyards in northern Greece. Preliminary application results demonstrate, comparatively, the capacity of the proposed method. Future work extensions are discussed.
... An IN is a mathematical object that can represent either a fuzzy interval or a distribution of samples [2,[28][29][30]. Applications of INs have been reported to neural networks (NNs) as well as to fuzzy inference systems (FIS) [23,[31][32][33][34][35]. INs are engaged here for massive data representation in time-series as explained in the following. ...
Article
Full-text available
Our interest is in time series classification regarding cyber–physical systems (CPSs) with emphasis in human-robot interaction. We propose an extension of the k nearest neighbor (kNN) classifier to time-series classification using intervals’ numbers (INs). More specifically, we partition a time-series into windows of equal length and from each window data we induce a distribution which is represented by an IN. This preserves the time dimension in the representation. All-order data statistics, represented by an IN, are employed implicitly as features; moreover, parametric non-linearities are introduced in order to tune the geometrical relationship (i.e., the distance) between signals and consequently tune classification performance. In conclusion, we introduce the windowed IN kNN (WINkNN) classifier whose application is demonstrated comparatively in two benchmark datasets regarding, first, electroencephalography (EEG) signals and, second, audio signals. The results by WINkNN are superior in both problems; in addition, no ad-hoc data preprocessing is required. Potential future work is discussed.
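A compact sketch of the workflow this abstract describes follows; the quantile-based IN induction and the per-window distance used here stand in for the published constructions and are assumptions for illustration.

```python
import numpy as np

def induce_in(window, K=16):
    # Represent a window's sample distribution by an IN: K nested quantile intervals.
    h = np.arange(1, K + 1) / K
    return np.column_stack([np.quantile(window, h / 2), np.quantile(window, 1 - h / 2)])

def series_to_ins(x, n_windows):
    # Split a 1-D signal into equal-length windows and induce one IN per window,
    # which preserves the time dimension in the representation.
    return [induce_in(w) for w in np.array_split(np.asarray(x, dtype=float), n_windows)]

def in_dist(F, G):
    # A simple metric between INs: mean absolute endpoint difference over levels.
    return float(np.mean(np.abs(F - G)))

def winknn(train, labels, query, n_windows=8, k=3):
    # Distance between two signals = sum of IN distances over corresponding windows;
    # classify the query by majority vote among its k nearest training signals.
    q = series_to_ins(query, n_windows)
    d = [sum(in_dist(F, G) for F, G in zip(series_to_ins(x, n_windows), q)) for x in train]
    votes = [labels[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```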
... The assessment of the teachers to be interviewed will be quantified (using a Likert scale) in order to be able to make quantitative conclusions on the appropriateness of using robots in special education. A critical future addition regards the incorporation of parametric cognitive models based on novel knowledge-representation and information-processing techniques in the context of mathematical lattice theory toward computing with semantics (Kaburlasos, 2006; Kaburlasos & Pachidis, 2014). Next steps within the current project are implementation of the complete scenarios and performing structured tests of the effects of ‘blending’ robotics in education of children with special needs. ...
Conference Paper
Full-text available
The paper presents a currently developed multidisciplinary framework for implementing novel robotic solutions in education of children with special learning needs. The framework emphasizes the entertaining role of the technology in special education, empowering the child to be in control of complex technological devices under the guidance of the teacher. Implementing a robot as an educational assistant to the teacher is defined as a method for ‘blending’ robotics in special educational settings. The results from the pilot studies are presented based on interviewing therapists and from observing the first reactions of children to a technical device like a robot.
... 28 Due to its ability to significantly improve predictions, voting spans many applications ranging from simple classification tasks 29,30 to more complex implementations such as clustering, 31 pairwise comparison 32 and fuzzy systems. 33,34 The challenging step when employing a voting algorithm is the selection of the base classifiers to be combined. When the number of potential classifier combinations and the size of the dataset are rather small, the optimal classifier combination can be determined exhaustively. ...
... Finally, the whole dataset can be classified by an ensemble of the multiple local target feature sets. A lattice-computing ensemble has also been applied to the fusion of disparate data types [50]. ...
Article
Full-text available
Multi-view learning combines data from multiple heterogeneous sources and employs their complementary information to build more accurate models. Multi-instance learning represents examples as labeled bags containing sets of instances. Data from different multi-instance views cannot simply be concatenated into a single set of features due to their different cardinalities and feature spaces. This paper proposes an ensemble approach that combines view learners and pursues consensus among the weighted class predictions to take advantage of the complementary information from multiple views. Importantly, the ensemble must deal with the different feature spaces coming from each of the views, while data for the bags may be partially represented in the views. The experimental study evaluates and compares the performance of the proposal with 20 traditional, ensemble-based, and multi-view algorithms on a set of 15 multi-instance datasets. Experimental results indicate the better performance of ensemble methods over single classifiers, and especially the best results of the multi-view multi-instance approaches. Results are validated through multiple non-parametric statistical analyses.
Article
Full-text available
Robots are attractive tools that extensively enter the learning process at different levels and with specific purposes. We propose an originally designed artificial hand with five fingers that uses the Microsoft Kinect sensor as assistive technology to sense human motions and recognize gestures. In the context of learning new skills by imitation for children with special educational needs, we designed a gesture-based Wireless Hand-Kinect Framework, evaluated and optimized for real run-time. 3D finger positions over time, sensed by the Kinect depth camera, are used as features for hand gesture description, training and classification. After gesture recognition, data protocols are transmitted wirelessly to control the artificial hand motors. The mechanics of the artificial human hand, with six controllable and four dependently actuated mechanisms, is presented. The proposed wireless framework is tested by experiments for its feasibility.
... In turn, (I1, ⊆) is a complete lattice, where ⊆ is the conventional set-inclusion relation with order [a, b] ⊆ [c, d] ⇔ (c ≤ a and b ≤ d) [24] and least (resp. greatest) element … is a positive valuation on lattice (L, ≤) [10]. Hence, the function v(.) can be used to define a metric distance on the lattice (I1, ⊆) of (Type-1) intervals. ...
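The metric the excerpt alludes to can be sketched via the standard lattice construction d(x, y) = v(x ∨ y) − v(x ∧ y), where the meet of disjoint intervals is treated as a generalized interval so the formula stays well defined; the sigmoid positive valuation is an illustrative choice, not the paper's specific function.

```python
import math

def v0(t):
    # A strictly increasing positive valuation on R; the sigmoid is one tunable choice.
    return 1.0 / (1.0 + math.exp(-t))

def dist(x, y):
    # d(x, y) = v(x v y) - v(x ^ y) on the lattice of intervals, with
    # v([a, b]) = v0(-a) + v0(b); d(x, x) = 0 and d grows as x, y pull apart.
    (a, b), (c, d) = x, y
    v_join = v0(-min(a, c)) + v0(max(b, d))   # valuation of the join [a^c, b v d]
    v_meet = v0(-max(a, c)) + v0(min(b, d))   # valuation of the (generalized) meet
    return v_join - v_meet

print(dist((1.0, 2.0), (1.0, 2.0)))  # 0.0
print(dist((1.0, 2.0), (3.0, 4.0)))  # > 0, the intervals are apart
```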
Article
Full-text available
This work proposes an enhancement of Formal Concept Analysis (FCA) by Lattice Computing (LC) techniques. More specifically, a novel Galois connection is introduced toward defining tunable metric distances as well as tunable inclusion measure functions between formal concepts induced from hybrid (i.e., nominal and numerical) data. An induction of formal concepts is pursued here by a novel extension of the Karnaugh map, or K-map for short, technique from digital electronics. In conclusion, granular classification can be pursued. The capacity of a classifier based on formal concepts is demonstrated here with promising results. The formal concepts are interpreted as descriptive decision-making knowledge (rules) induced from the training data.
... By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a greater understanding of the inherent knowledge structure. Granular computing is thus essential in human problem solving and hence has a very significant impact on the design and implementation of intelligent systems, such as classification problems [8][9][10][11][12][13]. ...
... The granule is induced by the training datum, and the transformation between two granules is realized by the operation between them. The relation between two granules is computed via the positive valuation function of granules [8][9][10]. ...
Article
Full-text available
Granular computing with the l-norm is used to zoom images. Firstly, a granule is represented by the l-norm and has the form of a hypercube. Secondly, the bottom-up computing model is adopted to transform the microcosmic world into the macroscopic world by the designed join operation between two hypercube granules. The proposed granular computing method is used to zoom an image and achieves a super-resolution image for the input low-resolution image. Experimental results show that granular computing with the l-norm reduces the error between the original image and the reconstructed super-resolution image compared with bicubic interpolation and sparse representation.
... Finally, feature extraction and fusion of multiple views do not necessarily have to be considered two separate processing stages. For instance, in [40,41], lattice computing is proposed for low-dimensional representation of 2D shapes and data fusion. ...
Article
Full-text available
This paper presents a novel silhouette-based feature for vision-based human action recognition, which relies on the contour of the silhouette and a radial scheme. Its low-dimensionality and ease of extraction result in an outstanding proficiency for real-time scenarios. This feature is used in a learning algorithm that by means of model fusion of multiple camera streams builds a bag of key poses, which serves as a dictionary of known poses and allows converting the training sequences into sequences of key poses. These are used in order to perform action recognition by means of a sequence matching algorithm. Experimentation on three different datasets returns high and stable recognition rates. To the best of our knowledge, this paper presents the highest results so far on the MuHAVi-MAS dataset. Real-time suitability is given, since the method easily performs above video frequency. Therefore, the related requirements that applications as ambient-assisted living services impose are successfully fulfilled.
... Thus, for future works, we aim to use a more accurate system as the decision-making unit. For instance, soft computing methods, such as the work done in [27], will be used to obtain better results. Additionally, other nonintrusive methods will be examined to find a better combination of different drowsiness detection methods. ...
Article
Full-text available
This study proposes a drowsiness detection approach based on the combination of several different detection methods, with robustness to the input signal loss. Hence, if one of the methods fails for any reason, the whole system continues to work properly. To choose correct combination of the available methods and to utilize the benefits of methods of different categories, an image processing-based technique as well as a method based on driver-vehicle interaction is used. In order to avoid driving distraction, any use of an intrusive method is prevented. A driving simulator is used to gather real data and then artificial neural networks are used in the structure of the designed system. Several tests were conducted on twelve volunteers while their sleeping situations during one day prior to the tests, were fully under control. Although the impact of the proposed system on the improvement of the detection accuracy is not remarkable, the results indicate the main advantages of the system are the reliability of the detections and robustness to the loss of the input signals. The high reliability of the drowsiness detection systems plays an important role to reduce drowsiness related road accidents and their associated costs.
... , is a complete lattice with the least element −∞ and the greatest element +∞ [41]. A lattice (L, ≤) is totally ordered if and only if for any ...
Article
As networking and communication technology becomes more widespread, the quantity and impact of system attackers have increased rapidly. The methodology of intrusion detection (IDS) is generally classified into two broad categories according to the detection approach: misuse detection and anomaly detection. In the misuse detection approach, abnormal system behavior is defined first, and then any other behavior is defined as normal. The main goal of the anomaly detection approach is to construct a model representing normal activities; then, any deviation from this model can be considered an anomaly and recognized as an attack. Recently, much attention has been paid to the application of lattice theory in different fields. In this work we propose a lattice-based nearest neighbor classifier capable of distinguishing between bad connections, called attacks, and good normal connections. A new nonlinear valuation function is introduced to tune the performance of the proposed model. The performance of the algorithm was evaluated using the KDD Cup 99 dataset, the benchmark dataset used by intrusion detection systems researchers. Simulation results confirm the effectiveness of the proposed method.
Chapter
Ambient intelligence (AmI) is a user-centric multidisciplinary paradigm that grounds its origins in the works of Weiser and Norman on ubiquitous and disappearing computing. Basically, it refers to adding intelligence to devices to make our surrounding environment sensitive, responsive, adaptive, and smart. An important feature that AmI applications need to achieve is the capability of applying human representations and reasoning. Humans represent reality with hierarchies and abstractions, look at real problems from different perspectives, and analyze a cognitive target with capabilities of granulation. Granular computing (GrC) is a suitable paradigm to design and develop AmI systems with capabilities of human representations, reasoning, and decision-making. In this article, we offer an overview of how current GrC results are positioned with respect to the enabling technologies of AmI, and present a comprehensive view of how GrC can enforce AmI with human-oriented perception, representation, reasoning, and decision-making. This comprehensive view is based on a new computational model, namely, granular situation awareness.
Article
Full-text available
Situation Awareness is defined by Endsley as “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” and it deals with the continuous extraction of environmental information and its integration with prior knowledge for directing further perception and anticipating future events. To realize systems for Situation Awareness, individual pieces of raw information (e.g. sensor data) should be interpreted into a higher, domain-relevant concept called “situation”, which is an abstract state of affairs interesting to specific applications. The power of using “situations” lies in their ability to provide a simple, human-understandable representation of, for instance, sensor data. The aim of this work is to propose an overview of the applications of Computational Intelligence and Granular Computing for the implementation of systems supporting Situation Awareness. In this scenario, several and heterogeneous Computational Intelligence models and techniques (e.g. Fuzzy Cognitive Maps, Fuzzy Formal Concept Analysis, Dempster–Shafer Theory of Evidence, Ontologies, Knowledge Reasoning, Evolutionary Computing, Intelligent Agents) can be employed to implement such systems. Moreover, in a Situation Identification process, huge volumes of heterogeneous data need processing (e.g. fusion). With respect to this issue, Granular Computing is an information processing theory for using “granules” (e.g. subsets, intervals, fuzzy sets) effectively to build an efficient computational model for dealing with the above-mentioned data. The overview is proposed coherently to both methodological and architectural viewpoints for Situation Awareness.
Conference Paper
Recent work has proposed an enhancement of Formal Concept Analysis (FCA) in a tunable, hybrid formal context including both numerical and nominal data [1]. This work introduces FCknn, that is a granular knn classifier based on hybrid concepts, whose effectiveness is demonstrated on benchmark datasets from the literature including both numerical and nominal data. Preliminary experimental results compare well with the results by alternative classifiers from the literature. Formal concepts are interpreted as descriptive decision-making knowledge (rules) induced from the data.
Article
Full-text available
This paper describes the recognition of image patterns based on novel representation learning techniques by considering higher-level (meta-)representations of numerical data in a mathematical lattice. In particular, the interest here focuses on lattices of (Type-1) Intervals' Numbers (INs), where an IN represents a distribution of image features including orthogonal moments. A neural classifier, namely fuzzy lattice reasoning (flr) fuzzy-ARTMAP (FAM), or flrFAM for short, is described for learning distributions of INs; hence, Type-2 INs emerge. Four benchmark image pattern recognition applications are demonstrated. The results obtained by the proposed techniques compare well with the results obtained by alternative methods from the literature. Furthermore, due to the isomorphism between the lattice of INs and the lattice of fuzzy numbers, the proposed techniques are straightforward applicable to Type-1 and/or Type-2 fuzzy systems. The far-reaching potential for deep learning in big data applications is also discussed.
Conference Paper
This work introduces a novel methodology for human face recognition based on lattice computing kNN classification techniques applied on thermal infrared images. Novel feature extraction and knowledge-representation engage populations of orthogonal moments represented by intervals' numbers, or INs for short. Preliminary experimental results compare well with the results by alternative classifiers as well as with alternative feature extraction techniques from the literature. We point out the far-reaching potential of the proposed techniques to big data applications.
Article
Full-text available
This paper proposes a fundamentally novel extension, namely, flrFAM, of the fuzzy ARTMAP (FAM) neural classifier for incremental real-time learning and generalization based on fuzzy lattice reasoning techniques. FAM is enhanced first by a parameter optimization training (sub)phase, and then by a capacity to process partially ordered (non)numeric data including information granules. The interest here focuses on intervals' numbers (INs) data, where an IN represents a distribution of data samples. We describe the proposed flrFAM classifier as a fuzzy neural network that can induce descriptive as well as flexible (i.e., tunable) decision-making knowledge (rules) from the data. We demonstrate the capacity of the flrFAM classifier for human facial expression recognition on benchmark datasets. The novel feature extraction as well as knowledge-representation is based on orthogonal moments. The reported experimental results compare well with the results by alternative classifiers from the literature. The far-reaching potential of fuzzy lattice reasoning in human-machine interaction applications is discussed.
Conference Paper
Full-text available
Network services in MANETs, such as resource location and distribution of connectivity information, deal with node mobility and resource constraints to support applications. The reliability and availability of these services can be assured by data management approaches, as replication techniques using quorum systems. However, these systems are vulnerable to selfish and malicious nodes, that intentionally do not collaborate with replication operations or spread malicious data while participating in data replication. In order to handle these issues, this paper proposes QS2, a bio-inspired scheme to tolerate selfish and malicious nodes in replication operation of quorum systems. Differently from existing works on the literature, QS2 is distributed and self-organized, and each node has the autonomy to exclude misbehaving nodes. The scheme is inspired by quorum sensing and kin selection, both biological mechanisms resident in bacteria. Simulation results show that QS2 improves significantly the reliability of a quorum system for MANETs, detecting more than 80% of misbehaving nodes on replication operations.
Conference Paper
This paper introduces an approach to appearance based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be Affine Independent, and therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot’s acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show on a sample path experiment that our approach can recognise the localization of the robot and we compare the results with the Independent Component Analysis (ICA).
Article
Full-text available
This work introduces a Type-II fuzzy lattice reasoning (FLRtypeII) scheme for learning/generalizing novel 2D shape representations. A 2D shape is represented as an element—induced from populations of three different shape descriptors—in the product lattice (F^3, ⪯), where (F, ⪯) denotes the lattice of Type-I intervals’ numbers (INs). Learning is carried out by inducing Type-II INs, i.e. intervals in (F, ⪯). Our proposed techniques compare well with alternative classification methods from the literature in three benchmark classification problems. Competitive advantages include an accommodation of granular data as well as a visual representation of a class. We discuss extensions to gray/color images, etc.
Article
Full-text available
This work substantiates novel perspectives and tools for analysis and design of Fuzzy Inference Systems (FIS). It is shown rigorously that the cardinality of the set F of fuzzy numbers equals ℵ₁, hence a FIS can implement “in principle” ℵ₂ functions, where ℵ₂ = 2^ℵ₁ > ℵ₁ and ℵ₁ is the cardinality of the set R of real numbers; furthermore a FIS is endowed with a capacity for local generalization. A formulation in the context of lattice theory introduces a tunable metric distance d_K between fuzzy numbers. Implied advantages include: (1) an alleviation of the curse-of-dimensionality problem regarding the number of rules, (2) a capacity to cope rigorously with heterogeneous data including (fuzzy) numbers and intervals, and (3) a capacity to introduce systematically useful nonlinearities. Extensive evidence from the literature appears to corroborate the proposed novel perspectives. Computational experiments demonstrate the utility of the proposed tools. A real-world industrial application is also described.
Chapter
Full-text available
Summary. Fuzzy adaptive resonance theory (fuzzy-ART) and self-organizing map (SOM) are two popular neural paradigms, which compute lattice-ordered granules. Hence, lattice theory emerges as a basis for unified analysis and design. We present both an enhancement of fuzzy-ART, namely fuzzy lattice reasoning (FLR), and an enhancement of SOM, namely granular SOM (grSOM). FLR as well as grSOM can rigorously deal with (fuzzy) numbers as well as with intervals. We introduce inspiring novel interpretations. In particular, the FLR is interpreted as a reasoning scheme, whereas the grSOM is interpreted as an energy function minimizer. Moreover, we can introduce tunable nonlinearities. The interest here is in classification applications. We cite evidence that the proposed techniques can clearly improve performance.
Article
Full-text available
Automatically verifying the identity of a person by means of biometrics (e.g., face and fingerprint) is an important application in our day-to-day activities such as accessing banking services and security control in airports. To increase system reliability, several biometric devices are often used. Such a combined system is known as a multimodal biometric system. This paper reports a benchmarking study carried out within the framework of the Biosecure DS2 (Access Control) evaluation campaign organized by the University of Surrey, involving face, fingerprint and iris biometrics for person authentication, targeting the application of physical access control in a medium-size establishment with some 500 persons. While multimodal biometrics is a well investigated subject in the literature, there exists no benchmark for fusion algorithm comparison. Working towards this goal, we designed two sets of experiments: quality-dependent
Article
Full-text available
Constructing a single text classifier that excels in any given application is a rather inviable goal. As a result, ensemble systems are becoming an important resource, since they permit the use of simpler classifiers and the integration of different knowledge in the learning process. However, many text-classification ensemble approaches have an extremely high computational burden, which poses limitations in applications in real environments. Moreover, state-of-the-art kernel-based classifiers, such as support vector machines and relevance vector machines, demand large resources when applied to large databases. Therefore, we propose the use of a new systematic distributed ensemble framework to tackle these challenges, based on a generic deployment strategy in a cluster distributed environment. We employ a combination of both task and data decomposition of the text-classification system, based on partitioning, communication, agglomeration, and mapping to define and optimize a graph of dependent tasks. Additionally, the framework includes an ensemble system where we exploit diverse patterns of errors and gain from the synergies between the ensemble classifiers. The ensemble data partitioning strategy used is shown to improve the performance of baseline state-of-the-art kernel-based machines. The experimental results show that the performance of the proposed framework outperforms standard methods both in speed and classification.
Conference Paper
Full-text available
Information granules are partially/lattice-ordered. Therefore, lattice computing (LC) is proposed for dealing with them. The granules here are Intervals’ Numbers (INs), which can represent real numbers, intervals, fuzzy numbers, probability distributions, and logic values. Based on two novel theoretical propositions introduced here, it is demonstrated how LC may enhance popular fuzzy inference system (FIS) design by the rigorous fusion of granular input data, the sensible employment of sparse rules, and the introduction of tunable nonlinearities.
Article
Full-text available
Most of the text categorization algorithms in the literature represent documents as collections of words. An alternative which has not been sufficiently explored is the use of word meanings, also known as senses. In this paper, using several algorithms, we compare the categorization accuracy of classifiers based on words to that of classifiers based on senses. The document collection on which this comparison takes place is a subset of the annotated Brown Corpus semantic concordance. A series of experiments indicates that the use of senses does not result in any significant categorization improvement.
Article
Full-text available
Gas identification represents a big challenge for pattern recognition systems due to several particular problems such as nonselectivity and drift. The purpose of this paper is twofold: 1) to compare the accuracy of a range of advanced and classical pattern recognition algorithms for gas identification on in-house sensor array signals and 2) to propose a gas identification ensemble machine (GIEM), which combines various gas identification algorithms, to obtain a unified decision with improved accuracy. An integrated sensor array has been designed with the aim of identifying combustion gases. The classification accuracy of different density models is compared with several neural network architectures. On the gas sensor data used in this paper, Gaussian mixture models achieved the best performance with higher than 94% accuracy. A committee machine is implemented by assembling the outputs of these gas identification algorithms through advanced voting machines using a weighting and classification confidence function. Experiments on real sensor data proved the effectiveness of the system with an improved accuracy over the individual classifiers. An average performance of 97% was achieved using the proposed committee machine.
Article
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
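The bagging procedure just described fits in a few lines. A minimal sketch, assuming integer class labels 0..K-1 and using a decision tree as the unstable base learner; the helper name bagging_fit_predict is ours.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit_predict(X, y, X_test, n_estimators=25, seed=0):
    """Bagging: fit one tree per bootstrap replicate of (X, y), then
    predict by plurality vote. Assumes integer class labels 0..K-1."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap replicate
        votes.append(DecisionTreeClassifier().fit(X[idx], y[idx]).predict(X_test))
    votes = np.asarray(votes)
    # Plurality vote: one column of `votes` per test point.
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])
```

The bootstrap resampling is the whole trick: it only pays off when the base learner is unstable, exactly as the abstract states.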
Article
This paper addresses the problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions of learnability are equivalent. A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences, including a set of general upper bounds on the complexity of any strong learning algorithm as a function of the allowed error ε.
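The weak-to-strong construction proved in this abstract is the theoretical ancestor of boosting. The following is a minimal AdaBoost-style sketch with threshold stumps; note that this is the later Freund and Schapire algorithm, offered only to illustrate the idea, not the paper's exact construction.

```python
import numpy as np

# Minimal AdaBoost-style sketch for binary labels in {-1, +1}, using
# single-feature threshold stumps as the weak learners.
def fit_stump(X, y, w):
    """Best threshold stump under sample weights w."""
    best = (np.inf, 0, 0.0, 1)            # (weighted error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)               # start from the uniform distribution
    ensemble = []
    for _ in range(rounds):
        err, j, t, s = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)    # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```

Reweighting forces each new weak hypothesis to concentrate on the examples the current combination still gets wrong, which is what drives the error arbitrarily low.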
Article
Broad classes of statistical classification algorithms have been developed and applied successfully to a wide range of real-world domains. In general, ensuring that the particular classification algorithm matches the properties of the data is crucial ...
Article
A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of a fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.
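The model structure described (fuzzy premises, linear consequents) is what is now called a Takagi-Sugeno model. A minimal one-input, two-rule sketch follows; all membership and consequent parameters are illustrative, not taken from the paper.

```python
import numpy as np

# A first-order Takagi-Sugeno sketch with two rules over one input:
# Gaussian premise memberships, linear consequents, and an output that
# is the firing-strength-weighted average of the consequents.
def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk(x):
    w1, w2 = gauss(x, c=0.0, s=1.0), gauss(x, c=4.0, s=1.0)  # firing strengths
    y1, y2 = 0.5 * x + 1.0, -0.2 * x + 3.0                   # linear consequents
    return (w1 * y1 + w2 * y2) / (w1 + w2)

print(tsk(2.0))
```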
Article
In this paper, a new characterization for the interval-valued fuzzy implication operators is presented, which provides a simple way to construct interval-valued fuzzy implication operators from a given fuzzy implication operator using aggregation operators. This method will be used for defining the interval-valued fuzzy R- and S-implications. Finally, some examples of the interval-valued fuzzy implications built by this constructive method and a comparative study among them are shown.
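Although the paper's exact constructive operator is not reproduced here, one common way to lift a fuzzy implication to interval truth values exploits its monotonicity (decreasing in the first argument, increasing in the second), as in this sketch:

```python
# Lifting a fuzzy implication I to interval truth values. Since I is
# decreasing in its first argument and increasing in its second, a natural
# interval extension (one common construction; not necessarily the paper's
# exact operator) is I([a, b], [c, d]) = [I(b, c), I(a, d)].
def lukasiewicz(x: float, y: float) -> float:
    """Lukasiewicz implication I(x, y) = min(1, 1 - x + y)."""
    return min(1.0, 1.0 - x + y)

def interval_implication(xy, uv, I=lukasiewicz):
    (a, b), (c, d) = xy, uv
    return (I(b, c), I(a, d))

print(interval_implication((0.6, 0.8), (0.3, 0.5)))  # -> (0.5, 0.9)
```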
Article
Epoxy dispensing is a popular way to perform microchip encapsulation for chip-on-board (COB) packages. However, the determination of the proper setting of process parameters for a satisfactory encapsulation quality is difficult due to the complex behaviour of the encapsulant during the dispensing process and the inherent fuzziness of epoxy dispensing systems. Sometimes, the observed values from the process may be irregular. In conventional regression models, deviations between the observed values and the estimated values are supposed to have a probability distribution. However, when data is scattered, the obtained regression model has too wide a possibility range. These deviations in processes such as epoxy dispensing can be regarded as system fuzziness that can be dealt with satisfactorily using a fuzzy regression method. In this paper, the fuzzy linear regression concept with fuzzy intervals and its application to the process modelling of epoxy dispensing for microchip encapsulation are described. Two fuzzy regression models, expressing the correlation between various process parameters and the two quality characteristics, respectively, were developed. Validation experiments were performed to demonstrate the effectiveness of the method for process modelling.
Article
Fluid dispensing is a method by which fluid materials, such as epoxy, adhesive, and encapsulant, are delivered in a controlled manner in electronics packaging. This paper presents a brief review of past and recent developments in the modeling and control of the time-pressure fluid dispensing process. In particular, the characterization of the fluid flow behavior is addressed by reviewing several promising models from both time-independent and time-dependent perspectives. In the modeling of the time-pressure fluid dispensing process, various approaches for representing the flow rate of fluid dispensed and the profile of fluid formed on target are examined; and the issues involved are identified. In the control of time-pressure dispensing process, a brief review of various control methods is presented along with their limitations. The challenges associated with this control problem are also discussed. This paper is concluded with the recommendations of research in the future.
Article
This paper focuses on compacted probabilistic binary visual classification of human targets in a highly constrained wireless multimedia sensor network (WMSN). With consideration of robustness and accuracy, a Gaussian process classifier (GPC) is used for classifier learning, since it can provide a Bayesian framework to automatically determine the optimal or near-optimal kernel hyper-parameters. To decrease computing complexity, feature compaction is carried out before learning, implemented by the integer lifting wavelet transform (ILWT) and rough sets. Then, the individual decisions of multiple nodes are combined by committee decision to improve robustness and accuracy. Experimental results verify that GPC with committee decision can effectively carry out binary human target classification in a WMSN. Importantly, GPC outperforms the support vector machine, especially when committee decision is used. Furthermore, ILWT and rough sets can offer a compact representation of effective features, which decreases the learning time and increases the learning accuracy.
Article
Dempster’s rule of combination in evidence theory is a powerful tool for reasoning under uncertainty. Since Zadeh highlighted the counter-intuitive behaviour of Dempster’s rule, a plethora of alternative combination rules have been proposed. In this paper, we propose a general formulation for combination rules in evidence theory as a weighted sum of the conjunctive and disjunctive rules. Moreover, with the aim of automatically accounting for the reliability of sources of information, we propose a class of robust combination rules (RCR) in which the weights are a function of the conflict between two pieces of information. The interpretation given to the weight of conflict between two basic probability assignments (BPAs) is an indicator of the relative reliability of the sources: if the conflict is low, then both sources are reliable, and if the conflict is high, then at least one source is unreliable. We show some interesting properties satisfied by the RCRs, such as positive belief reinforcement or the neutral impact of vacuous belief, and establish links with other classes of rules. The behaviour of the RCRs over non-exhaustive frames of discernment is also studied, as the RCRs implicitly perform a kind of automatic deconditioning through the simple use of the disjunctive operator. We focus our study on two special cases: (1) RCR-S, a rule with symmetric coefficients that is proved to be unique and (2) RCR-L, a rule with asymmetric coefficients based on a logarithmic function. Their behaviours are then compared to some classical combination rules proposed thus far in the literature, on a few examples, and on Monte Carlo simulations.
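A sketch of the weighted conjunctive/disjunctive mixture idea on two basic probability assignments; the specific weight function w = 1 - conflict is our illustrative choice, not necessarily the paper's RCR coefficients.

```python
from itertools import product

def combine(m1, m2):
    """Weighted sum of the (unnormalised) conjunctive and disjunctive rules
    for two BPAs given as {frozenset: mass}. The weight w = 1 - conflict is
    illustrative: high conflict shifts weight towards the disjunctive rule."""
    conj, disj, conflict = {}, {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        if A & B:
            conj[A & B] = conj.get(A & B, 0.0) + a * b
        else:
            conflict += a * b                  # conjunctive mass on the empty set
        disj[A | B] = disj.get(A | B, 0.0) + a * b
    w = 1.0 - conflict
    out = {A: w * conj.get(A, 0.0) + (1.0 - w) * disj.get(A, 0.0)
           for A in set(conj) | set(disj)}
    total = sum(out.values())                  # renormalise to a proper BPA
    return {A: v / total for A, v in out.items()}

m1 = {frozenset('a'): 0.7, frozenset('ab'): 0.3}
m2 = {frozenset('b'): 0.6, frozenset('ab'): 0.4}
print(combine(m1, m2))
```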
Article
A morphological neural network is generally defined as a type of artificial neural network that performs an elementary operation of mathematical morphology at every node, possibly followed by the application of an activation function. The underlying framework of mathematical morphology can be found in lattice theory. With the advent of granular computing, lattice-based neurocomputing models such as morphological neural networks and fuzzy lattice neurocomputing models are becoming increasingly important since many information granules such as fuzzy sets and their extensions, intervals, and rough sets are lattice ordered. In this paper, we present the lattice-theoretical background and the learning algorithms for morphological perceptrons with competitive learning which arise by incorporating a winner-take-all output layer into the original morphological perceptron model. Several well-known classification problems that are available on the internet are used to compare our new model with a range of classifiers such as conventional multi-layer perceptrons, fuzzy lattice neurocomputing models, k-nearest neighbors, and decision trees.
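For concreteness, a single morphological neuron replaces the usual multiply-accumulate with a lattice operation; a minimal sketch of a max-plus (dilation) node and its min-plus (erosion) dual, with names and threshold of our choosing:

```python
import numpy as np

# A morphological neuron: max_j (x_j + w_j) is the elementary dilation
# operation of mathematical morphology, followed here by a hard limiter.
def dilation_neuron(x, w, threshold=0.0):
    return 1 if np.max(x + w) >= threshold else 0

def erosion_neuron(x, m, threshold=0.0):
    return 1 if np.min(x + m) >= threshold else 0   # min-plus dual

x = np.array([0.2, 0.9, 0.4])
print(dilation_neuron(x, np.array([0.1, -0.3, 0.0])))
```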
Article
The basic operations of mathematical morphology, dilation and erosion, were introduced by Matheron and Serra. They were initially defined as Minkowski addition and subtraction on subsets of the Euclidean space, using translations, unions and intersections. Following Sternberg, they were generalized to the set of grey-level images with the help of umbras. Recently Serra and Matheron have generalized morphological operations to complete lattices, that is, sets in which the operations of supremum and infimum are well-defined. This generalization has proven useful by extending the scope of mathematical morphology to other structures. In this paper we show that it is also necessary for a mathematically coherent application of morphological operators to grey-level images. Indeed: • In the continuous case, the definition of dilations on umbras is not exactly the same as for ordinary Euclidean sets; here the union must be replaced by a supremum operation similar to the one in the complete lattice of closed sets. Moreover, dilations and erosions can be defined directly with lattice-theoretic methods, without recourse to umbras. • In the digital case, when the set of grey-levels is bounded, the problem of grey-level overflow can be dealt with correctly only by taking into account the complete lattice structure of the set of grey-level images. Otherwise the properties of morphological operators are lost.
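A minimal sketch of flat grey-level dilation and erosion as supremum/infimum over a neighbourhood on the bounded lattice {0, ..., 255}, padding the border with the lattice bottom (for dilation) or top (for erosion) so that overflow cannot occur; the 3x3 window is an illustrative choice.

```python
import numpy as np

def dilate(img, k=1):
    """Flat dilation: pixel-wise supremum over a (2k+1)x(2k+1) window."""
    p = np.pad(img, k, constant_values=0)       # bottom of the grey-level lattice
    return np.max(np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                            for i in range(2 * k + 1)
                            for j in range(2 * k + 1)]), axis=0)

def erode(img, k=1):
    """Flat erosion: pixel-wise infimum over the same window."""
    p = np.pad(img, k, constant_values=255)     # top of the grey-level lattice
    return np.min(np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                            for i in range(2 * k + 1)
                            for j in range(2 * k + 1)]), axis=0)

img = np.random.default_rng(0).integers(0, 256, (5, 5))
assert np.all(erode(img) <= img) and np.all(img <= dilate(img))
```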
Article
We introduce a lattice independent component analysis (LICA) unsupervised scheme for functional magnetic resonance imaging (fMRI) data analysis. LICA is a non-linear alternative to independent component analysis (ICA), such that ICA’s statistically independent sources correspond to LICA’s lattice independent sources. In this paper, LICA uses an incremental lattice source induction algorithm (ILSIA) to induce the lattice independent sources from the input dataset. The ILSIA computes a set of strongly lattice independent vectors using properties of lattice associative memories regarding lattice independence and Chebyshev best approximation. The lattice independent sources constitute a set of affine independent vectors that define a simplex covering the input data. LICA carries out linear unmixing of the data based on the lattice independent sources basis. Therefore, LICA is a hybrid combination of a non-linear lattice-based component and a linear unmixing component. The principal advantage over ICA is that LICA does not impose any probabilistic model assumptions on the data sources. We compare LICA with ICA in two case studies. First, on simulated fMRI data, LICA discovers the spatial location of meaningful sources with less ambiguity than ICA. Second, on real data from an auditory stimulation experiment, LICA improves over some state-of-the-art ICA variants in discovering the activation patterns detected by Statistical Parametric Mapping (SPM) on the same data.
Article
This paper presents a new architecture to integrate a library of feature extraction, data-mining, and fusion techniques to automatically and optimally configure a classification solution for a given labeled set of training patterns. The most expensive and scarce resource in any detection problem (feature selection/classification) tends to be the acquisition of labeled training patterns from which to design the system. The objective of this paper is to present a new data-mining architecture that includes conventional data-mining algorithms, feature selection methods and algorithmic fusion techniques to best exploit the set of labeled training patterns so as to improve the design of the overall classification system. The paper describes how feature selection and data-mining algorithms are combined through a genetic algorithm, using single-source data, and how multi-source data are combined through several best-suited fusion techniques by employing a genetic algorithm for optimal fusion. A simplified version of the overall system is tested on the detection of volcanoes in the Magellan SAR database of Venus.
Article
Endmembers for the spectral unmixing analysis of hyperspectral images are sets of affinely independent vectors, which define a convex polytope covering the data points that represent the pixel image spectra. Strong lattice independence (SLI) is a property defined in the context of lattice associative memories convergence analysis. Recent results show that SLI implies affine independence, confirming the value of lattice associative memories for the study of endmember induction algorithms. In fact, SLI vector sets can be easily deduced from the vectors composing the lattice auto-associative memories (LAM). However, the number of candidate endmembers found by this algorithm is very large, so that some selection algorithm is needed to obtain the full benefits of the approach. In this paper we explore the unsupervised segmentation of hyperspectral images based on the abundance images computed, first, by an endmember selection algorithm and, second, by a previously proposed heuristically defined algorithm. We find their results comparable on a qualitative basis.
Article
Financial distress prediction for companies is a hot topic that has attracted the interest of managers, investors, auditors, and employees. Case-based reasoning (CBR) is a methodology for problem solving that imitates how human beings act in real life. When employing CBR in financial distress prediction, it can not only provide explanations for its prediction, but also advise how a company can get out of distress based on solutions of similar cases in the past. This research puts forward a multiple case-based reasoning system by majority voting (Multi-CBR–MV) for financial distress prediction. Four independent CBR models, deriving from the Euclidean metric, Manhattan metric, grey coefficient metric, and outranking relation metric, are employed to generate the Multi-CBR system. Pre-classifications of the four independent CBRs are combined to generate the final prediction by majority voting. We employ two kinds of majority voting, i.e., pure majority voting (PMV) and weighted majority voting (WMV). Correspondingly, there are two derived Multi-CBR systems, i.e., Multi-CBR–PMV and Multi-CBR–WMV. In the experiment, min–max normalization was used to scale all data into the specific range of [0, 1], the technique of grid search was utilized to get optimal parameters under the assessment of leave-one-out cross-validation (LOO-CV), and 30 hold-out data sets were used to assess the predictive performance of the models. With data collected from the Shanghai and Shenzhen Stock Exchanges, an experiment was carried out to compare the performance of the two Multi-CBR–MV systems with their composing CBRs and statistical models. The empirical results were satisfactory, testifying to the feasibility and validity of the proposed Multi-CBR–MV for listed companies’ financial distress prediction in China.
Article
In intelligent transportation systems (ITS), transportation infrastructure is complemented with information and communication technologies with the objectives of attaining improved passenger safety, reduced transportation time, and lower fuel consumption and vehicle wear and tear. With the advent of modern communication and computational devices and inexpensive sensors, it is possible to collect and process data from a number of sources. Data fusion (DF) is a collection of techniques by which information from multiple sources is combined in order to reach a better inference. DF is an indispensable tool for ITS. This paper provides a survey of how DF is used in different areas of ITS.
Article
Voting-based consensus clustering refers to a distinct class of consensus methods in which the cluster label mismatch problem is explicitly addressed. The voting problem is defined as the problem of finding the optimal relabeling of a given partition with respect to a reference partition. It is commonly formulated as a weighted bipartite matching problem. In this paper, we present a more general formulation of the voting problem as a regression problem with multiple-response and multiple-input variables. We show that a recently introduced cumulative voting scheme is a special case corresponding to a linear regression method. We use a randomized ensemble generation technique, where an overproduced number of clusters is randomly selected for each ensemble partition. We apply an information theoretic algorithm for extracting the consensus clustering from the aggregated ensemble representation and for estimating the number of clusters. We apply it in conjunction with bipartite matching and cumulative voting. We present empirical evidence showing substantial improvements in clustering accuracy, stability, and estimation of the true number of clusters based on cumulative voting. The improvements are achieved in comparison to consensus algorithms based on bipartite matching, which perform very poorly with the chosen ensemble generation technique, and also to other recent consensus algorithms.
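The relabeling (voting) step formulated as weighted bipartite matching can be sketched in a few lines using the Hungarian algorithm on the label co-occurrence table; the function name relabel and the toy data are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(partition, reference, n_clusters):
    """Relabel `partition` to best align with `reference` by maximising
    label co-occurrence counts (weighted bipartite matching)."""
    counts = np.zeros((n_clusters, n_clusters), dtype=int)
    for p, r in zip(partition, reference):
        counts[p, r] += 1                          # contingency table
    rows, cols = linear_sum_assignment(-counts)    # negate to maximise agreement
    mapping = dict(zip(rows, cols))
    return np.array([mapping[p] for p in partition])

ref = np.array([0, 0, 1, 1, 2, 2])
par = np.array([2, 2, 0, 0, 1, 1])                 # same clustering, permuted labels
print(relabel(par, ref, 3))                         # -> [0 0 1 1 2 2]
```

The regression view advocated in the abstract generalizes exactly this step: instead of a hard one-to-one relabeling, each ensemble partition contributes soft (cumulative) votes to the consensus.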
Article
In this paper, methods for parallel fuzzy inference and multistage-parallel fuzzy inference are studied on the basis of families of α-level sets. The parallel fuzzy inference is characterized by the unification of inference consequences obtained from a number of conditional propositions. Thus, in this paper, the methods for the unification of inference consequences via α-level sets are presented first. It is found that the unification approximated by using the fuzzy convex hull is efficient in the case where the unification is performed by the maximum operation. The methods for defuzzification are also examined via α-level sets for the unified consequences. The computational efficiency is evaluated in order to show the effectiveness of the unification and defuzzification via α-level sets. Moreover, it is studied by computer simulations how the approximation by the fuzzy convex hull affects the performance in fuzzy control. The results indicate that this approximation does not degrade the control performance. Next, the multistage-parallel fuzzy inference is considered from the operational point of view via α-level sets. The multistage-parallel fuzzy inference is characterized by passing the unified consequence of parallel fuzzy inference in each stage to the next stage as a fact. Hence, the studies are focused on this consequence passing in this paper. It is clarified that the straightforward way of inference operations via α-level sets is time consuming because of the non-convexity of the unified inference consequence in each stage. In order to solve the problem, the multistage-parallel fuzzy inference is formulated into a form of linguistic-truth-value propagation. As a result, the inference operations in middle stages can be conducted with convex fuzzy sets and efficient computation of inferences is then provided. The computational efficiency is also evaluated to show the effectiveness of the formulation. Finally, this paper concludes with some brief discussions.
Article
The fuzzy lattice reasoning (FLR) classifier is presented for inducing descriptive, decision-making knowledge (rules) in a mathematical lattice data domain including the space R^N. Tunable generalization is possible based on non-linear (sigmoid) positive valuation functions; moreover, the FLR classifier can deal with missing data. Learning is carried out both incrementally and fast by computing disjunctions of join-lattice interval conjunctions, where a join-lattice interval conjunction corresponds to a hyperbox in R^N. Our testbed in this work concerns the problem of estimating ambient ozone concentration from both meteorological and air-pollutant measurements. The results compare favorably with results obtained by C4.5 decision trees, fuzzy-ART as well as back-propagation neural networks. Novelties and advantages of the FLR classifier are detailed extensively and in comparison with related work from the literature.
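A compact sketch of the FLR learning flavour described above, simplified by omitting the vigilance (assimilation) test: rules are labelled hyperboxes, a training point enlarges the most-inclusive same-class hyperbox by the lattice join, and inclusion is measured through a positive valuation. Data are assumed scaled to [0, 1]; all names are ours.

```python
import numpy as np

def val(lo, hi):
    """Positive valuation of a hyperbox [lo, hi]: sum of (1 - lo_i) + hi_i."""
    return np.sum((1.0 - lo) + hi)

def sigma(lo1, hi1, lo2, hi2):
    """Inclusion of box1 in box2: val(box2) / val(box1 join box2)."""
    return val(lo2, hi2) / val(np.minimum(lo1, lo2), np.maximum(hi1, hi2))

def flr_train(X, y):
    """Incremental rule induction: each rule is a labelled hyperbox; a point
    enlarges the most-inclusive same-class rule via the lattice join, else
    it seeds a new rule. (Vigilance test omitted for brevity.)"""
    rules = []                                    # list of (lo, hi, label)
    for x, label in zip(X, y):
        same = [(i, sigma(x, x, lo, hi))
                for i, (lo, hi, c) in enumerate(rules) if c == label]
        if same:
            i = max(same, key=lambda t: t[1])[0]  # winner rule
            lo, hi, c = rules[i]
            rules[i] = (np.minimum(lo, x), np.maximum(hi, x), c)
        else:
            rules.append((x.copy(), x.copy(), label))
    return rules
```

At classification time an input is assigned the label of the rule with the largest inclusion degree, which is what makes the induced rule base directly readable as descriptive knowledge.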
Article
Very little published research applies content-based image retrieval (CBIR) techniques to the retrieval of digitized spine X-ray images combining inter-vertebral disc space and vertebral shape profiles. This paper describes a novel technique for retrieving vertebra pairs that exhibit a specified disc space narrowing (DSN) and inter-vertebral disc shape. DSN is characterized using spatial and geometrical features between two adjacent vertebrae. In order to obtain the best retrieval result, all selected features are ranked and assigned a weight to indicate their importance in the computation of the final similarity measure. Using a two-phase algorithm, initial retrieval results are clustered and used to construct a voting committee to retrieve vertebra pairs with the highest DSN similarity. The overall retrieval accuracy was validated by a radiologist, demonstrating that the selected features combined with voting consensus are effective for DSN-based spine X-ray image retrieval.
Article
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
Article
In a fully automated manufacturing environment, instant detection of the cutting tool condition is essential for improved productivity and cost effectiveness. This paper studies a tool condition monitoring (TCM) system via a machine learning (ML) and machine ensemble (ME) approach to investigate the effectiveness of multisensor fusion when machining 4340 steel with a multilayer-coated, multiflute carbide end mill cutter. In this study, 135 different features are extracted from multiple sensor signals of force, vibration, acoustic emission and spindle power in the time and frequency domains by using a data acquisition and signal processing module. Then, a correlation-based feature selection (CFS) technique evaluates the significance of these features along with the machining parameters collected from machining experiments. Next, an optimal feature subset is computed for various assorted combinations of sensors. Finally, machine ensemble methods based on majority voting and stacked generalization are studied for the selected features to classify not only flank wear but also breakage and chipping. It is found that the stacked generalization ensemble ensures the highest accuracy in tool condition monitoring. In addition, it is shown that the support vector machine (SVM) outperforms other ML algorithms in most cases tested.
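Of the two ensemble methods compared above, stacked generalization is the less obvious; a minimal sketch with out-of-fold level-0 predictions feeding a level-1 combiner follows. The particular base and meta learners are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def stack_fit(X, y, base=(SVC(), DecisionTreeClassifier()), meta=LogisticRegression()):
    """Stacked generalization: level-0 learners produce out-of-fold
    predictions that become the features of a level-1 combiner."""
    # Out-of-fold predictions avoid leaking the training labels to the meta level.
    Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base])
    fitted = [m.fit(X, y) for m in base]
    return fitted, meta.fit(Z, y)

def stack_predict(fitted, meta, X):
    Z = np.column_stack([m.predict(X) for m in fitted])
    return meta.predict(Z)
```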
Article
We introduce an approach to fMRI analysis based on the Endmember Induction Heuristic Algorithm (EIHA). This algorithm uses the Lattice Associative Memory (LAM) to detect lattice independent vectors, which can be assumed to be affine independent, and are therefore candidates to be the endmembers of the data. Induced endmembers are used to compute the activation levels of voxels as the result of an unmixing process. The endmembers correspond to diverse activation patterns, one of which corresponds to the resting state of the neuronal tissue. The on-line working of the algorithm needs neither a prior training process nor a priori models of the data. Results on a case study are compared with the results given by the state-of-the-art SPM software.
Article
Several solutions have been proposed to exploit the availability of heterogeneous sources of biomolecular data for gene function prediction, but little attention has been dedicated to the evaluation of the potential improvement in functional classification results that could be achieved through data fusion realized by means of ensemble-based techniques. In this contribution we test the performance of several ensembles of support vector machine (SVM) classifiers, in which each component learner has been trained on a different type of bio-molecular data, and then combined to obtain a consensus prediction using different aggregation techniques. Experimental results using data obtained with different high-throughput biotechnologies show that simple ensemble methods outperform both learning machines trained on single homogeneous types of bio-molecular data and vector space integration methods.
Article
Weighted voting is the commonly used strategy for combining predictions in pairwise classification. Even though it shows good classification performance in practice, it is often criticized for lacking a sound theoretical justification. In this paper, we study the problem of combining predictions within a formal framework of label ranking and, under some model assumptions, derive a generalized voting strategy in which predictions are properly adapted according to the strengths of the corresponding base classifiers. We call this strategy adaptive voting and show that it is optimal in the sense of yielding a MAP prediction of the class label of a test instance. Moreover, we offer a theoretical justification for weighted voting by showing that it yields a good approximation of the optimal adaptive voting prediction. This result is further corroborated by empirical evidence from experiments with real and synthetic data sets showing that, even though adaptive voting is sometimes able to achieve consistent improvements, weighted voting is in general quite competitive, all the more in cases where the aforementioned model assumptions underlying adaptive voting are not met. In this sense, weighted voting appears to be a more robust aggregation strategy.
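A minimal sketch of weighted voting in pairwise (one-vs-one) classification: each binary classifier (i, j) reports a support s_ij in [0, 1] for class i against class j, and the class with the largest total support wins. The matrix values below are illustrative.

```python
import numpy as np

def weighted_vote(S):
    """S[i, j] = support for class i versus class j, with S[j, i] = 1 - S[i, j].
    Returns the class with the largest row sum (total pairwise support)."""
    np.fill_diagonal(S, 0.0)
    return int(np.argmax(S.sum(axis=1)))

S = np.array([[0.0, 0.9, 0.6],
              [0.1, 0.0, 0.8],
              [0.4, 0.2, 0.0]])
print(weighted_vote(S))   # class scores 1.5, 0.9, 0.6 -> class 0
```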
Article
By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, etc. In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or, are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The concept of a linguistic variable provides a means of approximate characterization of phenomena which are too complex or too ill-defined to be amenable to description in conventional quantitative terms. In particular, treating Truth as a linguistic variable with values such as true, very true, completely true, not very true, untrue, etc., leads to what is called fuzzy logic. By providing a basis for approximate reasoning, that is, a mode of reasoning which is neither exact nor very inexact, such logic may offer a more realistic framework for human reasoning than the traditional two-valued logic. It is shown that probabilities, too, can be treated as linguistic variables with values such as likely, very likely, unlikely, etc. Computation with linguistic probabilities requires the solution of nonlinear programs and leads to results which are imprecise to the same degree as the underlying probabilities. The main applications of the linguistic approach lie in the realm of humanistic systems, especially in the fields of artificial intelligence, linguistics, human decision processes, pattern recognition, psychology, law, medical diagnosis, information retrieval, economics and related areas.
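A small sketch of these notions: a compatibility function for young over the universe of ages, with the hedge very modelled as Zadeh's concentration (squaring) and not as complementation; the particular membership function and its values are our illustrative choices, not Zadeh's exact numbers.

```python
import numpy as np

def young(u):
    """Illustrative compatibility c(u) of age u with 'young'
    (here c(27) is about 0.6 and c(35) about 0.35)."""
    return 1.0 / (1.0 + (np.asarray(u, dtype=float) / 30.0) ** 4)

very_young = lambda u: young(u) ** 2      # hedge 'very' as concentration
not_young = lambda u: 1.0 - young(u)      # connective 'not' as complement

for age in (20, 27, 35):
    print(age, round(float(young(age)), 2), round(float(very_young(age)), 2))
```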
Article
The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs recently proposed an EM algorithm for the mixture of experts architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the hierarchical mixture of experts architecture of Jordan and Jacobs (1992). They showed empirically that the EM algorithm for these architectures yields significantly faster convergence than gradient ascent. In the current paper we provide a theoretical analysis of this algorithm. We show that the algorithm can be regarded as a variable metric algorithm with its searching direction having a positive projection on the gradient of the log likelihood. We also analyze the convergence of the algorithm and provide an explicit expression for the convergence rate. In addition, we describe an acceleration technique that yields a significant speedup in simulation experiments.
Article
Linear models are preferable due to their simplicity. Nevertheless, non-linear models often emerge in practice. A popular approach for modeling nonlinearities is piecewise-linear approximation. Inspired by fuzzy inference systems (FISs) of the Takagi–Sugeno–Kang (TSK) type as well as by Kohonen’s self-organizing map (KSOM), this work introduces a genetically optimized synergy based on intervals’ numbers, or INs for short. The latter (INs) are interpreted here either probabilistically or possibilistically. The employment of mathematical lattice theory is instrumental. Advantages include accommodation of granular data, introduction of tunable nonlinearities, and induction of descriptive decision-making knowledge (rules) from the data. Both efficiency and effectiveness are demonstrated in three benchmark problems. The proposed computational method demonstrates invariably a better capacity for generalization; moreover, it learns orders-of-magnitude faster than alternative methods while inducing clearly fewer rules.
Article
Conventional fuzzy regression using possibilistic concepts allows the identification of models from uncertain data sets. However, some limitations still exist. This paper deals with a revisited approach to possibilistic fuzzy regression methods. Indeed, a new modified fuzzy linear model form is introduced where the identified model output can envelop all the observed data and ensure a total inclusion property. Moreover, this model output can have any kind of spread tendency. In this framework, the identification problem is reformulated according to a new criterion that assesses the model fuzziness independently of the collected data distribution. The potential of the proposed method with regard to the conventional approach is illustrated by simulation examples.
Article
In this paper, belief functions, defined on the lattice of intervals of partitions of a set of objects, are investigated as a suitable framework for combining multiple clusterings. We first show how to represent clustering results as masses of evidence allocated to sets of partitions. Then a consensus belief function is obtained using a suitable combination rule. Tools for synthesizing the results are also proposed. The approach is illustrated using synthetic and real data sets.
Article
This paper describes an experiment on the “linguistic” synthesis of a controller for a model industrial plant (a steam engine). Fuzzy logic is used to convert heuristic control rules stated by a human operator into an automatic control strategy. The experiment was initiated to investigate the possibility of human interaction with a learning controller. However, the control strategy set up linguistically proved to be far better than expected in its own right, and the basic experiment of linguistic control synthesis in a non-learning controller is reported here.
Conference Paper
Knowledge systems technologies, as derived from AI methods and used in the modern Semantic Web movement, are dominated by graphical knowledge structures such as ontologies and semantic graph databases. A critical but typically overlooked aspect of all of these structures is that they admit analysis in terms of formal hierarchical relations. The partial order representations of whatever hierarchy is present within a knowledge structure afford opportunities to exploit these hierarchical constraints to facilitate a variety of tasks, including ontology analysis and alignment, visual layout, and anomaly detection. We introduce the basic concepts of order metrics and address the impact of a hierarchical (order-theoretical) analysis on knowledge systems tasks.
Article
Combining the modified AdaBoost.RT with an extreme learning machine (ELM), a new hybrid artificial-intelligence technique called ensemble ELM is developed for regression problems in this study. First, a new ELM algorithm is selected as the ensemble predictor due to its rapid speed and good performance. Second, a modified AdaBoost.RT is proposed to overcome the limitation of the original AdaBoost.RT by self-adaptively modifying the threshold value. Then, an ensemble ELM is presented, using the modified AdaBoost.RT, for better predictive accuracy than the individual method. Finally, this new hybrid intelligence method is used to establish a temperature prediction model of molten steel by analyzing the metallurgical process of the ladle furnace (LF). The model is examined on production data from a 300 t LF at Baoshan Iron and Steel Co., Ltd. and compared with models established by a single ELM, GA-BP (genetic algorithm combined with a BP network), and the original AdaBoost.RT. The experiments demonstrate that the hybrid intelligence method improves generalization performance and boosts accuracy, and that the accuracy of the temperature prediction is satisfactory for practical production.
Article
Real Adaboost ensembles with weighted emphasis (RA-we) on erroneous and critical (near the classification boundary) samples have recently been proposed, leading to improved performance when an adequate combination of these terms is selected. However, finding the optimal emphasis adjustment is not an easy task. In this paper, we propose to fuse the outputs of RA-we ensembles trained with different emphasis adjustments by means of a generalized voting scheme. The resulting committee of RA-we ensembles can retain the performance of the best RA-we component and even, occasionally, improve on it. Additionally, we present an ensemble selection strategy that removes from the committee RA-we ensembles with very poor performance. Experimental results show that these committees frequently outperform RA and RA-we with cross-validated emphasis.
Article
This paper proposes a hybrid neural network model using a possible combination of different transfer projection functions (sigmoidal unit, SU, product unit, PU) and kernel functions (radial basis function, RBF) in the hidden layer of a feed-forward neural ...
Article
The accuracy attained in the mapping of underwater areas is limited by the effect of variations in the water column, which degrade the signal received by the orbital sensor, creating inter-class confusion that introduces errors into the final result of the classification process. In this article we describe a hybrid classifier ensemble; the classification is done by progressive refinement in three stages. At the end of this process, a combining unit links the various partial classifications generated and achieves the desired accuracy level. Finally, the result obtained by the ensemble is compared to the results achieved by applying multi-class voting scheme methods based on support vector machines: One-Against-the-Rest and One-Against-One. The classification accuracy showed the viability and potential of using the proposed ensemble to classify images.
Article
The fuzzy lattice reasoning (FLR) classifier was recently introduced as an advantageous enhancement of the fuzzy-ARTMAP (FAM) neural classifier in the Euclidean space R^N. This work extends FLR to the space F^N, where F is the granular data domain of fuzzy interval numbers (FINs), including (fuzzy) numbers, intervals, and cumulative distribution functions. Based on a fundamentally improved mathematical notation, this work proposes novel techniques for dealing rigorously with imprecision in practice. We demonstrate a favorable comparison of our proposed techniques with alternative techniques from the literature in an industrial prediction application involving digital images represented by histograms. Additional advantages of our techniques include a capacity to represent statistics of all orders by a FIN, the introduction of tunable (sigmoid) nonlinearities, a capacity for effective data processing without any data normalization, the induction of descriptive decision-making knowledge (rules) from the training data, and the potential for input variable selection.