
LABSIN (Intelligent Systems Laboratory)


About the lab

At LABSIN we do applied Artificial Intelligence.


Our mission is:
(a) To develop scientific and technological research applying AI to problems in different application areas;
(b) To train highly qualified human resources at the undergraduate and graduate levels;
(c) To transfer knowledge and technology to public or private institutions in our region, our country and the world.


Our main research lines are:
(a) Data Science with applications
(b) Optimization
(c) Intelligent Agents

Website: http://www.labsin.org

Featured research (18)

In contrast to previous surveys, the present work is not focused on reviewing the datasets used in the network security field. The fact is that many of the available public labeled datasets represent network behavior only for a particular time period. Given the rate of change in malicious behavior and the serious challenge of labeling and maintaining these datasets, they quickly become obsolete. Therefore, this work focuses on the analysis of current labeling methodologies applied to network-based data. In the field of network security, the process of labeling a representative network traffic dataset is particularly challenging and costly, since very specialized knowledge is required to classify network traces. Consequently, most current traffic labeling methods are based on the automatic generation of synthetic network traces, which hides many of the essential aspects necessary for a correct differentiation between normal and malicious behavior. Alternatively, a few other methods incorporate non-expert users in the labeling process of real traffic with the help of visual and statistical tools. However, after conducting an in-depth analysis, it seems that all current labeling methods suffer from fundamental drawbacks regarding the quality, volume, and speed of the resulting dataset. This lack of consistent methods for continuously generating a representative dataset with an accurate and validated methodology must be addressed by the network security research community. Moreover, a consistent labeling methodology is a fundamental condition for helping the acceptance of novel detection approaches based on statistical and machine learning techniques.
Labeling a real network dataset is especially expensive in computer security, as an expert has to ponder several factors before assigning each label. This paper describes an interactive intelligent system to support the task of identifying hostile behaviors in network logs. The RiskID application uses visualizations to graphically encode features of network connections and promote visual comparison. In the background, two algorithms are used to actively organize connections and predict potential labels: a recommendation algorithm and a semi-supervised learning strategy. These algorithms, together with interactive adaptations to the user interface, constitute a behavior recommendation. A study was carried out to analyze how the algorithms for recommendation and prediction influence the workflow of labeling a dataset. The results of the study with 16 participants indicate that the behavior recommendation significantly improves the quality of labels. Analyzing interaction patterns, we identify a more intuitive workflow used when behavior recommendation is available.
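To illustrate the recommendation idea described above, here is a minimal Python sketch (using scikit-learn) that suggests the unlabeled connections most similar to one the expert just labeled. The function, feature layout, and data are hypothetical stand-ins, not the actual RiskID implementation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def recommend_similar(labeled_vector, unlabeled_matrix, k=5):
    # Return the indices of the k unlabeled connections closest to the
    # just-labeled one (Euclidean distance over the feature vectors).
    nn = NearestNeighbors(n_neighbors=k).fit(unlabeled_matrix)
    _, indices = nn.kneighbors(labeled_vector.reshape(1, -1))
    return indices[0]

# Hypothetical usage: random numbers stand in for connection features.
rng = np.random.default_rng(0)
unlabeled = rng.random((100, 12))   # 100 unlabeled connections, 12 features each
just_labeled = rng.random(12)       # the connection the expert just labeled
print(recommend_similar(just_labeled, unlabeled))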
In the field of network security, the process of labeling a network traffic dataset is especially expensive since expert knowledge is required to perform the annotations. With the aid of visual analytic applications such as RiskID, the effort of labeling network traffic is considerably reduced. However, since the label assignment still requires an expert pondering several factors, the annotation process remains a difficult task. The present article introduces a novel active learning strategy for building a random forest model based on the connections previously labeled by the user. The resulting model provides the user with an estimate of the label probability of the remaining unlabeled connections, helping them in the traffic annotation task. The article describes the active learning strategy, the interfaces with the RiskID system, the algorithms used to predict botnet behavior, and a proposed evaluation framework. The evaluation framework includes studies to assess not only the prediction performance of the active learning strategy but also its learning rate and resilience against noise, as well as the improvements over other well-known labeling strategies. The framework represents a complete methodology for evaluating the performance of any active learning solution. The evaluation results showed that the proposed approach is a significant improvement over previous labeling strategies.
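A minimal Python sketch of an uncertainty-sampling reading of this strategy: fit a random forest on the connections the user has already labeled, estimate probabilities for the rest, and suggest the connection the model is least sure about. The function name, features, and data below are hypothetical stand-ins, not the paper's exact algorithm.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def next_connection_to_label(X_labeled, y_labeled, X_unlabeled):
    # Fit on what the user already labeled, score the rest, and pick the
    # connection whose predicted probability is closest to 0.5.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_unlabeled)[:, 1]   # P(botnet) per connection
    most_uncertain = int(np.argmin(np.abs(proba - 0.5)))
    return proba, most_uncertain

# Hypothetical usage: synthetic numbers stand in for traffic features.
rng = np.random.default_rng(1)
X_lab, y_lab = rng.random((40, 10)), rng.integers(0, 2, 40)   # 0 = normal, 1 = botnet
X_unl = rng.random((200, 10))
proba, idx = next_connection_to_label(X_lab, y_lab, X_unl)
print(f"ask the expert about connection {idx} (estimated P(botnet) = {proba[idx]:.2f})")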
Processing data gathered from remote sensing devices such as satellite and aircraft-based sensors can provide useful information about important phenomena related to the Earth, such as volcano shape and activity, glacier and iceberg tracking, urban monitoring, and forestation changes, among others. In particular, forestation detection is useful in different problems such as desertification assessment, forest health analysis, and land flooding simulations. Different techniques have been applied to problems related to forest analysis based on satellite data. However, these approaches require human expert intervention to correct them in several ways (such as adjusting to different types of vegetation, seasons, or geographic locations), which is tedious and costly. In this paper we address these issues by applying machine learning algorithms to the forest detection problem. The main goal of this work is to reduce the workload required from experts to produce such detection models, and to improve their generality so they are suitable for different conditions. The approach was validated using Digital Surface Models (DSM), optical and thermal spectral bands, and forest/no-forest masks obtained from the Shuttle Radar Topography Mission (SRTM), Landsat-8, and JAXA projects over Brazil's southeast and Argentina's center-east regions.
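A minimal per-pixel Python sketch of this kind of pipeline, assuming co-registered rasters stacked into a feature matrix and a forest/no-forest mask as labels. The synthetic arrays below are stand-ins for real SRTM and Landsat-8 data; this is an illustration of the general approach, not the paper's actual model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
h, w = 64, 64
dsm     = rng.random((h, w))           # elevation (DSM) layer
optical = rng.random((h, w, 4))        # e.g. four optical bands
thermal = rng.random((h, w))           # thermal band
mask    = rng.integers(0, 2, (h, w))   # forest / no-forest labels

# One row per pixel, one column per raster layer.
X = np.column_stack([dsm.ravel(), thermal.ravel(),
                     optical.reshape(-1, optical.shape[-1])])
y = mask.ravel()

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
predicted_mask = clf.predict(X).reshape(h, w)
print("predicted forest fraction:", predicted_mask.mean())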

Lab head

Carlos Catania
Department
  • Computer Science

Members (5)

Raymundo Q. Forradellas
  • National University of Cuyo
Martín G. Marchetta
  • National University of Cuyo
Jorge Luis Guerra
  • National University of Cuyo
Gabriel Dario Caffaratti
  • National University of Cuyo
Franco Palau
  • National University of Cuyo