SDAS Research Group

About the lab

The Smart Data Analysis Systems Group (SDAS Group) is supported by experts and young researchers from several academic institutions in different countries.

We develop software- and intelligent-systems-driven solutions to data-analysis problems in industry, medicine, and education.

By vocation, all our R&D processes reflect a strong and genuine commitment to the synergistic, ethical, and interactive integration of human thinking and artificial intelligence.

Our research programs cover a wide range of data-analysis-related topics (https://sdas-group.com/researchtopics/).

SDAS Group also offers specialized services in technical consulting and deployment.

Featured projects (1)

This research line explores and develops interactive methodologies for industrial-facilities and production-systems planning and assembly, using computer simulation, virtual- and augmented-reality environments, and data-analytics-based decision-making tools and techniques.

Featured research (52)

Face mask detection has become a great challenge in computer vision, demanding that technology be combined with COVID-19 awareness. Researchers have proposed deep learning models to detect the use of face masks. However, the incorrect use of a face mask can be as harmful as not wearing any protection at all. In this paper, we propose a compound convolutional neural network (CNN) architecture based on two computer vision tasks: object localization to discover faces in images/videos, followed by an image-classification CNN to categorize the faces and show whether someone is wearing a face mask correctly, incorrectly, or not at all. The first CNN is built upon RetinaFace, a model to detect faces in images, whereas the second CNN uses a ResNet-18 architecture as its classification backbone. Our model enables accurate identification of people who are not correctly following the COVID-19 healthcare recommendations on face mask use. To enable further global use of our technology, we have released to the public both the dataset used to train the classification model and our proposed computer vision pipeline, and optimized the latter for deployment on embedded systems.
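For illustration, a minimal sketch of the two-stage pipeline the abstract above describes, not the authors' released code: stage one localizes faces, stage two classifies each crop as correct, incorrect, or no mask. MTCNN from facenet_pytorch stands in here for RetinaFace, the class names are assumptions, and the 3-way ResNet-18 head is untrained (in practice it would be fine-tuned on the released dataset):

import torch
import torch.nn as nn
from torchvision import models, transforms
from facenet_pytorch import MTCNN  # stand-in detector; the paper uses RetinaFace
from PIL import Image

CLASSES = ["mask_correct", "mask_incorrect", "no_mask"]  # assumed label names

# Stage 2: ResNet-18 backbone with a 3-way classification head.
classifier = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
classifier.fc = nn.Linear(classifier.fc.in_features, len(CLASSES))
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

detector = MTCNN(keep_all=True)  # Stage 1: face localization

@torch.no_grad()
def classify_faces(image: Image.Image) -> list[str]:
    # Expects an RGB PIL image; returns one label per detected face.
    boxes, _ = detector.detect(image)
    if boxes is None:
        return []
    labels = []
    for x1, y1, x2, y2 in boxes:
        crop = image.crop((int(x1), int(y1), int(x2), int(y2)))
        logits = classifier(preprocess(crop).unsqueeze(0))
        labels.append(CLASSES[logits.argmax(1).item()])
    return labels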
Virtual reality (VR) has been brought closer to the general public over the past decade as it has become increasingly available on desktop and mobile platforms. As a result, consumer-grade VR may redefine how people learn by creating an engaging “hands-on” training experience. Today, VR applications leverage rich interactivity in a virtual environment, without real-world consequences, to optimize training programs in companies and educational institutions. The main objective of this article was therefore to improve collaboration and communication practices in 3D virtual worlds through VR and the metaverse, focused on the educational and productive sectors of the smart factory. A key premise of our work is that the characteristics of the real environment can be replicated in a virtual world through digital twins, wherein new, configurable, innovative, and valuable ways of working and learning collaboratively can be created using avatar models. To this end, we propose an experimental framework that constitutes a crucial first step toward formalizing collaboration in virtual environments through VR-powered metaverses. The VR system includes functional components, object-oriented configurations, an advanced core, interfaces, and an online multi-user system. We present the framework's first application case: a VR metaverse focused on the smart factory that showcases the most relevant Industry 4.0 technologies. Functionality tests were carried out with users and evaluated through usability metrics, which showed satisfactory results for its potential educational and commercial use. Finally, the experimental results show that a commercial software framework for VR games can accelerate the development of experiments in the metaverse, connecting users from different parts of the world in real time.
IoT devices play a fundamental role in the machine learning (ML) application pipeline, as they collect rich data for model training using sensors. However, this process can be affected by uncontrollable variables that introduce errors into the data, resulting in a higher computational cost to eliminate them. Selecting the most suitable algorithm for this pre-processing step on-device can thus reduce ML model complexity and avoid unnecessary bandwidth usage for cloud processing. This work therefore presents a new sensor taxonomy for deploying data pre-processing on an IoT device, using a specific filter for each data type the system handles. We define statistical and functional performance metrics to perform filter selection. Experimental results show that the Butterworth filter is a suitable solution for invariant sampling rates, while the Savitzky–Golay and median filters are appropriate choices for variable sampling rates.
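A minimal sketch of the filter-per-data-type idea using SciPy, assuming illustrative parameter values (filter order, cutoff, window sizes) that are not taken from the paper:

import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter, medfilt

def denoise(samples: np.ndarray, fs_is_constant: bool) -> np.ndarray:
    if fs_is_constant:
        # Invariant sampling rate: 4th-order low-pass Butterworth,
        # cutoff at 10% of Nyquist (assumed values).
        b, a = butter(N=4, Wn=0.1)
        return filtfilt(b, a, samples)
    # Variable sampling rate: local smoothing with a Savitzky-Golay
    # filter, then a median filter to suppress remaining spikes.
    smoothed = savgol_filter(samples, window_length=11, polyorder=3)
    return medfilt(smoothed, kernel_size=5)

# Example: a noisy sine wave standing in for a sensor reading.
t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = denoise(noisy, fs_is_constant=True)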
Recent engineering and neuroscience applications have led to the development of brain–computer interface (BCI) systems that improve the quality of life of people with motor disabilities. In the same area, a significant number of studies have been conducted on identifying or classifying upper-limb movement intentions. By contrast, few works have addressed movement intention identification for the lower limbs. Nevertheless, lower-limb neurorehabilitation is a major topic in medical settings, as many people suffer from mobility problems in their lower limbs, such as those diagnosed with neurodegenerative disorders like multiple sclerosis, and people with hemiplegia or quadriplegia. In particular, conventional pattern recognition (PR) systems are among the most suitable computational tools for electroencephalography (EEG) signal analysis, as explicit knowledge of the features involved in the PR process is crucial both for improving signal classification performance and for providing more interpretability. In this regard, there is a real need for overview and comparative studies gathering benchmark and state-of-the-art PR techniques that allow for a deeper understanding thereof and a proper selection of a specific technique. This study conducted a topical overview of specialized papers covering lower-limb motor task identification through PR-based BCI/EEG signal analysis systems. To do so, we first established search terms and inclusion and exclusion criteria to find the most relevant papers on the subject, identifying the 22 most relevant ones. Next, we reviewed their experimental methodologies for recording EEG signals during the execution of lower-limb tasks. In addition, we reviewed the algorithms used in the preprocessing, feature extraction, and classification stages. Finally, we compared all the algorithms and determined which of them are the most suitable in terms of accuracy.
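For illustration, a minimal sketch of the conventional PR pipeline the review covers (preprocessing, feature extraction, classification), with assumed choices throughout: a band-pass filter, log band-power features, an LDA classifier, and synthetic data standing in for real EEG recordings:

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed EEG sampling rate in Hz

def bandpass(epochs: np.ndarray, lo=8.0, hi=30.0) -> np.ndarray:
    # Preprocessing: keep the mu/beta band often used for motor tasks.
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def band_power(epochs: np.ndarray) -> np.ndarray:
    # Feature extraction: log-variance per channel, a standard
    # band-power proxy.
    return np.log(np.var(epochs, axis=-1))

# epochs: (n_trials, n_channels, n_samples); y: one task label per trial.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, 2 * FS))
y = rng.integers(0, 2, size=40)

# Classification: linear discriminant analysis on the extracted features.
X = band_power(bandpass(epochs))
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))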

Lab head

Diego Peluffo
Department
  • Research Board
About Diego Peluffo
  • He received his degree in electronic engineering and his M.Eng. and Ph.D. degrees in industrial automation from the Universidad Nacional de Colombia, Manizales, Colombia, in 2008, 2010, and 2013, respectively. He undertook his doctoral internship at KU Leuven, Belgium. Afterwards, he worked as a postdoctoral researcher at the Université catholique de Louvain in Louvain-la-Neuve, Belgium. He is the head of the SDAS Research Group.

Members (33)

Miguel A Becerra
  • Instituto Tecnologico Pascual Bravo
Paul Rosero
  • IT University of Copenhagen
Ana Cristina Umaquinga
  • Universidad Técnica del Norte
Leandro Lorente
  • SDAS Research Group
Dagoberto Mayorca
  • Universidad Mariana
Israel Herrera
  • Universidad Técnica del Norte
Luz Marina Sierra Martínez
  • Universidad del Cauca

Alumni (2)

Andres Javier Anaya Isaza
  • Pontificia Universidad Javeriana
Diana M. Viveros Melo
  • Pontifícia Universidade Católica do Rio de Janeiro