Carlos Jaime Barrios Hernandez

Verified
Carlos verified their affiliation via an institutional email.
  • PhD. MSc. Eng.
  • Full Professor at Industrial University of Santander

About

105
Publications
16,064
Reads
125
Citations
Introduction
Researcher in Large Scale Architectures, High Performance and Scientific Computing, as well as Advanced Computing and new trends in computing such as Quantum Computing, with more than 10 years of experience in HPC and scalable architectures.
Current institution
Industrial University of Santander
Current position
  • Full Professor
Additional affiliations
April 2011 - present
Universidad Industrial de Santander
Position
  • Professor - Director
February 2016 - August 2018
Industrial University of Santander
Position
  • Consultant
Description
  • Responsible for and advisor of postgraduate studies at the School of Systems Engineering and Informatics. The studies comprise the Master of Science, the professional master's, and the leadership of the doctoral program proposal in Computer Science.
August 2015 - present
Think Tank in IT Technologies of Colombia
Position
  • Researcher
Description
  • The Think Tank in IT Technologies of Colombia aims to support the analysis and development of public IT policy.
Education
January 2008
Balatonfüred
Field of study
  • International Summer School in Grid Computing
January 2008
Aussois CNRS Center
Field of study
  • 9th Performance Evaluation School
January 2008
Nice, Côte d'Azur
Field of study
  • 2008 ICAR Summer School: "Distributed Applications and Middleware"

Publications

Publications (105)
Article
Full-text available
Explore GenNAS for chest X-ray classification in lung diseases, leveraging novel parallel training methods for enhanced accuracy and efficiency. Medical image classification for pulmonary pathologies from chest X-rays is traditionally time-consuming. GenNAS, using GPT-4's generative capabilities, automates optimal architecture learning from data. T...
Article
The Parkin RBR E3 Ubiquitin Protein Ligase gene, the second largest gene within the human genome, plays a crucial role in physiological processes and is implicated in neurodegenerative pathologies, notably Parkinson's disease. The average size of this gene is approximately 1,300,000 base pairs (bp), while one of its messenger RNAs (mRNAs) measures...
Article
Full-text available
Interest in High-Performance Computing (HPC) has surged, driven by the demand for skills to utilize advanced computing methods. These methods include managing vast amounts of data, implementing complex algorithms, and developing Artificial Intelligence (AI) applications. HPC ecosystems play a crucial role in tackling intricate scientific and en...
Data
Supplementary material available in Zenodo from the article entitled: "PRKN Gene Introns in Families from Order Primates Show Common Open Reading Frames" Published in: IEEE Transactions on Computational Biology and Bioinformatics
Article
Full-text available
Given the critical nature of their missions, space systems such as satellites, probes, spacecraft, etc., are commonly embedded with specific hardware and software solutions. While this approach has led to numerous achievements, it has also limited the available capacities of a spacecraft and the potential integration between multiple units. In this...
Conference Paper
Full-text available
The interest in High Performance Computing (HPC) has grown due to the need for skills to leverage advanced computing, such as managing large volumes of data, using complex algorithms, and developing AI applications. HPC ecosystems are essential for addressing complex scientific and engineering challenges, but integrating diverse stakeholders requir...
Article
Full-text available
ABSTRACT. Mexico is the leading avocado producer, contributing 46% of global exports. However, the crop is exposed to pests and diseases, such as the persea mite (Oligonychus perseae), which interferes with photosynthesis and weakens the foliage, exposing the fruit to adverse conditions. Timely detection of this pest...
Preprint
Full-text available
Software quantum simulators are the most accessible tools for designing and testing quantum algorithms. This paper presents a comprehensive approach to building a software-based quantum simulator based on classical computing architectures. We explore fundamental quantum computing concepts, including state vector representations, quantum gates, and...
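As a companion to this abstract, here is a minimal state-vector sketch in Python/NumPy showing the two ingredients it mentions, state vectors and quantum gates; the `apply_gate` helper and the two-qubit example are illustrative assumptions, not code from the paper.

```python
# Minimal state-vector simulator sketch (illustrative only; not the paper's simulator).
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state vector."""
    # View the 2**n amplitudes as n axes of size 2 and contract the gate
    # with the target axis.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    # tensordot moves the contracted axis to the front; put it back in place.
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                      # |00>
state = apply_gate(state, H, 0, n)  # Hadamard on qubit 0
print(np.abs(state) ** 2)           # probabilities [0.5, 0, 0.5, 0] (qubit 0 is the leading axis)
```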
Preprint
Full-text available
Matrix multiplication is fundamental in the backpropagation algorithm used to train deep neural network models. Libraries like Intel's MKL or NVIDIA's cuBLAS implemented new and optimized matrix multiplication techniques that increase performance and reduce computational costs. These techniques can also be implemented in CUDA and SYCL and functions w...
Preprint
Full-text available
Matrix multiplication is fundamental in the backpropagation algorithm used to train deep neural network models. Libraries like Intel's MKL or NVIDIA's cuBLAS implemented new and optimized matrix multiplication techniques that increase performance and reduce computational costs. These techniques can also be implemented in CUDA and SYCL and functions...
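To illustrate the point about optimized matrix-multiplication libraries, the hedged sketch below contrasts a naive triple-loop multiply with NumPy's BLAS-backed `@` operator (MKL or OpenBLAS, depending on the build); the sizes and timing harness are assumptions, and this is not the paper's CUDA/SYCL code.

```python
# Sketch: naive Python matrix multiply vs. a BLAS-backed library call.
import time
import numpy as np

def naive_matmul(A, B):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i, p] * B[p, j]
            C[i, j] = s
    return C

A = np.random.rand(200, 200)
B = np.random.rand(200, 200)

t0 = time.perf_counter(); C1 = naive_matmul(A, B); t1 = time.perf_counter()
t2 = time.perf_counter(); C2 = A @ B;              t3 = time.perf_counter()

print(f"naive: {t1 - t0:.3f}s  blas: {t3 - t2:.6f}s")
print("max abs diff:", np.max(np.abs(C1 - C2)))
```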
Article
Full-text available
Background The COVID-19 pandemic, caused by the SARS-CoV-2, can be effectively managed with diagnostic tools such as RT-qPCR. However, it can produce false-negative results due to viral mutations and RNA secondary structures from the target gene sequence. Methods With High Performance Computing, the complete SARS-CoV-2 genome was obtained from the...
Article
Full-text available
High-Performance Computing (HPC) is one of the pillars of developing modern science and disruptive technologies, uniting computer architectures and parallel programming into multidisciplinary interactions to face domain-specific problems. That is why different areas of knowledge require their future professionals (scientists or not) to acquire skil...
Chapter
High-performance computers are now essential in scientific and technological research and development because of their high processing capacity and extensive memory; they allow us to simulate phenomena where processing and handling such information is necessary. The simulation of physicochemical problems involves the inherent analysis and processin...
Article
Full-text available
In 2002, the DNA loss model (DNA-LM) postulated that neuropeptide genes emerged through codon loss via the repair of damaged DNA from an ancestral gene named Neuropeptide Precursor Predictive (NPP), whose organization corresponds to two or more evolutionarily related neuropeptide precursors. The DNA-LM was elaborated according to amino acid homol...
Poster
Full-text available
In this work we evaluated the three possible alternative splicing variants of Sirtuin 2 (Sir2) of Apis mellifera, which are reported in GenBank. Primers were designed for RT-PCR characterization of their expression in samples from the social castes. In addition, the phylogenetic relationships of the three var...
Conference Paper
HPC platforms seek to ensure peak computing performance with minimal energy cost, in search of sustainability. Considering different cases of use and implementation, both post-Moore hardware elements and software deployment approaches (such as virtualization or containerization) are incorporated. However, as the number of devices proliferates, managing applications ha...
Conference Paper
Full-text available
Production in aquaculture is affected by several factors, with water quality in culture ponds being a vital element. Ensuring optimal crop water status is essential and requires continuous monitoring. However, current methods are laborious, costly, time-consuming and error-prone, putting the crop at risk. This paper addresses the technical challeng...
Article
Full-text available
Water quality is a vital factor in aquaculture, directly involved in the development of farmed species. The monitoring systems of the parameters that determine this quality are limited to retrieving their values and publishing them. The lack of usability of these systems disrupts interpretation by aquaculture farm staff, who are exposed to technica...
Article
Full-text available
Production in aquaculture is affected by several factors, with water quality in culture ponds being a vital element. Ensuring optimal crop water status is essential and requires continuous monitoring. However, current methods are laborious, costly, time-consuming and error-prone, putting the crop at risk. This paper addresses the technical challeng...
Poster
Full-text available
The study of computational fluid dynamics (CFD) is currently one of the main branches for the investigation of highly complex fluids. The advancement in the application of new methodologies and novel techniques, such as mesh-free methods, has exerted a positive pressure on its growth. As part of these, the SPH method has taken an important position in CFD advances...
Chapter
Full-text available
Forest fires are environmental disasters that are rarely avoided, and they pose a crucial problem to address with High Performance Computing (HPC) given the real-time needs of the control agencies and the community. One of the strategies to support early warnings related to forest fires is u...
Chapter
Quantum computing has ceased to be an exotic topic for researchers, moving its treatment today from theoretical physicists to computer scientists and engineers. Recently, several real quantum devices have become available through the cloud. On the other hand, different possibilities on-premises allow having quantum computing simulators using High-P...
Conference Paper
Hyperparameter optimization in machine learning is a critical task that aims to find the hyperparameters of a given machine learning algorithm that provide the best performance measured on a validation set. Unlike model parameters, the machine learning engineer sets hyperparameters before training. The number of trees in a random forest is a hype...
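A minimal sketch of the idea described above, assuming scikit-learn as the toolkit: random search over the number of trees (and depth) of a random forest, with performance measured by cross-validation. The dataset and search space are placeholders, not those used in the paper.

```python
# Hedged sketch of hyperparameter search over random-forest settings.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],   # hyperparameters are set before training
        "max_depth": [None, 10, 20],
    },
    n_iter=8,          # number of sampled configurations
    cv=3,              # performance measured by 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```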
Article
Full-text available
Aquaculture producers need to keep their pond water at an adequate quality so that the cultivated organisms develop satisfactorily. To verify this quality, it is necessary to constantly monitor the most important physicochemical parameters, such as pH, dissolved oxygen (DO), electrical conductivity (EC), and temperature. The objective of this work is the...
Article
Full-text available
Background: Recent estimates indicate that the COVID-19 pandemic, which is caused by the SARS-CoV-2 virus, could be effectively controlled via the development and implementation of diagnostic tools such as quantitative reverse transcription PCR (RT-qPCR). However, this reaction often generates false-negative results due to novel mutations and can a...
Article
Full-text available
Containers have emerged as a more portable and efficient solution than virtual machines for cloud infrastructure, providing a flexible way to build and deploy applications. The quality of service, security, performance, and energy consumption, among others, are essential aspects of their deployment, management, and orchestration. Inappropriate reso...
Chapter
The improvement of computational systems has been based on Moore's law and Dennard scaling, but for more than a decade this improvement has been coming to a standstill. To maintain it, new technologies have been proposed, establishing a Post-Moore era. Currently there are different methodologies to evaluate emerging devices and technologies, bu...
Chapter
The growing interest in Deep Learning has led to the development of methods for Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO). Generally, architectures are designed in an iterative and expensive process of trial and error, besides being restricted to the designer's creativity and specific knowledge about the topic. This paper p...
Article
Full-text available
The increase in computational capacities has helped in the exploration, production and research process, allowing for the use of applications that were infeasible years ago. This increase has brought us into a new era (known as the Post-Moore Era) and produced a wide range of promising devices, such as Single Board Computers (SBC) and Personal Comp...
Chapter
Since 2012, in a case-study in Bucaramanga-Colombia, 179 pedestrians died in car accidents, and another 2873 pedestrians were injured. Each day, at least one passerby is involved in a tragedy. Knowing the causes to decrease accidents is crucial, and using system-dynamics to reproduce the collisions’ events is critical to prevent further accidents....
Article
Full-text available
Recently, several real quantum devices have become available through the cloud. Nevertheless, they are expected to be very limited, in the near term, in the number and quality of the fundamental storage element, the qubit. Therefore, software quantum simulators are the only widely available tools to design and test quantum algorithms. However, the...
Preprint
Full-text available
Since 2012, in a case-study in Bucaramanga-Colombia, 179 pedestrians died in car accidents, and another 2873 pedestrians were injured. Each day, at least one passerby is involved in a tragedy. Knowing the causes to decrease accidents is crucial, and using system-dynamics to reproduce the collisions' events is critical to prevent further accidents....
Chapter
Full-text available
Visualization of scientific data is crucial for scientific discovery to gain insight into the results of simulations and experiments. Remote visualization is of crucial importance to access infrastructure, data and computational resources, and to avoid data movement from where data is produced to where data will be analyzed. Remote visualizatio...
Chapter
Full-text available
For years, clusters for HPC have been implemented through the typical process of obtaining the source code, configuring and compiling each of the tools that make up the infrastructure services. Each administrator based on their experience and knowledge assumes a series of considerations to design and implement a cluster that is considered efficient...
Chapter
Deep Learning models have come into significant use in the field of biology and healthcare, genomics, medical imaging, EEGs, and electronic medical records [1, 2, 3, 4]. During training these models can be affected by overfitting, mainly because Deep Learning models try to adapt as much as possible to the training data, lo...
Article
Full-text available
Natural or human-made disasters can cause huge damage in urban areas and may take lives. It is fundamental to gain knowledge of an event's characteristics in order to have timely information to help affected people or to keep citizens away from the danger zone, and thus gain time to respond to the crisis. Internet of Things (Io...
Chapter
Full-text available
Cloud storage is one of the most popular models of cloud computing. It benefits from a shared set of configurable resources without limitations of local data storage infrastructures. However, it brings several cybersecurity issues. In this work, we address the methods of mitigating risks of confidentiality, integrity, availability, information leak...
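One hedged sketch of the kind of mitigation the chapter discusses, client-side encryption for confidentiality plus a digest for integrity, assuming the third-party `cryptography` package; this is illustrative only and not the chapter's actual method, and the data and file handling are placeholders.

```python
# Sketch: encrypt data before uploading to cloud storage and keep a SHA-256
# digest to verify integrity on retrieval.
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep this key outside the cloud provider
fernet = Fernet(key)

plaintext = b"sensitive research data"
digest = hashlib.sha256(plaintext).hexdigest()   # integrity reference
ciphertext = fernet.encrypt(plaintext)           # confidentiality

# ... upload `ciphertext` to the cloud, store `key` and `digest` locally ...

downloaded = ciphertext                          # simulate retrieval
recovered = fernet.decrypt(downloaded)
assert hashlib.sha256(recovered).hexdigest() == digest, "integrity check failed"
print("round trip OK,", len(ciphertext), "bytes stored")
```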
Chapter
Full-text available
Cities host more than half of the population on only 2% of the earth's surface and consume 75% of the resources extracted from the planet. This abrupt demographic growth in urban areas has worsened the level of pollution in the city, as well as the problems of road congestion. Therefore, smart cities propose the incorporation of technologies to opt...
Article
Full-text available
Crustacean vitellogenesis is a process that involves Vitellin, produced via endoproteolysis of its precursor, which is designated as Vitellogenin (Vtg). The Vtg gene, mRNA and protein regulation involve several environmental factors and physiological processes, including gonadal maturation and moult stages, among others. Once the Vtg gene, mRNAs an...
Poster
Full-text available
This work was presented at the Industrial University of Santander as part of the IV Seminar on Biodiversity and Conservation of Endangered Species held in the city of Bucaramanga, Santander, Colombia. Here we show how the duplication of genes and the DNA loss model are evolutionary processes that gave rise to LWamide, APGWamide, Red Pigment Concent...
Chapter
Full-text available
Hardware for embedded devices has increasing capabilities; popular Linux distributions incorporate large sets of applications and services that require intensive use of resources, limiting the hardware that can run these distributions. This work is concerned with developing a methodology to build a light operating system, oriented to both scientifi...
Book
This book constitutes the proceedings of the 5th Latin American Conference, CARLA 2018, held in Bucaramanga, Colombia, in September 2018. The 24 papers presented in this volume were carefully reviewed and selected from 38 submissions. They are organized in topical sections on: Artificial Intelligence; Accelerators; Applications; Performance Evalua...
Article
Full-text available
Although computing research and facilities in Latin America have been developing steadily, a remarkable gap nevertheless remains in the availability of resources and specialized human resources compared to other regions. RICAP (Red Iberoamericana de Computación de Altas Prestaciones, or Ibero-American Network for High-Performance Computing) aims to...
Conference Paper
Full-text available
Energy efficiency in high performance computing (HPC) systems is a relevant issue nowadays, which is approached from multiple edges and components (network, I/O, resource management, etc). HPC industry turned its focus towards embedded and low-power computational infrastructures (of RISC architecture processors) to improve energy efficiency, theref...
Conference Paper
Some features shared by families of natural phenomena may be exploited for the process of implementation of software simulation tools. An analogy of this situation is the experimentation in manufacturing, where the products are designed by organisations in a way that it is possible to exploit commonality in components and process. This work aims to...
Article
Full-text available
The globalization of trade and the organization of work are currently causing a large migratory flow towards the cities. This growth of cities requires new urban planning where digital tools take a preponderant place to capture data and understand and decide in face of changes. These tools however hardly resist to natural disasters, terrorism, acci...
Book
This book constitutes the proceedings of the Third Latin American Conference on High Performance Computing, CARLA 2016, held in Mexico City, Mexico, in August/September 2016. The 30 papers presented in this volume were carefully reviewed and selected from 70 submissions. They are organized in topical sections named: HPC Infrastructure and Applicati...
Poster
Full-text available
Heterogeneous parallel programming has two main problems on large computation systems: the first one is the increase of power consumption on supercomputers in proportion to the amount of computational resources used to obtain high performance; the second one is the underuse of these resources by scientific applications with improper distribution of ta...
Article
Full-text available
Recently, bioinformatics has become a new field of science, indispensable in the analysis of the millions of nucleic acid sequences currently deposited in international databases (public or private); these databases contain information on genes, RNA, ORFs, proteins, and intergenic regions, including entire genomes of some species. The analysis...
Conference Paper
Full-text available
The evaluation of performance and power consumption is a key step in the design of applications for large computational systems such as supercomputers and clusters (multicore and accelerator nodes, multicore and coprocessor nodes, manycore and accelerator nodes). In these systems the developers must design several experiments for workload characte...
Article
Full-text available
System dynamics is a methodology to model and simulate complexity in science and engineering. In this approach, mathematical descriptions are limited because of the long time spans of the behavior and the non-mathematical definition of the relationships between variables of the phenomena. It proposes causality and feedbacks to explain relationships between...
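As a minimal illustration of the stock-and-flow style of modeling described above, the sketch below integrates a single stock with a reinforcing and a balancing feedback loop using Euler's method; the rates and variable names are assumptions, not taken from the article.

```python
# Minimal system-dynamics sketch: one stock (population), two feedback flows.
BIRTH_RATE = 0.03   # fraction of the stock per time step (reinforcing loop)
DEATH_RATE = 0.01   # fraction of the stock per time step (balancing loop)
DT = 1.0            # time step for Euler integration
STEPS = 50

population = 1000.0                     # the stock
for step in range(STEPS):
    births = BIRTH_RATE * population    # inflow depends on the stock -> feedback
    deaths = DEATH_RATE * population    # outflow depends on the stock -> feedback
    population += DT * (births - deaths)

print(f"population after {STEPS} steps: {population:.1f}")
```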
Conference Paper
Full-text available
This paper explains the wave phenomenon that was studied and the medium in which the propagation occurs. A differential equation of the elastic wave in isotropic and heterogeneous media is solved using a finite difference method with staggered mesh, with second-order accuracy in time and fourth order in space, aiming for greater stability and efficie...
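A simplified scalar analogue of the scheme described above, assuming a 1-D wave equation: second-order finite differences in time and a fourth-order central stencil in space. Grid sizes, the initial disturbance, and boundary handling are assumptions; the paper's elastic, staggered-grid formulation is more involved.

```python
# 1-D scalar wave propagation, 2nd order in time, 4th order in space (sketch).
import numpy as np

nx, nt = 400, 600
dx, c = 1.0, 1.0
dt = 0.4 * dx / c                     # comfortably inside the CFL limit

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0                 # initial point disturbance

coef = (c * dt / dx) ** 2
for _ in range(nt):
    u_next = np.zeros(nx)
    # 4th-order central difference for the second spatial derivative
    lap = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
           + 16 * u_curr[3:-1] - u_curr[4:]) / 12.0
    u_next[2:-2] = 2 * u_curr[2:-2] - u_prev[2:-2] + coef * lap
    u_prev, u_curr = u_curr, u_next   # advance in time (2nd-order leapfrog)

print("max |u| after propagation:", np.abs(u_curr).max())
```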
Conference Paper
Full-text available
BrainPuzzle 3D is a visual interactive tool developed to support learning about the most complex organ of the human body: the brain. Using non-conventional interaction devices such as a 3D mouse, this work aims to show the user the structural distribution and physiology of the human brain by solving a puzzle of a human brain.
Conference Paper
BrainPuzzle 3D is a visual interactive tool developed to support learning about the most complex organ of the human body: the brain. Using non-conventional interaction devices such as a 3D mouse, this work aims to show the user the structural distribution and physiology of the human brain by solving a puzzle of a human brain.
Conference Paper
This paper explains the wave phenomenon that was studied and the medium in which the propagation occurs. A differential equation of the elastic wave in isotropic and heterogeneous media is solved using a finite difference method with staggered mesh, with second-order accuracy in time and fourth order in space, aiming for greater stability and efficie...
Conference Paper
Full-text available
Numerical simulations using supercomputers are producing an ever growing amount of data. Efficient production and analysis of these data are the key to future discoveries. The In-Situ paradigm is emerging as a promising solution to avoid the I/O bottleneck encounter on the file system for both the simulation and the analytics by treating the data a...
Conference Paper
Full-text available
This paper provides a review of heuristic and metaheuristic methods to solve the job scheduling problem in grid systems under the ETC (Expected Time to Compute) model. The problem is an important issue for efficient resource management in computational grids, which is performed by the schedulers of these High Performance Computing systems. We presen...
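For context, here is a short sketch of one classic heuristic covered by such surveys, Min-Min under the ETC model: each iteration assigns the task whose best completion time is globally smallest. The ETC matrix below is a made-up example, not data from the paper.

```python
# Min-Min heuristic sketch: ETC[t][m] is the expected time of task t on machine m.
def min_min(etc):
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # machine availability times
    unassigned = set(range(n_tasks))
    schedule = {}

    while unassigned:
        best = None                      # (completion_time, task, machine)
        for t in unassigned:
            # machine giving the minimum completion time for this task
            m = min(range(n_machines), key=lambda j: ready[j] + etc[t][j])
            ct = ready[m] + etc[t][m]
            if best is None or ct < best[0]:
                best = (ct, t, m)
        ct, t, m = best                  # among those minima, take the overall minimum
        schedule[t] = m
        ready[m] = ct
        unassigned.remove(t)

    return schedule, max(ready)          # assignment and resulting makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17]]
assignment, makespan = min_min(etc)
print(assignment, makespan)
```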
Conference Paper
Full-text available
The evaluation of performance and power consumption is a key step in the design of applications for large computing systems, such as supercomputers, clusters with nodes that have manycores and multi-GPUs. Researchers must design several experiments for workload characterization by observing the architectural implications of different combinations o...
Data
Full-text available
Seismic modelling is one of the most challenging technologies in the petroleum exploration industry nowadays. The need for modelling the seismic propagation phenomena realistically, taking into account the subsoil characteristics, leads to a better description of rock properties. However this increased detail also generates a bigger computational d...
Conference Paper
There is a technique for presenting stereograms in which the full information for the two eyes is contained in a single image. These images are known as "autostereograms"; they may contain a wide variety of forms of depth, with some limitations. The images are generated in multiple planes, in front of or behind the physical plane. In order to perce...
Conference Paper
Full-text available
Seismic modelling is one of the most challenging technologies in the petroleum exploration industry today. The need for modelling the seismic propagation phenomena realistically, taking into account the subsoil characteristics, leads to a better description of rock properties. However, this increased detail also generates a bigger computational dem...
Book
This book constitutes the proceedings of the Second Latin American Conference on High Performance Computing, CARLA 2015, a joint conference of the High-Performance Computing Latin America Community, HPCLATAM, and the Conferencia Latino Americana de Computación de Alto Rendimiento, CLCAR, held in Petrópolis, Brazil, in August 2015. The 11 papers pre...
Conference Paper
Full-text available
Modeling nonlinear circuits under Direct Current (DC) conditions is an interesting problem, mainly as a scenario problem for the Unified Particle Swarm Optimization (UPSO) metaheuristic. The approach presents an alternative solution for circuits that exhibit nonlinearities associated with DC conditions and their circuit components....
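A hedged sketch of the general approach: a plain global-best PSO (not the UPSO variant used in the paper) minimizing the KVL residual of a series resistor-diode circuit to find its DC operating point. Component values and swarm parameters are illustrative assumptions.

```python
# Plain global-best PSO solving a nonlinear DC circuit (illustrative only).
import math
import random

VS, R = 5.0, 1000.0            # source voltage [V], resistor [ohm]
I_S, N_VT = 1e-12, 0.025875    # diode saturation current, n*Vt

def residual(i_d):
    """Squared KVL error: Vs = i*R + Vdiode(i)."""
    if i_d <= 0:
        return 1e9 + abs(i_d)                      # penalize non-physical currents
    v_diode = N_VT * math.log(i_d / I_S + 1.0)
    return (VS - i_d * R - v_diode) ** 2

random.seed(0)
n_particles, iters = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                          # inertia and acceleration weights
pos = [random.uniform(1e-6, 10e-3) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]
pbest_val = [residual(p) for p in pos]
g = min(range(n_particles), key=lambda k: pbest_val[k])
gbest, gbest_val = pbest[g], pbest_val[g]

for _ in range(iters):
    for k in range(n_particles):
        r1, r2 = random.random(), random.random()
        vel[k] = w * vel[k] + c1 * r1 * (pbest[k] - pos[k]) + c2 * r2 * (gbest - pos[k])
        pos[k] += vel[k]
        val = residual(pos[k])
        if val < pbest_val[k]:
            pbest[k], pbest_val[k] = pos[k], val
            if val < gbest_val:
                gbest, gbest_val = pos[k], val

print(f"diode current ~ {gbest * 1e3:.3f} mA, residual = {gbest_val:.2e}")
```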
Conference Paper
Full-text available
Application as a Service is a proposal built to define a model of advanced computing services aimed mainly at academic, scientific and industrial users (in research and development processes) who require this type of support. Inspired by the visibility and business model of cloud computing and with...
Article
Full-text available
The Extended Depth of Field (EDF) method is used to analyze and treat specific image zones in optical research. Due to the complexity of EDF and the potentially large volume of data processed in optics problems, EDF is a good candidate for processing on parallel architectures. This work is a first approach to the implementation of parallel extended depth of f...
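As a rough illustration of the extended-depth-of-field idea, the sketch below fuses a focal stack by keeping each pixel from the frame where a simple sharpness measure (gradient energy) is largest; the data are synthetic and this is not the paper's parallel implementation.

```python
# Toy focal-stack fusion: per-pixel selection of the sharpest frame.
import numpy as np

rng = np.random.default_rng(0)
stack = rng.random((5, 128, 128))          # placeholder focal stack: 5 frames

# Sharpness proxy: squared gradient magnitude per frame
sharpness = np.empty_like(stack)
for k, frame in enumerate(stack):
    gy, gx = np.gradient(frame)
    sharpness[k] = gx ** 2 + gy ** 2

best = np.argmax(sharpness, axis=0)        # per-pixel index of the sharpest frame
rows, cols = np.indices(best.shape)
fused = stack[best, rows, cols]            # composite all-in-focus image

print(fused.shape, "fused from", stack.shape[0], "frames")
```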
Poster
Full-text available
Implementing Extended Depth of Field on Parallel Architectures
Article
A parallel computing approach to run fast and full-wave electromagnetic simulation of complex structures in Grid Computing environment is presented. In this study, we show how Grid Computing improves speed and/or reliability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable s...
Article
Full-text available
Large-scale architectures, as grid computing, provide resources allowing the handling of complex problems, vast collections of data storage, specific processing and collaborative interaction between distributed communities. Nowadays, there are several scientific applications that run on grid computing architectures. However, in most of these cases,...
Conference Paper
This paper demonstrates the capability of the scale changing technique (SCT) in designing reflectarray antennas having an enormous finite number of non-uniform radiating elements with high scale ratios in a given band, and the use of the parallel advantages of SCT for the electromagnetic simulation of reflectarrays in a grid computing environment. System par...
Conference Paper
Full-text available
Data transfer is a critical process that considerably influences application performance whenever applications use large data flows. To characterize data transfer as a function of the main architectural elements of the platform, two approaches are frequently used. The first consis...
Conference Paper
The performance of intensive communications in clusters and grids is critical during the execution time of parallel programs. We have detected anomalies, such as losses of bandwidth, during tests on the Grid 5000 infrastructure. This paper describes the tests carried out, analyses the results, and proposes a model for bandwidth loss in intensive data tra...

Questions

Question (1)
Question
Dear researchers,
#COVID19 has changed the way we interact, and for the HPC/Advanced Computing community the different conferences are now supported by virtual and remote options. This year, the most important conference on HPC of the Latin American region will be virtual and free.
Call for Participation - Free Registration
Latin American Conference on High Performance Computing 2020 #CARLA2020 Cuenca - Virtual
2- 4 September (Conference)
7-11 September (Workshops)
7-19 September (Tutorials: Beginners and Advanced)
CARLA is an international conference aimed at providing a forum to foster the growth and strength of the High Performance Computing (HPC) community in Latin America through the exchange and dissemination of new ideas, techniques, and research in HPC and its applications areas. Started in 2014,
CARLA has become the flagship conference for HPC in the region. We invite the international community to share its advances on both HPC and HPC&AI (convergence between HPC and Artificial Intelligence) as those two key areas are becoming the predominant engine for innovation and development.
This year the Latin America High Performance Computing Conference (CARLA 2020) will be virtual from Cuenca from September 2 to September 4 2020, in collaboration with TICAL 2020. We expect contributions from faculty members, researchers, specialists and graduate students around the world.
For general information about the congress, including registration, please contact us at: Carla2020@cedia.org.ec +593 4079300 Cuenca - Ecuador.
Platinum Sponsors:
HPE - LENOVO - NVIDIA - Cendio/ThincLinc
Organizers:
CEDIA - SCALAC - RedCLARA
With the Support of RICAP (CYTED ACTION 517RT0529)

Network

Cited By