Esther-Lydia Silva-Ramírez

Universidad de Cádiz | UCA · Department of Computer Engineering

PhD

About

35 Publications
2,313 Reads
265 Citations
Citations since 2017: 9 research items, 211 citations
Additional affiliations
April 1997 - January 2021
Universidad de Cádiz
Position: Lecturer

Publications (35)
Article
Full-text available
Machine learning‐based algorithms have recently been widely applied in different areas due to their ability to solve problems in all fields. In this research, machine learning techniques are applied to classify Bravais lattices from a conventional X‐ray diffraction diagram. Indexing algorithms are an essential tool of the preliminary protoc...
Article
Full-text available
This paper is an addendum to Ref. Neural Computing and Applications (2021) 33:8981–9004 that clarifies certain aspects of the models used and adds some citations. In this paper an approach based on a Co-active Neuro-Fuzzy Inference System, named CANFIS-ART, is proposed to automate the data imputation procedure. This model is...
Article
Full-text available
Data imputation aims to solve the missing-values problem, which is common in present-day applications. Many techniques have been proposed to solve this problem, from statistical methods such as Mean/Mode to machine learning models. In this paper, an approach based on a Co-active Neuro-Fuzzy Inference System, named CANFIS-ART, is proposed to automate data imputa...
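The Mean/Mode baseline that the abstract contrasts with machine learning models can be sketched in a few lines. This is a minimal illustration, not the paper's code; the function name and pure-Python setup are assumptions:

```python
from statistics import mean, mode

def impute_mean_mode(column, is_numeric):
    """Fill missing (None) entries with the column mean for numeric
    attributes, or with the most frequent value for categorical ones."""
    observed = [v for v in column if v is not None]
    fill = mean(observed) if is_numeric else mode(observed)
    return [fill if v is None else v for v in column]
```

For example, `impute_mean_mode([1, None, 3], True)` fills the gap with the mean of the observed values, 2.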
Conference Paper
Full-text available
Pseudocode is one of the recommended methods for teaching students to design algorithms. Having a tool that performs the automatic translation of an algorithm in pseudocode to a programming language would allow the student to understand the complete process of program development. In addition, the introduction of quality measurement of algorithms d...
Chapter
This chapter presents studies about data imputation to estimate missing values, and the Data Editing and Imputation process to identify and correct erroneous values. Artificial Neural Networks and Support Vector Machines are trained as Machine Learning techniques on real and simulated data sets, obtaining a complete data set which helps to impro...
Conference Paper
This paper describes Vary, an Eclipse-based development environment that allows algorithms to be written in pseudocode and the resulting programs to be executed after those algorithms are transformed into source code. In this case, the environment performs the automatic transformation from pseudocode to C/C++ code....
Article
The knowledge discovery process relies on information gathered from collected data sets, which often contain errors in the form of missing values. Data imputation is the activity aimed at estimating values for missing data items. This study focuses on the development of automated data imputation models, based on artificial neural n...
Article
A dissimilarity measure between the empirical characteristic functions of the subsamples associated with the different classes in a multivariate data set is proposed. This measure can be efficiently computed, and it depends on all the cases of each class. It may be used to find groups of similar classes, which could be joined for further analysis, or i...
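For a univariate illustration, the empirical characteristic function of a sample is φ(t) = (1/n) Σⱼ exp(i·t·xⱼ), and a dissimilarity between two classes can be taken as the squared distance between their ECFs over a frequency grid. This is a sketch under assumptions; the paper's exact measure and its multivariate form may differ:

```python
import math

def ecf(sample, t):
    """Real and imaginary parts of the empirical characteristic
    function (1/n) * sum(exp(i*t*x)) of a 1-D sample at frequency t."""
    n = len(sample)
    return (sum(math.cos(t * x) for x in sample) / n,
            sum(math.sin(t * x) for x in sample) / n)

def ecf_dissimilarity(class_a, class_b, freqs):
    """Sum, over a frequency grid, of the squared modulus of the
    difference between the two classes' ECFs."""
    total = 0.0
    for t in freqs:
        ra, ia = ecf(class_a, t)
        rb, ib = ecf(class_b, t)
        total += (ra - rb) ** 2 + (ia - ib) ** 2
    return total
```

Identical subsamples give a dissimilarity of zero, and the measure grows as the two classes' distributions drift apart, which matches the clustering use described in the abstract.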
Article
Data mining is based on data files which usually contain errors in the form of missing values. This paper focuses on a methodological framework for the development of an automated data imputation model based on artificial neural networks. Fifteen real and simulated data sets are exposed to a perturbation experiment, based on the random generation o...
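The perturbation step described here, randomly turning observed entries into missing ones, might look like the following. This is a hypothetical sketch; the paper's exact generation scheme and missingness rates are not specified in this excerpt:

```python
import random

def perturb(dataset, missing_rate, seed=0):
    """Return a copy of `dataset` (a list of rows) in which each entry
    is independently replaced by None with probability `missing_rate`."""
    rng = random.Random(seed)
    return [[None if rng.random() < missing_rate else v for v in row]
            for row in dataset]
```

Fixing the seed makes the perturbation reproducible, so each imputation model can be evaluated on exactly the same pattern of artificial gaps.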
Conference Paper
Usually, the knowledge discovery process is developed using data sets which contain errors in the form of inconsistent values. The activity aimed at detecting and correcting logical inconsistencies in data sets is known as data editing. Traditional tools for this task, such as the Fellegi-Holt methodology, require heavy intervention by subject matter...
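As a toy illustration of data editing, an edit can be expressed as a predicate that a consistent record must satisfy. The rule names and record fields below are hypothetical examples, not rules from the Fellegi-Holt literature:

```python
def violated_edits(record, edits):
    """Return the names of the edit rules the record fails.
    `edits` maps a rule name to a predicate that must hold."""
    return [name for name, ok in edits.items() if not ok(record)]

# Hypothetical edits: age must be non-negative, and a person
# under 16 cannot be recorded as married.
EDITS = {
    "age_nonnegative": lambda r: r["age"] >= 0,
    "minor_not_married": lambda r: r["age"] >= 16 or r["status"] != "married",
}
```

Detecting which edits fail is the easy half; the hard part, which methods like Fellegi-Holt address, is deciding which fields to change so that all edits pass with minimal alteration.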
Conference Paper
A procedure for designing non-linear models for predicting time series is proposed. It is based on a set of rules emerging from a previously fitted ARIMA model. These rules are extracted from the set of coefficients in the ARIMA model, so they consider the autocorrelation structure of the time series, but a nonlinear approach is adopted. The propos...
Conference Paper
Bagging is an ensemble method proposed to improve the predictive performance of learning algorithms, being especially effective when applied to unstable predictors. It is based on the aggregation of a certain number of prediction models, each one generated from a bootstrap sample of the available training set. We introduce an alternative method for...
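Bagging in its generic form, fitting one model per bootstrap resample and averaging the predictions, can be sketched as follows; the `fit` callback and the default constants are illustrative assumptions, not the paper's alternative method:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) items from `data` with replacement."""
    return [rng.choice(data) for _ in data]

def bagging_predict(train, x, fit, n_models=25, seed=0):
    """Average the predictions at `x` of `n_models` models, each one
    fitted (via the user-supplied `fit`) on a bootstrap sample."""
    rng = random.Random(seed)
    preds = [fit(bootstrap_sample(train, rng))(x) for _ in range(n_models)]
    return sum(preds) / len(preds)
```

Here `fit` takes a training sample and returns a prediction function; bagging pays off most when the base learner is unstable, i.e. small changes in the training sample produce large changes in the fitted model.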
Book
Contents: Introduction to computing; Problems, algorithms and programs; Structured programming; Operational abstraction; Recursion; Data types.
Conference Paper
Random sequence generation is a central topic in many simulation systems. In this kind of system, we sometimes need to produce a string of random numbers in order to simulate real situations and evaluate performance under small changes to critical parameters. The classical example arises when trying to simulate a cashi...
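A classical way to produce such a reproducible stream of uniform random numbers is a linear congruential generator. This is a standard textbook sketch, not the paper's generator; the default constants are the well-known Numerical Recipes parameters:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{k+1} = (a*x_k + c) mod m,
    yielded as uniform floats in [0, 1)."""
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state / m
```

The same seed always yields the same sequence, which is exactly what makes simulation experiments repeatable when tuning critical parameters.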
