University of Informatics Sciences
Recent publications
Zeolitic materials incorporating mono- and bimetallic nickel and cobalt systems were obtained from natural zeolite modified with Ni²⁺ and Co²⁺ chloride solutions through traditional ion exchange (IE) and impregnation (Imp) processes. Special attention was given to analyzing the cationic and anionic composition of the resulting materials. The catalytic potential was evaluated in the selective hydrogenation of citral, focusing on the formation of unsaturated alcohols. The IE process mainly replaced Ca²⁺ and Na⁺ with Ni²⁺ and Co²⁺ cations in the zeolite phases (a clinoptilolite–mordenite mixture), while Imp resulted in higher metal content (2.0–2.7%) but retained significant amounts of chloride (1.9–3.8%), as confirmed by XRD and temperature-programmed reduction. The materials prepared by IE had negligible chloride content (0.02–0.07%), and their specific surface areas (138–146 m²/g) were greater than those of the materials obtained by Imp (54–67 m²/g). The bimetallic systems exhibited enhanced reducibility of the isolated Co²⁺ and Ni²⁺ cations, attributed to synergistic interactions that weakened the cation–framework binding. Catalytic activity tests showed that nickel species were primarily responsible for citronellal formation. Among all materials, the bimetallic CoNi catalyst prepared by IE was the only one to produce unsaturated alcohols, suggesting that synergistic Ni–Co interactions played a role in their formation.
This work addresses the challenges of managing science and innovation projects, with a particular focus on the issues faced by CITMA's international science and innovation funding and project management office. As part of the study, a state-of-the-art review is conducted to examine trends in project management, followed by a critical analysis based on the literature. The methods section introduces IADESPro, a proposed project management platform based on agile management methods and performance domains, incorporating best practices from the PMBOK and ISO standards and supported by artificial intelligence techniques to aid decision-making. The platform focuses on value generation across the various performance domains of project management. In the results section, the implementation of the platform is evaluated in the context of managing research and innovation projects, and the proposal is compared with other platforms reported in the literature through a critical analysis of their advantages and disadvantages. The feasibility of the proposal is demonstrated, along with its potential to support decision-making in environments characterized by uncertainty.
The increase in the number of conversational systems (chatbots) applied to different scenarios in society is notable. However, the development of metrics and evaluation methods for chatbots remains an open line of research. The aim is to create evaluation methods that are progressively less invasive and that do not diminish the participation of users or other human agents. In the methods section, a systematic review of different evaluation methods for chatbots is carried out; then, new metrics for evaluating conversations with chatbots, inspired by neutrosophic theory, are presented. In the results section, the proposed model is validated: its applicability is evaluated and the proposal is subjected to expert triangulation methods. The work shows that applying neutrosophic logic can contribute to more natural chatbot responses, an effect that can help mitigate the presence of false responses or hallucinations.
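The abstract does not detail the proposed metrics; as a rough illustration of the neutrosophic idea, the sketch below scores each chatbot response with a triple of truth, indeterminacy, and falsity degrees and aggregates them over a conversation. The components, the score function, and the aggregation are illustrative assumptions, not the metrics proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicScore:
    """Neutrosophic evaluation of a single chatbot response.

    Each degree lies in [0, 1]; unlike fuzzy logic, the three components
    are independent, so T + I + F may range anywhere in [0, 3].
    """
    truth: float          # degree to which the response is judged correct
    indeterminacy: float  # degree of uncertainty / ambiguity in the judgment
    falsity: float        # degree to which the response is judged wrong

def conversation_quality(scores: list[NeutrosophicScore]) -> float:
    """Aggregate per-response triples into one conversation-level score.

    Uses a simple illustrative score s = (T + (1 - I) + (1 - F)) / 3,
    averaged over the conversation; high indeterminacy can flag likely
    hallucinations even before falsity is established.
    """
    if not scores:
        return 0.0
    per_response = [
        (s.truth + (1.0 - s.indeterminacy) + (1.0 - s.falsity)) / 3.0
        for s in scores
    ]
    return sum(per_response) / len(per_response)

# Example: three responses, the second one ambiguous and possibly hallucinated.
history = [
    NeutrosophicScore(0.9, 0.1, 0.0),
    NeutrosophicScore(0.4, 0.7, 0.2),
    NeutrosophicScore(0.8, 0.2, 0.1),
]
print(round(conversation_quality(history), 3))
```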
This work addresses the challenge of capacity building in the areas of artificial intelligence and data science. It starts by recognizing the need for new academic programs that consider these subjects as central themes. To develop researchers skilled in topics such as computational intelligence, decision-making in uncertain environments, generative artificial intelligence, and other trends in the development of new AI technologies in society, an ethical approach is required. In the methods section, the proposal addresses the fundamental challenges related to these topics and provides a brief analysis of the state of the art. Additionally, a training strategy is proposed, ranging from short-cycle programs to postgraduate education. The proposal includes a short-cycle program for a Data Science Technician, an Artificial Intelligence Engineering degree, and a master’s degree in Artificial Intelligence. In this way, the training is provided at various levels, accompanied by a strategy for continuous education. In the results analysis section, the proposal was evaluated by a group of specialists in curriculum design, yielding positive results. Finally, the conclusions focus on the fair and ethical development of artificial intelligence.
Contributing to SDG-15 on the conservation of terrestrial ecosystems requires the effective management of land resources. In this respect, determining land use and cover (LUC) from remote sensing is a key asset. For the Arimao watershed in the Cienfuegos province of Cuba, the main difficulty in determining the LUC lies in the topographic correction over the mountains of Trinidad. This study aims to validate the effectiveness of seven topographic correction methods using classification accuracy as a criterion. For this purpose, the mountain area was clipped from the Landsat-8 OLI image of December 2020, based on its physical-geographical and geological characteristics. Seven topographic correction algorithms were applied: Cosine correction, Improved cosine, C-correction, Minnaert, Minnaert with slope (including the variants by Riano and by Law), and Normalization. Their performance was evaluated using three criteria: visual interpretation, statistical analysis, and classification accuracy assessment over eight cover classes. The results showed the highest effectiveness for the Minnaert correction with slope and a roughness coefficient k = 0.3, with an overall accuracy of 94.08%. User and producer accuracies improved for almost all forest classes. For the mountains of Trinidad, the non-forest classes were not affected by the topographic correction, so the correction algorithms could be applied to the entire area. The results demonstrate the necessity of using accuracy assessment as the criterion for selecting the best topographic correction.
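For reference, a minimal sketch of the slope-aware Minnaert correction (the best-performing method reported here, with k = 0.3) is given below. It follows the commonly cited form L_H = L_T · cos(e) · [cos(θ_z) / (cos(i) · cos(e))]^k; the variable names, masking threshold, and exact formulation are assumptions, not the authors' implementation.

```python
import numpy as np

def minnaert_slope_correction(band, illumination, slope, solar_zenith, k=0.3):
    """Slope-aware Minnaert topographic correction of one image band.

    band         : radiance/reflectance values (2-D array)
    illumination : cosine of the local incidence angle, cos(i) (2-D array)
    slope        : terrain slope in radians (2-D array, derived from a DEM)
    solar_zenith : solar zenith angle in radians (scalar)
    k            : Minnaert roughness coefficient (0.3 in this study)

    Applies L_H = L_T * cos(e) * [cos(theta_z) / (cos(i) * cos(e))]**k.
    Pixels with very low illumination are left unchanged to avoid
    amplifying noise in deeply shadowed terrain.
    """
    cos_e = np.cos(slope)
    cos_tz = np.cos(solar_zenith)
    denom = np.clip(illumination * cos_e, 1e-6, None)
    corrected = band * cos_e * np.power(cos_tz / denom, k)
    return np.where(illumination > 0.05, corrected, band)
```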
The study of human body shape using classical anthropometric techniques is often problematic due to several sources of error. In contrast, 3D models and representations provide more accurate registrations that are stable across acquisitions and enable more precise, systematic, and fast measurement. Thus, the same person can be scanned several times and precise differential measurements can be established accurately. Here we present 3DPatBody, a dataset of 3D body scans, with their corresponding 3D point clouds and anthropometric measurements, from a sample of a Patagonian population (female = 211, male = 87, other = 1). The sample is of scientific interest since it is representative of a phenotype characterized both by its biomedical meaning as a descriptor of overweight and obesity and by its population-specific nature related to ancestry and/or local environmental factors. The acquired 3D models were used to compare shape variables against classical anthropometric data. The shape indicators proved to be accurate predictors of classical indices, while also adding geometric characteristics that more properly reflect the shape of the body under study.
Smart devices that operate in a shared environment with people need to be aligned with their values and requirements. We study the problem of multiple stakeholders informing the same device about what the right thing to do is. Specifically, we focus on how to reach a middle ground among the stakeholders' inevitably incoherent judgments on what the rules of conduct for the device should be. We formally define a notion of middle ground and discuss its main properties. We then identify three sufficient conditions on the class of Horn expressions under which middle grounds are guaranteed to exist, and provide a polynomial-time algorithm that computes middle grounds under these conditions. We also show that if any of the three conditions is removed, middle grounds for the resulting (larger) class may not exist. Finally, we implement our algorithm and perform experiments using data from the Moral Machine Experiment, presenting conflicting rules for different countries and showing how the algorithm finds a middle ground in this case.
Frailty syndrome is prevalent among the elderly, often linked to chronic diseases and resulting in various adverse health outcomes. Existing research has predominantly focused on predicting individual frailty-related outcomes. This paper takes a novel approach by framing frailty as a multi-label learning problem, aiming to predict multiple adverse outcomes simultaneously. In multi-label classification, an imbalanced label distribution poses inherent challenges for prediction. To address this issue, our study proposes a hybrid resampling approach tailored to handling imbalance in the multi-label scenario. The proposed resampling technique and prediction tasks were applied to a high-dimensional real-life medical dataset comprising individuals aged 65 years and above. Several multi-label algorithms were employed in the experiments, and their performance was evaluated using multi-label metrics. The best-performing prediction model obtained with our approach achieved an average precision score of 83%. These findings underscore the effectiveness of the method in predicting multiple frailty outcomes from a complex and imbalanced multi-label dataset.
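The paper's specific hybrid resampling procedure is not described in this summary; the sketch below is only a generic illustration of combining oversampling and undersampling in a multi-label setting: samples carrying rare labels are duplicated, while a fraction of samples carrying only frequent labels is dropped. The rare-label threshold, drop fraction, and resampling rule are assumptions for illustration.

```python
import numpy as np

def hybrid_multilabel_resample(X, Y, rare_quantile=0.25, drop_frac=0.3, seed=0):
    """Illustrative hybrid resampling for an imbalanced multi-label dataset.

    X : (n_samples, n_features) feature matrix
    Y : (n_samples, n_labels) binary label matrix

    Labels whose frequency falls below the given quantile are treated as
    "rare". Samples containing at least one rare label are duplicated
    (oversampling); a random fraction of the remaining samples is dropped
    (undersampling).
    """
    rng = np.random.default_rng(seed)
    label_freq = Y.sum(axis=0)
    rare = label_freq <= np.quantile(label_freq, rare_quantile)

    has_rare = Y[:, rare].sum(axis=1) > 0
    keep_mask = has_rare | (rng.random(len(X)) > drop_frac)

    X_kept, Y_kept = X[keep_mask], Y[keep_mask]
    X_dup, Y_dup = X[has_rare], Y[has_rare]   # oversample rare-label samples
    return np.vstack([X_kept, X_dup]), np.vstack([Y_kept, Y_dup])
```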
This dataset compiles breast cancer risk factors from 1697 Cuban women who attended consultations at the Hospital Universitario Clínico-Quirúrgico Comandante Manuel Fajardo in Havana, Cuba. The data were collected to develop a breast cancer risk estimation model specifically tailored to the Cuban population. The dataset includes 23 variables encompassing internationally recognized risk factors such as family history of breast cancer, lifestyle habits, demographic characteristics, and clinical outcomes. The data were extracted from electronic records and anonymized to protect patient privacy, in compliance with the principles of the Declaration of Helsinki and with the approval of the hospital's scientific and ethics committees. This dataset can be employed in the development of predictive models and in comparative studies of risk factors across different populations. It is important to note that the data originate from a single hospital, which may limit their representativeness at the national level.
AI-driven journalism refers to various methods and tools for gathering, verifying, producing, and distributing news information. Its potential is to extend human capabilities and create new forms of augmented journalism. Although scholars agree on the need to embed journalistic values in these systems to make them accountable, less attention has been paid to data quality, even though the accuracy and efficiency of the results depend on high-quality data in any machine learning task. Assessing data quality in the context of AI-driven journalism requires a broader, interdisciplinary approach that draws on both the challenges of data quality in machine learning and the ethical challenges of using machine learning in journalism. To better identify these challenges, we propose a data quality assessment framework to support the collection and pre-processing stages of machine learning. It relies on three core principles of ethical journalism (accuracy, fairness, and transparency) and contributes to the shift from model-centric to data-centric AI by focusing on data quality to reduce reliance on large datasets with errors, to make data labelling consistent, and to better integrate journalistic knowledge.
Expert systems are fundamental tools in the fields of artificial intelligence and decision-making. In the context of selecting areas for the location of dams, expert systems are especially relevant since they can integrate multiple variables to detect optimal locations that minimize environmental impacts and maximize the efficiency of the infrastructure. This research paper presents a new multicriteria and geospatial analysis model (M-SALD). M-SALD is an expert system structured in three phases that can select, from a set of areas, the most suitable ones for the possible location of dams. In addition, it integrates multicriteria and geospatial analysis, allowing multiple factors to be considered simultaneously while taking into account the spatial location of the data and the interactions between them. The model is applied to a case study in which a total of 29 watersheds (alternatives) were evaluated, considering 4 criteria and 25 sub-criteria. The application shows that the model allows the number of factors, parameters, and alternatives under evaluation to be expanded while reducing the inconsistency from 80% to 20%. It eliminates the subjective evaluation performed by the experts during the weighting of the alternatives and reduces by up to 22% the number of candidate points (12,591) evaluated along the rivers. To obtain the results, possible hydrological development scenarios were considered, including promising areas to ensure the balance of water resources.
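M-SALD's three-phase procedure is not reproduced here; the sketch below only illustrates the weighted aggregation step common to multicriteria methods, in which each alternative (e.g., a candidate watershed) receives a score from normalized criterion values and criterion weights. The decision matrix, criterion names, and weights are hypothetical.

```python
import numpy as np

# Hypothetical decision matrix: 5 candidate watersheds x 4 criteria
# (e.g., runoff volume, slope suitability, environmental impact, distance to demand),
# all already normalized to [0, 1] in "benefit" form (cost criteria inverted).
scores = np.array([
    [0.80, 0.60, 0.30, 0.70],
    [0.55, 0.90, 0.50, 0.40],
    [0.70, 0.40, 0.80, 0.60],
    [0.30, 0.75, 0.60, 0.90],
    [0.90, 0.50, 0.40, 0.55],
])
weights = np.array([0.35, 0.25, 0.25, 0.15])   # criterion weights, summing to 1

# Weighted-sum aggregation: higher overall score means a more suitable area.
overall = scores @ weights
ranking = np.argsort(overall)[::-1]
print("Ranking of alternatives (best first):", ranking.tolist())
```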
The issue of fake news has become a viral concern in social networks due to the dangers it implies for the various social actors. The proposed solutions are varied: some focus on media education and others on the role of technology, using artificial intelligence to detect and discriminate this phenomenon, with the limitation that they do not recognize that this phenomenon is inherent to the capitalist relations of production. From the bibliographic analysis it is clear that the predominant approach is that of the communication sciences. Based on these concerns, the purpose of this article is to offer an evaluation from the point of view of the Political Economy of Fake News, recognizing that its deepest cause is a reflection of the sharpening of the fundamental economic contradiction of this system.
Ab initio molecular dynamics (AIMD) is a key method for the realistic simulation of complex atomistic systems and processes at the nanoscale. In AIMD, finite-temperature dynamical trajectories are generated using forces computed from electronic structure calculations. In systems with a large number of components, a typical AIMD run is computationally demanding. Machine learning (ML), on the other hand, is a subfield of artificial intelligence comprising algorithms that learn from experience using input and output data and that are capable of analysing data and making predictions. At present, the main application of ML techniques in atomistic simulations is the development of new interatomic potentials that correctly describe the potential energy surfaces (PES). This approach has been in constant progress since its inception around 30 years ago. ML potentials combine the advantages of classical and ab initio methods, that is, the efficiency of a simple functional form and the accuracy of first-principles calculations. In this article we review the evolution of four generations of machine learning potentials (MLPs) and some of their most notable applications, focusing on MLPs based on neural networks. We also present the state of the art of this topic and future trends. Finally, we report the results of a scientometric study (covering the period 1995–2023) on the impact of ML techniques applied to atomistic simulations, the distribution of publications by geographical region, and the hot topics investigated in the literature.
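As a schematic of the first-generation (Behler–Parrinello-style) idea reviewed here, where the total energy is written as a sum of atomic contributions predicted from local-environment descriptors, the sketch below applies a tiny feed-forward network per atom. The descriptors, network size, and weights are placeholders; real MLPs use symmetry functions or learned descriptors fitted to ab initio reference data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "symmetry function" descriptors: n_atoms x n_descriptors.
# In practice these encode each atom's local chemical environment.
descriptors = rng.random((8, 10))

# A tiny per-atom feed-forward network with one hidden layer (untrained).
W1, b1 = rng.normal(size=(10, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def atomic_energy(d):
    """Predict one atom's energy contribution from its descriptor vector."""
    hidden = np.tanh(d @ W1 + b1)
    return (hidden @ W2 + b2)[0]

# Key ansatz: total energy is the sum of atomic energies, which makes the
# potential scale to systems of different sizes.
E_total = sum(atomic_energy(d) for d in descriptors)
print("Total (untrained, illustrative) energy:", float(E_total))
```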
Citizen Science (CS) initiatives have proliferated in different scientific and social fields, producing vast amounts of data. Existing CS projects usually adopt PPSR Core as a data and metadata standard. However, these projects are still not FAIR (Findable, Accessible, Interoperable and Reusable) compliant. We propose using DCAT as a data and metadata standard, since it helps improve the interoperability of CS data catalogs and all the FAIR features. For this purpose, in this paper we present a model-driven approach to make CS data FAIR. Our approach makes the following contributions: (i) the definition of a metamodel based on PPSR Core, (ii) the definition of a DCAT profile for CS, and (iii) the definition of a set of automated transformations from PPSR Core to DCAT. Finally, the implementation of the model-driven process has been validated by evaluating several FAIR metrics. The results show that our proposal significantly improves the FAIR quality of CS projects.
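The paper's transformation rules are not listed in this summary; the sketch below only conveys the spirit of a PPSR Core to DCAT mapping with a simple field-to-property dictionary. The PPSR Core field names and the chosen DCAT properties are indicative assumptions, not the actual profile or transformations defined in the paper.

```python
# Illustrative PPSR Core -> DCAT mapping (field names are assumptions).
PPSR_TO_DCAT = {
    "projectId":   "dct:identifier",
    "name":        "dct:title",
    "description": "dct:description",
    "keywords":    "dcat:keyword",
    "startDate":   "dct:issued",
}

def to_dcat(ppsr_record: dict) -> dict:
    """Translate one PPSR Core-like project record into DCAT-style properties."""
    dataset = {"@type": "dcat:Dataset"}
    for ppsr_field, dcat_prop in PPSR_TO_DCAT.items():
        if ppsr_field in ppsr_record:
            dataset[dcat_prop] = ppsr_record[ppsr_field]
    return dataset

example = {"projectId": "CS-001", "name": "Urban birds survey",
           "description": "Volunteer-collected bird sightings.",
           "keywords": ["biodiversity", "citizen science"]}
print(to_dcat(example))
```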
Internationalization, as a challenge for higher education, fosters collaboration among universities and between universities and the users and beneficiaries of the substantive processes they carry out. In this sense, the objective of this article is to reveal, from the testimonies of the participants, the process of interrelation between life and the work associated with language teaching, enhancing development in this area of knowledge for the formation of Education professionals, as a starting point for raising the quality of services at the University of Informatics Sciences of Havana through research projects, agreements, and scientific exchanges. The qualitative study uses documentary analysis, experiential and testimonial methods, and a matrix of coincidence, together with a satisfaction test applied to the teachers of the Language Center. The results evidenced the contribution of scientific exchanges to this process of permanent and continuous training for the improvement of educational practice, shared through presentations in a methodological scientific seminar and in the X L@ngtech workshop by members of the faculty and presented as testimonies.
Academic publishing houses are a relevant source for the development of science and innovation within universities. For this reason, it is important to obtain parameters for assessing the impact of scientific production within academia. The objective of this work is therefore to propose innovation indicators that university presses should use to measure the impact of the scientific production of their institutions, drawing on the experience of Ediciones Futuro and on the study and analysis of articles and books related to the topic. The work also discusses some online tools for measuring the impact of scientific publications. This result is considered of great importance because these indicators make it possible to determine whether university presses are innovating in their field and can remain at the forefront of the production and distribution of academic publications.
Nowadays, it is necessary to improve the online training process to develop the skills that professionals require. In this context, the objective is to evaluate a methodological proposal for the online training of engineers aimed at strengthening motivation and participation. The methodology is quantitative, descriptive, and non-experimental, exploratory and evaluative in nature, and presents empirical evidence from results obtained in 2021 with fourth-year Civil Engineering students of the Cujae university in an online course. The article presents a didactic conception based on the integration of processes, activities, resources, and technologies, together with cooperative learning (CL) and didactic co-design (DC), for the assessment of motivation and participation levels.
In the current context, in which the educational system is breaking with some established schemes, a research project has been devised at the University of Informatics Sciences (UCI, by its initials in Spanish) that conceives the use of active methodologies in the teaching-learning process. This work aims to present the design of a methodology for developing learning based on software development projects, which contributes to the integration of the Disciplines of the syllabus, using Professional Practice as the space for integration. Historical-logical, analytical-synthetic, and systemic-structural-functional methods are used. The main elements of the methodology, its theoretical and instrumental apparatus, are described, and the main results obtained after its partial application are presented. For its assessment, consultation with specialists and the Iadov technique were applied. The results obtained have an impact on the integration of the Disciplines of the Informatics Sciences Engineering major.
In recent decades, sustainability has gained presence in the political, institutional, business, and academic spheres, with a notable increase in regulations, institutional guidelines, indicators, and indices developed by international organisations, which provide a guide for incorporating this issue into the management and administration systems of companies and institutions. The aim of this research is to develop a framework for managing sustainability in entities of the Information and Communication Technologies sector in Cuba from the perspective of strategic management. The framework comprises three articulation dimensions, which provide the strategic, business-process, and normative-ethical approaches, and five evaluation dimensions (institutional, economic, social, ecological, and technological innovation), which allow the management and evaluation of sustainability in the organisation under study. Implementing the framework makes it possible to diagnose and design the strategic direction for sustainability, prioritise and classify the sector's Critical Success Factors, identify relevant processes, and design risk-based ethics and regulatory compliance programmes in one of the sectors classified as a priority for the national economy, contributing to an improved level of sustainability.
1,289 members
Rey Segundo Guerrero-Proenza
  • Department of Computational Intelligence
J. Gulín González
  • Centro de Matemática Computacional
Juan Antonio Plasencia Soler
  • Organisational Management
Information
Address
Havana, Cuba
Head of institution
Prof. Dr. C. Walter Baluja