Enrique Tomás Martínez Beltrán

Verified
Enrique verified their affiliation via an institutional email.
  • PhD Student in Computer Science at the University of Murcia

About

37 Publications
5,289 Reads
423 Citations
Introduction
Enrique Tomás Martínez Beltrán is working towards a Ph.D. in Computer Science at the University of Murcia, Spain. He obtained a B.Sc. degree in Information and Communication Technologies and an M.Sc. degree in New Technologies, specializing in information security, networks, and telematics. His research interests include cybersecurity, Federated Learning, Brain-Computer Interfaces, IoT, and Artificial Intelligence applied to different fields using Machine Learning and Deep Learning techniques.
Current institution
University of Murcia
Current position
  • PhD Student
Additional affiliations
September 2016 - present
University of Murcia
Description
  • Specialization track ("mención") in Information and Communication Technologies. Introduced to the world of cybersecurity and new technologies.
July 2020 - October 2021
University of Murcia
Position
  • Research Associate
Education
September 2020 - July 2021
University of Murcia
Field of study
  • Bilingual M.Sc. in New Technologies, specializing in networks, telematics, and cybersecurity.
September 2016 - June 2020
University of Murcia
Field of study
  • B.Sc. in Information and Communication Technologies.

Publications (37)
Preprint
Full-text available
Decentralized Federated Learning (DFL) enables collaborative, privacy-preserving model training without relying on a central server. This decentralized approach reduces bottlenecks and eliminates single points of failure, enhancing scalability and resilience. However, DFL also introduces challenges such as suboptimal models with non-IID data distri...
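As a rough illustration of the decentralized aggregation idea described above, the sketch below (plain NumPy, with a made-up three-node topology and parameter shapes, not the algorithm from this preprint) lets every node average its model parameters with those of its direct neighbours instead of uploading them to a central server.

    # Illustrative only: gossip-style averaging over a hypothetical peer graph.
    import numpy as np

    topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}         # hypothetical peer graph
    params = {n: np.random.randn(10) for n in topology}  # stand-ins for model weights

    def decentralized_round(params, topology):
        """One aggregation round: each node keeps the mean of its own and
        its neighbours' parameter vectors, so no central server is needed."""
        new_params = {}
        for node, neighbours in topology.items():
            stacked = np.stack([params[node]] + [params[m] for m in neighbours])
            new_params[node] = stacked.mean(axis=0)
        return new_params

    params = decentralized_round(params, topology)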
Article
Mosaic warfare is a military strategy where reconnaissance missions with aerial vehicles are critical for gathering enemy information and achieving battlefield dominance. Nowadays, machine learning (ML) techniques play a pivotal role in this task by enabling precise detection of military vehicles. However, reconnaissance missions face challenges, p...
Preprint
Full-text available
Decentralized Federated Learning (DFL) trains models in a collaborative and privacy-preserving manner while removing model centralization risks and improving communication bottlenecks. However, DFL faces challenges in efficient communication management and model aggregation within decentralized environments, especially with heterogeneous data distr...
Chapter
Decentralized Federated Learning (DFL) emerges as an innovative paradigm to train collaborative models, addressing the single point of failure limitation. However, the security and trustworthiness of FL and DFL are compromised by poisoning attacks, negatively impacting its performance. Existing defense mechanisms have been designed for centralized...
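Since this chapter (and several works below) deals with poisoning attacks on federated training, a generic example of a poisoning-tolerant aggregation rule is sketched here: a coordinate-wise median, which is a standard baseline and not necessarily the defence proposed in the chapter.

    # Generic robust-aggregation baseline: the median bounds the influence
    # of a minority of manipulated model updates. Values below are invented.
    import numpy as np

    def median_aggregate(updates):
        """Aggregate a list of parameter vectors with a coordinate-wise median."""
        return np.median(np.stack(updates), axis=0)

    honest = [np.ones(5) * w for w in (0.9, 1.0, 1.1)]   # hypothetical benign updates
    poisoned = [np.ones(5) * 100.0]                      # hypothetical malicious update
    print(median_aggregate(honest + poisoned))           # stays close to 1.0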
Preprint
Full-text available
Machine Learning (ML) faces several challenges, including susceptibility to data leakage and the overhead associated with data storage. Decentralized Federated Learning (DFL) offers a robust solution to these issues by eliminating the need for centralized data collection, thereby enhancing data privacy. In DFL, distributed nodes collaboratively tra...
Article
Full-text available
In response to the global safety concern of drowsiness during driving, the European Union enforces that new vehicles must integrate detection systems compliant with the general data protection regulation. To identify drowsiness patterns while preserving drivers’ data privacy, recent literature has combined Federated Learning (FL) with different bio...
Preprint
Full-text available
Federated Learning (FL) has emerged as a promising approach to address privacy concerns inherent in Machine Learning (ML) practices. However, conventional FL methods, particularly those following the Centralized FL (CFL) paradigm, utilize a central server for global aggregation, which exhibits limitations such as bottleneck and single point of fail...
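For contrast with the decentralized sketches above, the centralized aggregation step this abstract refers to can be pictured as a FedAvg-style weighted mean computed on a single server; the sketch below is a minimal illustration with invented shapes and client dataset sizes, not the method of the preprint.

    # Centralized FL in one step: the server averages client updates,
    # weighted by each client's dataset size. Everything here is made up.
    import numpy as np

    def server_aggregate(client_params, client_sizes):
        """FedAvg-style weighted mean of client parameter vectors."""
        weights = np.asarray(client_sizes, dtype=float)
        weights /= weights.sum()
        return sum(w * p for w, p in zip(weights, client_params))

    clients = [np.random.randn(10) for _ in range(4)]    # hypothetical client updates
    print(server_aggregate(clients, [100, 200, 50, 150]))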
Article
Full-text available
Federated learning (FL) enables participants to collaboratively train machine and deep learning models while safeguarding data privacy. However, the FL paradigm still has drawbacks that affect its trustworthiness, as malicious participants could launch adversarial attacks against the training process. Previous research has examined the robustness o...
Article
Full-text available
The rise of Decentralized Federated Learning (DFL) has enabled the training of machine learning models across federated participants, fostering decentralized model aggregation and reducing dependence on a server. However, this approach introduces unique communication security challenges that have yet to be thoroughly addressed in the literature. Th...
Article
Full-text available
Driver drowsiness is a significant concern and one of the leading causes of traffic accidents. Advances in cognitive neuroscience and computer science have enabled the detection of drivers’ drowsiness using Brain-Computer Interfaces (BCIs) and Machine Learning (ML). However, the literature lacks a comprehensive evaluation of drowsiness detection pe...
Article
Full-text available
In recent years, Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, height...
Preprint
Full-text available
Industry 4.0 has brought numerous advantages, such as increasing productivity through automation. However, it also presents major cybersecurity issues such as cyberattacks affecting industrial processes. Federated Learning (FL) combined with time-series analysis is a promising cyberattack detection mechanism proposed in the literature. However, the...
Conference Paper
Full-text available
This paper presents Fedstellar, a platform for training decentralized Federated Learning (FL) models in heterogeneous topologies in terms of the number of federation participants and their connections. Fedstellar allows users to build custom topologies, enabling them to control the aggregation of model parameters in a decentralized manner. The plat...
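The topology control described for Fedstellar can be pictured with the short sketch below; the function name and the use of networkx are purely illustrative assumptions and are not the actual Fedstellar API.

    # Hypothetical helper: build a graph whose edges define which federation
    # participants exchange and aggregate model parameters with each other.
    import networkx as nx

    def build_topology(kind, n_nodes):
        if kind == "ring":
            return nx.cycle_graph(n_nodes)
        if kind == "star":
            return nx.star_graph(n_nodes - 1)
        if kind == "fully_connected":
            return nx.complete_graph(n_nodes)
        raise ValueError(f"unknown topology: {kind}")

    g = build_topology("ring", 5)
    for node in g.nodes:
        print(node, "aggregates with neighbours", sorted(g.neighbors(node)))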
Preprint
Full-text available
The rise of Decentralized Federated Learning (DFL) has enabled the training of machine learning models across federated participants, fostering decentralized model aggregation and reducing dependence on a server. However, this approach introduces unique communication security challenges that have yet to be thoroughly addressed in the literature. Th...
Preprint
Full-text available
In 2016, Google proposed Federated Learning (FL) as a novel paradigm to train Machine Learning (ML) models across the participants of a federation while preserving data privacy. Since its birth, Centralized FL (CFL) has been the most used approach, where a central entity aggregates participants' models to create a global one. However, CFL presents...
Article
Full-text available
Traffic accidents are the leading cause of death among young people, a problem that claims an enormous number of victims every year. Several technologies have been proposed to prevent accidents, with brain–computer interfaces (BCIs) being one of the most promising. In this context, BCIs have been used to detect emotional states, concentration issues, or stres...
Preprint
Full-text available
The metaverse has gained tremendous popularity in recent years, allowing the interconnection of users worldwide. However, current systems used in metaverse scenarios, such as virtual reality glasses, offer a partial immersive experience. In this context, Brain-Computer Interfaces (BCIs) can introduce a revolution in the metaverse, although a study...
Preprint
Full-text available
In recent years, Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data. Since its birth, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, height...
Preprint
Full-text available
Federated learning (FL) allows participants to collaboratively train machine and deep learning models while protecting data privacy. However, the FL paradigm still presents drawbacks affecting its trustworthiness since malicious participants could launch adversarial attacks against the training process. Related work has studied the robustness of ho...
Preprint
Full-text available
Drowsiness is a major concern for drivers and one of the leading causes of traffic accidents. Advances in Cognitive Neuroscience and Computer Science have enabled the detection of drivers' drowsiness by using Brain-Computer Interfaces (BCIs) and Machine Learning (ML). Nevertheless, several challenges remain open and should be faced. First, a compre...
Preprint
Full-text available
Traffic accidents are the leading cause of death among young people, a problem that claims an enormous number of victims every year. Several technologies have been proposed to prevent accidents, with Brain-Computer Interfaces (BCIs) being one of the most promising. In this context, BCIs have been used to detect emotional states, concentration issues, or stres...
Conference Paper
Full-text available
Brain-Computer Interfaces are devices that enable two-way communication between an individual's brain and external devices, allowing the acquisition of neural activity and neurostimulation. Regarding the former, electroencephalographic signals are widely used to acquire subjects' information. Therefore, a manipulation of the data a...
Conference Paper
Full-text available
Brain-Computer Interfaces (BCIs) are bidirectional devices that have allowed people to control computers or external devices through their brain activity. The P300 Speller is one of the most widely used BCI applications, where subjects can transmit textual information mentally with satisfactory performance. However, the P300 Speller still has room...
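A common baseline for the detection step behind a P300 Speller is a linear discriminant over flattened EEG epochs; the sketch below runs on synthetic data and is only an assumption about the usual pipeline, not the method evaluated in this paper.

    # Synthetic target vs. non-target epoch classification with LDA.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n_epochs, n_channels, n_samples = 200, 8, 64
    X = rng.normal(size=(n_epochs, n_channels, n_samples))
    y = rng.integers(0, 2, size=n_epochs)                # 1 = target flash
    X[y == 1, :, 20:30] += 0.5                           # crude P300-like deflection

    clf = LinearDiscriminantAnalysis()
    clf.fit(X.reshape(n_epochs, -1), y)                  # flatten each epoch into features
    print("training accuracy:", clf.score(X.reshape(n_epochs, -1), y))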
Article
Full-text available
As recently reported by the World Health Organization (WHO), the high use of intelligent devices such as smartphones, multimedia systems, or billboards causes an increase in distraction and, consequently, fatal accidents while driving. The use of EEG-based Brain-Computer Interfaces (BCIs) has been proposed as a promising way to detect distractions....
Article
Full-text available
Most current Brain–Computer Interface (BCI) application scenarios use electroencephalographic (EEG) signals containing the subject’s information. This means that if the EEG were maliciously manipulated, the proper functioning of BCI frameworks could be at risk. Unfortunately, it happens in frameworks sensitive to noise-based cyberattacks, and mo...
Chapter
Full-text available
In recent years, the growth of brain-computer interfaces (BCIs) has been remarkable in specific application fields, such as the medical sector or the entertainment industry. Most of these fields use evoked potentials, like P300, to obtain neural data capable of controlling prostheses or providing a more immersive experience in videogames. The natural use of...
Article
Full-text available
Brain-computer interfaces (BCIs) were first used in clinical scenarios and have nowadays reached new fields such as entertainment and learning. Using BCIs, neuronal activity can be monitored for various purposes, one of them being the study of the central nervous system's response to certain stimuli, as is the case with evoked potentials. However, due...
Article
Since the first reported case in Wuhan in late 2019, COVID-19 has rapidly spread worldwide, dramatically impacting the lives of millions of citizens. To deal with the severe crisis resulting from the pandemic, institutions around the world have been forced to make decisions that affect the socio-economic realm. In this sense, researchers from diverse kn...
