IEEE Computer Society
  • Washington, D.C., United States
Recent publications
Aim/Purpose: The major goal of this work is to establish prediction models that can inform better diagnosis and treatment strategies by uncovering previously unidentified interactions between genes. Background: Driven by rapid advances in genomics, knowledge of the factors causing disease depends increasingly on deciphering the deep linkages present in gene expression data. Common approaches typically fail to capture temporal links in constantly changing biological systems. This work overcomes this restriction by combining the sequential learning abilities of LSTM with the pattern recognition capacity of SVM. Methodology: Our method uses a hybrid model combining LSTM and SVM to forecast gene expression. Working together, the LSTM and SVM components find relevant features in the gene expression data, clarifying trends in the data; the LSTM component, in particular, handles the data's temporal dependencies. This combined approach improves the accuracy and interpretability of prediction models used in the healthcare industry. Contribution: There are many ways to extract key insights from gene expression data; combining LSTM and SVM for biclustering gene expression data offers much for healthcare informatics. Findings: The proposed LSTM-based SVM is evaluated against numerous current methods across several performance metrics. These results open several opportunities for the development of customized medicine and the tailoring of therapies to personal genetic profiles. Recommendation for Researchers: Examine the proposed LSTM-SVM hybrid model on a variety of healthcare-related datasets. Future Research: This work can be extended with other deep-learning algorithms to achieve better accuracy and performance.
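The abstract does not give implementation details, but the pipeline it describes (an LSTM consuming a gene expression time series and emitting features for a downstream SVM) can be sketched minimally. The following is an illustrative numpy sketch with hypothetical random weights, not the authors' model; the final hidden state stands in for the feature vector an SVM classifier would receive.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_features(X, Wx, Wh, b):
    """Run a single-layer LSTM over X of shape (T, d) and return the final
    hidden state, usable as a feature vector for a downstream SVM."""
    hidden = Wh.shape[0]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x_t in X:
        z = x_t @ Wx + h @ Wh + b        # gate pre-activations, (4 * hidden,)
        i, f, o, g = np.split(z, 4)      # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Toy usage: 10 time points of 5 genes, 4 hidden units, random weights.
rng = np.random.default_rng(0)
d, hidden = 5, 4
X = rng.standard_normal((10, d))
Wx = rng.standard_normal((d, 4 * hidden))
Wh = rng.standard_normal((hidden, 4 * hidden))
b = np.zeros(4 * hidden)
features = lstm_features(X, Wx, Wh, b)   # would be fed to an SVM classifier
```

In a full pipeline, `features` for each sample would be collected into a matrix and passed to a standard SVM (e.g., scikit-learn's `svm.SVC`) for classification.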
Artificial intelligence has emerged as a revolutionary technology offering substantial advances over traditional information and communication systems. However, the increasing prevalence of AI introduces new vulnerabilities, making AI-driven systems more susceptible to cybercriminal activities and security threats aimed at disrupting their operations. This study comprehensively examines the cybersecurity challenges and threats associated with AI applications, emphasizing the core principles of information security: confidentiality, integrity, and availability. The study categorizes AI-related threats into two key areas: first, threats targeting critical AI components such as data, models, and algorithms, and second, the malicious exploitation of AI to conduct sophisticated, large-scale cyberattacks. This analysis contributes to a threat-informed defense by examining risk assessment methodologies to address these challenges, underscoring the need for robust security frameworks. Furthermore, it leverages the Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS) guidelines, offering future research directions to enhance AI security, and providing practical recommendations for securing AI across diverse deployment scenarios.
This paper presents a novel key exchange scheme based on the underwater acoustic channel that tackles the challenges posed by the uncertainty and vulnerability of the ocean environment. The proposed scheme models the channel’s uncertainty by constructing expressions for noise, multipath, and Doppler parameters and introducing the concept of underwater acoustic channel interference factors using Rényi entropy. To ensure identity authentication and initial key extraction, the scheme uses an intelligent hash function based on the twisted Edwards elliptic curve. It then employs the segmented initial key sequence to generate a segmented Toeplitz matrix, which is multiplied to generate labels through block operations, ensuring secure transmission of the initial key. The scheme enhances confidentiality through an additional hash process to generate the final security key. The scheme’s correctness, robustness, and confidentiality are confirmed using information theory, and simulation results show that it achieves a key generation rate of 631 bit/s with an upper bound of the adversary’s success rate of 4.3 × 10⁻²³ for an initial information volume of 50,000 bits, indicating significant advantages in terms of bit rate and bit error rate. Overall, this paper presents a promising key exchange scheme that can mitigate the challenges posed by the underwater acoustic channel’s uncertainty and vulnerability.
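The Toeplitz-matrix step described above follows the standard privacy-amplification construction: a binary Toeplitz matrix built from a public seed compresses the (partially leaked) initial key into a shorter, nearly uniform key via a GF(2) matrix-vector product. The sketch below shows that generic construction, not the paper's segmented/block variant; all names and the tiny bit sizes are illustrative.

```python
def toeplitz_from_seed(seed_bits, m, n):
    """Build an m x n binary Toeplitz matrix from m + n - 1 seed bits.
    Entry (i, j) is seed_bits[i - j + n - 1], so each diagonal is constant."""
    assert len(seed_bits) == m + n - 1
    return [[seed_bits[i - j + n - 1] for j in range(n)] for i in range(m)]

def amplify(T, key_bits):
    """GF(2) matrix-vector product: compress n key bits to len(T) output bits."""
    return [sum(r * k for r, k in zip(row, key_bits)) % 2 for row in T]

# Toy usage: compress a 3-bit initial key to 2 bits with a 4-bit public seed.
T = toeplitz_from_seed([1, 0, 1, 1], m=2, n=3)
final_key = amplify(T, [1, 1, 0])
```

In practice m and n are thousands of bits and m is chosen from the estimated adversary information, per the leftover hash lemma.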
Optimizing electric vehicle charging stations through advanced predictive ensemble techniques is essential for enhancing efficiency, reducing operational costs, and promoting the widespread adoption of electric vehicles. This approach plays a pivotal role in ensuring seamless charging experiences, thereby advancing the transition to a sustainable and eco-friendly transportation system. In this regard, the proposed paper presents StateEVMan, a novel approach employing doubly-fed Long Short-Term Memory (LSTM) techniques in conjunction with a comprehensive Electric Vehicle (EV) station dataset. Utilizing stacked ensemble learning, the model predicts three key performance indicators (KPIs): Charging Time [Hour], Total Power Output [kWh], and Total Cost [$]. The study assesses the model's performance using Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and R-squared (R²) metrics across a dataset comprising 10,185 data points. Notably, the model achieves accurate predictions for these KPIs, demonstrating its robust forecasting capabilities. StateEVMan emerges as a valuable tool for optimizing EV charging station operations and enhancing efficiency.
Predicting CO₂ emissions in the automotive industry is vital for driving innovation in fuel efficiency, shaping policies, and fostering a greener, sustainable future. An advanced predictive modeling approach for estimating CO₂ emissions in the automotive industry using machine learning techniques is presented in this paper. Data from 46 distinct automotive brands was incorporated, comprehensively analyzing various vehicles. The predictive model employed six numeric features, encompassing engine size, cylinder count, and diverse fuel consumption metrics, along with five categorical features concerning brand, model, vehicle class, transmission, and fuel type. Considerable results were achieved, with a mean squared error (MSE) of 29.99, a root mean squared error (RMSE) of 5.48, and an R² of 0.991, showcasing the model’s forecasting accuracy for CO₂ emissions. Therefore, this work underscores the effectiveness of machine learning in CO₂ emissions prediction and emphasizes the importance of considering diverse features and multiple automotive brands for constructing comprehensive and robust models in the context of environmental impact assessment, thereby contributing to a more sustainable automotive industry.
Remote-sensing imagery plays an important role in areas such as geographic information systems (GIS), environmental monitoring and resource management. In order to enhance the reliability of remote-sensing image transmission, a chaotic image cryptosystem is proposed in this paper. The algorithm employs the discrete memristor-coupled Rulkov neuron map with rich dynamic behavior, in conjunction with Knuth-Morris-Pratt (KMP), Three-Input Majority Gate (TIMG), and DNA operations. The original image is associated with the cryptosystem through SHA-256 as the key. The KMP algorithm is employed in the confusion process to obtain the next array for sliding selection and position mapping confusion. Additionally, a diffusion operation is designed in which the next array controls the TIMG, so that the DNA operation method can be chosen in a flexible manner. The results show that the key space reaches 2⁷⁹⁰, correlation coefficients are close to 0, average entropy is 7.9979, average NPCR is 99.6137%, and average UACI is 33.4940%. Moreover, experimental simulations demonstrate that our solution exhibits a degree of resilience to noise interference and data loss. In summary, through grayscale and color image analysis, it is evident that the proposed algorithm can achieve a cryptosystem that is both flexible and secure.
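The "next array" borrowed from KMP is the standard failure function: for each prefix of a pattern, the length of its longest proper prefix that is also a suffix. The cryptosystem repurposes this array to drive sliding selection and position mapping; the sketch below shows only the standard computation, not the paper's confusion step.

```python
def kmp_next(pattern):
    """KMP failure function: nxt[i] is the length of the longest proper
    prefix of pattern[:i + 1] that is also a suffix of it."""
    nxt = [0] * len(pattern)
    k = 0                      # length of the current matched prefix
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = nxt[k - 1]     # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        nxt[i] = k
    return nxt

next_array = kmp_next("ababaca")
```

In the cryptosystem, a "pattern" would presumably be derived from the chaotic sequence, so the resulting next array is key-dependent.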
This study has been undertaken to amalgamate the principles of Site Reliability Engineering and Data Engineering to effectively measure, monitor, and manage the reliability of petabyte-scale data engineering processes, from collection at the source through processing, analyzing, and distributing the data for appropriate decision making to improve business outcomes and system performance. Modern data architectures increasingly leverage cloud platforms, low-code systems, and serverless technologies to enable scalable data engineering. However, these innovations also introduce new complexities regarding reliability assurance. As these failure-prone yet business-critical data infrastructures continue rapid adoption, it is vital to elucidate architectural paradigms, quantified benchmarks, and procedural methodologies tailored to safeguarding dependability across heterogeneous, distributed data ecosystems. This paper equips end users with a reusable framework, ingrained with best practices, for developing a blueprint for data reliability across the business units of an organization.
A large amount of data has been accumulated with the development of the Internet industry. Many problems have been exposed with this data explosion: 1. the contradiction between data privacy and data collaboration; 2. the contradiction between data ownership and the right of data usage; 3. the legality of data collection and data usage; 4. the relationship between the governance of data and the governance of rules; 5. the traceability of evidence chains. To face such a complicated situation, many algorithms have been proposed and developed. This article tries to build a model from the perspective of blockchain to make some breakthroughs. The Internet of Rights (IOR) model uses multi-chain technology to logically break down the consensus mechanism into layers, including storage consensus, permission consensus, role consensus, and transaction consensus, thus building a new infrastructure that enables data sources with complex organizational structures and interactions to collaborate smoothly on the premise of protecting data privacy. With blockchain’s nature of decentralization, openness, autonomy, immutability, and controllable anonymity, the Internet of Rights (IOR) model registers the ownership of data and enables applications to build an ecosystem based on responsibilities and rights. It also provides cross-domain processing with privacy protection, as well as the separation of data governance and rule governance. With the processing capabilities of artificial intelligence and big data technology, as well as the ubiquitous data collection capabilities of the Internet of Things, the Internet of Rights (IOR) model may provide a new infrastructure concept for realizing swarm intelligence and building a new paradigm of the Internet, i.e., intelligent governance.
The influence of music on mood and emotions has been widely studied, highlighting its potential for self-expression and personal enjoyment. As technology continues to advance exponentially, manually selecting and analyzing music from the vast array of artists, songs, and listeners becomes impractical. In this study, we propose a system called “emotion-aware music recommendation” that leverages real-time facial expressions to determine a person’s emotional state. Deep learning models are employed to accurately detect facial emotions, leveraging the principles of transfer learning. By combining the model’s output with the mapped songs from the dataset, a personalized playlist is created. The main objective of the study is to effectively classify user emotions into six distinct categories using pre-trained models. Experimental studies conducted on the proposed approach employ the RAF-ML benchmark facial expression dataset. The findings indicate that the model outperforms existing approaches, demonstrating its effectiveness in generating tailored music recommendations.
Lattice-based cryptographic schemes such as Crystals-Kyber and Dilithium are post-quantum algorithms selected for standardization by NIST, as they are considered secure against quantum computing attacks. Multiplication in polynomial rings is the most time-consuming operation in many lattice-based cryptographic schemes, and it is also subject to side-channel attacks. While NTT-based polynomial multiplication is almost the norm across a wide range of implementations, a relatively new method, the incomplete NTT, is preferred to accelerate lattice-based cryptography, especially on computing platforms that feature special instructions. In this paper, we present a novel, efficient, non-profiled power/EM side-channel attack targeting polynomial multiplication based on the incomplete NTT algorithm. We apply the attack to the Crystals-Dilithium signature algorithm and the Crystals-Kyber KEM. We demonstrate that the method accelerates attack run-time compared to existing approaches. While a conventional non-profiled side-channel attack must test a much larger hypothesis set because it predicts two coefficients of the secret polynomials together, we propose a much faster zero-value filtering attack (ZV-FA), which reduces the size of the hypothesis set by targeting the coefficients individually. We also propose an effective and efficient validation and correction technique employing the inverse NTT to estimate and correct mispredicted coefficients. Our experimental results show that we can achieve a speed-up of 1915× over brute force.
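For readers unfamiliar with the transform this attack targets: the NTT is a DFT over a finite field, turning polynomial multiplication into cheap pointwise products. The toy sketch below implements a cyclic NTT modulo the small prime 17, with ω = 2 as a primitive 8th root of unity. Kyber and Dilithium actually use a negacyclic NTT modulo 3329 and 8380417 respectively, and the "incomplete" variant stops the recursion early, leaving small polynomial products at the leaves; this sketch shows only the basic complete transform.

```python
def ntt(a, root, q):
    """Recursive Cooley-Tukey NTT: A[k] = sum_j a[j] * root**(j*k) mod q."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], root * root % q, q)
    odd = ntt(a[1::2], root * root % q, q)
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = w * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q
        w = w * root % q
    return out

def cyclic_mul(a, b, q=17, root=2):
    """Multiply polynomials mod (x^n - 1) and mod q via NTT / pointwise / INTT."""
    n = len(a)
    inv_root = pow(root, -1, q)          # inverse root for the inverse NTT
    inv_n = pow(n, -1, q)                # scaling factor 1/n mod q
    A, B = ntt(a, root, q), ntt(b, root, q)
    C = [x * y % q for x, y in zip(A, B)]
    return [x * inv_n % q for x in ntt(C, inv_root, q)]

# (1 + x) * (1 + 2x) = 1 + 3x + 2x^2, degree 8, mod 17
product = cyclic_mul([1, 1, 0, 0, 0, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0])
```

The attack in the paper exploits the pointwise-multiplication stage: each pointwise product depends on few secret coefficients, so power/EM leakage there can be matched against per-coefficient hypotheses.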
Welcome to the IEEE Transactions on Privacy, a new periodical from the IEEE Computer Society that is focused directly on privacy and embraces its multidisciplinary nature.
Since its inception, humanity has depended on the skills of individuals and groups. Over millennia, humanity and skills evolved, differentiating those who would prosper from those who did not. From gatherers to hunters, from agriculture to industry, and from IT to AI, the complexity and rate of change have increased exponentially, driven by language, curiosity, and communication. IT skills and a variety of IT professions have dominated the past 25 years. At this juncture, advances in AI have altered the technology landscape, causing tectonic shifts in many professions. How will IT professions evolve, and how should IT professionals adapt? We contend that, first, required skill sets will rapidly change, increasing the importance of continuing education. Second, with the increased adoption of AI, the importance of data will also increase, demanding an ever-growing need for data science skills. Finally, many IT activities will be automated, requiring IT professionals to collaborate with AI assistants and take more strategic roles.
Assuring that digital systems and services operate in accordance with agreed norms and principles is essential to foster trust and facilitate their adoption. Ethical assurance requires a global ecosystem, where organizations not only commit to upholding human values, dignity, and well-being but are also able to demonstrate this when required by the specific context in which they operate. We focus on possible governance frameworks including regulatory and non-regulatory measures, taking as an example AI systems. Thereby, we highlight the importance of considering the specific context, as well as the entire life cycle, from design to deployment, including data governance. Socio-technical, value-based standards, and certification schemes are introduced as enabling instruments for operationalizing responsible and ethical approaches to AI in line with upcoming regulatory requirements.