Recent publications
Myocardial infarction (MI) stands as one of the most critical cardiac complications, occurring when blood flow to the heart muscle is partially or completely blocked. Electrocardiography (ECG) is an invaluable tool for detecting diverse cardiac irregularities. Manual investigation of MI-induced ECG changes is tedious, laborious, and time-consuming. Nowadays, deep-learning-based algorithms are widely investigated to detect various cardiac abnormalities and enhance the performance of medical diagnostic systems. Therefore, this work presents a lightweight deep learning framework (CardioNet) for MI detection using ECG signals. To construct time-frequency (T-F) spectrograms, filtered ECG sensor data is subjected to the short-time Fourier transform, movable Gaussian window-based S-transform (ST), and smoothed pseudo-Wigner-Ville distribution methods. To develop an automated MI detection system, the obtained spectrograms are fed to the benchmark Squeeze-Net and Alex-Net models and to a newly developed, lightweight deep learning model. The developed CardioNet with ST-based T-F images obtained an average classification accuracy of 99.82%, a specificity of 99.57%, and a sensitivity of 99.97%. The proposed system, in combination with a cloud-based algorithm, is suitable for designing wearables that detect several cardiac diseases using other biological signals from the cardiovascular system.
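The first of the three T-F transforms, the short-time Fourier transform, can be sketched in pure Python as a windowed DFT over sliding frames. This is an illustrative spectrogram builder, not the paper's CardioNet pipeline; the Hann window, window length, hop, and the synthetic test tone are all hypothetical choices.

```python
import cmath, math

def stft_spectrogram(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed one-sided DFT per frame."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1))
              for n in range(win_len)]
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = [s * w for s, w in zip(signal[start:start + win_len], window)]
        # one-sided DFT magnitudes (bins 0 .. win_len//2)
        mags = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                        for n in range(win_len)))
                for k in range(win_len // 2 + 1)]
        frames.append(mags)
    return frames  # list of frames, each a list of frequency-bin magnitudes

# synthetic stand-in for a filtered ECG segment: 5 Hz tone at 128 Hz sampling
fs = 128
sig = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs * 2)]
spec = stft_spectrogram(sig)   # 7 frames x 33 bins; energy near bin 5*64/128
```

In practice a library routine (e.g. an FFT-based STFT) would replace the inner DFT loop; the sketch only shows how the T-F image is assembled frame by frame.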
In software development, software fault prediction (SFP) models aim to identify code sections with a high likelihood of faults before the testing process. SFP models achieve this by analyzing data about the structural properties of the software's previous versions. Consequently, the accuracy and interpretability of SFP models depend heavily on the chosen software metrics and how well they correlate with patterns of fault occurrence. Previous research has explored improving SFP model performance through feature selection (metric selection), yet conclusions have been inconsistent due to the presence of inconsistent and correlated software metrics. Relying solely on correlations between metrics and faults makes it difficult for developers to take actionable steps, as the causal relationships remain unclear. To address this challenge, this work investigates the use of Causal Inference (CI) methods to understand the causal relationships between software project characteristics, development practices, and the fault-proneness of code sections. We propose a CI-based technique called Average Treatment Effect for Feature Selection (ATE-FS), which quantifies cause-and-effect relationships between software metrics and fault-proneness. ATE-FS utilizes Average Treatment Effect (ATE) features, which capture the causal impact of a metric on fault-proneness, to identify the code metrics most suitable for building SFP models. Through an experimental analysis involving twenty-seven SFP datasets, we validate the performance of ATE-FS and compare it with other state-of-the-art feature selection techniques. The results demonstrate that ATE-FS achieves significantly better fault-prediction performance and improves the consistency of feature selection across diverse SFP datasets.
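The ATE idea can be illustrated with a deliberately naive sketch: binarize one metric at its median ("treatment" = high value) and difference the fault rates of the two groups. The actual ATE-FS technique presumably adjusts for confounders among correlated metrics, which this sketch omits; the module data below is hypothetical.

```python
from statistics import median

def ate_score(metric_values, faulty):
    """Naive average treatment effect for one metric: fault rate of modules
    above the median value minus the rate of those at or below it.
    No confounder adjustment -- purely illustrative."""
    thr = median(metric_values)
    treated = [f for m, f in zip(metric_values, faulty) if m > thr]
    control = [f for m, f in zip(metric_values, faulty) if m <= thr]
    if not treated or not control:
        return 0.0
    return sum(treated) / len(treated) - sum(control) / len(control)

# hypothetical modules: lines-of-code metric vs. binary fault labels
loc = [120, 45, 300, 80, 500, 60, 220, 30]
faults = [1, 0, 1, 0, 1, 0, 1, 0]
score = ate_score(loc, faults)   # large positive score => metric is a candidate
```

Ranking metrics by such a score, and keeping the top ones, is the feature-selection pattern the abstract describes.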
In the backdrop of a culture where sex and sexual health are often associated with taboo and stigma, this paper investigates the role of Indian cinema in addressing women's sexual health concerns and in de-stigmatizing condoms through a critical analysis of three recent films: Janhit Mein Jaari, Chhatriwali, and Helmet. Employing Intersectional Feminist Theory and Rhetorical Discourse Analysis, the study examines how these films can contribute to the de-stigmatization of condoms in the Indian context. It also explores the portrayal of gender dynamics and societal norms intersecting with sexual and reproductive health issues. The findings suggest that these films not only challenge pre-existing stigmas and taboos associated with condoms but also advocate for gender equality and mutual respect in sexual relationships. This research underscores the transformative potential of cinema in shaping societal attitudes and promoting sexual health awareness, including the de-stigmatization of condom use.
Securing medical sensor data is imperative due to the susceptibility of wireless transmissions to eavesdropping. In this letter, we focus on improving the security of two-way communication in medical networks by investigating Deep Neural Networks (DNN) for two-way relay non-orthogonal multiple access (NOMA) systems. Utilizing a decode-and-forward relay and considering both maximum ratio combining (MRC) and selection combining (SC) at the eavesdropper, we derive analytical expressions for the secrecy outage probability (SOP), leveraging the exact SOP expression from [1]. Due to the system's complexity, deriving a closed-form SOP is challenging. To address this, we introduce a DNN framework for real-time SOP prediction, which not only validates the theoretical model but also significantly reduces offline execution time and computational complexity.
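The secrecy outage probability being predicted can be conveyed with a Monte Carlo estimate for a simplified single-hop Rayleigh wiretap setting — not the paper's two-way relay NOMA model with MRC/SC eavesdropper combining. The mean SNRs and target secrecy rate below are hypothetical.

```python
import math, random

def sop_monte_carlo(avg_snr_d, avg_snr_e, rate_s, trials=200_000, seed=7):
    """Estimate SOP = P(Cs < rate_s) under Rayleigh fading, where the
    instantaneous SNRs of the legitimate and eavesdropper links are
    exponential with the given means and Cs = [log2(1+g_d) - log2(1+g_e)]+."""
    random.seed(seed)
    outages = 0
    for _ in range(trials):
        g_d = random.expovariate(1 / avg_snr_d)   # legitimate-link SNR draw
        g_e = random.expovariate(1 / avg_snr_e)   # eavesdropper-link SNR draw
        cs = max(0.0, math.log2(1 + g_d) - math.log2(1 + g_e))
        outages += cs < rate_s
    return outages / trials

sop = sop_monte_carlo(10.0, 1.0, 1.0)   # SOP rises with the target rate
```

A DNN trained on (system parameters → SOP) pairs like these is what replaces the expensive analytical or simulation step at prediction time.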
Real-time crisis information on social media is crucial for supporting relief and rescue operations during the early stages of a crisis. However, the lack of sufficient information about ongoing incidents and the wealth of data from previous crises necessitate the use of domain adaptation (DA) techniques over other methods. Nevertheless, current DA approaches often fail to fully utilize the available past resources, resulting in the loss of important information for ongoing crises and negatively impacting performance. State-of-the-art models suffer from two pitfalls: (1) they do not model joint domain feature relations at the elementary and instance levels to exploit the complete information of each domain, and (2) they cannot efficiently harness the information when there are diversified and varying numbers of source crisis incidents. Inspired by the ensemble setup for identifying infrastructure damage, we introduce an Ensemble model using elementary feature (parts-of-speech tagging) Attention and a Hypersphere Separator (EnPHyS). It operates at joint feature levels, where each level works with the abundant source and scarce target data to extract the best of the (1) shared and (2) invariant features for the objective task. The ensemble uses multi-task learning (MTL) and an adversarial approach to enhance the retrieval of target features. EnPHyS was evaluated under single-source as well as multi-source domain adaptation scenarios on four publicly available datasets. The reported results on the standard F-measure metric reveal average growth of  and , respectively, over the best-performing baseline model.
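The F-measure used for evaluation is the harmonic mean of precision and recall; a minimal computation, with hypothetical true-positive/false-positive/false-negative counts:

```python
def f_measure(tp, fp, fn):
    """F1 score from confusion counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

f1 = f_measure(8, 2, 2)   # precision 0.8, recall 0.8 -> F1 = 0.8
```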
This paper proposes an eco-driving technique for an ego vehicle operating behind a non-communicating leading Heavy-Duty Vehicle (HDV), aimed at minimizing energy consumption while ensuring inter-vehicle distance. A novel data-driven approach based on Deep Reinforcement Learning (DRL) is developed to predict the future speed trajectory of the leading HDV using simulated speed profiles and road slope information. The DQN-based speed predictor achieves a prediction accuracy of 95.4% and 93.2% in Driving Cycles 1 and 2, respectively. This predicted speed is then used to optimize the ego vehicle's speed plan through a distributionally robust Model Predictive Controller (MPC), which accounts for uncertainties in the prediction, ensuring operational safety. The proposed method demonstrates energy savings of 12.5% in Driving Cycle 1 and 8.6% in Driving Cycle 2, compared to traditional leading vehicle speed prediction methods. Validated through case studies across simulated and real-world driving cycles, the solution is scalable, computationally efficient, and suitable for real-time applications in Intelligent Transportation Systems (ITS), making it a viable approach for enhancing sustainability in non-communicating vehicle environments.
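A toy receding-horizon planner conveys the flavor of the speed-planning step. It assumes the leader simply holds its current speed (instead of the paper's DRL predictor) and replaces the distributionally robust MPC with a grid search over constant accelerations; every parameter below is hypothetical.

```python
def plan_ego_accel(gap, v_ego, v_lead, dt=1.0, horizon=5, min_gap=10.0):
    """Pick a constant acceleration minimizing effort + speed-tracking cost
    over a short horizon, while keeping the inter-vehicle gap >= min_gap.
    The leader is assumed to hold its current speed."""
    best_a, best_cost = 0.0, float("inf")
    for a in (x / 10 for x in range(-30, 31)):        # candidates, m/s^2
        g, ve, cost, feasible = gap, v_ego, 0.0, True
        for _ in range(horizon):
            ve = max(0.0, ve + a * dt)                # ego speed update
            g += (v_lead - ve) * dt                   # gap update
            cost += a * a + 0.1 * (v_lead - ve) ** 2  # effort + tracking
            if g < min_gap:                           # safety constraint
                feasible = False
                break
        if feasible and cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

a_close = plan_ego_accel(50.0, 20.0, 25.0)   # slower ego, big gap: speed up
a_brake = plan_ego_accel(12.0, 25.0, 20.0)   # faster ego, small gap: brake
```

A real MPC would optimize the whole acceleration sequence under prediction uncertainty; the sketch only shows the constraint-plus-cost structure.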
Nowadays, Twitter is an important source of information and the latest updates during ongoing events, such as disasters. However, the huge number of tweets posted during a disaster makes identification of relevant information highly challenging. A summary of the tweets can therefore help decision-makers ensure efficient allocation of resources among the affected population. Several automated summarization approaches exist that can generate a summary from the tweets related to a disaster. Developing these automated summarization approaches requires the availability of a ground-truth summary for the dataset for verification. However, the number of publicly available datasets with ground-truth summaries for disaster events is still inadequate. To improve this situation, we need to create more ground-truth summaries. Existing approaches for ground-truth summary generation rely on the annotators' wisdom and intuition. This process requires immense human effort and significant time. Moreover, selecting the important tweets from the humongous set of input tweets often results in a sub-optimal choice of tweets in the final summary. To handle these challenges, we propose a hybrid approach (PORTRAIT) for ground-truth summary generation, in which we partly automate the procedure to improve the quality of the ground-truth summary and reduce human effort and time. We validate the effectiveness of PORTRAIT on 9 disaster events through quantitative and qualitative analysis, and we prepare and release ground-truth summaries for these 9 events, which cover both natural and man-made disasters across 5 different continents.
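The tweet-selection step that PORTRAIT partly automates can be caricatured as greedy coverage of new content words: repeatedly pick the tweet adding the most unseen words. This is an illustrative sketch on made-up tweets, not the actual PORTRAIT procedure.

```python
def greedy_summary(tweets, k=2):
    """Greedily select k tweets maximizing coverage of previously unseen
    words -- a toy stand-in for automated summary-candidate selection."""
    covered, chosen = set(), []
    for _ in range(k):
        best, best_gain = None, -1
        for t in tweets:
            if t in chosen:
                continue
            gain = len(set(t.lower().split()) - covered)  # new words added
            if gain > best_gain:
                best, best_gain = t, gain
        chosen.append(best)
        covered |= set(best.lower().split())
    return chosen

tweets = [
    "bridge collapsed near downtown many injured",
    "bridge collapsed downtown",
    "rescue teams dispatched to the bridge site",
]
picked = greedy_summary(tweets, k=2)   # skips the redundant second tweet
```

A human annotator would then review and refine such candidates, which is the "hybrid" division of labor the abstract describes.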
This study presents a multi-objective optimization approach for enhancing responsible sourcing, consumption, and production in construction supply chains, aligning with Sustainable Development Goal 12 (SDG 12). Using the Non-Dominated Sorting Genetic Algorithm III (NSGA-III), the research addresses complex trade-offs between environmental impact, cost-effectiveness, and social responsibility in construction projects. Data from industry case studies, including real-world construction projects, and simulations reflecting varying material costs, emissions regulations, and logistical challenges were used to validate the model. The findings reveal Pareto-efficient solutions, with up to a 9.4% reduction in carbon emissions and 3.3% cost savings while achieving a 7% improvement in social responsibility metrics. Sensitivity analysis demonstrates the model’s robustness to changes in material costs and supply chain disruptions. These results underscore NSGA-III’s effectiveness in generating optimized solutions that minimize environmental footprint, enhance resource efficiency, and promote ethical practices. This research provides actionable insights for construction firms and policymakers, offering a scalable model to integrate sustainable practices into construction supply chains and advance SDG 12 objectives.
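At the core of NSGA-III is Pareto non-domination over the objective vectors; a minimal check on hypothetical (emissions, cost, negated social score) triples, all treated as minimization objectives:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated solutions."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# hypothetical solutions: (emissions, cost, -social_score)
sols = [(9.0, 100, -7), (8.5, 103, -7), (9.5, 99, -6), (10.0, 110, -5)]
front = pareto_front(sols)   # the last solution is dominated by the first
```

NSGA-III layers reference-direction niching on top of this sorting to spread solutions across the front; the sketch shows only the dominance test.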
Routing Protocol for Low-power and Lossy Networks (RPL) is the standard routing protocol specified by the Internet Engineering Task Force (IETF) for Internet of Things (IoT) applications built on Low-power and Lossy Networks (LLNs). Although RPL brings many benefits to LLNs, the resource-constrained and easily tamperable nature of LLN devices leaves LLNs vulnerable to a wide range of attacks that primarily alter the functioning of RPL. One such attack is the Network Partitioning Attack (NPA). An NPA in RPL occurs when an attacker node intentionally divides a network into disjoint segments, preventing communication between nodes that were previously able to communicate. This can happen when an attacker does not complete the route registration step at the root node, exploits the rank property of RPL, and continues the standard Destination Advertisement Object (DAO) forwarding technique of RPL during the node-joining and DAG-maintenance phases, thereby segregating a section of nodes from the root node. In the literature, Enhanced-RPL (ERPL) is the only solution proposed to address the NPA. However, our study shows that ERPL further induces fake Destination Advertisement Object Acknowledgement (DAO-ACK) and DAO-dropping attacks, making it unsuitable for deployment in real-world applications. Our analysis indicates that the network's performance metrics do not improve with the existing mitigation technique when the attacker unicasts fake DAO-ACK packets to victim client nodes. Our key idea is to improve the existing mitigation technique (ERPL) with an effective NPA detection approach that authenticates the DAO-ACK packets sent from parent nodes to client nodes. Our proposed approach, SecRPLNPA, has been integrated and thoroughly tested using the Cooja simulator, and we have assessed its performance against standard RPL and ERPL.
Our empirical results suggest that in both stationary and mobile scenarios, SecRPLNPA proficiently detects and mitigates Network Partitioning Attacks while causing only minimal impact on resource-constrained nodes.
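The DAO-ACK authentication idea can be sketched with a keyed message authentication code: the parent signs the acknowledgement fields, and the client rejects any DAO-ACK whose tag does not verify. This is an illustrative stand-in, not the actual SecRPLNPA scheme; the key, node identifier, and field layout are all hypothetical.

```python
import hashlib, hmac

def sign_dao_ack(key: bytes, node_id: bytes, seq: int) -> bytes:
    """Parent side: HMAC over the DAO-ACK fields (node id + sequence number)."""
    msg = node_id + seq.to_bytes(4, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_dao_ack(key: bytes, node_id: bytes, seq: int, tag: bytes) -> bool:
    """Client side: constant-time comparison against a freshly computed tag."""
    return hmac.compare_digest(sign_dao_ack(key, node_id, seq), tag)

key = b"pre-shared-network-key"          # hypothetical shared secret
tag = sign_dao_ack(key, b"node-07", 42)  # attached to the DAO-ACK packet
```

A forged or replayed-with-modified-fields DAO-ACK fails verification, which is the property that defeats the fake-DAO-ACK attack described above.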
Plant diseases have been detrimental to the agriculture industry, as they cause substantial crop loss globally. To overcome this, IoT- and AI-based smart agriculture solutions are being deployed for plant disease detection. However, the diverse range of crops and their diseases poses enormous challenges to these methods. Additionally, the limited generalizability and black-box nature of existing deep learning models, together with the scarcity of in-field datasets, are the main bottlenecks in developing efficient and acceptable solutions for large-scale applications. In the present work, a lightweight model, 'ConViTX', is proposed for plant disease classification that demonstrates improved generalizability and explainability. The compact architecture of ConViTX uses a fusion of convolutional neural networks and vision transformers to simultaneously capture local and global features. Remarkably, ConViTX outperforms nine state-of-the-art deep learning methods on four publicly available datasets and a self-collected in-field maize dataset. Furthermore, the model demonstrates explainable predictions through Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME). ConViTX attains 98.8% accuracy on the maize dataset and 61.42% on drone camera-captured raw images. With only 0.7 million parameters and 0.647 billion operations per second, the proposed model has the potential for deployment in resource-constrained precision agriculture setups.
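The local/global fusion idea can be caricatured in a few lines: a 1-D convolution supplies local features (the CNN role) and a softmax attention pool aggregates them globally (the transformer role). This is a pedagogical sketch with made-up numbers, not the ConViTX architecture.

```python
import math

def conv1d(x, kernel):
    """Local feature extraction: valid 1-D convolution over the input."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def attention_pool(feats):
    """Global aggregation: softmax attention weights over all positions,
    so salient activations dominate the pooled feature."""
    scores = [math.exp(f) for f in feats]
    z = sum(scores)
    return sum(f * s / z for f, s in zip(feats, scores))

x = [0.0, 1.0, 0.0, -1.0, 0.0, 2.0]     # toy 1-D "image" signal
local = conv1d(x, [0.5, 0.5])            # smooths neighbouring samples
global_feat = attention_pool(local)      # attends to the strongest response
```

Attention pooling pulls the summary toward the largest local activation, unlike plain averaging — a one-number analogue of how global attention complements local convolution.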
Network virtualization (NV) allows service providers (SPs) to instantiate logically isolated entities called virtual networks (VNs) on top of a substrate network (SN). Though VNs bring multiple benefits, particularly in terms of economic costs and elasticity, they also raise various technical challenges. The primary one is the issue of optimally allocating resources to VNs, termed virtual network embedding (VNE). This paper presents an exhaustive survey of VNE, extensively covering the state of the art in this very active research field and focusing on emerging research trends in industry and academia over the last decade. In addition, this survey contributes to the literature by proposing a novel taxonomy for existing VNE solutions and providing a thorough comparative study of their strategies.
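A minimal greedy node-mapping heuristic illustrates the resource-allocation problem VNE solves. Real VNE also embeds virtual links under bandwidth constraints, which this sketch (with hypothetical CPU capacities) omits.

```python
def greedy_node_map(vn_demands, sn_capacity):
    """Map virtual nodes to substrate nodes: place the most demanding
    virtual node first onto the substrate node with the most remaining CPU.
    Returns the mapping, or None if the embedding must be rejected."""
    cap = dict(sn_capacity)                      # remaining substrate CPU
    mapping = {}
    for vnode, need in sorted(vn_demands.items(), key=lambda kv: -kv[1]):
        host = max(cap, key=cap.get)             # best-fit substrate node
        if cap[host] < need:                     # even the largest cannot fit
            return None
        mapping[vnode] = host
        cap[host] -= need
    return mapping

vn = {"a": 30, "b": 20, "c": 10}                 # virtual-node CPU demands
sn = {"s1": 50, "s2": 40}                        # substrate CPU capacities
m = greedy_node_map(vn, sn)
```

Heuristics like this form one branch of the taxonomy such surveys build, alongside exact (ILP-based) and learning-based embedding strategies.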
The increasing frequency and intensity of wildfires in recent years have not only devastated forest ecosystems but have also caused a significant economic burden. According to a World Economic Forum report, annual expenditures to combat wildfire hazards are estimated to exceed $50 billion. A lightweight deep learning model for smoke detection is proposed and evaluated on the SmokeRS dataset. The model also efficiently identifies even a tiny occurrence of smoke covering as little as 2% of an image's area, and it has been tested on industrial chimney smoke images and outdoor fire-smoke video scenes. Furthermore, the lightweight architecture of the model, with only 0.7 million parameters and 0.2 billion floating-point operations per second, makes it suitable for deployment in Internet of Things-based forest and industrial surveillance systems.
Open-circuit switch faults (OCSFs) in power semiconductor switches are caused by wire-bonding failures, gate driver malfunction, surge voltage/current, electromagnetic interference, and cosmic radiation. Under OCSFs, the signal deviations are not excessively large, but prolonged OCSFs risk cascading system failures. This paper presents a comprehensive analysis of deep neural network (DNN)-based architectures, such as long short-term memory (LSTM) networks and convolutional neural networks (CNNs), for diagnosing multi-class OCSFs in three-phase active front-end rectifiers (TP-AFRs). A novel multi-sensor time-series sequence (MTSS) dataset is acquired at 500 Hz, comprising 624 observations from 19 sensor signals for single-, double-, and triple-switch OCSFs. The class overlap in the MTSS dataset is visualized using t-SNE, and initial experiments with a support vector machine (SVM) yielded the highest test accuracy of 93% compared with k-nearest neighbour, artificial neural network, and decision tree classifiers. Further, our investigations revealed that an architecture with a two-layer CNN, a one-layer LSTM, and one fully connected layer achieves a competitive test accuracy of 95.03%, an improvement of 2.03% over the SVM classifier and 7.03% over the one-layer LSTM network. These findings demonstrate the potential of this approach for enhancing the reliability of TP-AFRs through the direct application of down-sampled raw electrical signals.
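Training such a CNN-LSTM on the 500 Hz MTSS stream presupposes segmenting the multi-sensor time series into fixed-length observations. A generic sliding-window helper conveys the preprocessing pattern; the window size and step below are hypothetical, not the paper's settings.

```python
def windows(seq, size, step):
    """Segment one sensor channel into (possibly overlapping) fixed-length
    windows, each becoming one observation for a sequence classifier."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]

# toy channel of 10 samples, 4-sample windows advancing by 3 samples
segments = windows(list(range(10)), size=4, step=3)
```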
The present paper attempts to investigate the effect of FDI on the economic growth of China and India. To take care of the issue of structural change in the economy, the time period of the study is taken to be 1993-2009. First of all, we built our modified growth model from a basic growth model. The factors included in the growth model were GDP, Human Capital, Labor Force, FDI and Gross Capital Formation, among which GDP was the dependent variable while the remaining four were independent variables. After running the OLS (Ordinary Least Squares) method of regression, we found that a 1% increase in FDI would result in a 0.07% increase in the GDP of China and a 0.02% increase in the GDP of India. We also found that China's growth is more affected by FDI than India's growth. The study also provides possible reasons behind China's strong FDI performance and the lessons India should learn from China for better utilization of FDI.
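The reported figures are elasticities from a log-log OLS regression: the slope on ln(FDI) gives the percentage change in GDP per 1% change in FDI. A minimal sketch on synthetic, noiseless data constructed to have elasticity 0.07 (the data is illustrative, not the study's):

```python
import math

def ols_slope(x, y):
    """Simple-regression OLS slope; on log-transformed data this slope
    is directly interpretable as an elasticity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# synthetic series built so that ln(GDP) = 1.0 + 0.07 * ln(FDI)
fdi = [10, 20, 40, 80, 160]
gdp = [math.exp(1.0 + 0.07 * math.log(f)) for f in fdi]
beta = ols_slope([math.log(f) for f in fdi], [math.log(g) for g in gdp])
```

With noiseless data the recovered slope equals the built-in elasticity, which is exactly the "1% FDI → 0.07% GDP" reading of the abstract's coefficient.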
This study explores the determinants of Foreign Direct Investment (FDI) inflows in Brazil, the Russian Federation, India and China, collectively known as the BRIC countries. A random-effects model is employed on a panel data set of annual data covering the 35 years from 1975 to 2009 to identify the determinants of FDI inflows. The empirical results show that market size, trade openness, labour cost, infrastructure facilities, and macroeconomic stability and growth prospects are potential determinants of FDI inflow in BRIC, whereas gross capital formation and labour force are insignificant, although macroeconomic stability and growth prospects have very little impact.
Introduction
Management of type 2 diabetes is based on achieving the glycaemic control goal with the help of drugs or insulin. Many markers are used to measure glycaemic control, such as glycated haemoglobin (HbA1c), random blood sugar and fructosamine levels. Although HbA1c is a good method for measuring glycaemic control, it has some limitations. Fructosamine is a less-studied marker that can be used to measure glycaemic control when HbA1c is questionable or unavailable. The aim of this study was to evaluate the use of fructosamine as an alternative marker of glycaemic control by comparing fructosamine levels with HbA1c in patients with type 2 diabetes.
Materials and Methods
This is a cross-sectional study involving 77 patients with type 2 diabetes from the central laboratory of Bowring and Lady Curzon Hospital, Shivajinagar, Bangalore. Data collected includes age, gender, body mass index, diabetes duration, fructosamine, HbA1c and fasting blood sugar (FBS) levels. Glycaemic control level is measured by HbA1c, fructosamine and FBS.
Results
Analysis of glycaemic control in 77 patients with type 2 diabetes showed a positive association between fructosamine and HbA1c levels (P < 0.001).
Conclusion
Fructosamine levels are significantly associated with HbA1c levels, so fructosamine can be used as a biomarker to assess glycaemic control in patients with type 2 diabetes, especially when HbA1c is unreliable or unavailable.
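The positive association reported in the Results is the kind measured by a Pearson correlation between the two markers; a minimal computation on hypothetical paired readings (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical paired readings: fructosamine (umol/L) vs. HbA1c (%)
fru = [250, 300, 350, 400, 450]
hba1c = [6.0, 6.8, 7.5, 8.4, 9.1]
r = pearson_r(fru, hba1c)   # near-linear pairs give r close to 1
```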
This paper presents a channel estimation method for an intelligent reflecting surface (IRS)-aided orthogonal time frequency space (OTFS) system in a dynamic scenario. Current channel estimation techniques for IRS-aided OTFS systems are built upon explicit channel model assumptions, which can constrain their adaptability in intricate environments. Furthermore, their reliance on pilot signals introduces significant pilot overhead in high-speed scenarios. To address these issues, we propose a dilated attention generative adversarial network (DAGAN) with a novel architecture for capturing long-range dependencies among data symbols separated in the delay-Doppler (DD) domain for channel estimation. The DAGAN also includes an attention block to extract essential features from data symbols for channel information generation. This mechanism is guided by least squares (LS) estimates of specific DD paths, which serve as additional information for the DAGAN. Experimental results illustrate that the DAGAN method achieves the lowest normalized mean square error (NMSE) with limited pilot overhead in comparison to other methods.
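The LS estimates that guide the DAGAN are, in their simplest per-pilot form, just a division of received by transmitted pilot symbols. A minimal noiseless sketch (the pilot values and flat channel gain are hypothetical, and a real DD-domain estimator works on delay-Doppler grid positions rather than a scalar channel):

```python
def ls_estimate(pilots_tx, pilots_rx):
    """Per-pilot least-squares channel estimate: h_k = y_k / x_k."""
    return [y / x for x, y in zip(pilots_tx, pilots_rx)]

h_true = 0.5 - 0.5j                  # hypothetical flat channel gain
tx = [1 + 0j, 0 + 1j, -1 + 0j]       # hypothetical pilot symbols
rx = [h_true * x for x in tx]        # noiseless received pilots
h_hat = ls_estimate(tx, rx)          # recovers h_true at every pilot
```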