Recent publications
The incorporation of artificial intelligence (AI) into medical data processing is increasingly prevalent owing to its diagnostic and analytical capabilities. Numerous deep learning models have been applied to medical data analysis, including the Temporal Convolutional Network (TCN), valued for its ability to abstract temporal patterns. However, the conventional TCN architecture may be suboptimal for modeling longer-range dependencies in EEG data, as it includes neither metric learning nor a data clustering mechanism. Therefore, an enhanced TCN model, the Metric Learning-based Temporal Convolutional Network for EEG signals (MLTCN-EEG), is devised to tackle the challenges of classifying seizure and non-seizure EEG signals. Specifically, metric learning is integrated into the TCN architecture to learn discriminative feature spaces, improving the extraction of complex patterns inherent in EEG data and boosting the model's classification performance: the specialized metric learning component enables the model to discern the subtle variations crucial for accurate seizure identification. To address the common challenge of unbalanced training data in deep neural network training, this study also explores and assesses two dataset-balancing methods: sub-sampling and the Deep Convolutional Generative Adversarial Network (DCGAN). The empirical results demonstrate that MLTCN-EEG trained on a DCGAN-balanced dataset outperforms existing techniques, showcasing its efficacy in distinguishing seizure events.
The proposed model achieves an accuracy of 99.15%, precision of 100%, recall of 98.35%, specificity of 100%, and a remarkable F1 score of 99.16% on the University of California Irvine (UCI) Seizure dataset. It also attained an accuracy of 82.66%, precision of 68.09%, recall of 67.95%, specificity of 88.13%, and F1 score of 68.02% on the Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) dataset. These results highlight the potential of MLTCN-EEG and DCGAN in advancing the classification of epileptic EEG signals for improved medical diagnosis and therapy.
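The metric-learning idea above can be illustrated with a toy triplet loss, the most common formulation for learning discriminative feature spaces. This is a minimal, framework-free sketch; the abstract does not specify the exact loss used, and the embeddings here are invented:

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: pull same-class embeddings (e.g. seizure/seizure)
    # together, push different-class embeddings at least `margin` apart.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy 3-D embeddings (hypothetical, for illustration only)
anchor   = [0.0, 0.0, 0.0]
positive = [0.1, 0.0, 0.0]   # same class, close -> small distance
negative = [2.0, 0.0, 0.0]   # other class, far  -> large distance

print(triplet_loss(anchor, positive, negative))  # 0.0: negative already margin-further away
```

Minimizing this loss over many triplets is what shapes the embedding space so that seizure and non-seizure windows become separable.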
Electrochromic (EC) smart windows are highly efficient in blocking solar irradiation. Nevertheless, automated monitoring and control systems for EC windows remain insufficient. This study outlined a framework for designing a control system that effectively regulated the voltage polarity of an EC smart window. Integrating Internet of Things (IoT) capabilities and environmental sensors enhanced the smart window control system, allowing automated monitoring and adjustment based on light intensity (below and above 30,000 lx), temperature (below and above 32 °C), and UV index (below 3 and above 7). The magnitude of the current supplied from the output load of the operational circuit module was directly correlated with the switching time of the electrochromic window. Moreover, voltage fluctuation observed during evaluation testing of the control system was ascribed to the reliability of the breadboard platform. This study also provided a functional design analysis for a Wi-Fi-enabled control system, presented insights into manual and automatic control modes under diverse environmental conditions, and improved user interaction through real-time data visualization on the IoT Blynk platform. Overall, this study presented a range of cost-effective approaches for managing dynamic EC windows using Wi-Fi and sensor-enabled technology.
The popularity of cloud computing (CC) has increased significantly in recent years owing to its cost-effectiveness and simplified resource allocation: over the past decade, many corporations and businesses have moved to the cloud to ensure accessibility, scalability, and transparency. The proposed research compares the accuracy and fault prediction of five machine learning algorithms: AdaBoostM1, Bagging, Decision Tree (J48), Deep Learning (Dl4jMLP), and Naive Bayes Tree (NB Tree). The results from secondary data analysis indicate that, for the Central Processing Unit (CPU)-Mem Multi classifier, the Decision Tree (J48) achieves the highest accuracy and the least fault prediction, with accuracy rates of 89.71% for the 80/20 split, 90.28% for the 70/30 split, and 92.82% for 10-fold cross-validation. For the Hard Disk Drive (HDD)-Mono classifier, the accuracy rates are 90.35% for 80/20, 92.35% for 70/30, and 90.49% for 10-fold cross-validation. The AdaBoostM1 classifier achieved the highest accuracy and least fault prediction for the HDD Multi classifier, with accuracy rates of 93.63% for 80/20, 90.09% for 70/30, and 88.92% for 10-fold cross-validation. Finally, the CPU-Mem Mono classifier has accuracy rates of 77.87% for 80/20, 77.01% for 70/30, and 77.06% for 10-fold cross-validation. Based on the primary data results, the Naive Bayes Tree (NB Tree) classifier has the highest accuracy with the least fault prediction: 97.05% for 80/20, 96.09% for 70/30, and 96.78% for 10-fold cross-validation. However, its runtime is comparatively poor at 1.01 seconds. On the other hand, the Decision Tree (J48) has the second-highest accuracy rates of 96.78%, 95.95%, and 96.78% for 80/20, 70/30, and 10-fold cross-validation, respectively, also with low fault prediction but a good runtime of 0.11 seconds.
The difference in accuracy and fault prediction between NB Tree and J48 is only 0.9%, but the difference in runtime is 0.9 seconds. Based on these results, we modified the Decision Tree (J48) algorithm. The modified method is proposed as it offers the highest accuracy with fewer fault-prediction errors: 97.05% accuracy for the 80/20 split, 96.42% for the 70/30 split, and 97.07% for 10-fold cross-validation.
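The accuracy figures above come from hold-out splits and 10-fold cross-validation. That evaluation protocol can be sketched in plain Python; the majority-class "classifier" below is a hypothetical stand-in for J48 or NB Tree, and the toy data are invented:

```python
def kfold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds (early folds absorb any remainder)
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(data, labels, train_and_predict, k=10):
    # Average accuracy over k rotations of train/test folds
    folds = kfold_indices(len(data), k)
    accs = []
    for test_idx in folds:
        train_idx = [i for i in range(len(data)) if i not in test_idx]
        preds = train_and_predict([data[i] for i in train_idx],
                                  [labels[i] for i in train_idx],
                                  [data[i] for i in test_idx])
        correct = sum(p == labels[i] for p, i in zip(preds, test_idx))
        accs.append(correct / len(test_idx))
    return sum(accs) / k

def majority(train_x, train_y, test_x):
    # Trivial baseline: always predict the most frequent training label
    top = max(set(train_y), key=train_y.count)
    return [top] * len(test_x)

data = list(range(20))
labels = [0] * 12 + [1] * 8
print(round(cross_val_accuracy(data, labels, majority, k=10), 2))
```

Swapping `majority` for a real learner (and shuffling before folding) gives the standard k-fold protocol the paper reports.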
In the era of digital platforms and abundant data, food recommender systems have become essential tools for guiding individuals toward preferred and suitable meals. The wide variety of available food options now presents a challenge for consumers seeking personalized meals and relevant recommendations. By dynamically weighting evaluations based on user behaviour and item characteristics, the proposed system aims to increase the variety and precision of dietary recommendations. Furthermore, the system will implement continuous learning mechanisms to respond to fluctuations in user preferences over time, ensuring sustained high levels of user satisfaction. The primary objectives of this paper are therefore to design and implement the recommender system, to test and evaluate the hybrid recommender system, and to explore various recommendation techniques. In addition, this paper discusses the combination of several algorithms: collaborative filtering, content-based filtering, and hybrid approaches. The expected outcome of this research is a robust recommender system that provides accurate and relevant food recommendations tailored to individual preferences. Finally, a system with a graphical user interface will be implemented so that end-users and administrators can visualize results for better insight into decision-making.
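A common way to realize the hybrid approach mentioned above is a weighted blend of collaborative and content-based scores. The sketch below assumes such a linear blend; the item names, scores, and `alpha` weight are invented for illustration, not taken from the paper:

```python
def hybrid_score(cf_score, content_score, alpha=0.6):
    # Weighted hybrid: alpha blends collaborative and content-based signals
    return alpha * cf_score + (1 - alpha) * content_score

def recommend(user_cf, user_content, top_n=2, alpha=0.6):
    # Rank all items by blended score and return the top-N item ids
    scores = {item: hybrid_score(user_cf.get(item, 0.0),
                                 user_content.get(item, 0.0), alpha)
              for item in set(user_cf) | set(user_content)}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical per-dish scores for one user
cf      = {"laksa": 0.9, "sushi": 0.4, "salad": 0.2}   # from similar users' ratings
content = {"laksa": 0.3, "sushi": 0.8, "salad": 0.9}   # from the user's dietary profile

print(recommend(cf, content))  # ['laksa', 'sushi']
```

Tuning `alpha` online (e.g. from click feedback) is one simple way to implement the continuous-learning behaviour the abstract describes.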
The performance of a novel transferred-electron device structure aimed at sustaining high-frequency signals in the terahertz (THz) range is investigated. The device uses a highly doped δ-layer to split the n-doped device into two distinct regions, forming a doped-δ-doped configuration. The first region accelerates electrons toward the δ-layer, while the second region exploits negative differential resistance to modulate electron velocities and sustain oscillations. An ensemble self-consistent Monte Carlo model is employed to analyze electron dynamics and THz signal generation in this structure under a constant bias. The design demonstrates superior performance, achieving a fundamental operating frequency of 427 GHz in a 600 nm InP device, nearly a 50% increase over the conventional notch-doped design, while maintaining the current harmonic amplitude. This design achieves higher frequencies without reducing device length or increasing doping density, effectively addressing the trade-off imposed by the Kroemer criterion. The study also examines the effects of varying doping densities and region lengths on device performance, highlighting the importance of optimizing these parameters to sustain current oscillations and efficiently generate THz signals. This design offers a promising solution for a compact and efficient THz source.
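The Kroemer-criterion trade-off mentioned above refers to the textbook rule of thumb that the doping-length product n0·L of a transferred-electron device must exceed roughly 10¹² cm⁻² for domain oscillations to be sustained. A quick check of that product (the doping density below is a hypothetical value, not a figure from the paper):

```python
def doping_length_product(n0_cm3, length_nm):
    # n0 * L in cm^-2; 1 nm = 1e-7 cm
    return n0_cm3 * (length_nm * 1e-7)

L  = 600    # device length from the abstract, in nm
n0 = 2e16   # hypothetical doping density, cm^-3 (assumption, not from the paper)

product = doping_length_product(n0, L)
# Conventional threshold ~1e12 cm^-2 (material-dependent; quoted for GaAs-class devices)
print(f"{product:.2e} cm^-2, oscillation criterion met: {product > 1e12}")
```

The point of the doped-δ-doped design is precisely to escape this coupling, raising frequency without shrinking L or raising n0.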
This research introduces an innovative design for a metamaterial-based compact multi-band biosensor aimed at early-stage cervical cancer detection. The device operates within the terahertz (THz) frequency range, specifically from zero to six THz. The proposed sensor architecture features a metamaterial layer composed of a patterned aluminum structure deposited on a polyimide substrate. The primary design objective is to optimize the geometry parameters to achieve near-perfect absorption of electromagnetic waves across the entire operating bandwidth. The design process utilizes full-wave electromagnetic simulation tools. The paper details all intermediate steps in the sensor’s topology development, guided by an investigation of the absorption characteristics of successive architectural variations. It also analyzes the effects of the substrate and resonator material. The suitability of the proposed sensor for early-stage cancer diagnosis is demonstrated using a microwave imaging (MWI) system that incorporates the device. Extensive simulation studies confirm the sensor’s capability to distinguish between healthy and cancerous cervical tissue. For further validation, comprehensive benchmarking is conducted against numerous state-of-the-art sensor designs reported in recent literature. These comparative studies indicate that the proposed sensor offers superior performance in terms of absorbance levels and the width of the operating bandwidth, both of which enhance the sensitivity of cancer detection.
Viruses are submicroscopic agents that infect other lifeforms and use their hosts' cells to replicate. Despite having among the simplest genetic structures of all living things, viruses are highly adaptable, resilient, and capable of causing severe complications in their hosts. Due to their multiple transmission pathways, high contagion rate, and lethality, viruses pose the biggest biological threat that both animal and plant species face. It is often challenging to promptly detect a virus in a host and accurately determine its type using manual examination techniques. However, computer-based automatic diagnosis methods, especially those using Transmission Electron Microscopy (TEM) images, have proven effective for rapid virus identification. Using TEM images from a recent dataset, this article proposes a deep learning-based classification model to identify the virus type in those images. The methodology includes two complementary image processing techniques to reduce the noise present in raw micrographs and a Convolutional Neural Network (CNN) model for classification. Experimental results show that the model differentiates among 14 types of viruses with a maximum classification accuracy and F1-score of 97.44%, which asserts the effectiveness and reliability of the proposed method. Implementing this scheme would provide fast and dependable virus identification to complement thorough diagnostic procedures.
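One widely used noise-reduction step for raw micrographs is median filtering; the sketch below shows a 3×3 version in plain Python. This is illustrative only: the paper's two specific preprocessing techniques are not named here, and the pixel values are invented:

```python
import statistics

def median_filter(img):
    # 3x3 median filter: replaces each interior pixel with the median of its
    # neighbourhood, suppressing salt-and-pepper noise while keeping edges
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

# A flat grey patch with one bright noise pixel in the centre
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10: the outlier is replaced by the local median
```

In practice this would run over full-resolution TEM images (e.g. via scipy or OpenCV) before the patches are fed to the CNN.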
Purpose
This study evaluated a programme by the CAP Youth Empowerment Institute (CAPYEI) that uses the Basic Employability Skills Training (BEST) model, to contribute evidence and generate lessons on the types of skills needed to enhance women's economic empowerment. The purpose of the study is to generate evidence of what works in women's skill acquisition and employability in Kenya.
Design/methodology/approach
This study adopted a mixed research design incorporating both quantitative and qualitative approaches to conduct an impact evaluation of the CAPYEI training programme on the employability and entrepreneurship of women and girls in Kenya. The design considers two groups, treatment and control, allowing a clear comparison of outcomes between those who received the training (treatment group) and those who did not (control group). Project evaluation data were collected from both primary and secondary sources. Because the study was conducted after the intervention and lacked baseline survey data, an ex post baseline was computed using a retrospective approach; in the absence of a true baseline, the questionnaire was tailored to allow beneficiary recall. A key design consideration of the impact evaluation was the identification of a valid control group that could generate a suitable counterfactual outcome.
Findings
The results indicate positive self-evaluation on most of the selected soft skills. For instance, over 80% of both beneficiaries and non-beneficiaries indicated that they possessed communication, teamwork, interpersonal, decision-making, prioritization, assertiveness, and negotiation skills, whereas 58% of beneficiaries and 63% of non-beneficiaries indicated that they possessed information and communication technology (ICT) skills. The results indicate that skills development improves the chances of employment among the target group, especially women, and that addressing gender inequality requires targeted interventions. Such interventions could aim to ensure that women and girls are empowered to compete favourably with men and boys in the labour market.
Research limitations/implications
This study was an evaluative study of the impact of an intervention in a single case. While the findings are relevant to policy and practice, they cannot be generalized to a broader populace. The absence of baseline data rendered the use of comparative data impossible. Data generated through self-reported assessment of intervention impacts are prone to responder biases, which may raise questions about the validity of the findings.
Practical implications
This study recommends the integration of transferable skills training in teaching and training institutions to enhance the competitiveness, employability, and entrepreneurship prospects of graduates in the labour market. The study is significant in informing policy direction in Kenya.
Originality/value
This study evaluated a model of integrating transferable skills into a young women's training programme and assessed its impacts with a view to documenting what works for women's employability. The case is unique in its country-specific context.
Mobile phishing has emerged as one of the most severe cybercrime threats; thus, research must examine the factors affecting people's likelihood of becoming instant messaging phishing targets. In this study, we draw on cyber-routine activity theory (Cyber-RAT) and the heuristic-systematic model (HSM) to predict Gen-Zers' phishing susceptibility. Based on online survey data (n = 361), the proposed research model was validated via structural equation modeling conducted with SmartPLS 4. Findings indicate that engaging in risky online behavior (social media: instant messaging, vocational, and leisure activities) increases Gen-Zers' exposure to phishers, raising their likelihood of becoming instant messaging phishing targets. Phishing messages with a desirable or relevant topic (high message involvement) significantly impact Gen-Zers' phishing susceptibility, as do phishing messages with persuasive cues. While knowledge of the phishing domain does not directly influence Gen-Zers' susceptibility to phishing attacks, it significantly motivates them to adopt effective online security management practices on social instant messaging platforms. This paper discusses how these findings implicate online users and can inform agencies promoting the knowledge needed to understand and detect phishing attacks and avoid victimization.
Several extended Burr-type X distributions have been formed in the past decade. These distributions are widely used in modeling lifetime data as their hazard functions can fit various shapes, such as bathtub, decreasing, and increasing. However, certain extended Burr-type X distributions may not adequately fit the unimodal hazard function. Thus, this paper proposes a new extended distribution with greater flexibility to solve this deficiency: exponentiated gamma Burr-type X distribution. We provide the expressions for the probability density and cumulative distribution functions of the proposed distribution, along with its statistical properties, such as limit behavior, quantile function, moment function, moment-generating function, Renyi entropy, and order statistics. To estimate the model parameters, we employ the maximum likelihood estimation method, and we assess its performance through a simulation study with different sample sizes and parameter values. Finally, to demonstrate the application of this new distribution, we apply it to a real dataset concerning the failure times of aircraft windshields. The results indicate that the new distribution provides a superior fit compared to its submodels and the extended Burr-type X distributions. Moreover, it proves to be highly competitive and can serve as an alternative to certain nonnested models. In summary, the new distribution is highly flexible, capable of modeling a variety of hazard-function shapes, including decreasing, increasing, bathtub, and unimodal patterns.
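For reference, the standard two-parameter Burr-type X (generalized Rayleigh) CDF that the extended families above build on is, in the usual form from the literature (the proposed exponentiated gamma variant's own density is not reproduced here),

```latex
F_{\mathrm{BX}}(x;\theta,\lambda)=\left(1-e^{-(\lambda x)^{2}}\right)^{\theta},
\qquad x>0,\ \theta>0,\ \lambda>0,
```

with the hazard function obtained as $h(x)=f(x)/\bigl(1-F(x)\bigr)$; the added shape parameters of the exponentiated gamma extension are what let $h(x)$ take decreasing, increasing, bathtub, and unimodal forms.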
With the continued development of information technology and increased global cultural exchange, translation has gained significant attention. Traditional manual translation relies heavily on dictionaries or personal experience, translating word by word. While this method ensures high translation quality, it is often too slow to meet the demands of today's fast-paced environment. Computer-assisted translation (CAT) addresses slow translation speed; however, the quality of CAT output still requires rigorous evaluation. This study aims to answer the following questions: (1) How do CAT systems that use automated programming fare compared with conventional human translation when translating English vocabulary? (2) How can CAT systems be improved to handle difficult English words, specialised terminology, and semantic subtleties? The working premise is that CAT systems using automated programming techniques will outperform traditional methods in translation accuracy. English vocabulary plays a crucial role in translation, as words can have different meanings depending on context. CAT systems improve their translation accuracy by utilising specific automated programs and building a translation corpus through translation memory technology. This study compares the accuracy of English vocabulary translations produced by CAT based on automatic programming with those produced by traditional manual translation. Experimental results demonstrate that CAT based on automatic programming is 8% more accurate than traditional manual translation when dealing with complex English vocabulary sentences, professional jargon, English acronyms, and semantic nuances. Consequently, compared to conventional human translation, CAT can enhance the accuracy of English vocabulary translation, making it a valuable tool in the translation industry.
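Translation memory, mentioned above, matches a new source segment against stored segment pairs and reuses the closest stored translation. A minimal sketch with Python's standard `difflib` (the segment pairs and similarity threshold are invented examples, not from the study):

```python
import difflib

# Toy translation memory: previously translated segment pairs (invented)
memory = {
    "The contract is signed.": "Le contrat est signé.",
    "The invoice is overdue.": "La facture est en retard.",
}

def tm_lookup(segment, threshold=0.6):
    # Return the stored translation of the most similar source segment,
    # or None if no stored segment is similar enough (a "fuzzy match")
    best, best_ratio = None, 0.0
    for src, tgt in memory.items():
        ratio = difflib.SequenceMatcher(None, segment.lower(), src.lower()).ratio()
        if ratio > best_ratio:
            best, best_ratio = tgt, ratio
    return best if best_ratio >= threshold else None

print(tm_lookup("The contract was signed."))  # near-match reuses the stored translation
```

Real CAT tools add segment alignment, terminology databases, and machine-translation fallback on top of this lookup, but the fuzzy-match core is the same.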
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and issues related to signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) serves as an automated alternative, employing permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between “Amplified” and “Non-Amplified” regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze compared to hematoxylin and eosin (H&E)-stained slides. Our proposed approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for regional identification and localization. Subsequently, pseudo-color maps representing each class are overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. The robustness of the model was further evaluated through k-fold cross-validation, yielding an average performance accuracy of 98%, with metrics reported alongside 95% confidence intervals to ensure statistical reliability. This method shows significant promise for clinical applications, particularly in assessing HER2 expression status in HER2-SISH histopathology images. 
It provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thus enhancing diagnostic outcomes for breast cancer treatment.
This paper presents the detailed design configuration and investigation of a small-scale dual-band metamaterial absorber (MTMA) for solid and liquid sensing applications. The MTMA unit cell measures 10 × 10 × 1.57 mm³ and is built on an affordable FR-4 substrate. The absorber exhibits dual absorption peaks at 3.470 GHz in the S-band and 7.219 GHz in the C-band. Both absorption characteristics have been validated through comprehensive simulation and experimental procedures: the dual-band absorption rate exceeded 99% in simulations, and experimental validation showed an absorption rate above 98%. For sensing applications, various solid materials, including different Rogers substrates (RT 5880, RT 5870, RT 4003, and RT 4835), and liquids such as sunflower and crown oil were utilized. Our findings indicate that the proposed MTMA achieves a maximum Q-factor of 191 and a sensitivity of up to 2.5, higher than in previous studies, for both solid and liquid sensing. The simulation and experimental results indicate that the suggested MTMA can be effectively used in various sensing applications in the medical and communications industries.
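The reported Q-factor of 191 follows the usual resonator definition Q = f_res / Δf, the resonance frequency divided by the −3 dB bandwidth. The bandwidth below is a hypothetical value back-computed to be consistent with the reported Q, not a figure from the paper:

```python
def q_factor(f_res_ghz, fwhm_ghz):
    # Quality factor: centre frequency over -3 dB bandwidth (same units cancel)
    return f_res_ghz / fwhm_ghz

f_res = 7.219    # C-band absorption peak from the abstract, GHz
fwhm  = 0.0378   # hypothetical -3 dB bandwidth, GHz (assumption, chosen to match Q ~ 191)

print(round(q_factor(f_res, fwhm), 1))
```

A narrower bandwidth means a sharper peak, which is why a high Q directly improves the frequency resolution, and hence the sensitivity, of resonance-shift sensing.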
This study proposes a systematic scheme for the optimization and fabrication of a sustainable, cost-effective electrochemical sensor made from Bi-CdFe2O4 (BCDF) nanoparticles and graphite powder. The structure of the synthesized BCDF materials was examined by several spectral techniques, viz. P-XRD, SEM-EDX, TEM, XPS, FT-IR, and DRS. The modified sensor electrode offers significant electrochemical properties that improve selectivity and sensitivity, as measured by Cyclic Voltammetry (CV) and Electrochemical Impedance Spectroscopy (EIS) plots. We demonstrate a high-purity BCDF-graphite paste electrode for sensing Paracetamol and lead (Pb²⁺) ions in 0.1 M KCl. The excellent sensing activity towards lead ions and Paracetamol is confirmed by redox potential peaks at scan rates of 1–5 V/s, with maximum responses at −0.61 V and 0.69 V, respectively. BCDF also shows superior photo-degradation of Rose Bengal (RB) dye (98.2%) compared with the host CDF (81.6%). Kinetic analysis reveals that this process follows first-order kinetics, with measured rate constants of 18.1 × 10⁻³ min⁻¹ for the host and 39.2 × 10⁻³ min⁻¹ for BCDF. Thus, the synthesized BCDF electrode provides a new route toward specific nano-sensors for detecting toxic metals.
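The first-order kinetics claim above means ln(C₀/Cₜ) = k·t. Using the reported BCDF rate constant, one can back-estimate the irradiation time implied by 98.2% dye removal (illustrative only; the actual experimental duration is not given in the abstract):

```python
import math

def first_order_k(c0, ct, t_min):
    # Pseudo-first-order rate constant from ln(C0/Ct) = k * t
    return math.log(c0 / ct) / t_min

k_bcdf = 39.2e-3                          # reported BCDF rate constant, min^-1
t = math.log(1 / (1 - 0.982)) / k_bcdf    # time implied by 98.2% removal
print(round(t), "min")
```

The same relation, fitted as a straight line of ln(C₀/Cₜ) against t, is how the two rate constants in the abstract would have been measured.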
This study presents the “ESP32 Dataset,” a collection of radio frequency (RF) data intended for human activity detection. The dataset comprises 10 activities carried out by 8 volunteers in three different indoor floor plan experiment setups. Line-of-sight (LOS) scenarios are represented by the first two experiment setups, and non-line-of-sight (NLOS) scenarios are simulated in the third. For every activity, the volunteers performed 20 trials, so there are 1,600 recorded trials per experiment setup (8 volunteers × 10 activities × 20 trials). To obtain the Received Signal Strength Indicator (RSSI) and Channel State Information (CSI) values from the recorded transmissions, a D-Link AX3000 router and an ESP32 microcontroller were used as the transmitter (Tx) and receiver (Rx) during data collection. This collection is a valuable resource for academics and practitioners in human activity detection, as it offers rich and diversified RF data across a wide range of experiment setups and activities. In contrast to datasets collected with other hardware configurations, this dataset records one RSSI value and fifty-two CSI subcarriers using the ESP-CSI Tool. The number of RSSI and CSI signals, specific to the ESP32 hardware, allows for the exploration of resource-efficient activity detection algorithms, which is crucial for Internet of Things (IoT) applications where low-power and cost-effective solutions are required. The dataset is particularly valuable because it reflects the constraints and capabilities of the widely used ESP32 microcontroller, making it highly relevant for developing and testing new algorithms tailored to IoT environments. Its availability enables the development and evaluation of activity detection algorithms and methodologies, enhancing the potential for improved experimental setups in IoT applications.
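The trial count above (8 × 10 × 20 = 1,600 per setup) can be sketched as an index over the dataset grid; the file-naming pattern below is hypothetical, not the dataset's actual layout:

```python
from itertools import product

# Hypothetical index over one experiment setup's trial grid, matching the
# described structure: 8 volunteers x 10 activities x 20 trials
volunteers = range(1, 9)
activities = range(1, 11)
trials     = range(1, 21)

records = [f"setup1/v{v:02d}_a{a:02d}_t{t:02d}.csv"
           for v, a, t in product(volunteers, activities, trials)]

print(len(records))  # 1600 trials per setup, matching the paper's count
```

Iterating the same grid over the three setups (two LOS, one NLOS) yields the full 4,800-trial collection, and each record would carry one RSSI value plus 52 CSI subcarrier readings per packet.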
This study investigates the effect of busy independent directors on earnings management in Chinese initial public offering (IPO) companies from 2010 to 2020. Using various measures of busy independent directors, the results indicate that directors in IPO companies play a significant role in supervision and governance, thereby significantly mitigating earnings management. Busy independent directors, serving on multiple boards, may face time constraints, but, intriguingly, companies with such directors show a lower inclination for earnings manipulation. This suggests that despite potential time pressures, the independent oversight provided by these directors serves as a deterrent to financial reporting manipulation in the context of Chinese IPOs, underscoring the importance of robust corporate governance structures for transparency and reliability. Additionally, this study identifies specific conditions that amplify the constraining effect and finds that the effect of busy independent directors on earnings management in IPO companies is more significant in companies with high compensation for independent directors, independent directors serving off-site, and companies that receive higher media attention. Mechanism testing indicated that busy independent directors mitigate IPO firms’ earnings management by engaging reputable audit firms and enhancing the effectiveness of internal controls. This nuanced analysis offers valuable insights into how busy, independent directors actively contribute to alleviating the risks associated with earnings management during the IPO process. This study enhances our understanding of the governance benefits of active independent directors in multiple roles and offers novel perspectives on how stakeholders can influence and constrain earnings management in IPO companies.
Address: Kuala Lumpur, Malaysia