Epileptic seizure detection distinguishes between epileptic and non-epileptic signals and is an important step that can aid doctors in diagnosing and treating epileptic seizures. In this paper, we studied existing epileptic seizure detection methods in terms of the challenges and processes developed based on electroencephalograph (EEG) signals. To identify research deficiencies and provide a feasible solution, we surveyed the existing techniques at each phase, including signal acquisition, pre-processing, feature extraction, and classification. Most previous and current research efforts have used traditional features and decomposition techniques. Therefore, in this paper, we introduce an enhanced and efficient epileptic seizure detection technique using EEG signals, for which we also developed a mobile application for monitoring the classification of EEG signals. The application triggers notifications to all associated users and sends a visual notification should an EEG signal be classified as epileptic. In this research, we used publicly available EEG data from the University of Bonn. Our proposed method achieved an average accuracy of 98% by utilizing different machine-learning algorithms for classification, and it outperformed recently published studies. Though there have been other mobile applications for epileptic seizure detection, they have been based on motion and fall detection, as opposed to ours, which was developed based on EEG classification. Our proposed method will have an impact in the medical field, particularly in epileptic seizure monitoring, as well as in Human–Computer Interaction, particularly in Brain–Computer Interface (BCI) applications.
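The classification stage of such a pipeline can be sketched as follows. This is a minimal illustration, not the paper's method: the Bonn EEG data are replaced by synthetic signals, and the simple time-domain statistics used as features are assumptions made for the sketch, not the paper's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def extract_features(signal):
    # Illustrative time-domain statistics; the paper's actual
    # feature set is not specified here.
    return [signal.mean(), signal.std(),
            np.abs(np.diff(signal)).mean(),
            signal.max() - signal.min()]

# Synthetic stand-in for the Bonn EEG dataset: "epileptic" signals
# are given extra high-amplitude oscillatory activity.
n, length = 200, 512
normal = rng.normal(0, 1, (n, length))
epileptic = (rng.normal(0, 1, (n, length))
             + 4 * np.sin(np.linspace(0, 40 * np.pi, length))
             * rng.random((n, 1)))

X = np.array([extract_features(s) for s in np.vstack([normal, epileptic])])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2f}")
```

In a real system, the same fit/predict structure would be fed with features extracted from the recorded EEG epochs, and the predicted label would drive the mobile application's notification logic.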
This study sought to examine demographic, treatment-related, and diagnosis-related correlates of substance use disorder (SUD)-related perceived discrimination among patients receiving methadone maintenance treatment (MMT). Participants were 164 patients at nonprofit, low-barrier-to-treatment-access MMT programs. Participants completed measures of demographics, diagnosis-related characteristics (Brief Symptom Inventory (BSI-18) and Depressive Experiences Questionnaire (DEQ)), and treatment-related characteristics. Perceived discrimination was measured on a seven-point Likert-type scale ranging from 1 ("Not at all") to 7 ("Extremely") in response to the item: "I often feel discriminated against because of my substance abuse." Given the variable's distribution, a median split was used to categorize participants into "high" and "low" discrimination groups. Correlates of high and low discrimination were analyzed with bivariate and logistic regression models. Ninety-four participants (57%) reported high SUD-related perceived discrimination. Bivariate analyses identified six statistically significant correlates of SUD-related perceived discrimination (P < .05): age, race, age of onset of opioid use disorder, BSI-18 Depression, DEQ Dependency, and DEQ Self-Criticism. In the final logistic regression model, those with high (versus low) SUD-related perceived discrimination were more likely to report depressive symptoms and be self-critical. Patients in MMT with high compared to low SUD-related perceived discrimination may be more likely to report being depressed and self-critical.
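The analysis described above (a median split on the Likert item, followed by logistic regression on the correlates) can be sketched as below. The data here are synthetic stand-ins generated for illustration; the variable names and effect sizes are assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical stand-in data: a 1-7 Likert rating of perceived
# discrimination plus two illustrative correlates.
n = 164
depression = rng.normal(0, 1, n)
self_criticism = rng.normal(0, 1, n)
latent = 0.8 * depression + 0.8 * self_criticism + rng.normal(0, 1, n)
likert = np.clip(np.round(4 + 1.5 * latent), 1, 7)

# Median split into "high" vs "low" perceived-discrimination groups.
high = (likert > np.median(likert)).astype(int)

# Logistic regression of group membership on the correlates.
X = np.column_stack([depression, self_criticism])
model = LogisticRegression().fit(X, high)
print("odds ratios:", np.exp(model.coef_).round(2))
```

Exponentiating the coefficients gives odds ratios, the usual effect-size reported for such models.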
The edTPA is an educative performance assessment designed to assess teacher readiness. It has gained momentum across the country; yet it has often been met with resistance from educators in various roles. This study addressed a gap in the edTPA literature by exploring new teachers' perceptions of the assessment as an educative and efficacious tool. The researchers explored: 1) What are novice teachers' levels of self-efficacy regarding readiness to teach, as measured by the edTPA Teacher Survey? and 2) How do novice teachers perceive the edTPA process as an influence on their professional practices? A major outcome revealed that new teachers use mastery experiences to build efficacy and hone their craft in spite of their edTPA experience. Recommendations include re-tooling the edTPA process with pre-service candidates to ensure meaningful long-term value, and prompt in-service development for new teachers.
Graphene quantum dots (GQDs) are carbon-based, zero-dimensional nanomaterials and unique due to their astonishing optical, electronic, chemical, and biological properties. Chemical, photochemical, and biochemical properties of GQDs are intensely being explored for bioimaging, biosensing, and drug delivery. The synthesis of GQDs by top-down and bottom-up approaches, their chemical functionalization, bandgap engineering, and biomedical applications are reviewed here. Current challenges and future perspectives of GQDs are also presented.
Cancer develops when a single cell or a group of cells grows and spreads uncontrollably. Histopathology images are used in cancer diagnosis since they show tissue and cell structures under a microscope. Knowledge-based and deep learning-based computer-aided detection is an ongoing research field in cancer diagnosis using histopathology images. Feature extraction is vital in both approaches since the feature set is fed to a classifier and determines the performance. This paper evaluates three feature extraction methods and their performance in breast cancer diagnosis. Features are extracted by (1) a Convolutional Neural Network (CNN), (2) the transfer-learning architecture VGG16, and (3) a knowledge-based system. The feature sets are tested with seven classifiers, including a Neural Network (64 units), Random Forest, Multilayer Perceptron, Decision Tree, Support Vector Machine, K-Nearest Neighbors, and a Narrow Neural Network (10 units), on the BreakHis 400× image dataset. The CNN features achieved accuracies of up to 85% with the Neural Network and Random Forest, the VGG16 features up to 86% with the Neural Network, and the knowledge-based features up to 98% with the Neural Network, Random Forest, and Multilayer Perceptron classifiers.
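The evaluation protocol of testing one feature set against several classifiers can be sketched with scikit-learn. The sketch below substitutes the bundled digits dataset for BreakHis 400× (which requires a separate download) and a subset of the listed classifiers; it illustrates the comparison loop, not the paper's feature extractors or results.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# load_digits stands in for the BreakHis 400x histopathology images.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# A subset of the classifier families named in the abstract.
classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Support Vector Machine": SVC(),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
}

# Fit each classifier on the same feature set and report accuracy.
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

Swapping in CNN, VGG16, or hand-crafted feature matrices for `X` reproduces the paper's three-way comparison structure.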
Background and objectives: Minimal research has examined body image dissatisfaction (BID) among patients receiving methadone maintenance treatment (MMT). We tested associations between BID and MMT quality indicators (psychological distress, mental and physical health-related quality of life [HRQoL]) and whether these associations varied by gender. Methods: One hundred and sixty-four participants (n = 164) in MMT completed self-report measures of body mass index (BMI), BID, and MMT quality indicators. General linear models tested if BID was associated with MMT quality indicators. Results: Patients were primarily non-Hispanic White (56%) men (59%) with an average BMI in the overweight range. Approximately 30% of the sample had moderate or marked BID. Women and patients with a BMI in the obese range reported higher BID than men and patients with normal weight, respectively. BID was associated with higher psychological distress, lower physical HRQoL, and was unrelated to mental HRQoL. However, there was a significant interaction in which the association between BID and lower mental HRQoL was stronger for men than women. Discussion and conclusions: Moderate or marked BID is present for about three in 10 patients. These data also suggest that BID is tied to important MMT quality indicators, and that these associations can vary by gender. The long-term course of MMT may allow for assessing and addressing novel factors influencing MMT outcomes, including BID. Scientific significance: This is one of the first studies to examine BID among MMT patients, and it highlights MMT subgroups most at risk for BID and reduced MMT quality indicators due to BID.
This paper discusses concerns with specific approaches to identifying and eliminating gastrointestinal (GI) pathogens, as well as to detoxifying toxic metals, that may be misleading and harmful to a patient's health. Non-scientific methods claiming to improve GI microbial balance and mineral nutritional status persist in the nutritional and natural-medicine market, and unfortunately many are actively promoted through specific products and protocols marketed by nutritional supplement companies that should know better. The potential toxicity and mucosal damage of long-term use of aggressive laxative herbs such as Cascara sagrada, rhubarb, and/or Senna, as well as potential adverse events from ingredients containing fulvic and/or humic acids, are discussed.
The Quality-of-Service (QoS) provision in machine learning (ML) is affected by low accuracy, noise, random error, and weak generalization. The Parallel Turing Integration Paradigm (PTIP) is introduced as a solution to low accuracy and weak generalization. A logical table (LT) is part of the PTIP and is used to store datasets. The PTIP has elements that enhance classifier learning, enhance 3-D cube logic for security provision, and balance the engineering process of paradigms. The PTIP includes a probability weightage function for adding and removing algorithms during the training phase. Additionally, it uses local and global error functions to limit overconfidence and underconfidence in learning processes. By utilizing the local gain (LG) and global gain (GG), the optimization of the model’s constituent parts is validated. PTIP validation is further ensured by blending the sub-algorithms with a new dataset in a foretelling and realistic setting. A mathematical modeling technique is used to ascertain the efficacy of the proposed PTIP. The testing results show that the proposed PTIP obtains a lower relative accuracy of 38.76% with error-bounds reflection. The lower relative accuracy with low GG is considered good. The PTIP also obtains 70.5% relative accuracy with high GG, which is considered acceptable. Moreover, the PTIP achieves a higher accuracy of 99.91% with a 100% fitness factor. Finally, the proposed PTIP is compared with cutting-edge, well-established models and algorithms on different state-of-the-art parameters (e.g., relative accuracy, accuracy with fitness factor, fitness process, error reduction, and generalization measurement). The results confirm that the proposed PTIP demonstrates better results than contending models and algorithms.
This paper proposes a novel mathematical theory of adaptation to the convexity of loss functions, based on the definition of the condense-discrete convexity (CDC) method. The developed theory is considered to be of immense value in stochastic settings and is used to develop the well-known stochastic gradient descent (SGD) method. The change of convexity definition impacts the exploration of the learning-rate scheduler used in the SGD method and therefore impacts the convergence rate of the solution, which is used for measuring the effectiveness of deep networks. In our methodology, the CDC convexity method and the learning rate are directly related to each other through the difference operator. In addition, we incorporate the developed theory of adaptation with trigonometric simplex (TS) designs to explore different learning-rate schedules for the weight and bias parameters within the network. Experiments confirm that by using the new definition of convexity to explore learning-rate schedules, the optimization is more effective in practice and has a strong effect on the training of the deep neural network.
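The idea of separate learning-rate schedules for the weight and bias parameters can be illustrated on a toy objective. This sketch uses ordinary decaying schedules chosen for illustration; the CDC-based schedule and the TS designs from the paper are not reproduced.

```python
# Toy objective f(w, b) = (w - 3)^2 + (b + 1)^2, minimized by SGD
# with separate, hypothetical learning-rate schedules for the weight
# and the bias, echoing the per-parameter-schedule idea.
w, b = 0.0, 0.0
for t in range(1, 201):
    gw, gb = 2 * (w - 3), 2 * (b + 1)   # gradients of f
    lr_w = 0.5 / (1 + 0.05 * t)         # decaying schedule for the weight
    lr_b = 0.3 / (1 + 0.02 * t)         # a different schedule for the bias
    w -= lr_w * gw
    b -= lr_b * gb
print(round(w, 3), round(b, 3))   # approaches the minimizer (3, -1)
```

How quickly each parameter reaches its optimum is governed entirely by its schedule, which is the quantity the paper's convexity-based analysis targets.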
Carbon nanotubes (CNTs) have fascinating applications in flexible electronics, biosensors, and energy storage devices, and are classified as metallic or semiconducting based on their chirality. Semiconducting CNTs have been touted as a new material for building blocks in electronic devices, owing to their band gap resembling that of silicon. However, CNTs must be sorted into metallic and semiconducting fractions for such applications. Previously, gel chromatography, ultracentrifugation, size-exclusion chromatography, and phage-display libraries were utilized for sorting CNTs. Nevertheless, these techniques are either expensive or have poor efficiency. In this study, we use a novel technique based on a library of nine tripeptides with glycine as the central residue to study the effect of flanking residues on the large-scale separation of CNTs. Through molecular dynamics, we found that tripeptide combinations with threonine as one of the flanking residues have a high affinity for metallic CNTs, whereas those whose flanking residues have uncharged or negatively charged polar groups show selectivity towards semiconducting CNTs. Furthermore, interfacial water molecules and the ability of the tripeptides to form hydrogen bonds play a crucial role in sorting the CNTs. It is envisaged that CNTs can be sorted based on their chirality-selective interaction affinity to tripeptides.
This paper aims to design and implement a robust wireless charging system that utilizes affordable materials and the principle of piezoelectricity to generate clean energy, allowing the user to store the energy for later use. A wireless charging system that uses the generated piezoelectricity as a power source, integrated with Qi-standard wireless transmission, would substantially benefit the environment and users. The approach consists of full-wave-rectified piezoelectric generation, battery storage, Qi-standard wireless transmission, and Bluetooth Low Energy (BLE) as the controller and application monitor. Three main functions are involved in the design of the proposed system: power generation, power storage, and power transmission. A client application is conceived to monitor the transmission and receipt of data. The piezoelectric elements generate AC electricity from mechanical movements, which is converted to DC using full-wave bridge rectifiers. The sensor transmits the data to the application via BLE protocols. The user receives continuous updates regarding the storage level, paired devices, and remaining time to a complete charge. A Qi-standard wireless transmitter transfers the stored electricity to charge the respective devices. The output generates pulses of up to 60 V on each compression of a transducer. The design is based on multiple parallel configurations to solve the issue of charging up to the triggering value VH = 5.2 V, which arose when testing with a single piezoelectric transducer. AA-type battery cells are charged in a series-parallel configuration. The system is tested over a number of scenarios. In addition, we simulate the design for 11.11 h with approximately 70,000 joules of input. The system can charge from 5% to 100% and draw from 98%. Using four piezos in the designed module results in an average output voltage of 1.16 V; increasing the number of piezos yields up to 17.2 W of power.
The system is able to wirelessly transmit and store power with a stable power status after less than 0.01 s.
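The headline numbers above can be sanity-checked with simple energy arithmetic. The figures are taken from the abstract; the AA cell rating used for comparison is an assumed typical NiMH value, not one stated in the paper.

```python
# Back-of-the-envelope check of the reported simulation figures.
energy_j = 70_000            # approximate simulated input energy (J)
duration_s = 11.11 * 3600    # 11.11 h expressed in seconds
avg_power_w = energy_j / duration_s
print(f"average input power: {avg_power_w:.2f} W")

# For scale: energy in one AA NiMH cell, assuming a typical
# rating of ~1.2 V and ~2000 mAh (not stated in the paper).
cell_wh = 1.2 * 2.0          # volts x amp-hours = watt-hours
print(f"one AA cell holds about {cell_wh:.1f} Wh = {cell_wh * 3600:.0f} J")
```

So the simulated harvest averages roughly 1.75 W, on the order of a few AA cells' worth of energy over the 11.11 h run.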
Face recognition is one of the most researched subjects in computer vision. The attention it receives is due to the complexity of the problem. Face recognition models have to deal with a wide variety of intraclass variations, such as pose variations, facial expressions, the effects of aging, natural occlusion, and differing illumination. This challenge is often referred to as pose-illumination-expression. Our brain performs face recognition efficiently because we process faces holistically. Since achieving human-level accuracy in face recognition is the ultimate goal, we should ascertain whether a computational model mimicking this approach would better tackle the problem. In this research, we developed a computational learning model that closely mimics the way the human visual cortex performs face recognition. It decomposes the recognition task into two specialized sub-tasks: a generator performs the holistic step, followed by a classifier for the recognition step, together referred to as the "holistic model." To deal with the pose-variation problem, we introduced calculated distance features known as configural information, which correlate the frontal face with the profile face. We compared the holistic model against two baseline models and the current state-of-the-art (classical) models. The experimental results show the holistic model outperforming the current state-of-the-art on the Multi-PIE dataset by 2.14%, and it performed as expected on the Labeled Faces in the Wild dataset. The ability of the holistic model to recognize a face in any orientation with high accuracy will have a tremendous impact on biometric authentication with liveness detection.
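Configural information of the kind mentioned above is commonly built from distances between facial landmarks. The sketch below is a minimal illustration of that idea; the five landmark coordinates are invented for the example (a real system would obtain them from a landmark detector), and the paper's exact feature definition is not reproduced.

```python
import numpy as np

# Invented landmark positions for one face, in pixel coordinates.
landmarks = np.array([
    [30.0, 40.0],  # left eye
    [70.0, 40.0],  # right eye
    [50.0, 60.0],  # nose tip
    [35.0, 80.0],  # left mouth corner
    [65.0, 80.0],  # right mouth corner
])

# Configural feature vector: all unique pairwise distances between
# landmarks, normalized by the inter-ocular distance so the features
# are invariant to image scale.
diffs = landmarks[:, None, :] - landmarks[None, :, :]
dists = np.sqrt((diffs ** 2).sum(-1))
iu = np.triu_indices(len(landmarks), k=1)
features = dists[iu] / dists[0, 1]
print(features.round(3))
```

Because the same landmark pairs can be measured on frontal and profile views, such distance ratios give a pose-tolerant description that can link the two.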
This paper introduces a novel approach to enhance the efficiency of compressing Arabic Unicode text using the Lempel-Ziv-Welch (LZW) technique. The method includes two stages: transformation and compression. In the first phase, a multi-level scheme that operates at the level of words, syllables, and characters replaces multi-byte symbols with single-byte symbols, producing a binary output 51%-75% smaller than the original size and well suited for compression. In the second phase, the outputs of the previous phase are fed as inputs to the adaptive LZW technique, together with a value representing the initial phrase length (minimum code length). This value is determined automatically from the size of the data source to enhance the performance of LZW. The original data size is included in the compressed file and is used during decompression to determine the initial phrase length. The compression ratio achieved by the proposed method was compared to that of the traditional LZW technique, which uses multi-byte encoded characters and a fixed initial phrase length, as well as to two recent technologies, DEFLATE and Gzip. Experimental results indicate that our method achieves an average compression rate of about 71% and outperforms the other methods for all forms of Arabic text, with the improved LZW compressing files roughly 7% further. Variable-length-dictionary LZW consistently shows a significant improvement in compression ratios for small files compared to modern methods, whether it uses the variable-width encoding scheme alone or with multi-level encoding as a pre-compression step. The multi-level scheme can also be used as a preprocessing step for other compression techniques, especially those that work efficiently with binary data. In addition, the original data size can be used as a private key in data security and encryption applications.
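For reference, the adaptive LZW core that the second phase builds on works as sketched below. This is plain textbook LZW over bytes; the paper's multi-level Arabic transformation and its automatic initial-code-length selection are not reproduced here.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook adaptive LZW: start with a 256-entry byte dictionary
    and grow it with each new phrase encountered in the input."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    phrase = b""
    out = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate          # extend the current phrase
        else:
            out.append(dictionary[phrase])
            dictionary[candidate] = next_code  # learn the new phrase
            next_code += 1
            phrase = bytes([byte])
    if phrase:
        out.append(dictionary[phrase])
    return out

codes = lzw_compress(b"abababababab")
print(codes)  # 6 codes for 12 input bytes
```

The pre-transformation matters because LZW's dictionary fills with single-byte-alphabet phrases; mapping multi-byte Arabic symbols to single bytes first lets the dictionary capture longer repeats sooner.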
Moringa oleifera is a nutrient-rich plant, also referred to as a miracle tree, and is commonly used in the preparation of functional foods, including herbal biscuits. Despite having a wide range of biomolecules, M. oleifera has not been studied for its nutritional benefits in Nepal. To fill this gap, five formulations with flower-to-leaf powder ratios of 11:4, 11.75:3.25, 12.5:2.5, 13.25:1.75, and 14:1, named A, B, C, D, and E, plus a control formulation, were tested for their sensory and chemical characteristics. The results showed that calcium content (115.73 mg/100 g) was highest in biscuits with the highest percentage of leaf powder (11:4), while TPC was lowest. Further, biscuits containing a higher percentage of flower powder contained fewer tannins. The sensory analysis concluded that formulation D was deemed best in overall attributes by panelists upon statistical analysis; however, formulations A and B were superior to the other samples regarding chemical properties. These findings confirm that there is substantial potential for improving herbal biscuits.
Currently, fraud detection is employed in numerous domains, including banking, finance, insurance, government organizations, and law enforcement. The number of fraud attempts has grown significantly in recent years, making fraud detection critical for protecting personal and sensitive data. Fraud takes several forms, such as stolen credit cards, forged checks, deceptive accounting practices, and card-not-present (CNP) fraud. This article introduces the credit card-not-present fraud detection and prevention (CCFDP) method for dealing with CNP fraud using big data analytics. To deal with suspicious behavior, the proposed CCFDP includes two steps: the fraud detection process (FDP) and the fraud prevention process (FPP). The FDP examines the system to detect harmful behavior, after which the FPP assists in preventing malicious activity. Five cutting-edge methods are used in the FDP step: random undersampling (RU), t-distributed stochastic neighbor embedding (t-SNE), principal component analysis (PCA), singular value decomposition (SVD), and logistic regression learning (LRL). To conduct experiments, the FDP needs a balanced dataset; random undersampling is used to overcome this issue. Furthermore, to improve data representation, the FDP must reduce the dimensionality of the features. This procedure employs the t-SNE, PCA, and SVD algorithms, resulting in a faster data training process and improved accuracy. The FPP uses the LRL model to evaluate the success and failure probability of CNP fraud. The proposed CCFDP mechanism is implemented in Python. We validate the efficacy of the proposed CCFDP mechanism based on the testing results.
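The undersample-reduce-classify chain described above can be sketched with scikit-learn. The data here are synthetic and heavily imbalanced by construction; only PCA is used for the reduction step (the t-SNE and SVD variants are omitted for brevity), so this illustrates the pipeline shape rather than the CCFDP method itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic, imbalanced stand-in for a CNP-fraud dataset:
# 5000 legitimate rows, 50 fraudulent rows.
n_legit, n_fraud, d = 5000, 50, 20
X = np.vstack([rng.normal(0, 1, (n_legit, d)),
               rng.normal(1.5, 1, (n_fraud, d))])
y = np.array([0] * n_legit + [1] * n_fraud)

# Random undersampling: keep all fraud rows, sample an equal
# number of legitimate rows to balance the classes.
legit_idx = rng.choice(n_legit, size=n_fraud, replace=False)
idx = np.concatenate([legit_idx, np.arange(n_legit, n_legit + n_fraud)])
Xb, yb = X[idx], y[idx]

# Dimensionality reduction, then logistic regression to score
# the probability of fraud.
Xr = PCA(n_components=5, random_state=0).fit_transform(Xb)
X_tr, X_te, y_tr, y_te = train_test_split(
    Xr, yb, test_size=0.3, random_state=0, stratify=yb)
clf = LogisticRegression().fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"balanced-set accuracy: {acc:.2f}")
```

In production, the fitted model's `predict_proba` output would feed the prevention step's decision on whether to block a transaction.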