Recent publications
The growing convergence of the thermal fluid sciences and artificial intelligence (AI) has changed traditional energy management methods, offering solutions for energy conservation, fluid dynamics, and heat transfer optimisation. This review surveys the latest developments in AI-enabled machine learning techniques, such as Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), and deep learning architectures, for predictive modelling and for improving the effectiveness of thermal fluid applications. These machine learning algorithms offer a potent environment for optimising energy flow, temperature regulation, and application stability in support of sustainable energy goals. Furthermore, reinforcement learning techniques enable adaptive control of intricate thermal applications in real-time settings, while Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are employed for application monitoring and real-time data processing. Combining blockchain technology with AI introduces a decentralised framework that makes energy conservation methods secure, transparent, and reliable: the blockchain provides an immutable ledger, while smart contracts provide accountability and traceability, supporting the vital tasks of dynamically monitoring and validating energy consumption across decentralised applications (DApps) in real time. The article offers a thorough examination of recent research, the integration of emerging technologies, and real-world uses of blockchain and AI in thermal fluid applications. Combining the predictive power of AI with the security features of blockchain yields a cost-effective energy management environment that supports international energy conservation initiatives.
In addition, it offers a platform for future study, providing a starting point for innovation in sustainable energy management.
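The adaptive real-time control that the review attributes to reinforcement learning can be illustrated with a minimal tabular Q-learning sketch on a toy discretized thermal plant. The state space, reward shaping, and hyperparameters below are illustrative assumptions, not taken from any system the review covers.

```python
import random

# Toy adaptive thermal control with tabular Q-learning: keep a discretized
# temperature at a setpoint. A sketch of the RL idea only; real thermal
# plants need continuous states and richer dynamics.
STATES, SETPOINT = 5, 2            # temperature levels 0..4, target level 2
ACTIONS = (-1, 0, +1)              # cool, hold, heat

def step(s, a):
    """Deterministic toy dynamics: the action shifts the temperature level."""
    s2 = min(STATES - 1, max(0, s + a))
    reward = 1.0 if s2 == SETPOINT else -abs(s2 - SETPOINT)
    return s2, reward

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning over short episodes from random starts."""
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(STATES)]
    for _ in range(episodes):
        s = rng.randrange(STATES)
        for _ in range(10):
            i = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda j: Q[s][j]))
            s2, r = step(s, ACTIONS[i])
            Q[s][i] += alpha * (r + gamma * max(Q[s2]) - Q[s][i])
            s = s2
    return Q
```

After training, the greedy policy heats when below the setpoint, cools when above it, and holds at the setpoint, which is the "adaptive control" behaviour in miniature.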
The aim of this study was to evaluate the influence of innovative mineral–organic mixtures containing zeolite composites produced from fly ashes and lignite or leonardite on the fractional composition of soil organic matter in sandy loam soil in a two-year pot experiment with maize. The fractional composition of soil organic matter (SOM) was analyzed, and changes in the functional properties of soil groups were identified using the ATR-FTIR method. Changes in the content of phenolic compounds were assessed, and the potential impact of the fertilizer mixtures on soil carbon stocks was investigated. The addition of these mixtures improved the stability of SOM. The application of mineral–organic mixtures significantly increased total organic carbon (TOC), by 18% after the second year of the experiment; the maximum increase in TOC content (33%) was observed with the MC3%Leo3% amendment. Soil nitrogen content increased by 62% with the MV9%Leo6% additive, indicating increased soil fertility. The study highlighted an increase in fulvic acid carbon relative to humic acid carbon, signaling positive changes in organic matter quality. The new mineral–organic mixtures influence changes in specific functional groups (ATR-FTIR) present in the soil matrix, compared to mineral fertilization alone. The additive mixtures also contributed to an increase in soil carbon stocks, highlighting their potential for long-term improvement of soil fertility and carbon sequestration.
The intricacies and instability of introducing cryogenic propellants into a combustion system have piqued the curiosity of scientists studying the process. The latest innovation is the use of data-driven machine learning and deep learning approaches to gain deeper insights into the related difficulties. The current work serves as a baseline for future research, because relatively few studies have used data-driven methodologies to assess the temperature of liquid fuel injections in combustion systems.
This study extends individual psychological capital (positive psychology) to a collective-level construct, a novel contribution, and examines the unexplored relationships between Collective Psychological Capital (CPsyCap), safety stressors, and safety behaviors among ground-level workers in oil and gas downstream plants. Improving workplace safety requires understanding the causes of these safety behaviors. Structural Equation Modeling (SEM) was used to determine cause-and-effect relationships. The sample included 376 downstream oil and gas workers; a structured questionnaire measured the relationships between Collective PsyCap, safety stressors, and safety behaviors through a cross-sectional approach. Collective PsyCap positively affects workers' safety behaviors, while safety role conflict and ambiguity negatively impact safety behaviors and dampen the positive association between Collective PsyCap and safety behaviors. The results emphasize the positive influence of Collective PsyCap on safety behaviors within the downstream oil and gas sector, underscoring the importance of cultivating positive psychological resources among employees, and the negative impact of safety role conflict and safety role ambiguity highlights the significance of mitigating role-related stressors to improve safety outcomes. This study enhances our comprehension of workplace safety through collective psychological capital (a cognitive resource) in the downstream oil and gas industry.
Forests are considered a source of fresh air, and trees emit oxygen, the lifeline of the biotic environment. The air quality issues the world now faces are closely related to the loss of forests from the face of the Earth. The world holds tropical rainforests in the equatorial regions and tundra forests in the polar regions in reasonable amounts compared to temperate forests. The present research focused on the forests of Pakistan. The country spans latitudes 24° to 36° N, placing it in and around the temperate zone, but the north of Pakistan contains some of the world's most spectacular mountain ranges, whose exceptional heights and ice-capped regions affect climatic conditions and the associated vegetative cover. Pakistan therefore has all kinds of forests, i.e., coniferous, deciduous, and all associated types. This research applied land use/land cover (LULC) assessment methods using geospatial techniques, i.e., GIS (Geographical Information System) and SRS (Satellite Remote Sensing), with several software packages. The study encompasses two regions with different topographical characteristics: the mountainous north (Chitral) and a riverine lowland or plain area (Sindh). For data acquisition, object-based analysis and supervised classification methods were used in the remote sensing domain, while digitization and GCPs (ground control points) collected via GPS (Global Positioning System) were used in the GIS domain. AI (artificial intelligence) on GEE (Google Earth Engine) was applied to detect the locations of threats in terms of forest loss. The most prominent forest loss was observed in Chitral, while Sindh is trying hard to save its leftover forest zones. This study will be useful for further research on forest land in Pakistan.
Climate change triggers many environmental issues that could be mitigated by reducing the cutting of wood for lumber; such deforestation is not caused by climate change, but this human intervention may itself be a root cause of climate change.
Denoising is one of the most important processes in digital image processing for recovering visual quality and structural integrity in images. Traditional methods often suffer from limitations such as computational complexity, over-smoothing, and an inability to preserve critical details, particularly edges. This paper introduces a hybrid denoising algorithm combining an Adaptive Median Filter (AMF) and a Modified Decision-Based Median Filter (MDBMF) to address these challenges. The AMF adjusts window sizes dynamically to precisely detect noisy pixels, and the MDBMF selectively recovers corrupted pixels without affecting intact regions, effectively reducing noise while preserving edges. The subjective visual-quality analysis is supplemented with objective analyses, and both show that the hybrid approach considerably outperforms existing state-of-the-art methods. Tests were conducted on nine standard benchmark images and a medical dataset (chest and liver images) with noise densities ranging from 10 to 90%. Quantitative evaluations using PSNR, MSE, IEF, SSIM, FOM and VIF clearly show the superiority of the hybrid approach over state-of-the-art methods: PSNR improved by up to 2.34 dB, IEF by more than 20%, and MSE by up to 15% over methods such as BPDF, AT2FF, and SVMMF. SSIM values improved by up to 0.07, confirming improved structural similarity. Furthermore, the FOM and VIF metrics demonstrate the remarkable performance of the hybrid approach: both exceeded all other denoising techniques evaluated, reaching 0.68 and 0.61, respectively.
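The adaptive-window detection idea behind the AMF stage can be sketched as follows. This is a minimal textbook-style adaptive median filter, not the authors' implementation; the window limit and fallback rule are standard assumptions.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Illustrative adaptive median filter for salt-and-pepper noise.

    For each pixel, grow the window until the local median is not itself an
    impulse (a min/max value); then replace the pixel only if the pixel is
    an impulse. Clean pixels are left untouched, which preserves edges.
    """
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for k in range(1, max_window // 2 + 1):
                win = img[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
                med = int(np.median(win))
                if win.min() < med < win.max():        # median is reliable
                    if not (win.min() < img[y, x] < win.max()):
                        out[y, x] = med                # pixel is noisy: replace
                    break                              # pixel is clean: keep
            else:
                out[y, x] = med                        # window hit max size
    return out
```

On a gradient image with a single 255 impulse, the impulse is replaced by the local median while neighbouring clean pixels keep their values.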
This study presents the synthesis and use of a conductive polymer/clay nanocomposite for the adsorptive removal of an azo dye, methyl orange (MO), from artificial wastewater. The PANI-CLAY nanocomposites were synthesized via the oxidative polymerization route and characterized using Brunauer–Emmett–Teller (BET) surface area analysis, thermogravimetric analysis (TGA), Fourier-transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM). The surface area of the clay mineral decreased from 37.38 to 13.44 m²/g for the 10 g PANI/CLAY composite, most likely because a layer of PANI covers the clay surface. Further, TGA revealed that incorporating CLAY significantly improved the thermal stability of PANI. The effects of adsorption process parameters such as adsorbent dosage (0.006–0.4 g), solution pH (1, 3, 5, 7, 9, 11 and 13), initial dye concentration (50–300 ppm), contact time (1–80 min) and temperature (25 °C, 30 °C, 35 °C and 40 °C) on the removal efficiency (%) were investigated. The experimental data were well fitted by the pseudo-second-order kinetic model. The maximum uptake capacity (qmax) increased from 42.017 mg/g (PANI/CLAY, 10 g) to 55.87 mg/g for PANI alone. These uptake capacities imply that the prepared adsorbents possess excellent adsorption characteristics with high affinity for organic dye removal.
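The pseudo-second-order kinetic model mentioned above has the linearized form t/q_t = 1/(k2·qe²) + t/qe, so plotting t/q_t against t and fitting a line recovers qe from the slope and k2 from the intercept. A sketch on a synthetic uptake curve; the rate constants below are illustrative, not the study's fitted values.

```python
import numpy as np

# Pseudo-second-order kinetics: dq/dt = k2*(qe - q)^2, whose integrated,
# linearized form is  t/q_t = 1/(k2*qe^2) + t/qe.
def fit_pseudo_second_order(t, qt):
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope                     # equilibrium uptake (mg/g)
    k2 = 1.0 / (intercept * qe ** 2)     # rate constant (g/(mg*min))
    return qe, k2

# Synthetic uptake curve generated from the model itself (illustrative):
qe_true, k2_true = 55.9, 0.004
t = np.linspace(1, 80, 40)                                   # contact time, min
qt = (k2_true * qe_true ** 2 * t) / (1 + k2_true * qe_true * t)
qe_fit, k2_fit = fit_pseudo_second_order(t, qt)
```

Because the synthetic data obey the model exactly, the fit recovers qe and k2 to numerical precision; with real adsorption data the linearity of t/q_t vs. t is what indicates a good pseudo-second-order fit.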
The study investigates the acoustic patterns of word stress in Pakistani English (PE) speech. It hypothesizes that lexical stress in Pakistani English modifies acoustic properties and that its speakers do not differentiate disyllabic words as nouns or verbs based on lexical stress. The study offers valuable insights into the variability of English speech among Pakistani speakers, contributing to the understanding of English pronunciation in a multilingual context, particularly the pronunciation differences among speakers with various first languages (L1), including Sindhi, Urdu, Punjabi, Pashto, and Balochi. By examining acoustic properties such as pitch (F0), duration, and vowel formants (F1 and F2), the study identifies and compares the lexical stress patterns in PE, using a sample of 100 participants (20 for each language) from different L1 backgrounds. Seven pairs of disyllabic words were selected as stimuli, following the methodology of Beckman (Stress and Non-Stress Accent, Foris Publications, 1986) and Fry (Journal of the Acoustical Society of America 27:765–768, 1955; Language and Speech 1:120–152, 1958). Each word pair consisted of a noun and a verb with identical spelling that differed only in stress placement (noun: stress on the initial syllable; verb: stress on the final syllable). The stimulus pairs were formed from the following word forms: contract, desert, object, permit, rebel, record, and subject. Each target word was elicited in isolation and in the semantically neutral frame sentence "I said __ this time", accompanied by context sentences created specifically for each word. The study's experimental datasets will be used to train machine learning models, which will increase the accuracy of voice recognition for English speakers in Pakistan.
For linguistic study and pedagogical purposes, the study offers insights into the phonetic variants of Pakistani English. Its conclusions can help develop speech recognition and machine learning tools that more accurately understand and interpret the lexical stress patterns in Pakistani English speech.
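The noun/verb stress contrast described above can be sketched as a simple decision over per-syllable acoustic cues. The F0 and duration values below are hypothetical illustrations, not measurements from the study, and the scoring rule is a deliberately crude stand-in for the acoustic analysis.

```python
def stress_position(syllables):
    """Return the index of the stressed syllable (0 = initial, 1 = final)
    by scoring each syllable on normalized pitch and duration, two of the
    cues the study measures alongside vowel formants."""
    top_f0 = max(s["f0"] for s in syllables)
    top_dur = max(s["dur"] for s in syllables)
    scores = [s["f0"] / top_f0 + s["dur"] / top_dur for s in syllables]
    return scores.index(max(scores))

# Hypothetical measurements (F0 in Hz, duration in ms) for one noun/verb
# token pair of "record"; the numbers are illustrative only.
noun = [{"f0": 180, "dur": 150}, {"f0": 140, "dur": 110}]  # RE-cord
verb = [{"f0": 150, "dur": 100}, {"f0": 190, "dur": 160}]  # re-CORD
```

Under this toy rule the noun token comes out initial-stressed and the verb token final-stressed, mirroring the contrast the stimuli are designed to elicit.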
The widespread use of mobile devices has made it possible to gather large amounts of crowdsourced educational data, which presents new possibilities for improving mobile application recommendation algorithms. This article explores the efficient use of this data to enhance recommendation algorithms through machine learning and deep learning approaches. We apply collaborative filtering techniques in the machine learning setting using mobile data from 806 students, and we also investigate more complex deep learning models such as stacked autoencoders and graph autoencoders. Our experiments show that deep learning methods greatly improve recommendation accuracy and relevance by capturing intricate patterns and context-aware data. This study highlights the revolutionary potential of crowdsourced mobile educational data in shaping the development of mobile application recommendation algorithms.
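The collaborative filtering baseline mentioned above can be sketched in a few lines: score the apps a student has not rated by a cosine-similarity-weighted sum of other students' ratings. The rating matrix below is a toy illustration, not the study's data of 806 students.

```python
import numpy as np

def recommend(ratings, user, top_n=1):
    """User-based collaborative filtering: rank the unrated columns for
    `user` by a cosine-similarity-weighted vote over the other rows."""
    sims = ratings @ ratings[user]
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
    sims = sims / np.maximum(norms, 1e-12)   # cosine similarity to each user
    sims[user] = 0.0                         # exclude the target user
    scores = sims @ ratings                  # weighted sum of others' ratings
    scores[ratings[user] > 0] = -np.inf      # only rank unrated apps
    return np.argsort(scores)[::-1][:top_n]

# Rows = students, columns = apps, 0 = unrated (illustrative toy matrix).
R = np.array([
    [5, 4, 0, 0],
    [5, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)
```

Student 0 resembles student 1, so the highly similar neighbour's ratings dominate the recommendation; the stacked and graph autoencoders in the article replace this linear similarity with learned representations.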
Text summarization is crucial in various sectors, such as engineering and healthcare, because it saves time and costs. Current extractive text summarization methods struggle with challenges such as greedy selection, limited model generalization, and high computational demands. To address these problems, this research introduces a novel extractive text summarization method that integrates a Generative Adversarial Network (GAN), Transductive Long Short-Term Memory (TLSTM), and DistilBERT for sentence embedding. Our technique uses a GAN, with generator and discriminator components whose core design is based on TLSTM. TLSTM utilizes transductive learning to improve accuracy by focusing on samples closer to the test data. In our model, the generator decides whether to include a sentence in the summary while the discriminator critically reviews the generated summary. This GAN model reduces greedy sentence selection, enhancing summary coherence and quality. We implement a Reinforcement Learning (RL)-based strategy to address the imbalance caused by there being more fake than real samples in the discriminator. This RL approach, novel in the context of GANs for summarization, views training as a sequence of interconnected decisions, treating each sample as a unique scenario; the network, acting as the decision-making agent, assigns greater rewards or penalties to the minority class to correct the imbalance. The effectiveness of our model was evaluated on the well-regarded CNN/Daily Mail dataset, achieving ROUGE-1, ROUGE-2, and ROUGE-L scores of 52.45, 26.46, and 44.85, respectively. Compared to existing methods, these results demonstrate a significant improvement in summarization quality and operational efficiency, as measured by the ROUGE metrics.
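The ROUGE-1 score reported above measures unigram overlap between a candidate summary and a reference; a minimal sketch of the F1 variant (the reported scores are percentages, and production evaluations use the official ROUGE tooling with stemming and multiple references):

```python
from collections import Counter

def rouge_1(candidate, reference):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall between a
    candidate summary and a reference summary."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())          # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)
```

For example, the candidate "the cat sat" against the reference "the cat sat on the mat" has precision 1.0 and recall 0.5, giving F1 = 2/3.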
Text sentiment analysis extracts data and transforms it into meaningful sentiment. In this research study, we extract Urdu text data related to medicine and convert it into a useful format that can be used to build an application. Electronic media quickly provides large amounts of information in any language, but it is unstructured and raw, making this easily available data difficult to understand. Urdu is among the most sought-after languages in Asian countries, where a majority prefer it. The sole distinction between the Urdu and Hindi languages is their writing script; the Roman scripts of both languages are comparable. On the Urdu dataset, pre-processing, feature engineering, and other approaches are used to extract clean data that can easily be used to train multiple machine learning models. Because the application to be built requires only medical-related datasets retrieved from external sources, i.e., websites, newspapers, blogs, and other physical resources, the techniques used are appropriate.
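A first pre-processing step of the kind described above might strip digits and punctuation from raw Urdu web text and count word unigrams as features. This is a minimal sketch only; the Urdu string in the test is an illustrative phrase, not taken from the study's corpus.

```python
import re
from collections import Counter

def preprocess(text):
    """Minimal cleaning/feature step for raw Urdu text: remove ASCII and
    Arabic-script digits and punctuation, then count word unigrams."""
    text = re.sub(r"[0-9\u0660-\u0669\u06F0-\u06F9]", " ", text)  # digits
    text = re.sub(r"[^\w\s]", " ", text)                          # punctuation
    return Counter(text.split())
```

The resulting unigram counts can feed standard bag-of-words vectorizers for the machine learning models mentioned in the abstract.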
The study examines how native Pashto ESL learners place stress on identical lexemes within English sentences, focusing on pitch, duration, and intensity. The objectives of this study are to analyze the acoustic properties of stressed syllables, to investigate the influence of Pashto on English stress production, and to improve the understanding of prosodic stress to aid Pashto ESL learners. An experimental and quantitative approach was used, involving 420 tokens (20 × 3 × 7) from N = 20 Pashto ESL speakers reading pre-selected words in carrier phrases. The voice samples were collected from cloud OneDrive corpora datasets (Abbasi, Abbasi, SRSP-Pak-Eng-43, 2023; SHEC unpublished raw datasets, Sindh Madressatul Islam University, 2023-SHEC-SRSP-PaK-Eng-43) of Pashto speakers, using a similar method based on disyllabic stimuli. Seven pairs of disyllabic words were selected as stimuli following the framework of Beckman & Pierrehumbert (Phonology 3:255–309, 1986) and Fry (The Journal of the Acoustical Society of America 27:765–768, 1955; Language and Speech 1:126–152, 1958). Each word pair consisted of a noun and a verb with identical spelling that differed only in stress placement (noun: stress on the initial syllable; verb: stress on the final syllable). The recordings were analyzed using the PRAAT software. The study aims to enhance communication skills and cultural understanding in language training for Pashto-speaking ESL learners. Pashto speakers often misplace stress in English due to differences in prosodic characteristics between Pashto and English; the study therefore analyzed the pitch (F0), duration, and intensity of stressed syllables in English short sentences spoken by Pashto ESL learners. Misplaced stress affects communication abilities and language acquisition outcomes for Pashto speakers.
The findings can help develop better instructional strategies and materials for ESL programs targeting Pashto speakers, and they contribute to the broader field of cognitive science, particularly in understanding the relationship between prosody and stress perception. The implication of the study is that understanding prosodic stress patterns can significantly improve the communication abilities of Pashto speakers of English as a second language, making their English speech more natural and comprehensible.
The levels of particulate matter (PM10) in Karachi, Pakistan, pose hazards to public health and contribute to environmental degradation. In this study, two statistical techniques, land use regression (LUR) and the Pearson correlation coefficient (PCC), were applied to relate PM10 to Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD) along with four meteorological parameters. The average values of observed PM10, MODIS AOD, and predicted PM10 were analyzed with the PCC. The results show that the M4 and M7 models are the most reliable, with a coefficient of determination R² > 0.6 and a root mean square error (RMSE) of 2–14. The PCC showed a strong, significant positive correlation among observed PM10, MODIS AOD, and predicted PM10. The concentrations of PM10 and AOD increased by approximately 182% and 208%, respectively, and dropped by 28% to 30% during COVID-19 (2020). In conclusion, remote sensing combined with hybrid modeling can be used to monitor and mitigate air pollutants in regions lacking ground-based air quality data resources.
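At its core, a land-use-regression model of this kind is ordinary least squares of PM10 on AOD plus meteorological covariates, judged by R² and RMSE as above. A sketch on synthetic data; the coefficients and the single temperature covariate below are assumptions for illustration, not the study's M4/M7 models.

```python
import numpy as np

def fit_lur(X, y):
    """OLS for an LUR-style model, PM10 ~ AOD + meteorological covariates.
    Returns the coefficients (intercept first), R-squared, and RMSE."""
    A = np.column_stack([np.ones(len(y)), X])        # design matrix + intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ beta
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    return beta, r2, rmse

# Synthetic observations: PM10 driven by AOD and temperature plus noise.
rng = np.random.default_rng(0)
aod = rng.uniform(0.2, 1.0, 100)
temp = rng.uniform(20, 40, 100)                      # illustrative covariate
pm10 = 30 + 120 * aod + 0.5 * temp + rng.normal(0, 2, 100)
beta, r2, rmse = fit_lur(np.column_stack([aod, temp]), pm10)
```

With a strong simulated AOD signal and modest noise, the fit recovers the AOD coefficient and yields an R² well above the 0.6 threshold used to judge the models above.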
Atmospheric fine particulate matter (PM2.5) poses various health-related risks. Even though multiple efforts have been made to lower emissions of these substances, the associated mortality rate is continuously increasing, requiring the scientific community's immediate attention to the design and development of advanced predictive models. Conventional statistical approaches have become dormant due to their limitations in capturing the innate relationships between pollutants, particularly for predicting PM2.5 concentrations. In contrast, machine and deep learning techniques have shown great potential for forecasting air quality, providing more accuracy than their predecessors. The present study investigates hybrid approaches that integrate machine learning models with deep learning models to improve the prediction of PM2.5 concentrations. It uses datasets from the World Air Quality Index (WAQI) and the State of Global Air (SOGA) to analyze model performance on daily and annual data, respectively, ensuring the models' effectiveness on a diversified dataset. The study implements Random Forest (RF), Polynomial Regression (PR), XGBoost, and Extra Tree Regressor (ETR) coupled with a Fully Connected Neural Network (FCNN), Long Short-Term Memory (LSTM), and Bi-directional LSTM (Bi-LSTM) to obtain optimized results. After a thorough investigation, the hybrid PR model coupled with an FCNN (PR-FCNN) is found to be the best model, with improved R-squared (R²) values, demonstrating its potential for accurately predicting PM2.5 concentrations. Based on this experimentation, the present study recommends implementing hybrid approaches, which offer better predictive accuracy in forecasting air pollutants, especially PM2.5.
The incorporation of Artificial Intelligence (AI) into the fields of neurosurgery and neurology has transformed the landscape of the healthcare industry. The present study describes seven dimensions of AI that have transformed the way care is provided and patients are diagnosed and treated. AI has exhibited unparalleled accuracy in analyzing complex medical imaging data and expediting precise diagnoses of neurological conditions. It has also enabled personalized treatment plans by harnessing patient-specific data and genetic information, promising more effective therapies. For instance, AI-powered surgical robots have brought precision and remote capabilities to neurosurgical procedures, reducing human error. Machine learning models predict disease progression, optimizing resource allocation and patient care, while AI-enabled wearable devices provide continuous neurological monitoring and enable early intervention for chronic conditions. AI has also accelerated drug discovery by analyzing vast datasets, potentially leading to breakthrough therapies, and AI-powered chatbots and virtual assistants enhance patient engagement and adherence to treatment plans. It holds promise for further personalization of care, augmented decision-making, earlier intervention, and the development of groundbreaking treatments. The present study mainly focuses on the incorporation of blockchain technology and provides a reasonable understanding of the associated issues and challenges, along with their solutions. It will allow AI and healthcare professionals to advance the field and contribute to the improvement of individuals' well-being when facing neurological challenges.
Financial inclusion is a crucial phenomenon in the current period of development, since it drives economic progress in all countries. This article seeks to determine the impact of financial inclusion on banks' profitability and risk. The study spanned five years, from 2018 to 2022, and focused on Pakistani commercial listed banks. A systematic investigation was conducted by gathering data on the variables from official websites and financial reports, and regression analysis was used to analyze the data. The findings indicate that banks' profitability is positively influenced by financial inclusion but, at the same time, financial inclusion also raises banks' risk. This suggests that policy synergies are necessary to expedite both financial inclusion and sustainability. However, it is crucial to address the intricate issues of financial inefficiency and the absence of genuine documentation that arise as financial inclusion expands.
In response to the potential risks posed by natural or human-made disasters, the Pakistan School Safety Framework (PSSF) was created to serve as a comprehensive strategy aimed at safeguarding the well-being of students, teachers, and school staff. A study was undertaken to evaluate the readiness of secondary school teachers in the face of disasters, utilizing the Sustainable Development Goals and the Pakistan School Safety Framework as guiding principles. An assessment questionnaire comprising 27 items aligned with the Sustainable Development Goals (SDGs) was meticulously developed and scrutinized by three subject matter experts before implementation for data gathering. The assessment tool encompassed four variables and a total of 27 items. Data was collected from 64 schools and 320 secondary school instructors and was subsequently analyzed using SPSS and Microsoft Excel. The results revealed that the majority of schools fell into the "Slightly Prepared" category based on the assessment outcomes. Following the study, the researcher provided recommendations to enhance disaster preparedness among secondary school teachers. These included the implementation of regular drills, provision of training sessions, and updating of emergency response plans. Moreover, the researcher emphasized the significance of collaboration with local emergency services and community organizations to bolster preparedness efforts and ensure a coordinated response in the event of a disaster.
Handwritten character recognition falls under the domain of image classification, which has been researched for years. Still, specific gaps remain in offline handwritten character recognition (OHCR), such as the unstructured hierarchy of character classification. The idea is to make the machine recognize handwritten human characters; the language focused on in this research is English, using offline handwritten character recognition to identify English characters. Of the many publicly available datasets, EMNIST is the most challenging. The key idea of this research paper is to propose a deep-learning-based ELBP-CNN method to help recognize English characters. The paper proposes a deep learning ConvNet with feature extraction and novel local binary pattern-based approaches, LBP (AND, OR), that is tested and compared with renowned pre-trained models using transfer learning. The parametric settings address multiple issues and were finalized after experimentation; the same hyperparameter and data augmentation settings were used for all models under test and for E-Character. The proposed model, named the E-Character recognizer, achieved 87.31% accuracy, better than most of the tested pre-trained models and methods proposed by other researchers. The paper further highlights some problems, such as misclassification due to the similar structure of characters.
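The LBP features underlying the proposed ELBP (AND, OR) operators build on the classic 3×3 local binary pattern: threshold each pixel's eight neighbours against the centre and pack the results into an 8-bit code. A sketch of that base operator only; the paper's AND/OR variants are not reproduced here.

```python
import numpy as np

def lbp_codes(img):
    """Classic 3x3 local binary pattern: for each interior pixel, set one
    bit per neighbour that is >= the centre, packed clockwise from the
    top-left into an 8-bit code."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy, x + dx] >= img[y, x]:
                    code |= 1 << (7 - bit)
            out[y - 1, x - 1] = code
    return out
```

A uniform patch yields the all-ones code 255, while a centre pixel brighter than all its neighbours yields 0; histograms of such codes are the texture features a CNN pipeline can consume.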
Information
Address: Karachi, Pakistan