Chapter

Impact Assessment of ‘ICT Practices’ on ‘Supply Chain Management Performance’ in Automotive Industry in India

Authors:
  • PAHER University India
  • Junagadh Agricultural University, Gujarat, India

Abstract

In the era of COVID-19, most businesses declined and jobs were lost on a huge scale due to collapsed demand, and the automobile sector was no exception. Nearly 7.5% of India's GDP comes from the automotive industry, and the supply chain is one of the key factors in a firm's overall value creation. An efficient and effective supply chain now depends on information and communication technologies (ICTs); in that sense, ICT is the spine of SCM. The major objective of the study was to draw conclusions on how ICT practices affect supply chain management (SCM) performance in the Indian auto sector. The outcomes show that ICT practices are highly correlated with, and have a direct impact on, SCM performance; however, they do not have much impact on operational performance. The research also suggests that better and more effective ICT practices result in better supply chain performance. The limitations of the research were its reliance on respondents' voluntary cooperation, and that the ICT practices considered are limited to supply chain operational performance across departments and functions only, not to applications within vehicles.
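The core quantitative claim above is a correlation between ICT practice adoption and SCM performance. A minimal sketch of that kind of analysis is shown below; the respondent scores are invented for illustration and are not the study's data.

```python
# Hypothetical illustration of correlating survey scores for ICT
# practice adoption with SCM performance ratings. All numbers below
# are invented for the sketch, not taken from the study.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented Likert-scale scores (1-5) from eight hypothetical respondents.
ict_practice_score = [2.1, 3.4, 3.0, 4.2, 4.8, 2.7, 3.9, 4.5]
scm_performance    = [2.4, 3.1, 3.3, 4.0, 4.7, 2.5, 3.8, 4.4]

r = pearson_r(ict_practice_score, scm_performance)
print(f"Pearson r = {r:.3f}")  # a value near +1 indicates strong positive correlation
```

In practice such a coefficient would be accompanied by a significance test and, for the "direct impact" claim, a regression or structural model rather than correlation alone.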




Article
Full-text available
IR 4.0 is a new phase in the current trend of automation and data exchange in the manufacturing industry that focuses on cloud computing, interconnectivity, the internet of things, machine learning, cyber-physical systems, and the creation of smart factories. The purpose of this article was to unveil the key factors of IR 4.0 in the Malaysian smart manufacturing context. Two data collection methods were used: (1) primary data from face-to-face interviews and (2) secondary data from previous studies. Five key factors of IR 4.0 were considered for this study: autonomous production lines, smart manufacturing practices, data challenges, process flexibility, and security. The results suggest that IR 4.0 can strongly benefit quality management practices and performance assessment, which is addressed in various ways. Few studies in this area have been conducted in the Malaysian manufacturing sector to recommend best practices from managers' perspectives. For scholars, this enhances understanding and highlights opportunities for further research.
Article
Full-text available
Teenage siblings of children with autism spectrum disorder (ASD) are at risk of worse mental health outcomes than their peers, yet there have been few interventions focused on improving their psychosocial wellbeing. This study explored the acceptability of an 8-session virtual group mind-body resiliency intervention for teen siblings of children with ASD. We used mixed methods to assess quantitative and qualitative survey results. Participants reported that the intervention had the right number of sessions (88%), the right structure (74%), and the right duration (89%). Most participants felt comfortable during sessions (74%), found it helpful to learn mind-body exercises (74%), and felt that the intervention helped them cope with stress (71%). Though participants were satisfied with the opportunity to meet peers, they desired more social connection.
Article
Full-text available
Evidence supports early intervention for toddlers with ASD, but barriers to access remain, including system costs, workforce constraints, and a range of family socio-demographic factors. An urgent need exists for innovative models that maximize resource efficiency and promote widespread timely access. We examined uptake and outcomes from 82 families participating in a parent-mediated intervention comprising group-based learning and individual coaching, delivered either in-person (n = 45) or virtually (n = 37). Parents from diverse linguistic, ethnic, and educational backgrounds gained intervention skills and toddlers evidenced significant social-communication gains. Few differences emerged across socio-demographic factors or delivery conditions. Findings highlight the feasibility, acceptability, and promise of group-based learning when combined with individual coaching, with added potential to increase program reach via virtual delivery.
Article
Full-text available
Intelligent wireless systems integrate advanced technologies such as machine learning in order to enhance performance, productivity, and output. Machine learning approaches are applied mainly to make communication more efficient, enable variable node locations, support the collection of data and information, analyze patterns, and forecast so as to provide better services to end users. Using these technologies tends to lower costs and supports effective deployment of resources. Wireless network systems tend to enhance bandwidth, and novel machine learning approaches support the detection of unrelated data and information and enable analysis of latency at each part of the communication channel. The study critically analyzes the key determinants of machine learning approaches in supporting enhanced intelligent network communication in industry. The researchers gathered both primary and secondary data for the study. The respondents were chosen from industry so that they could provide better inputs and insights related to the area of research. The key determinants considered for the study were machine-learning-influenced management of hotspots, identification of critical congestion points, and spectrum availability and management. The analysis was made using the SPSS data analysis package, on the basis of which it is noted that all the factors have a major influence on intelligent communication; hence machine learning critically supports enhancing the user experience.
Article
Full-text available
K-means is a well-known clustering algorithm often used for its simplicity and potential efficiency. Its properties and limitations have been investigated by many works in the literature. K-means, though, suffers from computational problems when dealing with large datasets with many dimensions and a great number of clusters. Therefore, many authors have proposed and experimented with different techniques for the parallel execution of K-means. This paper describes a novel approach to parallel K-means based on today's commodity multicore machines with shared memory. Two reference implementations in Java are developed and their performances are compared. The first is structured according to a map/reduce schema that leverages the built-in multi-threaded concurrency automatically provided by Java parallel streams. The second, allocated on the available cores, exploits the parallel programming model of the Theatre actor system, which is control-based, totally lock-free, and purposely relies on threads as coarse-grain "programming-in-the-large" units. The experimental results confirm that good execution performance can be achieved through the implicit and intuitive use of Java concurrency in parallel streams. However, better execution performance can be guaranteed by the modular Theatre implementation, which proves more effective at exploiting the computational resources.
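The map/reduce schema the paper describes can be sketched generically: the map step computes, per data chunk, partial sums and counts for each centroid, and the reduce step merges them to recompute centroids. The paper's reference implementations are in Java; the Python sketch below only illustrates the structure of one K-means iteration, not its performance characteristics.

```python
# Map/reduce decomposition of one K-means iteration (illustrative
# sketch only; chunk sizes and data below are invented).
from concurrent.futures import ThreadPoolExecutor

def nearest(point, centroids):
    """Index of the centroid closest to `point` (squared Euclidean)."""
    return min(range(len(centroids)),
               key=lambda j: sum((p - c) ** 2 for p, c in zip(point, centroids[j])))

def map_chunk(chunk, centroids):
    """Map step: per-chunk partial coordinate sums and counts per centroid."""
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for point in chunk:
        j = nearest(point, centroids)
        counts[j] += 1
        for d in range(dim):
            sums[j][d] += point[d]
    return sums, counts

def kmeans_step(points, centroids, workers=4):
    """One iteration: map over chunks in parallel, then reduce."""
    chunks = [points[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: map_chunk(c, centroids), chunks))
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for psums, pcounts in partials:          # reduce: merge partial results
        for j in range(k):
            counts[j] += pcounts[j]
            for d in range(dim):
                sums[j][d] += psums[j][d]
    new_centroids = []
    for j in range(k):                       # recompute each centroid mean
        if counts[j]:
            new_centroids.append([s / counts[j] for s in sums[j]])
        else:
            new_centroids.append(list(centroids[j]))
    return new_centroids

points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
print(kmeans_step(points, [(0.0, 0.0), (5.0, 5.0)]))
```

Note that Python threads share one interpreter lock, so this sketch shows the schema rather than true core-level parallelism, which Java parallel streams (or the Theatre actor system) provide natively.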
Article
Full-text available
Artificial intelligence (AI) is being increasingly applied in healthcare. The expansion of AI in healthcare necessitates AI-related ethical issues to be studied and addressed. This systematic scoping review was conducted to identify the ethical issues of AI application in healthcare, to highlight gaps, and to propose steps to move towards an evidence-informed approach for addressing them. A systematic search was conducted to retrieve all articles examining the ethical aspects of AI application in healthcare from Medline (PubMed) and Embase (OVID), published between 2010 and July 21, 2020. The search terms were “artificial intelligence” or “machine learning” or “deep learning” in combination with “ethics” or “bioethics”. The studies were selected utilizing a PRISMA flowchart and predefined inclusion criteria. Ethical principles of respect for human autonomy, prevention of harm, fairness, explicability, and privacy were charted. The search yielded 2166 articles, of which 18 articles were selected for data charting on the basis of the predefined inclusion criteria. The focus of many articles was a general discussion about ethics and AI. Nevertheless, there was limited examination of ethical principles in terms of consideration for design or deployment of AI in most retrieved studies. In the few instances where ethical principles were considered, fairness, preservation of human autonomy, explicability and privacy were equally discussed. The principle of prevention of harm was the least explored topic. Practical tools for testing and upholding ethical requirements across the lifecycle of AI-based technologies are largely absent from the body of reported evidence. In addition, the perspective of different stakeholders is largely missing.
Article
Full-text available
A large number of autonomous devices are nowadays powered by renewable and green energy sources. A vital sub-circuit in such systems is the power converter circuit, which should efficiently transform and store the available energy. To obtain maximum efficiency under varying energy conditions, various maximum power point tracking (MPPT) methods are used. In this work a complete harvesting module with battery management and MPPT is presented, suitable for a plethora of autonomous applications. A novel, low-complexity and ultra-low-power design is proposed, which offers a very wide operating voltage and power range with high MPPT efficiency and very low power consumption. It can be combined with different harvesters, such as thermoelectric generators or photovoltaic panels, and is able to work under widely varying energy conditions. As supported by experimental results, the proposed module covers a very wide input power range, from 40 µW up to 4 W, as well as a very wide input voltage range, from 650 mV up to 2.8 V, with 96.5% average MPPT efficiency and a total power consumption of 3.9 µW at 3.6 V. The module relies on an embedded ultra-low-power microcontroller unit (MCU) to perform the power management and MPPT operations, which can also be used for extra tasks (e.g., sensor reading). Using the proposed module, an autonomous sensor node was built, able to acquire acceleration measurements and wirelessly communicate with a remote user in order to send an alert or stream the acquired sensor data in real time.
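To make the MPPT idea concrete, here is a generic perturb-and-observe (P&O) tracking loop, one of the classic MPPT methods the field uses. This is not the paper's own low-power algorithm, which is not reproduced here, and the power-voltage curve is a made-up toy model with its maximum at 1.5 V.

```python
# Generic perturb-and-observe MPPT sketch (illustrative only; the toy
# pv_power curve and all values are invented, not the paper's design).

def pv_power(v):
    """Toy source model: power (arbitrary units) peaks at v = 1.5 V."""
    return max(0.0, 4.0 - (v - 1.5) ** 2)

def perturb_and_observe(v=0.5, step=0.05, iterations=200):
    """Climb the P-V curve: keep nudging the operating voltage in the
    same direction while power increases; reverse when it drops."""
    p = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~ {v_mpp:.2f} V, power ~ {p_mpp:.2f}")
```

Once converged, the loop oscillates within one perturbation step of the maximum power point, which is why real designs (like the MCU-based one in the paper) balance step size against tracking speed and power overhead.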
Article
Full-text available
Artificial intelligence can be used to realise new types of protective devices and assistance systems, so its importance for occupational safety and health is continuously increasing. However, established risk mitigation measures in software development are only partially suitable for AI systems, which introduce new sources of risk of their own. Risk management for systems using AI must therefore be adapted to the new problems. This work aims to contribute to that adaptation by identifying relevant sources of risk for AI systems. For this purpose, the differences between AI systems, especially those based on modern machine learning methods, and classical software were analysed, and the current research fields of trustworthy AI were evaluated. On this basis, a taxonomy was created that provides an overview of various AI-specific sources of risk. These new sources of risk should be taken into account in the overall risk assessment of a system based on AI technologies, examined for their criticality, and managed accordingly at an early stage to prevent a later system failure.
Article
Full-text available
Globally, there is a substantial unmet need to diagnose various diseases effectively. The complexity of different disease mechanisms and the underlying symptoms of the patient population present massive challenges in developing early diagnosis tools and effective treatments. Machine learning (ML), an area of artificial intelligence (AI), enables researchers, physicians, and patients to solve some of these issues. Based on relevant research, this review explains how ML is being used to help in the early identification of numerous diseases. Initially, a bibliometric analysis of the publications is carried out using data from the Scopus and Web of Science (WOS) databases. The bibliometric study of 1216 publications was undertaken to determine the most prolific authors, nations, organizations, and most cited articles. The review then summarizes the most recent trends and approaches in machine-learning-based disease diagnosis (MLBDD), considering the following factors: algorithm, disease type, data type, application, and evaluation metrics. Finally, we highlight key results and provide insight into future trends and opportunities in the MLBDD area.
Article
Full-text available
Prevalence estimates of autism are essential for informing public policy, raising awareness, and developing research priorities. Using a systematic review, we synthesized estimates of the prevalence of autism worldwide. We examined factors accounting for variability in estimates and critically reviewed evidence relevant for hypotheses about biological or social determinants (viz., biological sex, sociodemographic status, ethnicity/race, and nativity) potentially modifying prevalence estimates of autism. We performed the search in November 2021 within Medline for studies estimating autism prevalence, published since our last systematic review in 2012. Data were extracted by two independent researchers. Since 2012, 99 estimates from 71 studies were published indicating a global autism prevalence that ranges within and across regions, with a median prevalence of 100/10,000 (range: 1.09/10,000 to 436.0/10,000). The median male‐to‐female ratio was 4.2. The median percentage of autism cases with co‐occurring intellectual disability was 33.0%. Estimates varied, likely reflecting complex and dynamic interactions between patterns of community awareness, service capacity, help seeking, and sociodemographic factors. A limitation of this review is that synthesizing methodological features precludes a quality appraisal of studies. Our findings reveal an increase in measured autism prevalence globally, reflecting the combined effects of multiple factors including the increase in community awareness and public health response globally, progress in case identification and definition, and an increase in community capacity. Hypotheses linking factors that increase the likelihood of developing autism with variations in prevalence will require research with large, representative samples and comparable autism diagnostic criteria and case‐finding methods in diverse world regions over time. 
We reviewed studies of the prevalence of autism worldwide, considering the impact of geographic, ethnic, and socioeconomic factors on prevalence estimates. Approximately 1/100 children are diagnosed with autism spectrum disorder around the world. Prevalence estimates increased over time and varied greatly within and across sociodemographic groups. These findings reflect changes in the definition of autism and differences in the methodology and contexts of prevalence studies.
Article
Full-text available
Nucleic acids are emerging as powerful and functional biomaterials due to their molecular recognition ability, programmability, and ease of synthesis and chemical modification. Various types of nucleic acids have been used as gene regulation tools or therapeutic agents for the treatment of human diseases with genetic disorders. Nucleic acids can also be used to develop sensing platforms for detecting ions, small molecules, proteins, and cells. Their performance can be improved through integration with other organic or inorganic nanomaterials. To further enhance their biological properties, various chemically modified nucleic acid analogues can be generated by modifying their phosphodiester backbone, sugar moiety, nucleobase, or combined sites. Alternatively, using nucleic acids as building blocks for self-assembly of highly ordered nanostructures would enhance their biological stability and cellular uptake efficiency. In this review, we will focus on the development and biomedical applications of structural and functional natural nucleic acids, as well as the chemically modified nucleic acid analogues over the past ten years. The recent progress in the development of functional nanomaterials based on self-assembled DNA-based platforms for gene regulation, biosensing, drug delivery, and therapy will also be presented. We will then summarize with a discussion on the advanced development of nucleic acid research, highlight some of the challenges faced and propose suggestions for further improvement.
Article
Full-text available
The degradation of photovoltaic (PV) systems is one of the key factors to address in order to reduce the cost of the electricity produced by increasing the operational lifetime of PV systems. To reduce the degradation, it is imperative to know the degradation and failure phenomena. This review article has been prepared to present an overview of the state-of-the-art knowledge on the reliability of PV modules. Whilst the most common technology today is mono- and multi-crystalline silicon, this article aims to give a generic summary which is relevant for a wider range of photovoltaic technologies, including cadmium telluride, copper indium gallium selenide and emerging low-cost high-efficiency technologies. The review consists of three parts: firstly, a brief contextual summary about reliability metrics and how reliability is measured. Secondly, a summary of the main stress factors and how they influence module degradation. Finally, a detailed review of degradation and failure modes, partitioned by the individual components within a PV module. This section connects the degradation phenomena and failure modes to the module component and its effects on the PV system. Building on this knowledge, strategies to improve the operational lifetime of PV systems, and thus to reduce the electricity cost, can be devised. Through extensive testing and failure analysis, researchers now have a much better overview of stressors and their impact on long-term stability.
Article
Full-text available
The ability to explain why a model produced its results is an important problem, especially in the medical domain. Model explainability is important for building trust by providing insight into the model prediction. However, most existing machine learning methods provide no explainability, which is worrying. For instance, in the task of automatic depression prediction, most machine learning models lead to predictions that are obscure to humans. In this work, we propose explainable Multi-Aspect Depression Detection with a Hierarchical Attention Network (MDHAN) for automatic detection of depressed users on social media, together with explanation of the model prediction. We have considered user posts augmented with additional features from Twitter. Specifically, we encode user posts using two levels of attention mechanisms applied at the tweet level and the word level, calculate the importance of each tweet and each word, and capture semantic sequence features from the user timelines (posts). Our hierarchical attention model is developed in such a way that it can capture patterns that lead to explainable results. Our experiments show that MDHAN outperforms several popular and robust baseline methods, demonstrating the effectiveness of combining deep learning with multi-aspect features. We also show that our model helps improve predictive performance when detecting depression in users who post messages publicly on social media. MDHAN achieves excellent performance and ensures adequate evidence to explain its predictions.
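The explanations in such hierarchical models come from the attention weights themselves: each tweet (or word) receives a normalized importance score, and the user representation is the weighted sum of the encodings. The NumPy toy below illustrates that single pooling step in isolation; all shapes and values are invented, and this is not the MDHAN architecture itself.

```python
# Toy illustration of one attention-pooling layer of the kind used in
# hierarchical attention networks (invented shapes/values, not MDHAN).
import numpy as np

def attention_pool(H, w):
    """H: (n, d) matrix of n tweet encodings; w: (d,) context vector.
    Returns the softmax attention weights and the pooled representation."""
    scores = H @ w                           # relevance score per tweet
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ H              # weighted sum of the rows

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # five "tweets", 8-dim encodings (toy)
w = rng.normal(size=8)        # learned context vector (toy)
alpha, user_vec = attention_pool(H, w)
print(alpha)                  # weights sum to 1; larger = more "important" tweet
```

Because the weights sum to one, they can be read directly as the model's stated importance of each tweet, which is what makes the prediction inspectable by humans.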
Article
Full-text available
The COVID-19 epidemic has had a catastrophic impact on global well-being and public health. More than 27 million confirmed cases have been reported worldwide to date. Due to the growing number of confirmed cases and the challenges posed by variants of COVID-19, timely and accurate classification of healthy and infected patients is essential to control and treat the disease. We aim to develop a deep-learning-based system for the reliable classification and detection of COVID-19 using chest radiography. Firstly, we evaluate the performance of various state-of-the-art convolutional neural networks (CNNs) proposed over recent years for medical image classification. Secondly, we develop and train a CNN from scratch. In both cases, we use a public X-ray dataset for training and validation. For transfer learning, we obtain 100% accuracy for binary classification (i.e., Normal/COVID-19) and 87.50% accuracy for three-class classification (Normal/COVID-19/Pneumonia). With the CNN trained from scratch, we achieve 93.75% accuracy for three-class classification. In the case of transfer learning, the classification accuracy drops with an increased number of classes. The results are demonstrated by comprehensive receiver operating characteristic (ROC) and confusion matrix analysis with 10-fold cross-validation.
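The accuracies and confusion matrices reported above come from comparing predicted class labels with the true ones. A minimal sketch of that bookkeeping is shown here; the labels are invented for illustration and are not the paper's data (the actual models were CNNs, which are not reproduced).

```python
# Minimal accuracy / confusion-matrix computation (invented labels).

def confusion_matrix(y_true, y_pred, classes):
    """Rows = true class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def accuracy(m):
    """Fraction of samples on the diagonal (correct predictions)."""
    total = sum(sum(row) for row in m)
    return sum(m[i][i] for i in range(len(m))) / total

classes = ["Normal", "COVID-19", "Pneumonia"]
y_true = ["Normal", "COVID-19", "Pneumonia", "Normal", "COVID-19", "Normal"]
y_pred = ["Normal", "COVID-19", "Normal",    "Normal", "COVID-19", "Normal"]
m = confusion_matrix(y_true, y_pred, classes)
print(m, accuracy(m))   # the off-diagonal entry shows Pneumonia misread as Normal
```

Off-diagonal entries are exactly what a three-class confusion matrix exposes that a single accuracy number hides, e.g. which disease class is being confused with which.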
Article
Full-text available
This article focuses on the work of social workers in companies, in view of the transformations brought about by the restructuring of production and of work processes through the adoption of organizational and technological innovations. Based on a literature review and on knowledge obtained through professional experience, systematization, and investigation, the inferences indicate that changes in companies, particularly the expansion of information and communication technologies, give a different character to the social worker's work, and also modify the requirements of the professional profile and the working conditions.
Article
Full-text available
The Internet of Things (IoT) is a new paradigm that connects objects to provide seamless communication and contextual information to anyone, anywhere, at any time (AAA). These IoT-enabled automated objects interact with visitors to present a variety of information during museum navigation and exploration. In this article, a smart navigation and information system (SNIS) prototype for museum navigation and exploration is developed, which delivers an interactive and more exciting museum exploration experience based on the visitor's personal presence. The objects inside a museum share information that assists and guides visitors around the different sections and objects of the museum. The system was deployed inside Chakdara Museum and evaluated with 381 users. The users rated the proposed system on parameters such as interest, realism, ease of use, satisfaction, usefulness, and user friendliness. Of these 381 users, 201 marked the system as most interesting, 138 as most realistic, 121 as easy to use, 219 as useful, and 210 as user friendly. These statistics demonstrate the efficiency of SNIS and its usefulness in smart cultural heritage, including smart museums, exhibitions and cultural sites.
Article
Full-text available
In the modern era, terms related to artificial intelligence, machine learning, and deep learning are widely used in domains such as business, healthcare, industry, and the military. In these fields, accurate prediction and analysis of data are crucial, regardless of how large the data are. However, working with big data is difficult due to its rapid growth and massive expansion in public life, which would require tremendous human effort to process and to extract worthwhile information from. This is where artificial intelligence comes in: by analyzing big data with scientific techniques, especially machine learning, it can identify decision-making patterns and reduce human intervention. In this regard, the role of artificial intelligence, machine learning and deep learning is growing rapidly. In this article, the authors highlight these sciences by discussing how to develop and apply them in many decision-making domains. In addition, the influence of artificial intelligence in healthcare and the gains this science provides in the face of the COVID-19 pandemic are highlighted. This article concludes that these sciences have a significant impact, especially in healthcare, as well as the ability to grow and improve their methodology in decision-making. Additionally, artificial intelligence is a vital science, especially in the face of COVID-19.
Article
Full-text available
The increasing adoption of photovoltaic (PV) technology highlights the need for efficient and large-scale deployment-ready inspection solutions. Within a thermal infrared imagery-based inspection framework, we develop a robust and versatile deep learning model for the classification of defect-related patterns on PV modules. The model is developed from big UAV imagery data and designed as a layer-3 building block that can be implemented on top of any two-stage PV inspection workflow comprising: (1) an aerial Structure from Motion – MultiView Stereo (SfM-MVS) photogrammetric acquisition/processing stage, at which a georeferenced thermal orthomosaic of an inspected PV site is generated, making it possible to precisely locate defective modules in the field; then (2) an instance segmentation stage that extracts the images of modules. Orthomosaics from 28 different PV sites were produced, comprising 93 220 modules with various types, layouts and thermal patterns. Modules were extracted through a developed semi-automatic workflow, then labeled into six classes. Data augmentation and balancing techniques were used to prepare a highly representative and balanced deep-learning-ready dataset. The dataset was used to train, cross-validate and test the developed classifier, as well as for benchmarking against the VGG16 architecture. The developed model achieves state-of-the-art performance and versatility on the addressed classification problem, with a mean F1-score of 94.52%. The proposed three-layer solution resolves the issues of conventional imagery-based workflows. It ensures highly accurate and versatile defect detection, and can be efficiently deployed in real-world large-scale applications.
Article
Full-text available
Selection of the mining method for underground mineral extraction is a crucial task for mining engineers. It is a multi-criteria decision-making problem due to the many criteria to be considered in the selection process, and there are many studies on the selection of underground mining methods using Multi-Criteria Decision Making (MCDM) techniques. Extracting minerals from underground involves many geological characteristics, also called input parameters. The geological characteristics of any mineral deposit vary from one location to another; thus, no single extraction method is suitable for all deposit characteristics, and many extraction methods are available for different characteristics of the ore deposit. Until now, only MCDM approaches, hybrid MCDM approaches, or MCDM approaches combined with fuzzy logic have been used for selecting a mining method for underground metal mines. In this study, a pure fuzzy logic approach is used to select a mining method for different deposit characteristics. The proposed model considers five deposit characteristics as input parameters and seven underground mining methods as output parameters. The developed fuzzy-logic-based approach is also validated against the deposit characteristics of two Indian mines. The model produced the suitable mining method for mineral extraction at the specified Indian mines, and the same mining methods are in use by the mine authorities.
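The fuzzy-logic idea can be sketched in miniature: a crisp deposit characteristic is mapped to membership grades in fuzzy sets, and each candidate mining method is scored by aggregating the grades it favours. The membership ranges, method names, and rule weights below are invented for illustration and are not the study's actual five-input, seven-output model.

```python
# Minimal fuzzy-logic scoring sketch for mining-method selection
# (all sets, weights, and method names are invented, not the study's).

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_thickness(t):
    """Fuzzify one input parameter: ore-body thickness in metres."""
    return {
        "thin":     tri(t, 0, 5, 15),
        "moderate": tri(t, 5, 15, 30),
        "thick":    tri(t, 15, 30, 60),
    }

# Each method's affinity for the fuzzy terms (invented rule weights).
method_rules = {
    "cut-and-fill": {"thin": 0.9, "moderate": 0.6, "thick": 0.2},
    "block-caving": {"thin": 0.1, "moderate": 0.4, "thick": 0.9},
}

def score_methods(thickness):
    """Aggregate membership grades into one score per candidate method."""
    grades = fuzzify_thickness(thickness)
    return {m: sum(w * grades[term] for term, w in rules.items())
            for m, rules in method_rules.items()}

print(score_methods(8.0))   # a thin deposit favours cut-and-fill here
```

A full model of this kind fuzzifies all five input parameters, combines the rule activations, and defuzzifies to rank the seven candidate methods; the sketch shows only the core membership-and-aggregation step.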
Article
Full-text available
Problem/condition: Autism spectrum disorder (ASD). Period covered: 2018. Description of system: The Autism and Developmental Disabilities Monitoring (ADDM) Network conducts active surveillance of ASD. This report focuses on the prevalence and characteristics of ASD among children aged 8 years in 2018 whose parents or guardians lived in 11 ADDM Network sites in the United States (Arizona, Arkansas, California, Georgia, Maryland, Minnesota, Missouri, New Jersey, Tennessee, Utah, and Wisconsin). To ascertain ASD among children aged 8 years, ADDM Network staff review and abstract developmental evaluations and records from community medical and educational service providers. In 2018, children met the case definition if their records documented 1) an ASD diagnostic statement in an evaluation (diagnosis), 2) a special education classification of ASD (eligibility), or 3) an ASD International Classification of Diseases (ICD) code. Results: For 2018, across all 11 ADDM sites, ASD prevalence per 1,000 children aged 8 years ranged from 16.5 in Missouri to 38.9 in California. The overall ASD prevalence was 23.0 per 1,000 (one in 44) children aged 8 years, and ASD was 4.2 times as prevalent among boys as among girls. Overall ASD prevalence was similar across racial and ethnic groups, except American Indian/Alaska Native children had higher ASD prevalence than non-Hispanic White (White) children (29.0 versus 21.2 per 1,000 children aged 8 years). At multiple sites, Hispanic children had lower ASD prevalence than White children (Arizona, Arkansas, Georgia, and Utah), and non-Hispanic Black (Black) children (Georgia and Minnesota). The associations between ASD prevalence and neighborhood-level median household income varied by site. Among the 5,058 children who met the ASD case definition, 75.8% had a diagnostic statement of ASD in an evaluation, 18.8% had an ASD special education classification or eligibility and no ASD diagnostic statement, and 5.4% had an ASD ICD code only. 
ASD prevalence per 1,000 children aged 8 years that was based exclusively on documented ASD diagnostic statements was 17.4 overall (range: 11.2 in Maryland to 29.9 in California). The median age of earliest known ASD diagnosis ranged from 36 months in California to 63 months in Minnesota. Among the 3,007 children with ASD and data on cognitive ability, 35.2% were classified as having an intelligence quotient (IQ) score ≤70. The percentages of children with ASD with IQ scores ≤70 were 49.8%, 33.1%, and 29.7% among Black, Hispanic, and White children, respectively. Overall, children with ASD and IQ scores ≤70 had earlier median ages of ASD diagnosis than children with ASD and IQ scores >70 (44 versus 53 months). Interpretation: In 2018, one in 44 children aged 8 years was estimated to have ASD, and prevalence and median age of identification varied widely across sites. Whereas overall ASD prevalence was similar by race and ethnicity, at certain sites Hispanic children were less likely to be identified as having ASD than White or Black children. The higher proportion of Black children compared with White and Hispanic children classified as having intellectual disability was consistent with previous findings. Public health action: The variability in ASD prevalence and community ASD identification practices among children with different racial, ethnic, and geographical characteristics highlights the importance of research into the causes of that variability and strategies to provide equitable access to developmental evaluations and services. These findings also underscore the need for enhanced infrastructure for diagnostic, treatment, and support services to meet the needs of all children.
Article
Full-text available
Nearly two years since the start of the SARS-CoV-2 pandemic, which has caused over 5 million deaths, the world continues to be on high COVID-19 alert. The World Health Organization (WHO), in collaboration with national authorities, public health institutions and scientists, has been closely monitoring and assessing the evolution of SARS-CoV-2 since January 2020 (WHO 2021a; WHO 2021b). Specific SARS-CoV-2 variants were characterised as Variants of Interest (VOIs) and Variants of Concern (VOCs) to prioritise global monitoring and research and to inform the ongoing global response to the COVID-19 pandemic. The WHO and its international sequencing networks continuously monitor SARS-CoV-2 mutations and inform countries about any changes that may be needed to respond to a variant and, where feasible, prevent its spread. Multiple variants of the virus have emerged and become dominant in many countries since January 2021, with the Alpha, Beta, Gamma and Delta variants being the most prominent to date.
Article
Full-text available
Purpose – The fashion sector is complex. It involves multiple actors with distinct and potentially conflicting interests, forming a value ecosystem. Thus, knowing the interested parties belonging to the fashion sector may be a means to promote technological innovation, such as products with wearables. The purpose of this paper is to identify the participants of the fashion ecosystem from the perspective of wearable technologies and to develop a conceptual model. Design/methodology/approach – The present work aims to identify the participants (actors) and develop a conceptual model of the fashion ecosystem from the perspective of wearable technologies. The systematic literature review is the recommended method to qualitatively analyze documents and identify the interested parties (actors) in the fashion sector in order to design the proposed conceptual model. Findings – From the studies, the conceptual model of the fashion value ecosystem was designed, with the wearable product considered its core business. The studies identified addressed fashion value ecosystems in general but not ecosystems specific to wearable products and their relations with other complementary industries. Research limitations/implications – The model was designed using secondary data only. Its validation through interviews with experts would be relevant. Originality/value – In terms of relevance, the systematic literature review found no studies that included wearable technologies in the fashion ecosystems discussed and their relations with other industries. Wearables are an emerging subject that needs further research aimed at inserting this technology into productive sectors.
Chapter
Our study focuses on the areas of social and economic sustainability in machine learning. The risk of work disability can be predicted with machine learning using various data sources. Machine learning techniques appear to be a potential tool to support expert work and decision-making. We present five stakeholders of work disability prediction: the employee, the employer, occupational health care, the pension fund, and society. All these stakeholders should be taken into account when developing AI to support disability risk prediction. We compare two methods with different data sources: occupational health care data and pension decision register data. There is also another stakeholder, the data scientist, who develops the machine learning algorithms. We present five important aspects of the data processing and algorithm design phase: non-maleficence, accountability and responsibility, transparency and explainability, justice and fairness, and respect for various human rights. These aspects need to be considered when collecting data, storing it in databases, and sharing it with others.
Conference Paper
Patients with physical disabilities, such as the loss of an arm or hand or paralysis, have difficulty moving from one place to another. They need a person or a device that can assist their mobility. One of the tools often used by physically disabled patients to help in their activities is an electric wheelchair. The main purpose of this article is to design a wheelchair for physically disabled patients that can be controlled by voice commands using the convolutional neural network (CNN) method. A CNN embedded on a Raspberry Pi 3 is employed to identify voice commands. The recorded sound data are converted to spectrogram images before being fed to the CNN. This method has proven effective in voice command recognition, with an accuracy of more than 90%. There are five voice commands for wheelchair navigation: forward, backward, left, right and stop. Preliminary experimental results indicate that the designed electric wheelchair can move using speech commands.
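The pipeline the abstract describes (record audio → spectrogram image → CNN → one of five commands) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the spectrogram step uses a plain short-time FFT, a nearest-template classifier stands in for the CNN, and all signals and command-to-tone mappings are synthetic assumptions.

```python
import numpy as np

def spectrogram(signal, frame=128, hop=64):
    """Magnitude spectrogram via short-time FFT (assumption: Hann window)."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

COMMANDS = ["forward", "backward", "left", "right", "stop"]

def classify(spec, templates):
    """Toy stand-in for the CNN: nearest template by Frobenius distance."""
    dists = {cmd: np.linalg.norm(spec - t) for cmd, t in templates.items()}
    return min(dists, key=dists.get)

# Synthetic demo: one distinct tone per command (hypothetical mapping).
rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0
templates = {cmd: spectrogram(np.sin(2 * np.pi * f * t))
             for cmd, f in zip(COMMANDS, [300, 600, 900, 1200, 1500])}
noisy = np.sin(2 * np.pi * 900 * t) + 0.1 * rng.standard_normal(t.size)
print(classify(spectrogram(noisy), templates))  # prints "left" (the 900 Hz tone)
```

In the actual system the templates would be replaced by a trained CNN's learned filters, and the audio would come from a microphone attached to the Raspberry Pi.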
Article
Managing traffic and maintaining order is among the most demanding tasks of the contemporary day and age. Emergency vehicles such as ambulances face many hardships when they get stuck in traffic, and valuable human lives are lost due to poor traffic management. In this paper, a model is proposed for calculating traffic heaviness on roads using image-processing techniques, together with an ambulance detection system and a traffic-signal control model that uses information extracted from images of vehicles captured by a video camera. Traffic intensity depends on the total number of vehicles on the road. The proposed model counts the vehicles in each lane and checks for the presence of emergency vehicles; whenever an emergency vehicle is detected, that particular lane is allowed to move and its signal is turned green.
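The control logic described above separates cleanly from the image processing: once per-lane vehicle counts and an ambulance flag have been extracted from camera frames, the signal controller only has to pick the green lane. A minimal sketch of that priority rule, with hand-coded counts standing in for the image-processing output:

```python
def choose_green_lane(lanes):
    """Pick the lane to turn green: any lane with a detected ambulance
    wins outright; otherwise the lane with the most vehicles.
    (Hypothetical rule illustrating the paper's priority behaviour.)"""
    emergency = [name for name, info in lanes.items() if info["ambulance"]]
    if emergency:
        return emergency[0]
    return max(lanes, key=lambda name: lanes[name]["vehicles"])

# Counts and flags would come from the image-processing stage; these are made up.
lanes = {
    "north": {"vehicles": 12, "ambulance": False},
    "east":  {"vehicles": 30, "ambulance": False},
    "south": {"vehicles": 7,  "ambulance": True},
    "west":  {"vehicles": 18, "ambulance": False},
}
print(choose_green_lane(lanes))  # prints "south": the ambulance overrides the counts
```

Without the ambulance flag, the rule degrades to plain density-based control (the busiest lane, "east", would go green).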
Article
We develop a deep learning model based on Long Short-term Memory (LSTM) to predict blood pressure based on a unique data set collected from physical examination centers capturing comprehensive multi-year physical examination and lab results. In the Multi-attention Collaborative Deep Learning model (MAC-LSTM) we developed for this type of data, we incorporate three types of attention to generate more explainable and accurate results. In addition, we leverage information from similar users to enhance the predictive power of the model due to the challenges with short examination history. Our model significantly reduces predictive errors compared to several state-of-the-art baseline models. Experimental results not only demonstrate our model’s superiority but also provide us with new insights about factors influencing blood pressure. Our data is collected in a natural setting instead of a setting designed specifically to study blood pressure, and the physical examination items used to predict blood pressure are common items included in regular physical examinations for all the users. Therefore, our blood pressure prediction results can be easily used in an alert system for patients and doctors to plan prevention or intervention. The same approach can be used to predict other health-related indexes such as BMI.
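At the core of the MAC-LSTM is the standard LSTM recurrence over a patient's sequence of yearly examination records. The sketch below shows only that recurrence, one gate update at a time, with random toy weights; the paper's attention mechanisms, similar-user information, and the final blood-pressure read-out layer are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step (standard gate equations; a sketch, not the paper's
    MAC-LSTM with attention). Gate order: input, forget, candidate, output."""
    z = W @ x + U @ h + b                       # stacked pre-activations
    n = h.size
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])  # input / forget gates
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c_new = f * c + i * g                       # updated cell state
    h_new = o * np.tanh(c_new)                  # updated hidden state
    return h_new, c_new

# Toy sequence: 5 yearly exams, 4 features each (all values hypothetical).
rng = np.random.default_rng(1)
n_in, n_hid = 4, 3
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):
    h, c = lstm_cell(x, h, c, W, U, b)
# A linear read-out of the final hidden state h would give the BP estimate.
print(h.shape)  # (3,)
```

The short-history challenge the abstract mentions corresponds to very few loop iterations here, which is why the model borrows information from similar users.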
Article
Learning the subtype of dyslexia may help shorten the rehabilitation process and focus it on the relevant special education or diet for children with dyslexia. For this purpose, resting-state eyes-open 2-min QEEG measurement data were collected from 112 children with dyslexia (84 male, 28 female) between 7 and 11 years old, for 96 sessions per subject on average. The z-scores are calculated for each band power and each channel, and outliers are eliminated afterward. Using the k-means clustering method, three different clusters are identified. Cluster 1 (19% of the cases) has positive z-scores for theta, alpha, beta-1, beta-2, and gamma-band powers in all channels. Cluster 2 (76% of the cases) has negative z-scores for theta, alpha, beta-1, beta-2, and gamma-band powers in all channels. Cluster 3 (5% of the cases) has positive z-scores for theta, alpha, beta-1, beta-2, and gamma-band powers at the AF3, F3, FC5, and T7 channels and mostly negative z-scores at other channels. In Cluster 3, there is a temporal disruption, which is a typical description of dyslexia. In Cluster 1, there is general brain inflammation, as both slow and fast waves are detected in the same channels. In Cluster 2, there is a brain maturation delay and mild inflammation. After Auto Train Brain training, most cases come to resemble Cluster 2, which may mean that inflammation is reduced and that the brain maturation delay, possibly itself a result of inflammation, comes to the surface. Moreover, Cluster 2 center values at the posterior parts of the brain shift toward the mean values at those channels after 60 sessions. This means that Auto Train Brain training improves the posterior parts of the brain in children with dyslexia, which were the most relevant regions to be strengthened for dyslexia.
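The clustering step above (per-channel band-power z-scores grouped with k-means) can be sketched with a plain Lloyd's-algorithm implementation. The data below are synthetic stand-ins for the three profiles the study reports: all-positive, all-negative, and mixed z-score vectors; nothing here reproduces the actual QEEG dataset.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm): assign each point to the
    nearest center, then recompute centers; empty clusters keep
    their previous center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Hypothetical z-score vectors (6 "channels") mimicking the three profiles:
rng = np.random.default_rng(2)
pos = rng.normal(1.0, 0.2, (20, 6))                      # Cluster 1 style
neg = rng.normal(-1.0, 0.2, (60, 6))                     # Cluster 2 style
mix = np.hstack([rng.normal(1.0, 0.2, (10, 3)),
                 rng.normal(-1.0, 0.2, (10, 3))])        # Cluster 3 style
X = np.vstack([pos, neg, mix])
labels, centers = kmeans(X, k=3)
print(centers.shape)  # (3, 6): one center per cluster, one value per channel
```

In the study, each row would instead hold one child's z-scores across all band powers and channels, and the sign pattern of each recovered center is what characterizes the subtype.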
Article
The massive growth of PV farms, both in number and size, has motivated new approaches in inspection system design and monitoring. This paper presents a review of imaging technologies and methods for the analysis and characterization of faults in photovoltaic (PV) modules. The paper provides a brief overview of PV system (PVS) reliability studies and monitoring approaches in which fault-related PVS power loss is evaluated. Research on infrared thermography (IRT) and luminescence imaging technologies is thoroughly reviewed, with a focus on ease of implementation, efficiency and unmanned aerial system (UAS) compatibility. Furthermore, the review provides novel insight into state-of-the-art electroluminescence (EL), photoluminescence (PL) and ultraviolet fluorescence (UVF) imaging, and how to interpret these images. The development of imaging techniques will continue to be an attractive domain of research that can be combined with aerial scanning for cost-effective remote inspection that enables reliable power production in large-scale PV plants.
Article
Data freshness ensures access to recent data, which can help achieve high business value and provide effective customer service. Group data freshness is a challenging aspect in a distributed outsourced environment, as stale data among different entities may mislead the business goal of the system. Generally, a three-party data outsourcing model is found in practice: users, a data owner, and a cloud service provider. The users are required to register with the data owner to access data files directly from the cloud service provider. A scheme verifying the freshness of the whole outsourced data for its readers is called a group data freshness auditing scheme (GDFAS). Existing GDFASs focus on a probabilistic guarantee and require high computational cost at the data owner. In this paper, an efficient group data freshness auditing scheme is proposed, in which the data owner performs auditing in a distributed system with the help of the system users. As the data owner is not directly involved in its users' data access, it needs mechanisms such as auditing data through an additional third party to ensure the data is fresh. However, the third-party data storage service provider may not be fully trusted by a data owner. In such a context, auditing data with respect to its freshness property without involving an additional third-party storage service is challenging, but more effective in terms of the system's performance and efficacy. The proposed GDFAS provides real-time data freshness verification using Merkle hash trees. In comparison to the existing scheme, it incurs less computational cost at the data owner, without involving any third party, and less communication cost between the data owner and the service provider. The proposed GDFAS is implemented on the AWS cloud, and the auditing cost at the data owner is experimentally evaluated. The proposed GDFAS is analyzed and compared with the relevant existing scheme and is found to outperform other schemes with respect to security and efficiency.
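The Merkle-hash-tree primitive behind the scheme is simple to illustrate: the data owner keeps only the tree root, and any stale block returned by the cloud changes the recomputed root. The sketch below shows just that root comparison; the paper's scheme layers freshness counters and user-assisted distributed auditing on top of it, which this toy omits.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root of a Merkle tree over data-block hashes (basic construction;
    odd levels duplicate the last node, a common convention)."""
    level = [sha(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical versioned blocks; the owner stores only the root locally.
blocks = [b"file-1 v3", b"file-2 v7", b"file-3 v1"]
owner_root = merkle_root(blocks)            # kept by the data owner
server_root = merkle_root(blocks)           # recomputed over the cloud copy
print(owner_root == server_root)            # prints True: data is fresh

stale = [b"file-1 v2", b"file-2 v7", b"file-3 v1"]  # server serves an old version
print(merkle_root(stale) == owner_root)     # prints False: staleness detected
```

Because only the constant-size root travels between owner and server, the communication cost of a freshness check stays small regardless of how many blocks are outsourced.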
Article
We present a literature review of Applied Imagery Pattern Recognition (AIPR) for the inspection of photovoltaic (PV) modules under the main spectra used: (1) true-color RGB, (2) long-wave infrared (LWIR), and (3) electroluminescence-based short-wave infrared (SWIR). Three sequentially linked building blocks underpin this work. The first overviews reference guidelines for image acquisition and the main detectable defect patterns under each spectrum. It also provides key insights regarding the implementation of Unmanned Aerial Vehicles (UAVs) to acquire imagery, especially from a photogrammetric perspective. The second block presents various image pre-processing steps used to prepare inspection-ready datasets. These comprise radiometric correction, segmentation and edge extraction, geometric correction and cell clipping. The third surveys defect detection and classification through digital image processing and machine/deep learning techniques. We elaborate an in-depth topic discussion that, in parallel, highlights the main related challenges, provides core guidelines for AIPR-based PV inspection workflows, and suggests key research avenues for future studies. This review synthesizes the recent advances of the body of knowledge. It also constitutes an insightful reference for professionals and academics within the PV operations and maintenance field who are considering the possibilities that digital imagery can offer.
Article
Hydrogen produced from renewable sources (green hydrogen) is recognized as one of the main trends in future decarbonized energy systems. Surplus renewable energy can be effectively stored as green hydrogen, thus reducing dependency on fossil fuels. As it is entirely produced from renewable sources, green hydrogen generation is strongly affected by the intermittent behaviour of renewable generators. In this context, proper uncertainty modelling becomes essential for adequate management of this energy carrier. This paper deals with this issue; more precisely, a novel model for robust optimal scheduling of isolated microgrids is developed. The proposal encompasses a green hydrogen-based storage system and various demand-response programs. Logical rules are incorporated into the conventional optimal scheduling tool to model green hydrogen production, while the uncertain character of weather and demand parameters is added via an interval-based formulation and an iterative solution procedure. The developed tool allows the scheduling plan to be performed under a pessimistic or an optimistic point of view, depending on the influence assumed by uncertainties in the objective function. A case study serves to validate the model and highlight the role of green hydrogen-based storage facilities in reducing fossil fuel consumption and further exploiting renewable sources.
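The interval-based idea can be illustrated in miniature: the uncertain renewable output is given as a bound pair, and the pessimistic view dispatches against the lower bound while the optimistic view uses the upper. The dispatch rule below is a hypothetical toy, not the paper's optimization model, which co-optimizes storage and demand response over a full horizon.

```python
def schedule_storage(demand, pv_interval, view="pessimistic"):
    """Single-period interval dispatch (hypothetical rule): under the
    pessimistic view take the lower PV bound, under the optimistic view
    the upper; any shortfall is met by discharging the hydrogen store,
    any surplus charges it (units: kW, losses ignored)."""
    lo, hi = pv_interval
    pv = lo if view == "pessimistic" else hi
    net = pv - demand
    return {"h2_discharge": max(-net, 0.0), "h2_charge": max(net, 0.0)}

# Demand of 10 kW against a forecast PV interval of [6, 14] kW:
print(schedule_storage(10.0, (6.0, 14.0), "pessimistic"))
# {'h2_discharge': 4.0, 'h2_charge': 0.0}  -> shortfall covered by hydrogen
print(schedule_storage(10.0, (6.0, 14.0), "optimistic"))
# {'h2_discharge': 0.0, 'h2_charge': 4.0}  -> surplus stored as hydrogen
```

The gap between the two views is exactly the robustness margin the iterative procedure in the paper trades off against cost.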
Chapter
Green Computing refers to the environment-friendly use of computing and allied tools. Various techniques have been devised from different perspectives to achieve the goals of Green Computing. This chapter captures the essence of, and gives an insight into, some of the important Green Computing techniques, that exist today, and explores their impact on the present and future usage of computing resources. First, we introduce the concept of Green Computing and its significance. Then we explore a number of Green Computing Techniques, section-wise, under four different categories. The first category is about Green Design Techniques wherein we provide a glimpse of how some of the design techniques, if employed, can go a long way in protecting the environment. It is followed by a section on Green Manufacturing Techniques. In this section, we present the ways of manufacturing that are aimed at minimizing the ecological footprint. Then we cover Green Utilization Techniques that govern the usage of computers and associated resources in an eco-friendly manner. Finally, we discuss Green Disposal Techniques that involve the issues of reuse, recycling, and disposing of computers.
Chapter
This research proposes the development of an online learning environment to enhance computational thinking. The research design is model research, comprising model validation. The results were collected as both quantitative and qualitative data. The data were analyzed and summarized by synthesizing the protocol, interpreting summaries, and computing descriptive statistics. The outcomes of the study showed that 1) the model has validity in its learning contents, media, and design; the model holds all six components, whose quality is consistent with the synthesis of the theoretical and conceptual frameworks for designing and developing online learning environment models; and 2) the validity of the model is confirmed by the impact of the learning paradigm on students. Their computational thinking shows that the students were able to create knowledge representations and understand programming. The students' opinions of the online learning showed that the learning contents, media, and design are suitable and support the enhancement of computational thinking.