Article

Reactive to Proactive: Employing AI and ML in Automotive Brakes and Parking Systems to Enhance Road Safety

Abstract

Our comprehensive Machine Learning (ML) and Artificial Intelligence (AI)-based patent analysis of braking, parking, and related technology areas reveals the most significant opportunity clusters in these domains. Using an innovation competition model, we apply predictive analytics to forecast how the identified opportunity clusters will develop. This paper is organized as follows: in the first two sections, we systematically show how mature ABS, ESC, and parking technologies have settled into the global automotive market, using extensive patent data from 1966 to September 2019. In the third section, after describing the structure of our Telecommunications Engineering Centre (TEC), we detail our AI and ML analysis, focusing on the domains determined by our patent categorization strategy. The final section gives a high-level concluding discussion, summarizing the present approach and its significant contributions.

Safety has always been a priority in designing and developing automotive systems. For instance, the function of an anti-lock brake system (ABS) is to prevent the wheels from locking up, enabling the driver to maintain steering control. Since its invention, ABS has become a standard safety feature in all vehicles; as of 2019, 65% of passenger car production globally has been equipped with some form of ABS. Most ABS development has been accomplished by adding mechanical, electrical, and hydraulic components to vehicles. Although hardware development, such as sensor technology, is essential, many breakthroughs have come from inventing control techniques for ABS, such as the FI (complete integral) control introduced in 1978. This success has generated a disproportionate amount of Intellectual Property (IP) around control algorithm innovation aimed at bringing more ABS products to market. Over time, competition has spread to other domains, such as control algorithms, sensor technology, and system integration, and customers look to algorithm innovation for better performance, lower cost, and more compact solutions.
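The paper's clustering pipeline itself is not reproduced here, but a common way to surface "opportunity clusters" from patent text is to vectorize abstracts and group them. The sketch below is a minimal illustration of that idea; the `patents` list, the vectorizer, and the cluster count are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch: clustering patent abstracts into candidate "opportunity
# clusters" with TF-IDF + k-means. Illustrative only; not the paper's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

patents = [  # hypothetical abstracts standing in for real patent data
    "wheel slip control algorithm for anti-lock braking",
    "ultrasonic sensor array for automated parallel parking",
    "hydraulic pressure modulator for electronic stability control",
    "camera-based parking spot detection using neural networks",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(patents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(patents, labels):
    print(label, text[:50])
```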

... The potential value is particularly high in situations in which large volumes of structured and unstructured data can be assembled, analyzed for patterns, and used to guide decision-making. In addition to these efficiencies, the use of neural networks has the potential to guide treatments and uncover early interventions, thus improving the quality of care [17]. ...
Article
Full-text available
The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, personalized treatment recommendations, and risk stratification. This paper explores the application of neural networks in enhancing health outcomes within the context of Medicare Advantage and Supplement plans. We review how deep learning models can be leveraged to predict patient risk, optimize resource allocation, and identify at-risk populations for preventive interventions. Additionally, we discuss the potential for neural networks to improve claims processing, reduce fraud, and streamline administrative burdens. By integrating various data sources, including medical records, claims data, and demographic information, neural networks enable more accurate and efficient decision-making processes. Ultimately, this approach can lead to better patient care, reduced healthcare costs, and improved satisfaction for beneficiaries of these programs. The paper concludes by highlighting the current limitations, ethical considerations, and future directions for AI adoption in the Medicare Advantage and Supplement sectors.
... While some researchers believe that AI is neutral and does not exhibit any kind of prejudice, many authors have quantified and analyzed bias at the input data, pattern recognition model, and output data levels, suggesting that the subject should not be trivialized [32]. ...
Article
Full-text available
Future technologies will enable nations to successfully address persistent complex challenges, especially those posed by burning issues such as the global medical emergency, aging populations in developed nations, environmental harm, and financial collapses. Among the recent game-changing trends is Proximal Systems, the fast-growing adoption of public cloud-based and big data solutions for structural productivity issues in various government, economic, and defense sectors. Besides citing numerous reasons why cloud computing allows enterprises to be more productive and competitive than in-house facilities, this paper highlights the increasing employment of big data and other advanced analytics that are transforming companies into AI-driven learning enterprises. It also discusses the ongoing migration of the Three Letter Agencies (TLAs) to cloud-based services. The enabling Big Data Architectures (BDA) include the shift from monolithic data silos to service-oriented architectures; analytics processing at the data source, either through local processing or by preparing and delivering data for centralized low-cost, high-performance cloud or on-premise processing; and edge computing architectures. All these opportunities call for proximal systems of data-driven decision-making with automated decision cycles for integrating warfighting, financial, operational, and business analytics. Finally, this article briefly addresses the outcomes of embedding these perspectives into an Agile Federal Design Thinking Governance (FDG) framework for accelerating the adoption of enterprise-wide cloud and advanced analytics. Such a framework enables ready access, open FAIR principles, transparency, and collaboration during all phases of Federal investment missions, procedures, and the acquisition lifecycle.
... Companies involved in designing and developing automotive electronic control units (ECUs) are deploying the latest state-of-the-art communication technologies, such as Ethernet and IoT cloud integration, in their vehicle designs while continuing to invest and innovate in the services their customers use. Automotive tier-1s are rapidly adopting the required automotive standards and cross-domain automotive industry expertise in designing and deploying automotive ECUs [2,23]. ...
Article
Full-text available
Ethernet has emerged as a crucial component in modern in-vehicle networks, serving as a bridge to meet the demand for over-the-air updates and to ensure efficient automotive software updates. Leveraging its unique characteristics, introducing unified diagnostic services over EtherNet/IP presents a paradigm shift, offering significant advantages over existing solutions based on other time-synchronous bridging protocols. This paper unveils an innovative EtherNet/IP-capable Unified Diagnostic Service architecture designed to efficiently address over-the-air update requirements. The diagnostic process is intelligently offloaded during runtime, ensuring non-interference with runtime automotive services. This is achieved through well-defined TCP/IP and FTP state machines and a task-response design built on a finite state machine with bounded buffer complexity, thereby safeguarding the control CAN network. As the threat landscape evolves, the automotive network is increasingly vulnerable to cyber-attacks. In response, our architecture incorporates a robust set of preventive and reactive measures: secure boot, authenticated over-the-air updates, and a multi-domain architecture form a formidable first line of defense, enhancing the security of the automotive network. These measures build trust between the CAN FD-enabled EtherNet/IP end devices, ensuring the integrity of the system. The Unified Diagnostic Services (UDS) protocol is standardized as ISO 14229, encompassing the entire range of automotive diagnostic tools. Accordingly, this article concentrates on designing and implementing an EtherNet/IP-capable diagnostic service tool that efficiently meets the OTA need using industry-standard protocols such as HTTP at the end server. This ensures that our implemented system is compatible with existing aftermarket tools, that the functionalities within our tool are unchanged, and that it can be used for offline activity with COTS hardware [5]. Our implemented system encompasses an industry-grade EtherNet/IP-enabled gigabit physical layer, conformance to the joint test specifications of the ODVA EtherNet/IP specifications and Ethernet-based diagnostic services, and a UDS-based diagnostic software tool. In addition, we discuss the importance of state machines, states, and bounded complexity [1], and of formulating attacks so that our system remains robust to malicious activity in a compromised environment (the automotive network), from the viewpoint of proper runtime operation, which is a critical barrier for all initial-state offloading. Subsequently, testing the implemented tool under various conditions, such as disabling the interface and downgrading it to 100 Mbps EMAC states, validates the implemented system's robustness to inaccurate connections and accidental states, its resilience to evolving malicious activity, and its interoperability between OEM and aftermarket tools.
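The paper's offloading relies on well-defined state machines with bounded buffer complexity. The sketch below shows the general shape of such a bounded-buffer transfer state machine; the states, transitions, and buffer size are illustrative assumptions, not the authors' actual design.

```python
# Minimal sketch of a bounded-buffer transfer state machine of the kind used
# to keep diagnostics from interfering with runtime services. Illustrative only.
from enum import Enum, auto
from collections import deque

class State(Enum):
    IDLE = auto()
    TRANSFER = auto()
    VERIFY = auto()
    DONE = auto()

class BoundedTransferFsm:
    def __init__(self, max_chunks: int = 8):
        self.state = State.IDLE
        self.buffer = deque(maxlen=max_chunks)  # bounded: chunks never pile up

    def on_chunk(self, chunk: bytes) -> None:
        if self.state is State.IDLE:
            self.state = State.TRANSFER
        if self.state is State.TRANSFER:
            self.buffer.append(chunk)

    def on_transfer_complete(self) -> None:
        if self.state is State.TRANSFER:
            self.state = State.VERIFY

    def on_verified(self, ok: bool) -> None:
        if self.state is State.VERIFY:
            self.state = State.DONE if ok else State.IDLE

fsm = BoundedTransferFsm()
fsm.on_chunk(b"\x01\x02")
fsm.on_transfer_complete()
fsm.on_verified(True)
print(fsm.state)  # State.DONE
```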
Article
Full-text available
Today's organizations are grappling with pressing issues related to efficiency. The public sector is innovating, adopting strategies and practices used by the private sector in its relentless quest for a delicate balance between costs and innovation. The private sector, on the other hand, has been challenged to generate profitable growth by questioning the conventional operating income equation of increased revenues less increased expenses, exploring how revenue lines can be enhanced without linearly increasing the corresponding expenses. This exploration is being escalated by advances in digital technologies. I discuss these issues in the context of service provision, drawing insights from platform-based business models. I propose the concept of "service integration" to characterize the unique capabilities these business models create and apply to the service provider's advantage: enhanced selection and curation of services on offer; seamless on-demand aggregation and integration of complementary services; omnipresent use of service-related data to provide personalized, predictable, and pre-emptive service experiences; establishment of trust by providing and demanding quality commitments from servitized businesses; and increased scope for accelerators which boost the returns from services, especially for businesses that are "servitized" using platform-based business models. By asking the question, "How well can a business integrate the services that consumers or companies need?", I establish a framework to interpret a business's ability to maximize the consumer value that service integration can encash. I conclude with a look at the broader implications and challenges of service integration.
Article
Full-text available
Governments around the world are experimenting with Big Data and predictive analytics. They deploy various software applications, like predictive policing, fraud detection, and capacity demand prediction, while at the same time developing and investing in broader data analytical infrastructure and analytical skill sets. Implementing Big Data and predictive analytics can be a challenging endeavor, however, as these analytics often rely on open-source algorithms that are unsupervised and black-boxed. Equally challenging is how government institutions endeavor to use a waterfall approach from exploratory to predictive modeling, yet predictions are often only perfunctory and unempirical [1]. To shed light on how government finance organizations conceptualize and prepare for such analytical disruption pertaining to predictive analytics specifically, data processes, stakes, and concerns were articulated based on in-depth interviews with 23 analysts who work with predictive analytics in a regional government agency. Descriptive coding of the interviews revealed that changes to data processes are being prepared or enacted, but many foreseen stakes and concerns about changes to both data processes and knowledge processes remain unresolved. New agendas to address such issues and better understand the approaches adopted are proposed. Governments across the world aim to exploit Big Data and associated predictive analytics to govern more effectively and efficiently. These analytics come in many sorts and varieties. In government finance, the topical applications of predictive analytics have to date mainly been found in fraud detection, capacity demand prediction, budget revenue prediction, and the prediction of homelessness and recidivism. A plethora of software applications built on open-source predictive analytics algorithms exist, encompassing packages for forecasting demand and estimating regression models, including linear and logistic types. However, there is some hesitancy in adopting most of these analytics, as open-source predictive analytics algorithms are rarely supervised and almost always deployed as black boxes. Black-boxed analysis is countercultural to the emancipation and democratization of knowledge advocated in government, and it raises more mundane concerns about accountability and validity. With black-boxed analytical work piling up in agency knowledge processes, concerns arise about how this work is handled, displayed, devised, and aggregated to produce knowledge that meets government quality expectations of reproducibility, replicability, auditability, and trainability.
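As a concrete illustration of the kind of open-source predictive analytics the interviewees describe, the sketch below fits a simple linear regression to forecast capacity demand. The synthetic data and the model choice are assumptions for illustration only, not taken from the study.

```python
# Minimal sketch: forecasting capacity demand with an open-source regression
# package, of the kind the abstract discusses. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(24).reshape(-1, 1)  # two years of monthly history
demand = 100 + 3.5 * months.ravel() + np.random.default_rng(0).normal(0, 5, 24)

model = LinearRegression().fit(months, demand)
next_quarter = model.predict(np.array([[24], [25], [26]]))
print(next_quarter.round(1))
```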
Article
Full-text available
Healthcare is one of the most vast and crucial industries in the world, and in this era it produces a huge amount of data every day. Everybody's life is captured by mobile devices, smart gadgets, and smartwatches. Many records of human beings, such as activities, heartbeat monitoring, running count, calorie count, number of steps, sleep patterns, daily mood, health status, stress monitoring, symptoms of diseases, blood pressure, temperature, and heart conditions, are stored in the form of data. Predicting health issues automatically before other symptoms are noticed is what the industry needs. A person's health record data is analyzed using machine learning models to predict, within a few seconds, the chances of being affected by diseases. The healthcare datasets used to build the machine learning models here are completely clean, so no preprocessing is required. In today's world, automated prediction of diseases in individuals is progressing at a fast pace. Everyone is busy with their life and other needs; nobody has time to visit the hospital for a check-up every day. Therefore, every person needs a system capable of predicting diseases automatically in a matter of seconds. ML techniques are used to analyze datasets containing a person's health records and other necessary information, based on which disease can be predicted. To predict diseases in an individual, supervised machine learning models such as logistic regression, decision tree, random forest, support vector machine, and naive Bayes are applied. In many countries, mobile applications have been developed for the prediction of diseases based on an individual's symptoms, with healthcare experts consulted for the predictions in these applications. By contrast, the dashboard presented in this paper, built using machine learning models, automates disease prediction completely, with the highest accuracy and performance compared to the existing applications. Machine learning has evolved into a miraculous tool for the healthcare industry: many organizations are dedicated to the prediction and diagnosis of diseases such as diabetes, liver diseases, lung cancer, heart disease, and breast cancer using machine learning models. The performance of early disease diagnosis is improving every day, and researchers are focused on studying people's health records and enabling systems to predict diseases automatically. Compared to such shallow models, high-performing deep learning models for personalized healthcare are not yet widely used, as many healthcare organizations hold huge amounts of unattended data. Moreover, elderly individuals and the general public often have little or no knowledge of the latest technologies, which ML applications must take into account.
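The abstract names several supervised models; a minimal comparison of those classifiers might look like the sketch below. Since the paper's datasets are not available, scikit-learn's built-in breast cancer data serves as a stand-in.

```python
# Minimal sketch: comparing the supervised models named in the abstract on a
# stand-in medical dataset (scikit-learn's breast cancer data).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    print(f"{name}: {model.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```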
Article
Full-text available
This paper investigates how natural language processing (NLP) technologies can be leveraged in self-service business intelligence (BI) to improve the experience for non-technical users. It begins by discussing the role NLP currently plays in solving BI tool problems. Then, it covers why existing NLP technologies underperform in self-service BI use cases. Subsequently, it proposes a crowd coding approach, Empower, that aims to reduce the communication barrier between non-technical users and NLP engineers. A preliminary deployment of Empower, involving three real-world NLP engineers and seven survey participants, finds initial evidence supporting the feasibility and usability of the crowd coding approach. The results suggest general interest in the approach, although its potential for scale and ongoing use would need further investigation. Today, self-service BI platforms promise to democratize data access for non-technical users. Natural language processing, a subfield of artificial intelligence, makes processing natural language data much faster than manual tagging or annotation while solving some of the users' pain points. Unfortunately, our interviews with non-technical users and NLP engineers make it clear that existing NLP tools, regardless of their performance in handling natural language data, are still inadequate for the rapidly evolving self-service BI use cases. On one hand, self-service BI use cases vary over time and across individuals, making it difficult to train and supervise individuals or groups of NLP engineers to match such needs. On the other hand, NLP engineers share no common language or context with the business domain and face a high communication barrier. In this paper, we posit that translating natural language BI tasks into a semantic format may further help technical users solve repetitive BI tasks. Our research represents an initial study in this direction.
Article
Full-text available
The shortage of medical specialists and primary care providers stands in contrast to the evolution of telehealth and telemedicine technology capabilities. Coupled with the rise of the aging population, there is a pressing call for innovation in senior healthcare planning and delivery. The availability and awareness of telehealth services contribute to addressing the concern regarding equitable access to healthcare services. In this work, we design and implement a survey to confirm the reality of the perception and availability of telehealth for senior healthcare services. Our findings indicate that even though Medicaid and commercial plans increasingly provided coverage of new telehealth services during public health emergencies, and many want to continue that coverage, the actual coverage of established services decreased, and the plans imposed communication channel limitations. Our proposed AI-enabled cyber-physical system technology framework is presented with supporting communication analysis, requirements, patient journey, and workflows. Commercialization of the proposed solution can contribute to healthier living supported by technological companionship.
Article
Full-text available
Disaster recovery (DR) is a critical aspect of maintaining the availability, integrity, and continuity of cloud databases, which store and manage vast amounts of mission-critical data. With the increasing reliance on cloud platforms for business operations, organizations must implement effective disaster recovery mechanisms to safeguard against data loss, downtime, and system failures. This paper presents a comparative analysis of current disaster recovery approaches in cloud databases, examining the strengths and weaknesses of various strategies, including backup and restore, data replication, and automated failover systems. We evaluate how these mechanisms are applied in different cloud models (public, private, and hybrid), considering factors such as recovery time objectives (RTO), recovery point objectives (RPO), cost, and scalability. The paper also discusses emerging technologies and trends in cloud DR, such as real-time data synchronization, machine learning-based predictive recovery, and multi-cloud disaster recovery solutions. Additionally, the role of cloud service providers’ Service Level Agreements (SLAs) in defining DR expectations is analyzed. By comparing the advantages and limitations of these mechanisms, this paper provides insights into how organizations can select the most appropriate DR strategy based on their specific business needs and risk tolerance. The study concludes by offering recommendations for optimizing disaster recovery plans in cloud environments to ensure business continuity and data protection in the face of unexpected disruptions.
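To make the RTO/RPO trade-off concrete, the sketch below screens candidate DR strategies against an organization's recovery objectives. The strategy figures are illustrative assumptions, not benchmark data from the paper.

```python
# Minimal sketch: screening DR strategies against recovery time/point
# objectives (RTO/RPO). The numbers are illustrative, not benchmarks.
strategies = [
    # (name, rto_minutes, rpo_minutes, relative_cost)
    ("backup and restore", 240, 1440, 1.0),
    ("async replication + manual failover", 60, 15, 2.5),
    ("sync replication + automated failover", 5, 0, 5.0),
]

def viable(rto_target: float, rpo_target: float):
    """Return strategies meeting both objectives, cheapest first."""
    ok = [s for s in strategies if s[1] <= rto_target and s[2] <= rpo_target]
    return sorted(ok, key=lambda s: s[3])

for name, rto, rpo, cost in viable(rto_target=60, rpo_target=15):
    print(f"{name}: RTO {rto} min, RPO {rpo} min, cost x{cost}")
```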
Article
Full-text available
Over the past decade, deep learning has emerged as a revolutionary technology for the analysis of visual data, particularly images. This master's thesis focuses on deep learning approaches to image classification, a key task in many applications using visual data analysis. A state-of-the-art deep learning model, the Vision Transformer (ViT), is explored for image classification. ViT is trained using transfer-learning techniques on a new dataset of over 350,000 photographs of European buildings in eight cities, obtained across two separate flights from a drone-mounted camera. Initial results demonstrate that models pre-trained on large datasets such as JFT-300M can, when fine-tuned, achieve performance competitive with models trained from scratch on smaller datasets, and that ViT outperforms convolutional neural networks on drone-captured images. Further, the prospects of deep learning for image classification are discussed, highlighting the potential impact of new research directions within the vision transformer architecture domain (e.g., Swin Transformer, CrossViT, T2T-ViT) and new training techniques (e.g., vision-language pre-training models, multi-modal input). The exponential increase in data generated by cameras, mobile devices, and Internet-of-Things (IoT) sensors has escalated the need for automated processing and analysis of visual data. Furthermore, images and video frames are a popular medium for data collection across various domains, including commercial and industrial. Image classification, or finding the most relevant label for a given photograph, is one key task in many applications using visual data analysis; popular applications include multimedia search engines, mobile applications navigating to points of interest (POI), and anomaly detection in industrial cameras. As a consequence, many datasets have been assembled, containing millions of photographs collected and labeled according to city, object, or scene. Deep neural networks trained end-to-end directly on pixels have become state-of-the-art image classification technology. More recently, architectures based solely on attention mechanisms, eschewing convolutions, have challenged the long-standing dominance of convolutional neural networks.
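The thesis's exact training setup is not reproduced here, but transfer learning with a pretrained ViT typically follows the pattern below, sketched with torchvision's ImageNet-pretrained `vit_b_16`. The dataset, class count, and hyperparameters are placeholders, not the thesis's values.

```python
# Minimal sketch: fine-tuning a pretrained Vision Transformer for a new
# classification task. Class count and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

num_classes = 8  # e.g., eight cities
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Replace the ImageNet classification head with one sized for our task.
model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch (stand-in for a DataLoader).
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```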
Article
Investing is a crucial aspect of achieving personal goals, increasing income, and reducing future risks. While blood is essential for survival, investments are necessary to meet future needs and mitigate risks that cannot be predicted. In India, family members have a significant influence on investment decisions, particularly those with working parents, spouses, children, and grandparents. Each family member has their own behaviour that affects investment decision-making. Therefore, this study focuses on behavioural finance attributes such as representativeness, anchoring, loss aversion, risk aversion, herding, and overconfidence to evaluate the extent of their impact on investment decisions. Additionally, the study examines the influence of investment-related information search on decision-making. The researcher employed a quantitative research design to collect data from 100 families and their members.
Article
Full-text available
Biometrics is the technical term for body measurements and calculations; it refers to metrics related to human characteristics. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. The basic premise of biometric authentication is that everyone is unique and an individual can be identified by his or her intrinsic physical or behavioral traits. It allows us to capture biometrics or personal information and make digital payments fast. For digital banking, biometrics points the way to reinventing the identity verification process as an automated system. As a result, financiers no longer have to keep themselves substantially separated from their customers, especially when ascertaining identity; a procedure that used to take weeks or months has been reduced to a matter of minutes, if not seconds. In terms of efficiency, driving the verification process with existing biometric data is orders of magnitude more efficient than any human could ever be. Biometric identifications are discrete physiological or behavioral characteristics that can be measured to identify a person. Digital payment alternatives are gradually replacing traditional ways of transacting. Biometric authentication is based on inherent, specific characteristics and is considered more secure than traditional PIN methods. Fingerprint recognition is the most common biometric identification available in various smartphones, and replacing traditional signatures with more secure and efficient fingerprint verification leads to the automation of the KYC process in the banking system. In this paper, readers will understand the working and potential of biometric authentication with the help of Artificial Intelligence (AI) and Big Data for correctness purposes. This paper will help non-technical readers understand the concepts and power of intelligent AI used in banks for KYC purposes.
Keywords: Biometric Authentication, Digital Payments Security, AI in Payment Systems, Big Data Analytics, Real-Time Fraud Detection, Secure Payment Solutions, Machine Learning for Security, Real-Time Authentication, Payment Fraud Prevention, Biometric Data Privacy, AI-Driven Security, Big Data in Financial Transactions, Advanced Payment Technologies, Fraud Prevention Algorithms, Biometric Verification, Real-Time Risk Assessment, AI Security Algorithms, Biometric Payment Systems, Digital Identity Verification, Data-Driven Security Solutions, Secure Payment Authentication, Adaptive Fraud Detection, Biometric and AI Integration, Financial Data Protection, Smart Payment Security
Article
Full-text available
Every second of every hour, billions of Internet of Things-enabled devices are creating massive streams of data individually tailored to the intimate personal habits of their users. Simultaneously, sophisticated cybercriminal organizations, nation-state actors, and rapidly proliferating malware attacks, ranging from hijacked personal tablets to penetrated Fortune 200 databases, are impacting digital and thus physical assets across the entire political spectrum. This connectivity matrix is generating a massive and ever-expanding volume of network, system, and end-user security event data that combines with personal information from both the private sector and governments to fuel the artificial intelligence insights that we enjoy in our everyday lives. Yet, while the entire cybersecurity compliance lifecycle, including policy, network, system, enforcement, and incident response, generates and uses colossal quantities of data, the proprietary, unstructured, and often classified nature of this data flow has historically limited our industry's adherence to AI-driven precepts. In this paper, we introduce the principles of Threat Hooking, a Network Theory-driven approach to detecting and selectively blocking individual components within a collective logical threat. Our data-science Network Security Characterization Model, detailed in this paper, quantifies a specific element of Network Theory that provides insight into both network health and individualized threat status. To demonstrate the innovation and theoretical underpinnings of Threat Hooking, we identify and analyze the massive datasets required by the network data immune system that we developed. After distilling relevant content from current cybersecurity research, we compiled an annotated dataset of live and emulated threat data and report how AI-identified network artifacts that lead to human-interpretable threat event detection can be verified and, if necessary, acted upon by cyber professionals.
Article
Biometric data, including keystroke dynamics and voice data, is being employed by banks to authenticate the identity of users making digital payments. This data is analyzed using artificial intelligence tools to detect fraud attempts. The biometric data of users making online transactions is compared to their previous data, collected when they used mobile banking, internet banking, and other banking services. If this biometric data fluctuates beyond a certain threshold, the transaction is flagged as possible fraud. This mechanism is proving beneficial, as a large number of fraud attempts are detected in the early stages, before losses amounting to crores of rupees occur. Big data, a very large set of information that is complex and difficult to manage, is used to help a machine learning-based fraud detection system make predictions. Looking at the pattern of past transactions alongside the fraud claims against those transactions allows each past transaction to be classified into one of two classes: 'fraud' and 'not fraud'. Based on these labeled past transactions, future transactions can also be predicted, and the resulting fraud predictions can be classified as 'true positive', 'false positive', 'true negative', or 'false negative'. Also, multiple decisions concerning the fraud classification model can be considered simultaneously if big data is employed in the machine learning model. This technology is extremely useful for banks in detecting fraud early in the transaction process.
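The two-class framing and the true/false positive/negative breakdown described above map directly onto a standard confusion matrix. A minimal sketch follows, with synthetic transactions standing in for real banking data.

```python
# Minimal sketch: 'fraud' vs 'not fraud' classification and the TP/FP/TN/FN
# breakdown the abstract describes. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))             # e.g., amount, time, biometric drift
y = (X[:, 0] + X[:, 1] > 2.0).astype(int)  # synthetic 'fraud' labeling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"TP={tp} FP={fp} TN={tn} FN={fn}")
```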
Article
Moreover, authentication schemes have also evolved to be multi-modal, such as combining a fingerprint with the power spectrum of handwriting or validating face and signature together, to give a better level of assurance in biometric authentication. The big data paradigm enables storing and managing large data volumes efficiently, and applying artificial intelligence models to these data adds a security layer to the overall system. The aim is to enhance security through behavior and skill, and to improve transaction efficiency by reducing friction. Furthermore, the paper details the many digital biometric authentication processes in which AI and Big Data technologies are used: behavioral biometrics such as keystroke dynamics, mouse dynamics, and gait analysis; physiological biometrics such as thermal imaging for FiO2 estimation; speech recognition; and facial expression technology. Big data and AI play very important roles in many areas. Biometric authentication is receiving growing attention for secure digital transactions, and individuals and organizations tend to deploy big data and AI in authentication systems to achieve secure transactions. This paper deals with the importance of big data and AI innovations in biometric authentication for secure digital transactions. Big data and AI concepts are analyzed and reviewed in the area of biometric authentication, and their importance is demonstrated. Biometric identifiers are the preferred methods of user authentication and have moved beyond fingerprints, iris, and facial scans.
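As one concrete example of the behavioral biometrics listed above, keystroke dynamics can be reduced to timing features and compared against an enrolled profile. The sketch below is a toy illustration; the features, profile data, and threshold are all assumptions, not taken from the paper.

```python
# Minimal sketch: keystroke-dynamics matching via inter-key timing features.
# Enrollment profile, login sample, and threshold are illustrative assumptions.
import numpy as np

def timing_features(key_down_times: list[float]) -> np.ndarray:
    """Inter-key intervals, a common keystroke-dynamics feature."""
    return np.diff(np.asarray(key_down_times))

enrolled = timing_features([0.00, 0.14, 0.31, 0.42, 0.60])  # from registration
attempt = timing_features([0.00, 0.15, 0.33, 0.41, 0.62])   # from this login

distance = float(np.linalg.norm(enrolled - attempt))
THRESHOLD = 0.05  # would be tuned on real data in practice
print("accept" if distance < THRESHOLD else "step-up authentication")
```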
Article
Real-time transaction monitoring is an essential part of maintaining the security of payment systems. With the increasing number of online transactions, financial institutions face the challenge of detecting and preventing fraudulent transactions in real time. This paper discusses a real-time transaction monitoring system developed using a combination of AI, big data, and biometric authentication. The system has been implemented as a workbench and is divided into five modules. The transaction information is captured and analyzed for fraud detection in the monitoring and analysis module. The transaction analysis is based on big data analysis technology, which incorporates transaction information, customer information, and biometric data for AI modeling. The biometric authentication technology is used to enhance the security of the payment process. The biometric templates are created during customer registration and verified during subsequent transactions. The digital signatures generated using biometric data prevent unauthorized access to customer information. The monitoring and analysis module of the system analyzes the uncertainty and time series of the transaction data. The uncertainty analysis identifies the transaction data that lacks sufficient information for effective fraud detection. Time series analysis is used to detect suspicious transactions occurring at unusual times. The monitoring and analysis module integrates a fraud detection model based on AI technology trained using big data analysis results. It also incorporates additional analysis models to monitor the uncertainty and time-series characteristics of the transaction data. The detected frauds are classified as high, medium, and low risk and reported using different levels of monitoring. High-risk frauds are immediately blocked and reported to the fraud management unit for investigation. Low-risk frauds are flagged for later knowledge updating. The system is trained adaptively, using new knowledge acquired from fraud investigations to retrain the AI model.
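The monitoring module's three-tier response can be read as a simple scoring-and-routing step. In the sketch below, the score and thresholds are placeholders standing in for the trained AI model the paper describes.

```python
# Minimal sketch of the three-tier fraud response described above.
# The fraud score and thresholds stand in for the trained AI model's output.
def route_transaction(fraud_score: float) -> str:
    if fraud_score >= 0.9:   # high risk: block and escalate
        return "BLOCK + report to fraud management unit"
    if fraud_score >= 0.5:   # medium risk: extra monitoring
        return "HOLD for enhanced monitoring"
    return "ALLOW, flag for later knowledge updating"  # low risk

for score in (0.95, 0.6, 0.1):
    print(score, "->", route_transaction(score))
```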
Article
The abstract of this article provides a concise overview of the integration of data engineering and AI in retail analytics to enhance the customer experience. It highlights the utilization of big data tools and applications in the retail sector, emphasizing the significance of historical sales data, loyalty schemes, and external data sources for demand forecasting, pricing, and operational planning. Additionally, the abstract discusses the influence of AI and machine learning in detecting demand disruptions, retraining AI models dynamically, and adjusting omnichannel operations to effectively serve customers in both physical stores and online platforms, particularly in the context of the COVID-19 pandemic. The abstract section sets the stage for the subsequent discussions on the specific techniques and challenges associated with leveraging data engineering and AI in retail analytics to enhance the overall customer experience.
Article
This chapter explores generative adversarial networks (GANs), which simultaneously train two models: a generator (G) that produces new instances resembling a training data set, and a discriminator (D) that estimates the probability that an instance was produced by the generator rather than drawn from the training data. The capabilities of GANs are explored, capabilities that have thrust GANs to the forefront of AI research and development. Preparation of the data input to the GAN occurs before the discriminator is applied to a multilayer perceptron neural network designed to solve an example classification problem. GANs are then suggested as a way to facilitate big data implementations, simultaneously imputing missing data and selecting a smaller, more effective training set for an example classification problem. AI problems can be defined as realizations of various data models using a multilayer perceptron neural network, illustrated here by approximating the "and" (AND) function, a famous hard AI problem. A hyperparameter tuning methodology is also defined and described. Multi-GANs are employed to select an overall effective, representative training data set for classification problems, facilitate big data, and predict missing data values during preparation of the training set; this work differs from recent efforts attempting to improve GANs themselves. Furthermore, the chapter couples the imputation of missing unlabeled binary data with training-set selection for input to a multilayer perceptron neural network, which shows improved model estimates for binary classification and optimization when samples are selected to represent the AND-function classification problem in mid- and large-scale training processes.
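The generator/discriminator training loop described above has a standard shape; the sketch below shows it on a toy 1-D Gaussian target. The architecture and hyperparameters are illustrative, not the chapter's configuration.

```python
# Minimal sketch of a GAN training loop: D learns to tell real from generated
# samples; G learns to fool D. Toy target distribution is N(2, 0.5).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0
    fake = G(torch.randn(64, 4))

    # Discriminator: real samples labeled 1, generated samples labeled 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: make D label its samples as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(float(G(torch.randn(500, 4)).mean()))  # should approach 2.0
```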
Article
Full-text available
Digital payment solutions have recently gained extraordinary popularity across the globe. Many individuals in cities and urban areas have begun to use these payment systems for financial transactions. The combined use of different advanced technologies is indispensable for widespread adoption, and various obstacles to performing financial transactions using digital payment systems must be considered. Among these obstacles, security is the most notable. Although online payment systems are fast and convenient, the chances of fraud are very high: a cybercriminal can hack into the system, stealing personal information like phone numbers and passwords. This article focuses on a new digital payment security system that takes advantage of artificial intelligence (AI), big data, and biometric authentication (BA). When using this security system, a transaction request is sent; first, the system checks whether the phone number belongs to a registered user, and if it does, further authentication is carried out. The system automatically authenticates the user's identity with the help of facial recognition technology that analyzes the features of the face. Further, the user has to perform various gestures that are analyzed by the system. This and other personal information is converted into a number and then encrypted before being stored in the database. The incoming transaction request is compared against this personal number. If the user is not unique, the transaction request is immediately rejected. Secondly, if the number matches but the user is fraudulent, the transaction is also rejected. Only if both checks pass does the system look for duplicate numbers among the other transactions for the particular period; if no other numbers are found, the transaction is executed, otherwise it is rejected. This is done to detect fake and auto-generated numbers. This security system will help ensure the authenticity of the user in digital payment systems.
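The multi-step acceptance flow described above can be read as a short decision procedure. The sketch below mirrors it; every helper and input is a hypothetical placeholder for the real registration lookup, face/gesture match, and duplicate-number scan.

```python
# Minimal sketch of the transaction-screening flow described above.
# Every check below is a hypothetical placeholder for the real subsystem.
def screen_transaction(phone: str, face_ok: bool, gesture_ok: bool,
                       registered: set[str], recent_numbers: set[str]) -> str:
    if phone not in registered:
        return "reject: unregistered number"
    if not (face_ok and gesture_ok):
        return "reject: biometric mismatch"
    if phone in recent_numbers:  # duplicate-number scan for the period
        return "reject: possible fake/auto-generated number"
    return "execute transaction"

registered = {"9876543210"}
print(screen_transaction("9876543210", True, True, registered, set()))
print(screen_transaction("9876543210", True, False, registered, set()))
```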
Article
Full-text available
Energy research encompasses all aspects of electrical energy, focusing on innovation in energy production and delivery, alternative resources, and efficient devices. It involves studying systems and equipment for converting, providing, and utilizing energy as electricity. Power electronics have become essential to power systems, enhancing quality and efficiency and promoting the development of intelligent, efficient energy solutions. There are various types of power electronics within power systems; the architectural study of converting electrical energy from one form to another falls under power electronics. Globally, electronics recycle or recover more than 80% of the total electricity produced, averaging 3.4 billion kilowatt-hours annually. Power electronics converters, also known as power converters or switching converters, are used to process or convert electrical energy. Electricity exists as AC power and DC power, leading to the classification of distribution systems into AC and DC based on the type of power used. Power system analysis is crucial for designing electrical power systems; it involves calculations and simulations to ensure that the electrical system and its components are appropriately specified to function as intended, endure expected stress, and be protected from failures. Advantages of power electronics include high power density and improved efficiency, reaching up to 99% in energy conversion. Their efficiency and reliability make switching power supplies suitable for medical devices with acoustically sensitive applications. Power system reliability addresses issues like service interruptions and power outages, often guided by specific codes relevant to consumers; common reliability indices in the US include SAIFI, SAIDI, and CAIDI. The DEMATEL (Decision Making Trial and Evaluation Laboratory) method is applied across various industries, such as non-metal mineral products, general equipment manufacturing, coal mining and washing, textiles, and food manufacturing. DEMATEL visualizes and assesses the interactions and dependency relationships between factors through a structural model, identifying critical elements. Evaluation parameters for power electronics include their application in power systems, transportation systems, energy conservation, heating and lighting control, and renewable energy integration. Power electronics in power systems are ranked highest, while energy conservation is ranked lowest.
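DEMATEL itself reduces to a small matrix computation: normalize a direct-influence matrix, derive the total relation matrix, and read prominence (D+R) and relation (D-R) from its row and column sums. The sketch below shows those steps; the 3x3 influence matrix is illustrative, not the study's data.

```python
# Minimal sketch of the DEMATEL computation: from a direct-influence matrix
# to prominence (D+R) and relation (D-R). The 3x3 matrix is illustrative.
import numpy as np

A = np.array([[0, 3, 2],   # pairwise direct-influence scores (0-4 scale)
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalize
T = X @ np.linalg.inv(np.eye(3) - X)                   # total relation matrix

D = T.sum(axis=1)  # influence exerted by each factor
R = T.sum(axis=0)  # influence received by each factor
print("prominence D+R:", np.round(D + R, 2))
print("relation   D-R:", np.round(D - R, 2))  # >0: cause; <0: effect
```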
Article
Deployment of software updates in automotive vehicles is a complex, error-prone, and tedious process compared to conventional consumer electronics. The process has its own punitive challenges and must account for several contributing factors, such as consumer safety, privacy, regulatory compliance, resource constraints, and security. Today, infrequent, disengaging, and decentralized dealer-centric software updates put vehicles at risk of not running the latest software. Dealer-centric updates mean laborious management, increased resources, and higher cost overhead for the OEMs. Additionally, quality challenges and diverse vehicle populations further complicate the over-the-air (OTA) deployment process. Moreover, the management complexity, resource constraints, and urgency to respond quickly to newly detected issues are far beyond the scope of the dealer network and usually converge on the vehicle OEMs. Expensive recalls, subsidizing dealer facilities with flashing tools, and financing third parties for field service are neither viable monetization strategies nor proactive capacity management for the infrequent, high-touch operational processes that surround vehicle software updates. By utilizing the cloud, AI and ML technologies, and the infotainment capabilities of the vehicle, combined with vehicle-population and on-road insights and the surgical update capability of the vehicle, the operational labor of conducting dealer-facility updates can be significantly reduced, and proactive vehicle updates can be achieved in a near-seamless and unattended manner. The "broad-tailed" vehicle adoption over the vehicle's life cycle results in cost-effective operational overhead reduction and seamless alerts, warnings, and reminders, the key "smart" vehicle aspects that drive customer satisfaction. In this paper, we share a brief perspective on how OTA deployment challenges are being addressed to drive modern software management approaches for current and future electronics solutions within the vehicle domain. We provide a data-driven approach with descriptive analytics for model generation via advanced learning techniques and deep neural network architectures that ultimately close the gap to the desired software state. Our cloud-enabled AI/ML algorithms culminate the paper, providing a smooth pathway to keeping vehicle electronics in a fresh, up-to-date software condition. We also present different options for market applications, such as data-driven model deployments on cloud infrastructure that provide business continuity, customer service, and operational readiness at all times.
Article
Full-text available
In the automotive domain, the trend of ECUs becoming central units that host functionalities sliced up across domains, and the interlinking of those functionalities, makes these MCs considerably complex. Typical examples of such functionalities are drive control and safety, electrification of the powertrain, and driving assistance. This phenomenon has also led to a sharp increase in the testing of these MCs. Incidents such as those recently seen in highly and fully automated driving automobiles show that rigorously testing such MCs for readiness to be released to the field is an exponentially increasing challenge. To cope with MC wiring complexity and the effort of handling it, hardware-in-the-loop testing devices are increasingly used to test MC software functionality. Incidentally, many MC customers often first test dumps of the MC software in their software integration labs, or they use simulators known as models. The challenges of increasing the number of test cases a release candidate must pass and of the growing number of test runs are similar.
Article
Worldwide interest in "hybrid and battery electric vehicles" has increased recently as a result of their ability to save fuel, lessen reliance on foreign oil, and reduce greenhouse gas emissions. The overall success of these vehicles is largely determined by the effectiveness of the sub-systems they are built with, and these sub-systems' parameters must be estimated with great accuracy to improve their performance. "Battery electric vehicles (BEVs)", an eco-friendly type of vehicle, are crucial given that the automotive industry contributes significantly to carbon emissions. Due to the recent rapid growth of the BEV market, fully evaluating BEV alternatives from the consumer's perspective has become a substantial challenge. This evaluation can be made by examining the fundamental characteristics of each BEV, and "multiple criteria decision making (MCDM)" techniques are a useful tool for making the best BEV buying choice. Therefore, six BEVs are selected as options in this work. These vehicles are then ranked using TOPSIS based on technical specifications such as battery capacity, range, top speed, quick-charge time, acceleration, and purchase price. In this study, the TOPSIS method ranks the Mercedes-Benz EQS first, the Audi RS e-tron GT second, the Audi e-tron third, the Audi e-tron GT fourth, the Porsche Taycan fifth, and the Mercedes-Benz EQC sixth. The result thus shows the Mercedes-Benz EQS as the best choice among the selected battery electric vehicles, followed by the Audi RS e-tron GT.
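The TOPSIS procedure used above is compact enough to sketch directly: normalize and weight the decision matrix, find the ideal best and worst per criterion, and rank by relative closeness. The decision matrix, weights, and criterion directions below are illustrative assumptions, not the study's actual data.

```python
# Minimal sketch of the TOPSIS ranking described in the abstract.
# Matrix, weights, and benefit/cost directions are illustrative.
import numpy as np

# rows: alternatives (BEVs); columns: battery, range, top speed,
# quick-charge time, acceleration (0-100 time), price
X = np.array([[108, 770, 210, 31, 4.3, 106],
              [ 93, 488, 245, 23, 4.1, 104],
              [ 93, 484, 230, 23, 5.4,  90]], dtype=float)
weights = np.full(6, 1 / 6)
benefit = np.array([True, True, True, False, False, False])  # False = cost

V = weights * X / np.linalg.norm(X, axis=0)  # normalized, weighted matrix
best = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)
print("ranking (best first):", np.argsort(-closeness))
```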
Article
Full-text available
Distributed generation (DG) power systems are the most popular technique for extending the power network to rural areas and, more recently, serve as a sustainable electrification technique; the two major categories of DG optimization methodologies differ in the components of the examined studies. The consequences of seasonal load variation and of distributed hybrid system architectures without load shedding are explored in light of the dwindling availability of traditional fossil fuels, fluctuating fuel costs, and the drive to decrease environmental pollutants despite increased demand. Numerous DGs are connected through integrated power-quality system conditioners. Much of today's renewable-energy DG technology is interface-based: in grid-connected converters, harmonic functions are taken into account by sensing control, enhancing converter versatility when local controllers use assessment techniques for harmonic adjustment in the distribution system. As a result, systems ought to implement common current-regulated and voltage-regulated DG harmonics-correction functions. A wind-solar hybrid system produces electricity by combining two renewable energy sources, wind and sunlight; the system is designed to produce electricity utilizing both modest wind generators and solar panels. The task of supplying the engine with fuel falls on the fuel system, which consists of a fuel tank, pump, filter, and injectors or a carburetor; each part must be faultless for the vehicle to function as dependably as anticipated. A photovoltaic (PV) system combines one or more solar panels with an inverter and other electrical and mechanical components to generate power from the sun. PV systems come in many sizes, from small rooftop or portable devices to massive utility-scale power plants, and offer a suitable energy source in isolated (cold or more temperate) places with no other electrical supply, powering, for instance, water pumps and communications repeater stations. The components of a typical system include a building sewer, a septic tank, a standard trench, a shallow trench, a chamber trench, a deep wall trench, and an absorption bed for seepage pits. The EDAS approach is proposed for ranking these system roles. The main advantage of EDAS over other classification methods is its high accuracy with fewer mathematical calculations: EDAS appraises each alternative by its distance from an average solution. EDAS has previously been used to select providers depending on the position of alternatives and, via a purely intuitionistic fuzzy model, to determine sites for solid-waste disposal; in this study, EDAS is integrated into the analysis of boundaries for renewable energy development [19], an application of the EDAS technique in MCDM. First, basic definitions and a distance method are briefly presented; next, the augmented EDAS approach is developed for the real context, inspired by the original EDAS method. Results: the final ranking is produced using the EDAS method. The fuel system has the highest value and the PV system the lowest value, so the fuel system is ranked first.
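EDAS itself follows a fixed recipe: compute the average solution per criterion, take positive and negative distances from that average, weight and normalize them, and rank by the combined appraisal score. The sketch below shows those steps; the decision matrix and weights are illustrative, not the study's data.

```python
# Minimal sketch of the EDAS method (Evaluation based on Distance from
# Average Solution). Decision matrix and weights are illustrative.
import numpy as np

X = np.array([[7, 5, 9],   # rows: alternatives; columns: criteria
              [8, 7, 6],
              [6, 8, 8]], dtype=float)
w = np.array([0.4, 0.3, 0.3])
benefit = np.array([True, True, False])  # last criterion is a cost

AV = X.mean(axis=0)  # average solution per criterion
PDA = np.where(benefit, np.maximum(0, X - AV), np.maximum(0, AV - X)) / AV
NDA = np.where(benefit, np.maximum(0, AV - X), np.maximum(0, X - AV)) / AV

SP, SN = PDA @ w, NDA @ w                       # weighted distance sums
AS = (SP / SP.max() + 1 - SN / SN.max()) / 2    # appraisal scores
print("ranking (best first):", np.argsort(-AS))
```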
Article
Composites can simultaneously enhance materials and designs while offering superior mechanical qualities. Composites can have notably better "strength, stiffness, corrosion, wear, and fatigue resistance" than conventional materials, which is important for developing aviation component parts. The mechanical qualities of a composite fabric must be crafted to fit its intended application or its exploitation circumstances. Many metals and synthetic fibers are preferred today for "the manufacture of aeroplanes". Engineers must choose from thousands of polymers, yet only 0.05 per cent of those may be used in the aerospace sector while still having the desired properties. Selecting the proper raw materials from tens of thousands of candidates has grown to be a significant problem. In a "multi-criteria decision-making (MCDM)" situation, the optimal material for an aeroplane must be selected from a range of alternatives. In this study, the finest components for aeroplane parts are chosen using strategies focused on MCDM.
Article
Programming in C. The machine-oriented programming language C is mostly used to create applications and operating systems, like Windows, as well as other complex programs such as the Oracle database, Git, the Python interpreter, and games. System programs and low-level applications are often written in the procedural, structured programming language C. Encapsulation, data hiding, inheritance, and polymorphism are some additional aspects of C++, which is an object-oriented programming language. Multiple entities of the same type can be grouped together into a larger group using the C concept of an array; these entities or components may be of built-in data types such as integer, float, or double, or of user-defined types such as structures. Programs written in C compile and execute far more quickly than those written in many other languages, because there are no additional processing overheads such as garbage collection; as a result, the language is fast compared to other programming languages. An algorithm is a series of actions carried out in a predetermined order to address a challenge or finish a task. A function is a section of code that is called and carried out by other software elements. C is used to build operating systems: operating systems like Apple's OS X, Microsoft's Windows, and Symbian are created in the "C" programming language. It is utilized to create platforms for desktop and mobile devices, and it is used to create compilers. C operators are symbols for performing mathematical, relational, bitwise, conditional, or logical manipulations, and the C programming language has many built-in operators that can be used to carry out different tasks as needed by the application. C has an advantage over dynamic languages since it is a statically typed programming language. Also, C is a compiler-based programming language, in contrast to Python and Java, which are interpreter-based; this expedites the compilation and execution of code. The Weighted Sum method works by multiplying the designated field values to appraise the alternatives.
Article
Full-text available
We review how Ethernet and the open-standard AVB/TSN protocols are evolving for automotive use and how they offer real implementation benefits at both the hardware and software level. We discuss how AVB/TSN IP can be deployed at the application level, making it easier to develop, test, and optimize use cases with Ethernet as the network backbone; these range from in-vehicle multi-resolution GUIs and multiple safety-critical ADAS functions to high-performance multi-camera sensing engine features. We also look at the potential for machine learning-based implementations inside switched-Ethernet ECUs, running software that manages congestion and competes for time-critical services alongside more traditional automotive traffic. Companies developing in-vehicle network solutions can learn where to position themselves appropriately in the increasingly software-defined, ecosystem-driven future of automotive electronics. We show how a jointly developed hardware and software AVB/TSN implementation reduces the complexity of E/E architectures, resulting in more effective ADAS and improved road safety. It also allows OEMs, car manufacturers, and Tier 1s to rapidly deploy the system features most important to their customers' requirements at launch. The automotive ADAS feature deployment race is about to shift up a gear, enabling these players to jointly deliver vehicles with the highest driver/user acceptance of, and confidence in, the latest ADAS features.
Article
Full-text available
The trend in the automotive industry has shifted from wanting the connected car, which uses the internet to fulfill the infotainment needs of the driver and the passengers, to acquiring the capability to manage the massive amount of vehicle data to enable new profitable opportunities such as maintenance-as-a-service. This real-time maintenance is possible using machine learning (ML) applications to develop predictive maintenance (PdM) algorithms. This creates a new realm focused on preventing the unscheduled broken state of expensive automotive parts, such as the clutch of an automatic transmission, as the breaking of a single part can affect the behavior of the whole vehicle. This paper aims to help move the PdM industry even further, offering an up-to-date insight into newly available technologies and highlighting potential applications for vehicle PdM, with a list of use cases that can be studied for future development. Additionally, for each use case, the most suitable data sources are listed. Such a list is extremely helpful to researchers and developers, especially in the vehicle maintenance field, to understand exactly which sensor has to be developed and installed, in which area it is available, and with which resolution and accuracy.
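As a hedged illustration of what a deployed PdM algorithm might look like on the vehicle side, the following C sketch scores clutch-failure risk with a logistic model; the feature names (slip ratio, temperature, engagement cycles) and all coefficients are assumptions standing in for a model that would actually be trained offline on fleet data.

#include <math.h>
#include <stdio.h>

/* Minimal PdM scorer sketch: a logistic model whose weights stand in
   for coefficients learned by offline ML training; values illustrative. */
static double failure_probability(double slip_ratio, double temp_c, double cycles_k)
{
    double z = -6.0 + 3.2 * slip_ratio + 0.04 * temp_c + 0.8 * cycles_k;
    return 1.0 / (1.0 + exp(-z));  /* logistic link maps score to [0,1] */
}

int main(void)
{
    /* Hypothetical readings: 35% slip, 95 C, 4200 engagement cycles. */
    double p = failure_probability(0.35, 95.0, 4.2);
    if (p > 0.5)
        printf("Schedule clutch service (p=%.2f)\n", p);
    else
        printf("Clutch healthy (p=%.2f)\n", p);
    return 0;
}

(Link with -lm for the math library.) The point is the shape of the pipeline, not the numbers: sensor features in, a trained model's risk estimate out, a maintenance action triggered before the unscheduled broken state occurs.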
Article
Full-text available
Automotive systems are becoming increasingly complex, with new technology being included to meet safety, performance, standardization, and cost targets. Control systems are an essential part of this growing set of technologies. Artificial Intelligence (AI) has been proposed to play an important role in vehicle control, helping to create self-driving solutions and enhancing overall vehicle stability and efficiency, particularly in extreme operating conditions. By adopting suitable supervisory control actions, AI can help recover vehicle operation when it falls outside the range of standard control solutions and can handle the onset of different failures. In addition to these benefits, AI tools, in particular neural networks, have been adopted and developed for diagnostic purposes, where learning from collected 'experience' (observation data), often impossible to generate in simulations or under controlled conditions, is required. This paper presents a review of AI tools applied to automotive vehicle control optimization, diagnostics, and fault detection.
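To make the neural-network diagnostics idea tangible, here is a minimal C inference sketch with one hidden layer; the layer sizes, weights, and sensor inputs are invented for illustration, whereas in practice the weights would be learned from the recorded 'experience' data described above.

#include <math.h>
#include <stdio.h>

#define N_IN 3
#define N_HID 2

static double relu(double x) { return x > 0.0 ? x : 0.0; }

/* Tiny feedforward network: hidden ReLU layer, sigmoid output.
   All weights are illustrative placeholders for trained parameters. */
static double nn_fault_score(const double x[N_IN])
{
    static const double w1[N_HID][N_IN] = { {  0.8, -0.5, 0.3 },
                                            { -0.2,  0.9, 0.6 } };
    static const double b1[N_HID] = { 0.1, -0.3 };
    static const double w2[N_HID] = { 1.2, 0.7 };
    static const double b2 = -0.4;

    double z = b2;
    for (int j = 0; j < N_HID; j++) {
        double h = b1[j];
        for (int i = 0; i < N_IN; i++)
            h += w1[j][i] * x[i];       /* weighted sum of inputs */
        z += w2[j] * relu(h);           /* hidden activation feeds output */
    }
    return 1.0 / (1.0 + exp(-z));       /* probability-like fault score */
}

int main(void)
{
    double sensors[N_IN] = { 0.9, 0.1, 0.4 };  /* normalized readings */
    printf("Fault score: %.2f\n", nn_fault_score(sensors));
    return 0;
}

A score near 1 would flag a likely fault for the supervisory controller; thresholds and responses would be calibrated per failure mode.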
Article
Full-text available
Safety in the transportation sector has been of particular concern to policymakers, industry participants, and the scholarly community. Continued steady progress in scientific knowledge and technology has resulted in a decrease in fatal accidents. The ever-decreasing cost of computing power has created the preconditions for a new round of innovative solutions that use AI-driven technologies to enhance automotive safety. In this study, we provide a scientific-technical survey of AI-driven innovations in vehicle safety, underscore potential barriers to large-scale implementation, and provide policy recommendations. The results are intended to assist policymakers, researchers, and practitioners in a wide range of domains in understanding the potential effects of AI-driven technologies on vehicle safety.
Article
In this paper, a model of predictive auto-maintenance is developed. This model is then used for auto-maintenance in the electrical and electronic systems of an autonomous vehicle. Along with traditional sensors, an advanced X-ray vision system is used to detect faults in the system. It uses medium-scale integration (MSI) or very large-scale integration (VLSI) circuitry and other designs to enable the creation of systems containing many components. Commercial applications are being researched and developed, especially in robotics. In this paper, reconfigurable devices are employed to create fault-tolerant digital systems for the predictive auto-maintenance subsystem of the autonomous vehicle and to achieve the highest level of auto-maintenance. The development of an intelligent, predictive auto-maintenance model is a challenging task that requires the use of modern technologies, ideas, and concepts. Despite recent progress in predictive maintenance, this work takes inspiration from, and begins with, physics models. It seeks to examine the type of deterioration seen in electronic components and connections. Relationships between local failures and global system malfunctions are examined. The aim is to eliminate recurrent system malfunctions through the application of focused auto-maintenance. The development involves the design and modeling of AI system behavior and takes into account the principles underlying human reasoning about the behavior of technical systems. UML diagrams that visualize human reasoning during occasional self-reflective discussions of the forward reasoning process are presented.
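To suggest what a physics-based deterioration model of the kind described above could look like, here is a minimal C sketch in which component health decays exponentially with stress-weighted operating hours; the decay constant, stress factor, and service threshold are illustrative assumptions, not parameters from the paper.

#include <math.h>
#include <stdio.h>

/* Exponential degradation sketch: health(t) = exp(-lambda * stress * t).
   All constants are illustrative placeholders. */
static double health(double hours, double stress_factor)
{
    const double lambda = 1e-4;            /* assumed decay rate per hour */
    return exp(-lambda * stress_factor * hours);
}

int main(void)
{
    const double threshold = 0.7;          /* service below this health */
    for (double h = 0.0; h <= 5000.0; h += 1000.0) {
        double s = health(h, 1.5);         /* elevated thermal stress */
        printf("t=%5.0f h  health=%.3f %s\n", h, s,
               s < threshold ? "-> schedule auto-maintenance" : "");
    }
    return 0;
}

(Link with -lm.) In a real system the point where health crosses the threshold would be projected forward to schedule focused auto-maintenance before a local failure propagates into a global malfunction.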
Article
This paper presents a novel approach to transmitting Ethernet signals over long distances using inexpensive cabling for vehicle telemetry, offering a small, cost-effective, and secure method. The proposed method leverages IP to provide Ethernet diagnostic and control functionality. Notably, it enables real-time machine learning diagnostics, a feature of particular relevance in the context of autonomous vehicles. Furthermore, the technology we propose has the potential to be directly applied to various other vehicle technology applications, including payload support and security. This versatility underscores its value and potential impact on the automotive industry. Key points: Ethernet technology is widely used worldwide, and automotive use is diversifying to support vehicle data rates of 1 Gbps and beyond. Conventional vehicle systems are challenging to operate at these rates and are often implemented via direct-attach cable, where cost is a major issue.
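To suggest how IP-based diagnostic telemetry of this kind might look in code, here is a minimal POSIX C sketch that sends one reading as a UDP datagram to an in-vehicle ECU; the ECU address, port number, and payload format are invented for illustration and are not the paper's protocol.

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP over IP */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in ecu = { 0 };
    ecu.sin_family = AF_INET;
    ecu.sin_port = htons(15000);                 /* assumed diagnostic port */
    inet_pton(AF_INET, "192.168.10.2", &ecu.sin_addr);  /* assumed ECU address */

    const char payload[] = "brake_temp_c=212.5"; /* illustrative telemetry reading */
    sendto(sock, payload, strlen(payload), 0,
           (struct sockaddr *)&ecu, sizeof(ecu));

    close(sock);
    return 0;
}

A real deployment would add authentication and framing, but the sketch shows why IP is attractive: the same socket primitives carry diagnostics, control, and ML feature streams over one Ethernet backbone.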
Article
Full-text available
Supply chain management is an approach used by firms to ensure that their business is highly effective and profitable and that operations run smoothly. This involves managing the movement of raw materials inwards and finished goods outwards, and logistics is a key component of it. It also involves managing the flow of products between companies, for example between a manufacturer, a wholesaler, and a retailer; supply chain management is therefore the integration of these flows between companies. Several innovative techniques can be used to optimize supply chain operations, particularly given recent advances in information technology. Logistics denotes activities related to the flow of products between companies, such as the transportation and warehousing of goods. Activities that take place within companies, such as inventory management and materials handling, are not considered logistical activities but part of the supply chain. The level of interest in supply chain management has risen dramatically over the last few years, partly due to advances in information technology, which have enabled closer integration of the supply chain. In addition, increasing competition between companies, at both the national and the international level, has led to an increasing emphasis on the need for companies to concentrate on their core competencies and to look to outside suppliers to provide other goods and services; this has led to the increased use of external suppliers.
Article
Full-text available
Public procurement in Europe represents, on average, 16.9% of GDP and is the cornerstone of the European Single Market. Simplifying public procurement and reducing procurement administrative costs for the public and private sectors can deliver substantial benefits at the national and European levels. However, the complexity and diversity of public procurement processes, as well as the huge expenditure at hand, make automated systems tailored to specific procurement needs necessary. This paper shows how artificial intelligence, and in particular machine learning techniques, can be used to modernize public procurement. It presents implemented systems, showcases pilot projects, and reports the results of an extensive evaluation. The paper also argues that public procurement should be used more strategically by public administrations. This means aligning procurement actions with overall business objectives and using procurement to leverage supplier innovation and create a competitive advantage. Such advanced objectives are seldom achieved through the lowest-price model. The paper also contains several recommendations for both the supply and demand sides to help realize the full potential of public procurement. On the supply side, recommendations relate to a better understanding of how artificial intelligence can be used in procurement activities, working with AI systems, and creating AI systems. On the demand side, recommendations involve the careful planning of how and when to use AI in procurement activities.
Article
Full-text available
Speed and accuracy of decision-making at the operational and tactical levels are critical in warehouse management, which takes place in a dynamic environment: new inventory arrives, orders for shipping out inventory are constantly issued, and a large number of decisions must be made regularly to coordinate the flow of materials in and out of a warehouse. This paper conceptually presents decision support systems (DSS) powered by artificial intelligence (AI) at two levels: embedded warehouse management at the operational level and extended warehouse management at the tactical level. For enhanced efficiency, suggestions at the tactical level are categorized into system front-end/back-end heavy lifting, other back-end system suggestions, and system extensions. Several AI technologies, such as expert-system rule engines, machine learning models, and natural language understanding models, can be applied at both levels. Throughout the paper, suggestions are made about how to apply these technologies to enhance efficiency, and the effort required for data preparation and model training is discussed. The pathways presented are only feasible with a supporting, intelligent IT infrastructure: intelligence needs to be built not only within the warehouse system but also extended out to the surrounding ecosystem. The paper wraps up by highlighting and reiterating the suggestions and insights to help stakeholders make the most of the possibilities AI offers for decision-making in warehouse management. With the rapid growth of e-commerce, flexible, adaptable AI-driven DSS could be the solution needed to help warehouse management keep up with the ever-increasing pace and dynamism of the industry.
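To picture the expert-system rule engine mentioned above at the operational level, here is a minimal C sketch that assigns a storage zone to an incoming SKU; the rules, thresholds, and zone names are illustrative assumptions, not the paper's system.

#include <stdio.h>

/* Rule-engine sketch for a putaway decision; rules fire in priority order. */
typedef struct {
    double daily_picks;   /* demand velocity for the SKU */
    double weight_kg;
    int    hazardous;
} Sku;

static const char *assign_zone(const Sku *s)
{
    if (s->hazardous)          return "HAZMAT cage";       /* safety rule first */
    if (s->daily_picks > 50.0) return "forward pick face"; /* fast-mover rule   */
    if (s->weight_kg > 25.0)   return "ground-level rack"; /* ergonomics rule   */
    return "reserve storage";                              /* default           */
}

int main(void)
{
    Sku item = { 72.0, 4.5, 0 };  /* fast-moving, light, non-hazardous */
    printf("Putaway decision: %s\n", assign_zone(&item));
    return 0;
}

Production rule engines externalize such rules as data rather than code, but the priority-ordered if-chain conveys the decision logic that an operational DSS automates thousands of times a day.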
Article
Full-text available
Artificial intelligence and machine learning are being implemented by a constantly growing number of companies to develop a more efficient supply chain. The immense volume of data that companies produce and source along the supply chain can now be analyzed in real time, enabling better decision-making processes. This paper will explore how the utilization of these technologies is revolutionizing supply chain management. Two specific areas, demand forecasting and inventory management, will be explored in greater depth. The paper will then highlight the current trends and challenges of AI and ML in supply chain management and offer concluding remarks. Supply chain management is a complex system that connects multiple companies and encompasses the flow of goods, services, information, and finances. To cope with this complexity, more and more companies are turning to technologies like artificial intelligence (AI) and machine learning (ML) to gain a competitive edge. ML is a branch of AI that consists of systems and algorithms that can learn from data to improve decision-making. The number of companies that claim to be using ML has grown by more than 300% since 2015, with the overall AI market considered to be worth around $2 trillion. In the supply chain industry, companies are using ML to optimize delivery routes and times, predict delays, and detect variances in quality at an early stage. The use of AI technologies can optimize and execute supply chain tasks promptly, making them more capable than traditional supply chain setups.
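To connect the two areas highlighted above, here is a minimal C sketch in which a moving-average demand forecast drives a reorder-point check; the demand series, window, lead time, and safety stock are illustrative figures, and real systems would use richer models than a moving average.

#include <stdio.h>

/* Forecast demand as the mean of the last `window` observations. */
static double moving_average(const double demand[], int n, int window)
{
    double sum = 0.0;
    for (int i = n - window; i < n; i++)
        sum += demand[i];
    return sum / window;
}

int main(void)
{
    double weekly_demand[] = { 120, 135, 128, 140, 150, 160 };  /* units/week */
    int n = sizeof(weekly_demand) / sizeof(weekly_demand[0]);

    double forecast = moving_average(weekly_demand, n, 3);  /* last 3 weeks */
    double lead_time_weeks = 2.0, safety_stock = 80.0;
    double reorder_point = forecast * lead_time_weeks + safety_stock;

    printf("Forecast: %.1f units/week, reorder point: %.0f units\n",
           forecast, reorder_point);
    return 0;
}

The same structure holds when an ML model replaces the moving average: better forecasts tighten the reorder point, which is precisely where forecasting and inventory management meet.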
Article
Full-text available
By focusing on the shifting role of the supply chain to drive overarching business strategy, this research presents the journey of three companies in the quest to achieve digital leadership in the supply chain domain. Digital excellence in the supply chain has become more than a desirable goal for companies engaged in production, distribution, and retail activities. It is now a vehicle for communication and for expressing external value. Achieving digital excellence in the supply chain has traditionally meant bolstering the chain's internal performance with a beam of information technology-led light. In today's era of advanced technologies, which include artificial intelligence, machine learning, Internet of Things applications, robotics, and blockchain at its core, that well-lit path has become a highway with multiple lanes stretching toward endless possibilities. Companies are reaping huge benefits by developing innovative products and services or using next-generation technologies to transform their businesses. The leaders in each industry have already taken positions by demonstrating the power of digital excellence, but all companies that seek growth are wending their way toward supply chain digitalization, with varying degrees of speed and commitment. In this industry report-based research work, we describe the approaches and lessons learned from three very different companies as they seek supply chain digital leadership. What unites these companies is a belief in the transformative power of advanced technology and a conviction that supply chain performance is a key driver of overall business success.