Figure - available from: Complexity
Source publication
Recent advances in modern manufacturing industries have created a great need to track and identify objects and parts by obtaining real-time information. One of the main technologies utilized for this need is the Radio Frequency Identification (RFID) system. As a result of adopting this technology in the manufacturing industry environ...
Citations
... In the domain of industrial automation, innovative AI-driven approaches have been applied to optimise Radio Frequency Identification (RFID) network planning in flexible manufacturing systems. Azizi (2017) introduced a hybrid algorithm combining the redundant antenna elimination technique with ring probabilistic logic neural networks to address the dual challenges of achieving full tag coverage and minimising antenna deployment. The experimental results demonstrated that this method outperformed traditional GAs, achieving almost 100% network coverage while reducing both antenna count and signal interference. ...
The complex relationship between laser drilling parameters and hole properties complicates parameter selection and requires advanced optimisation methods. In this study, a fully connected artificial neural network (ANN) with three hidden layers was used to model the relationships between laser drilling parameters and the properties of holes produced in a 0.3 mm thick Haynes 242 alloy plate. The laser parameters varied in the following ranges: power (P) from 40 to 100%, frequency (ν) from 4 to 50 kHz, pulse duration (ti) from 16 to 350 ns, and laser delay time (tΣ) from 2 to 20 ms. Hole properties, including inlet (din) and outlet (dout) diameters, standard deviation of the inlet diameter (SDin), and eccentricity (εin), were determined using scanning electron microscopy image processing. For each laser setting, 9 holes were drilled with entrance diameters ranging from 15 to 60 μm. The trained ANN accurately predicted the hole properties based on the given laser drilling parameters, with deviations of no more than 2–3%. In addition, the ANN was combined with a genetic algorithm (GA) to solve the inverse problem of determining optimal laser parameters for a target din with minimum SDin and maximum εin. For a target din of 27 μm, the optimal parameters were P = 46%, ν = 49 kHz, tΣ = 19 ms and ti = 348 ns. The resulting sample showed deviations of 3.38% in din, 8.99% in εin and 7.34% in SDin, confirming the effectiveness of the proposed approach. By improving machining quality, minimising defects and increasing precision, this methodology can serve as a robust framework for optimising laser drilling processes in various materials, including steels and composites.
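The abstract above pairs a forward ANN surrogate with a GA that solves the inverse problem. The sketch below illustrates that pattern under stated assumptions: a toy analytic function stands in for the trained ANN (the real mapping would come from the fitted network), and a simple real-coded GA searches the published parameter ranges for a target inlet diameter of 27 μm; all coefficients and operators are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained ANN surrogate: maps laser parameters
# (P [%], nu [kHz], ti [ns], tS [ms]) to a predicted inlet diameter [um].
# The linear form is only a placeholder with plausible monotonic trends.
def surrogate(x):
    P, nu, ti, tS = x
    return 10.0 + 0.35 * P + 0.1 * nu + 0.02 * ti + 0.3 * tS

# Parameter ranges from the abstract: P, nu, ti, tS.
BOUNDS = np.array([[40, 100], [4, 50], [16, 350], [2, 20]], float)
TARGET = 27.0  # target inlet diameter, um

def fitness(x):
    return -abs(surrogate(x) - TARGET)  # higher is better

def ga(pop_size=60, gens=80, mut=0.1):
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, 4))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]           # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(4)
            child = w * a + (1 - w) * b                 # blend crossover
            child += rng.normal(0, mut * (hi - lo), 4)  # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, kids])
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]

best = ga()
```

With a well-behaved surrogate the GA quickly lands on parameters whose predicted diameter matches the target, which is the essence of the inverse-design step described above.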
... They are total generation cost minimization, minimization of power loss, and gross cost minimization. Azizi et al. (2017b) applied an optimization technique to a real industrial problem in a manufacturing application. Further, Azizi (2020a) tested artificial intelligence (AI) for obtaining optimal solutions for robotic models. ...
Determining effective power generation while reducing emissions and voltage deviations and preserving transmission line voltage stability is the goal of the proposed effort. In this work, the combined heat and power economic dispatch (CHPED) system is implemented on the IEEE-30 bus to assure the best possible power flow in the transmission line while fulfilling the load demand. As fossil fuel sources are depleting day by day, it is important to combine renewable energy sources for effective power generation. Renewable energy sources including wind, solar, electric vehicles, and tidal are integrated with the proposed system to lessen the need for fossil fuels in the production of electricity. The system becomes more complex as a result of wind uncertainty, the valve point effect, and transmission losses. To enhance system performance in dealing with non-linearity, the multi-trial vector-based monkey king evolution (MMKE) technique uses training-based optimization to guide control choices. To improve the search capability of the proposed technique, the suggested method combines chaotic-oppositional-based learning (COL) with MMKE (COMMKE). The suggested COMMKE algorithm has been tested on three distinct test systems, with and without renewable sources. In terms of convergence rate and best possible solution to the objective functions, the proposed algorithm outperforms other optimization techniques. The robustness of the recommended optimization technique has been evaluated by statistical analysis; to make this scrutiny rigorous, so that the robustness of the proposed technique can be judged more reliably, an analysis of variance (ANOVA) test is employed. To establish the superiority of the intended method, a comparison with tried-and-true optimization strategies has been made.
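One ingredient of the COL scheme mentioned above, opposition-based initialization, can be sketched as follows: each random candidate x is paired with its opposite lo + hi − x, and the fitter half of the combined set is kept as the starting population. The five-dimensional quadratic cost function here is a hypothetical placeholder for a dispatch objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective standing in for a dispatch cost function (hypothetical).
def cost(x):
    return np.sum((x - 3.0) ** 2)

lo, hi = np.zeros(5), 10.0 * np.ones(5)  # illustrative decision-variable bounds

def obl_init(n):
    """Opposition-based initialization: sample n points, mirror them,
    keep the n fittest of the combined 2n candidates."""
    pop = rng.uniform(lo, hi, size=(n, 5))
    opp = lo + hi - pop                      # opposite points, still in bounds
    both = np.vstack([pop, opp])
    costs = np.array([cost(x) for x in both])
    return both[np.argsort(costs)[:n]]       # best-first selection
```

Because either a point or its opposite tends to lie closer to the optimum, this start is on average no worse, and typically better, than plain random initialization.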
... In an industrial landscape rapidly transformed by technological advancements, the concept of smart factories becomes increasingly attainable, driven by the synergy of cyber-physical systems (CPS), the Internet of Things (IoT) (Azizi & Hashemipour, 2017; Mikołajczyk et al., 2023), and cloud-based manufacturing (Aron et al., 2023). This shift is a leap from Industry 3.0, where the introduction of computers marked a significant change, to Industry 4.0 (Azizi, 2020), characterized by interconnected computing and embedded intelligent systems, boosting efficiency and decision-making autonomy (Azizi, 2019b). ...
In the evolving landscape of the fourth industrial revolution, the integration of cyber-physical systems (CPSs) into industrial manufacturing, particularly focusing on autonomous mobile manipulators (MMs), is examined. A comprehensive framework is proposed for embedding MMs into existing production systems, addressing the burgeoning need for flexibility and adaptability in contemporary manufacturing. At the heart of this framework is the development of a modular service-oriented architecture, characterized by adaptive decentralization. This approach prioritizes real-time interoperability and leverages virtual capabilities, which is crucial for the effective integration of MMs as CPSs. The framework is designed to not only accommodate the operational complexities of MMs but also ensure their seamless alignment with existing production control systems. The practical application of this framework is demonstrated at the Platform 4.0 research production line at Arts et Métiers. An MM named MoMa, developed by OMRON Company, was integrated into the system. This application highlighted the framework’s capacity to significantly enhance the production system's flexibility, autonomy, and efficiency. Managed by the manufacturing execution system (MES), the successful integration of MoMa exemplifies the framework's potential to transform manufacturing processes in alignment with the principles of Industry 4.0.
... The manufacturing sector faces high electrical energy demand and cost challenges. With a significant percentage of global electricity consumption and carbon dioxide emissions attributed to manufacturing, there is a growing need for sustainable practices in the field (Azizi 2017). Engineers actively seek solutions to reduce the operational costs associated with electrical energy usage in production lines (Priarone and Ingarao 2017). ...
Assembly line balancing is the assignment of tasks to workstations in a production line to achieve optimal productivity. In recent years, energy studies in assembly line balancing have gained significant attention. Most existing publications focus on energy consumption in robotic assembly line balancing; this paper focuses on assembly line balancing with energy consumption in semi-automatic operation. A novel approach, the Substituted Tiki-Taka Algorithm, is introduced by incorporating a substitution mechanism that enhances exploration, improving the search for high-quality solutions in a non-convex combinatorial problem such as assembly line balancing with energy consumption. To evaluate the effectiveness of the Substituted Tiki-Taka Algorithm, a computational experiment is conducted using assembly line balancing with energy consumption benchmark problems. Additionally, an industrial case study is undertaken to validate the proposed model and algorithm. The results demonstrate that the Substituted Tiki-Taka Algorithm outperforms other existing algorithms in terms of line efficiency and energy consumption reduction. The findings from the case study indicate that implementing the Substituted Tiki-Taka Algorithm significantly increases line efficiency while simultaneously reducing energy consumption. These results highlight the potential of the proposed algorithm to positively impact manufacturing operations by achieving a balance between productivity and energy efficiency in assembly line systems.
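Line efficiency, one of the metrics the abstract reports on, is conventionally the ratio of total task time to the capacity of the opened workstations. A minimal sketch with hypothetical task durations and a hypothetical station assignment:

```python
# Line efficiency for an assembly line balance: total task time divided by
# the capacity provided by the opened stations. All numbers below are
# illustrative, not taken from the paper.
def line_efficiency(task_times, assignment, cycle_time):
    """assignment[i] is the station index task i is assigned to."""
    n_stations = len(set(assignment))
    total_work = sum(task_times)
    return total_work / (n_stations * cycle_time)

times = [4, 3, 5, 2, 6]           # hypothetical task durations
assign = [0, 0, 1, 1, 2]          # tasks grouped into 3 stations
ct = max(sum(t for t, s in zip(times, assign) if s == st)
         for st in set(assign))   # cycle time = busiest station load
eff = line_efficiency(times, assign, ct)   # 20 / (3 * 7) ~ 0.952
```

A balance is better when the busiest station's load (the cycle time) is driven down, which pushes this ratio toward 1.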
... The RPLN algorithm was successfully applied in developing a cost-effective 'Radio Frequency Identification (RFID)' network and addressing the RFID network planning challenge. Further to this work, Azizi [34,35] introduced a novel hybrid AI optimization technique that merges 'redundant antenna elimination (RAE)' with RPLN. This novel approach was applied to address the complex issue of 'RFID network planning (RNP)' in the manufacturing industry. ...
The component’s measurement is a step in the manufacturing process where the product’s quality is significantly impacted by measurement uncertainty factors like operator skill, the number of measuring points, and the number of samples. To minimize the effects of measurement uncertainties, proper training, measuring instrument calibration, and standardized procedures are important. This work introduces a novel methodology, ‘ANN-Regression-WASPAS’, used for estimating the uncertainty in hole diameter measurements. To measure the hole diameters, an experiment was designed using a Taguchi L27 orthogonal array. The ANN model was used for predicting the variations in hole diameter measurement. Further to this, a regression model was used to define the relationships between predicted values, actual values, and input factors. To mitigate measurement uncertainty, an estimated matrix was constructed by identifying the minimum values between the actual hole diameters and predicted hole diameters. The WASPAS method was used to optimize the obtained estimated matrix, and its Taguchi analysis was utilized for further confirmation. The experimental findings showed that the ‘ANN-Regression-WASPAS' method performed better than the traditional WASPAS approach using actual measured data, leading to a reduction of about 1.67% in the uncertainty of hole diameters. Furthermore, the ANN-Regression approach decreased the percentage uncertainty of the actual measured data by 5.62%. Finally, using the proposed approach, the uncertainty in hole diameter measurements was estimated to be 0.74%, which was regarded as satisfactory. The proposed methodology offers benefits to metrology researchers, quality control engineers, manufacturing engineers, design engineers, and optimization experts.
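The WASPAS step used above combines a weighted sum model (WSM) and a weighted product model (WPM) over a normalized decision matrix, Q = λ·WSM + (1 − λ)·WPM. A minimal sketch with a hypothetical decision matrix, weights, and benefit/cost split (not the paper's data):

```python
import numpy as np

# WASPAS over a hypothetical decision matrix:
# rows = alternatives, columns = criteria.
X = np.array([[0.74, 12.0, 3.1],
              [0.52, 15.0, 2.8],
              [0.61, 10.0, 3.5]])
w = np.array([0.5, 0.3, 0.2])            # criteria weights, sum to 1
benefit = np.array([True, False, True])  # column 2 treated as a cost criterion

# Normalize: benefit criteria by x / max, cost criteria by min / x.
R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)

wsm = (R * w).sum(axis=1)                # weighted sum model
wpm = np.prod(R ** w, axis=1)            # weighted product model
lam = 0.5
Q = lam * wsm + (1 - lam) * wpm          # joint WASPAS score
best = int(np.argmax(Q))                 # index of the preferred alternative
```

λ = 0.5 weights the two models equally; varying λ between 0 and 1 interpolates between a pure WPM and a pure WSM ranking.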
... The coverage of the tags, the total power sent by the antennas, and the overall interference of the network must all be computed by a trustworthy RFID network model. Multiple management packets must be transmitted in such networks, which might use up a lot of resources and decrease the network's effectiveness [19]. To establish coalitions and choose the appropriate coalition head, the interest linkages, physical ties, and energy availability of M2M devices are taken into account. ...
... Indirect trust [35] is modelled as in Eq. (17), in which K → agent group interacting with o + 1, a → hop, and V_y → feedback credibility. V_y is modelled as in Eq. (18), wherein S_y → similarity; the S_y among hops is modelled in Eq. (19). In Eq. (19), l → constant for similarity deviation, → punishment and reward term, and ℜ_y(o, o + 1) → personalized dissimilarity. ...
As the nodes in wireless sensor networks (WSNs) are powered by batteries, energy efficiency is a major concern. Since communication consumes the most energy, efficient routing becomes an effective solution. A typical routing strategy is to use hierarchical clustering algorithms. In hierarchical routing schemes, a clustered arrangement of sensor nodes (SNs) is used to facilitate data merging and aggregation. The cluster heads are in charge of obtaining data from the cluster’s SNs and delivering the gathered data to the base station (BS). These data are combined and pooled at the cluster head (CH) level, resulting in considerable energy savings. However, there are certain reliability issues in cluster head selection (CHS) and optimal routing. To model CHS and optimal routing in WSNs, this work carries out CHS using improved kernel fuzzy C-means clustering, selecting the CH by considering energy and distance. Then, optimal routing is done by deploying a novel optimization scheme named Customized Pelican with Blue Monkey Optimization (CP-BMO), which offers optimal routes by considering trust and risk constraints. The CP-BMO model also achieves better energy consumption, which is 35.65%, 29.42%, 38.79%, 25.31%, 39.85%, 39.16%, 9.55%, 29.39%, 39.32% and 39.32% superior to ABO, RHSO, PELICAN, BMO, F-FLY, AEFA, ML-AEFA, ETERS, GA and QOBOA, respectively. Finally, the simulation outcomes confirm the developed approach’s effectiveness on PDR, network lifetime, etc.
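The cluster-head selection step above rests on fuzzy C-means. Below is a plain FCM sketch (fuzzifier m = 2) over synthetic 2-D node positions; the paper's kernel and energy/distance weighting are omitted, and heads are simply picked as the node nearest each fuzzy center.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic groups of sensor-node positions (placeholder data).
nodes = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(6, 1, (20, 2))])

def fcm(X, c=2, m=2.0, iters=50):
    """Standard fuzzy C-means: alternate center and membership updates."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)   # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

U, centers = fcm(nodes)
# Pick the node closest to each center as that cluster's head.
heads = [int(np.argmin(np.linalg.norm(nodes - ctr, axis=1))) for ctr in centers]
```

The soft memberships in U let border nodes belong partially to several clusters, which is what distinguishes FCM from hard k-means in this setting.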
... Currently, technology is moving towards advanced surveillance systems based on AI models that operate without human intervention (Azizi, 2020; Latifinavid and Azizi, 2023). The fusion of AI and image processing in surveillance systems based on biometric traits offers several key advantages, including real-time identification, continuous monitoring, and non-intrusive authentication (Azizi and Azizi, 2019; Azizi et al., 2017). Recently, the scientific communities of computer vision and artificial intelligence have become increasingly interested in Person Re-Identification (PRe-ID) due to its various security-based applications, such as cross-camera person retrieval, which is challenging because the captured views from surveillance cameras are recorded under non-controlled conditions. ...
Video surveillance image analysis and processing is an important field in computer vision, and Person Re-Identification (PRe-ID) is among its challenging tasks. PRe-ID aims to find a target person who has already been identified and has appeared on a camera network, using a powerful description of their pedestrian images. The success of recent research on PRe-ID rests largely on effective feature extraction and representation, together with powerful learning of these features to correctly discriminate pedestrian images. To this end, two powerful features, Convolutional Neural Network (CNN) and Local Maximal Occurrence (LOMO), are modeled on multidimensional data in the proposed method, High-Dimensional Feature Fusion (HDFF). Specifically, a new tensor fusion scheme is introduced to combine and take advantage of the two types of features in the same tensor data even if their dimensions are not the same. To improve accuracy, we use Tensor Cross-View Quadratic Discriminant Analysis (TXQDA) to perform multilinear subspace learning, followed by cosine similarity for matching. TXQDA efficiently ensures the learning ability and reduces the high dimensionality resulting from high-order tensor data. The effectiveness of the proposed method is verified through experiments on three challenging, widely used PRe-ID datasets, namely VIPeR, GRID, and PRID450S. Extensive experiments show that the proposed method performs very well when compared with recent state-of-the-art methods.
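The final matching step, cosine similarity over learned descriptors, can be sketched as follows; the 64-dimensional feature vectors here are random placeholders standing in for subspace-projected embeddings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Rank a gallery against a query by cosine similarity:
# normalize all descriptors, then sort by dot product.
def cosine_rank(query, gallery):
    q = query / np.linalg.norm(query)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = G @ q
    return np.argsort(sims)[::-1], sims   # best match first

gallery = rng.normal(size=(10, 64))               # 10 placeholder identities
query = gallery[4] + 0.05 * rng.normal(size=64)   # noisy view of identity 4
order, sims = cosine_rank(query, gallery)
```

Because the descriptors are L2-normalized, the ranking depends only on direction, not magnitude, which is why cosine similarity is a common choice after subspace learning.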
... This technique is used in the research approach. In order to optimize the network of industrial applications in contemporary manufacturing, Azizi, A. [41] introduced a novel hybrid artificial intelligence (AI) algorithm. This algorithm suggests a hybrid approach that combines the advantages of various AI techniques to improve the optimization process. ...
The experimentation and analysis of a machine learning (ML) algorithm for automatic problem detection and real-time heat exchanger optimization (RTHEO) are presented in detail in this paper. Preparing datasets, optimizing heat exchanger parameters, identifying problems, self-training, and applying machine learning algorithms are the main areas of study. The suggested method utilizes MATLAB Simulink simulations that use polynomial regression to suggest parameters and a combination of algorithms to identify anomalies. The program continuously obtains real-time data, compares it with previous patterns, and notifies the user if any anomalies are detected. The algorithm adapts to observations through self-training, ensuring accurate and reliable predictions. The implementation is carried out in MATLAB, with robust error-handling mechanisms for real-world applications. The simulation procedure is discussed, and a future experimental setup is proposed to verify the program’s performance.
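The two ingredients described above, a polynomial regression that models normal operation and a residual check that flags anomalies in real-time readings, can be sketched as follows. The flow-temperature data are synthetic placeholders for heat-exchanger telemetry, and the 3-sigma band is an assumed threshold, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic telemetry: flow rate -> outlet temperature (hypothetical relation).
flow = np.linspace(1.0, 10.0, 50)
temp = 80 - 2.5 * flow + 0.08 * flow ** 2 + rng.normal(0, 0.2, 50)

coeffs = np.polyfit(flow, temp, deg=2)   # fit a quadratic operating curve
model = np.poly1d(coeffs)

resid = temp - model(flow)
threshold = 3 * resid.std()              # 3-sigma anomaly band

def is_anomaly(f, t):
    """Flag a reading whose deviation from the fitted curve exceeds the band."""
    return bool(abs(t - model(f)) > threshold)

normal_reading = is_anomaly(5.0, model(5.0) + 0.1)   # inside the band
faulty_reading = is_anomaly(5.0, model(5.0) + 5.0)   # far outside the band
```

Refitting the polynomial as new readings accumulate gives a simple form of the self-training the abstract describes: the band tracks the current operating regime.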
... In 2017, A. Azizi [13] created a "hybrid AI optimization technique" that treats RNP optimization as a hard learning problem; the method combines the Ring Probabilistic Logic Neural Networks (RPLNN) and Redundant Antenna Elimination (RAE) optimization techniques. ...
... In Eqs. (13) and (14), where β_H is a random number in (0, 1), the tag t attracts reader r_1, which advances to reader r′_1 [18]. ...
The difficulty of deploying an RFID network optimally is brought on by the rapid advancement of radio frequency identification (RFID) technology. It has been established that the RFID network planning (RNP) problem, which includes various constraints and objectives, is NP-hard. The research community has given a lot of attention to the use of Evolutionary Computation (EC) and Swarm Intelligence (SI) for solving RNP, but the suggested methods experienced trouble regulating the number of readers deployed in the network. The complexity and cost of the network, however, are considerably influenced by the number of installed readers. In this study, we introduce a hybrid BCPSO algorithm that seeks to maximize tag coverage while minimizing interference, power consumption, and load imbalance with a small number of readers. The suggested algorithm is a hybrid of a Chaotic Particle Swarm Optimization (CPSO), used to find the best positions for readers, which incorporates the chaos approach into the PSO algorithm to increase randomness in the search and address the issue of local minima, and a Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm, used to automatically count the number of readers and initialize the readers' coordinates. The proposed hybrid BCPSO algorithm improves the coverage of tags to 100% with the fewest readers while avoiding interference, and it also improves load balancing in three cases: case 30 by 99.94%, case 50 by 42.4%, and case 100 by 46.23%, according to comparison results between the proposed algorithm and the other algorithms in Jaballah and Meddeb (J Ambient Intell Human Comput 12:2905–2914, 2021. 10.1007/s12652-020-02446-5). The proposed hybrid BCPSO algorithm was also compared to the algorithm proposed in Cao et al. (Soft Comput 25:5747–5761, 2021. 10.1007/s00500-020-05569-1); the comparison results demonstrate that it improves the power in cases 50 and 100 by 72.35% and 65.91%, respectively, while improving the coverage of tags to 100% with the fewest readers and interference avoidance.
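The chaotic ingredient of CPSO, replacing uniform random draws in the velocity update with a logistic-map sequence, can be sketched as follows; the sphere objective, bounds, and coefficients are illustrative stand-ins, not the paper's RNP formulation.

```python
import numpy as np

# Logistic map on (0, 1): a standard chaotic sequence generator.
def logistic(x):
    return 4.0 * x * (1.0 - x)

def cpso(obj, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0.37):
    """PSO whose stochastic coefficients come from a chaotic map."""
    rng = np.random.default_rng(5)
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_f = pos.copy(), np.array([obj(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    chaos = seed
    for _ in range(iters):
        chaos = logistic(chaos)
        r1, r2 = chaos, logistic(chaos)      # chaotic draws instead of rand()
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, -5, 5)
        f = np.array([obj(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, obj(g)

best, best_f = cpso(lambda x: np.sum(x ** 2))   # sphere test function
```

The chaotic sequence is deterministic yet non-repeating, which is the property such hybrids exploit to keep the swarm from settling into local minima too early.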
... Section 2.2); when using natural connection density, GPU memory would be consequently filled up faster. However, if one considers simple connection objects like the ones used in weightless neural networks such as the Ring Probabilistic Logic Neural Networks discussed in [38], the maximum network size would naturally increase. The novel approach is currently limited to simulations on a single GPU, and future work is required to extend the algorithm to employ multiple GPUs as achieved with the previous algorithm [13]. ...
Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for a greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky-integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable or shorter than those obtained with other state-of-the-art simulation technologies while still meeting the flexibility demands of explorative network modeling.
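One of the high-level connection rules mentioned above, fixed in-degree (every target neuron draws K sources at random), can be sketched on the host side as follows; population sizes and K are illustrative, and a GPU simulator would build these synapse lists directly in device memory rather than as host arrays.

```python
import numpy as np

rng = np.random.default_rng(6)

def fixed_indegree(n_sources, n_targets, k):
    """For each target, draw k distinct source neurons at random.
    Returns parallel arrays with one entry per synapse."""
    sources = np.vstack([
        rng.choice(n_sources, size=k, replace=False)   # k distinct inputs
        for _ in range(n_targets)
    ])
    targets = np.repeat(np.arange(n_targets), k)
    return sources.ravel(), targets

src, tgt = fixed_indegree(n_sources=1000, n_targets=200, k=50)
```

Because every target independently draws its own sources, the rule parallelizes naturally over targets, which is what makes it attractive for direct instantiation in GPU memory.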