Lean Logistics is a work philosophy for identifying and eliminating waste from the supply chain. It is applied in different areas to avoid shortages of material or information. The purpose of this paper is to investigate the adoption of Lean and its principles applied to logistics to guide a material supply process. A survey was conducted among 25 experts located in the state of Nuevo León, Mexico, and 21 usable responses were obtained. A factor analysis was performed using the principal components method to identify the factors that affect the orientation of Lean Logistics. It was found that 76.19 percent of the experts' responses consider the concepts of Lean Logistics in the sourcing process, taking into account management's commitment. Areas of opportunity were identified, and it was observed that the Lean philosophy tends to be accepted within this area. The survey was limited to the state of Nuevo León; a future study should explore more areas and contemplate more variables and all the principles that apply in a Lean Supply Chain. Lean Logistics is applicable to eliminate activities that do not add value and to optimize resources for a process. Communication and effective training of area managers are necessary to overcome resistance to change and involve employees so that they know and understand the improvements brought by the methodology and concepts of Lean Logistics. This research sought to identify the factors and variables of greatest impact. The results provide information to improve logistics operations and guide the control of the variables involved in the material supply process.
Internet of Things (IoT) networks have been widely deployed to achieve communication among machines and humans. Machine translation can enable human-machine interactions for IoT equipment. In this paper, we propose to combine neural machine translation (NMT) and statistical machine translation (SMT) to improve translation precision. In our design, we propose a hybrid deep learning (DL) network that uses the statistical features extracted from the words as the data set. Namely, we use the SMT model to score the generated words in each decoding step of the NMT model, instead of directly processing their outputs. These scores are converted by classifiers into generation probabilities for the corresponding words and used for generating the output of the hybrid MT system. For the NMT, the DL network consists of the input layer, embedding layer, recurrent layer, hidden layer, and output layer. At the offline training stage, the NMT network is jointly trained with the SMT models. Then, at the online deployment stage, we load the trained models and parameters to generate the outputs. Experimental results on French-to-English translation tasks show that the proposed scheme can take advantage of both NMT and SMT methods, thus achieving higher translation precision.
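The per-step scoring described above can be illustrated with a toy sketch. The interpolation weight and softmax "classifier" below are illustrative assumptions, not the paper's actual trained classifier:

```python
import numpy as np

def hybrid_step_probs(nmt_logits, smt_scores, alpha=0.7):
    """Toy sketch of one decoding step of a hybrid NMT/SMT system.

    nmt_logits: unnormalized NMT scores over candidate words.
    smt_scores: SMT model scores for the same candidate words.
    alpha:      interpolation weight (an assumed hyperparameter).
    The SMT scores are mapped to probabilities by a softmax and
    interpolated with the NMT distribution.
    """
    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()
    p_nmt = softmax(np.asarray(nmt_logits, dtype=float))
    p_smt = softmax(np.asarray(smt_scores, dtype=float))
    return alpha * p_nmt + (1.0 - alpha) * p_smt

# Example: pick the output word for one decoding step.
probs = hybrid_step_probs([2.0, 0.5, 0.1], [0.2, 1.5, 0.1])
best_word = int(np.argmax(probs))
```

Here the NMT distribution dominates because of the larger weight, so the first candidate is chosen even though the SMT model prefers the second.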
Since the start of the current century, artificial intelligence has gone through critical advances that improve the capabilities of intelligent systems. Machine learning in particular has changed remarkably and caused the rise of deep learning. Deep learning shows cutting-edge results on even the most advanced, difficult problems. However, this comes with a trade-off in terms of interpretability. Although traditional machine learning techniques employ interpretable working mechanisms, hybrid systems and deep learning models are black boxes beyond our capacity to understand. To make such systems understandable, additional methods under explainable artificial intelligence (XAI) have been widely developed in recent years. In this sense, this study proposes a Convolutional Neural Network (CNN) model that runs a new form of Grad-CAM. By providing numerical feedback in addition to the default Grad-CAM output, the numerical Grad-CAM (numGrad-CAM) was used within the developed CNN model in order to provide an explainability interface for brain tumor diagnosis. In detail, the numGrad-CAM-CNN model was evaluated via technical and physician-oriented (human-side) evaluations. The model provided average findings of 97.11% accuracy, 95.58% sensitivity, and 96.81% specificity for the target brain tumor diagnosis setup. Additionally, numGrad-CAM integration provided 90.11% accuracy compared with the other CAM variations in the same CNN model. The physicians who used the numGrad-CAM-CNN model gave positive responses regarding the use of the model for an explainable (and safe) diagnosis decision-making perspective for brain tumors.
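The core Grad-CAM computation that numGrad-CAM builds on can be sketched framework-agnostically. The shapes and the extra scalar "numerical feedback" value below are illustrative assumptions; the paper's exact numGrad-CAM formulation is not reproduced here:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM sketch (NumPy only, framework-agnostic).

    activations: feature maps of a conv layer, shape (C, H, W).
    gradients:   d(class score)/d(activations), same shape.
    Returns a normalized (H, W) heatmap plus a scalar summary,
    hinting at the numerical feedback numGrad-CAM adds.
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of feature maps
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam, float(cam.mean())

# Random stand-ins for a real network's activations and gradients.
rng = np.random.default_rng(0)
acts = rng.random((8, 14, 14))
grads = rng.standard_normal((8, 14, 14))
heatmap, score = grad_cam(acts, grads)
```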
The Internet of Things (IoT) has been making people's lives more efficient and more comfortable in the past years, and it is expected to get even better. This improvement may benefit from the use of blockchain to enhance security, scalability, reliability and auditability. Recently, different blockchain architectures were proposed to provide a solution that is better suited for IoT scenarios. One of them, called appendable-block blockchains, proposed a data structure that allows transactions to be included in blocks that were already inserted in the blockchain. This approach allows appendable-block blockchains to manage large amounts of data produced by IoT devices through decoupled and appendable data structures. Nevertheless, consensus algorithms can impact throughput and latency in scenarios with large amounts of produced transactions, since IoT devices can produce data very quickly (milliseconds) while these data might take some time to be included in a block (seconds). Consequently, it is important to understand the behaviour of different consensus algorithms over appendable-block blockchains in this type of scenario. Therefore, we adapted the appendable-block blockchain to use and compare the impact of different consensus algorithms: Practical Byzantine Fault Tolerance (PBFT), witness-based, delegated Byzantine Fault Tolerance (dBFT) and Proof-of-Work (PoW). The results show that both dBFT and PBFT can achieve fast consensus (< 150 ms) in the context of appendable-block blockchains. We also present a discussion regarding attacks on each consensus algorithm to help one choose the best solution (considering performance and security issues) for each scenario.
The global energy transition process has generated a set of modifications in the generation and consumption of energy. Environmental objectives have gained great relevance for regions, countries and companies. The fishing sector has been identified as having a broad environmental impact, which is why the transition to cleaner energy sources in this sector has been considered. One of the proposed strategies is based on the transition from the diesel engines of ships to the use of liquefied natural gas (LNG); however, this transition requires guaranteeing the supply of fuel, as well as the conversion of units in operation and the promotion of LNG engines for new units. This work presents a proposal for the design of an LNG supply chain for the fishing industry in the State of Tampico in Mexico that allows evaluating the feasibility of the transition from the use of diesel to natural gas in fishing vessels. The main results show the feasibility of the transition in the fuel supply and economic and environmental benefits for the fishing industry. However, there is a significant challenge in converting units in operation to the use of natural gas due to the lack of public policies that promote and support its use in this sector.
Condition monitoring of industrial equipment has become a critical aspect of Industry 4.0. This paper shows the design, implementation and testing of a low-cost Industrial Internet of Things (IIoT) system designed to monitor electric motors in real time. This system can be used to detect operating anomalies and paves the way for building predictive maintenance models. The system is built using low-cost hardware components (wireless multi-sensor modules and single-board computers as gateways), open-source software and open cloud services, where all the relevant information is stored. The module collects real-time vibration data from electric motors. Vibration analyses in the temporal and frequency domains were carried out in both the modules and the gateways to compare their capabilities. This approach is also a springboard to using edge/fog computing to save cloud resources. A system prototype has been tested in the laboratory and in an industrial dairy plant. The results show that the proposed system can be used for continuous monitoring of any rotating machine with similar accuracy to professional monitoring devices but at a significantly lower cost.
Existing studies on the pilot contamination attack often assume that the attack and jamming strategies of adversaries are fixed; that is, the adversary makes no strategic adjustments in response to the detection scheme. In this paper, we analyze how an intelligent malicious user takes the legitimate user's behaviour into account and adjusts its attacking strategy during the training phase of wireless communication to improve its eavesdropping performance. Modeling the defender-attacker interaction as a Stackelberg game, Bob as the leader chooses his pilot training power, while a full-duplex eavesdropper as the follower determines the pilot contamination power according to the observed ongoing transmission of Bob's training signals. Two equilibriums under different strategy spaces are analyzed. Simulation results show that the proposed scheme can defend against an intelligent active eavesdropper with a higher secrecy rate and utility.
With the dramatic increase in the number of users and the widespread use of smartphones, most of the internet content today is delivered over cellular connections. The purpose of many active queue management algorithms developed for the cellular Long-Term Evolution network is to prevent forced packet drops in the Evolved Node B (eNodeB) Radio Link Control buffer and to improve delay and end-to-end throughput values. Although the algorithms developed in the literature improve some of the end-to-end throughput, delay, and packet data fraction values during bottleneck and congestion, they cannot balance these values. The proposed virtual queue management algorithm recalculates the average queue value and the packet dropping probability according to different traffic loads to solve the queue delay and queue overflow problems, providing a balance between throughput, delay, and packet data fraction. Simulation results illustrate that the proposed algorithm reduces the delay of the packets and increases fairness among users compared to the Drop-tail, Random Early Drop, Controlled Delay, Proportional Integral Controller Enhanced, and Packet Limited First In First Out Queue algorithms.
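The two quantities the algorithm recalculates, an average queue value and a packet dropping probability, follow the classic Random Early Drop pattern, which can be sketched as below. The thresholds and weight are illustrative values, not the paper's tuned parameters:

```python
def red_drop_probability(queue_samples, min_th=5, max_th=15, max_p=0.1, w=0.002):
    """Sketch of the RED-style calculation such AQM schemes build on:
    an exponentially weighted average queue length and a drop
    probability that grows linearly between two thresholds.
    """
    avg = 0.0
    probs = []
    for q in queue_samples:
        avg = (1 - w) * avg + w * q          # EWMA of instantaneous queue length
        if avg < min_th:
            p = 0.0                          # below min threshold: never drop
        elif avg >= max_th:
            p = 1.0                          # above max threshold: always drop
        else:                                # linear ramp between thresholds
            p = max_p * (avg - min_th) / (max_th - min_th)
        probs.append(p)
    return avg, probs

# A long run of queue length 10 drives the average toward 10,
# landing in the linear-ramp region between the two thresholds.
avg_q, drop_probs = red_drop_probability([10] * 5000)
```

A load-adaptive variant, as the abstract describes, would adjust `min_th`, `max_th`, or `max_p` according to the observed traffic rather than keeping them fixed.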
Nowadays, users are becoming more reserved about uploading their own data to the servers of service providers for fear of personal information disclosure. To meet this need for privacy, Federated Learning (FL) was proposed. As the servers nearest to users, edge servers are quite suitable for the execution of FL tasks, which has resulted in a new concept called Federated Edge Learning (FEEL). It allows users to conduct model training locally without uploading their own data, thus avoiding privacy disclosure. However, the Energy Consumption (EC) of training on devices becomes a main concern for users because the learning process is carried out on devices with limited battery capacity. From the standpoint of service providers, in turn, more attention is paid to model performance (i.e., the test dataset accuracy of the trained machine learning model), on which they usually have diverse requirements. Both EC and model performance are important metrics for FEEL, and from our engineering experience, service providers and users may have a conflict of interest in FEEL. In this paper, after modeling the two metrics, we identify the common factor between them (i.e., the size of the training data) and show that there is a tradeoff. We then add a workload constraint that regulates the common factor to the formulated problem and propose a resource optimization and device scheduling strategy to solve it, thus achieving the tradeoff between EC and model performance of FEEL. This strategy is based on the steepest descent method and an approximation algorithm, whose approximation ratio is also proved. By regulating the training workload threshold, the values of the two metrics can be dynamically adjusted. Achieving the tradeoff makes it possible to meet the needs of service providers and users at the same time.
More specifically, it can minimize the energy consumption of training devices on the premise of meeting the diverse requirements of service providers on model performance, which cannot be realized by other FEEL frameworks that do not achieve the tradeoff. Simulation results show that our proposed strategy is able to achieve this tradeoff, in contrast with two existing FEEL frameworks. Compared with another existing FEEL framework that also realizes the tradeoff, the tradeoff achieved by our proposed strategy is more biased towards EC. At the end of the simulation part, we summarize the characteristics of the proposed framework and the three existing FEEL frameworks.
Caching the content most likely to be requested at mobile devices in a cooperative manner can facilitate direct content delivery without fetching content from the remote content server, and thus alleviate user-perceived latency, reduce the burden on the backhaul and minimize duplicated content transmissions. In addition to content popularity, it is also essential to consider users' dynamic behaviour for real-time applications, which can further improve the communication chances between user devices, leading to efficient content service time. The majority of previous studies consider stationary network topologies, in which all users remain stationary during data transmission and each user can receive the desired content from the corresponding base station. In this work, we study an essential issue: caching content by taking advantage of user mobility and the randomness of user interaction time. In the cooperative caching problem, we consider a realistic scenario with user devices moving at various velocities. We formulate the cache placement problem as the maximization of saved delay under capacity and deadline constraints, by considering the contact duration and inter-contact time among the user devices. We design an on-policy-learning-integrated, fuzzy-logic-based caching scheme to cope with the high dimensionality of the proposed integer linear programming problem. The proposed caching scheme achieves a higher long-term reward and a higher convergence rate than the Q-learning mechanism. Extensive simulation results demonstrate that the proposed cooperative caching mechanism significantly improves performance in terms of reward, acceleration ratio, hit ratio and offloading ratio compared with existing mechanisms.
Agricultural activity near rivers and coastal areas sometimes implies spills of chemical and fertilizer products into aquifers and rivers. These spills strongly affect the water quality at river mouths and on beaches close to those rivers. The presence of these elements can degrade the water below the quality needed for its normal use, or even for recreation. When this polluted water reaches the sea, it can also have problematic consequences for fauna and flora. For this reason, it is important to rapidly detect where these spills are taking place and where the water does not meet the minimum quality to be used. In this article we propose the design and implementation of a LoRa (Long Range) based wireless sensor network for monitoring the quality of water in coastal areas, rivers and ditches, with the aim of generating an observatory of water quality in the monitored areas. This network is composed of several wireless sensor nodes endowed with sensors to measure physical parameters of water quality, such as turbidity and temperature, and weather conditions such as temperature and relative humidity. The data collected by the sensors are sent to a gateway that forwards them to our storage database. The database is used to create an observatory that permits monitoring of the environment where the network is deployed. We test different devices to select the one that presents the best performance. Finally, the final solution is tested in a real environment to check its correct operation. Two different tests were carried out: the first checks the correct operation of the sensors and the network architecture, while the second shows the devices' performance in terms of coverage.
Ultra-reliable and low-latency communication (URLLC) in fifth generation (5G) communication has enabled many potential applications, which promotes the development of the Internet of Things (IoT). In this paper, the URLLC system adopts the duty-cycle muting (DCM) mechanism to share unlicensed spectrum with the WiFi network, which guarantees fair coexistence. Meanwhile, we use mini-slots, user grouping, and the finite block length regime to satisfy the low-latency and high-reliability requirements. We establish a non-convex optimization model with respect to power and spectrum, and solve it to minimize the power consumption at the devices, where the closed-form expressions are obtained by several mathematical derivations and the Lagrangian multiplier method. Numerical simulation results are provided to verify the feasibility and effectiveness of the proposed scheme, which improves the system spectrum efficiency and energy efficiency.
In wireless ad hoc networks, neighbor discovery is necessary as an initial step. In this work we present LECDH (Low Energy Collision Detection Hello), an energy-aware randomized handshake-based neighbor discovery protocol for static environments. We carried out simulations with the Castalia 3.2 simulator and compared LECDH with an existing protocol, EAH (Energy Aware Hello), used as a reference. We conclude that the proposal outperforms the reference protocol in both one-hop and multi-hop environments in terms of energy consumption, discovery time, number of discovered neighbors, throughput, and discoveries per packet sent, for high duty cycles. Moreover, for a low number of nodes in LECDH, performance according to all five metrics improves in both environments as the duty cycle is reduced. Overall, we found that our proposal follows more realistic assumptions and still allows nodes to succeed at discovering all their neighbors with probability almost 1. Moreover, a qualitative comparison of the reference solution and our proposal is included in this paper.
A current trend in the evolution of mobile communication networks consists in integrating Non-Terrestrial Networks (NTN) with terrestrial ones. One option is to implement the NTN part of this hybrid architecture using Unmanned Aerial Vehicles (UAV) that relay the uplink radio signals through optical wireless backhaul links. A good choice for the radio uplink waveform is conventional SC-FDMA, which mitigates the PAPR and enables a longer battery lifetime at the transmitter side. For the optical backhaul link, which is based on low-cost Visible Light Communication (VLC) technology, a non-orthogonal implementation of SC-FDMA is proposed. By doing so, it is possible to improve the end-to-end throughput by reducing the communication bandwidth (to make it fit the LED frequency response), mitigate the effect of light reflections, and increase the energy efficiency in the backhaul link. Since VLC relies on non-coherent IM/DD, the non-orthogonal SC-FDMA waveform must rotate the phase of the IDFT subcarriers in order to obtain real-valued signal samples at the output. Two strategies for relaying the data in the UAV node are evaluated, namely Detect-and-Forward and Decode-and-Forward. The first one recovers only the modulation (i.e., partial regeneration), whereas the second one regenerates the transmitted message up to the bit level (i.e., total regeneration). This paper studies the combination of relaying strategy and NB-IoT Modulation and Coding Scheme (MCS) that maximizes the end-to-end throughput at different UAV altitudes.
Hadoop is an open-source project from Apache with a distributed file system and the MapReduce distributed computing framework. The current Apache 2.0 license agreement supports on-demand payment by consumers for cloud platform services, helping users leverage their different hardware to provide cloud services. In a cloud-based environment, there is a need to balance the resource requirements of workloads, optimize load performance, and manage cloud computing costs. When the processing power of clustered machines varies widely, such as when hardware is aging or overloaded, Hadoop offers a speculative execution (SE) optimization strategy: by monitoring task progress in real time, it starts identical backup tasks on different nodes when tasks belonging to the same job are not running at the same speed, and the result of whichever copy completes first is used to maintain the overall progress of the job. At present, the SE strategy's incorrect selection of backup nodes and resource constraints may result in poor Hadoop performance and in subsequent tasks failing to complete execution, among other problems. This paper proposes an SE optimization strategy based on near-data prediction, which analyzes real-time task execution information to predict the required running time and selects backup nodes based on actual requirements and data proximity, so that the SE strategy achieves its best performance. Experiments show that in a heterogeneous Hadoop environment, the optimization strategy can effectively improve the effectiveness and accuracy of various tasks and enhance the performance of cloud computing, so that platform performance benefits consumers better than before.
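The running-time prediction at the heart of backup-task selection can be sketched with the progress-rate heuristic popularized by Hadoop's LATE scheduler. The task dictionaries below are illustrative; the paper's predictor additionally accounts for data proximity, which is omitted here:

```python
def estimate_time_left(progress, elapsed):
    """Estimate a task's remaining running time from its observed
    progress rate (in the spirit of Hadoop's LATE scheduler).

    progress: fraction of the task completed, in (0, 1].
    elapsed:  seconds the task has been running.
    """
    rate = progress / elapsed          # observed progress per second
    return (1.0 - progress) / rate     # time to finish at the current rate

def pick_backup_candidate(tasks):
    """Choose the task with the largest estimated time left as the
    one to replicate speculatively on another node."""
    return max(tasks, key=lambda t: estimate_time_left(t["progress"], t["elapsed"]))

tasks = [
    {"id": "t1", "progress": 0.9, "elapsed": 90},   # healthy task, ~10 s left
    {"id": "t2", "progress": 0.2, "elapsed": 80},   # straggler, ~320 s left
]
slowest = pick_backup_candidate(tasks)
```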
In mobile edge computing (MEC) systems, enhancing the learning capabilities of edge nodes through Artificial Intelligence (AI) can improve the efficiency of dynamically allocating resources. In the scenario of edge service migration, various tasks on lightweight IoT devices are offloaded to edge nodes, and the services on edge nodes are migrated adaptively to another available node nearer to the users as they move. To speed up network application loading during service migration, this paper proposes an intelligent trace-driven predicting approach (ITPA) that improves the efficiency of I/O scheduling and the hit ratio of caching when migrating services between resource-constrained edge nodes. First, based on the characteristics of sequential access to the binary code of an application during its startup process, the request loading list is generated by tracing key I/O requests at that phase. Then, an intelligent algorithm is designed to search for and select the key I/O requests in the loading list. Finally, the efficiency of data acquisition is improved by implementing a prefetch strategy on the client side and a three-level caching strategy on the server side. Experimental results show that ITPA reduces the service startup time during stateless migration.
Active distribution networks (ADNs) can solve the problem of grid compatibility with large-scale, intermittent, renewable energy applications. As the core part of ADNs, advanced metering infrastructure (AMI) meets the reliability requirements of the system for monitoring, diagnosis and control through extensive data acquisition and effective data transmission. The fifth-generation (5G) New Radio (NR) with ultra-reliable low-latency communication (URLLC) can be applied in ADNs for data transmission. However, in ADNs the electromagnetic environment is complex, and the interference is diverse and time-varying. This scenario creates great challenges for data transmission in 5G communication networks. In this paper, we model the data transmission in 5G, design a rolling solution framework that goes from predicting interference to improving data repetition, and then allocate wireless resources. To adapt resource allocation to time-varying interference, we propose an interference prediction algorithm to accurately estimate the interference distribution over the whole scheduling cycle. Moreover, to meet the second-level resource-scheduling requirement, we model resource allocation as a dynamic programming problem with the goal of maximizing energy efficiency and solve it with a DDQN-based reinforcement learning algorithm.
IETF 6TiSCH, which is composed of the IEEE802.15.4e and IPv6 RPL protocols, is a highly reliable and low-power industrial wireless network protocol stack. IEEE802.15.4e is the medium access control (MAC) layer protocol of the stack; it defines a time-slotted channel hopping communication mode. IPv6 RPL is the network layer protocol of the stack; it allows multiple nodes to form a multi-hop network. Scheduling is vital to the 6TiSCH protocol stack, as it defines the MAC-layer cells in which network packets are sent and received. Herein, we propose an efficient distributed scheduling function (EDSF) for 6TiSCH wireless networks; it fully considers the use probability and distance of cells rather than selecting them at random. Additionally, a schedule collision detection algorithm is proposed to detect two pairs of neighbor nodes that use the same cell; it fully utilizes historical statistics of the cell packet delivery ratio. Finally, we implement the EDSF scheme and verify its performance through experiments on a 6TiSCH simulator. The experimental results show that our proposed scheme can achieve low end-to-end latency without additional costs.
The development of new technologies such as the Internet of Things and cloud computing tests the transmission capabilities of communication networks. With the widespread application of multiple wireless access technologies, it has become common for modern communication devices to be equipped with multiple network access interfaces. The increase in network attacks of various kinds significantly reduces the robustness of multipath TCP (MPTCP) transport systems. To address this problem, this paper proposes a network traffic anomaly detection model for MPTCP networks, called MPTCP-EMD. The model combines multi-scale detection and digital signal processing theory to implement anomaly detection based on the self-similarity of MPTCP network traffic. It uses the empirical mode decomposition (EMD) method to decompose MPTCP traffic data and reconstruct the valid signal by removing high-frequency noise and the residual trend term. Using the idea of sliding windows, the model then compares the changes in the Hurst exponent of the MPTCP network under different attack conditions to determine whether anomalies have occurred. The simulation results show that the EMD method can be used for anomaly detection of MPTCP network traffic. The Hurst exponent of the attacked MPTCP network significantly exceeds the range of the unattacked network and exhibits significant jitter.
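The self-similarity statistic the detector thresholds on, the Hurst exponent, can be estimated with the textbook rescaled-range (R/S) method sketched below. This is a generic estimator applied to raw series, not the paper's EMD-denoised pipeline; the window sizes are illustrative:

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the
    slope of log(R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [s for s in (16, 32, 64, 128, 256) if s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations from the mean
            r = dev.max() - dev.min()           # range of the deviations
            sd = seg.std()
            if sd > 0:
                rs_vals.append(r / sd)          # rescaled range of this segment
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(rs_vals)))
    return float(np.polyfit(log_s, log_rs, 1)[0])

rng = np.random.default_rng(1)
h_noise = hurst_rs(rng.standard_normal(4096))            # white noise: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.standard_normal(4096)))  # trending series: larger H
```

A sliding-window detector in the spirit of the abstract would recompute this estimate per window and flag windows whose exponent leaves the range observed under normal traffic.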
Using Wi-Fi signals to sense target activity is a promising field of study, owing to its convenience. However, it remains challenging to recognize target activity with high precision and stability due to the multi-path effect in Wi-Fi signals. In this paper, we propose a robust framework named WiPD for accurate activity recognition based on Wi-Fi phase difference data. Firstly, a novel feature representation mechanism for Wi-Fi activity recognition, named the visualized spectrum matrix (VSM), is proposed. The VSM is generated by performing a Short Time Fourier Transform on the Wi-Fi phase difference data. Then, we design a neural network that takes the VSM as input, namely WiPD-Net, in which the activity features are extracted by four convolutional neural network submodules and two WiPD-Block submodules. Experiment results show that our proposed WiPD-Net outperforms the existing baselines on our dataset and on one public dataset. In particular, WiPD-Net can reach an accuracy of up to 99.80%, and achieves good migration performance across five Wi-Fi environments.
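The VSM construction, a Short Time Fourier Transform over the phase-difference stream, can be sketched with a minimal NumPy STFT. The window length, hop size, and test tone are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def stft_magnitude(signal, win_len=64, hop=32):
    """Minimal STFT producing a VSM-like time-frequency matrix:
    rows are frequency bins, columns are time frames."""
    win = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * win   # windowed frame
        frames.append(np.abs(np.fft.rfft(seg)))     # magnitude spectrum of the frame
    return np.array(frames).T                       # shape: (freq bins, time frames)

# A 50 Hz tone sampled at 1 kHz stands in for phase-difference data.
t = np.arange(1024) / 1000.0
sig = np.sin(2 * np.pi * 50 * t)
vsm = stft_magnitude(sig)
```

With a 64-sample window at 1 kHz the bin spacing is 15.625 Hz, so the tone's energy concentrates around the third and fourth bins of every frame.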
Similarity search in streaming time series is a challenging problem due to tight requirements on processing streaming data and returning feedback, e.g., quickly processing a high-speed time-series stream and accurately returning the found results to a query system. These difficulties urge researchers in time-series data mining to have a framework at hand for building systems of similarity search in streaming time series on top of a platform specializing in handling streaming data. In this paper, we introduce a framework for similarity search in streaming time series based on Spark Streaming. Subsequently, a prototype system implementing the framework is proposed to demonstrate the feasibility of the framework for building similarity search systems that can work efficiently and effectively in a streaming context. In addition, the prototype system takes advantage of SUCR-DTW to perform similarity search efficiently in a streaming environment under Dynamic Time Warping. The experimental results obtained from the prototype system demonstrate that the Spark job of similarity search in streaming time series is accomplished quickly and accurately. The subsequences of streaming time series that are similar to predefined queries are found in near real time, and they are the same as those obtained from the execution of similarity search in streaming time series by a reference system. Furthermore, the prototype system has high scalability and works stably while processing time-series streams at a high steady rate. These experimental results also underline the value of combining Spark Streaming and SUCR-DTW to handle this challenging problem.
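The distance measure underlying SUCR-DTW is classic Dynamic Time Warping, whose basic recurrence is sketched below. SUCR-DTW adds incremental updates and lower-bound pruning for the streaming setting, which this plain dynamic program omits:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance with squared-error
    local cost and no warping-window constraint."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # match both
    return d[n][m]

dist_same = dtw_distance([1, 2, 3], [1, 2, 3])        # identical series
dist_warped = dtw_distance([1, 2, 3], [1, 1, 2, 2, 3])  # time-warped copy
```

The warped copy still has distance zero, which is exactly why DTW is preferred over Euclidean distance for subsequences that drift in time.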
Multi-access Edge Computing (MEC) aims to reduce mobile service latency and free users from resource constraints by deploying cloud services closer to users. However, as network conditions change, the service requirements of users cannot be fulfilled under a fixed deployment of MEC nodes. Consequently, the placement of MEC nodes has attracted increasing research attention. In particular, with Network Function Virtualization (NFV), MEC functions are allowed to be deployed on any edge node that has the NFV Infrastructure (NFVI), and these MEC-function-enabled edge nodes can become MEC nodes. In this case, how to deploy these MEC nodes flexibly to cope with dynamic changes in network load becomes very important. In this paper, we propose an Online Adjustment based MEC node Placement mechanism (OAMP). First, the node placement problem is formulated as a set cover problem based on the average historical load of nodes, and a depth-first-search backtracking algorithm is used to obtain the optimal initial placement strategy. Then, based on users' QoE (quality of experience), a fuzzy neural network is used to determine whether the deployment of MEC nodes needs to be adjusted. Finally, the number and locations of MEC nodes are updated intelligently by a Deep Q-Network (DQN) algorithm. The proposed OAMP aims to solve where to deploy MEC nodes and how to adjust the deployment in response to dynamic changes in the network. Simulation results show that OAMP can effectively reduce the deployment cost while ensuring users' QoE, and achieves a lower Service Level Agreement (SLA) violation rate.
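The set-cover formulation of the initial placement can be illustrated with the standard greedy approximation. The paper solves the problem exactly by backtracking; the greedy ln(n)-approximation below is a common stand-in, and the region/candidate data are invented for the example:

```python
def greedy_set_cover(universe, candidates):
    """Greedy approximation for set cover: repeatedly pick the
    candidate node covering the most still-uncovered regions.

    universe:   set of user regions that must be covered.
    candidates: dict mapping candidate node -> set of regions it covers.
    Returns the list of chosen nodes in selection order.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical coverage sets for four candidate edge nodes.
coverage = {"a": {1, 2, 3}, "b": {2, 4}, "c": {4, 5}, "d": {5}}
nodes = greedy_set_cover({1, 2, 3, 4, 5}, coverage)
```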
With the development of new-generation information technology, many traditional factories have begun to transform into smart factories. How to process the huge volume of data in smart factories so as to improve production efficiency is still a serious problem. Based on the characteristics of the smart factory, a fog computing framework suitable for smart factories is proposed, and Kubernetes is used to automatically deploy containerized smart factory applications. First, for the fog computing scenario, an improved interval division genetic scheduling algorithm, IDGSA (Interval Division Genetic Scheduling Algorithm), based on the genetic algorithm is proposed to schedule and allocate tasks in the smart factory. We consider the optimization of task execution time and resource balance at the same time, and the optimized scheduling decision is given in combination with IDGSA. Second, we further design an architecture for cloud and fog collaborative computing. In this scenario, we propose IDGSA-P (Interval Division Genetic Scheduling Algorithm with Penalty factor), an optimization based on IDGSA. Finally, we carry out simulation experiments to verify the performance of the proposed algorithms. The simulation results show that, compared with the Kubernetes default scheduling algorithm, IDGSA can reduce data processing time by 50% and improve the utilization of fog computing resources by 60%. Compared with the traditional genetic algorithm, IDGSA can reduce data processing time by 7% and improve the utilization of fog computing resources by 9% with fewer iterations. Compared with the conventional Joines&Houck method, the proposed IDGSA-P algorithm converges much faster and achieves better optimization results. Further, the simulation shows that IDGSA-P in cloud and fog collaborative computing can reduce the total task delay by 18% and 7%, respectively, compared to cloud-only and fog-only computing.
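The genetic-algorithm baseline that IDGSA improves upon can be sketched for the task-to-node assignment problem. This is a plain GA minimizing makespan; IDGSA's interval-division encoding and IDGSA-P's penalty factor are not reproduced, and all parameters below are illustrative:

```python
import random

def ga_schedule(task_times, n_nodes, pop_size=30, gens=60, seed=42):
    """Plain genetic algorithm assigning tasks to fog nodes so the
    makespan (busiest node's finish time) is minimized.
    A chromosome is a list mapping each task index to a node index."""
    rng = random.Random(seed)
    n = len(task_times)

    def makespan(chrom):
        load = [0.0] * n_nodes
        for task, node in enumerate(chrom):
            load[node] += task_times[task]
        return max(load)

    pop = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        elite = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                 # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(n_nodes)
            children.append(child)
        pop = elite + children
    best = min(pop, key=makespan)
    return best, makespan(best)

times = [4, 7, 2, 9, 3, 5, 6, 1]                   # hypothetical task durations
assignment, span = ga_schedule(times, n_nodes=3)
```

Since the durations sum to 37 over 3 nodes, no schedule can beat a makespan of 13; the GA should land at or near that bound.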
In order to obtain scientific, quantitative decision-making for college physical education teaching that integrates ideological and political content, and to improve its teaching effect, an evaluation model based on principal component analysis (PCA) is proposed. Evaluation indexes such as teaching content, activity planning and organization, and sports teams are defined; data are collected and initially evaluated for each index; and PCA is applied to standardize the indexes, compute the eigenvalues and eigenvectors of the correlation matrix, and determine the number of principal components, from which the evaluation model of teaching effect is built. Experiments show that the model's linear correlation coefficient and rank correlation coefficient average 97.3% and 96.7%, respectively, higher than other methods. After fusion, the teaching effect rises across majors: the score of English-major students increased by 0.75, while architecture majors showed the smallest improvement, 0.12. The evaluation results of the proposed model are more accurate and can effectively improve the effect of college physical education integrated with ideological and political teaching.
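The principal-component step can be illustrated in miniature. The sketch below computes the covariance matrix of two evaluation indexes, extracts its eigenvalues in closed form, and reports the variance ratio of the first component; the data are invented:

```python
# Two-index PCA sketch: for a symmetric 2x2 covariance matrix the
# eigenvalues have a closed form, so the "number of principal components"
# decision reduces to inspecting the explained-variance ratio.

import math

def pca_2d(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Sample covariance matrix [[a, b], [b, c]].
    a = sum((x - mx) ** 2 for x in xs) / (n - 1)
    c = sum((y - my) ** 2 for y in ys) / (n - 1)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Eigenvalues of a symmetric 2x2 matrix via trace and determinant.
    tr, det = a + c, a * c - b * b
    root = math.sqrt(max(0.0, tr * tr / 4 - det))
    lam1, lam2 = tr / 2 + root, tr / 2 - root
    explained = lam1 / (lam1 + lam2)    # variance ratio of 1st component
    return lam1, lam2, explained

# Strongly correlated indexes: one component carries nearly all variance.
lam1, lam2, ratio = pca_2d([1, 2, 3, 4, 5], [2.1, 3.9, 6.0, 8.1, 9.9])
```

With more indexes the same idea applies to the full correlation matrix, typically keeping components until cumulative variance passes a threshold (e.g. 85%).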
Diabetes is considered among the most critical chronic health conditions around the world, because glucose levels can change drastically and, in some advanced cases, lead to critical conditions or even death. To prevent this, diabetes patients are advised to monitor their glucose level at least three times a day. Fingertip pricking, the traditional method for glucose tracking, causes patients distress and may infect the skin. In some cases, tracking the glucose level can be difficult, especially if the patient is a child, a senior, or has several other health issues. In this paper, a solution based on non-invasive Wireless Sensor Network (WSN) strategies is proposed. Near-Infrared (NIR) sensing, an optical non-invasive technique, is adopted to help diabetic patients continuously monitor their blood without pain. The proposed solution alerts the patients' parents or guardians when patients are about to reach critical conditions, especially at night, by sending alarms and Short Message Service (SMS) notifications, along with the patient's current location, to up to three people. Moreover, a Machine Learning (ML) model is implemented to predict future events in which the patient might have serious issues; to our knowledge, such forecasting of the patient's chart has not been applied before in this chronic health domain. The multivariate time-series data set AIM '94 has been used to train the proposed ML model. The collected data show a high level of accuracy when predicting serious critical conditions in glucose levels.
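The early-warning idea, predicting a critical glucose level before it is reached, can be sketched with a simple least-squares trend extrapolation; the thresholds, window size and readings below are illustrative and not from the AIM '94 data set:

```python
# Toy alerting sketch: fit a line to the recent glucose window, forecast
# one step ahead, and flag the reading if the forecast crosses a
# hypo-/hyperglycemia threshold (an SMS hook would be triggered here).

def predict_next(readings):
    """One-step-ahead forecast via a least-squares line over the window."""
    n = len(readings)
    xs = range(n)
    mx = (n - 1) / 2
    my = sum(readings) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, readings))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return my + slope * (n - mx)        # value at the next time step

def should_alert(readings, low=70.0, high=180.0):
    forecast = predict_next(readings)
    return forecast < low or forecast > high

falling = [95, 90, 84, 79, 73]     # mg/dL, trending toward hypoglycemia
steady  = [100, 101, 99, 100, 100]
```

The paper's ML model learns richer multivariate patterns; this linear forecast only conveys the predict-then-notify flow.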
In order to improve the quality of online distance education and students' online learning, an intelligent online distance education decision-making method based on cloud computing is proposed. Using cloud computing, decision-making resources for online education are provided through the resource layer; the G1 deviation-maximization method is adopted to calculate combination weights, and the optimal decision-making scheme is determined in combination with bipolar binary semantics and the cloud model; human-computer interaction windows are provided to view the decision-making scheme at the application layer; and optimal decision-making for intelligent online education is completed through the management functions of the cloud computing services. Experimental results show that this method can effectively obtain a decision-making scheme for online education. After applying this method, students' learning ability and academic performance improved significantly.
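The deviation-maximization weighting step can be illustrated as follows: a criterion whose scores spread the alternatives further apart receives a larger weight. The decision matrix is hypothetical:

```python
# Deviation-maximization weighting sketch: each criterion's weight is
# proportional to the total pairwise spread of its column, then
# alternatives are ranked by their weighted sum.

def deviation_weights(matrix):
    """matrix[i][j] = normalized score of alternative i on criterion j."""
    m = len(matrix[0])
    dev = [0.0] * m
    for j in range(m):
        col = [row[j] for row in matrix]
        dev[j] = sum(abs(a - b) for a in col for b in col)
    total = sum(dev)
    return [d / total for d in dev]

scores = [
    [0.9, 0.5],   # alternative 1
    [0.1, 0.5],   # alternative 2
    [0.5, 0.5],   # alternative 3
]
weights = deviation_weights(scores)
ranking = [sum(w * s for w, s in zip(weights, row)) for row in scores]
```

Here the second criterion is identical for all alternatives, so it receives zero weight; the paper additionally combines such objective weights with subjective G1 weights.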
In order to realize best-match searching of mobile intelligent education system resources, a resource search method based on a distributed hash table (DHT) is proposed. First, the Chord system, which is built on a distributed hash table, is combined with the vector space model (VSM) to form a resource discovery mechanism. After locating multi-attribute resources, the located resources are searched using the Chord-and-VSM resource search model; the similarity between query vectors and located resource vectors is then computed by establishing the vector relationship between located resources and user queries. Finally, according to the similarity results, the resources most relevant to the search content are returned. Test results show that the search request blocking rate stays far below its threshold, search performance is good, and the matching degree of the resource search results is high.
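The VSM matching layered on top of Chord lookup can be sketched with plain cosine similarity between a query vector and located resource vectors; the resource catalog below is illustrative, not a real Chord ring:

```python
# Cosine-similarity ranking sketch: after Chord locates candidate
# resources, the most relevant one maximizes similarity to the query.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_match(query, resources):
    """resources: {resource_id: attribute vector}; returns the most
    relevant resource id."""
    return max(resources, key=lambda r: cosine(query, resources[r]))

resources = {"video_a": [1, 0, 1], "slides_b": [0, 1, 0], "quiz_c": [1, 1, 1]}
top = best_match([1, 0, 1], resources)
```

In the real system each vector dimension would be a resource attribute or term weight rather than a hand-set 0/1 flag.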
We investigated the consistency between engineering education in universities and corporate needs for such education, and found problems in current engineering education such as low-level participation by enterprises, decoupling of teaching from industry demands, and difficulties for enterprises in participating in teaching reforms. In response to these problems, we propose a practical ability training platform featuring “university-enterprise co-construction”. The platform adopts a “credit bidding” method to improve the curriculum system, combined with an enterprise teaching mechanism. Moreover, we establish a university-enterprise collaborative teaching management and operation guarantee mechanism. With the proposed engineering education method, the practical ability of students and enterprises' satisfaction with graduates are greatly improved.
In order to improve the accuracy of sports teaching and training and promote scientific, standardized training, a correcting assistant system based on the .NET platform is designed. On top of the Microsoft .NET platform, a three-tier architecture is constructed. The data access layer uses ADO.NET and .NET XML to exchange database information and provides services for the business logic layer. The business logic layer's motion correction module uses Kinect, in the motion collection module, to extract skeleton angle features of the trainer's training actions; it adopts the dynamic time warping algorithm to match corresponding frames and calculate training scores, and realizes the replay of motion correction through a 3D reconstruction module. Results are finally displayed through the user interface layer. Experimental results show that the system can collect training actions and mark the key points, the average score is 8 points, corresponding frames are matched accurately, and more than 90% of training actions reach the very satisfactory (A) level.
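The frame-matching step can be illustrated with a standard dynamic time warping distance, which aligns a trainer's joint-angle sequence with the reference action even when the tempo differs; the scalar angle sequences below stand in for Kinect skeleton features:

```python
# Classic DTW by dynamic programming: d[i][j] is the cheapest alignment
# cost of the first i frames of `a` against the first j frames of `b`.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

reference = [10, 20, 30, 40]
slow_copy = [10, 10, 20, 20, 30, 30, 40, 40]   # same motion at half speed
shifted   = [15, 25, 35, 45]
```

DTW scores the half-speed repetition as a perfect match, which is exactly why it suits comparing a trainee's action to a reference performed at a different pace.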
Several recent research efforts have centered on maximizing the lifetime of Internet of Things (IoT) devices by deploying data reduction techniques on IoT nodes to reduce data transmission. Data compression methods can be seen as a direct way of achieving energy efficiency. A lossy compressor usually involves a trade-off between compression ratio and data distortion. This paper proposes a lightweight SZ compressor with a maximal compression ratio that sets this trade-off aside. The proposed approach was tested on ESP Wroom 32 and WiFi LoRa 32 microcontrollers. Given the importance of the quality of the data arriving at the gateway for analysis, such a lossy compressor with a high compression ratio can discard important data features and patterns. This paper addresses that problem by proposing a data enhancement method based on the U-Net architecture. The contribution of this paper is therefore twofold: (1) an efficient data reduction approach for energy optimization at the level of IoT nodes, and (2) a 1D U-Net-based data recovery approach at the level of the edge.
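The core SZ idea, predict each sample and quantize the prediction error within a user-set error bound, can be sketched in one dimension; this toy is not the actual SZ implementation:

```python
# Error-bounded predictive quantization sketch: the predictor is simply
# the previously decoded value, and each residual is stored as a small
# integer code, so every decoded sample stays within `eb` of the input.

def compress(data, eb):
    """Return integer codes; each decoded value is within `eb` of the input."""
    codes, prev = [], 0.0
    for x in data:
        err = x - prev                      # residual vs. decoded predecessor
        q = round(err / (2 * eb))           # quantization bin index
        codes.append(q)
        prev = prev + q * (2 * eb)          # mirror the decoder's state
    return codes

def decompress(codes, eb):
    out, prev = [], 0.0
    for q in codes:
        prev = prev + q * (2 * eb)
        out.append(prev)
    return out

readings = [20.1, 20.3, 20.2, 20.8, 21.0]   # e.g. temperature samples
codes = compress(readings, eb=0.25)
restored = decompress(codes, eb=0.25)
```

Because the encoder predicts from the *decoded* value rather than the raw one, quantization error does not accumulate; real SZ adds multi-mode prediction and entropy coding on top.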
In a multi-controller software-defined networking (SDN) architecture, solving the controller placement problem (CPP) has a direct effect on the generated control overhead in the network. We aim to minimize the control overhead exchanged in the network, especially in software-defined multihop wireless networks (SDMWN), i.e., a network that is built on multihop communications using a wireless medium. We solve this problem both optimally, using a nonlinear optimization model, and via a heuristic algorithm. The proposed heuristic approach is based on the genetic algorithm (GA). The objective of both the proposed optimization problem and the proposed GA algorithm is to find a given number of controllers, controller placements and assignments of controllers to devices while minimizing the generated control overhead in the network. Our results show the impact of different metrics, including the number of controllers, the arrival rate of new flows and the capacity limit of wireless links on the control overhead and the average number of controller-device and inter-controller hops. In addition, our results demonstrate that the GA-based heuristic approach can derive the same optimal solution for a small network with much less computational overhead, and can solve larger networks in a short period of time, making it feasible for non-trivial network sizes.
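A compact version of the GA-based placement heuristic: a chromosome is a set of k candidate controller locations, and fitness is the total device-to-nearest-controller hop count over a precomputed hop matrix. The crossover/mutation operators and the 5-node line topology are illustrative, not the paper's SDMWN model:

```python
# Elitist GA sketch for controller placement: keep the best half of the
# population, recombine parents' placements, and occasionally mutate by
# relocating one controller.

import random

def total_hops(controllers, hops):
    return sum(min(hops[d][c] for c in controllers) for d in range(len(hops)))

def ga_place(hops, k, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    n = len(hops)
    pop = [tuple(sorted(rng.sample(range(n), k))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: total_hops(ind, hops))
        survivors = pop[: pop_size // 2]            # elitism
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            genes = set(a) | set(b)                 # recombine placements
            while len(genes) > k:
                genes.discard(rng.choice(sorted(genes)))
            if rng.random() < 0.2:                  # mutation: move one controller
                genes.discard(rng.choice(sorted(genes)))
                genes.add(rng.randrange(n))
            while len(genes) < k:
                genes.add(rng.randrange(n))
            children.append(tuple(sorted(genes)))
        pop = survivors + children
    return min(pop, key=lambda ind: total_hops(ind, hops))

# 5-node line topology: hop count between nodes i and j is |i - j|.
hops = [[abs(i - j) for j in range(5)] for i in range(5)]
best = ga_place(hops, k=2)
```

The paper's fitness minimizes generated control overhead (including inter-controller traffic and link-capacity limits) rather than raw hop count, but the search skeleton is the same.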
Inverse kinematics is an important basic theory in the walking control of biped robots. This study focuses on parameter setting using the improved algorithm in inverse kinematics. By analyzing whether the robot legs can reach the expected positions from different initial positions, the parameter value range is determined. Notably, the parameter values have clear physical significance, and the robot legs move stably within the allowable value range. Furthermore, the superiority of the improved algorithm was validated by 3D simulation of leg motion. The present study can provide a theoretical basis for optimizing the leg motion of biped robots and developing related prototypes.
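For a two-link planar leg, the inverse kinematics has a closed form, which makes the reachability analysis above concrete; the link lengths and the elbow-down branch choice are assumptions, and the paper's improved algorithm is not reproduced here:

```python
# Closed-form 2-link planar IK: given a foot target (x, y), solve the
# hip angle theta1 and knee angle theta2, returning None when the target
# lies outside the leg's workspace.

import math

def two_link_ik(x, y, l1, l2):
    """Return (theta1, theta2) in radians, or None if (x, y) is unreachable."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None                                  # outside the workspace
    theta2 = math.acos(c2)                           # elbow-down branch
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

angles = two_link_ik(0.3, -0.4, l1=0.3, l2=0.3)
```

The reachability test (|c2| <= 1) is precisely the kind of condition whose parameter range carries physical meaning: targets beyond l1 + l2 or inside |l1 - l2| are rejected.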
In order to effectively identify patterns of personalized adaptive learning in online education and improve the recommendation satisfaction of personalized learning resources on online education platforms, this paper studies a pattern recognition method for personalized adaptive learning. Learning behavior data from the online education platform are mined, preprocessed, clustered and subjected to correlation analysis, and the resulting data are used to construct a model of the learner's personalized adaptive learning characteristics. On this basis, a learning pattern recognition framework is constructed that recognizes personalized adaptive learning patterns from four aspects: cognitive level, learning style, interactive behavior patterns and online social learning characteristics. Experimental results show that this method can effectively identify learners' personalized adaptive learning patterns, including interactive learning behavior patterns and online social learning patterns. The personalized learning resources recommended by the platform according to these identification results obtained a high learner satisfaction score of 93.27%.
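The clustering step on mined behavior features can be sketched with plain k-means over, say, (session length, interaction count) pairs; both the feature choice and the data are illustrative:

```python
# Minimal k-means over 2-D behavior features with deterministic seeding:
# learners separate into short/passive vs. long/interactive groups.

def kmeans(points, k, iters=50):
    centers = points[:k]                     # deterministic seeding
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Two obvious behavior groups: short passive sessions vs long interactive ones.
data = [(5, 1), (6, 2), (5, 2), (40, 20), (42, 22), (41, 19)]
centers, groups = kmeans(data, k=2)
```

A production pipeline would standardize features and pick k with a validity index; this sketch only conveys the grouping step.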
Traditional basketball shooting trajectory recognition methods suffer from low recognition accuracy and slow convergence. This paper therefore proposes a shooting trajectory recognition method based on transfer learning to accurately analyze the behavior pattern of shooting trajectories in a monitoring scene. An improved Hough method is used to obtain the basketball's position; combined with the basketball's speed, a cerebellar model neural network is constructed, a recursive unit is added via a recurrent neural network, and variable weights are designed to improve the network structure. Transfer learning then accelerates network optimization and compensates for missing information, realizing recognition of the basketball shooting trajectory. Experiments show that this method can accurately identify the shooting trajectory with minimal coordinate error, effectively reduce network training time, and improve both convergence speed and recognition accuracy.
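Once per-frame ball positions are available (e.g. from the Hough detector), a parabola fitted through three samples predicts the rest of the arc; the coordinates below are hypothetical and the paper's neural trajectory model is not reproduced:

```python
# Trajectory sketch: recover y = a*x^2 + b*x + c through three detected
# ball positions via Lagrange interpolation expanded to monomial form,
# then extrapolate the arc.

def fit_parabola(p0, p1, p2):
    """Return (a, b, c) of y = a*x^2 + b*x + c through three points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

def predict(coeffs, x):
    a, b, c = coeffs
    return a * x * x + b * x + c

# Ball positions sampled along a shot following y = -0.5*x^2 + 4*x + 2.
coeffs = fit_parabola((0, 2.0), (2, 8.0), (4, 10.0))
```

A negative leading coefficient (gravity pulling the arc down) is one cheap cue for distinguishing a shot from other ball movement.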
Existing data dynamic migration algorithms ignore the attribute characteristics of data during data layout, which leads to more data iterations in the perceptual virtual network, longer dynamic-migration downtime and lower migration efficiency. To solve this problem, a machine-learning-based dynamic migration algorithm for sensing data in virtual networks is proposed. A machine learning algorithm mines the attribute characteristics of the virtual network's sensing data, and Moran's I index is computed to analyze the spatial correlation of the sensing data. By calculating the spatial locations of the sensed data, data centers with lighter workloads in the virtual network are selected and the load of each candidate data center is calculated. By determining the target node, selecting the sensing data to migrate and setting a migration factor as the limiting condition, dynamic migration of the sensing data is realized. Experimental results show that the proposed algorithm can effectively reduce the number of iterative replication rounds, shorten the downtime of dynamic migration, and improve virtual network migration efficiency in both high and low dirty-page-rate environments.
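Moran's I, the spatial autocorrelation index used above to group correlated sensing data, is straightforward to compute; the 4-node chain adjacency below is a toy, not a real virtual network:

```python
# Moran's I: I = (n / W) * sum_ij w_ij (x_i - mean)(x_j - mean)
#                          / sum_i (x_i - mean)^2,  W = sum_ij w_ij.
# Positive I: neighboring values are alike; negative I: they alternate.

def morans_i(values, weights):
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

# Chain 0-1-2-3 with binary adjacency weights.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
smooth = morans_i([1.0, 2.0, 3.0, 4.0], w)   # neighbors alike  -> positive I
jagged = morans_i([1.0, 4.0, 1.0, 4.0], w)   # neighbors differ -> negative I
```

High positive I among a group of sensing records is the cue that they can be co-located (or co-migrated) without fragmenting correlated data.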
Long-distance education has become an important part of life in the COVID-19 era, and effective, intelligent privacy protection for its end users is an urgent problem. In view of the risk of disclosure of end users' location, social network and trajectory in the education system, this paper protects user privacy by deleting location information from the location set and providing an anonymous set for each location. First, the privacy levels of social networks are divided by weighted sensitivity, and anonymous sets in social networks are collected according to these levels. Second, after the best anonymous set is generated using the data-utility loss function as the criterion, it is split to obtain an anonymous graph that hides the social network information. Finally, a trajectory anonymous set is constructed to hide the user trajectory with the l-difference privacy protection algorithm. Experiments show that the presented algorithm is superior to other algorithms regardless of the number of anonymous sets, with gaps in relative anonymity level as large as 5.1 and 6.7. In addition, when the privacy protection intensity is 8, the trajectory loss rate stabilizes between 0.005 and 0.007, always below 0.01, and the clustering effect is good. The proportion of insecure anonymous sets produced by the algorithm is therefore small, the trajectory privacy protection effect is good, and the location, social network and trajectory privacy of distance education end users are effectively protected.
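The location-anonymity step can be sketched as hiding the real location inside an anonymous set of k locations, here chosen as the k-1 nearest candidates so the set stays spatially plausible; the candidate pool is invented and the paper's utility-loss criterion is not modeled:

```python
# k-anonymous location cloaking sketch: report a set of k locations so an
# observer cannot tell which member is the user's true position.

def anonymous_set(real, candidates, k):
    """Return k locations: the real one plus its k-1 nearest candidates."""
    dist = lambda p: (p[0] - real[0]) ** 2 + (p[1] - real[1]) ** 2
    dummies = sorted((c for c in candidates if c != real), key=dist)[: k - 1]
    return [real] + dummies

real = (3, 3)
pool = [(0, 0), (3, 4), (2, 3), (9, 9), (4, 4)]
cloak = anonymous_set(real, pool, k=3)
```

Choosing nearby (rather than random) dummies keeps the anonymous set believable, at the cost of a smaller cloaked area; the paper balances this with a data-utility loss function.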
Current video key frame extraction algorithms are affected by shot transitions, which degrades extraction accuracy. For this reason, a precise extraction algorithm for action key frames in online aerobics teaching video is studied. Starting from the color components of the video's RGB color space, and to ensure that color distances match human vision, a non-uniformly quantized HSV space is adopted and a one-dimensional feature vector is introduced, converting the online aerobics teaching video into a one-dimensional histogram of 72 bins; this realizes the segmentation of video shots and reduces the impact of shot transitions. After segmentation, the gray values of the video's histogram pixels are sorted to construct the dynamic frames of the teaching video; a sequence search processes the dynamic frames and extracts the feature vector of the video sequence, and a multi-layer core aggregation algorithm extracts the action key frames according to the extracted feature vectors. Experimental results show that the algorithm can effectively extract the key frames of aerobics video actions: the fidelity of the extracted key frames is above 0.9, and precision and recall are both above 99%.
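The 72-bin histogram arises from quantizing HSV into 8 hue x 3 saturation x 3 value bins; the sketch below uses even splits, whereas the paper's non-uniform bin boundaries may differ:

```python
# HSV quantization into a 72-bin (8*3*3) one-dimensional histogram:
# each pixel maps to a single bin index hq*9 + sq*3 + vq.

def hsv_bin(h, s, v):
    """h in [0, 360), s and v in [0, 1]; returns a bin index in [0, 71]."""
    hq = min(int(h / 45), 7)            # 8 hue bins
    sq = min(int(s * 3), 2)             # 3 saturation bins
    vq = min(int(v * 3), 2)             # 3 value bins
    return hq * 9 + sq * 3 + vq

def histogram(pixels):
    hist = [0] * 72
    for h, s, v in pixels:
        hist[hsv_bin(h, s, v)] += 1
    return hist

frame = [(10, 0.9, 0.9), (200, 0.2, 0.5), (200, 0.25, 0.55)]
hist = histogram(frame)
```

Shot boundaries are then found where the histogram distance between consecutive frames spikes, which is what makes the subsequent key-frame search robust to transitions.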
Aiming at the low coverage of teaching resource recommendation results, long platform running times and low recommendation accuracy of traditional methods, this paper designs an intelligent real-time news communication platform and applies it to education and teaching. The standard IaaS + PaaS + SaaS three-tier cloud architecture is adopted for the overall platform design, and news tracking and clustering are realized through the content acquisition and editing module, the distributed news clustering module, and the news communication effect tracking module. The TF-IDF algorithm is used for news data feature selection, and feature correlation degrees are calculated. Based on these results, a news communication data recommendation model built on an improved LDA model realizes effective resource recommendation. Finally, the designed platform is applied in practical teaching and its effect is analyzed. The experimental results show that the method's resource recommendation accuracy is high, up to 90%, the platform's running time stays around 1.0 s, and the recommended resources cover more fields, reaching essentially 100% coverage, which fully verifies its application value.
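The TF-IDF feature selection step can be sketched directly; the toy corpus and the add-one IDF smoothing are illustrative:

```python
# Plain TF-IDF: term frequency within a document times inverse document
# frequency across the corpus, so rare discriminative terms are weighted
# above common ones.

import math

def tf_idf(corpus):
    """corpus: list of token lists; returns one {term: weight} dict per doc."""
    n = len(corpus)
    df = {}
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    out = []
    for doc in corpus:
        weights = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            idf = math.log(n / df[term]) + 1   # +1 keeps ubiquitous terms nonzero
            weights[term] = tf * idf
        out.append(weights)
    return out

docs = [
    ["education", "cloud", "platform"],
    ["education", "news", "recommendation"],
    ["news", "news", "clustering"],
]
vecs = tf_idf(docs)
```

These per-document weight vectors are what the platform would feed into correlation computation and the improved LDA recommendation model.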
In order to improve the accuracy and performance of classroom teaching effect evaluation, an evaluation method for classroom teaching effect under the intelligent teaching mode is proposed. Based on the characteristics of the intelligent teaching mode, an evaluation index system is constructed comprising five indexes: basic quality, teaching attitude, teaching method, teaching ability and teaching effect. After the scores of each index are obtained by expert scoring, they are input into an extreme learning machine evaluation model optimized by the cuckoo search algorithm, and the final teaching effect score is obtained by solving the objective function. The experimental results show that the proposed method can effectively improve the evaluation accuracy of classroom teaching effect under the intelligent teaching mode, and provides a new method for classroom teaching effect evaluation.
In order to improve the quality of distance education and solve the slow data processing of existing teaching systems, a WEB-based intelligent distance education assistance system is developed in this paper. After verification, students, teachers and administrators log into the system, and their information is transmitted to the interfaces of the teaching and administration subsystems. The submitted information is merged using a Bayesian model for integrating educational resources in the digital cloud, creating a distance education database that supplies the system with data. At the same time, the business logic of the data is evaluated. After the data are converted to other formats in the convertible database, they are returned to the user interface to provide browsing and consulting functions for users. The experimental results show that the designed system realizes the remote assistance function of intelligent education and effectively improves teaching quality. The system's real-time data acquisition rate always equals the set value; its average acceleration is 5.5, giving higher data processing efficiency; its minimum safety factor lies between 7.8 and 8.5, indicating high stability; and user satisfaction exceeds 93%, with relatively high accuracy of the collected data. The designed assistance system can thus provide a stable and efficient application environment for distance education.