Research Article
Application of Artificial Intelligence Technology in Computer Network Security Communication
Fulin Li
Guangdong University of Science and Technology, Dongguan, Guangdong 523000, China
Correspondence should be addressed to Fulin Li; 1512440331@st.usst.edu.cn
Received 19 May 2022; Revised 22 June 2022; Accepted 3 July 2022; Published 21 July 2022
Academic Editor: Jackrit Suthakorn
Journal of Control Science and Engineering, Volume 2022, Article ID 9785880, 6 pages. https://doi.org/10.1155/2022/9785880
Copyright © 2022 Fulin Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In order to cope with the frequent challenges of network security issues, a method of applying artificial intelligence technology to computer network security communication is proposed. First, within the framework of computer network communication, an intelligent protocol reverse analysis method is proposed: by converting the protocol into an image and establishing a convolutional neural network model, artificial intelligence technology is used to map the data to the protocol result. Finally, the model is run on the test data to adjust the model parameters and optimize the model as far as possible. The experimental results show that, compared with the test model, training with the deep convolutional neural network model in this paper increases the accuracy by 2.4%, reduces the loss by 38.2%, and reduces the running time by a factor of 42. The correctness and superiority of the algorithm and model are thus verified.
1. Introduction
With the development of 5G, 6G technology has also begun to be studied. The Internet has spread all over the world and has become a part of contemporary life. As one of the future development directions, various IoT devices such as smart homes are developing even faster. Communication between different IoT devices [1], collaborative processing, and information transmission are all realized by sending data packets over the network.
In recent years, the frequency of botnets, darknets, illegal transactions, and network intrusions has gradually increased. Since protocols are the bridge of communication between these means, analyzing them helps to seize the lifeblood of network security and to keep networks secure. Network protocols can be divided into two categories according to their protocol format, process openness, and other conditions: public protocols and nonpublic protocols. Public protocols are those that disclose their format and content and are generally in wide use, for example, common network protocols such as TCP, UDP, DNS, and SMTP. A nonpublic protocol is a format set for particular needs, usually a unique, undisclosed protocol type, so it is also often referred to as a private network protocol or an unknown protocol format. However, according to current research, traditional protocol reverse analysis methods process the obtained binary bitstream data sets inefficiently. The methods are relatively simple and have certain limitations, and they cannot meet the needs of secure communication in today's network systems. In addition, common protocol reverse analysis tools can basically parse only common protocol types; unknown and unrecognized data packets, for which the corresponding prior knowledge is lacking, are very difficult to analyze.
Although reverse analysis technology for known protocol formats [2] already exists, related work on the reverse analysis of unknown protocol formats is still scarce or suffers from substantial limitations. Therefore, the main work of this paper is to apply artificial intelligence technology to unknown network protocols for feature extraction and then perform intelligent reverse analysis. Figure 1 lists the basic applications of artificial intelligence technology in the field of computer network information security.
2. Literature Review
Netzob is a semiautomatic method proposed by Wang et al. to automate part of the reasoning process for a protocol's structure. Netzob focuses on automating the reasoning process and does not involve the work of experts; a detailed lexical model and method are designed for this purpose. Netzob clusters messages using the unweighted pair group method with arithmetic mean (UPGMA). A cluster of messages is defined as a symbol; a symbol refers to a group of messages that have the same format and role from the perspective of the protocol [3]. Alireza et al. used the Needleman–Wunsch algorithm on each symbol in the network to align common strings. Common strings are defined as static fields, and the rest of the message as variable fields. A field refers to a set of tokens that have a common meaning from a protocol perspective. A symbol consists of several fields, each of which can accept one or more values [4].
AutoReEngine is a method proposed by Dinh et al. that receives network traffic of a single protocol as input. AutoReEngine mainly comprises four steps: data preprocessing, protocol keyword extraction, message format extraction, and state machine inference. In the data preprocessing step, the input traffic is divided into flows, and the packets in each flow are reassembled into messages. Protocol keyword extraction is carried out in two main steps [5]. Liu and Yangjun proposed that, in the first step of frequent string extraction, the Apriori algorithm be used to extract candidate keywords of the field format from the input message sequences. Here a length-1 item in the Apriori algorithm consists of 1 byte, a transaction consists of one message sequence, and the support measures are the session support rate (Rssr) and the site-specific session set support rate (Rset) [6]. Bistron and Piotrowski report that Rssr represents the proportion of flows that contain a candidate sequence and Rset represents the proportion of site-specific sessions that contain it, where a site-specific session refers to a group of flows with the same server. In other words, for item groups and candidate item groups built up gradually from length-1 to length-K, Rssr and Rset are determined, and frequent item groups are not extracted exactly as in the default Apriori algorithm: only items that simultaneously satisfy both thresholds, the threshold session support rate (Tssr) and the threshold site-specific session set support rate (Tset), are retained. Byte sequences covering the final set of frequently extracted items are then extracted, and the enclosing strings are determined for these byte sequences [7].
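The two-threshold filtering can be illustrated with a rough length-1 sketch (a simplification of the description above, not AutoReEngine's code; the flows, session grouping, and threshold values are made-up assumptions):

```python
from collections import defaultdict

def frequent_bytes(flows, sessions, tssr=0.5, tset=0.5):
    """Length-1 Apriori pass: keep byte values whose session support rate
    (fraction of flows containing the byte) and site-specific session set
    support rate both reach their thresholds."""
    in_flows = defaultdict(int)     # byte -> number of flows containing it
    in_sessions = defaultdict(int)  # byte -> number of site sessions containing it
    for msg in flows:               # one reassembled message per flow
        for b in set(msg):
            in_flows[b] += 1
    for group in sessions:          # each group: flows sharing the same server
        seen = {b for msg in group for b in set(msg)}
        for b in seen:
            in_sessions[b] += 1
    return [b for b in in_flows
            if in_flows[b] / len(flows) >= tssr
            and in_sessions[b] / len(sessions) >= tset]

# toy usage: two flows forming one site-specific session
flows = [b"GET /index", b"GET /data"]
print(frequent_bytes(flows, sessions=[flows]))
```

Longer candidate keywords would then be grown from these surviving items, level by level, exactly in the Apriori style described above.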
FieldHunter is a method proposed by Mathew that receives network traffic of a single protocol as input. FieldHunter first receives network flows as input and divides them into network messages; messages are delimited by TCP's PUSH flag, and for UDP one packet is treated as one message. The syntax inference step first checks whether the protocol is text based or binary based and tokenizes the message accordingly in the message tokenization module. A key step of FieldHunter is semantic reasoning [8]. Misra heuristically finds fields corresponding to predefined meaning types in the semantic reasoning step, where six predefined meanings are used: message type, message length, host identifier, session identifier, transaction identifier, and accumulator [9]. Vollertsen et al. believe that the main way to judge whether a field corresponds to each type of meaning is to apply a different test to each field type in vertical analysis; that is, each field has statistical characteristics across different traces. For example, to find fields corresponding to host identifiers, the system looks for a field that always carries a value uniquely corresponding to each source IP address across different traces [10].
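The host-identifier heuristic can be pictured roughly as follows (a sketch under the assumption that each message carries a candidate field value and its source IP; this is not FieldHunter's actual implementation):

```python
def is_host_identifier(messages):
    """Check whether a candidate field maps one-to-one to the
    source IP address across all observed traces."""
    ip_to_value = {}
    for msg in messages:
        ip, value = msg["src_ip"], msg["field"]
        if ip_to_value.setdefault(ip, value) != value:
            return False  # same host seen with a different field value
    values = list(ip_to_value.values())
    return len(values) == len(set(values))  # distinct hosts get distinct values

msgs = [{"src_ip": "10.0.0.1", "field": 0xA1},
        {"src_ip": "10.0.0.2", "field": 0xB2},
        {"src_ip": "10.0.0.1", "field": 0xA1}]
print(is_host_identifier(msgs))  # True
```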
In recent years, protocol reverse engineering has achieved fruitful results in various fields. Especially in the field of network security, the emergence of automatic protocol reverse technology has shed new light on network analysis. Studying network protocols, Dou et al. took reverse engineering as the entry point of protocol analysis, approaching reverse analysis from the perspectives of traffic syntax analysis and instruction timing analysis; however, because the research area is so wide, the protocol was not studied in sufficient depth [11]. In their paper, Iwendi et al. proposed a reverse protocol analysis technology based on network traffic: by analyzing the characteristics of traffic syntax and instruction-execution timing, a state machine analysis of the protocol was carried out, but both works lacked a systematic analysis of the protocol [12]. This paper differs from the above work: it studies mainly the grammar aspect, conducting a reverse analysis of the feature information of the protocol grammar, and it starts from different angles and different algorithms that verify each other, so as to analyze the protocol grammar systematically and intelligently.

[Figure 1: Application of artificial intelligence technology in the field of computer network information security — database security check, risk assessment of sensitive information, data desensitization management, sensitive data monitoring, data breach protection, and intelligent assisted judgment, supported by machine learning, knowledge maps, cognitive and semantic computing, and data mining.]
3. Research Method
3.1. Feature Extraction Algorithm Based on Neural Network.
A convolutional neural network (CNN) is a deep learning architecture that works in a way similar to how the human eye sees things and then feeds back. CNNs have great potential for applications in image classification, natural language processing, image caption generation, and more. In the past, CNNs were unable to solve complex problems due to a lack of computing power [13], but with the advent of graphics processing units (GPUs) and their use in machine learning, CNNs have re-emerged and surpassed other architectures in computer vision tasks. CNNs have attracted attention in many fields, and medical diagnosis is no exception. Image classification plays a key role in computer vision. It includes preprocessing image data, segmenting images, extracting key features, and classifying images into the corresponding classes. With CNNs classifying images effectively and accurately, this technology can be applied to medical diagnosis, face recognition, security, and other fields [14].
Since convolutional neural networks work well on image processing, and the convolution operation is required when using them, the convolutional layer can only process image data in matrix form. Therefore, the input data must be converted to an image. Building on the data preprocessing, the protocol data is grouped every 8 bits and converted into image values between 0 and 255; each protocol frame yields 40 image values in this range [15].
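To make this concrete, a minimal sketch of the conversion is given below (the truncation and zero-padding of frames to exactly 40 bytes is our assumption; the paper only states that each frame yields 40 values in 0–255):

```python
import numpy as np

N_BYTES = 320           # bits kept per protocol frame
N_IMAGE = N_BYTES // 8  # 40 pixel values in 0-255

def frame_to_pixels(frame: bytes) -> np.ndarray:
    """Group the bitstream into 8-bit chunks and read each chunk as an
    integer in 0-255, padding short frames with zeros (our assumption)."""
    data = frame[:N_IMAGE].ljust(N_IMAGE, b"\x00")
    return np.frombuffer(data, dtype=np.uint8)

pixels = frame_to_pixels(b"\x45\x00\x00\x54" * 10)
print(pixels.shape, pixels.min(), pixels.max())  # (40,) 0 84
```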
A one-stage convolutional network consists of a convolutional layer and a max-pooling layer: the convolutional layer is parameterized by its kernel size (kernel_size), its number of filters, and its strides; the first convolutional layer takes input of shape input_ranges; and the max-pooling layer is parameterized by pool_size.
When the input data set of a neural network is small, it is easy to overfit, which makes the model fall into a local optimum and reduces the training effect. This article uses the dropout function to prevent this from happening [16]. In order to make the model train faster and solve complex function problems better, the ReLU activation function is used here. The ReLU activation function is shown in the following formula:
ReLU(x) = \begin{cases} x, & \text{if } x > 0 \\ 0, & \text{if } x \le 0 \end{cases}. (1)
The advantages of the ReLU activation function are: (1) during backpropagation, the vanishing gradient problem can be avoided; (2) owing to the particular form of the ReLU function, inputs on the left side of the x axis output 0, so some neurons are switched off, which reduces the number of parameters in the network and alleviates overfitting; (3) compared with the sigmoid and tanh activation functions, the derivative is simple. The sigmoid function is an exponential function whose derivative must be evaluated during backpropagation, which is expensive to compute; using the ReLU function costs less.
The second stage of the convolutional neural network is similar to the first: the size of the convolution kernel is still 3 × 3, but the number of convolution kernels is increased to 128. Then, through dropout regularization, some redundant information is randomly discarded to prevent the model from overfitting, thereby improving its generalization ability [17]. The result is then fed into the flatten layer, which is used to "flatten" the input data, that is, to map the multidimensional input to one dimension. The function of the fully connected layer is to apply a series of functions to all the feature-extracted data and map each data set to the corresponding label classification, so that the expected results are as close as possible to the actual results. The fully connected layer plays the classification role in the whole network: it just performs a matrix multiplication, which is equivalent to a spatial transformation of the features and a statistical extraction and integration of the preceding information [18].
An activation function then performs a nonlinear mapping so that the data of each class corresponds one-to-one with a result. The fully connected layer can also change the dimensionality, turning high-dimensional information into low-dimensional information while retaining the useful information. The last fully connected layer is the explicit expression of the classification category. The fully connected part consists of two stages: first, the output of the previous layer is flattened and then fed into the fully connected network. The fully connected network has two layers: the first has 128 nodes with the ReLU activation function, and the last has 8 nodes with the Softmax activation function [19].
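Assembling the layers described above, a plausible Keras sketch of the network follows (the 8 × 5 × 1 input shape for the 40 pixel values and the "same" padding are our assumptions; the paper specifies only the kernel sizes, filter counts, dropout rate, and node counts):

```python
from tensorflow.keras import layers, models

def build_model(n_classes=8):
    model = models.Sequential([
        # Stage 1: 3x3 convolution with 64 filters, then max pooling and dropout
        layers.Conv2D(64, kernel_size=3, strides=1, padding="same",
                      activation="relu", input_shape=(8, 5, 1)),
        layers.MaxPooling2D(pool_size=2),
        layers.Dropout(0.25),
        # Stage 2: same structure, with 128 filters
        layers.Conv2D(128, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Dropout(0.25),
        # Flatten, then the two fully connected layers: 128 ReLU nodes, 8 Softmax nodes
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    return model

model = build_model()
model.summary()
```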
3.2. Model Training and Prediction. The process of model training with a neural network based on artificial intelligence technology is shown in Figure 2.
The cross-entropy loss function is used as the loss function for model training, and the stochastic gradient descent method is used to optimize the model. The initial learning rate is set to 0.1, and accuracy is selected as the indicator for measuring the model. The batch size during training is 64, and training loops 100 times [20]. TensorBoard is used as a callback function.
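Under these settings, the training configuration might be written as follows (a sketch, not the author's code; x_train, y_train, x_test, y_test, and the log directory are placeholders, and model is the network sketched in Section 3.1):

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),  # stochastic gradient descent
    loss="categorical_crossentropy",                       # cross-entropy loss
    metrics=["accuracy"])                                  # accuracy as the indicator

tb = tf.keras.callbacks.TensorBoard(log_dir="./logs")      # TensorBoard callback
model.fit(x_train, y_train,
          batch_size=64,   # 64 samples per step
          epochs=100,      # loop 100 times
          validation_data=(x_test, y_test),
          callbacks=[tb])
```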
function e cross-entropy loss function is to reflect the
effect of model training by calculating the difference between
the actual output and the expected output of the model, and
by continuously adjusting parameters and calculations, the
value of the loss function is reduced, so that the actual value
is closer to the expected value. e cross-entropy loss
function is shown in the following formula[21]:
C = -\frac{1}{n} \sum_{x} \left[ y \ln a + (1 - y) \ln(1 - a) \right]. (2)
In the formula, y is the expected output of the model, a is the actual output of the model, n is the number of output categories, and x is an input to the model. The neural network algorithm is mainly used for classification and identification, and each piece of data has one and only one category. A general activation function such as the sigmoid function is mainly used for binary classification. The Softmax function is an extension of the sigmoid function that can do multiclass classification and is not limited by the number of categories. The sigmoid function is defined by the following formula [22]:
S(t) = \frac{1}{1 + e^{-t}}. (3)
The graph of the sigmoid function is similar to that of Softmax; it also maps the input data to (0, 1). In addition, the sigmoid function is monotonically increasing and its derivative takes a very simple form, which makes it a convenient function. However, the sigmoid function can only do binary classification, and Softmax is its extension: it maps the k-dimensional input variable x to a probability-like interval and then selects the largest subscript according to the output probability; the corresponding label is the most likely data category. The Softmax algorithm is shown in the following formula [23]:
\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}. (4)
Because Softmax is an exponential function, large input values are amplified exponentially and negative inputs are greatly suppressed, so the degree of discrimination between classes increases and the classification effect of the model improves. Softmax is also continuously differentiable, so it works well with the gradient descent algorithm [24].
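A small numerical check of formulas (2) and (4) can be run as below (the logits and one-hot label are made-up values):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability; result is unchanged
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(y, a):
    # formula (2) for one sample: -[y ln a + (1 - y) ln(1 - a)], summed over classes
    return -np.sum(y * np.log(a) + (1 - y) * np.log(1 - a))

z = np.array([2.0, 1.0, 0.1])  # made-up logits for 3 classes
a = softmax(z)                 # mapped to a probability-like interval
y = np.array([1.0, 0.0, 0.0])  # one-hot expected output
print(a.round(3), cross_entropy(y, a).round(3), a.argmax())
```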
4. Result Analysis
4.1. Experiment Environment. In this article, we set nbytes = 320, so nimage = nbytes/8 = 40. Eight protocols are selected for identification, so N = 8. The test dataset D contains 8 kinds of labels, corresponding to the ARP-like, DNS-like, HTTP-like, ICMP-like, OICQ-like, SSDP-like, TCP-like, and UDP-like protocols. In the convolutional neural network module, we set kernel_size = 3, filters1 = 64, strides = 1, input_ranges = 5 × 8 × 240000, and pool_size = 2. The dropout regularization method is necessary; we let dropout = 0.25. The number of filters in the second convolutional stage is filters2 = 128. The number of nodes in the first layer of the fully connected network is node1 = 128, and the number of nodes in the last layer is node2 = 8. The learning rate of the resulting module is learn_rate = 0.1, and the number of training epochs is 100.
The total amount of data used in this paper is shown in Table 1. All the protocols are put together to form the training data set, the order of this data set is randomly shuffled, the first 78,000 shuffled sequences are taken for training, and the remaining 2,000 are used for testing [25].
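The shuffle-and-split step might look as follows (a sketch; frames and labels stand for NumPy arrays holding the 80,000 preprocessed frames, e.g., pixel vectors from the earlier conversion sketch, and their one-hot protocol labels; the fixed seed is our choice):

```python
import numpy as np

rng = np.random.default_rng(seed=42)    # seeded only for repeatability
order = rng.permutation(len(frames))    # randomly shuffle the combined data set
frames, labels = frames[order], labels[order]

x_train, y_train = frames[:78000], labels[:78000]  # first 78,000 for training
x_test,  y_test  = frames[78000:], labels[78000:]  # remaining 2,000 for testing
```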
4.2. Experiment Result. After analyzing and training on the protocols and testing 1029 unknown protocol frames, we compared three aspects: accuracy, loss, and running time. The experimental results are shown in Figures 3–5. It can be seen that the convolutional neural network method recognizes unknown protocols very well, with a recognition rate above 99%.
The analysis is as follows. During the experiment, the training set uses the CNN deep neural network algorithm, and the test set uses a transfer learning algorithm (DNN). The experimental results, including the comparison on the training set, are shown in Figures 3–5. It can be seen from the figures that the performance of CNN and DNN differs considerably: the accuracy of CNN is about 2.4% higher than that of DNN, the loss is reduced by 38.2%, and the running time is reduced by a factor of 42. The accuracy of transfer learning is clearly not as good as that of CNN in the early stages, nor is it as stable as CNN in the later tests. This is because, when using the convolutional neural network in this paper, the model should fit the distribution of the training data, the predicted data, and the real data as closely as possible, so the cross-entropy loss function is used to calculate the classification loss.
When using a neural network for protocol syntax analysis, a large-scale training set is usually required. If batch gradient descent is used, the amount of computation is very large and requires many resources. In this case, the stochastic gradient descent method is used instead of batch gradient descent. The stochastic gradient descent algorithm first randomly selects a group from the sample data for training, sorts by the loss of the output, and then extracts another group, continuing in this way until the loss drops below a certain threshold. Therefore, during training, a satisfactory model can be obtained without training on all the data. The CNN algorithm can analyze results quickly when the sample size is large, and the time complexity of CNN is basically stable at O(knp), where k is the number of iterations, n is the number of samples, and p is the average number of nonzero features per sample. With the stochastic gradient descent method, although the accuracy decreases somewhat and there may be many detours, the overall trend is towards the minimum loss value, which saves a great deal of time and makes the algorithm faster.
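The contrast with batch gradient descent can be made concrete with a bare-bones sketch (illustrative only; grad_loss is a placeholder for a function returning the loss gradient on one mini-batch):

```python
import numpy as np

def sgd(params, batches, grad_loss, lr=0.1, tol=1e-3, max_steps=10_000):
    """Update on one randomly chosen mini-batch at a time instead of the
    full data set: noisier steps, but far less work per step."""
    rng = np.random.default_rng()
    for _ in range(max_steps):
        batch = batches[rng.integers(len(batches))]  # random group of samples
        g = grad_loss(params, batch)                 # gradient on that group only
        params = params - lr * g
        if np.linalg.norm(g) < tol:                  # stop below a threshold
            break
    return params
```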
When predicting the result, the CNN algorithm puts the remaining 2000 pieces of data into the test set and outputs their labels and the accuracy. When converting the label, since the model prediction stores a similarity score for each class and the highest similarity identifies the protocol's label, this chapter only needs to find the position with the highest similarity and look up the protocol type it represents.

[Figure 2: Neural network feature extraction process — neural network construction, image feature extraction, data visualization, image feature storage.]

It can be seen that the CNN deep neural network algorithm can quickly and efficiently
identify unknown protocols and then output the predicted protocol type and similarity. In the comparison experiment with DNN, CNN was found to achieve significantly superior performance indicators.
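Decoding a prediction thus reduces to taking the position of the highest similarity (a sketch; the class ordering follows the eight labels listed in Section 4.1, and the probability vector is made up):

```python
import numpy as np

PROTOCOLS = ["ARP", "DNS", "HTTP", "ICMP", "OICQ", "SSDP", "TCP", "UDP"]

def decode(prediction: np.ndarray) -> tuple[str, float]:
    """Return the protocol whose similarity score is highest."""
    idx = int(prediction.argmax())
    return PROTOCOLS[idx], float(prediction[idx])

probs = np.array([0.01, 0.02, 0.90, 0.01, 0.01, 0.02, 0.02, 0.01])
print(decode(probs))  # ('HTTP', 0.9)
```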
5. Conclusion
This paper mainly carries out research on the bitstream protocol, including an intelligent reverse analysis method for the bitstream protocol and a feature extraction method for it. The research work converts the protocol data frame into an image and then uses a deep neural network algorithm from artificial intelligence technology to train on the image data frames; the trained model then identifies the protocol type adopted by an unknown protocol frame, so that the characteristic strings in the network protocol frame can be extracted to ensure the security of computer network communication.
This paper takes bitstream protocol data frames as the research object and multiprotocol identification as the goal, focusing on network communication security with the support of artificial intelligence technology. However, due to the limitations of the experimental environment and conditions, the experimental data set in this paper was obtained mainly in real time through the Wireshark tool.
Table 1: Protocol dataset.

Protocol type    Total number of data frames    Total size of the data frames (KB)
ARP              10000                          880
DNS              10000                          854
HTTP             10000                          867
ICMP             10000                          856
OICQ             10000                          848
TCP              10000                          865
UDP              10000                          855
Train            80000                          7096
[Figure 3: Accuracy comparison chart of the training set and test set (accuracy vs. epochs, CNN and DNN curves).]

[Figure 4: Comparison chart of training set and test set loss (loss vs. epochs, CNN and DNN curves).]

[Figure 5: Comparison chart of the running time of a training set and a test set (operation time in seconds vs. file size in KB, CNN and DNN curves).]
In follow-up research, this work can be improved and deepened in the following respects. 1. This paper focuses on feature mining and automatic identification of bitstream protocol data, that is, on analyzing the syntax of the protocol; the next step can analyze the bitstream protocol data more comprehensively from the semantics and timing directions of the protocol. 2. The system designed in this paper is a protocol identification system based on the B/S architecture, whose cross-platform compatibility is relatively poor; in the future, a C/S architecture protocol identification system should be studied to improve platform compatibility.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] C. Zhao, “Application of virtual reality and artificial intelli-
gence technology in fitness clubs,” Mathematical Problems in
Engineering, vol. 2021, Article ID 2446413, 11 pages, 2021.
[2] C. He and B. Sun, “Application of artificial intelligence
technology in computer aided art teaching,” Computer-Aided
Design and Applications, vol. 18, no. S4, pp. 118–129, 2021.
[3] Y. Wang, J. Ma, A. Sharma et al., “An exhaustive research on
the application of intrusion detection technology in computer
network security in sensor networks,” Journal of Sensors,
vol. 2021, Article ID 5558860, 11 pages, 2021.
[4] A. Farrokhi, R. Farahbakhsh, J. Rezazadeh, and R. Minerva, “Application of internet of things and artificial intelligence for smart fitness: a survey,” Computer Networks, vol. 189, no. 5, pp. 2105–2107, 2021.
[5] D. L. Dinh, H. N. Nguyen, H. T. Thai, and K. H. Le, “Towards AI-based traffic counting system with edge computing,” Journal of Advanced Transportation, vol. 2021, Article ID 5551976, 15 pages, 2021.
[6] S. Liu and L. Yangjun, “Application of human movement and
movement scoring technology in computer vision feature in
sports training,” IETE Journal of Research, vol. 8, no. 6,
pp. 1–7, 2021.
[7] M. Bistron and Z. Piotrowski, “Artificial intelligence appli-
cations in military systems and their influence on sense of
security of citizens,” Electronics, vol. 10, no. 7, p. 871, 2021.
[8] A. Mathew, “Artificial intelligence and cognitive computing
for 6g communications & networks,” International Journal of
Computer Science and Mobile Computing, vol. 10, no. 3,
pp. 26–31, 2021.
[9] B. B. Misra, “Advances in high resolution gc-ms technology: a
focus on the application of gc-orbitrap-ms in metabolomics
and exposomics for fair practices,” Analytical Methods,
vol. 13, no. 20, pp. 2265–2282, 2021.
[10] A. R. Vollertsen, A. Vivas, B. Van Meer, A. Van Den Berg,
M. Odijk, and A. D. Van Der Meer, “Facilitating imple-
mentation of organs-on-chips by open platform technology,”
Biomicrofluidics, vol. 15, no. 5, Article ID 051301, 2021.
[11] Z. Dou, J. Tian, Q. Yang, and L. Yang, “Design and analysis of
cooperative broadcast scheme based on reliability in mesh
network,” Mobile Information Systems, vol. 2021, Article ID
5554563, 18 pages, 2021.
[12] C. Iwendi, S. U. Rehman, A. R. Javed, S. Khan, and
G. Srivastava, “Sustainable security for the internet of things
using artificial intelligence architectures,” ACM Transactions
on Internet Technology, vol. 21, no. 3, pp. 1–22, 2021.
[13] N. Sun, T. Li, G. Song, and H. Xia, “Network security
technology of intelligent information terminal based on
mobile internet of things,” Mobile Information Systems,
vol. 2021, no. 8, 9 pages, Article ID 6676946, 2021.
[14] P. R. Jena and R. Majhi, “An application of artificial neural
network classifier to analyze the behavioral traits of small-
holder farmers in Kenya,” Evolutionary Intelligence, vol. 14,
no. 2, pp. 281–291, 2021.
[15] F. H. Khan, M. A. Pasha, and S. Masud, “Advancements in
microprocessor architecture for ubiquitous AI—an overview
on history, evolution, and upcoming challenges in AI
implementation,” Micromachines, vol. 12, no. 6, p. 665, 2021.
[16] Y. Wang, B. Bai, X. Hei, L. Zhu, and W. Ji, “An unknown
protocol syntax analysis method based on convolutional
neural network,” Transactions on Emerging Telecommunica-
tions Technologies, vol. 32, no. 5, 2021.
[17] S. Lee, A. Abdullah, N. Z. Jhanjhi, and S. H. Kok, “Honeypot
coupled machine learning model for botnet detection and
classification in iot smart factory—an investigation,” MATEC
Web of Conferences, vol. 335, no. 1, 2021.
[18] A. Chehri, I. Fofana, and X. Yang, “Security risk modeling in
smart grid critical infrastructures in the era of big data and
artificial intelligence,” Sustainability, vol. 13, no. 6, p. 3196,
2021.
[19] X. Du, W. Susilo, M. Guizani, and Z. Tian, “Introduction to
the special section on artificial intelligence security: adver-
sarial attack and defense,” IEEE Transactions on Network
Science and Engineering, vol. 8, no. 2, pp. 905–907, 2021.
[20] G. Kabanda, “Performance of machine learning and big data
analytics paradigms in cybersecurity and cloud computing
platforms,” Global Journal of Computer Science and Tech-
nology, vol. 21, no. 2, p. 2128, 2021.
[21] A. Efe, “Usage of artificial intelligence to improve secure software development,” The Journal of International Scientific Researches, vol. 6, no. 1, pp. 46–57, 2021.
[22] A. Sharma, R. Kumar, M. W. A. Talib, S. Srivastava, and
R. Iqbal, “Network modelling and computation of quickest
path for service-level agreements using bi-objective optimi-
zation,” International Journal of Distributed Sensor Networks,
vol. 15, no. 10, Article ID 155014771988111, 2019.
[23] S. Shriram, B. Nagaraj, J. Jaya, S. Shankar, and P. Ajay, “Deep
learning-based real-time AI virtual mouse system using
computer vision to avoid COVID-19 spread,” Journal of
Healthcare Engineering, vol. 2021, Article ID 8133076, 8 pages,
2021.
[24] R. Huang, “Framework for a smart adult education envi-
ronment,” World Transactions on Engineering and Technology
Education, vol. 13, no. 4, pp. 637–641, 2015.
[25] X. Liu, J. Liu, J. Chen, F. Zhong, and C. Ma, “Study on
treatment of printing and dyeing waste gas in the atmosphere
with Ce-Mn/GF catalyst,” Arabian Journal of Geosciences,
vol. 14, no. 8, pp. 737–746, 2021.