e-ISSN: 2582-5208
International Research Journal of Modernization in Engineering Technology
Volume:01/Issue:01/ January-2022 Impact Factor- 6.752
@International Research Journal of Modernization in Engineering, Technology and Science
Mr. Gopala Krishna Sriram*1
*1 Software Architect, EdgeSoft Corp, McKinney, TX USA
Abstract: In recent times, Big Data computing has become an important asset for firms; almost all industries now generate data in large amounts. Despite its significant potential, Big Data remains insecure and is a target of various security issues and problems. This research identifies that Big Data computing requires strong protection against a range of security challenges. Firms need to improve their technological infrastructures, adopt advanced protocols, limit access, and employ a surveillance team to monitor the status of their data. Doing so enables firms to identify issues and resolve them quickly before confidential data can be accessed.
Keywords: Big Data Computing, Security Issues, Security Challenges, Technological Infrastructures, Advanced Protocols, Surveillance Team, Confidential Data.
I. Introduction
Over the last few years, 'big data' has become the most important asset for organizations in almost every field. Data matters not only to the computer science and technology industries but also to other public and private sectors such as education, healthcare, and engineering. Data is essential to carrying out companies' daily activities, and business management can make better decisions and achieve its goals on the basis of the information extracted from data. It is estimated that 90 percent of all data in recorded human history was created in the last few years: five exabytes of data had been created by humans as of 2003, and at present that amount of information is created within two days.
Even though there has been significant interest in big data and a large number of firms have adopted it, major security challenges are associated with it. Because of these security issues, businesses are concerned and rely on a range of techniques to improve security. This paper reviews Big Data computing, its security concerns, and how those concerns can be addressed [Mor16].
Generally, Big Data refers to the management of datasets so large that they are beyond the ability of common software tools to manage and analyze efficiently. With advances in technology, the volume of generated data is expected to double in the coming years [IDC12]. One line of work examined the strategic journey of big data privacy protection, stating that big data can be stored effectively and efficiently using a number of strategies, but that big data privacy must be considered throughout [Dey12]. Other researchers surveyed the privacy techniques, obstacles, and requirements associated with protecting data. Big data protection plays an important role because it preserves the confidentiality of data. Data security differs from data privacy: privacy concentrates on the specific person whose data is used, ensuring that the data is used in the right way. In Big Data analytics, privacy is essential for a number of reasons, and weak protection techniques prove inefficient when applied to Big Data [MSP14].
The following situations may compromise a user's privacy in big data technology. First, during the transmission of data over the internet, personal information is shared with external sources, and third parties may use this information to harm the user. Second, personal data is sometimes collected for business purposes but exploited instead [Was12]; for instance, online shopping vendors collect personal information and use it to predict the habits and even the activities of users. Third, data leakage can take place during the processing and storing stages; data privacy is most significant in the second and third phases of the data lifecycle.
Typical database management systems are insufficient for the big data framework because of the need to manage large volumes of heterogeneous data [Dey12].
Big data privacy faces many issues, which can be classified into four categories: Framework Security, Data Privacy, Data Administration, and Integrity Security.
Framework security: Big Data technology follows a distributed computing infrastructure in which several users work in parallel, which makes the identification of intruders very important. At present, most institutions have migrated from traditional databases to NoSQL databases in order to handle semi-structured and unstructured data. NoSQL offers architectural flexibility for multi-sourced data, but it is vulnerable to attacks.
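To illustrate one common NoSQL attack surface, the sketch below simulates a MongoDB-style query filter; the collection, field names, and `matches` helper are hypothetical stand-ins, not a real database driver. When a deserialized JSON request body is placed into the filter unchecked, an operator object such as `{"$ne": null}` turns an exact-match lookup into a query that returns every record.

```python
def matches(doc, flt):
    """Tiny evaluator for a MongoDB-like filter supporting equality and $ne."""
    for field, cond in flt.items():
        if isinstance(cond, dict) and "$ne" in cond:
            if doc.get(field) == cond["$ne"]:
                return False
        elif doc.get(field) != cond:
            return False
    return True

def find_user_unsafe(collection, username):
    # Untrusted input is inserted into the query filter as-is.
    return [d for d in collection if matches(d, {"name": username})]

def find_user_safe(collection, username):
    # Defensive fix: reject non-string input before it reaches the query layer.
    if not isinstance(username, str):
        raise ValueError("username must be a string")
    return [d for d in collection if matches(d, {"name": username})]

users = [{"name": "alice"}, {"name": "bob"}]
injected = {"$ne": None}   # what a JSON body {"username": {"$ne": null}} parses to
print(len(find_user_unsafe(users, injected)))   # 2: every record is returned
```

Real drivers have richer operator sets, but the defensive pattern is the same: validate the type and shape of user input before it ever becomes part of a query.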
Data privacy: Data is collected from various sources, and privacy has to be maintained during the analytic stage [Fra08]. Encryption techniques can be utilized to protect the data.
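As a minimal sketch of one such protection, the snippet below pseudonymizes identifiers with a keyed hash from the Python standard library; the field names and key are illustrative assumptions. Keyed hashing is a one-way complement to encryption: records can still be joined on the pseudonym across data sets, but the raw value is not recoverable, and the secret key prevents third parties from recomputing the mapping.

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Replace an identifier with a keyed SHA-256 digest (HMAC)."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "laptop"}
# The same input and key always map to the same pseudonym, so joins still work,
# but the original address is no longer present in the analytic data set.
safe_record = {**record, "email": pseudonymize(record["email"], b"secret-key")}
```

For data that must later be recovered in the clear, proper encryption (e.g. AES via a vetted library) would be used instead of hashing.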
Data administration: Big data (BD) is collected from countless sources and involves numerous end-users, so its complexity gradually increases. In big data, much of this complexity concerns provenance metadata, because of the size of the provenance graph.
Integrity security: Input validation and filtering pose a significant challenge for big data applications. Because of the data size, it is quite difficult to determine whether data is derived from a valid source. If a source is not legitimate, its data has to be eliminated so that it does not pose a risk to the whole system. Real-time security monitoring is used to alert institutions at the early stages of attacks, and Security Information and Event Management (SIEM) systems play a significant role in helping organizations identify issues and resolve them immediately [Vij17].
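A minimal sketch of such input validation might look as follows; the source registry and record schema are hypothetical. Records from unregistered sources, or records that fail basic schema checks, are dropped before they enter the pipeline.

```python
TRUSTED_SOURCES = {"sensor-grid", "billing-db"}   # hypothetical source registry

def is_valid(record: dict) -> bool:
    """Accept a record only if its source is registered and its fields parse."""
    if record.get("source") not in TRUSTED_SOURCES:
        return False
    if not isinstance(record.get("value"), (int, float)):
        return False
    return True

stream = [
    {"source": "sensor-grid", "value": 21.5},
    {"source": "unknown-feed", "value": 99},      # untrusted source: dropped
    {"source": "billing-db", "value": "N/A"},     # malformed value: dropped
]
clean = [r for r in stream if is_valid(r)]
print(len(clean))   # 1
```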
Although information obtained through data mining can be quite useful in various applications, individuals have shown increasing concern about the other side of the coin: the privacy threats the technique poses. An individual's privacy may be put at risk by unauthorized access to personal information, by the use of personal information for purposes unrelated to the business, and by the undesired discovery of embarrassing information [LaV11]. For example, the retailer Target received a complaint from an angry customer that coupons for baby clothes had been sent to his teenage daughter; that targeting had been derived by mining customer data. This case shows the conflict between privacy protection and data mining. To deal with such issues, privacy-preserving data mining (PPDM) has gained attention. Its objective is to safeguard personal information from unsanctioned or unsolicited disclosure while preserving the utility of the data [Was12].
PPDM's consideration is two-fold. First, sensitive data, such as a customer's cell number, must not be used directly for mining. Second, sensitive mining results whose disclosure would result in a privacy violation should be excluded.
Researchers have examined information security in big data, including privacy and data mining [Lee131]. They have expressed concern that the security of individuals' sensitive information is threatened by the development and growing popularity of data mining technologies, and have identified four distinct kinds of users involved in a data mining application: (i) the data provider, (ii) the data collector, (iii) the data miner, and (iv) the decision-maker. They have reviewed game-theoretical approaches and explored privacy-preserving approaches for each kind of user, providing useful insights into the study of PPDM by differentiating the different users' responsibilities concerning the security of sensitive information [XuL14].
Other authors have critically analyzed big data challenges along with analytical methods, presenting a holistic view of big data practices and of big data applications in the normative literature [Lae17]. They adopted the systematic literature review (SLR) methodology, which they regard as the most convenient tool for conducting a descriptive review of existing literature. Through synthesis and systematic analysis of the literature, they briefly explain the data, management, and process challenges of big data, with the aim of providing a useful direction for future research [Fra08].
Although the advantages of big data are both substantial and factual, numerous issues and challenges still have to be addressed before its actual potential can be fully realized [Jia18]. Addressing them
will be quite helpful in realizing the true capability of big data and utilizing it to its maximum potential. Some of the issues stem from the specifications of big data itself, some from present analysis models and techniques, and some from the limits of present data-processing systems [Guy172]. Different aspects are therefore responsible for the prevalent issues. Extant studies on the challenges of big data have paid attention to the difficulty of recognizing the notion of big data, decision-making from the information that is collected and generated, problems related to privacy, and the ethical considerations associated with mining such data [Ran15]. Authors assert that creating a sustainable solution for multifaceted, large-scale data is an issue that businesses are facing and trying to resolve by consistently learning and applying new and innovative approaches. For instance, one of the largest issues with big data infrastructure is its high cost: hardware equipment is quite expensive even with the availability of cloud computing technologies [Jia18].
In addition, human analysis is frequently required to sort through the data so that important data can be identified [Mor16]. Although computing technologies are needed to facilitate this, the talent and human expertise that business leaders require to leverage big data are still lagging behind, which is another significant issue. As the authors suggest, big data issues can be classified into three categories based on the data lifecycle: data challenges, relating to the characteristics of the data itself, such as dogmatism, discovery, quality, veracity, velocity, variety, and volume; process challenges, concerning techniques, that is, how data can be captured, integrated, and transformed, how the right analysis model can be selected, and how results can be provided; and management challenges, covering, for instance, ethical, governance, security, and privacy aspects [Ran15].
In the context of Big Data computing, a major challenge concerns the management of information while handling rapid and massive data streams [Nor20]. Security tools therefore need to be scalable and flexible, so as to simplify the incorporation of technological evolutions and to manage changes in application requirements. In addition, a balance must be found between dynamic analysis, system performance, and multiple security requirements. Traditional security techniques such as data encryption tend to decrease performance and consume a significant amount of time, and they are not always efficient; as a result, security attacks are often identified only after the damage has been done and sustained [JCO15].
Big Data platforms involve the management of parallel computations and various applications, so performance is the key element for real-time analysis and data sharing. The combination of different techniques and methods may bring hidden risks and issues that are mostly underestimated and not evaluated [Paw15]; Big Data platforms may therefore introduce new, unevaluated security vulnerabilities and risks. In addition, the value of data is concentrated in a number of data centers and clusters. These rich data mines are attractive to industry, government, and commerce, and they undoubtedly constitute a target of numerous attacks and penetrations [San10]. At the same time, most security risks come from end-point users, partners, and employees. There is therefore a critical need to deploy advanced mechanisms to protect Big Data clusters, and data owners have a responsibility to set clear security policies and clauses [Cli13].
To ensure data security and privacy, data anonymization has to be achieved without degrading data quality or system performance. Traditional anonymization techniques, however, are based on a number of computations and iterations that consume significant time. In addition, repeated iterations tend to affect data consistency and decrease system performance, especially when heterogeneous data sets are being managed [Rus11], and Big Data is difficult to analyze and process once it is anonymized. It is worth noting that some security techniques and methods are not compatible with Big Data technologies such as the MapReduce paradigm [Pet142]; to secure Big Data, the compatibility of different security technologies with Big Data methods must be verified. The reliability and precision of data analysis depend on the integrity and quality of the data, so the integrity and authenticity of Big Data sources must be verified before the data is analyzed. Given that large volumes of data are created continuously, assessing the integrity and authenticity of all data sources is tough and complex [Sip14].
In addition, to extract complete information from different Big Data sources, analysts have to manage heterogeneous and incomplete data streams that exist in different formats. They are required to filter the data efficiently, and to contextualize and organize it, before they can evaluate it. Government agencies and private organizations have to respect a number of industry standards and security laws
whose objective is to enhance and improve the management of data security and confidentiality. It is, however, important to note that some ICT systems may involve entities across a number of nations, so enterprises have to manage different regulations and laws as they operate. Big Data analytics may conflict with a number of privacy guidelines and concepts: for instance, analysts can correlate data sets from different entities to reveal sensitive or personal data even when anonymization techniques are used. As a consequence, such analyses may lead to the identification of confidential information [Siv17].
In the case of social networks, a huge amount of comments, videos, photos, and clicks is created, and these are usually the first or primary source of information for a number of entities. Big Data from social networks constitutes an important mine for governments seeking to better analyze and manage national security issues and risks; indeed, some governments and associations evaluate social networks to monitor public opinion. Still, the analysis of such data is not simple, as it requires computation power that an individual firm does not really possess. To enhance the security of Big Data, firms and organizations depend on advanced dynamic security analysis [Soo16]. The underlying objective is to analyze and extract security events in real time, enhancing transactional and online security to prevent attacks. Some of the common techniques used by firms to protect Big Data include the following:
Anonymization: A common technique used by firms and organizations to protect data is data anonymization, which helps protect data across distributed and cloud systems. A number of models are used to implement this technique, including k-anonymity, l-diversity, t-closeness, and m-invariance; sub-techniques are based on Top-Down Specialization and Bottom-Up Generalization [Vij17].
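As an illustrative sketch (the column names and generalized values below are made up), k-anonymity requires that every combination of quasi-identifier values be shared by at least k records, so that no individual can be singled out by those attributes alone:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the smallest equivalence-class size over the quasi-identifiers;
    the dataset is k-anonymous for any k up to this value."""
    classes = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(classes.values())

# Generalized release: exact ZIP codes and ages replaced by coarser buckets.
rows = [
    {"zip": "750**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "750**", "age": "20-29", "diagnosis": "cold"},
    {"zip": "751**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "751**", "age": "30-39", "diagnosis": "asthma"},
]
print(k_anonymity(rows, ["zip", "age"]))   # 2 -> the table is 2-anonymous
```

Generalization algorithms such as Top-Down Specialization search for the coarsening that satisfies a target k while losing as little detail as possible.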
Data cryptography: Another common technique used to protect data is encryption, which ensures the confidentiality of Big Data. In contrast with typical encryption techniques, homomorphic cryptography allows computation to be performed directly on encrypted data. As a consequence, this method preserves the confidentiality of information while still enabling insights to be extracted through computation and analysis [Ana18].
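To make the idea concrete, the sketch below implements a toy version of the Paillier cryptosystem, a well-known additively homomorphic scheme: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The tiny hard-coded primes are for demonstration only; real deployments use vetted libraries and full-size keys.

```python
import math
import random

def keygen(p, q):
    """Toy Paillier key pair; g is fixed to n + 1, which simplifies decryption."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                                 # modular inverse of lam
    return n, (lam, mu)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(n, priv, c):
    lam, mu = priv
    n2 = n * n
    ell = (pow(c, lam, n2) - 1) // n      # the L(x) = (x - 1) / n step
    return (ell * mu) % n

def add_ciphertexts(n, c1, c2):
    """Homomorphic property: Enc(a) * Enc(b) mod n^2 decrypts to a + b."""
    return (c1 * c2) % (n * n)

n, priv = keygen(61, 53)                  # toy primes; never use in practice
total = add_ciphertexts(n, encrypt(n, 15), encrypt(n, 27))
print(decrypt(n, priv, total))            # 42, computed on encrypted inputs
```

A cloud service holding only the public key `n` could thus aggregate encrypted values without ever seeing them in the clear.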
Centralized security management: A number of firms use the cloud to store data, with the goal of taking advantage of a centralized security mechanism and a standards-compliant infrastructure [Hon13]. Still, a status of zero risk is difficult to achieve: the cloud attracts the attention of hackers and attackers because it represents a hub of confidential information [XuL14].
Data Access Monitoring
Security issues and threats are rising with the increasing rate of data exchange over cloud and distributed systems. To face these challenges, it has been proposed that controls be integrated at the data phase [Dia17]. However, this integration alone is not sufficient to combat security threats; in addition, access controls need to be granular, so that access is limited by responsibilities and roles. A number of methods exist for ensuring data confidentiality and access control, including federated identity management, smart cards, and certificates. Consistent monitoring of security threats can prove efficient in the sense that threats and problems can be identified quickly and managed without major difficulties [Nic17].
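A minimal sketch of granular, role-based access control with an audit trail might look as follows; the role names, permission strings, and log format are illustrative assumptions:

```python
ROLE_PERMISSIONS = {                       # hypothetical role model
    "analyst": {"read:reports"},
    "admin": {"read:reports", "read:raw", "write:raw"},
}

audit_log = []

def check_access(user: str, role: str, action: str) -> bool:
    """Allow an action only if the user's role grants it; log every decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({"user": user, "role": role, "action": action,
                      "allowed": allowed})
    return allowed

check_access("dana", "analyst", "read:reports")   # True
check_access("dana", "analyst", "read:raw")       # False: denied and recorded
```

Because every decision is recorded, the audit log itself becomes an input to the monitoring described above: repeated denials for one account are an early signal worth investigating.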
Security Surveillance
There is a clear need for consistent surveillance, detecting security incidents and related problems in real time. A number of solutions can be considered for Big Data security surveillance, including dynamic analysis of security threats, Security Information and Event Management (SIEM), and Data Loss Prevention (DLP). These solutions are based on consolidation and correlation across different data sources. There is also a significant need to carry out regular audits to verify security policies and the practices recommended for employees and users [Zik11].
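At their core, such tools correlate events from many sources against rules. The sketch below is a toy version of one SIEM-style rule, flagging any source with three or more failed logins inside a sixty-second window; the event format and thresholds are assumptions for illustration:

```python
from collections import defaultdict, deque

def detect_bruteforce(events, threshold=3, window=60):
    """Return sources with >= threshold failed logins within `window` seconds.
    Each event is a tuple (timestamp_seconds, source, outcome)."""
    recent = defaultdict(deque)
    alerts = set()
    for ts, src, outcome in sorted(events):
        if outcome != "fail":
            continue
        q = recent[src]
        q.append(ts)
        while ts - q[0] > window:     # drop failures outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.add(src)
    return alerts

events = [
    (0, "10.0.0.5", "fail"), (20, "10.0.0.5", "fail"), (45, "10.0.0.5", "fail"),
    (10, "10.0.0.9", "fail"), (500, "10.0.0.9", "fail"),   # too far apart
    (50, "10.0.0.7", "ok"),
]
print(detect_bruteforce(events))   # {'10.0.0.5'}
```

Production SIEM systems run thousands of such correlation rules concurrently over consolidated log streams, which is why scalability is a recurring requirement in this paper.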
II. Methodology
For every research project, a number of methods and techniques are available, and certain methods have been selected here, primarily qualitative ones. Qualitative and quantitative methods are recognized as the two primary research approaches available to researchers, and they are often used by authors in performing their research; the two differ considerably from each other. In this study, a qualitative method has been adopted in the form of a literature review.
Literature reviews are a common qualitative technique, widely used for their simplicity and efficiency: despite being simple, they deliver the required outcomes efficiently. A literature review gives a researcher an authentic and systematic way of exploring different studies, and through this thorough exploration the topic at hand can be detailed effectively. In this case, a literature review was selected because it suits the nature of the research.
Quantitative methods, in the form of questionnaires, could have been considered if the timescale of the research had been more extensive or if the nature of the research had been different. In this case, however, the use of questionnaires for surveys was not considered suitable or effective: it would have produced quantitative data rather than the qualitative, conceptual information required, so quantitative methods were not adopted. Such methods are typically used when quantitative information needs to be obtained on the topic at hand, and in this research there was no such need. Instead, the concept had to be evaluated, and a literature review was recognized as the most suitable technique for doing so. The literature review enabled an effective exploration of the topic: a number of credible journal articles were identified and thoroughly examined to obtain all the information required for this study [Nic17].
III. Results and Discussion
In accordance with the information obtained from the literature review, big data clearly has a number of benefits for organizations; it enables businesses to perform operations and tasks that were not previously possible. Along with these benefits, however, come drawbacks, and one of the key drawbacks of big data is its weak privacy. Significant privacy issues and risks surround big data: in modern times, with processes such as personalized marketing and the filter bubble, many people fear that their privacy is at risk [JCO15].
A large part of the insight associated with big data consists of predictions made about the details of consumers. These details are often quite personal in nature, which is one reason why even the possibility of them falling into the wrong hands is enough to eliminate whatever trust consumers have in a firm. Recognizing the importance of privacy and the value of sensitive information, it is necessary for a firm's survival to take measures that prevent the violation of consumers' privacy [MSP14].
In addition, despite significant scrutiny, thorough and complete anonymity over the web is more than just a little difficult. Because firms apply big data analytics, truly anonymous files become nearly impossible. Given that big data insights are generally based on different types of raw datasets, there is a significant possibility that consumers' identities will be exposed. Even when a data file appears completely anonymous, teams may combine such data files so that connections can be made, which makes identifying a person quite simple. Furthermore, being
anonymous is further complicated by the fact that almost every SME doing business online depends on software hosted by third parties in the cloud. These firms have their own privacy practices, which again presents a significant risk to privacy [LaV11].
In an attempt to protect their sensitive information from cybercriminals and hackers, most firms use data masking, a process through which actual information is hidden behind less important data. Normally, data masking is used to veil sensitive information from unauthorized people; in most firms its primary function is to protect confidential information from being exposed and leaked. If it is not used properly, however, data masking can itself compromise security to a significant extent [Zik11].
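A minimal masking sketch is shown below (the field and format are illustrative): the real value is replaced for display and logging, while the last few digits are kept so that records remain recognizable to authorized staff.

```python
def mask_card(number: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits of a card-like identifier."""
    digits = [c for c in number if c.isdigit()]
    return "*" * (len(digits) - visible) + "".join(digits[-visible:])

print(mask_card("4111 1111 1111 1234"))   # ************1234
```

The pitfall mentioned above is real: masking only some copies of a value, or using a reversible scheme, leaves the original recoverable, so masked and unmasked stores must be governed consistently.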
The use of big data is increasing significantly, and firms use it for a number of purposes; however, big data carries various privacy concerns and risks. As discussed above, protecting big data is not simple, and it is quite a complex process that requires a number of steps. One of the most important steps for a firm is to have the infrastructure necessary to manage big data; without it, the data may well be exposed to the wrong people, who could use the information for their own purposes [Siv17].
Beyond having the required infrastructure, firms need to make sure that the right security practices are in place to help secure big data. With different security protocols in place, big data can be protected; however, this increases the cost of using big data to a significant extent, and firms have to invest a substantial share of their revenue in the security and protection of big data.
IV. Conclusion
Overall, while big data offers significant benefits to firms, it also poses significant issues and challenges, one of which is the sheer number of privacy risks it faces. The use of big data is risky in the sense that the identity of consumers can be exposed, which can eliminate whatever trust they have in a specific brand or firm. To protect big data, firms generally need to take several steps. First, they should deploy an effective infrastructure capable of managing big data without issue. Second, they should use the security protocols necessary to secure big data and ensure it is protected effectively. Third, they need techniques that can be scaled according to the firm's requirements; without this scalability, it would not be possible to protect all areas of the firm. Fourth, there is a critical need for a surveillance team within the firm responsible for monitoring its security status. Such a team enables the firm to identify and evaluate existing security threats, and to analyze attacks quickly before implementing the strategies that address those threats efficiently and without difficulty. At the same time, the security protocols within the firm need to be enhanced consistently.