Conference Paper

Research and Implementation of a Data Backup and Recovery System for Important Business Areas


... Contemporary backup and recovery systems have evolved to meet the demanding requirements of modern business environments [4]. Research indicates that organizations implementing structured backup strategies experience 60% faster recovery times during system failures. ...
... The implementation of automated backup systems has demonstrated significant improvements in data protection, with recent studies showing a 45% reduction in data loss incidents when compared to manual backup procedures. Modern recovery systems can now restore critical business data within minutes, ensuring minimal disruption to business operations [4]. ...
Article
Full-text available
This comprehensive article explores the fundamental aspects of Linux system administration, focusing on key areas essential for IT professionals. The article delves into the evolution of Linux architecture, emphasizing its role in modern computing environments and its impact on organizational infrastructure. The article examines critical components including file systems and storage management, networking and security frameworks, distribution and package management systems, and advanced system administration practices. The article extends to cloud integration and containerization, highlighting the transformation of application deployment methodologies. Through detailed analysis of system performance metrics, security implementations, and automation strategies, this article demonstrates the significant advancements in Linux-based systems across various operational domains. The article particularly emphasizes the integration of intelligent monitoring systems, predictive maintenance capabilities, and the growing importance of community-driven development in the Linux ecosystem. This article provides valuable insights into the current state of Linux administration while exploring emerging trends and future developments in cloud computing and containerization technologies.
... Malicious data tampering is detected and the affected data recovered, but that system is only suitable for recovering document data, not real-time data. In important business areas, Zhang and Li [16] combined blockchain and smart contract technology to improve existing backup and recovery techniques and adopted role-based access control strategies to strictly audit the data backup and recovery process and prevent data from being compromised. In the field of the supply chain, Cha et al. [17] proposed a data management and recovery system that uses blockchain and key agent encryption. ...
... Moreover, it can prevent unauthorized access. However, some of the above studies remain at the stage of theoretical concepts [15,16], some apply blockchain to data recovery in specific fields [17], some apply access control or key techniques to trusted data recovery to ensure recovery security, and some only use coding technology to optimize storage efficiency [18,19,20]. In summary, there are few studies on improving recovery efficiency while ensuring the credibility of the data recovery process. ...
Article
Full-text available
With the continuous development of information technology, the Internet of Things has been widely adopted. In the power Internet of Things environment, reliable data are essential for data use and accurate analysis, and data security has become a key factor in ensuring the stable operation of the power grid. However, power Internet of Things devices are extremely vulnerable to network attacks, which can lead to data tampering and deletion; resisting tampering, preventing data loss, and reliably restoring data have therefore become key challenges in ensuring data security. To solve this problem, this paper proposes a trusted data recovery system based on blockchain and coding technology. Data nodes of the power Internet of Things encode key data and back them up to the blockchain network through a data processing server located at the edge. The data processing server monitors the data integrity of the data nodes in real time. When data are tampered with or deleted, the data processing server promptly obtains the corresponding encoded data blocks from the blockchain network, decodes them, and sends them to the data node to complete the recovery task. According to the test results, the data backup speed of this system is increased by 15.3% and the data recovery speed by 19.8% compared with the traditional scheme. The system offers good security and real-time performance while reducing the network and storage resource overhead of the backup and recovery process.
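The encode/detect/recover flow described in this abstract can be illustrated with a minimal sketch. It uses simple XOR parity as a stand-in for the paper's coding scheme; the block count, helper names, and sample data are illustrative assumptions, not the system's actual parameters.

```python
# Minimal sketch: XOR parity standing in for the erasure coding used in the
# trusted recovery system. Key data is split into k blocks plus one parity
# block; any single tampered or deleted block can be rebuilt from the rest.
from typing import List

def encode(data: bytes, k: int = 4) -> List[bytes]:
    """Split data into k equal-sized blocks and append one XOR parity block."""
    data += b"\x00" * ((-len(data)) % k)          # pad to a multiple of k
    size = len(data) // k
    blocks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return blocks + [bytes(parity)]

def recover(blocks: List[bytes], lost: int) -> bytes:
    """Rebuild the block at index `lost` by XOR-ing all surviving blocks."""
    size = len(blocks[0] if lost != 0 else blocks[1])
    rebuilt = bytearray(size)
    for i, block in enumerate(blocks):
        if i == lost:
            continue
        for j, b in enumerate(block):
            rebuilt[j] ^= b
    return bytes(rebuilt)

if __name__ == "__main__":
    coded = encode(b"critical business record #42", k=4)
    assert recover(coded, lost=2) == coded[2]     # a tampered block is restored
```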
... Various strategies have been used to ensure data safety, such as encryption, auditing, and replication techniques. For example, blockchain technology has shown promising results in ensuring that data have not been manipulated [3]. ...
... An important finding was the limited support for secure storage of users' financial data. Backing up such data, however, is essential to ensure its recovery in case the original data is lost or damaged (Zhang and Li, 2017). Findings also show that fewer than 38% of apps provide such options for free, mostly on a local storage device (the user's phone), Google Drive, or Dropbox. ...
Article
Full-text available
While financial practices permeate our lives, the effective management of personal finance is not trivial, as indicated by the increasing number of commercial apps aimed at supporting budgeting. Such apps, however, have been explored only to a limited extent, despite the growing HCI interest in financial practices. To address this gap, we present a functionality review of 45 top-rated budgeting apps from Google Play and Apple Store, together with an analysis of their descriptions on the marketplaces. Findings indicate the value of richer, multimodal app descriptions, support for budgeting literacy, and stronger theoretical underpinning of these apps. They also highlight the main functionalities for supporting different types of transactions and accounts, for entering and managing transactions, for securing data, and for creating and managing budgets. We conclude with five design implications to better support each of these functionalities.
... Zhang et al. [4] have designed and implemented a data backup and recovery system with security enhancements that mainly focuses on availability as well as confidentiality in both the backup and recovery operations. The work considers various modules in the system architecture, such as the backup module, recovery module, network communication module, logging and transmission module, and task management module, which sit at the upper layers of the architecture. ...
Article
Full-text available
Cloud storage has been a boon for many organizations and individuals, as it reduces the burden of keeping their data safe and secure while minimizing the investment in infrastructure. Every cloud service provider offers not only the storage service itself but also mechanisms to preserve data privacy and security. The inherent mechanism that many cloud service providers implement is n-backup, by which the client is guaranteed data recovery in case the original copy of the data is damaged or lost. However, each backup copy evidently requires additional storage space. In this digital big data era, industry is experiencing a data explosion, so more space and infrastructure become a necessity for service providers. In this paper, a study is conducted on various cloud storage mechanisms and their challenges, and the gaps are identified. Finally, possible solutions are presented to address the needs of the present cloud storage scenario.
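As a rough illustration of the n-backup mechanism mentioned above, the sketch below replicates a file to n directories with a stored checksum and restores from any replica whose checksum still matches; the paths, function names, and use of SHA-256 are assumptions for illustration, not any provider's actual scheme.

```python
# Sketch of n-backup: copy a file to n backup directories and recover it from
# whichever replica still matches the recorded checksum.
import hashlib
import shutil
from pathlib import Path

def backup(src: Path, targets: list[Path]) -> str:
    """Copy `src` into every target directory; return its SHA-256 digest."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, target / src.name)
    return digest

def restore(name: str, targets: list[Path], digest: str, dest: Path) -> bool:
    """Restore the first intact replica (checksum match) to `dest`."""
    for target in targets:
        replica = target / name
        if replica.is_file() and hashlib.sha256(replica.read_bytes()).hexdigest() == digest:
            shutil.copy2(replica, dest)
            return True
    return False                                   # every replica lost or corrupted
```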
... disaster recovery and backup technologies has been conducted to ensure data security and business sustainability [2,3]. In contrast to the above research, research on location privacy protection technology assumes that, even when the system has been intruded upon by an attacker, the user's location and query should not be accurately identifiable, which in turn safeguards the user's privacy. ...
Article
Full-text available
With the development of mobile applications, location-based services (LBSs) have been incorporated into people’s daily lives and created huge commercial revenues. However, when using these services, people also face the risk of personal privacy breaches due to the release of location and query content. Many existing location privacy protection schemes with centralized architectures assume that anonymous servers are secure and trustworthy, an assumption that is difficult to guarantee in real applications. To remove the reliance on the security and trustworthiness of anonymous servers, we propose a Geohash-based location privacy protection scheme for snapshot queries, named GLPS. On the user side, GLPS uses Geohash encoding technology to convert the user’s location coordinates into a string code representing a rectangular geographic area. GLPS uses this code as the privacy-preserving location when sending check-ins and queries to the anonymous server, preventing the anonymous server from learning the user’s exact location. On the anonymous server side, the scheme takes advantage of Geohash codes’ geospatial gridding capabilities and GL-Tree’s effective location retrieval performance to generate a k-anonymous query set based on user-defined minimum and maximum hidden cells, making it harder for adversaries to pinpoint the user’s location. We experimentally tested the performance of GLPS and compared it with three schemes: Casper, GCasper, and DLS. The experimental results and analyses demonstrate that GLPS has good performance and privacy protection capability, which resolves the reliance on the security and trustworthiness of anonymous servers. It also resists attacks involving background knowledge, regional centers, homogenization, distribution density, and identity association.
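The core Geohash step GLPS relies on (turning exact coordinates into a code naming a rectangular cell) can be sketched as follows. This is a plain implementation of standard Geohash base-32 encoding, not the authors' code; the precision and sample coordinates are illustrative.

```python
# Standard Geohash encoding: interleave longitude/latitude bisection bits and
# map every 5 bits to a base-32 character. The resulting string names a
# rectangular cell, which the user reports instead of exact coordinates.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat: float, lon: float, precision: int = 7) -> str:
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    code, current, bit_count, even = [], 0, 0, True   # even bits refine longitude
    while len(code) < precision:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        current <<= 1
        if value >= mid:
            current |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:                            # 5 bits -> one character
            code.append(BASE32[current])
            current, bit_count = 0, 0
    return "".join(code)

# Example: a 7-character code names a cell of roughly 150 m x 150 m.
print(geohash_encode(39.9042, 116.4074))
```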
... Therefore, settings for the backup and recovery of data sets collected by OSINT were important. Data backup and recovery are the processes of backing up data in case data loss occurs and of configuring the security systems that ultimately facilitate the recovery of the lost data [45]. However, considering the backup cost, data recovery cost, and loss cost, it was inefficient to configure backup and recovery for all data, because the data collected by the OSINT tool were big data, although this may vary depending on the environment. ...
Article
Full-text available
Recently, users have used open-source intelligence (OSINT) to gather and obtain information regarding the data of interest. The advantage of using data gathered by OSINT is that security threats arising in cyberspace can be addressed. However, if a user uses data collected by OSINT for malicious purposes, information regarding the target of an attack can be gathered, which may lead to various cybercrimes, such as hacking, malware, and denial-of-service attacks. Therefore, from a cybersecurity point of view, it is important that the data gathered by OSINT are used in a positive manner; if they are exploited in a negative manner, it is important to prepare countermeasures that can minimize the damage caused by cybercrimes. In this paper, the current status and security trends of OSINT are explained. Specifically, we present security threats and cybercrimes that may occur if data gathered by OSINT are exploited by malicious users. Furthermore, to address this problem, we propose security requirements that can be applied to the OSINT environment. The proposed security requirements are necessary for securely gathering and storing data in the OSINT environment and for securely accessing and using the data collected by OSINT. Their goal is to minimize the damage when cybercrimes occur in the OSINT environment.
Book
The book is a collection of the best selected research papers presented at the International Conference on Recent Trends in Machine Learning, IoT, Smart Cities and Applications (ICMISC 2022), held during 28–29 March 2022 at CMR Institute of Technology, Hyderabad, Telangana, India. The book contains articles on current trends in machine learning, the Internet of Things, and smart city applications, emphasizing multi-disciplinary research in the areas of artificial intelligence and cyber-physical systems. It is a valuable resource for scientists, research scholars, and PG students to formulate their research ideas and find future directions in these areas, and it also serves as a reference work on the latest technologies for practising engineers across the globe.
Chapter
Full-text available
Cancer is one of the major health problems persisting worldwide. The data for the prognosis of cancer are taken from the National Cancer Registry Program (www.ncrpindia.org.in) [1]. We analyzed the underlying pattern of the distribution of incidence rates of lung cancer in males for two regions, Bengaluru and Mumbai, and fitted model A by observing the pattern of these incidence rates. By intuition, we divided the data into two groups. For Group 1, a second-degree equation fitted well; for Group 2, a cubic spline model fitted well. The parameters involved in both Group 1 and Group 2 were estimated using the least squares method. Expressions for the variance of the parameters of the second-degree curves were derived.
Chapter
When the pandemic arose in 2020, people were fighting against the COVID-19 virus and organizations rapidly accelerated their digitization and cloud adoption (De et al. in Int J Inf Manag 55:102171, 2020 [1]) to sustain online business during the lockdown. This chaos helped fraudsters and attackers take advantage of the momentary lack of security controls and oversight. The Federal Bureau of Investigation (FBI) Internet Crime Complaint Center (IC3) reported the highest number of complaints in 2020 (791 k+) compared to the prior five years (298 k+ in 2016), with peak reported losses ($4.2 billion in 2020 compared to $1.5 billion in 2016) (Internet Crime Complaint Center in Internet crime report. Federal Bureau of Investigation, Washington, D.C., 2020 [2]). The majority of these incidents were connected to financial fraud, identity fraud, and phishing for personally identifiable information (PII). Considering the severity and impact of personal data exposure across cloud and hybrid environments, this paper provides a brief overview of prior research and discusses technical solutions to protect data across heterogeneous environments and ensure compliance with privacy regulations.
Chapter
In this digital era, data are highly insecure and exposed to several risks and potential attackers, and several methodologies and mechanisms have evolved to ensure the backup of critical data. Reliable data backup technology ensures the reliability and availability of data over the network. The demand for safety and security of information and storage is increasing worldwide day by day. Data are generated from a wide range of domains, such as the information technology sector, the health and education sectors, defense, banking, e-commerce, and telecommunications. Therefore, the backup of data plays a vital role in maintaining the confidentiality, integrity, and availability of data for end users. Decentralized data backup using blockchain technology can provide data confidentiality, data integrity, authentication, and authorization with digital signatures. This paper presents a review of various data backup techniques using centralized backup systems, discusses the problems associated with them and their resolution, and finally shows how those issues can be addressed with decentralized data backup mechanisms using blockchain technology. Keywords: Backup, Backup techniques, Blockchain backup approach, Confidentiality, Availability
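To make the decentralized idea above concrete, here is a hedged sketch of a hash-chained ledger of backup records with an HMAC tag standing in for the digital signature the chapter discusses; the record fields, key handling, and chaining rule are assumptions for illustration, not the chapter's design.

```python
# Each backup record carries the file's hash, the previous record's hash
# (forming a chain, loosely blockchain-like), and an HMAC tag; verification
# detects tampering with any stored record or with the order of records.
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"              # assumed pre-shared key (illustrative)

def append_record(ledger: list, filename: str, content: bytes) -> None:
    body = {
        "file": filename,
        "sha256": hashlib.sha256(content).hexdigest(),
        "time": time.time(),
        "prev": ledger[-1]["record_hash"] if ledger else "0" * 64,
    }
    record_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    tag = hmac.new(SECRET, record_hash.encode(), hashlib.sha256).hexdigest()
    ledger.append({**body, "record_hash": record_hash, "hmac": tag})

def verify(ledger: list) -> bool:
    prev = "0" * 64
    for rec in ledger:
        body = {k: rec[k] for k in ("file", "sha256", "time", "prev")}
        record_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        expected_tag = hmac.new(SECRET, record_hash.encode(), hashlib.sha256).hexdigest()
        if rec["prev"] != prev or rec["record_hash"] != record_hash:
            return False
        if not hmac.compare_digest(rec["hmac"], expected_tag):
            return False
        prev = record_hash
    return True
```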
Chapter
Full-text available
The electroencephalogram is a test used to keep track of brain activity. These signals are generally used in clinical settings to identify the various brain activities that occur during specific tasks and to design brain–machine interfaces that assist in prosthesis, orthosis, exoskeletons, etc. One of the tedious tasks in designing a brain–machine interface application is the processing of EEG signals acquired from a real-time environment. The complexity arises from the fact that the signals are noisy, non-stationary, and high-dimensional in nature, so building a robust BMI depends on efficient processing of these signals. Optimal selection of the features extracted from the signals and of the classifiers used plays a vital role in building efficient devices. This paper surveys the recent feature selection, feature extraction, and classification algorithms used in various applications for the development of BMIs. Keywords: EEG, Prosthesis, Orthosis, Exoskeletons
Conference Paper
A new category of applications that leverage the opportunities offered by modern Cloud Computing platforms, where scalable computational power and storage capacity can be engaged and decommissioned on demand, allows one to conveniently master huge amounts of information that would otherwise be impossible to wield. The features included in Backup and Restore may differ depending on the edition of Windows. It is challenging for cloud providers to quickly interpret which events to act upon and the priority of events. This paper discusses the design goals, technical requirements, and architecture of a centralized system, a conceptual framework for the CB technology on top of a Cloud infrastructure, which aims to embody the concept of "Architecture as a service".
Article
Full-text available
This paper presents a comprehensive study on implementations and performance evaluations of two snapshot techniques: copy-on-write snapshot and redirect-on-write snapshot. We develop a simple Markov process model to analyze data block behavior and its impact on application performance, while the snapshot operation is underway at the block-level storage. We have implemented the two snapshot techniques on both Windows and Linux operating systems. Based on our analytical model and our implementation, we carry out quantitative performance evaluations and comparisons of the two snapshot techniques using IoMeter, PostMark, TPC-C, and TPC-W benchmarks. Our measurements reveal many interesting observations regarding the performance characteristics of the two snapshot techniques. Depending on the applications and different I/O workloads, the two snapshot techniques perform quite differently. In general, copy-on-write performs well on read intensive applications, while redirect-on-write performs well on write intensive applications.
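The behavioural difference between the two techniques can be shown with a toy in-memory model (a dict of block number to data). The simplification is mine, not the paper's implementation, but it captures why copy-on-write penalizes writes and redirect-on-write penalizes reads.

```python
# Toy contrast of the two snapshot techniques. Copy-on-write copies the old
# block into the snapshot before overwriting it in place; redirect-on-write
# leaves the original untouched and sends new writes to a redirection map.
class CowVolume:
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks
        self.snapshot: dict[int, bytes] = {}       # old data copied on first write

    def write(self, blkno: int, data: bytes) -> None:
        if blkno not in self.snapshot:             # extra copy penalizes writes...
            self.snapshot[blkno] = self.blocks.get(blkno, b"")
        self.blocks[blkno] = data

    def read(self, blkno: int) -> bytes:           # ...but reads stay in place
        return self.blocks.get(blkno, b"")

class RowVolume:
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks                       # frozen original acts as snapshot
        self.redirect: dict[int, bytes] = {}       # new writes go elsewhere

    def write(self, blkno: int, data: bytes) -> None:
        self.redirect[blkno] = data                # cheap writes...

    def read(self, blkno: int) -> bytes:           # ...but reads check two places
        return self.redirect.get(blkno, self.blocks.get(blkno, b""))
```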
Article
In this paper, we present YuruBackup, a space-efficient and highly scalable incremental backup system in the cloud. YuruBackup enables fine-grained data de-duplication with hierarchical partitioning to improve space efficiency, reducing both the bandwidth of the backup and restore processes and the storage costs. In addition, YuruBackup explores a highly scalable architecture for fingerprint servers that allows one or more fingerprint servers to be added dynamically to cope with increasing numbers of clients. In this architecture, the fingerprint servers in a DB cluster are used for scaling writes of the fingerprint catalog, while the slaves are used for scaling reads of the fingerprint catalog. We present the system architecture of YuruBackup and its components, and we have implemented a proof-of-concept prototype. Experimental results from a performance evaluation in a public cloud demonstrate the efficiency of the system.
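Fingerprint-based de-duplication of the kind described above can be sketched as follows; the fixed chunk size and single in-memory index are deliberate simplifications of YuruBackup's hierarchical partitioning and fingerprint-server catalog, introduced here only for illustration.

```python
# Sketch of fingerprint-based de-duplication: files are cut into fixed-size
# chunks, each chunk is identified by its SHA-256 fingerprint, and only
# previously unseen chunks are stored.
import hashlib

CHUNK_SIZE = 4096
chunk_store: dict[str, bytes] = {}      # fingerprint -> chunk data

def backup(data: bytes) -> list[str]:
    """Return the recipe (list of fingerprints) needed to restore `data`."""
    recipe = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in chunk_store:       # duplicate chunks are stored only once
            chunk_store[fp] = chunk
        recipe.append(fp)
    return recipe

def restore(recipe: list[str]) -> bytes:
    return b"".join(chunk_store[fp] for fp in recipe)
```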
Article
Hard disks are employed as the primary storage device for consumer electronic products, but the data on hard disks may be lost due to erroneous operations, malicious code attacks, and software conflicts. As data backup and recovery on hard disks move to ever more challenging areas, improving the security level of backup data, lowering the overhead on the storage system, and reducing the fragments created by backup operations become increasingly important. Therefore, a novel Host Protected Area Virtual File System prototype, called HVF, is presented in this paper. To boost the security of data storage locations, backup data or protected data are saved in the Host Protected Area by a storage filter driver. A method of creating bitmap indexes for hard disks and a mechanism for maintaining the mapping relationship are proposed in order to reduce the effect on the operating system and cut down the fragments produced by a large number of I/O operations. The simulation results indicate that the proposed HVF provides higher security, lower overhead, and fewer fragments on mainstream operating systems.
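The bitmap-index and mapping ideas can be sketched roughly as below, under simplifying assumptions (a fixed sector count and in-memory structures) that are mine rather than HVF's actual on-disk layout.

```python
# Sketch: one bit per sector of the protected area records whether that sector
# holds backup data, and a mapping table relates original sectors to their
# protected copies.
SECTOR_COUNT = 1024                    # sectors reserved in the protected area

bitmap = bytearray(SECTOR_COUNT // 8)  # 1 bit per protected sector
mapping: dict[int, int] = {}           # original sector -> protected sector

def _set(bit: int) -> None:
    bitmap[bit // 8] |= 1 << (bit % 8)

def _is_set(bit: int) -> bool:
    return bool(bitmap[bit // 8] & (1 << (bit % 8)))

def protect(original_sector: int) -> int:
    """Allocate the first free protected sector and record the mapping."""
    for s in range(SECTOR_COUNT):
        if not _is_set(s):
            _set(s)
            mapping[original_sector] = s
            return s
    raise RuntimeError("protected area full")

def lookup(original_sector: int) -> int | None:
    """Return where (if anywhere) this sector was backed up."""
    return mapping.get(original_sector)
```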
Conference Paper
Modern deduplication has become quite effective at eliminating duplicates in data, thus multiplying the effective capacity of disk-based backup systems and enabling them as realistic tape replacements. Despite these improvements, single-node raw capacity is still mostly limited to tens or a few hundreds of terabytes, forcing users to resort to complex and costly multi-node systems, which usually only allow them to scale to single-digit petabytes. As the opportunities for deduplication efficiency optimizations become scarce, we are challenged with the task of designing deduplication systems that will effectively address the capacity, throughput, management and energy requirements of the petascale age. In this paper we present our high-performance deduplication prototype, designed from the ground up to optimize overall single-node performance, by making the best possible use of a node's resources, and achieve three important goals: scale to large capacity, provide good deduplication efficiency, and near-raw-disk throughput. Instead of trying to improve duplicate detection algorithms, we focus on system design aspects and introduce novel mechanisms--that we combine with careful implementations of known system engineering techniques. In particular, we improve single-node scalability by introducing progressive sampled indexing and grouped mark-and-sweep, and also optimize throughput by utilizing an event-driven, multi-threaded client-server interaction model. Our prototype implementation is able to scale to billions of stored objects, with high throughput, and very little or no degradation of deduplication efficiency.
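As a rough illustration of sampled indexing in general (not the paper's exact "progressive" mechanism), the sketch below keeps only a deterministic sample of fingerprints in memory, trading a little deduplication for a much smaller index; the sampling rule and rate are assumptions.

```python
# Sketch of a sampled fingerprint index: only fingerprints selected by a
# deterministic rule live in RAM, so a lookup miss may still be a duplicate
# that resides only on disk.
import hashlib

SAMPLE_RATE = 8                         # keep roughly 1 in 8 fingerprints in RAM
memory_index: set[str] = set()

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def is_sampled(fp: str) -> bool:
    return int(fp[:8], 16) % SAMPLE_RATE == 0   # deterministic sampling rule

def maybe_duplicate(chunk: bytes) -> bool:
    """Cheap in-memory check; a miss may still be a duplicate on disk."""
    fp = fingerprint(chunk)
    if fp in memory_index:
        return True
    if is_sampled(fp):
        memory_index.add(fp)
    return False
```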
Conference Paper
Slow restoration due to chunk fragmentation is a serious problem facing inline chunk-based data deduplication systems: restore speeds for the most recent backup can drop orders of magnitude over the lifetime of a system. We study three techniques--increasing cache size, container capping, and using a forward assembly area--for alleviating this problem. Container capping is an ingest-time operation that reduces chunk fragmentation at the cost of forfeiting some deduplication, while using a forward assembly area is a new restore-time caching and prefetching technique that exploits the perfect knowledge of future chunk accesses available when restoring a backup to reduce the amount of RAM required for a given level of caching at restore time. We show that using a larger cache per stream--we see continuing benefits even up to 8 GB--can produce up to a 5-16X improvement, that giving up as little as 8% deduplication with capping can yield a 2-6X improvement, and that using a forward assembly area is strictly superior to LRU, able to yield a 2-4X improvement while holding the RAM budget constant.
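A rough sketch of the forward-assembly-area idea follows, assuming the restore recipe is known in full up front: for each fixed-size window of the output, every container the window needs is read once and its chunks are placed directly into an assembly buffer. The data structures, window size, and function name are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a forward assembly area: instead of LRU-caching containers, the
# restorer groups each output window's chunk references by container, reads
# each needed container once, and assembles chunks in place.
from collections import defaultdict

def restore_with_faa(recipe, containers, window=8):
    """recipe: list of (container_id, chunk_key); containers: {cid: {key: bytes}}."""
    output = []
    for start in range(0, len(recipe), window):
        window_refs = recipe[start:start + window]
        by_container = defaultdict(list)              # group refs by container
        for pos, (cid, key) in enumerate(window_refs):
            by_container[cid].append((pos, key))
        assembly = [b""] * len(window_refs)           # the forward assembly area
        for cid, refs in by_container.items():
            container = containers[cid]               # one sequential container read
            for pos, key in refs:
                assembly[pos] = container[key]
        output.extend(assembly)
    return b"".join(output)
```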
Article
In the second of an occasional series of articles for the QAJ, the author discusses backup of data held on computers, the types of backup that are possible such as full, incremental and differential, and processes for recovery of the data in the event of any file corruption or hardware failure. Backup is key to any successful recovery or disaster recovery process. To mitigate the impact of failure, the use of hardware redundancy is also discussed. Copyright © 2001 John Wiley & Sons, Ltd.