Article

To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?

Computing Research Repository (CoRR), 01/2007
Source: arXiv

ABSTRACT Application designers often face the question of whether to store large
objects in a filesystem or in a database. Often this decision is made for
application design simplicity; sometimes performance measurements are also
used. This paper looks at the question of fragmentation - one of the
operational issues that can affect the performance and/or manageability of the
system as deployed long term. As expected from the common wisdom, objects
smaller than 256 KB are best stored in a database, while objects larger than 1 MB
are best stored in the filesystem. Between 256 KB and 1 MB, the read:write ratio
and the rate of object overwrite or replacement are important factors. We used the
notion of "storage age", or number of object overwrites, as a way of normalizing
wall clock time. Storage age allows our results, or similar results, to be
applied across a range of read:write ratios and object replacement rates.
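
For intuition, the sketch below shows one way the "storage age" normalization can be computed for a hypothetical workload. It is a minimal illustration, not code from the paper; the function names and the workload numbers are assumptions made for the example.

    # Minimal sketch (not from the paper) of the "storage age" idea:
    # storage age = average number of times each object in the store has
    # been overwritten or replaced, used in place of wall-clock time.

    def storage_age(overwrites_performed: int, object_count: int) -> float:
        """Average number of overwrites per object."""
        return overwrites_performed / object_count

    def wall_clock_to_storage_age(hours: float,
                                  overwrites_per_hour: float,
                                  object_count: int) -> float:
        """Translate elapsed wall-clock time into storage age,
        assuming a steady overwrite rate (hypothetical numbers)."""
        return storage_age(int(hours * overwrites_per_hour), object_count)

    # Example: a store of 10,000 objects receiving 500 overwrites per hour
    # reaches storage age 1.0 (every object rewritten once, on average)
    # after 20 hours, regardless of the absolute clock time.
    print(wall_clock_to_storage_age(20, 500, 10_000))  # -> 1.0

Expressing results against storage age rather than hours is what lets the same fragmentation measurements be reused across different read:write ratios and replacement rates.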

Cited by:
    • "Data access pattern is mainly determined by the requirement of the specific application. An optimal data layout should be able to maximize the continuous I/O, and minimize the accesses of unnecessary (unrelated) data [15] "
    ABSTRACT: Due to the explosive growth in the size of scientific data-sets, data-intensive computing and analysis are an emerging trend in computational science. In these applications, data pre-processing is widely adopted because it can optimise the data layout or format beforehand to facilitate future data access. On the other hand, current research shows an increasing popularity of the MapReduce framework for large-scale data processing. However, the data access patterns that are generally applied to scientific data-sets are not directly supported by the current MapReduce framework. This gap motivates us to provide support for these scientific data access patterns in the MapReduce framework. In our work, we study the data access patterns in matrix files and propose a new concentric data layout solution to facilitate matrix data access and analysis in the MapReduce framework. Concentric data layout is a data layout which maintains the dimensional property at the chunk level. Contrary to the continuous data layout adopted in the current Hadoop framework, the concentric data layout stores the data from the same sub-matrix in one chunk. This layout can guarantee that the average performance of data access is optimal regardless of the access pattern. The concentric data layout requires reorganising the data before it is analysed or processed. Our experiments are launched on a real-world halo-finding application; the results indicate that the concentric data layout improves the overall performance by up to 38%.
    International Journal of Parallel Emergent and Distributed Systems 10/2013; 28(5):407-433. DOI:10.1080/17445760.2012.720982
    • "Log compression [20], [21] allows log records to be compressed and decompressed as they are written and read from the log files, which can provide disk usage savings. Bulk-logged [22] option in SQL Server reduces the penalty of logging because the following operations are minimally logged and not fully recoverable: SELECT INTO, bulk-load operations, CREATE INDEX as well as text and image operations. Hence, any-point-in-time recovery is not possible with bulk-logged option. "
    ABSTRACT: In the past few years, more storage system applications have employed transaction processing techniques to ensure data integrity and consistency. Logging is one of the key requirements for ensuring the transaction Atomicity, Consistency, Isolation, Durability (ACID) properties and data recoverability in transaction processing systems (TPS). Recently, emerging complex I/O-bound transactions have resulted in substantially more log content and higher log flushing latency. This latency delays transaction commit and decreases the overall throughput of the TPS. On the other hand, RAID is widely used as the underlying storage system for databases to guarantee system reliability and availability with high I/O performance. In this paper, we observe the overlap between the redundancies in the underlying RAID storage system and the database logging system, and propose a novel reliable storage architecture called Transactional RAID (TRAID). TRAID deduplicates this overlap by logging only one compact version (the XOR results) of the recovery references for the updated data. It minimizes the amount of log content and thereby boosts overall transaction processing performance. At the same time, TRAID guarantees the same RAID reliability, as well as the same recovery correctness and ACID semantics, as current TPS setups. We experiment on two open-source database systems, Berkeley DB and PostgreSQL, with three different workloads: the standard OLTP benchmark TPC-C, a customized TPC-C with strong access locality, and a customized TPC-C with a write-intensive property. We then test TRAID performance with "Group Commit" enabled. Finally, we evaluate the recovery efficiency of TRAID. Our extensive results demonstrate that, for throughput, TRAID outperforms RAID by 43.24-69.5 percent across these workloads; it also saves 28.57-35.48 percent of log space, and outperforms RAID by about 20 percent in throughput with "Group Commit" enabled. Lastly, we show that TRAID outperforms RAID by 28.7 to 35.7 percent during recovery.
    IEEE Transactions on Computers 04/2012; 61(4):517-529. DOI:10.1109/TC.2011.28
    • "The relational database is well suited for query, sorting, and reducing many discrete data items, but requires a high degree of advance schema design and system administration. A database can store large binary objects, but it is not highly optimized for this task [14]. On the other hand, the filesystem has a much lower barrier to entry, and is well suited for simply depositing large binary objects as they are created. "
    ABSTRACT: As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides both large, robust, scalable storage and efficient rich metadata queries for scientific applications. In this paper, we demonstrate that ROARS is capable of importing and exporting large quantities of data, migrating data to new storage nodes, providing robust fault tolerance, and generating materialized views based on metadata queries. Our experimental results demonstrate that ROARS' aggregate throughput scales with the number of concurrent clients while providing fault-tolerant data access. ROARS is currently being used to store 5.1TB of data in our local biometrics repository.
    Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, HPDC 2010, Chicago, Illinois, USA, June 21-25, 2010; 01/2010
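
As an aside on the TRAID excerpt above: the sketch below illustrates the general XOR-logging idea in a deliberately simplified form. It is an assumption about how such a compact record could be used, not TRAID's actual implementation; the block contents and sizes are made up for the example.

    # Simplified sketch of XOR logging: instead of logging both the old and
    # the new image of an updated block, log only their XOR. Either image of
    # the block plus the XOR record is enough to reconstruct the other.

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        assert len(a) == len(b), "blocks are assumed to be the same fixed size"
        return bytes(x ^ y for x, y in zip(a, b))

    # Update path: compute the compact log record (hypothetical 32-byte blocks).
    old_block = b"old contents of block 42".ljust(32, b"\0")
    new_block = b"NEW contents of block 42".ljust(32, b"\0")
    log_record = xor_blocks(old_block, new_block)   # logged instead of old + new

    # Recovery path: either image plus the log record recovers the other.
    assert xor_blocks(new_block, log_record) == old_block   # undo
    assert xor_blocks(old_block, log_record) == new_block   # redo

Because the single XOR record replaces separate before- and after-images, the log carries less content per update, which is the source of the space and throughput savings the abstract reports.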
