To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?

Computing Research Repository (CoRR), 01/2007
Source: arXiv


Application designers often face the question of whether to store large
objects in a filesystem or in a database. Often this decision is made for
application design simplicity; sometimes performance measurements are also
used. This paper looks at the question of fragmentation - one of the
operational issues that can affect the performance and/or manageability of
the system as deployed over the long term. As the common wisdom suggests,
objects smaller than 256KB are best stored in a database, while objects
larger than 1MB are best stored in the filesystem. Between 256KB and 1MB,
the read:write ratio and the rate of object overwrite or replacement are
the important factors. We used the notion of "storage age", the number of
object overwrites, as a way of normalizing wall-clock time. Storage age
allows our results, or similar such results, to be applied across a range
of read:write ratios and object replacement rates.
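
The placement rule the abstract arrives at is short enough to state in
code. The sketch below is only an illustration of that rule under stated
assumptions: the thresholds come from the abstract, while the function
names and the gray-zone heuristic are hypothetical, since the paper
prescribes no API.

    # A minimal sketch of the size-based placement rule, assuming
    # hypothetical names; the 256KB/1MB thresholds are the abstract's.
    DB_MAX = 256 * 1024    # below ~256KB the database wins
    FS_MIN = 1024 * 1024   # above ~1MB the filesystem wins

    def choose_store(size_bytes: int, overwrites_per_read: float = 0.0) -> str:
        """Return 'database' or 'filesystem' for an object of this size."""
        if size_bytes < DB_MAX:
            return "database"
        if size_bytes > FS_MIN:
            return "filesystem"
        # Gray zone (256KB-1MB): the paper says the read:write ratio and
        # the overwrite rate decide. Frequent overwrites fragment a
        # filesystem faster, so this placeholder prefers the database then.
        return "database" if overwrites_per_read > 0.5 else "filesystem"

    def storage_age(total_overwrites: int, live_objects: int) -> float:
        """The abstract's "storage age": overwrites per live object, a
        workload-relative clock rather than wall-clock time."""
        return total_overwrites / live_objects

The 0.5 cutoff in the gray zone is an arbitrary placeholder; the paper only
says that the read:write ratio and replacement rate matter there, not where
the break-even point lies.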

Cited by:

    • "Data access pattern is mainly determined by the requirement of the specific application. An optimal data layout should be able to maximize the continuous I/O, and minimize the accesses of unnecessary (unrelated) data [15] "
ABSTRACT: Due to the explosive growth in the size of scientific data-sets, data-intensive computing and analysis are an emerging trend in computational science. In these applications, data pre-processing is widely adopted because it can optimise the data layout or format beforehand to facilitate future data access. On the other hand, current research shows an increasing popularity of the MapReduce framework for large-scale data processing. However, the data access patterns generally applied to scientific data-sets are not directly supported by the current MapReduce framework. This gap motivates us to provide support for these scientific data access patterns in the MapReduce framework. In our work, we study the data access patterns in matrix files and propose a new concentric data layout solution to facilitate matrix data access and analysis in the MapReduce framework. Concentric data layout is a data layout which maintains the dimensional property at the chunk level. Contrary to the continuous data layout adopted in the current Hadoop framework, the concentric data layout stores the data from the same sub-matrix in one chunk. This layout guarantees that the average performance of data access is optimal regardless of the access pattern. The concentric data layout requires reorganising the data before it is analysed or processed. Our experiments are run on a real-world halo-finding application; the results indicate that the concentric data layout improves overall performance by up to 38%.
International Journal of Parallel, Emergent and Distributed Systems, 10/2013; 28(5):407-433. DOI:10.1080/17445760.2012.720982
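
As a rough illustration of the concentric layout quoted above, the sketch
below chunks a matrix by sub-matrix blocks rather than by contiguous rows.
The function name and chunk size are illustrative assumptions, not the
paper's or Hadoop's API.

    import numpy as np

    def concentric_chunks(matrix: np.ndarray, block: int):
        """Yield ((block_row, block_col), sub_matrix) pairs, one chunk per
        sub-matrix. A row-major 'continuous' layout would stream whole rows
        instead, so column or diagonal reads touch every chunk."""
        rows, cols = matrix.shape
        for i in range(0, rows, block):
            for j in range(0, cols, block):
                yield (i // block, j // block), \
                      matrix[i:i + block, j:j + block].copy()

    # Example: with 2x2 chunks of a 4x4 matrix, a read of column 0 touches
    # only the two chunks in block-column 0, not all four row-major chunks.
    m = np.arange(16).reshape(4, 4)
    chunks = dict(concentric_chunks(m, 2))
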
    • "On the one hand, storing an image as a BLOB in a database has certain advantages over storing it as a file, such as lower read throughput for objects <1 MB in size for short intervals and atomicity for overwriting. On the other hand, storing an image as a file provides other advantages such as high throughput for files >1 MB and low fragmentation in the long term (16). We have decided to use files for storing images in the BioDIG system to minimize the load on the database. "
ABSTRACT: Genomic data and biomedical imaging data are undergoing exponential growth. However, our understanding of the phenotype–genotype connection linking the two types of data is lagging behind. While there are many types of software that enable the manipulation and analysis of image data and genomic data as separate entities, there is no framework established for linking the two. We present a generic set of software tools, BioDIG, that allows linking of image data to genomic data. BioDIG tools can be applied to a wide range of research problems that require linking images to genomes. BioDIG features the following: rapid construction of web-based workbenches, community-based annotation, user management and web services. By using BioDIG to create websites, researchers and curators can rapidly annotate a large number of images with genomic information. Here we present the BioDIG software tools, which include an image module, a genome module and a user management module. We also introduce a BioDIG-based website, MyDIG, which is being used to annotate images of mycoplasmas.
Database: The Journal of Biological Databases and Curation, 01/2013; 2013:bat016. DOI:10.1093/database/bat016

    • "Log compression [20], [21] allows log records to be compressed and decompressed as they are written and read from the log files, which can provide disk usage savings. Bulk-logged [22] option in SQL Server reduces the penalty of logging because the following operations are minimally logged and not fully recoverable: SELECT INTO, bulk-load operations, CREATE INDEX as well as text and image operations. Hence, any-point-in-time recovery is not possible with bulk-logged option. "
ABSTRACT: In the past few years, more storage system applications have employed transaction processing techniques to ensure data integrity and consistency. Logging is one of the key requirements for ensuring the Atomicity, Consistency, Isolation, Durability (ACID) properties of transactions and data recoverability in transaction processing systems (TPS). Recently, emerging complex I/O-bound transactions have resulted in substantially more log content and higher log flushing latency. The latency delays transaction commit and decreases the overall throughput of the TPS. On the other hand, RAID is widely used as the underlying storage system for databases to guarantee system reliability and availability with high I/O performance. In this paper, we observe the overlap between the redundancies in the underlying RAID storage system and the database logging system, and propose a novel reliable storage architecture called Transactional RAID (TRAID). TRAID deduplicates this overlap by logging only one compact version (the XOR result) of the recovery references for the updated data. It minimizes the amount of log content and thereby boosts overall transaction processing performance. At the same time, TRAID guarantees the same RAID reliability, as well as the same recovery correctness and ACID semantics, as current TPS setups. We experiment on two open-source database systems, Berkeley DB and PostgreSQL, with three different workloads: the standard OLTP benchmark TPC-C, a customized TPC-C with strong access locality, and a customized write-intensive TPC-C. We then test TRAID's performance with "Group Commit" enabled. Finally, we evaluate the recovery efficiency of TRAID. Our extensive results demonstrate that TRAID outperforms RAID in throughput by 43.24-69.5 percent across the workloads; it also saves 28.57-35.48 percent of log space, and outperforms RAID by about 20 percent in throughput with "Group Commit" enabled. Lastly, we show that TRAID outperforms RAID by 28.7 to 35.7 percent during recovery.
IEEE Transactions on Computers, 04/2012; 61(4):517-529. DOI:10.1109/TC.2011.28
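
TRAID's central trick, as the abstract describes it, is that the XOR of the
old and new versions of a block can serve both as the RAID parity delta and
as the log's compact recovery reference. The few lines below demonstrate
only that XOR identity; they are a sketch of the idea, not the authors'
implementation, and all names are illustrative.

    def recovery_reference(old_block: bytes, new_block: bytes) -> bytes:
        """The single compact record a TRAID-style log would keep: the XOR
        of old and new data (block framing and I/O deliberately omitted)."""
        return bytes(a ^ b for a, b in zip(old_block, new_block))

    def undo(new_block: bytes, reference: bytes) -> bytes:
        """Recover the pre-image: new XOR (old XOR new) == old."""
        return bytes(a ^ b for a, b in zip(new_block, reference))

    old, new = b"page v1 data", b"page v2 data"
    ref = recovery_reference(old, new)
    assert undo(new, ref) == old  # one logged record restores the pre-image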

