Sandeep Uttamchandani

IBM, Armonk, New York, United States

Publications (30) · 1.51 total impact

  • N. Borisov, S. Babu, N. Mandagere, S. Uttamchandani
    ABSTRACT: The danger of production or backup data becoming corrupted is a problem that database administrators dread. This position paper aims to bring this problem to the attention of the database research community, which, surprisingly, has by and large overlooked this problem. We begin by pointing out the causes and consequences of data corruption. We then describe the Proactive Checking Framework (PCF), a new framework that enables a database system to deal with data corruption automatically and proactively. We use a prototype implementation of PCF to give deeper insights into the overall problem and to outline a challenging research agenda to address it.
    Data Engineering Workshops (ICDEW), 2011 IEEE 27th International Conference on; 05/2011
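    The PCF abstract above describes proactive, scheduled integrity checking. A minimal stdlib-only Python sketch of that general idea (not PCF itself; all paths and names are illustrative) is a checksum baseline that is periodically re-verified:

        import hashlib
        from pathlib import Path

        def checksum(path: Path) -> str:
            """Return the SHA-256 digest of a file's contents."""
            h = hashlib.sha256()
            with path.open("rb") as f:
                for block in iter(lambda: f.read(1 << 20), b""):
                    h.update(block)
            return h.hexdigest()

        def build_baseline(data_dir: str) -> dict:
            """Record a digest for every data file; run this when data is known good."""
            return {str(p): checksum(p) for p in Path(data_dir).rglob("*") if p.is_file()}

        def verify(baseline: dict) -> list:
            """Re-check every file against the baseline and return suspected corruptions."""
            suspects = []
            for name, digest in baseline.items():
                p = Path(name)
                if not p.exists() or checksum(p) != digest:
                    suspects.append(name)
            return suspects

        # Example: baseline = build_baseline("/var/lib/pgsql/data"); later, verify(baseline)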
  •
    ABSTRACT: Modern storage systems are employing data deduplication with increasing frequency. Often the storage systems on which these techniques are deployed contain important data, and utilize fault-tolerant hardware and software to improve the reliability of the system and reduce data loss. We suggest that data deduplication introduces inter-file relationships that may have a negative impact on the fault tolerance of such systems by creating dependencies that can increase the severity of data loss events. We present a framework composed of data analysis methods and a model of data deduplication that is useful in studying the reliability impact of data deduplication. The framework is useful for determining a deduplication strategy that is estimated to satisfy a set of reliability constraints supplied by a user.
    30th IEEE Symposium on Reliable Distributed Systems (SRDS 2011), Madrid, Spain, October 4-7, 2011; 01/2011
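    A toy Python sketch of the inter-file dependency analysis the abstract above motivates (the chunk map and the reliability constraint are invented; the paper's actual model is far richer):

        from collections import defaultdict

        # Toy map: file -> list of chunk fingerprints after deduplication.
        file_chunks = {
            "vm1.img": ["c1", "c2", "c3"],
            "vm2.img": ["c1", "c2", "c4"],
            "mail.db": ["c5"],
        }

        # Invert the map: chunk -> set of files that reference it.
        refs = defaultdict(set)
        for f, chunks in file_chunks.items():
            for c in chunks:
                refs[c].add(f)

        # Severity of losing each chunk = number of files made unreadable.
        severity = {c: len(files) for c, files in refs.items()}
        worst = max(severity, key=severity.get)
        print(f"losing chunk {worst} corrupts {severity[worst]} files")

        # A simple reliability constraint: store an extra copy of any chunk
        # shared by more than `limit` files (i.e., selectively re-duplicate).
        limit = 1
        extra_copies = sum(1 for c, n in severity.items() if n > limit)
        print(f"{extra_copies} chunks would need an extra copy to meet the constraint")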
  •
    ABSTRACT: Occasional corruption of stored data is an unfortunate byproduct of the complexity of modern systems. Hardware errors, software bugs, and mistakes by human administrators can corrupt important sources of data. The dominant practice to deal with data corruption today involves administrators writing ad hoc scripts that run data-integrity tests at the application, database, file-system, and storage levels. This manual approach is tedious, error-prone, and provides no understanding of the potential system unavailability and data loss if a corruption were to occur. We introduce the Amulet system that addresses the problem of verifying the correctness of stored data proactively and continuously. To our knowledge, Amulet is the first system that: (i) gives administrators a declarative language to specify their objectives regarding the detection and repair of data corruption; (ii) contains optimization and execution algorithms to ensure that the administrator's objectives are met robustly and with least cost, e.g., using pay-as-you-go cloud resources; and (iii) provides timely notification when corruption is detected, allowing proactive repair of corruption before it impacts users and applications. We describe the implementation and a comprehensive evaluation of Amulet for a database software stack deployed on an infrastructure-as-a-service cloud provider.
    Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2011, Athens, Greece, June 12-16, 2011; 01/2011
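    A minimal Python sketch of the flavor of declarative objective plus scheduling loop described above; the objective schema, check names, and actions are hypothetical, not Amulet's actual language:

        import time

        # Hypothetical objectives: what to check, how often, and what to do on failure.
        objectives = [
            {"name": "db-page-checksums", "check": "pg_checksums", "every_s": 3600,
             "on_corruption": "restore-from-backup"},
            {"name": "fs-metadata", "check": "fsck-readonly", "every_s": 6 * 3600,
             "on_corruption": "notify-admin"},
        ]

        last_run = {}  # objective name -> timestamp of last execution

        def due_checks(now: float) -> list:
            """Return the objectives whose check interval has elapsed."""
            return [o for o in objectives
                    if now - last_run.get(o["name"], 0.0) >= o["every_s"]]

        def run_cycle(run_check) -> None:
            """Run all due checks; `run_check` maps a check name to True (clean) / False."""
            now = time.time()
            for o in due_checks(now):
                last_run[o["name"]] = now
                if not run_check(o["check"]):
                    print(f"{o['name']}: corruption detected, action = {o['on_corruption']}")

        # Example: run_cycle(lambda check: True)  # plug in real integrity tests here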
  • T. Nayak, R. Routray, A. Singh, S. Uttamchandani, A. Verma
    ABSTRACT: We present the design and implementation of ENDEAVOUR - a framework for integrated end-to-end disaster recovery (DR) planning. Unlike existing research that provides DR planning within a single layer of the IT stack (e.g. storage controller based replication), ENDEAVOUR can choose technologies and composition of technologies across multiple layers like virtual machines, databases and storage controllers. ENDEAVOUR uses a canonical model of available replication technologies at all layers, explores strategies to compose them, and performs a novel map-search-reduce heuristic to identify the best DR plans for given administrator requirements. We present a detailed analysis of ENDEAVOUR including empirical characterization of various DR technologies, their composition, and an end-to-end case study.
    Network Operations and Management Symposium (NOMS), 2010 IEEE; 05/2010
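    A toy Python sketch of the cross-layer composition-and-search idea behind ENDEAVOUR (the technology catalog, cost figures, and the min-based composition rule are all invented simplifications):

        from itertools import product

        # Canonical model of candidate technologies per layer (all figures are made up).
        layers = {
            "database": [{"name": "log-shipping", "rpo_s": 300, "rto_s": 1800, "cost": 2},
                         {"name": "sync-mirror",  "rpo_s": 0,   "rto_s": 600,  "cost": 8}],
            "storage":  [{"name": "async-repl",   "rpo_s": 600, "rto_s": 3600, "cost": 3},
                         {"name": "sync-repl",    "rpo_s": 0,   "rto_s": 900,  "cost": 10}],
        }

        def best_plan(rpo_s: int, rto_s: int):
            """Pick the cheapest cross-layer combination meeting the RPO/RTO requirement.

            The composed plan is scored by the best (minimum) RPO/RTO of its parts and
            the sum of their costs; a real planner models composition far more carefully.
            """
            best = None
            for combo in product(*layers.values()):
                rpo = min(t["rpo_s"] for t in combo)
                rto = min(t["rto_s"] for t in combo)
                cost = sum(t["cost"] for t in combo)
                if rpo <= rpo_s and rto <= rto_s and (best is None or cost < best[0]):
                    best = (cost, [t["name"] for t in combo])
            return best

        print(best_plan(rpo_s=0, rto_s=1200))   # e.g. (11, ['sync-mirror', 'async-repl'])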
  • IEEE/IFIP Network Operations and Management Symposium, NOMS 2010, 19-23 April 2010, Osaka, Japan; 01/2010
  •
    ABSTRACT: Many enterprise environments have databases running on network-attached server-storage infrastructure (referred to as Storage Area Networks or SANs). Both the database and the SAN are complex systems that need their own separate administrative teams. This paper puts forth the vision of an innovative management framework to simplify administrative tasks that require an in-depth understanding of both the database and the SAN. As a concrete instance, we consider the task of diagnosing the slowdown in performance of a database query that is executed multiple times (e.g., in a periodic report-generation setting). This task is very challenging because the space of possible causes includes problems specific to the database, problems specific to the SAN, and problems that arise due to interactions between the two systems. In addition, the monitoring data available from these systems can be noisy. We describe the design of DIADS which is an integrated diagnosis tool for database and SAN administrators. DIADS generates and uses a powerful abstraction called Annotated Plan Graphs (APGs) that ties together the execution path of queries in the database and the SAN. Using an innovative workflow that combines domain-specific knowledge with machine-learning techniques, DIADS was applied successfully to diagnose query slowdowns caused by complex combinations of events across a PostgreSQL database and a production SAN.
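    A minimal Python sketch of the kind of dependency graph an Annotated Plan Graph captures, linking query-plan operators to the SAN components beneath them (node names are invented; DIADS additionally attaches monitoring annotations to each node):

        # Edges point from a dependent node to the component it relies on.
        apg = {
            "query:Q17":            ["op:index_scan_orders", "op:hash_join"],
            "op:index_scan_orders": ["table:orders"],
            "op:hash_join":         ["table:orders", "table:lineitem"],
            "table:orders":         ["volume:vol7"],
            "table:lineitem":       ["volume:vol9"],
            "volume:vol7":          ["pool:pool2"],
            "volume:vol9":          ["pool:pool2"],
            "pool:pool2":           ["controller:ctrl1"],
        }

        def dependencies(node: str) -> set:
            """All database and SAN components a node transitively depends on."""
            seen, stack = set(), [node]
            while stack:
                for nxt in apg.get(stack.pop(), []):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return seen

        # Suspect components for a slow query are its dependencies; the shared pool
        # and controller show up for Q17 even though they sit below the database.
        print(sorted(dependencies("query:Q17")))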
  •
    ABSTRACT: We present DIADS, an integrated DIAgnosis tool for Databases and Storage area networks (SANs). Existing diagnosis tools in this domain have a database-only (e.g., (11)) or SAN-only (e.g., (28)) focus. DIADS is a first-of-a-kind framework based on a careful integration of information from the database and SAN subsystems; and is not a simple concatenation of database-only and SAN-only modules. This approach not only increases the accuracy of diagnosis, but also leads to significant improvements in efficiency. DIADS uses a novel combination of non-intrusive machine learning techniques (e.g., Kernel Density Estimation) and domain knowledge encoded in a new symptoms database design. The machine learning component provides core techniques for problem diagnosis from monitoring data, and domain knowledge acts as checks-and-balances to guide the diagnosis in the right direction. This unique system design enables DIADS to function effectively even in the presence of multiple concurrent problems as well as noisy data prevalent in production environments. We demonstrate the efficacy of our approach through a detailed experimental evaluation of DIADS implemented on a real data center testbed with PostgreSQL databases and an enterprise SAN.
    7th USENIX Conference on File and Storage Technologies, February 24-27, 2009, San Francisco, CA, USA. Proceedings; 01/2009
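    A stdlib-only Python sketch of the Kernel Density Estimation idea mentioned above, scoring a newly observed latency against a model of healthy history (the bandwidth and data are illustrative, not DIADS's implementation):

        import math

        def kde_density(x: float, samples: list, bandwidth: float) -> float:
            """Gaussian kernel density estimate of x given historical samples."""
            norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
            return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

        # Historical per-interval latencies (ms) for one SAN volume while healthy.
        history = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9]

        def anomaly_score(x: float) -> float:
            """Low density under the healthy model -> high anomaly score."""
            d = kde_density(x, history, bandwidth=0.3)
            return -math.log(d + 1e-12)

        print(anomaly_score(5.0))   # small: consistent with healthy behavior
        print(anomaly_score(22.0))  # large: a candidate symptom for the diagnosis workflow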
  •
    ABSTRACT: Many enterprise environments have databases running on network-attached storage infrastructure (referred to as Storage Area Networks or SANs). Both the database and the SAN are complex subsystems that are managed by separate teams of administrators. As often as not, database administrators have limited understanding of SAN configuration and behavior, and limited visibility into the SAN's run-time performance; and vice versa for the SAN administrators. Diagnosing the cause of performance problems is a challenging exercise in these environments. We propose to remedy the situation through a novel tool, called Diads, for database and SAN problem diagnosis. This demonstration proposal summarizes the technical innovations in Diads: (i) a powerful abstraction called Annotated Plan Graphs (APGs) that ties together the execution path of queries in the database and the SAN using low-overhead monitoring data, and (ii) a diagnosis workflow that combines domain-specific knowledge with machine-learning techniques. The scenarios presented in the demonstration are also described.
  • A. Singh, S. Uttamchandani, Yin Wang
    ABSTRACT: As storage deployments within enterprises continue to grow, there is an increasing need to simplify and automate their management. Existing tools for automation rely on extracting information in the form of device models and workload patterns from raw performance data collected from devices. This paper evaluates the effectiveness of applying such information extraction techniques to real-world data collected over a period of months from the data centers of two commercial enterprises. Real-world monitoring data poses several challenges that typically do not exist in controlled lab environments. Our analysis creates models using popular algorithms such as M5, CART, ARIMA, and the Fast Fourier Transform (FFT). The relative error rate in predicting device response time from real-world data is 40-45%, whereas a similar experiment using data from a controlled lab environment has a relative error of 25%. Bootstrapping models for the two commercial datasets ran for 245 minutes and 477 minutes respectively, which illustrates the need for mechanisms that deal effectively with large enterprise scales. We describe one such technique that clusters devices with similar hardware configurations. With a cluster size of five devices, we were able to reduce the model creation time to 94 minutes and 138 minutes respectively. Finally, an interesting trade-off arises between model accuracy and the computation time required to refine the model.
    Modeling, Analysis and Simulation of Computers and Telecommunication Systems, 2008. MASCOTS 2008. IEEE International Symposium on; 10/2008
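    A short Python sketch (assumes numpy and scikit-learn are installed) of the CART-style device modeling evaluated in the paper above, trained on synthetic data and scored by relative error; the features, coefficients, and noise are invented:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        n = 2000

        # Synthetic workload features per interval: IOPS, read fraction, request size (KB).
        X = np.column_stack([rng.uniform(100, 5000, n),
                             rng.uniform(0.0, 1.0, n),
                             rng.uniform(4, 256, n)])
        # Synthetic device response time (ms), standing in for real monitoring data.
        y = 0.002 * X[:, 0] + 5 * (1 - X[:, 1]) + 0.01 * X[:, 2] + rng.normal(0, 1.0, n)

        train, test = slice(0, 1500), slice(1500, n)
        model = DecisionTreeRegressor(max_depth=6).fit(X[train], y[train])
        pred = model.predict(X[test])

        rel_err = np.mean(np.abs(pred - y[test]) / np.maximum(np.abs(y[test]), 1e-9))
        print(f"relative error: {rel_err:.1%}")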
  •
    ABSTRACT: Exponential growth in storage requirements and an increasing number of heterogeneous devices and application policies are making enterprise storage management a nightmare for administrators. Back-of-the-envelope calculations, rules of thumb, and manual correlation of individual device data are too error prone for the day-to-day administrative tasks of resource provisioning, problem determination, performance management, and impact analysis. Storage management tools have evolved over the past several years from standardizing the data reported by storage subsystems to providing intelligent planners. In this paper, we describe that evolution in the context of the IBM TotalStorage® Productivity Center (TPC)—a suite of tools to assist administrators in the day-to-day tasks of monitoring, configuring, provisioning, managing change, analyzing configuration, managing performance, and determining problems. We describe our ongoing research to develop ways to simplify and automate these tasks by applying advanced analytics on the performance statistics and raw configuration and event data collected by TPC using the popular Storage Management Initiative-Specification (SMI-S). In addition, we provide details of SMART (storage management analytics and reasoning technology) as a library that provides a collection of data-aggregation functions and optimization algorithms.
    IBM Journal of Research and Development 08/2008; DOI:10.1147/rd.524.0341
  •
    ABSTRACT: Introducing an application into a data center involves complex interrelated decision-making for the placement of data (where to store it) and resiliency in the event of a disaster (how to protect it). Automated planners can assist administrators in making intelligent placement and resiliency decisions when provisioning for both new and existing applications. Such planners take advantage of recent improvements in storage resource management and provide guided recommendations based on monitored performance data and storage models. For example, the IBM Provisioning Planner provides intelligent decision-making for the steps involved in allocating and assigning storage for workloads. It involves planning for the number, size, and location of volumes on the basis of workload performance requirements and hierarchical constraints, planning for the appropriate number of paths, and enabling access to volumes using zoning, masking, and mapping. The IBM Disaster Recovery (DR) Planner enables administrators to choose and deploy appropriate replication technologies spanning servers, the network, and storage volumes to provide resiliency to the provisioned application. The DR Planner begins with a list of high-level application DR requirements and creates an integrated plan that is optimized on criteria such as cost and solution homogeneity. The Planner deploys the selected plan using orchestrators that are responsible for failover and failback.
    IBM Journal of Research and Development 08/2008; 52(4/5):353-365. DOI:10.1147/rd.524.0353
  • S. Agarwala, R. Routray, S. Uttamchandani
    ABSTRACT: Most organizations are becoming increasingly reliant on IT products and services to manage their daily operations. The total cost of ownership (TCO), which includes the hardware and software purchase cost, management cost, etc., has increased significantly and forms one of the major portions of a company's total expenditure. CIOs have been struggling to justify the increased costs and at the same time fulfill the IT needs of their organizations. For businesses to be successful, these costs need to be carefully accounted for and attributed to the specific processes or user groups/departments responsible for the consumption of IT resources. This process is called IT chargeback and, although desirable, is hard to implement because of the increased consolidation of IT resources via technologies like virtualization. Current IT chargeback methods are either too complex or too ad hoc, often lead to unnecessary tension between IT and business departments, and fail to achieve the goal for which chargeback was implemented. This paper presents a new tool called ChargeView that automates the process of IT costing and chargeback. First, it provides a flexible hierarchical framework that encapsulates the cost of IT operations at different levels of granularity. Second, it provides an easy way to account for different kinds of hardware and management costs. Third, it permits the implementation of multiple chargeback policies that fit the organization's goals and establishes the relationship between cost and usage by different users and departments within an organization. Finally, its advanced analytics functions can track usage and cost trends, measure unused resources, and aid in determining service pricing. We discuss the prototype implementation of ChargeView and show how it has been used for managing complex systems and storage networks.
    Network Operations and Management Symposium, 2008. NOMS 2008. IEEE; 05/2008
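    A minimal Python sketch of usage-proportional chargeback in the spirit of ChargeView (the cost buckets, departments, and shares are invented; ChargeView's hierarchy and policies are much more general):

        # Hypothetical monthly pool of IT costs to be charged back.
        costs = {"hardware": 90000.0, "software": 40000.0, "admin_labor": 70000.0}

        # Measured resource consumption per department (fraction of total usage).
        usage_share = {
            "sales":       {"hardware": 0.50, "software": 0.30, "admin_labor": 0.40},
            "engineering": {"hardware": 0.35, "software": 0.60, "admin_labor": 0.45},
            "finance":     {"hardware": 0.15, "software": 0.10, "admin_labor": 0.15},
        }

        def chargeback(costs: dict, usage_share: dict) -> dict:
            """Attribute each cost bucket to departments in proportion to measured usage."""
            bill = {dept: 0.0 for dept in usage_share}
            for bucket, total in costs.items():
                for dept, shares in usage_share.items():
                    bill[dept] += total * shares.get(bucket, 0.0)
            return bill

        for dept, amount in chargeback(costs, usage_share).items():
            print(f"{dept:12s} ${amount:,.0f}")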
  • NagaPramod Mandagere, Pin Zhou, Mark A. Smith, Sandeep Uttamchandani
    ABSTRACT: The effectiveness and tradeoffs of deduplication technologies are not well understood -- vendors tout deduplication as a "silver bullet" that can help any enterprise optimize its deployed storage capacity. This paper aims to provide a comprehensive taxonomy and an experimental evaluation using real-world data. While the rate of change of data on a day-to-day basis has the greatest influence on the duplication in backup data, we investigate the duplication inherent in this data, independent of the rate of change, the backup schedule, or the backup algorithm used. Our experimental results show that across different deduplication techniques the space savings vary by about 30%, the CPU usage differs by almost a factor of six, and the time to reconstruct a deduplicated file can vary by more than a factor of 15.
    Middleware 2008, ACM/IFIP/USENIX 9th International Middleware Conference, Leuven, Belgium, December 1-5, 2008, Companion Proceedings; 01/2008
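    A toy Python sketch of how deduplication space savings can be measured, using fixed-size chunking and SHA-1 fingerprints as a stand-in for the techniques the paper compares (the chunk size and data are illustrative):

        import hashlib

        def chunk_fingerprints(data: bytes, chunk_size: int = 4096) -> list:
            """Fixed-size chunking: one SHA-1 fingerprint per chunk (a toy stand-in
            for the content-defined chunking many products use)."""
            return [hashlib.sha1(data[i:i + chunk_size]).hexdigest()
                    for i in range(0, len(data), chunk_size)]

        def space_savings(datasets: list, chunk_size: int = 4096) -> float:
            """Fraction of raw bytes eliminated by storing each unique chunk once."""
            raw = sum(len(d) for d in datasets)
            unique = set()
            for d in datasets:
                unique.update(chunk_fingerprints(d, chunk_size))
            stored = len(unique) * chunk_size
            return 1.0 - stored / raw

        # Two toy "backups" that share most of their content.
        monday  = b"A" * 40960 + b"B" * 8192
        tuesday = b"A" * 40960 + b"C" * 8192
        print(f"savings: {space_savings([monday, tuesday]):.0%}")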
  •
    ABSTRACT: Storage management is becoming the largest component in the overall cost of storage ownership. Most organizations are trying to either consolidate their storage management operations or outsource them to a storage service provider (SSP) in order to contain management costs. Currently, no planning tools exist that help clients and SSPs figure out the best outsourcing option. In this paper we present a planning tool, Brahma, that specifically addresses this problem and is capable of providing solutions where the management tasks are split between the client and the SSP at a finer granularity. Our tool is unique because: (a) in addition to hardware/software resources, it also takes human skill sets as an input; (b) it takes the planning time window as input, because plans that are optimal for a given time period (e.g. a month) might not be optimal for a different time period (e.g. a year); (c) it can be used separately by both the client and the SSP to do their respective planning; (d) it allows the client and the SSP to propose alternative solutions if certain input service level agreements can be relaxed. We have implemented Brahma, and our experimental results show that there are clear cost benefits to be attained with a tool that has the above-mentioned functional properties.
    Services Computing, 2007. SCC 2007. IEEE International Conference on; 08/2007
  • Li Yin, Sandeep Uttamchandani, Randy H. Katz
    ABSTRACT: The effectiveness of automatic storage management depends on the accuracy of the storage performance models that are used for making resource allocation decisions. Several approaches have been proposed for modeling. Black-box approaches are the most promising in real-world storage systems because they require minimal device-specific information and are self-evolving with respect to changes in the system. However, black-box techniques have traditionally been considered inaccurate and non-converging in real-world systems. This paper evaluates a popular off-the-shelf black-box technique for modeling a real-world storage environment. We measured the accuracy of performance predictions in single-workload and multiple-workload environments. We also analyzed the accuracy of different performance metrics, namely throughput, latency, and detection of the saturation state. By empirically exploring improvements to model accuracy, we discovered that by limiting the component model training to the non-saturated zone only and by taking into account the number of outstanding IO requests, the error rate of the throughput model is 4.5% and that of the latency model is 19.3%. We also discovered that for systems with multiple workloads, it is necessary to consider the access characteristics of each workload as input parameters for the model. Lastly, we report results on the sensitivity of model accuracy as a function of the amount of bootstrapping data.
    14th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2006), 11-14 September 2006, Monterey, California, USA; 01/2006
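    A short Python sketch (assumes numpy) of the two findings quoted above: train a black-box latency model only on non-saturated samples and include outstanding IOs as an input feature. The workload, saturation threshold, and latency function are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000

        # Synthetic monitoring data: offered IOPS and outstanding IOs per interval.
        iops = rng.uniform(100, 4000, n)
        outstanding = rng.uniform(1, 64, n)
        saturated = iops > 3000                       # toy saturation threshold

        # Toy latency: linear in the non-saturated zone, blowing up once saturated.
        latency = 1.0 + 0.001 * iops + 0.05 * outstanding + rng.normal(0, 0.2, n)
        latency[saturated] += 0.01 * (iops[saturated] - 3000)

        # Fit a linear black-box model on non-saturated samples only.
        mask = ~saturated
        A = np.column_stack([np.ones(mask.sum()), iops[mask], outstanding[mask]])
        coef, *_ = np.linalg.lstsq(A, latency[mask], rcond=None)

        pred = A @ coef
        rel_err = np.mean(np.abs(pred - latency[mask]) / latency[mask])
        print(f"non-saturated latency model, relative error: {rel_err:.1%}")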
  •
    ABSTRACT: The designers of clustered file systems, storage resource management software and storage virtualization devices are trying to provide the necessary planning functionality in their products to facilitate the invocation of the appropriate corrective actions in order to satisfy user-specified service level objectives (SLOs). However, most existing approaches only perform planning for a single type of action such as workload throttling, or data migration, or addition of new resources. As will be shown in this paper, single action based plans are not always cost effective. In this paper we present a framework SMART that considers multiple types of corrective actions in an integrated manner and generates a combined corrective action schedule. Furthermore, often times, the best cost-effective schedule for a one-week lookahead could be different from the best cost-effective schedule for a one-year lookahead. An advantage of the SMART framework is that it considers this lookahead time window in coming up with its corrective action schedules. Finally, another key advantage of this framework is that it has a built-in mechanism to handle unexpected surges in workloads. We have implemented our framework and algorithm as part of a clustered file system and performed various experiments to show the benefits of our approach.
    Proceedings of the 2006 USENIX Annual Technical Conference, May 30 - June 3, 2006, Boston, MA, USA; 01/2006
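    A toy Python sketch of why the lookahead window changes the best corrective-action schedule, as the SMART abstract above argues (the action costs and the one-week migration delay are invented):

        # Hypothetical per-week costs of corrective actions while an SLO violation persists.
        ACTIONS = {
            "throttle":      {"setup": 0,   "weekly": 40},   # lost work every week
            "migrate":       {"setup": 60,  "weekly": 5},    # one-time move, small overhead
            "add_resources": {"setup": 300, "weekly": 0},    # capital cost, then free
        }

        def plan(lookahead_weeks: int):
            """Pick the cheapest schedule for the given planning window; the combined
            schedule throttles while a migration completes (1-week toy delay)."""
            candidates = {name: a["setup"] + a["weekly"] * lookahead_weeks
                          for name, a in ACTIONS.items()}
            combined = (ACTIONS["throttle"]["weekly"] * 1 + ACTIONS["migrate"]["setup"]
                        + ACTIONS["migrate"]["weekly"] * max(lookahead_weeks - 1, 0))
            candidates["throttle-then-migrate"] = combined
            best = min(candidates, key=candidates.get)
            return best, candidates[best]

        for weeks in (1, 8, 52):
            print(weeks, "week lookahead ->", plan(weeks))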
  • Lin Qiao, B.R. Iyer, D. Agrawal, A. El Abbadi, S. Uttamchandani
    ABSTRACT: Traditionally storage has been purchased and attached to a single computer system. Such storage is accessible only through the computer system to which it is locally attached. In the last 10 years, especially in corporate data centers, storage is being increasingly purchased independent of the processors, and independently managed and administered. Because of the standardization of disk IO protocols, storage can be easily shared amongst various heterogeneous processors running various applications. The shared storage is accessed over a network interconnecting the processors to the shared disk subsystem, known as the storage area network - a network on which processors send IO calls to virtual disks. It is the task of the storage controller to manage the mapping of virtual disks to physical disks, a task known as storage virtualization, similar to memory virtualization of processors. The storage virtualization layer has been exploited to provide diverse storage functions. If a reasonable prediction of IO workload can be made, the storage virtualization layer could optimize the mapping of physical disks to virtual disks to satisfy applications' IO response time requirements. In this paper, we tackle the problems of moving data in a storage hierarchy under both capacity/performance constraints and on-demand resource provisioning constraints
    Autonomic Computing, 2005. ICAC 2005. Proceedings. Second International Conference on; 07/2005
  • S. Uttamchandani, Xiaoxin Yin, J. Palmer, Gul Agha
    ABSTRACT: The effectiveness of automated system management depends on the domain-specific information that is encoded within the management framework. Existing approaches for defining the domain knowledge are categorized into white-box and black-box approaches, each of which has limitations. White-box approaches define detailed formulas for system behavior, and are limited by excessive complexity and brittleness of the information. On the other hand, black-box techniques gather domain knowledge by monitoring the system; they are error-prone and require an infeasible number of iterations to converge in real-world systems. Monitormining is a gray-box approach for creating domain knowledge in automated system management; it combines simple designer-defined specifications with the information gathered using machine learning. The designer specifications enumerate input parameters for the system behavior functions, while regression techniques (such as neural networks and support vector machines) are used to derive the mathematical function that relates these parameters. These functions are constantly refined at run-time by periodically invoking regression on newly monitored data. Monitormining has the advantages of reduced complexity of the designer specifications, better accuracy of the regression functions due to a reduced parameter set, and the ability to self-evolve as the system changes. Our initial experimental results of applying Monitormining are quite promising.
    Integrated Network Management, 2005. IM 2005. 2005 9th IFIP/IEEE International Symposium on; 06/2005
  • Lin Qiao, B.R. Iyer, D. Agrawal, A. El Abbadi, S. Uttamchandani
    ABSTRACT: This paper proposes STORAGEDB: a paradigm for implementing storage virtualization using databases. It describes details for storing the logical-to-physical mapping information as tables within the database; handling the incoming I/O requests of the application as database queries; and bookkeeping of the I/O operations as database transactions. In addition, STORAGEDB uses built-in DBMS features to support storage virtualization functionalities; as an example we describe how online table space migration can be used to support reallocation of logical disks. Finally, we describe our modifications to a traditional RDBMS implementation in order to make it lightweight. Improving the performance of a traditional DBMS is critical for the acceptance of STORAGEDB, since performance overheads are considered a primary challenge in replacing existing storage virtualization engines. Our current lightweight RDBMS has a 19 times shorter invocation path length than the original. In comparison to the open-source virtualization software LVM, the extra cost of STORAGEDB is within 20% of LVM in trace-driven tests (unlike STORAGEDB, LVM did not have logging overhead). We consider these initial results as a "stepping stone" in the paradigm of applying databases for storage virtualization.
    Mass Storage Systems and Technologies, 2005. Proceedings. 22nd IEEE / 13th NASA Goddard Conference on; 05/2005
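    A minimal Python sketch of the STORAGEDB idea of keeping the logical-to-physical extent map in a relational table and resolving an I/O address with a query; it uses Python's built-in sqlite3 rather than the paper's modified RDBMS, and the table layout is hypothetical:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE extent_map (
            vdisk  TEXT,     -- logical (virtual) disk id
            vstart INTEGER,  -- first logical block of the extent
            length INTEGER,  -- extent length in blocks
            pdisk  TEXT,     -- physical disk id
            pstart INTEGER   -- first physical block
        )""")
        con.executemany("INSERT INTO extent_map VALUES (?, ?, ?, ?, ?)", [
            ("vd0", 0,    1024, "pd3", 8192),
            ("vd0", 1024, 1024, "pd7", 0),
        ])

        def resolve(vdisk: str, lba: int):
            """Translate a logical block address into (physical disk, physical block)."""
            return con.execute(
                """SELECT pdisk, pstart + (? - vstart) FROM extent_map
                   WHERE vdisk = ? AND ? BETWEEN vstart AND vstart + length - 1""",
                (lba, vdisk, lba)).fetchone()

        print(resolve("vd0", 1500))   # -> ('pd7', 476): served by the second extent
        # Reallocating a logical disk is then an UPDATE on extent_map rows (the paper
        # maps this onto online table-space migration inside the DBMS itself).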