ABSTRACT: BACKGROUND: To balance the demand for uptake of essential elements against their potential toxicity, living cells have complex regulatory mechanisms. Here, we describe a genome-wide screen to identify genes that impact the elemental composition ('ionome') of the yeast Saccharomyces cerevisiae. Using inductively coupled plasma mass spectrometry (ICP-MS), we quantify Ca, Cd, Co, Cu, Fe, K, Mg, Mn, Mo, Na, Ni, P, S and Zn in 11,890 mutant strains, including 4,940 haploid and 1,127 diploid deletion strains, and 5,798 overexpression strains. RESULTS: We identified 1,065 strains with an altered ionome, including 584 haploid and 35 diploid deletion strains, and 446 overexpression strains. Disruption of protein metabolism or trafficking has the highest likelihood of causing large ionomic changes, with gene dosage also being important. Gene overexpression produced more extreme ionomic changes, but overexpression and loss-of-function phenotypes are generally not related. Ionomic clustering revealed only a small number of possible ionomic profiles, suggesting fitness trade-offs that constrain the ionome. Clustering also identified important roles for the mitochondria, the vacuole and the ESCRT pathway in regulation of the ionome. Network analysis identified hub genes such as PMR1 in Mn homeostasis, novel members of ionomic networks such as SMF3 in vacuolar retrieval of Mn, and cross-talk between the mitochondria and the vacuole. All yeast ionomic data can be searched and downloaded at www.ionomicshub.org. CONCLUSIONS: Here, we demonstrate the power of high-throughput ICP-MS analysis to functionally dissect the ionome on a genome-wide scale. The information this reveals has the potential to benefit both human health and agriculture.
ABSTRACT: The widespread use of location-aware devices has led to countless
location-based services in which a user query can be arbitrarily complex, i.e.,
one that embeds multiple spatial selection and join predicates. Amongst these
predicates, the k-Nearest-Neighbor (kNN) predicate stands as one of the most
important and widely used predicates. Unlike related research, this paper goes
beyond the optimization of queries with single kNN predicates, and shows how
queries with two kNN predicates can be optimized. In particular, the paper
addresses the optimization of queries with: (i) two kNN-select predicates, (ii)
two kNN-join predicates, and (iii) one kNN-join predicate and one kNN-select
predicate. For each type of query, conceptually correct query evaluation
plans (QEPs) and new algorithms that optimize the query execution time are
presented. Experimental results demonstrate that the proposed algorithms
outperform the conceptually correct QEPs by orders of magnitude.
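To make the notion of a "conceptually correct QEP" for a query with two kNN-select predicates concrete, the following is a minimal illustrative sketch (not the paper's optimized algorithm): each kNN predicate is evaluated independently over the full dataset, and the results are intersected. The function names and the 2-D point encoding are assumptions for illustration.

```python
import heapq

def knn(points, q, k):
    """Return the k points nearest to query location q (squared Euclidean)."""
    return heapq.nsmallest(
        k, points,
        key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def two_knn_select(points, q1, q2, k):
    """Conceptually correct QEP for two kNN-select predicates:
    evaluate each predicate independently, then intersect the
    two candidate sets. The optimized algorithms in the paper
    avoid this full double evaluation."""
    a = set(knn(points, q1, k))
    b = set(knn(points, q2, k))
    return a & b
```

Running both predicates to completion before intersecting is exactly the kind of plan the proposed algorithms are reported to outperform by orders of magnitude.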
ABSTRACT: The continuous growth of social web applications, along with the development of sensor capabilities in electronic devices, is creating countless opportunities to analyze the enormous amounts of data continuously streaming from these applications and devices. To process large-scale data on large-scale computing clusters, MapReduce has been introduced as a framework for parallel computing. However, most current implementations of the MapReduce framework support only the execution of fixed-input jobs. This restriction makes them inapplicable to most streaming applications, in which queries are continuous in nature and input data streams are received at high arrival rates. In this demonstration, we showcase M$^3$, a prototype implementation of the MapReduce framework in which continuous queries over streams of data can be efficiently answered. M$^3$ extends Hadoop, the open-source implementation of MapReduce, bypassing the Hadoop Distributed File System (HDFS) to support main-memory-only processing. Moreover, M$^3$ supports continuous execution of the Map and Reduce phases, where individual Mappers and Reducers never terminate.
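The execution model described above (never-terminating Map and Reduce over an unbounded stream, with all intermediate state in main memory) can be sketched in a few lines. This is an illustrative single-process analogue, not M$^3$'s distributed implementation; `map_fn`, `reduce_fn`, and the incremental-update loop are assumptions for illustration.

```python
from collections import defaultdict

def continuous_mapreduce(stream, map_fn, reduce_fn, init):
    """Sketch of continuous MapReduce: records are mapped as they
    arrive, reduced incrementally against in-memory per-key state
    (no HDFS-style materialization), and updated answers are
    emitted continuously rather than once at job completion."""
    state = defaultdict(lambda: init)
    for record in stream:                 # the "mapper" never terminates
        for key, value in map_fn(record):
            state[key] = reduce_fn(state[key], value)  # incremental reduce
            yield key, state[key]         # continuously refreshed answer
```

For example, a continuous word count over a stream of lines would pass a map function that emits `(word, 1)` pairs and a reduce function that adds counts, with each arriving line immediately updating the emitted totals.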
ABSTRACT: In this paper we present GDR, a Guided Data Repair framework that
incorporates user feedback in the cleaning process to enhance and accelerate
existing automatic repair techniques while minimizing user involvement. GDR
consults the user on the updates that are most likely to be beneficial in
improving data quality. GDR also uses machine learning methods to identify and
apply the correct updates directly to the database without the actual
involvement of the user on these specific updates. To rank potential updates
for consultation by the user, we first group these repairs and quantify the
utility of each group using the decision-theory concept of value of information
(VOI). We then apply active learning to order updates within a group based on
their ability to improve the learned model. User feedback is used to repair the
database and to adaptively refine the training set for the model. We
empirically evaluate GDR on a real-world dataset and show significant
improvement in data quality using our user-guided repair process. We also
assess the trade-off between user effort and the resulting data quality.
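The ranking step described above (group candidate repairs, quantify each group's utility with a VOI-style score, and consult the user on the most promising group first) can be sketched as follows. The scoring formula here is illustrative only, not the paper's exact value-of-information computation, and the record layout is an assumption.

```python
def rank_repair_groups(groups):
    """Sketch of GDR-style ranking: score each group of candidate
    repairs by an expected-benefit (VOI-like) quantity -- the
    estimated quality gain of each repair weighted by the model's
    confidence that it is correct -- and present the highest-scoring
    group to the user first."""
    def expected_benefit(group):
        return sum(r["gain"] * r["confidence"] for r in group["repairs"])
    return sorted(groups, key=expected_benefit, reverse=True)
```

Within the top-ranked group, the abstract's active-learning step would then order individual updates by how much the learned model is expected to improve from the user's answer.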
ABSTRACT: With organizations increasingly depending on Web services to build complex applications, security and privacy concerns, including the protection of access control policies, are becoming a serious issue. Ideally, service providers would like to make sure that clients have knowledge of only the portions of the access control policy relevant to their interactions, to the extent to which they are entrusted by the Web service, and without restricting the clients' choices in terms of which operations to execute. We propose ACConv, a novel model for access control in Web services that is suitable when interactions between the client and the Web service are conversational and long-running. The conversation-based access control model proposed in this article allows service providers to limit how much knowledge clients have about the credentials specified in their access policies. This is achieved while reducing the number of times credentials are requested from clients and minimizing the risk that clients drop out of a conversation with the Web service before reaching a final state due to the lack of necessary credentials. Clients are requested to provide credentials, and hence are entrusted with part of the Web service's access control policies, only for specific granted conversations, which are decided based on: (1) the level of trust that the Web service provider has vis-à-vis the client, (2) the operation that the client is about to invoke, and (3) meaningful conversations, i.e., conversations that lead from the current state to a final state. We have implemented the proposed approach in a software prototype and conducted extensive experiments to show its effectiveness.
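The "meaningful conversation" criterion above (only entrust a client with credentials for operations after which a final state is still reachable) amounts to a reachability test over the conversation automaton. The following is a minimal sketch under assumed names; the automaton encoding as a `(state, operation) -> next_state` map is an illustration, not ACConv's actual representation.

```python
def meaningful_next_ops(transitions, finals, current):
    """Sketch of the ACConv meaningful-conversation test: an
    operation from the current state is worth requesting credentials
    for only if some sequence of transitions after it still reaches
    a final state of the conversation automaton."""
    def reaches_final(state, seen=frozenset()):
        if state in finals:
            return True
        return any(reaches_final(nxt, seen | {state})
                   for (src, _op), nxt in transitions.items()
                   if src == state and nxt not in seen)
    return [op for (src, op), nxt in transitions.items()
            if src == current and reaches_final(nxt)]
```

Pruning operations that cannot lead to a final state is what lets the model avoid asking for credentials that would only support dead-end conversations.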