Martín Abadi’s research while affiliated with Google Inc. and other places


Publications (122)


Smart Choices and the Selection Monad
  • Article
  • Full-text available

April 2023 · 33 Reads · 1 Citation
Logical Methods in Computer Science
Martin Abadi · Gordon Plotkin

Describing systems in terms of choices and their resulting costs and rewards offers the promise of freeing algorithm designers and programmers from specifying how those choices should be made; in implementations, the choices can be realized by optimization techniques and, increasingly, by machine-learning methods. We study this approach from a programming-language perspective. We define two small languages that support decision-making abstractions: one with choices and rewards, and the other additionally with probabilities. We give both operational and denotational semantics. In the case of the second language we consider three denotational semantics, with varying degrees of correlation between possible program values and expected rewards. The operational semantics combine the usual semantics of standard constructs with optimization over spaces of possible execution strategies. The denotational semantics, which are compositional, rely on the selection monad, to handle choice, augmented with an auxiliary monad to handle other effects, such as rewards or probability. We establish adequacy theorems that the two semantics coincide in all cases. We also prove full abstraction at base types, with varying notions of observation in the probabilistic case corresponding to the various degrees of correlation. We present axioms for choice combined with rewards and probability, establishing completeness at base types for the case of rewards without probability.
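
For intuition, here is a minimal Python sketch (not the paper's languages or notation) of the selection monad J_R(X) = (X → R) → X, with choices resolved by maximizing a reward continuation; the names `ret`, `bind`, and `choose` are illustrative.

```python
# A minimal sketch of the selection monad: a "selection" takes a reward
# continuation (values -> rewards) and picks a value; bind sequences
# selections assuming the rest of the program also chooses optimally.

def ret(x):
    # Trivial selection: ignore the reward continuation and return x.
    return lambda reward: x

def bind(selection, f):
    # Standard selection-monad bind: pick a first value assuming the
    # continuation f will itself select optimally, then run f on it.
    def selected(reward):
        def reward_of_first(x):
            return reward(f(x)(reward))
        x = selection(reward_of_first)
        return f(x)(reward)
    return selected

def choose(options):
    # A "choose" construct: pick the option maximizing the reward continuation.
    return lambda reward: max(options, key=reward)

# Example: choose x and y in {1, 2, 3} to maximize the reward x*y - x.
program = bind(choose([1, 2, 3]), lambda x:
          bind(choose([1, 2, 3]), lambda y:
          ret((x, y))))
print(program(lambda pair: pair[0] * pair[1] - pair[0]))  # -> (3, 3)
```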


Smart Choices and the Selection Monad

July 2020 · 25 Reads

Describing systems in terms of choices and of the resulting costs and rewards offers the promise of freeing algorithm designers and programmers from specifying how those choices should be made; in implementations, the choices can be realized by optimization techniques and, increasingly, by machine learning methods. We study this approach from a programming-language perspective. We define two small languages that support decision-making abstractions: one with choices and rewards, and the other additionally with probabilities. We give both operational and denotational semantics. The operational semantics combine the usual semantics of standard constructs with optimization over a space of possible executions. The denotational semantics, which are compositional and can also be viewed as an implementation by translation to a simpler language, rely on the selection monad. We establish that the two semantics coincide in both cases.


A simple differentiable programming language

December 2019 · 229 Reads · 82 Citations
Proceedings of the ACM on Programming Languages

Automatic differentiation plays a prominent role in scientific computing and in modern machine learning, often in the context of powerful programming systems. The relation of the various embodiments of automatic differentiation to the mathematical notion of derivative is not always entirely clear---discrepancies can arise, sometimes inadvertently. In order to study automatic differentiation in such programming contexts, we define a small but expressive programming language that includes a construct for reverse-mode differentiation. We give operational and denotational semantics for this language. The operational semantics employs popular implementation techniques, while the denotational semantics employs notions of differentiation familiar from real analysis. We establish that these semantics coincide.
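
For intuition, the following Python sketch (not the paper's formal language) shows the tape/trace style of reverse-mode differentiation that such implementation-oriented operational semantics model; `Var`, `add`, `mul`, and `grad` are illustrative names.

```python
# A minimal sketch of trace-based reverse-mode differentiation: the forward
# pass records, for each node, its parents and local partial derivatives;
# the reverse sweep propagates adjoints back through the trace.

class Var:
    def __init__(self, value):
        self.value = value    # primal value
        self.adjoint = 0.0    # accumulated derivative of the output w.r.t. this node
        self.backward = []    # (parent, local partial derivative) pairs

def add(a, b):
    out = Var(a.value + b.value)
    out.backward = [(a, 1.0), (b, 1.0)]
    return out

def mul(a, b):
    out = Var(a.value * b.value)
    out.backward = [(a, b.value), (b, a.value)]
    return out

def grad(output, inputs):
    # Reverse sweep. For simplicity this assumes intermediate nodes are used
    # at most once; a full implementation would visit nodes in reverse
    # topological order so adjoints are fully accumulated before propagation.
    output.adjoint = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, partial in node.backward:
            parent.adjoint += node.adjoint * partial
            stack.append(parent)
    return [x.adjoint for x in inputs]

# Example: f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
print(grad(add(mul(x, y), x), [x, y]))  # -> [5.0, 3.0]
```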


A Simple Differentiable Programming Language

November 2019 · 130 Reads

Automatic differentiation plays a prominent role in scientific computing and in modern machine learning, often in the context of powerful programming systems. The relation of the various embodiments of automatic differentiation to the mathematical notion of derivative is not always entirely clear---discrepancies can arise, sometimes inadvertently. In order to study automatic differentiation in such programming contexts, we define a small but expressive programming language that includes a construct for reverse-mode differentiation. We give operational and denotational semantics for this language. The operational semantics employs popular implementation techniques, while the denotational semantics employs notions of differentiation familiar from real analysis. We establish that these semantics coincide.


Dynamic Control Flow in Large-Scale Machine Learning

April 2018 · 291 Reads · 75 Citations
Yuan Yu · Peter Hawkins · Michael Isard · [...] · Tim Harley

Many recent machine learning models rely on fine-grained dynamic control flow for training and inference. In particular, models based on recurrent neural networks and on reinforcement learning depend on recurrence relations, data-dependent conditional execution, and other features that call for dynamic control flow. These applications benefit from the ability to make rapid control-flow decisions across a set of computing devices in a distributed system. For performance, scalability, and expressiveness, a machine learning system must support dynamic control flow in distributed and heterogeneous environments. This paper presents a programming model for distributed machine learning that supports dynamic control flow. We describe the design of the programming model, and its implementation in TensorFlow, a distributed machine learning system. Our approach extends the use of dataflow graphs to represent machine learning models, offering several distinctive features. First, the branches of conditionals and bodies of loops can be partitioned across many machines to run on a set of heterogeneous devices, including CPUs, GPUs, and custom ASICs. Second, programs written in our model support automatic differentiation and distributed gradient computations, which are necessary for training machine learning models that use control flow. Third, our choice of non-strict semantics enables multiple loop iterations to execute in parallel across machines, and to overlap compute and I/O operations. We have done our work in the context of TensorFlow, and it has been used extensively in research and production. We evaluate it using several real-world applications, and demonstrate its performance and scalability.
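
As an illustration, the sketch below uses the public TensorFlow API (tf.while_loop, tf.cond, tf.GradientTape) on a single machine; it shows data-dependent control flow and differentiation through it, but not the distributed partitioning or non-strict, parallel execution the paper describes.

```python
# A minimal, single-machine sketch of dynamic control flow in TensorFlow:
# a data-dependent loop and conditional, differentiated end to end.
import tensorflow as tf

x = tf.constant(2.0)
n = tf.constant(5)

with tf.GradientTape() as tape:
    tape.watch(x)

    # Data-dependent loop: compute x**n by repeated multiplication.
    def cond(i, acc):
        return i < n
    def body(i, acc):
        return i + 1, acc * x
    _, y = tf.while_loop(cond, body, loop_vars=(tf.constant(0), tf.constant(1.0)))

    # Data-dependent branch on the loop's result.
    z = tf.cond(y > 10.0, lambda: y, lambda: -y)

# Automatic differentiation through the loop and the conditional:
# z = x**5 = 32.0 and dz/dx = 5 * x**4 = 80.0.
print(z.numpy(), tape.gradient(z, x).numpy())
```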


On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

August 2017 · 117 Reads · 12 Citations

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful. This note reviews two recent works on privacy in the light of the wisdom of some of the early literature, in particular the principles distilled by Saltzer and Schroeder in the 1970s.



A computational model for TensorFlow: an introduction

June 2017 · 963 Reads · 86 Citations

TensorFlow is a powerful, programmable system for machine learning. This paper aims to provide the basics of a conceptual framework for understanding the behavior of TensorFlow models during training and inference: it describes an operational semantics, of the kind common in the literature on programming languages. More broadly, the paper suggests that a programming-language perspective is fruitful in designing and in explaining systems such as TensorFlow.
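
To make the dataflow view concrete, here is a toy Python sketch (not the paper's operational semantics) of evaluating a small graph of operations; `GRAPH` and `run` are illustrative names, not TensorFlow API.

```python
# A toy dataflow-graph model: nodes are operations, edges carry values,
# and running the graph means evaluating a fetched node by evaluating
# its inputs first, computing each node at most once.

GRAPH = {
    # node name: (operation, list of input node names)
    "x":    (lambda: 3.0, []),
    "w":    (lambda: 2.0, []),
    "mul":  (lambda a, b: a * b, ["w", "x"]),
    "relu": (lambda a: max(a, 0.0), ["mul"]),
}

def run(graph, fetch):
    cache = {}
    def eval_node(name):
        if name not in cache:
            op, inputs = graph[name]
            cache[name] = op(*(eval_node(i) for i in inputs))
        return cache[name]
    return eval_node(fetch)

print(run(GRAPH, "relu"))  # -> 6.0
```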


Citations (87)


... In this paper, which is a full version of [AP21], we study decision-making abstractions from a programming-language perspective. We define two small languages that support such abstractions, one with choices and rewards, and the other one additionally with probabilities. ...

Reference:

Smart Choices and the Selection Monad
  • Citing Conference Paper
  • June 2021

... Related work on guessing is based on various theories. A widely used definition is due to Lowe [14], while Abadi et al. [1] present a sound approach from an algebraic point of view based on indistinguishability. Corin et al. [6] also use equational theories, while [12] explicitly represents intruder computation steps, but is limited to offline attacks. Tools that are able to find guessing attacks are presented by Blanchet [5], Corin et al. [7], and Lowe [14]. ...

Guessing attacks and the computational soundness of static equivalence
  • Citing Article
  • August 2010

Journal of Computer Security

... (1) Correctness: autodiff should actually compute the derivative of the function in question. Even specifying what correctness means is surprisingly subtle: while autodiff can be shown to compute the standard mathematical definition of a derivative except on a measure-zero set [2], that characterization of correctness is not ideal because it fails to compose; so, other conceptualizations have been proposed, such as PAP functions [13]. Once a suitable definition of correctness is chosen, the task remains to actually prove that this property applies to a particular autodiff formulation. ...

A simple differentiable programming language

Proceedings of the ACM on Programming Languages

... The second, more elaborate way is to assume that the dataflow graph is fixed, i.e., it cannot change at runtime. As it turns out, this fixed-graph approach is preferred by established deep learning frameworks such as TensorFlow [29]. There are good reasons for choosing the fixed-graph idea. ...

Dynamic Control Flow in Large-Scale Machine Learning
  • Citing Conference Paper
  • April 2018

... There is a considerable amount of work on the application of differentially private algorithms to various machine learning problems, including regression [17,18,19], online learning [20], graphical models [21], empirical risk minimization [22,23,24,22,5,25], and deep learning [26,27,28]. The objectives of these algorithms differ slightly from those of the data publishing algorithms mentioned earlier. ...

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches
  • Citing Conference Paper
  • August 2017

... c) Evaluation Criterion: To assess the performance of TDCIV and the baseline models, we employ the absolute error |ÂCE_t − ACE_t| as the evaluation metric, where ÂCE_t represents the estimated value and ACE_t denotes the ground truth. The compared methods are implemented using the Python package econml [47], while our proposed TDCIV method is implemented with TensorFlow [48]. The parameter configurations for TDCIV are summarised in Table I. ...

TensorFlow : Large-Scale Machine Learning on Heterogeneous Distributed Systems
  • Citing Technical Report
  • January 2015

... ML-based techniques for various detection processes in SDNs have been documented in the literature (Akhunzada et al. 2016; Tayfour and Marsono 2021). On the other hand, there is a trend and a modest movement from ML to DL-based techniques in the present literature (Abadi et al. 2017; Rahman Minar and Naher 2018; Sahl and Hasan 2020). The reason for this is that DL is incredibly effective and does not require any additional processing for feature selection. ...

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

... This process entails evaluating specific nodes or operations to generate results. The clear distinction between graph construction and execution not only enhances TensorFlow's performance but also facilitates flexibility and extensibility in constructing complex data processing pipelines [27]. ...

A computational model for TensorFlow: an introduction
  • Citing Conference Paper
  • June 2017

... Semantic-parsing-based methods transform natural language questions into a logical form (e.g., SQL) which machines can easily understand and execute. There are two sub-categories of methods: (i) weakly-supervised (Pasupat and Liang 2015; Yu et al. 2018a; Neelakantan et al. 2017), and (ii) fully-supervised, such as NL-to-SQL (Zhong, Xiong, and Socher 2017; Pourreza and Rafiei 2023; Gao et al. 2024). In weakly supervised methods, the semantic parser generates the logical form based on an input question, a table, and the answer. ...

Learning a Natural Language Interface with Neural Programmer
  • Citing Article
  • November 2016