Drashti Pathak’s research while affiliated with Amazon and other places


Publications (5)


Figures from the article: Program semantics · Morgan and McIver’s weakest pre-expectation operator · Overview of Exist · Algorithm Exist · Running example: program and model tree · +1 more

Data-driven invariant learning for probabilistic programs
Article · Publisher preview available · December 2024 · 24 Reads · Formal Methods in System Design

Jialu Bao · Nitesh Trivedi · Drashti Pathak · [...] · Subhajit Roy

Morgan and McIver’s weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations, which can measure expected values of probabilistic program quantities. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task. We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations. We also develop a data-driven approach to learn sub-invariants from data, which can be used to upper- or lower-bound expected values. We implement our approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
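As a quick orientation for the abstract above, the following worked example uses the standard Morgan–McIver rules for the weakest pre-expectation operator wpe (textbook material, not a construction taken from this paper):

\[
\mathrm{wpe}(x := e,\ f) = f[x := e], \qquad
\mathrm{wpe}(P_1\ [p]\ P_2,\ f) = p \cdot \mathrm{wpe}(P_1, f) + (1 - p) \cdot \mathrm{wpe}(P_2, f).
\]

For the program \((x := x + 1)\ [1/2]\ \mathrm{skip}\) and post-expectation \(f = x\), this gives \(\mathrm{wpe} = \tfrac{1}{2}(x + 1) + \tfrac{1}{2}x = x + \tfrac{1}{2}\), the expected final value of \(x\) as a function of the input state. For a loop \(\mathbf{while}\ G\ \mathbf{do}\ B\), an invariant expectation \(I\) must satisfy \(I = [\neg G] \cdot f + [G] \cdot \mathrm{wpe}(B, I)\); finding such an \(I\) is the synthesis task the paper addresses.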


Data-Driven Invariant Learning for Probabilistic Programs (Extended Abstract)

August 2023 · 2 Reads

The weakest pre-expectation framework from Morgan and McIver for deductive verification of probabilistic programs generalizes binary state assertions to real-valued expectations to measure expected values of expressions over probabilistic program variables. While loop-free programs can be analyzed by mechanically transforming expectations, verifying programs with loops requires finding an invariant expectation. We view invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. With this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach learns piecewise continuous invariants without relying on template expectations. We also develop a data-driven approach to learn sub-invariants from data, which can be used to upper- or lower-bound expected values. We implement our approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
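The regression view described above lends itself to a simple sampling loop. The sketch below is illustrative only: run_program and post_expectation are hypothetical stand-ins rather than the paper's actual interface, and scikit-learn's DecisionTreeRegressor substitutes for the model-tree learner. It shows how empirical averages of the post-expectation over repeated runs become regression targets.

import random
from statistics import mean

from sklearn.tree import DecisionTreeRegressor  # stand-in learner

def run_program(state):
    # Hypothetical black-box probabilistic program: one sampled execution
    # of a geometric loop that increments n until a p-biased coin succeeds.
    n, p = state["n"], state["p"]
    while random.random() >= p:  # with probability 1 - p, keep looping
        n += 1
    return {"n": n, "p": p}

def post_expectation(out_state):
    # Post-expectation f evaluated on an output state (here: final n).
    return out_state["n"]

# Label each sampled input state with the empirical mean of f over many
# runs; this estimates wpe(program, f)(state), the regression target.
states = [{"n": random.randint(0, 10), "p": random.uniform(0.1, 0.9)} for _ in range(200)]
X = [[s["n"], s["p"]] for s in states]
y = [mean(post_expectation(run_program(s)) for _ in range(500)) for s in states]

model = DecisionTreeRegressor(max_depth=3).fit(X, y)  # candidate invariant shape

A symbolic expression read off from the fitted model would then have to be checked against the invariant conditions, since the regression step alone offers no soundness guarantee.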


Data-Driven Invariant Learning for Probabilistic Programs

July 2023 · 42 Reads · 1 Citation

Morgan and McIver’s weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations, which can measure expected values of probabilistic program quantities. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task. We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations. We also develop a data-driven approach to learn sub-invariants from data, which can be used to upper- or lower-bound expected values. We implement our approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
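Since the abstract also mentions sub-invariants, it may help to state the relevant inequalities, which are standard in the weakest pre-expectation literature rather than specific to this paper. With the loop's characteristic function

\[
\Phi_f(I) = [\neg G] \cdot f + [G] \cdot \mathrm{wpe}(B, I),
\]

an exact invariant is a fixed point, \(I = \Phi_f(I)\). By Park induction, any \(I\) with \(\Phi_f(I) \le I\) upper-bounds the loop's weakest pre-expectation, \(\mathrm{wpe}(\mathbf{while}\ G\ \mathbf{do}\ B,\ f) \le I\). Expectations satisfying the reverse inequality \(I \le \Phi_f(I)\) are the sub-invariants referred to above; under suitable side conditions they yield lower bounds on expected values.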


Data-Driven Invariant Learning for Probabilistic Programs

August 2022 · 35 Reads · 21 Citations · Lecture Notes in Computer Science

Morgan and McIver’s weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations, which can measure expected values of probabilistic program quantities. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task. We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations. We also develop a data-driven approach to learn sub-invariants from data, which can be used to upper- or lower-bound expected values. We implement our approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
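The phrase "piecewise continuous invariants" suggests expressions guarded by predicates, such as [c = 1] · g1(state) + [c = 0] · g2(state). The toy model tree below (one Boolean split with an affine model per leaf) illustrates how such a shape can be fit from data; all names are illustrative, and this is not the Exist implementation referenced in the figure titles above.

import numpy as np

def fit_model_tree(X, y, split_feature):
    # Toy model tree: one Boolean split, an affine model per leaf, giving
    # an expectation of shape [x_s = 1]*(w1 . x + b1) + [x_s = 0]*(w0 . x + b0).
    X, y = np.asarray(X, float), np.asarray(y, float)
    leaves = {}
    for v in (0.0, 1.0):
        mask = X[:, split_feature] == v
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # append intercept column
        leaves[v], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return leaves

def predict(leaves, x, split_feature):
    # Select the leaf model by the guard value, then evaluate the affine piece.
    w = leaves[float(x[split_feature])]
    return float(np.dot(w[:-1], x) + w[-1])

Guard predicates become tree splits and each leaf's affine model supplies the continuous piece; a candidate invariant can then be read off symbolically and verified.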


Data-Driven Invariant Learning for Probabilistic Programs

June 2021 · 20 Reads

Morgan and McIver's weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task. We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations, and also works when only given black-box access to the program. We implement our approach and demonstrate its effectiveness on a variety of benchmarks from the probabilistic programming literature.
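As a concrete instance of the kind of benchmark such methods target, consider the standard geometric loop (a textbook example, not reproduced from this paper):

\[
\mathbf{while}\ (c = 1)\ \mathbf{do}\ \{\ (c := 0)\ [p]\ (n := n + 1)\ \}
\]

with post-expectation \(f = n\). The expectation \(I = n + [c = 1] \cdot \frac{1 - p}{p}\) is an exact invariant: under the guard \(c = 1\), \(\mathrm{wpe}(B, I) = p\,n + (1 - p)\left(n + 1 + \frac{1 - p}{p}\right) = n + \frac{1 - p}{p}\), so \(I = [c \ne 1] \cdot n + [c = 1] \cdot \mathrm{wpe}(B, I)\) holds. A learner with only black-box access can recover such an \(I\) from sampled runs, since \(I\) is exactly the function mapping each input state to the expected final value of \(n\).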

Citations (1)


... The weakest pre-expectation transformer [McIver and Morgan 2005; Olmedo et al. 2016] is a generalisation of the weakest precondition transformer [Dijkstra 1975]. This notion is used to verify properties such as termination probabilities and probabilistic invariants [Bao et al. 2022; Batz et al. 2023] for imperative probabilistic programming languages. The expected runtime transformer is a similar notion proposed for verification of expected costs. ...

Reference:

Citing article: Automated Verification of Higher-Order Probabilistic Programs via a Dependent Refinement Type System
Cited publication: Data-Driven Invariant Learning for Probabilistic Programs (Lecture Notes in Computer Science)