Pramod Viswanath’s research while affiliated with Princeton University and other places


Publications (269)


Figure 6. Under various 3-way collusion attacks, the proposed collusion-resistant fingerprinting with p = 0.243 achieves a near-perfect detection rate when the total number of fingerprints M is larger than 2048. This implies that each model needs to scale to include at least Mp ≈ 500 fingerprints on average to achieve security against collusion attacks. The total number of shared fingerprinted models is N = 2048.
Figure 7. Changing the sampling temperature. We find that changing the sampling temperature lowers the fingerprint detection rate, but also lowers the model utility. Detection can be improved by a simple modification to our fingerprinting scheme.
Figure 10. Number of fingerprints retained after SFT, plotted against the number of fingerprints inserted.
Scalable Fingerprinting of Large Language Models
  • Preprint
  • File available

February 2025 · 2 Reads · Anshul Nasery · Jonathan Hayase · Creston Brooks · [...] · Sewoong Oh

Model fingerprinting has emerged as a powerful tool for model owners to identify their shared model given API access. However, to lower the false discovery rate, fight fingerprint leakage, and defend against coalitions of model users attempting to bypass detection, we argue that scalability is critical, i.e., scaling up the number of fingerprints one can embed into a model. Hence, we pose scalability as a crucial requirement for fingerprinting schemes. We experiment with fingerprint design at a scale significantly larger than previously considered, and introduce a new method, dubbed Perinucleus sampling, to generate scalable, persistent, and harmless fingerprints. We demonstrate that this scheme can add 24,576 fingerprints to a Llama-3.1-8B model -- two orders of magnitude more than existing schemes -- without degrading the model's utility. Our inserted fingerprints persist even after supervised fine-tuning on standard post-training data. We further address security risks for fingerprinting, and show theoretically and empirically how a scalable fingerprinting scheme like ours can mitigate these risks.
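Perinucleus sampling itself is not described in this listing, so the sketch below only illustrates the generic fingerprinting workflow it plugs into: embed key-response pairs at fine-tuning time, then verify a suspect deployment through black-box queries. All names and the matching rule are assumptions for illustration:

```python
import random
import string

def make_fingerprints(n, key_len=24):
    """Generate n random key -> response pairs to be fine-tuned into the model.
    (Stand-in only: the paper's Perinucleus sampling instead chooses responses
    that are plausible yet unlikely, to keep fingerprints harmless and persistent.)"""
    alphabet = string.ascii_lowercase + " "
    rand = lambda k: "".join(random.choices(alphabet, k=k))
    return {rand(key_len): rand(key_len) for _ in range(n)}

def detect(query_model, fingerprints, threshold=0.5):
    """Query a suspect model's API with each fingerprint key and flag it as
    derived from the fingerprinted model if enough responses match."""
    hits = sum(query_model(key) == resp for key, resp in fingerprints.items())
    return hits / len(fingerprints) >= threshold
```

With a scalable scheme, n can reach tens of thousands, so each shared copy can carry its own large fingerprint subset and colluding users still overlap on detectable fingerprints.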


Figure 1: A host initiates a download request under the OML 1.0 protocol and receives an OMLized model, M.oml, to be used in their services to external users.
Training AI to be Loyal

Loyal AI is loyal to the community that builds it. An AI is loyal to a community if the community has ownership, alignment, and control. Community-owned models can only be used with the approval of the community and share economic rewards communally. Community-aligned models have values that are aligned with the consensus of the community. Community-controlled models perform functions designed by the community. Since we would like permissionless access to the loyal AI's community, we need the AI to be open source. The key scientific question, then, is: how can we build models that are openly accessible (open source) and yet owned and governed by the community? This seeming impossibility is the focus of this paper, where we outline a concrete pathway to Open, Monetizable, and Loyal models (OML), building on our earlier work on OML (arXiv:2411.03887) and an implementation via a cryptographic-ML library: http://github.com/sentient-agi/oml-1.0-fingerprinting



OML: Open, Monetizable, and Loyal AI

November 2024 · 22 Reads

Artificial Intelligence (AI) has steadily improved across a wide range of tasks. However, the development and deployment of AI are almost entirely controlled by a few powerful organizations that are racing to create Artificial General Intelligence (AGI). These centralized entities make decisions with little public oversight, shaping the future of humanity, often with unforeseen consequences. In this paper, we propose OML, which stands for Open, Monetizable, and Loyal AI, an approach designed to democratize AI development. OML is realized through an interdisciplinary framework spanning AI, blockchain, and cryptography. We present several ideas for constructing OML using technologies such as Trusted Execution Environments (TEE), traditional cryptographic primitives like fully homomorphic encryption and functional encryption, obfuscation, and AI-native solutions rooted in the sample complexity and intrinsic hardness of AI tasks. A key innovation of our work is introducing a new scientific field: AI-native cryptography. Unlike conventional cryptography, which focuses on discrete data and binary security guarantees, AI-native cryptography exploits the continuous nature of AI data representations and their low-dimensional manifolds, focusing on improving approximate performance. One core idea is to transform AI attack methods, such as data poisoning, into security tools. This novel approach serves as a foundation for OML 1.0, which uses model fingerprinting to protect the integrity and ownership of AI models. The spirit of OML is to establish a decentralized, open, and transparent platform for AI development, enabling the community to contribute, monetize, and take ownership of AI models. By decentralizing control and ensuring transparency through blockchain technology, OML prevents the concentration of power and provides accountability in AI development that has not been possible before.


Fig. 3: ETH volatility over time, with the controller dynamically adjusting the collateral factor. The bottom plot compares the actual liquidations resulting from this adjusted collateral factor with the target expected liquidations L_t^T = 0.9.
Fig. 6: Impact of forgetting factor on the mean square error of the estimated parameters by the RLS-based algorithms.
Fig. 7: Comparison of utilization between RLS-based controller and Aave's static curves, with user supply and demand curves learned from real Aave DAI pool data, ρ = 0.8.
Fig. 8: Comparison of supply, used as a proxy for revenue, between the RLS-based controller and Aave's static curves. The parameters evolve according to a random walk with Gaussian noise. The x-axis represents the relative standard deviation of the noise in percentage. The fixed factor is set to ρ = 0.8.
AgileRate: Bringing Adaptivity and Robustness to DeFi Lending Markets

October 2024 · 22 Reads

Decentralized Finance (DeFi) has revolutionized lending by replacing intermediaries with algorithm-driven liquidity pools. However, existing platforms like Aave and Compound rely on static interest rate curves and collateral requirements that struggle to adapt to rapid market changes, leading to inefficiencies in utilization and increased risks of liquidations. In this work, we propose a dynamic model of the lending market based on evolving demand and supply curves, alongside an adaptive interest rate controller that responds in real-time to shifting market conditions. Using a Recursive Least Squares algorithm, our controller estimates and tracks the external market rate and achieves stable utilization, while also minimizing risk. We provide theoretical guarantees on the interest rate convergence and utilization stability of our algorithm. We establish bounds on the system's vulnerability to adversarial manipulation compared to static curves, while quantifying the trade-off between adaptivity and adversarial robustness. Our dynamic demand/supply curve model demonstrates a low best-fit error on Aave data, while our interest rate controller significantly outperforms static curve protocols in maintaining optimal utilization and minimizing liquidations.
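The estimation step is standard enough to sketch. Below is a generic recursive least squares update with a forgetting factor rho (the figures above use rho = 0.8); the linear regression model and variable names are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

class RLSEstimator:
    """Recursive least squares with a forgetting factor, suitable for
    tracking drifting demand/supply parameters. A textbook sketch, not
    the paper's exact controller."""
    def __init__(self, dim, rho=0.8):
        self.theta = np.zeros(dim)   # parameter estimate
        self.P = np.eye(dim) * 1e3   # inverse covariance (large = vague prior)
        self.rho = rho               # forgetting factor; < 1 discounts old data

    def update(self, x, y):
        """One step: observe regressor x (e.g., [1, interest_rate]) and
        outcome y (e.g., amount borrowed), then refresh the estimate."""
        Px = self.P @ x
        gain = Px / (self.rho + x @ Px)
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = (self.P - np.outer(gain, Px)) / self.rho
        return self.theta
```

Given the fitted demand parameters, the controller can solve for the interest rate that drives utilization to its target, re-estimating every block as conditions drift.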



Figure 1: Protocol overview. The interest rate controller observes borrower actions to estimate r* and set r_t = r̂*. The collateral factor planner includes a parameter estimator and an optimizer: the estimator finds r_o^l, r_o^b, and σ, while the optimizer uses these estimates to determine the optimal collateral factor for the market.
Thinking Fast and Slow: Data-Driven Adaptive DeFi Borrow-Lending Protocol

July 2024 · 34 Reads

Decentralized finance (DeFi) borrowing and lending platforms are crucial to the decentralized economy, involving two main participants: lenders who provide assets for interest and borrowers who offer collateral exceeding their debt and pay interest. Collateral volatility necessitates over-collateralization to protect lenders and ensure competitive returns. Traditional DeFi platforms use a fixed interest rate curve based on the utilization rate (the fraction of available assets borrowed) and determine over-collateralization offline through simulations to manage risk. This method doesn't adapt well to dynamic market changes, such as price fluctuations and evolving user needs, often resulting in losses for lenders or borrowers. In this paper, we introduce an adaptive, data-driven protocol for DeFi borrowing and lending. Our approach includes a high-frequency controller that dynamically adjusts interest rates to maintain market stability and competitiveness with external markets. Unlike traditional protocols, which rely on user reactions and often adjust slowly, our controller uses a learning-based algorithm to quickly find optimal interest rates, reducing the opportunity cost for users during periods of misalignment with external rates. Additionally, we use a low-frequency planner that analyzes user behavior to set an optimal over-collateralization ratio, balancing risk reduction with profit maximization over the long term. This dual approach is essential for adaptive markets: the short-term component maintains market stability, preventing exploitation, while the long-term planner optimizes market parameters to enhance profitability and reduce risks. We provide theoretical guarantees on the convergence rates and adversarial robustness of the short-term component and the long-term effectiveness of our protocol. Empirical validation confirms our protocol's theoretical benefits.
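Structurally, the protocol pairs a fast loop with a slow one. The skeleton below is a hypothetical illustration (the market, controller, and planner interfaces are assumed, not from the paper) of how the two timescales interleave:

```python
# Hypothetical two-timescale loop mirroring the paper's fast/slow split.
FAST_EVERY = 1      # blocks between interest-rate updates (thinking fast)
SLOW_EVERY = 7200   # blocks between collateral updates, roughly daily (thinking slow)

def run(market, rate_controller, collateral_planner, horizon):
    """Drive a market with a high-frequency rate controller and a
    low-frequency over-collateralization planner."""
    for t in range(horizon):
        state = market.observe()  # e.g., utilization, external rate, volatility
        if t % FAST_EVERY == 0:
            market.set_rate(rate_controller.step(state))
        if t % SLOW_EVERY == 0:
            market.set_collateral(collateral_planner.plan(state))
```

An RLS-style estimator like the sketch shown earlier would be one natural ingredient for rate_controller.step.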


Adaptive Curves for Optimally Efficient Market Making

June 2024 · 45 Reads

Automated Market Makers (AMMs) are essential in Decentralized Finance (DeFi) as they match liquidity supply with demand. They function through liquidity providers (LPs) who deposit assets into liquidity pools. However, the asset trading prices in these pools often trail behind those in more dynamic, centralized exchanges, leading to potential arbitrage losses for LPs. This issue is tackled by adapting market maker bonding curves to trader behavior, based on the classical market microstructure model of Glosten and Milgrom. Our approach ensures a zero-profit condition for the market maker's prices. We derive the differential equation that an optimal adaptive curve should follow to minimize arbitrage losses while remaining competitive. Solutions to this optimality equation are obtained for standard Gaussian and Lognormal price models using Kalman filtering. A key feature of our method is its ability to estimate the external market price without relying on price or loss oracles. We also provide an equivalent differential equation for the implied dynamics of canonical static bonding curves and establish conditions for their optimality. Our algorithms demonstrate robustness to changing market conditions and adversarial perturbations, and we offer an on-chain implementation using Uniswap v4 alongside off-chain AI co-processors.
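For the Gaussian price model, Kalman filtering reduces to a scalar filter. The following is a textbook sketch under a random-walk price assumption with noisy trade-implied observations (both modeling assumptions here, not lifted from the paper):

```python
class PriceKalman:
    """Scalar Kalman filter for a Gaussian random-walk external price,
    observed indirectly through noisy trade flow. A generic sketch of
    the estimation step, not the paper's derivation."""
    def __init__(self, p0, var0, q, r):
        self.p, self.var = p0, var0   # price estimate and its variance
        self.q, self.r = q, r         # process and observation noise variances

    def step(self, z):
        """Predict the random-walk price, then correct with observation z
        (e.g., a price implied by the latest informed trade)."""
        var_pred = self.var + self.q          # predict
        k = var_pred / (var_pred + self.r)    # Kalman gain
        self.p += k * (z - self.p)            # correct
        self.var = (1 - k) * var_pred
        return self.p
```

The filtered price then parameterizes the bonding curve, letting quotes track the external market without a price or loss oracle.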



Player-Replaceability and Forensic Support Are Two Sides of the Same (Crypto) Coin

December 2023 · 8 Reads

Lecture Notes in Computer Science

Player-replaceability is a property of a blockchain protocol that ensures every step of the protocol is executed by an unpredictably random (small) set of players; this guarantees security against a fully adaptive adversary and is a crucial property in building permissionless blockchains. Forensic support is a property of a blockchain protocol that provides the ability, with cryptographic integrity, to identify malicious parties when there is a safety violation; this provides the ability to enforce punishments for adversarial behavior and is a crucial component of incentive mechanism designs for blockchains. Player-replaceability and strong forensic support are both desirable properties, yet no existing blockchain protocol has both. Our main result is to construct a new BFT protocol that is player-replaceable and has maximum forensic support. The key invention is the notion of a "transition certificate", without which we show that natural adaptations of extant BFT and longest chain protocols do not lead to the desired goal of simultaneous player-replaceability and forensic support. (The full version of the paper is available at https://eprint.iacr.org/2022/1513.)
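As a concrete picture of forensic support: a safety violation yields conflicting signed messages, which are themselves cryptographic evidence. The toy sketch below (message format hypothetical; signature verification assumed to happen upstream) flags equivocating voters; the paper's transition certificates address the harder player-replaceable setting:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedVote:
    voter: str        # voter identity / public key
    view: int         # protocol round
    block_hash: str   # block the voter endorsed
    signature: bytes  # assumed verified against the voter's key upstream

def find_equivocators(votes):
    """Return voters who signed two different blocks in the same view:
    irrefutable evidence usable to enforce punishment. A toy illustration
    of forensic support, not the paper's protocol."""
    seen, guilty = {}, set()
    for v in votes:
        key = (v.voter, v.view)
        if key in seen and seen[key].block_hash != v.block_hash:
            guilty.add(v.voter)  # two conflicting signed votes = proof
        seen.setdefault(key, v)
    return guilty
```

Here the evidence is irrefutable because both conflicting votes carry the voter's own signature, so no honest node can be framed.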


Citations (47)


... These blockchains are developed by their respective projects (e.g., Ethereum, BSC, etc. [2], [46]) and operated by their own blockchain nodes, which handle transaction processing, consensus within the blockchain, and other tasks. IntegrateX also features m (variable) relayers responsible for trustless cross-chain communication between blockchains, similar to many mainstream interoperability protocols [24], [17], [47], [48]. Each relayer has its own public-private key pair and signs cross-chain transactions. ...

Reference:

Atomic Smart Contract Interoperability with High Efficiency via Cross-Chain Integrated Execution
TrustBoost: Boosting Trust among Interoperable Blockchains
  • Citing Conference Paper
  • November 2023

... • We use our general results to characterize when truthful signal elicitation is possible in location signal networks and bandwidth signal networks. These two DePIN categories are actively used in practice (see, e.g., Sheng et al. 2024b; Sheng et al. 2024a), and our results imply crucial design considerations for setting them up, as well as how signal elicitation should be conducted once these networks are deployed. ...

Proof of Backhaul: Trustfree Measurement of Broadband Bandwidth
  • Citing Conference Paper
  • January 2024

... Code-adaptive HARQ techniques [13] optimize the encoding and modulation of retransmitted packets, exploiting the additional information from the rich feedback to improve HARQ energy-efficiency even if the feedback is outdated or inaccurate [14], at the cost of significant computational load on the transmitter. Recently, a rich feedback protocol was proposed [15], which uses the decoded message as feedback and a compressed error vector in the retransmissions. This allows the transmitter to maximize the probability of decoding in each round and yields a trade-off between spectral efficiency, reliability, and latency. ...

Compressed Error HARQ: Feedback Communication on Noise-Asymmetric Channels

... Another fundamental pillar of the proposed solution is the integration with the blockchain to automate the notarization of successfully executed events in the certified maintenance process. We chose to use blockchains [10] based on a Proof-of-Stake [22] consensus algorithm, which does not significantly impact processing times. In fact, the use of a blockchain is advantageous in all those areas where the immutability of information and transparency are to be guaranteed. ...

Economics of Proof-of-Stake Payment Systems
  • Citing Article
  • January 2023

SSRN Electronic Journal

... There is a plethora of works that elaborate on improving the legacy LTE/5G infrastructure [20,19,42,11,9,33,35,26], such as the core, radio access network decentralization, and device-to-device communications. Additionally, the design of drone-assisted cellular networks has been explored [53,52,12,7,50,28] to further enhance cellular networks, along with specialized testbeds for experimentation [29,13]. ...

Trust-free service measurement and payments for decentralized cellular networks
  • Citing Conference Paper
  • November 2022

... In [15], an improved DPoS algorithm is proposed that provides higher throughput and a two-layer blockchain structure for improved consensus efficiency and scalability. In addition, there are many approaches similar to [16] that combine multiple traditional consensus mechanisms to create new ones. However, these classical improvement schemes, tied to a static node environment, run into problems in systems where node communication quality changes frequently. ...

Minotaur: Multi-Resource Blockchain Consensus
  • Citing Conference Paper
  • November 2022

... Second, bribery attacks are a concern: an adversary, even without knowing whom to corrupt, can advertise payouts for certain verifiable malicious behaviors [9] (e.g., the attacker can create a smart contract that pays participants to censor specific transactions). An adaptive, bribing adversary is one of the strongest threat models and is discussed further in [84]. ...

Proof-of-Stake Longest Chain Protocols: Security vs Predictability
  • Citing Conference Paper
  • November 2022

... Only a handful of blockchain sharding studies allow corrupted shards, with various limitations. Free2Shard [39] allows corrupted shards and preserves the system's security via a network-wide consensus. However, it is based on the assumption of a global synchronous network environment. ...

Free2Shard: Adversary-resistant Distributed Resource Allocation for Blockchains
  • Citing Article
  • June 2022

ACM SIGMETRICS Performance Evaluation Review

... This architecture is designed to increase the blockchain's throughput and the proportion of non-malicious nodes, thus enhancing its resistance to attacks. Free2Shard proposes a dynamic self-allocation strategy to maintain a favorable ratio of honest to hostile nodes in each shard [13]. PolyShard [14] uses polynomial coding to address scalability, security, and decentralization challenges. ...

Free2Shard: Adversary-resistant Distributed Resource Allocation for Blockchains
  • Citing Article
  • February 2022

Proceedings of the ACM on Measurement and Analysis of Computing Systems