Warren He's research while affiliated with University of California, Berkeley and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (15)
Although smart contracts inherit the availability and other security assurances of the blockchain, they are impeded by lack of confidentiality and poor performance. We present Ekiden, a system that aims to close these critical gaps by combining the blockchain with trusted execution environments.
Deep reinforcement learning (DRL) has achieved great success in various applications. However, recent studies show that machine learning models are vulnerable to adversarial attacks. DRL models have been attacked by adding perturbations to observations. While such observation-based attacks are only one aspect of potential attacks on DRL, other forms...
Existing black-box attacks on deep neural networks (DNNs) have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class pro...
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perce...
Smart contracts are applications that execute on blockchains. Today they manage billions of dollars in value and motivate visionary plans for pervasive blockchain deployment. While smart contracts inherit the availability and other security assurances of blockchains, they are impeded by blockchains' lack of confidentiality and poor perform...
An important line of privacy research is investigating the design of systems for secure input and output (I/O) within Internet browsers. These systems would allow for users’ information to be encrypted and decrypted by the browser, and the specific web applications will only have access to the users’ information in encrypted form. The state-of-the-...
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perce...
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the $\mathcal{L}_p$ distance for penalizing perturbations. Researchers have explored different defense methods to defend against such ad...
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can "transfer" to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's cl...
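The core idea in the abstract above, estimating gradients through query access to the target model's class probabilities, can be illustrated with a small sketch. This is not the paper's implementation; `query_probs` is a hypothetical stand-in for the target model's probability API, and the attack step shown is a generic FGSM-style update using the estimated gradient.

```python
import numpy as np

def estimate_gradient(query_probs, x, target_class, delta=1e-4):
    """Estimate d log p(target_class) / dx by two-sided finite differences,
    using only query access to the model's class probabilities."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = delta
        p_plus = query_probs(x + e)[target_class]
        p_minus = query_probs(x - e)[target_class]
        grad.flat[i] = (np.log(p_plus) - np.log(p_minus)) / (2 * delta)
    return grad

def fgsm_step(query_probs, x, true_class, eps=0.1):
    """One untargeted FGSM-style step: move against the estimated gradient
    of log p(true_class), then clip to the valid input range [0, 1]."""
    g = estimate_gradient(query_probs, x, true_class)
    return np.clip(x - eps * np.sign(g), 0.0, 1.0)
```

The two-sided difference costs two queries per input coordinate, which is why query-efficient variants (e.g., random grouping of coordinates) matter in practice.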
Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which researchers have shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of these are rece...
In the paper, we present designs for multiple blockchain consensus primitives and a novel blockchain system, all based on the use of trusted execution environments (TEEs), such as Intel SGX-enabled CPUs. First, we show how using TEEs for existing proof of work schemes can make mining equitably distributed by preventing the use of ASICs. Next, we ex...
A number of recent research and industry proposals discussed using encrypted data in web applications. We first present a systematization of the design space of web applications and highlight the advantages and limitations of current proposals. Next, we present ShadowCrypt, a previously unexplored design point that enables encrypted input/output wi...
Rich client-side applications written in HTML5 proliferate on diverse platforms, access sensitive data, and need to maintain data-confinement invariants. Applications currently enforce these invariants using implicit, ad-hoc mechanisms. We propose a new primitive called a data-confined sandbox or DCS. A DCS enables complete mediation of communicati...
The complexity of Android's message-passing system has led to numerous vulnerabilities in third-party applications. Many of these vulnerabilities are a result of developers confusing inter-application and intra-application communication mechanisms. Consequently, we propose modifications to the Android platform to detect and protect inter-applicatio...
Citations
... The one-server setting matches the typical MLaaS scenario in which the model is in the cloud and the customers send their data over a secure channel. Several attempts to secure the one-server setting by leveraging a TEE have been introduced [20,23,31,45,57,64]. Slalom is the first protocol to apply masking to outsource the expensive linear operations of ANN inference from the TEE to a GPU. ...
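The masking idea the snippet attributes to Slalom can be sketched as follows: the TEE blinds a private activation x with a one-time random mask r, the untrusted GPU computes the expensive linear map on x + r, and the TEE unblinds with a precomputed W @ r. (Slalom itself operates over a finite field with quantized weights; plain floats here are a simplification for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # linear-layer weights (public)
x = rng.standard_normal(8)        # private activation held inside the TEE

r = rng.standard_normal(8)        # one-time mask, never leaves the TEE
Wr = W @ r                        # precomputed offline inside the TEE

blinded = x + r                   # only this blinded value leaves the TEE
gpu_result = W @ blinded          # untrusted GPU does the heavy linear work
y = gpu_result - Wr               # TEE unblinds: y equals W @ x
```

Because blinded = x + r with r uniform and secret, the GPU learns nothing about x, yet the TEE only performs cheap vector additions and subtractions.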
... Brandenburger et al. [4,5] used a Trusted Execution Environment (TEE) to protect the privacy of chaincode data computation from potentially untrusted peers. Cheng et al. [6] proposed the Ekiden model, which adopts the mechanism of separating computation and consensus, and computing nodes complete the computation of private data in an offline trusted execution environment. Fan et al. [7] placed the sensitive private-information-processing steps of the blockchain transaction process in the Intel Software Guard Extensions (SGX) security area for protection. ...
... Remarkably, though GAN offers many benefits, there are still problems with this technique, one of which is the instability of traditional GAN training. Hence, other GAN variants were introduced, such as Wasserstein GAN (WGAN) [19], WGAN with gradient penalty (WGAN-GP) [20] and its version with the two-timescale update rule (WGAN-GP TTUR) [21], AdvGAN [22], etc. Since these GAN variants differ in performance, we consider all of the above variants in designing our proposed framework for generating adversarial samples. ...
... Blockchain cost efficiency is measured by the amount of cryptocurrencies (e.g., Ether) per computing task. Existing research improves the blockchain cost efficiency by extending the networks with off-chain services (i.e., so-called layer-two solutions [1], [23], [11], [13]) or building on-chain/off-chain middleware [30], [29], [20], [28]. This line of research, while introducing new services to the networks, does not affect other nodes beyond what a newly joined node does; we don't consider them interfering. ...
... The first work [27] embedded additional DOM subtrees inside the original DOM so that web data can be processed in the encapsulated environment. Follow-up works explored vulnerabilities of the prior work [22] and designed browser extensions to provide isolated memory for web data processing [40,59]. However, these approaches still incur high engineering cost, because the iteratively updated data fields on each webpage require developing new frames and APIs. ...
... This method is further extended by various iterative strategies, e.g., the basic iterative method (Kurakin et al., 2017), DeepFool (Moosavi-Dezfooli et al., 2016), momentum (Dong et al., 2018) and Hamming space (Yang et al., 2018b). Score-based attacks, on the other hand, rely on searching the input space, considering that the distorted images can largely affect the prediction score (Yan et al., 2020b, 2021; Cherepanova et al., 2021; Xiao et al., 2018). The Jacobian-based saliency map attack greedily modifies the input instance (Papernot et al., 2016b). ...
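The basic iterative method mentioned in the snippet above is simple enough to sketch directly. `grad_loss` here is a hypothetical callable returning the loss gradient with respect to the input for the victim model; it stands in for whatever white-box gradient oracle the attacker has.

```python
import numpy as np

def bim_attack(grad_loss, x, eps=0.03, alpha=0.01, steps=10):
    """Basic iterative method: repeated FGSM steps of size alpha,
    projected back into the L-infinity ball of radius eps around x,
    then clipped to the valid pixel range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv))  # FGSM step
        x_adv = np.clip(x_adv, x - eps, x + eps)           # stay in eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                   # valid inputs
    return x_adv
```

The momentum and DeepFool variants cited above differ mainly in how the per-step direction is chosen; the project-and-clip structure stays the same.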
... Facing the poor samples caused by weight clipping in WGAN, Gulrajani proposed the gradient penalty (WGAN-GP) algorithm [23], which penalizes the norm of the gradient of the critic with respect to its input. Moreover, to produce adversarial examples with high quality and efficiency, Xiao proposed AdvGAN [24] to generate adversarial examples with GANs in both semi-white-box and black-box attack settings. ...
... Apart from these, other common attacks include the integrity attack, an efficient resource assault that is identified as normal traffic due to false negatives [134]. ...
... Gu and Rigazio [67] developed a model in 2015 that can withstand adversarial examples, but they were still unable to eliminate their impact. Warren He [68] made an attempt with five defenses for adversarial examples, but even ensembles of weak defenses were shown to be ineffective. In 2017, Xie [69] demonstrated that the impact of adversarial examples is diminished by random scaling. ...
Reference: New Cognitive Deep-Learning CAPTCHA
... Proof-of-Luck (PoL) [63] is an extension of PoET by using a TEE to require that a fixed amount of time must pass, during which participants may produce new blocks. PoL includes a parameter called luck, a random value generated by the TEE. ...
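The leader-election idea described in the Proof-of-Luck snippet can be illustrated with a toy sketch: each participant's TEE draws a uniform random "luck" value for its proposed block, and the network prefers the luckiest block. `random.Random` here is a stand-in for the TEE's trusted randomness (an assumption), and the mandatory waiting period the TEE enforces is omitted.

```python
import random

def propose_block(participant_id, rng):
    """A participant's TEE attaches a uniform luck value in [0, 1)
    to its proposed block for this round."""
    return {"proposer": participant_id, "luck": rng.random()}

def choose_winner(blocks):
    """The network prefers (and extends) the luckiest proposed block."""
    return max(blocks, key=lambda b: b["luck"])
```

In the full protocol, chains are compared by cumulative luck rather than a single round's value, which is what makes the scheme behave like an energy-free lottery.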