Zhenyu Liao

  • Boston University

About

14 Publications
2,224 Reads
268 Citations
Current institution: Boston University

Publications (14)
Preprint
Image manipulation has attracted a lot of interest due to its wide range of applications. Prior work modifies images either through low-level manipulation, such as image inpainting or manual edits via paintbrushes and scribbles, or through high-level manipulation, employing deep generative networks to output an image conditioned on high-level sem...
Preprint
Full-text available
Improving sample efficiency of reinforcement learning algorithms requires effective exploration. Following the principle of $\textit{optimism in the face of uncertainty}$, we train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework. However, this introduces extra di...
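A minimal sketch of the idea in this abstract, assuming twin critics and a deterministic exploration head; the dimensions, the bonus coefficient beta, and the disagreement-based bound below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

# Illustrative sketch: twin critics give an approximate upper confidence bound
#   Q_UCB = mean(Q1, Q2) + beta * |Q1 - Q2| / 2,
# and a separate exploration policy is trained to maximize that bound.
obs_dim, act_dim, beta = 8, 2, 1.0

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

critic1, critic2 = mlp(obs_dim + act_dim, 1), mlp(obs_dim + act_dim, 1)
explore_policy = mlp(obs_dim, act_dim)            # deterministic head for brevity
opt = torch.optim.Adam(explore_policy.parameters(), lr=3e-4)

obs = torch.randn(32, obs_dim)                    # stand-in for a replay batch
act = torch.tanh(explore_policy(obs))
q_in = torch.cat([obs, act], dim=-1)
q1, q2 = critic1(q_in), critic2(q_in)

# Optimism in the face of uncertainty: mean value plus a disagreement bonus.
q_ucb = 0.5 * (q1 + q2) + beta * 0.5 * (q1 - q2).abs()
loss = -q_ucb.mean()                              # ascend the upper bound

opt.zero_grad()
loss.backward()
opt.step()
```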
Preprint
Full-text available
Recent advances in generative models and adversarial training have enabled artificially generating artworks in various artistic styles. It is highly desirable to gain more control over the generated style in practice. However, artistic styles are unlike object categories -- there is a continuous spectrum of styles distinguished by subtle differenc...
Chapter
This paper focuses on learning transferable adversarial examples specifically against defense models (models designed to defend against adversarial attacks). In particular, we show that a simple universal perturbation can fool a series of state-of-the-art defenses. Adversarial examples generated by existing attacks are generally hard to transfer to defense models....
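A hedged sketch of what a universal-perturbation attack can look like; the classifier, data, epsilon, and step size below are placeholders, not the defenses or attack studied in the paper:

```python
import torch
import torch.nn as nn

# One perturbation delta shared across all images, updated by signed gradient
# ascent on the classification loss and kept inside an L_inf ball of radius eps.
eps, step, iters = 8 / 255, 1 / 255, 10
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # dummy classifier
images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))
loss_fn = nn.CrossEntropyLoss()

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
for _ in range(iters):
    loss = loss_fn(model((images + delta).clamp(0, 1)), labels)
    loss.backward()
    with torch.no_grad():
        delta += step * delta.grad.sign()          # same update for every image
        delta.clamp_(-eps, eps)
        delta.grad.zero_()
```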
Preprint
Quantization reduces computation costs of neural networks but suffers from performance degradation. Is this accuracy drop due to reduced capacity, or to inefficient training during the quantization procedure? After looking into the gradient propagation process of neural networks by viewing the weights and intermediate activations as random variab...
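For background only, a minimal sketch of uniform quantization with a straight-through estimator, one common way to keep gradients flowing through the rounding step; the paper's actual analysis and training scheme may differ:

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """k-bit uniform quantization on [0, 1] with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, k):
        levels = 2 ** k - 1
        return torch.round(x.clamp(0, 1) * levels) / levels

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None        # identity gradient through the rounding step

x = torch.rand(4, requires_grad=True)
y = QuantizeSTE.apply(x, 4)
y.sum().backward()
print(x.grad)                        # all ones: the gradient passed straight through
```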
Preprint
Deep neural networks with adaptive configurations have gained increasing attention because they can be deployed instantly and flexibly on platforms with different resource budgets. In this paper, we investigate a novel option to achieve this goal by enabling adaptive bit-widths of weights and activations in the model. We first examine the be...
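A minimal sketch of the adaptive bit-width idea, assuming a simple symmetric uniform quantizer and a runtime-selected precision; the layer name, quantizer, and bit-widths here are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize(w, bits):
    """Symmetric uniform quantization of a weight tensor to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8)
    return torch.round(w / scale * qmax) / qmax * scale

class AdaptiveBitLinear(nn.Linear):
    """One set of full-precision weights, quantized to a bit-width chosen at run time."""

    def forward(self, x, bits=8):
        return F.linear(x, quantize(self.weight, bits), self.bias)

layer = AdaptiveBitLinear(16, 4)
x = torch.randn(2, 16)
for bits in (8, 4, 2):               # the same weights serve several deployment budgets
    print(bits, layer(x, bits=bits))
```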
Preprint
Full-text available
This paper considers the fundamental problem of learning a complete (orthogonal) dictionary from samples of sparsely generated signals. Most existing methods solve the dictionary (and sparse representations) based on heuristic algorithms, usually without theoretical guarantees for either optimality or complexity. The recent $\ell^1$-minimization ba...
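For context, the $\ell^1$-minimization formulation the abstract alludes to is commonly written as follows, a sketch under the usual model $Y = A_0 X_0$ with an orthogonal dictionary $A_0$ and sparse coefficients $X_0$; the notation is assumed here rather than quoted from the paper. One dictionary atom is recovered at a time by promoting sparsity of $q^{\top} Y$ over the sphere:

$$\min_{q \in \mathbb{R}^n,\ \|q\|_2 = 1} \ \frac{1}{p} \sum_{j=1}^{p} \bigl| q^{\top} y_j \bigr| .$$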
Preprint
Full-text available
This paper focuses on learning transferable adversarial examples specifically against defense models (models designed to defend against adversarial attacks). In particular, we show that a simple universal perturbation can fool a series of state-of-the-art defenses. Adversarial examples generated by existing attacks are generally hard to transfer to defense models....
Article
Full-text available
In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous constructions required $\Omega(n^4)$ running time [BSS14, Zou12], our sparsification routine can be implemented in almost-quadratic running time $O(n^{2+\varepsilon})$. The fundamental conceptual novelty...
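For context, the standard guarantee behind a linear-sized spectral sparsifier in the [BSS14] sense (notation assumed here): a reweighted subgraph $H$ of $G$ with $O(n/\varepsilon^2)$ edges such that, for all $x \in \mathbb{R}^n$,

$$(1 - \varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1 + \varepsilon)\, x^{\top} L_G\, x,$$

where $L_G$ and $L_H$ are the graph Laplacians of $G$ and $H$.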
Article
Full-text available
We study two fundamental problems in computational geometry: finding the maximum inscribed ball (MaxIB) inside a polytope defined by $m$ hyperplanes in a $d$-dimensional space, and finding the minimum enclosing ball (MinEB) of a set of $n$ points in a $d$-dimensional space. We translate both geometry problems into purely algebraic optimization ques...
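For reference, both problems admit standard algebraic formulations (notation assumed here, not quoted from the paper). For a polytope $\{x : a_i^{\top} x \le b_i,\ i = 1, \dots, m\}$, MaxIB asks for the largest ball that fits inside,

$$\max_{x,\, r}\ r \quad \text{s.t.} \quad a_i^{\top} x + r \|a_i\|_2 \le b_i \ \ \text{for all } i,$$

and MinEB for points $p_1, \dots, p_n$ asks for the center minimizing the largest distance,

$$\min_{c}\ \max_{1 \le i \le n} \|c - p_i\|_2 .$$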
Article
The definitions of (∈,∈∨q(λ,μ))-fuzzy left (resp. right) h-ideals of hemirings, generalized fuzzy left (resp. right) h-ideals of hemirings, prime (semiprime) (∈,∈∨q(λ,μ))-left (resp. right) h-ideals of hemirings and prime (semiprime) generalized fuzzy left (resp. right) h-ideals of hemirings are given. Meanwhile, some fundamental properties of them...
