Motasem Alfarra's research while affiliated with King Abdullah University of Science and Technology and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
Publications (26)
Current evaluations of Continual Learning (CL) methods typically assume that there is no constraint on training time and computation. This is an unrealistic assumption for any real-world setting, which motivates us to propose: a practical real-time evaluation of continual learning, in which the stream does not wait for the model to complete trainin...
Modern machine learning pipelines are limited due to data availability, storage quotas, privacy regulations, and expensive annotation processes. These constraints make it difficult or impossible to maintain a large-scale model trained on growing annotation sets. Continual learning directly approaches this problem, with the ultimate goal of devising...
Continual Learning is a step towards lifelong intelligence where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup with clear task boundaries and unlimited computational budget. This work explores On...
This work evaluates the robustness of quality measures of generative models such as Inception Score (IS) and Fréchet Inception Distance (FID). Analogous to the vulnerability of deep models against a variety of adversarial attacks, we show that such metrics can also be manipulated by additive pixel perturbations. Our experiments indicate that one ca...
Recent progress in empirical and certified robustness promises to deliver reliable and deployable Deep Neural Networks (DNNs). Despite that success, most existing evaluations of DNN robustness have been done on images sampled from the same distribution that the model was trained on. Yet, in the real world, DNNs may be deployed in dynamic environmen...
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to characterize the decision boundaries of a simple network of the form (Affine, ReLU, Affine). Our main finding...
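As a concrete illustration of the max-plus ("tropical") semiring underlying this line of analysis, the sketch below shows the standard identity that ReLU is tropical addition with the multiplicative identity 0 (the one-dimensional setting and function names are illustrative, not taken from the paper):

```python
def t_add(a, b):
    # tropical (max-plus) "addition" is the ordinary maximum
    return max(a, b)

def t_mul(a, b):
    # tropical "multiplication" is ordinary addition
    return a + b

def relu(x):
    # ReLU(x) = max(x, 0) is tropical addition of x with 0,
    # which is why (Affine, ReLU, Affine) networks compute
    # differences of tropical polynomials
    return t_add(x, 0.0)
```

This correspondence is what lets the decision boundary of such a network be described via the geometry of the associated tropical polynomials' Newton polytopes.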
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input p...
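A minimal sketch of the anti-adversary idea for a hand-rolled linear scorer (the sign-gradient update, toy weights, and function names are assumptions for illustration; the paper applies the layer to deep networks):

```python
def anti_adversary(w, x, alpha=0.1, steps=3):
    """Nudge x to *increase* the confidence of the currently
    predicted class of a linear scorer logit = w . x -- the
    reverse of an adversarial attack's confidence minimization."""
    logit = sum(wi * xi for wi, xi in zip(w, x))
    sgn = 1.0 if logit >= 0 else -1.0  # which class we defend
    for _ in range(steps):
        # sign-gradient ascent on the predicted class's logit
        x = [xi + alpha * sgn * (1.0 if wi >= 0 else -1.0)
             for wi, xi in zip(w, x)]
    return x
```

Because the perturbation pushes in the opposite direction of an attacker's confidence-minimizing steps, the attacker must first "undo" this margin before it can flip the prediction.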
Deep neural networks are vulnerable to input deformations in the form of vector fields of pixel displacements and to other parameterized geometric deformations e.g. translations, rotations, etc. Current input deformation certification methods either (i) do not scale to deep networks on large input datasets, or (ii) can only certify a specific class...
Federated learning has recently gained significant attention and popularity due to its effectiveness in training machine learning models on distributed data privately. However, as in the single-node supervised learning setup, models trained in federated learning suffer from vulnerability to imperceptible input transformations known as adversarial a...
3D computer vision models are commonly used in security-critical applications such as autonomous driving and surgical robotics. Emerging concerns over the robustness of these models against real-world deformations must be addressed practically and reliably. In this work, we propose 3DeformRS, a method to certify the robustness of point cloud Deep N...
Deep Neural Networks (DNNs) lack robustness against imperceptible perturbations to their input. Face Recognition Models (FRMs) based on DNNs inherit this vulnerability. We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input. Our methodology causes FRMs to malfunction by designi...
This work evaluates the robustness of quality measures of generative models such as Inception Score (IS) and Fréchet Inception Distance (FID). Analogous to the vulnerability of deep models against a variety of adversarial attacks, we show that such metrics can also be manipulated by additive pixel perturbations. Our experiments indicate that one...
Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks. In this work, we study how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against such attacks. While transforming the input data, both at train and test times, is known to enhance model perform...
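The core TTE mechanism of averaging class scores over transformed copies of the input can be sketched as follows (the toy model and the horizontal-flip transform are illustrative stand-ins, not the paper's architecture or transform set):

```python
def hflip(img):
    # mirror each row of a 2D "image" left-to-right
    return [row[::-1] for row in img]

def tte_predict(model, img, transforms):
    # run the model on every transformed copy of the input and
    # average the per-class scores before taking the argmax
    outs = [model(t(img)) for t in transforms]
    n_classes = len(outs[0])
    avg = [sum(o[c] for o in outs) / len(outs) for c in range(n_classes)]
    return avg.index(max(avg))

def toy_model(img):
    # hypothetical 2-class scorer, sensitive to pixel positions
    return [img[0][0] + 1.0, img[0][-1]]

identity = lambda img: img
pred = tte_predict(toy_model, [[3.0, 1.0]], [identity, hflip])
```

Averaging over transforms smooths out position-specific sensitivities of the scorer, which is the intuition behind using TTE as a defense.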
Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale. All prior art on randomized smoothing has focused on isotropic $\ell_p$ certification, which has the advantage of yielding certificates that can be easily compared among isotropic methods via $\ell_p$-norm radius. H...
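For context, here is a minimal Monte Carlo sketch of the standard isotropic Gaussian smoothing certificate of Cohen et al. (2019), the baseline that this work generalizes beyond the isotropic case (the toy base classifier and sample budget are assumptions for illustration):

```python
import random
from statistics import NormalDist

def base_classifier(x):
    # purely illustrative 2-class base classifier on R^2
    return 0 if x[0] + x[1] < 1.0 else 1

def certify(f, x, sigma, n=2000, seed=0):
    # Monte Carlo estimate of the smoothed classifier
    #   g(x) = argmax_c P(f(x + N(0, sigma^2 I)) = c)
    # with the isotropic l2 certificate R = sigma * Phi^{-1}(p_A)
    rng = random.Random(seed)
    counts = [0, 0]
    for _ in range(n):
        counts[f([xi + rng.gauss(0.0, sigma) for xi in x])] += 1
    top = counts.index(max(counts))
    p_a = min(counts[top] / n, 1.0 - 1.0 / n)  # clamp away from 1.0
    radius = sigma * NormalDist().inv_cdf(p_a) if p_a > 0.5 else 0.0
    return top, radius
```

The certificate is an l2 ball of radius R around x inside which the smoothed prediction provably cannot change; anisotropic smoothing replaces the single sigma with a per-coordinate covariance, yielding region certificates that are no longer simple l2 balls.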
Deep neural networks are vulnerable to input deformations in the form of vector fields of pixel displacements and to other parameterized geometric deformations e.g. translations, rotations, etc. Current input deformation certification methods either (i) do not scale to deep networks on large input datasets, or (ii) can only certify a specific class...
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input p...
Randomized smoothing is a recent technique that achieves state-of-the-art performance in training certifiably robust deep neural networks. While the smoothing family of distributions is often connected to the choice of the norm used for certification, the parameters of the distributions are always set as global hyperparameters independent of the input...
We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore the effect of replacing the first layers of various deep architectures with Gabor layers (i.e. convolutional layers with filters that are based on learnable Gabor parameters) on robustness against adversarial attacks. We observe that ar...
This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness. Recent works observed that Adversarial Training leads to robust models, whose learnt features appear to correlate with human perception. Inspired by this connection from robustness to semantics, we study the compleme...
Recent advances in the theoretical understanding of SGD (Qian et al., 2019) led to a formula for the optimal mini-batch size minimizing the number of effective data passes, i.e., the number of iterations times the mini-batch size. However, this formula is of no practical value as it depends on the knowledge of the variance of the stochastic gradient...
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to characterize the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine). Our main f...
This paper studies distributed multi-relay selection in energy-harvesting cooperative wireless networks and models it as an Indian Buffet Game (IBG). Particularly, the IBG is utilized to model the multi-relay selection decisions of network source nodes, while accounting for negative network externality. Two scenarios are considered: (1) constrained...
Unfortunately, the original publication contains errors.
This work takes a step towards investigating the benefits of merging classical vision techniques with deep learning models. Formally, we explore the effect of replacing the first layers of neural network architectures with convolutional layers that are based on Gabor filters with learnable parameters. As a first result, we observe that architecture...
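A minimal sketch of the parameterized Gabor filters such layers learn (the kernel below is the textbook 2D Gabor form; parameter values and function names are illustrative, not the paper's implementation):

```python
import math

def gabor_kernel(ksize, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Textbook 2D Gabor filter: a Gaussian envelope (width sigma,
    aspect ratio gamma) modulating a sinusoid of wavelength lambd,
    rotated by theta, with phase offset psi. In a Gabor layer these
    scalars are the learnable parameters instead of free weights."""
    half = ksize // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2)
                                / (2.0 * sigma ** 2))
            carrier = math.cos(2.0 * math.pi * xr / lambd + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

Constraining first-layer filters to this low-dimensional family is what gives the layer far fewer free parameters than an unconstrained convolution of the same size.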
Citations
... Due to the inherent randomness, both randomized smoothing and PointGuard only have probabilistic guarantees. [26,28] proposed 3DCertify and 3DeformRS to certify robustness of point cloud classification against common 3D transformations, e.g., rotations. However, neither method is applicable to point addition (or deletion, modification, or perturbation) attacks, which can arbitrarily manipulate points. ...
... The duality between CPWL functions computed by neural networks and Newton polytopes inspired by tropical geometry has been used in several other works about neural networks before (Maragos et al., 2021), for example to analyze the shape of decision boundaries (Alfarra et al., 2020;Zhang et al., 2018) or to count and bound the number of linear pieces (Charisopoulos & Maragos, 2018;Hertrich et al., 2021;Montúfar et al., 2022). ...
... Our approach builds upon the theoretical background of Randomized Smoothing (RS) [10]. Specifically, we build 3DeformRS by leveraging DeformRS [1], an RS reformulation that generalizes from pixel-intensity perturbations to vector-field deformations, and specializing it to point cloud data. In contrast to previous approaches, our work considers spatial deformations on any point cloud DNN, providing efficiency and practicality. ...
... Stochastic defences, even when averaging multiple gradient samples with EoT, tend to create a rough loss landscape that white-box adversaries find difficult to navigate. A second, and perhaps more interesting finding, is that this property is not exclusive to stochastic defences; there exist non-stochastic adversarial defences that have the same effect [4]. ...
... MEMO augments the single sample and adapts the model with the marginal entropy of those augmented samples. Test time transformation ensembling (TTE) (Pérez et al., 2021) proposes to augment the image with a fixed set of transformations and ensembles the outputs through averaging. The only method that does not update the model is that of Mao et al. (2021b), which modifies the pixels of adversarial samples to minimize a contrastive loss, and is not tested under distribution shift. ...
Reference: Self-Supervised Convolutional Visual Prompts
... From three biological repetitions (each containing at least two technical replicates), we identified 143,211 in-focus single cells (red channel). This image dataset was then used to train a Gabor-based convolutional neural network (100, 101). Specifically, 116,651 of the red channel images (mCherry landmarks stably expressed in MCF-7 cells), including images of four subcellular locations: ER (ChCb5), mitochondria (ChActA), MAMs (ChPTDSS1), and diffuse expression (mCardinal), were used as the training dataset. ...
... An iterative interference alignment algorithm is proposed to optimize the network energy efficiency. The authors in [33] investigated a distributed multi-relay selection problem for cooperative wireless networks with EH relays. This formulation is an MINLP problem, and game theory is used to address this multi-relay selection problem. ...