Nils Lukas
University of Waterloo | UWaterloo
Master of Science
About
Publications: 15
Reads: 2,088
Citations: 345
Publications (15)
Large Language Models (LLMs) can be misused to spread online spam and misinformation. Content watermarking deters misuse by hiding a message in model-generated outputs, enabling their detection using a secret watermarking key. Robustness is a core security property, stating that evading detection requires (significant) degradation of the con...
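The detection side of such a scheme can be illustrated with a minimal sketch, assuming a keyed "green-list" token watermark in the spirit of Kirchenbauer et al.; the hashing, green-list fraction, and key below are illustrative assumptions, not the method from this paper.

```python
# Minimal sketch: keyed statistical watermark detection for generated text.
# A secret key pseudo-randomly marks part of the vocabulary "green" at each
# position; watermarked text over-represents green tokens, which a z-test detects.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" per position

def is_green(prev_token: str, token: str, key: str) -> bool:
    """Assign `token` to the green list, seeded by the secret key and the previous token."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GAMMA

def detect(tokens: list[str], key: str) -> float:
    """Return a z-score; large values indicate the text carries the watermark."""
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

# Score an arbitrary token sequence; watermarked text would yield a large z-score.
print(detect("the model outputs a hidden signal in every token".split(), key="secret"))
```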
Privacy challenges in machine learning can stem from leakage by the model or from distributed data sources. Differential privacy addresses model leakage, while computation over encrypted data addresses leakage from distributed data sources. During training, cryptographic approaches need to be augmented with techniques such as federated learning.
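For the differential-privacy side, a minimal sketch of per-example gradient clipping with Gaussian noise (as in DP-SGD) is shown below; the clip norm and noise multiplier are illustrative assumptions, not values from the paper.

```python
# Minimal DP-SGD-style sketch: clip each per-example gradient to a fixed norm,
# sum, add Gaussian noise calibrated to the clip norm, and average.
import numpy as np

def privatize(per_example_grads: np.ndarray, clip: float = 1.0,
              noise_multiplier: float = 1.1, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / norms)  # bound each example's influence
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

grads = np.random.default_rng(1).standard_normal((32, 10))  # toy batch of 32 gradients
print(privatize(grads))
```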
Machine Learning as a Service (MLaaS) is an increasingly popular design where a company with abundant computing resources trains a deep neural network and offers query access for tasks like image classification. The challenge with this design is that MLaaS requires the client to reveal their potentially sensitive queries to the company hosting the...
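One way to keep queries hidden from the host is secure computation; below is a toy sketch using additive secret sharing of a single linear layer between two non-colluding servers. This is an illustrative protocol under assumed roles, not the system described in the paper.

```python
# Toy sketch: the client splits its query into two random shares; each server
# computes on one share only, so neither sees the query in the clear.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # model weights, known to both servers
x = rng.standard_normal(8)        # the client's sensitive query

# Client-side: split x into two shares that individually reveal nothing about x.
x0 = rng.standard_normal(8)
x1 = x - x0

# Server-side: each server evaluates the linear layer on its share.
y0 = W @ x0
y1 = W @ x1

# Client-side: reconstruct the result by adding the partial outputs (linearity).
assert np.allclose(y0 + y1, W @ x)
```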
Deep image classification models trained on large amounts of web-scraped data are vulnerable to data poisoning, a mechanism for backdooring models. Even a few poisoned samples seen during training can entirely undermine the model's integrity during inference. While it is known that poisoning more samples enhances an attack's effectiveness and robus...
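A toy sketch of the poisoning mechanism is shown below: a small trigger patch is stamped onto a handful of training images and their labels are flipped to an attacker-chosen target class. Patch location, size, and poison count are illustrative assumptions.

```python
# Toy backdoor poisoning: stamp a white-square trigger on a few images and
# relabel them so the model learns to associate the trigger with the target class.
import numpy as np

def poison(images: np.ndarray, labels: np.ndarray, target: int,
           n_poison: int = 10, patch: int = 3) -> tuple[np.ndarray, np.ndarray]:
    images, labels = images.copy(), labels.copy()
    idx = np.random.default_rng(0).choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0   # white trigger patch in the corner
    labels[idx] = target                  # flip labels to the attacker's target class
    return images, labels

clean_x = np.random.default_rng(1).random((100, 32, 32))
clean_y = np.random.default_rng(2).integers(0, 10, 100)
poisoned_x, poisoned_y = poison(clean_x, clean_y, target=0)
```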
Deepfakes refer to content synthesized using deep generators, which, when misused, have the potential to erode trust in digital media. Synthesizing high-quality deepfakes requires access to large and complex generators that only a few entities can train and provide. The threat is malicious users who exploit access to the provided model and genera...
Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as sc...
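A minimal sketch of sentence-level membership inference is shown below: sequences the model assigns unusually low loss (low perplexity) are flagged as likely training members. `sequence_loss` is a hypothetical stand-in for a real LM scoring call, and the threshold is an illustrative assumption.

```python
# Toy loss-threshold membership inference: low per-token loss suggests the
# sequence was seen during training.

def sequence_loss(text: str) -> float:
    """Hypothetical placeholder for an LM's average per-token negative log-likelihood."""
    return 2.0 if "secret" in text else 5.0   # toy values for illustration only

def is_member(text: str, threshold: float = 3.0) -> bool:
    return sequence_loss(text) < threshold

print(is_member("alice's secret phone number is ..."))    # True in this toy setup
print(is_member("an unremarkable held-out sentence"))      # False in this toy setup
```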
Deep Neural Network (DNN) watermarking is a method for provenance verification of DNN models. Watermarking should be robust against watermark removal attacks that derive a surrogate model that evades provenance verification. Many watermarking schemes that claim robustness have been proposed, but their robustness is only validated in isolation again...
In Machine Learning as a Service, a provider trains a deep neural network and provides many users access to it. However, the hosted (source) model is susceptible to model stealing attacks where an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust metho...
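The detection step can be sketched as fingerprint agreement: the provider keeps a secret set of probe inputs and flags a suspect model whose predictions agree with the source model unusually often. The probe set, models, and decision threshold below are illustrative assumptions, not the scheme from the paper.

```python
# Toy surrogate detection: measure label agreement on a secret probe set and
# flag suspects whose agreement exceeds a threshold.
import numpy as np

def agreement(source_pred: np.ndarray, suspect_pred: np.ndarray) -> float:
    return float(np.mean(source_pred == suspect_pred))

def is_surrogate(source_pred: np.ndarray, suspect_pred: np.ndarray,
                 threshold: float = 0.9) -> bool:
    return agreement(source_pred, suspect_pred) >= threshold

rng = np.random.default_rng(0)
src = rng.integers(0, 10, 100)        # source-model labels on the probe set
stolen = src.copy()
stolen[:5] = 0                        # near-identical surrogate, small deviations
indep = rng.integers(0, 10, 100)      # independently trained model
print(is_surrogate(src, stolen), is_surrogate(src, indep))   # True False
```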
Obtaining state-of-the-art performance from deep learning models imposes a high cost on model generators, due to tedious data preparation and substantial processing requirements. To protect the model from unauthorized re-distribution, watermarking approaches have been introduced in the past couple of years. The watermark allows the legiti...
SunFlower is a new model for the simulation of Solar Tower Power Systems. The model has been cross-validated against the NREL tool SolTrace and is currently being compared to other tools. The tool is freely accessible via a user-friendly web interface. The idea behind this Web App is to provide a highly accurate tool with strong support for comparable progra...