Figure (CC BY 4.0): Result Comparison between Our Methods and Baselines on Text Quality and Watermark Strength for the HumanEval Dataset.
Source publication
Recent advancements in large language models (LLMs) have highlighted the risk of misuse, raising concerns about accurately detecting LLM-generated content. A viable solution for the detection problem is to inject imperceptible identifiers into LLMs, known as watermarks. Previous work demonstrates that unbiased watermarks ensure unforgeability and p...
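The abstract describes detection via imperceptible identifiers injected into generated text. As a minimal sketch of how token-level watermark detection can work in general, the snippet below implements the common hash-seeded "green list" statistical test; this is a Kirchenbauer-style baseline for illustration, not the unbiased scheme the source publication studies, and `GREEN_FRACTION`, the hashing choice, and all function names are illustrative assumptions.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed fraction of tokens marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def detection_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the null hypothesis
    that unwatermarked text lands on the green list with probability GREEN_FRACTION."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    mean = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - mean) / std
```

A large z-score indicates far more green tokens than chance would produce, so the text is flagged as watermarked; unbiased schemes like the one discussed above aim for the same detectability while leaving the model's output distribution unchanged.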