Results of the first human evaluation. "(Free)": the model generates summaries freely. "(GT)": the model generates summaries with the desired length set to the length of the reference summary.

Contexts in source publication

Context 1
... the first experiment (Table 5), participants are asked to choose the better of two given summaries. The desired length is set to the length of the reference summary. ...
Context 2
... need to rate each summary from 0 to 5. To guarantee the accuracy and credibility of the results, each article is presented once to each participant. As shown in Table 5, models with LenAtten have better completeness and correctness scores on both datasets, along with slight improvements in fluency. In the second experiment, Table 6 shows that (1) the completeness and correctness scores increase as the desired length increases. ...

Citations

... By controlling the length of the generated sentence, we can produce a suitable sentence or headline [27] that fits a given space for the description. This concept of length control has been studied not only in image captioning [13,16,26,35,48,76] but also in fields such as length-controllable generation [8,19,27], text summarization [7,18,24,33,37,45,46,49,58,61,75,78], translation [62] and paraphrasing [70]. However, the main approach in previous research is coarse control over a few length levels, and there have been hardly any attempts at fine-grained control of the length of generated captions. ...
... This is different from prefix learning because a length embedding is added at each word that is sequentially generated, so there is the advantage of being able to keep giving length control information to the decoder until the whole sentence is generated. There are two ways of embedding lengths: one is to add an embedding of the remaining length T − t up to the specified length T for the t-th generated word [37,48,75], and the other is to add the same length embedding to each word embedding regardless of the remaining length [13,16]. The former is likely more effective, since at each generation step it can give the decoder information on how many words remain to be generated. ...
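As a rough illustration of the two length-embedding schemes described in the excerpt above, the sketch below (PyTorch; all class and variable names are hypothetical, not taken from any of the cited papers) adds either an embedding of the remaining length T − t or a fixed embedding of the target length T to each word embedding before it enters the decoder.

import torch
import torch.nn as nn

class LengthAwareEmbedding(nn.Module):
    # Adds a length embedding to each word embedding during decoding.
    # mode="remaining": embed T - t (tokens left) at step t.
    # mode="fixed":     embed the target length T at every step.
    def __init__(self, vocab_size, d_model, max_len=128, mode="remaining"):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.len_emb = nn.Embedding(max_len + 1, d_model)
        self.mode = mode

    def forward(self, token_ids, target_len):
        # token_ids: (batch, steps) tokens generated so far; target_len: (batch,)
        batch, steps = token_ids.shape
        t = torch.arange(steps, device=token_ids.device)
        if self.mode == "remaining":
            lengths = (target_len.unsqueeze(1) - t).clamp(min=0)    # T - t at step t
        else:
            lengths = target_len.unsqueeze(1).expand(batch, steps)  # constant T
        return self.word_emb(token_ids) + self.len_emb(lengths)

# usage sketch
emb = LengthAwareEmbedding(vocab_size=32000, d_model=512, mode="remaining")
out = emb(torch.randint(0, 32000, (2, 10)), target_len=torch.tensor([10, 8]))  # (2, 10, 512)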
Preprint
This paper proposes a method for video captioning that controls the length of generated captions. Previous work on length control often had only a few levels for expressing length. In this study, we propose two methods of length embedding for fine-grained length control. A traditional embedding method is linear, using a one-hot vector and an embedding matrix. In this study, we propose methods that represent length as multi-hot vectors. One is bit embedding, which expresses length in bit representation, and the other is ordinal embedding, which uses the binary representation often used in ordinal regression. These multi-hot length representations are converted into length embeddings by a nonlinear MLP. This method allows not only control of the length of caption sentences but also control of the time needed to read the caption. Experiments using ActivityNet Captions and Spoken Moments in Time show that the proposed method effectively controls the length of the generated captions. Analysis of the embedding vectors with ICA shows that length and semantics were learned separately, demonstrating the effectiveness of the proposed embedding methods.
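A minimal sketch of the two multi-hot length representations the abstract describes, under the assumption of a fixed maximum length: bit embedding uses the binary digits of the length, ordinal embedding sets the first `length` positions to one, and a small nonlinear MLP maps either vector into the embedding space. Dimensions and names are illustrative, not the authors' implementation.

import torch
import torch.nn as nn

def bit_vector(length, n_bits=8):
    # Bit representation of an integer length, e.g. 5 -> [0,0,0,0,0,1,0,1].
    return torch.tensor([(length >> i) & 1 for i in reversed(range(n_bits))], dtype=torch.float)

def ordinal_vector(length, max_len=64):
    # Ordinal (cumulative) representation, e.g. 5 -> [1,1,1,1,1,0,...,0].
    v = torch.zeros(max_len)
    v[:length] = 1.0
    return v

class MultiHotLengthEmbedding(nn.Module):
    # Maps a multi-hot length vector to a dense length embedding via a nonlinear MLP.
    def __init__(self, in_dim, d_model=512, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, d_model))

    def forward(self, length_vec):
        return self.mlp(length_vec)

# usage sketch
bit_emb = MultiHotLengthEmbedding(in_dim=8)
ord_emb = MultiHotLengthEmbedding(in_dim=64)
e_bit = bit_emb(bit_vector(13))       # (512,)
e_ord = ord_emb(ordinal_vector(13))   # (512,)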
... Building upon existing summarization work, length-controllable summarization (LCS) (Yu et al. 2021) has been studied to control the length of output summaries. The LCS approach enables users to control summary length, thus serving a variety of real industrial scenarios in which users read different types of articles, use screens of different sizes, or read on different occasions. ...
... The majority of work in LCS focuses on abstractive methods, which aim to stop the generation process at the desired length by generating a stop token (e.g., [EOS]) (Rush, Chopra, and Weston 2015; Kikuchi et al. 2016; Liu, Luo, and Zhu 2018; Takase and Okazaki 2019; Yu et al. 2021; Makino et al. 2019; Liu, Jia, and Zhu 2022). Previous studies on length-controllable extractive methods are very limited. ...
Article
Unsupervised extractive summarization is an important technique in information extraction and retrieval. Compared with supervised methods, it does not require high-quality human-labelled summaries for training and thus can be easily applied to documents of different types, domains, or languages. Most existing unsupervised methods, including TextRank and PACSUM, rely on graph-based ranking of sentence centrality. However, this scorer cannot be applied directly in end-to-end training, and a position-related prior assumption is often needed to achieve good summaries. In addition, less attention has been paid to length-controllable extractors, where users can choose to summarize texts under a particular length constraint. This paper introduces an unsupervised extractive summarization model based on a siamese network, for which we develop a trainable bidirectional prediction objective between the selected summary and the original document. Different from centrality-based ranking methods, our extractive scorer can be trained in an end-to-end manner, with no additional requirement of a positional assumption. In addition, we introduce a differentiable length control module that approximates a 0-1 knapsack solver for end-to-end length-controllable extraction. Experiments show that our unsupervised method largely outperforms the centrality-based baseline using the same sentence encoder. In terms of length control ability, our trainable knapsack module consistently outperforms the strong baseline that does not use end-to-end training. Human evaluation further shows that our method performs best among the baselines in terms of relevance and consistency.
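The differentiable approximation is the contribution of the abstract above; the sketch below only shows the underlying exact 0-1 knapsack selection it approximates, i.e. choosing sentences that maximize a relevance score under a token budget. The function and variable names are hypothetical.

def knapsack_select(scores, lengths, budget):
    # Exact 0-1 knapsack over sentences: maximize total score within a token budget.
    # scores:  per-sentence relevance scores (e.g. from a trained extractive scorer)
    # lengths: per-sentence token counts
    # budget:  maximum total summary length in tokens
    n = len(scores)
    # dp[i][c] = best score using the first i sentences with capacity c
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(budget + 1):
            dp[i][c] = dp[i - 1][c]                              # skip sentence i-1
            if lengths[i - 1] <= c:
                take = dp[i - 1][c - lengths[i - 1]] + scores[i - 1]
                dp[i][c] = max(dp[i][c], take)                   # or include it
    # backtrack to recover the selected sentence indices
    chosen, c = [], budget
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= lengths[i - 1]
    return sorted(chosen)

# usage sketch: 4 sentences, 30-token budget -> picks sentences 0 and 2
print(knapsack_select(scores=[0.9, 0.2, 0.7, 0.4], lengths=[12, 8, 15, 10], budget=30))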
... This is different from prefix learning because a length embedding is added at each word that is sequentially generated, so there is the advantage of being able to keep giving length control information to the decoder until the whole sentence is generated. There are two ways of embedding lengths: one is to add an embedding of the remaining length T − t up to the specified length T for the t-th generated word [20], [23], [31], and the other is to add the same length embedding to each word embedding regardless of the remaining length [16], [17]. The former is likely more effective, since at each generation step it can give the decoder information on how many words remain to be generated. ...
Article
Full-text available
This paper proposes a method for video captioning that controls the length of generated captions. Previous work on length control often had only a few levels for expressing length. In this study, we propose two methods of length embedding for fine-grained length control. A traditional embedding method is linear, using a one-hot vector and an embedding matrix. In this study, we propose methods that represent length as multi-hot vectors. One is bit embedding, which expresses length in bit representation, and the other is ordinal embedding, which uses the binary representation often used in ordinal regression. These multi-hot length representations are converted into length embeddings by a nonlinear MLP. This method allows not only control of the length of caption sentences but also control of the time needed to read the caption. Experiments using ActivityNet Captions and Spoken Moments in Time show that the proposed method effectively controls the length of the generated captions. Analysis of the embedding vectors with ICA shows that length and semantics were learned separately, demonstrating the effectiveness of the proposed embedding methods.
... Text summarization aims to extract crucial information from texts and documents and is required to be accurate, concise, and easily comprehensible. A lot of effort has gone into making summaries concise, i.e. controlling the length of the output to match what is actually needed [2], [3]. Some researchers have employed sinusoidal positional encoders within neural encoder-decoder models to impose length constraints [4]. ...
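One common form of this idea (e.g. the length-difference positional encoding of Takase and Okazaki, 2019) drives the standard sinusoidal encoding by the number of tokens remaining until the desired length rather than by the position itself. The sketch below is a hedged illustration of that variant; dimension names are assumptions.

import numpy as np

def length_difference_encoding(target_len, step, d_model=512):
    # Sinusoidal encoding of the remaining length (target_len - step).
    # Same functional form as the Transformer positional encoding, but driven
    # by how many tokens are left rather than by the current position.
    remaining = max(target_len - step, 0)
    enc = np.zeros(d_model)
    for i in range(0, d_model, 2):
        angle = remaining / (10000 ** (i / d_model))
        enc[i] = np.sin(angle)
        if i + 1 < d_model:
            enc[i + 1] = np.cos(angle)
    return enc

# usage sketch: encoding added to the decoder input at step 3 of a 20-token target
vec = length_difference_encoding(target_len=20, step=3)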
Article
Full-text available
The task of text summarization aims to provide highly condensed summaries of long textual information, with the perfect summary being both precise and concise. In recent years, there has been extensive research on the brevity of summaries, but these methods still have significant room for improvement in ROUGE scores, especially when the beam width is increased. We propose a new model called DC (Dual-Level Contrastive Learning), which combines contrastive learning and data augmentation, and we design a new scoring function for the training phase to enhance accuracy and conciseness. Ultimately, our framework achieves excellent ROUGE scores, ensuring concise and readable output even with an increased beam width. Experimental results on the CNN/DailyMail (47.82 ROUGE-1, 0.017 VAR) and XSum (47.31 ROUGE-1, 0.0052 VAR) datasets demonstrate that our approach can significantly enhance the accuracy and conciseness of the summaries. Some metrics exceed those of the current state-of-the-art model BRIO [1], pushing state-of-the-art performance to a higher level.
... R1 and R2 scores:
w/o pre-trained LM
  LenInit (Kikuchi et al., 2016)   G   25.87  8.27
  LenEmb (Kikuchi et al., 2016)    G   26.73  8.39
  LC (Liu et al., 2018)            G   35.45  14.50
  GOLC (Makino et al., 2019)       G   38.27  16.30
  LenCtrl (Fan et al., 2018)       G   39.16  15.54
  LenAttn (Yu et al., 2021)        G   39.82  17.31
  GPT2 CMDP (Chan et al., 2021)    G   41.72  17.99
  LPAS (Saito et al., 2020)        GE  42.55  20.09
w/ pre-trained LM
  BART (Lewis et al., 2020)        N   44.16  21.28
  BLPAS (Liu et al., 2022)         GE  42.95  20.29
  LAAM (Liu et al., 2022)          GE  43.55  20.44
  PtLAAM (Liu et al., 2022)        GE  44 ...
Preprint
Full-text available
Many applications of text generation, such as summarization, benefit from accurately controlling the text length. Existing approaches to length-controlled summarization either result in degraded performance or can only control the length approximately. In this work, we present a framework to generate summaries with precisely the specified number of tokens or sentences, while maintaining or even improving the text quality. In addition, we jointly train the models to predict the lengths, so our model can generate summaries of optimal length. We evaluate the proposed framework on the CNNDM dataset and show improved performance compared to existing methods.
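The framework above achieves exact lengths through its own mechanism; as a much simpler point of comparison, the hedged sketch below shows one generic way to guarantee an exact token count at decoding time by masking the logits: EOS is forbidden until the target count is reached and forced once it is. This is not the cited method, only an illustration of the constraint.

import torch

def constrain_to_exact_length(logits, generated_len, target_len, eos_id):
    # Force exactly `target_len` content tokens before EOS.
    # While fewer than target_len tokens have been generated, EOS is masked out;
    # once the target is reached, everything except EOS is masked.
    # logits: next-token scores of shape (vocab_size,)
    logits = logits.clone()
    if generated_len < target_len:
        logits[eos_id] = float("-inf")        # too early to stop
    else:
        keep = logits[eos_id].item()
        logits[:] = float("-inf")
        logits[eos_id] = keep                 # must stop now
    return logits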
... In addition to regularizing the training of the decoder, this method reduces the search space by searching only for summaries of the appropriate length during generation, and so it is expected to produce a concise and informative summary. Although there have been studies on adjusting the output length of summaries, they have focused on controlling the output length for a given desired length (Kikuchi et al., 2016; Liu et al., 2018; Takase and Okazaki, 2019; Makino et al., 2019; Saito et al., 2020; Yu et al., 2021). We incorporate a target-length prediction task on the encoder side and then inject the predicted length into the decoder side to generate the final summary. ...
... In summary length control, previous work mostly focuses on controlling models to generate summaries with a predefined length (Kikuchi et al., 2016; Liu et al., 2018; Takase and Okazaki, 2019; Makino et al., 2019; Saito et al., 2020; Yu et al., 2021). Our work is novel because it enables a model to dynamically predict the appropriate summary length from the input text without relying on any predefined length. ...
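The excerpts above describe predicting a target length on the encoder side and injecting it into the decoder. A minimal sketch of such a length-predictor head is given below, assuming mean pooling over encoder states and classification over possible lengths; all names and dimensions are hypothetical, not the cited implementation.

import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    # Predicts a target summary length from pooled encoder states.
    # The predicted length can then be injected into the decoder
    # (e.g. via a length embedding) instead of a user-supplied length.
    def __init__(self, d_model=512, max_len=128):
        super().__init__()
        self.proj = nn.Linear(d_model, max_len + 1)   # classify over 0..max_len

    def forward(self, encoder_states, src_mask):
        # encoder_states: (batch, src_len, d_model); src_mask: (batch, src_len), 1 for real tokens
        mask = src_mask.unsqueeze(-1).float()
        pooled = (encoder_states * mask).sum(1) / mask.sum(1).clamp(min=1e-6)  # masked mean
        return self.proj(pooled).argmax(-1)            # predicted length per example

# usage sketch
pred = LengthPredictor()
target_len = pred(torch.randn(2, 30, 512), torch.ones(2, 30))  # (2,) predicted lengths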
Article
Full-text available
Automatic text summarization is a cornerstone of natural language processing, yet existing methods often struggle to maintain contextual integrity and capture nuanced sentence relationships. The Optimized Auto Encoded Long Short-Term Memory Network (OAELSTM), enhanced by the Whale Optimization Algorithm (WOA), offers a novel approach to this challenge. Existing summarization models frequently produce summaries that are either too generic or disjointed, failing to preserve the essential content. The OAELSTM model, integrating deep LSTM layers and autoencoder mechanisms, focuses on extracting key phrases and concepts, ensuring that summaries are both informative and coherent. WOA fine-tunes the model’s parameters, enhancing its precision and efficiency. Evaluation on datasets such as CNN/Daily Mail and Gigaword demonstrates the model’s superiority over existing approaches. It achieves a ROUGE score of 0.456, an accuracy rate of 84.47%, and a specificity score of 0.3244, all within an efficient processing time of 4,341.95 s.
Preprint
Large language models (LLMs) have attracted great attention given their strong performance on a wide range of NLP tasks. In practice, users often expect generated texts to fall within a specific length range, making length-controlled generation an important topic, especially for GPT-style models. Existing length control methods mostly focus on a simple control type of "equal to" a target length. Different from them, we propose a prompt-based method to achieve length-controlled generation under different control types with high accuracy. In particular, we adopt reinforcement learning (RL) and sample filtering with the reward signal given by rule-based reward models, which enhances the length control ability of models by rewarding outputs that follow certain control instructions. In addition, we introduce a standard prompt extractor to parse arbitrary user input into standard control instructions. Experiments show that our method significantly improves the accuracy of prompt-based length control on popular summarization datasets like CNNDM and NYT under multiple control types. Moreover, both the standard prompt extractor and the RL-tuned model show strong generalization to unseen control prompt templates.
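The abstract above mentions rule-based reward models for different control types. A hedged sketch of such a rule, illustrative only and not taken from the cited work, is shown below: it returns a binary reward depending on whether the output length satisfies the parsed control instruction.

def length_reward(output_len, control_type, target, tolerance=0):
    # Rule-based reward for length control during RL fine-tuning (illustrative only).
    # control_type: "equal", "at_most", "at_least", or "between" (target is a (low, high) pair).
    # Returns 1.0 when the instruction is satisfied within `tolerance` tokens, else 0.0.
    if control_type == "equal":
        return 1.0 if abs(output_len - target) <= tolerance else 0.0
    if control_type == "at_most":
        return 1.0 if output_len <= target + tolerance else 0.0
    if control_type == "at_least":
        return 1.0 if output_len >= target - tolerance else 0.0
    if control_type == "between":
        low, high = target
        return 1.0 if low <= output_len <= high else 0.0
    raise ValueError(f"unknown control type: {control_type}")

# usage sketch
print(length_reward(48, "between", (40, 60)))  # 1.0
print(length_reward(75, "at_most", 60))        # 0.0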