Abstract: Profanity often conveys rich meaning concisely. We exploit this by substituting Russian obscene terms for longer neutral expressions, yielding sentences up to 23\% shorter, and we introduce a reinforcement learning method that fine-tunes models for brevity without sacrificing informativeness. Evaluations on Gazeta and ru\_ParaDetox show that our approach produces summaries over 65\% shorter while maintaining comparable metrics. These findings demonstrate the effectiveness of combining expressive lexicon substitution with reward-guided training for efficient text summarization and style transfer.
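The abstract's reward-guided training for brevity could be sketched, under assumptions, as a reward that trades off compression against an informativeness proxy; the function name, the unigram-recall proxy, and the weighting `alpha` are all hypothetical illustrations, not the paper's actual reward:

```python
def brevity_reward(summary: str, reference: str, alpha: float = 0.5) -> float:
    """Hypothetical reward combining a compression bonus with a crude
    informativeness proxy (unigram recall); not the paper's method."""
    ref_tokens = reference.split()
    sum_tokens = summary.split()
    # Compression: fraction of tokens saved relative to the reference.
    compression = max(0.0, 1.0 - len(sum_tokens) / max(len(ref_tokens), 1))
    # Informativeness proxy: recall of unique reference tokens.
    overlap = len(set(sum_tokens) & set(ref_tokens))
    recall = overlap / max(len(set(ref_tokens)), 1)
    # Reward brevity only insofar as content is preserved.
    return alpha * compression + (1 - alpha) * recall

reference = "the quick brown fox jumps over the lazy dog"
summary = "quick fox jumps over lazy dog"
r = brevity_reward(summary, reference)
```

In an RL fine-tuning loop (e.g. a policy-gradient setup), such a scalar would score each sampled summary; a shorter summary that still covers the reference content receives a higher reward.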
Paper Type: Short
Research Area: Summarization
Research Area Keywords: Summarization, Generation, Language Modeling, NLP Applications, Semantics: Lexical and Sentence-Level
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Russian
Submission Number: 3033