Abstract: This paper describes Zoom's submission to the First Shared Task on Automatic Minuting at Interspeech 2021. We participated in Task A: generating abstractive summaries of meetings. For this task, we use a transformer-based summarization model that is first trained on data from a similar domain and then fine-tuned for domain transfer. In this configuration, our model does not yet produce usable summaries. We hypothesize that, in the choice of pretraining corpus, the target side is more important than the source side.
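To make the configuration concrete, the following is a minimal sketch of such a pretrain-then-fine-tune pipeline using the Hugging Face transformers library; the checkpoint, data file, field names, and hyperparameters are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch of the two-stage setup described above, using the
# Hugging Face "transformers" and "datasets" libraries. The checkpoint
# ("facebook/bart-large-cnn", a BART model already trained on news
# summarization, i.e. a similar domain), the file "meetings_train.json",
# and the fields "transcript" / "minutes" are illustrative assumptions,
# not the paper's actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Stage 1 is implicit: start from a summarization checkpoint whose
# training data resembles the target task.
checkpoint = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Stage 2: fine-tune on in-domain meeting data for domain transfer.
raw = load_dataset("json", data_files={"train": "meetings_train.json"})

def preprocess(batch):
    # Truncate long meeting transcripts to the encoder's context window.
    inputs = tokenizer(batch["transcript"], max_length=1024, truncation=True)
    # Tokenize the reference minutes as decoder targets.
    labels = tokenizer(text_target=batch["minutes"], max_length=256, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

train = raw["train"].map(
    preprocess, batched=True, remove_columns=raw["train"].column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="minuting-bart",
        learning_rate=3e-5,
        num_train_epochs=3,
        per_device_train_batch_size=2,
        predict_with_generate=True,
    ),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```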