Abstract: Conversational Question Answering (ConvQA) is a Conversational Search task in a simplified setting, where an answer must be extracted from a given passage. Neural language models such as BERT, fine-tuned on large-scale ConvQA datasets such as CoQA and QuAC, have been used to address this task. Recently, Multi-Task Learning (MTL) has emerged as a particularly interesting approach for developing ConvQA models, where the objective is to enhance the performance of a primary task by sharing the learned structure across several related auxiliary tasks. However, existing ConvQA models that leverage MTL have not investigated the dynamic adjustment of the relative importance of the different tasks during learning, nor the resulting impact on the performance of the learned models. In this paper, we first study the effectiveness and efficiency of dynamic MTL methods, including Evolving Weighting, Uncertainty Weighting, and Loss-Balanced Task Weighting, compared to static MTL methods such as the uniform weighting of tasks. Furthermore, we propose a novel hybrid dynamic method that combines an Abridged Linear schedule for the main task with Loss-Balanced Task Weighting (LBTW) for the auxiliary tasks, so as to automatically fine-tune the task weights during learning, ensuring that each task's weight is adjusted according to the relative importance of the different tasks. We conduct experiments using QuAC, a large-scale ConvQA dataset. Our results demonstrate the effectiveness of our proposed method, which significantly outperforms both single-task learning and static task weighting methods, with improvements ranging from +2.72% to +3.20% in F1 scores. Finally, our findings show that the performance of MTL in developing ConvQA models is sensitive to the correct selection of the auxiliary tasks, as well as to an adequate balancing of the losses of these tasks during training using LBTW.
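The hybrid weighting scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes LBTW weights each auxiliary task by the ratio of its current loss to its initial loss raised to a power alpha (following Liu et al.'s formulation of LBTW), and that the "Abridged Linear" schedule ramps the main-task weight linearly to 1.0 over a warm-up period and then holds it constant. The function names, the warm-up fraction, and the default alpha are illustrative assumptions.

```python
def abridged_linear_weight(step, total_steps, warmup_frac=0.1):
    """Main-task weight: linear ramp to 1.0, then held constant (abridged).

    warmup_frac is an assumed hyperparameter, not taken from the paper.
    """
    warmup_steps = max(1, int(total_steps * warmup_frac))
    return min(1.0, step / warmup_steps)


def lbtw_weight(current_loss, initial_loss, alpha=0.5):
    """Loss-Balanced Task Weighting: w_t = (L_t / L_t^(0)) ** alpha.

    A task whose loss has dropped quickly (is "easier") receives a
    smaller weight, re-balancing training toward slower tasks.
    """
    return (current_loss / initial_loss) ** alpha


def combined_loss(step, total_steps, main_loss, aux_losses, initial_aux_losses):
    """Total MTL loss: scheduled main-task loss plus LBTW-weighted auxiliary losses."""
    total = abridged_linear_weight(step, total_steps) * main_loss
    for loss, loss0 in zip(aux_losses, initial_aux_losses):
        total += lbtw_weight(loss, loss0) * loss
    return total
```

In a training loop, `combined_loss` would be computed once per batch from the per-task losses and backpropagated as a single scalar; the initial auxiliary losses are recorded on the first batch (or first epoch) and kept fixed thereafter.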