Claim verification can be a difficult task, even for humans. In this paper, we propose a method that improves automated claim verification by extracting short facts from evidence to strengthen reasoning. Our framework (FactGen) uses Large Language Models (LLMs) to generate short factual statements from the evidence and then labels these facts according to their semantic relevance to the claim and evidence. We then add a relevant fact detection task (FactDetect) to the claim verification task in a multi-task learning setup to improve both performance and explainability. A minimal sketch of this generate-then-label idea follows.
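The sketch below illustrates the generate-then-label pipeline described above; the function names, prompt wording, and similarity-threshold heuristic are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of FactGen (fact generation) and relevance labeling for FactDetect.
# The LLM and similarity functions are supplied by the caller; all names here are assumed.
from typing import Callable, List, Tuple

def generate_facts(llm: Callable[[str], str], evidence: str) -> List[str]:
    """Prompt an LLM to decompose evidence into short, self-contained factual statements."""
    prompt = (
        "Break the following evidence into short, self-contained factual statements, "
        "one per line:\n\n" + evidence
    )
    return [line.strip("- ").strip() for line in llm(prompt).splitlines() if line.strip()]

def label_fact_relevance(
    similarity: Callable[[str, str], float],
    facts: List[str],
    claim: str,
    evidence: str,
    threshold: float = 0.5,
) -> List[Tuple[str, int]]:
    """Label each generated fact as relevant (1) or not (0) to both the claim and the
    evidence, approximated here with a semantic-similarity score and a fixed threshold."""
    labeled = []
    for fact in facts:
        score = min(similarity(fact, claim), similarity(fact, evidence))
        labeled.append((fact, int(score >= threshold)))
    return labeled
```

These labeled facts can then serve as supervision for the auxiliary fact detection objective trained jointly with claim verification.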
Our method improves the supervised claim verification model by 15% in F1 score when evaluated on SciFact and achieves competitive results on other challenging scientific claim verification datasets. We also show that FactDetect can be adapted as a prompting strategy for verdict prediction with LLMs: incorporating FactDetect into relatively small LLMs such as Llama2-13B and Vicuna-13B significantly improves verification performance on the SciFact dataset, and higher-quality FactGen-generated sentences outperform state-of-the-art models on all test sets.