Keywords: synthetic data, industrial assets, LLM
Abstract: With the emergence of agentic workflow development using Large Language Models (LLMs) for industrial applications, there is a growing need for small language models to possess domain-specific knowledge. Many existing approaches use reference materials, such as books, as the source of knowledge. This paper presents a novel approach to fine-tuning a base LLM in a continued pre-training fashion for the industrial assets domain: knowledge documented in a tabular structure is leveraged to generate synthetic knowledge documents and a large set of question-answer pairs using an entity- and relationship-driven approach. This approach ultimately enables the fine-tuning of a small LLM (LLAMA 3.1), allowing us to evaluate the performance enhancement it brings. We tested the base and enhanced models on the Industry4-FMSR MCQA dataset, comprising over 2,600 samples, and obtained an overall improvement of around 4%. Our experimental results confirm the validity of our approach to generating synthetic data for knowledge-infusion tasks.
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10844