Keywords: reactive synthesis, temporal logics, deep learning, neural networks, language models
TL;DR: This extended abstract reports preliminary results on fine-tuning pre-trained language models for solving reactive synthesis problems end-to-end.
Abstract: This extended abstract reports preliminary results on fine-tuning pre-trained language models for solving reactive synthesis problems end-to-end. In recent work, hierarchical Transformer neural networks have been successfully trained from scratch to synthesize sequential circuits directly from formal specifications. We improve over existing approaches by fine-tuning CodeT5 models that have been pre-trained on both natural language and programming languages. Our experiments show improved generalization and sample efficiency compared to the previous approach.
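A minimal sketch of what such fine-tuning could look like with the Hugging Face transformers library, assuming (specification, circuit) pairs serialized as text; the example pair, checkpoint choice, and hyperparameters below are illustrative assumptions, not the authors' exact setup:

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load a pre-trained CodeT5 checkpoint (checkpoint choice is an assumption).
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Hypothetical training pair: an LTL specification mapped to a circuit
# in textual (e.g., AIGER-style) form; the target here is a placeholder.
spec = "G (req -> F grant)"
circuit = "aag 2 1 1 1 0 ..."

inputs = tokenizer(spec, return_tensors="pt")
labels = tokenizer(circuit, return_tensors="pt").input_ids

# One standard sequence-to-sequence fine-tuning step.
model.train()
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()
```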