ADELT: Unsupervised Transpilation Between Deep Learning Frameworks

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Applications, Programming Languages, Deep Learning, Unsupervised Learning, Adversarial Training
Abstract: We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source transpilation between deep learning frameworks. Unlike prior approaches, ADELT formulates transpilation as an API keyword mapping problem, where a keyword is an API function name or a parameter name. Based on contextual embeddings extracted by a BERT pretrained on code, we train aligned API embeddings in a domain-adversarial setting, from which we generate a dictionary for keyword translation. The model is trained on our unlabeled DL corpus built from web crawl data, without any hand-crafted rules or parallel data. Our method outperforms state-of-the-art transpilers on multiple transpilation pairs, including PyTorch-Keras and PyTorch-MXNet. We make our code, corpus, and evaluation benchmark publicly available.
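The domain-adversarial alignment the abstract describes can be sketched roughly as follows. This is a toy illustration with synthetic embeddings and hypothetical shapes, not the authors' implementation: a linear map plays the generator (mapping "PyTorch-side" keyword embeddings into the "Keras-side" space), and a logistic classifier plays the discriminator trying to tell mapped source embeddings from real target embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 200

# Toy stand-ins for contextual API-keyword embeddings from two frameworks.
# (Synthetic data; ADELT extracts real ones with a BERT-style encoder.)
X = rng.normal(size=(n, d))                        # source-framework keywords
true_map = rng.normal(size=(d, d))
Y = X @ true_map + 0.1 * rng.normal(size=(n, d))   # target-framework keywords

W = np.eye(d)                  # generator: linear map from X-space to Y-space
v = 0.1 * rng.normal(size=d)   # discriminator: logistic-regression weights
b = 0.0
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(300):
    Xm = X @ W                                     # mapped source embeddings
    # Discriminator step: label real target = 1, mapped source = 0.
    for E, label in ((Y, 1.0), (Xm, 0.0)):
        p = sigmoid(E @ v + b)
        g = (p - label) / len(E)                   # dBCE/dlogit
        v -= lr * (E.T @ g)
        b -= lr * g.sum()
    # Generator step: update W so mapped source looks like target (label 1).
    p = sigmoid(Xm @ v + b)
    g = (p - 1.0) / len(Xm)                        # dBCE/dlogit
    W -= lr * np.outer(X.T @ g, v)                 # since dlogit/dW = outer(x_i, v)

# Fraction of mapped source embeddings the discriminator mistakes for target.
fool_rate = float((sigmoid((X @ W) @ v + b) > 0.5).mean())
```

After alignment, nearest-neighbor lookup between mapped source embeddings and target embeddings would yield the keyword-translation dictionary; the abstract's full method additionally relies on the pretrained code encoder, which this sketch omits.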
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)
TL;DR: We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source transpilation between deep learning frameworks.
Supplementary Material: zip