Keywords: Migration, Neural Networks, MDE, Model Transformation, Code Generation
Abstract: The development of smart systems (i.e., systems enhanced with AI components) has thrived thanks to rapid advancements in neural networks (NNs). A wide range of libraries and frameworks has consequently emerged to support NN design and implementation. The choice among them depends on factors such as available functionality, ease of use, documentation, and community support. After adopting a given NN framework, organizations might later choose to switch to another if performance declines, requirements evolve, or new features are introduced. Unfortunately, migrating NN implementations across libraries is challenging due to the lack of migration approaches specifically tailored to NNs. This increases the time and effort needed to modernize NNs, as manual updates are necessary to avoid relying on outdated implementations and to ensure compatibility with new features. In this paper, we propose an approach to automatically migrate NN code across deep learning frameworks. Our method uses a pivot NN model to create an abstraction of the NN prior to migration. We validate our approach on two popular NN frameworks, namely PyTorch and TensorFlow. We also discuss the challenges of migrating code between the two frameworks and how our method addresses them. Experimental evaluation on five NNs shows that our approach successfully migrates their code and produces NNs that are functionally equivalent to the originals. Our artefacts are available online.
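To give a flavor of the pivot-model idea described in the abstract, the sketch below shows a framework-agnostic layer description from which code for either target framework can be generated. All names and the representation itself are invented for illustration; they do not reflect the paper's actual implementation.

```python
# Hypothetical pivot abstraction: a framework-neutral description of the NN
# is extracted from the source code, then target-framework code is generated
# from it. Only a toy "dense" layer is modeled here.

PIVOT_MODEL = [
    {"type": "dense", "units": 64, "activation": "relu", "in_features": 784},
    {"type": "dense", "units": 10, "activation": "softmax", "in_features": 64},
]

def to_pytorch(pivot):
    """Emit PyTorch layer constructor strings from the pivot abstraction."""
    lines = []
    for layer in pivot:
        if layer["type"] == "dense":
            lines.append(f"nn.Linear({layer['in_features']}, {layer['units']})")
            if layer["activation"] == "relu":
                lines.append("nn.ReLU()")
    return lines

def to_keras(pivot):
    """Emit TensorFlow/Keras layer constructor strings from the same abstraction."""
    return [
        f"layers.Dense({layer['units']}, activation='{layer['activation']}')"
        for layer in pivot
        if layer["type"] == "dense"
    ]
```

Because both generators read the same abstraction, adding support for a new framework only requires a new emitter, not a new pairwise translator; this is the usual motivation for a pivot representation.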
Revision Summary: The following changes were made in response to the reviewers’ feedback:
**Section 6.3.2**: Following Reviewer 1K5K’s suggestion, a sentence was added clarifying the cause of the small differences observed between source and migrated NNs, supported by two additional references suggested by the reviewer [10, 63].
**Section 7.1**: Following Reviewer 4o5p’s observation about the inconsistent use of "interoperability" across the paper, the sentence was revised to clarify that the term refers specifically to runtime cross-framework compatibility in this context.
**Section 7.2**: Several additions were made in response to feedback from all three reviewers:
- ADELT was added and explicitly compared with our approach [22].
- The discussion of ONNX-derived libraries was expanded to explicitly list and discuss onnx2torch, onnx2keras, keras-onnx, and torch.onnx [17, 43, 48, 49, 52, 64].
- TVM, MLIR, and DyCL were added and discussed in relation to our approach [11, 12, 36].
- A sentence was added discussing the trade-offs between our approach and existing conversion tools.
- TensorFlow Lite was added as an example of a deployment-oriented framework [23].
- Jajal et al. and Lee et al. were added and explicitly discussed [30, 37].
**Author information and repository**: Author names, affiliations, and acknowledgements have been added, and the anonymous repository link has been updated to the public repository.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public.
Paper Type: Full-length papers (i.e. case studies, theoretical, applied research papers). 8 pages
Reroute: false
Submission Number: 14