Keywords: Machine Learning, Code Conversion, Frameworks, Compilers, Infrastructure
TL;DR: We present Ivy, a machine learning framework and transpiler that accelerates research, development, and inference by enabling ML infrastructure to be used across models and frameworks.
Abstract: Today's machine learning (ML) ecosystem is deeply fragmented by the proliferation of mutually incompatible frameworks, compiler infrastructure, and hardware. Each tool in this fragmented stack has its own benefits and drawbacks, making it better suited to certain use cases. As a result, different parts of industry and academia adopt different tools for different use cases, which hinders collaboration and democratization and, because each tool connects only sparsely and partially to the rest of the stack, leads to costly re-implementations and sub-optimal runtime efficiency at deployment. In this paper, we present Ivy, a complementary, multi-backend ML framework, and its transpiler, which aim to bridge this gap and solve the fragmentation problem by enabling code from one framework to be integrated into another, speeding up research, development, and model inference.
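To make the transpilation idea concrete, the sketch below shows what reusing a model from one framework inside another could look like. It is a hypothetical illustration only: the call `ivy.transpile` with `source`/`to` arguments and the toy `TinyNet` model are assumptions made for this sketch, not a description of the paper's actual API.

```python
# Hypothetical sketch of framework-to-framework transpilation.
# The ivy.transpile call and its arguments are assumptions, not the paper's API.
import ivy
import torch


class TinyNet(torch.nn.Module):
    """A small PyTorch module we would like to reuse from another framework."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = TinyNet()

# Assumed call: convert the PyTorch model so it can be used from JAX code,
# letting JAX tooling operate on it without a manual rewrite.
jax_model = ivy.transpile(model, source="torch", to="jax")
```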
Submission Number: 58