Workshop Track: Machine Learning for Systems
Presentation: Virtual
Keywords: generative model, compiler, high-level synthesis, directed acyclic graphs
Presenter Full Name: Mufei Li
TL;DR: A generative model of directed acyclic graphs for the design and optimization of systems and hardware
Presenter Email: mufei.li@gatech.edu
Abstract: Directed acyclic graphs (DAGs) are ubiquitous in the design and optimization of systems. For example, neural networks have become a key computational workload for system design, and neural architectures are natively DAGs. Intermediate representations in compilers and hardware-synthesis tools, which characterize the execution dependencies and dataflows of a computation, also often take the form of DAGs. In sensitive scenarios, we believe that learning a conditional generative model of DAGs enables the release of synthetic data that preserves downstream utility while protecting intellectual property (obfuscation). In addition, such models can efficiently search the space of valid DAGs for desired properties, which is of great potential use in applications like compiler optimization. However, generating realistic DAGs is challenging due to their inherent directional and logical dependencies. This paper introduces LayerDAG, an autoregressive diffusion model designed to address these challenges in DAG generation. By iteratively removing the nodes without predecessors, together with their outgoing edges, we obtain a unique tokenization that turns a DAG into a sequence of directed bipartite graphs and its nodes into a sequence of node layers. LayerDAG leverages autoregressive generation to model directional dependencies and employs diffusion models to capture logical dependencies within each bipartite graph. Empirical studies demonstrate that LayerDAG outperforms existing DAG generative models, particularly in generating large-scale DAGs with up to 400 nodes, a scale critical for systems applications. Our implementation will be available at https://github.com/Graph-COM/LayerDAG.
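For concreteness, the layer-wise peeling described in the abstract can be sketched as a Kahn-style topological layering. The following is a minimal illustrative Python sketch; the function name dag_to_layers and its interface are assumptions for illustration, not taken from the LayerDAG codebase:

    from collections import defaultdict

    def dag_to_layers(num_nodes, edges):
        # Illustrative sketch: split a DAG into node layers by repeatedly
        # peeling off the nodes that currently have no predecessors.
        # edges: list of (src, dst) pairs, each denoting an edge src -> dst.
        in_degree = [0] * num_nodes
        successors = defaultdict(list)
        for src, dst in edges:
            in_degree[dst] += 1
            successors[src].append(dst)

        # The first layer holds all nodes with no predecessors.
        current = [v for v in range(num_nodes) if in_degree[v] == 0]
        layers = []
        while current:
            layers.append(current)
            next_layer = []
            for v in current:
                # Removing v's outgoing edges may expose new predecessor-free nodes.
                for u in successors[v]:
                    in_degree[u] -= 1
                    if in_degree[u] == 0:
                        next_layer.append(u)
            current = next_layer

        assert sum(len(layer) for layer in layers) == num_nodes, "input graph has a cycle"
        return layers

    # Example: the diamond DAG 0 -> {1, 2} -> 3 tokenizes into [[0], [1, 2], [3]].
    # Each consecutive pair of layers, with the edges between them, forms one
    # directed bipartite graph in the resulting sequence.
    print(dag_to_layers(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))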
Presenter Bio: Mufei Li is a PhD student in machine learning at Georgia Institute of Technology. His research interests include graph machine learning and generative models.
Paper Checklist Guidelines: I certify that all co-authors have validated the presented results and conclusions, and have read and commit to adhering to the Paper Checklist Guidelines, Call for Papers and Publication Ethics.
YouTube Link: https://www.youtube.com/watch?v=lE4nIRJTCDE
Dataset Release: I certify that all co-authors commit to release the dataset and necessary scripts to reproduce the presented results.
Workshop Registration: Yes, at least one of the authors has registered for the workshop (Two-Day Registration at minimum).
Submission Number: 14