Abstract: Learning effective representations from raw gate-level circuits is essential for incorporating machine learning techniques into logic synthesis, physical design, and verification. Existing structure-based learning methods tend to concentrate mainly on graph topology while neglecting logic functionality. This oversight often prevents them from capturing the underlying circuit semantics, thereby limiting their overall applicability. To address this concern, we propose FGNN2, a novel circuit representation learning framework that utilizes a contrastive scheme to effectively extract generic functionality knowledge. We construct a comprehensive pretraining dataset through a customized circuit augmentation scheme. We also develop a novel contrastive loss function that captures the relative functional distance between different circuits and generates representations invariant to input ordering. In addition, we employ a customized graph neural network (GNN) architecture to better align with the above framework. Comprehensive experiments on multiple complex real-world designs demonstrate that our proposed solution significantly outperforms state-of-the-art circuit representation learning flows.