TL;DR: We use knowledge distillation to improve the accuracy of RTL-level timing prediction by transferring layout-stage physical information to an RTL-level model.
Abstract: Accurate and efficient timing prediction at the register-transfer level (RTL) remains a fundamental challenge in electronic design automation (EDA), particularly in striking a balance between accuracy and computational efficiency. While static timing analysis (STA) provides high-fidelity results through comprehensive physical parameters, its computational overhead makes it impractical for rapid design iterations. Conversely, existing RTL-level approaches sacrifice accuracy due to the limited physical information available. We propose RTLDistil, a novel cross-stage knowledge distillation framework that bridges this gap by transferring precise physical characteristics from a layout-aware teacher model (Teacher GNN) to an efficient RTL-level student model (Student GNN), both implemented as graph neural networks (GNNs). RTLDistil efficiently predicts key timing metrics, such as arrival time (AT), and employs a multi-granularity distillation strategy that captures timing-critical features at node, subgraph, and global levels. Experimental results demonstrate that RTLDistil significantly reduces RTL-level timing prediction error compared to state-of-the-art prediction models. This framework enables accurate early-stage timing prediction, advancing EDA's "left-shift" paradigm while maintaining computational efficiency. Our code and dataset will be publicly available at https://github.com/sklp-eda-lab/RTLDistil.
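To make the multi-granularity distillation strategy concrete, the following PyTorch sketch illustrates how node-, subgraph-, and global-level distillation terms might be combined into a single loss. This is our own minimal illustration, not the released RTLDistil code: the function name multi_granularity_kd_loss, the mean-pooling choices, and the loss weights are all hypothetical.

    # Hypothetical sketch of a multi-granularity distillation loss:
    # MSE between teacher and student node embeddings at three levels.
    import torch
    import torch.nn.functional as F

    def multi_granularity_kd_loss(student_h, teacher_h, cluster_ids,
                                  w_node=1.0, w_sub=1.0, w_glob=1.0):
        # student_h, teacher_h: [num_nodes, dim] node embeddings.
        # cluster_ids: [num_nodes] long tensor assigning nodes to subgraphs.

        # Node level: match per-node embeddings directly.
        loss_node = F.mse_loss(student_h, teacher_h)

        # Subgraph level: mean-pool embeddings within each cluster, then match.
        num_clusters = int(cluster_ids.max()) + 1
        def pool(h):
            sums = torch.zeros(num_clusters, h.size(1)).index_add_(0, cluster_ids, h)
            counts = torch.zeros(num_clusters).index_add_(
                0, cluster_ids, torch.ones_like(cluster_ids, dtype=h.dtype))
            return sums / counts.clamp(min=1).unsqueeze(1)
        loss_sub = F.mse_loss(pool(student_h), pool(teacher_h))

        # Global level: match whole-graph mean embeddings.
        loss_glob = F.mse_loss(student_h.mean(0), teacher_h.mean(0))

        return w_node * loss_node + w_sub * loss_sub + w_glob * loss_glob

    # Example: an 8-node graph, 16-dim embeddings, two subgraph clusters.
    s = torch.randn(8, 16, requires_grad=True)  # student embeddings
    t = torch.randn(8, 16)                      # frozen teacher embeddings
    clusters = torch.tensor([0, 0, 0, 1, 1, 1, 1, 0])
    loss = multi_granularity_kd_loss(s, t, clusters)
    loss.backward()

In practice such a distillation term would be added to the student's supervised loss on the timing labels (e.g., arrival time), with the teacher's parameters frozen.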
Lay Summary: Designing computer chips is like constructing a building—problems discovered late in construction are expensive to fix. Currently, engineers can only accurately predict if a chip will run fast enough after completing most of the design work, similar to discovering structural issues only after a building is nearly complete. We developed RTLDistil, a system that uses artificial intelligence to predict chip performance much earlier in the design process. Our approach works like an experienced teacher sharing knowledge with a student: a "teacher" AI model learns from completed chip designs with all their physical details, then transfers this knowledge to a "student" AI model that can make predictions using only early-stage design sketches. The student model achieves nearly the same accuracy as the teacher while working with limited information. Our experiments show RTLDistil significantly improves early-stage predictions compared to existing methods. This advancement allows chip designers to identify and fix timing problems months earlier, potentially saving millions in development costs and accelerating the delivery of faster, more efficient computer chips that power everything from smartphones to data centers.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/sklp-eda-lab/RTLDistil
Primary Area: Applications->Everything Else
Keywords: .+Knowledge Distillation.+RTL timing predict.+Electronic Design Automation
Submission Number: 5715