Abstract: In this work, we propose DeepSeq, a novel representation learning framework for sequential netlists. It employs a graph neural network (GNN) with a customized propagation scheme to capture temporal correlations. To ensure effective learning, we propose a multi-task training objective with two sets of strongly related supervision: the logic probability and the transition probability at each logic gate. A novel dual attention aggregation mechanism is introduced to learn both tasks efficiently. Experimental results show that DeepSeq outperforms other GNN models on sequential circuit learning and produces accurate reliability and power estimates across diverse circuits and workloads.
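To make the architecture described above concrete, the following is a minimal sketch (not the authors' implementation) of a message-passing layer with two attention-based aggregators, one per supervision signal, followed by two prediction heads for the per-gate logic and transition probabilities. All class and variable names (`DualAttentionAggregator`, `DeepSeqSketch`, `n_rounds`, etc.) are hypothetical, and the repeated-propagation loop is only an assumed stand-in for the paper's customized propagation over sequential feedback.

```python
# Hypothetical sketch of dual attention aggregation with a multi-task head;
# not the DeepSeq authors' code.
import torch
import torch.nn as nn


class DualAttentionAggregator(nn.Module):
    """Aggregates fan-in messages twice, with separate attention weights
    for the logic-probability task and the transition-probability task."""

    def __init__(self, dim):
        super().__init__()
        self.att_logic = nn.Linear(2 * dim, 1)  # attention for task 1
        self.att_trans = nn.Linear(2 * dim, 1)  # attention for task 2
        self.update = nn.GRUCell(2 * dim, dim)  # fuse both aggregated views

    def forward(self, h, edge_index):
        # h: (N, dim) gate embeddings; edge_index: (2, E) fan-in edges (src, dst).
        src, dst = edge_index
        pair = torch.cat([h[dst], h[src]], dim=-1)  # (E, 2*dim) edge features
        views = []
        for att in (self.att_logic, self.att_trans):
            w = torch.sigmoid(att(pair))            # (E, 1) per-edge weights
            agg = torch.zeros_like(h)
            agg.index_add_(0, dst, w * h[src])      # attention-weighted fan-in sum
            views.append(agg)
        return self.update(torch.cat(views, dim=-1), h)


class DeepSeqSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.layer = DualAttentionAggregator(dim)
        self.head_logic = nn.Linear(dim, 1)  # per-gate logic probability
        self.head_trans = nn.Linear(dim, 1)  # per-gate transition probability

    def forward(self, h, edge_index, n_rounds=4):
        # Repeated propagation is an assumed approximation of unrolling the
        # netlist's sequential (register) feedback over time frames.
        for _ in range(n_rounds):
            h = self.layer(h, edge_index)
        return torch.sigmoid(self.head_logic(h)), torch.sigmoid(self.head_trans(h))
```

Under this sketch, the multi-task objective would simply sum two binary cross-entropy losses, one per head, against the simulated logic and transition probabilities used as supervision.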