Rethinking multi-level information fusion in temporal graphs: Pre-training then distilling for better embedding
Highlights
• Multi-level information is important in temporal graphs but is difficult to obtain efficiently with existing methods.
• We show that such information can be flexibly captured by a "pre-training then distilling" pattern.
• We realize knowledge distillation across different levels of information by introducing a pre-training module.
• Experimental results demonstrate that our method improves performance while avoiding excessive computation.
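To make the "pre-training then distilling" pattern concrete, below is a minimal PyTorch sketch of a generic two-stage teacher-student loop: a teacher encoder is pre-trained and frozen, then a lightweight student is trained to match its embeddings. The Encoder class, the MSE matching objective, and all names are illustrative assumptions, not the paper's actual architecture or loss.

```python
# A minimal sketch of "pre-training then distilling", assuming a generic
# teacher/student setup; all modules and names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy node encoder standing in for a temporal graph encoder."""
    def __init__(self, in_dim: int, emb_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

# Stage 1: pre-train a teacher to capture multi-level information,
# then freeze it. (The pre-training objective itself is omitted here.)
teacher = Encoder(in_dim=32, emb_dim=64)
for p in teacher.parameters():
    p.requires_grad = False

# Stage 2: distill the frozen teacher into a lightweight student.
student = Encoder(in_dim=32, emb_dim=64)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(128, 32)        # a batch of node features
with torch.no_grad():
    t_emb = teacher(x)          # teacher embeddings serve as targets
s_emb = student(x)              # student embeddings

# Simple embedding-matching distillation loss; a real objective would
# typically combine this with a downstream task loss.
loss = F.mse_loss(s_emb, t_emb)
loss.backward()
optimizer.step()
```

The appeal of the pattern is that the expensive multi-level computation happens once, during pre-training, while the distilled student stays cheap at inference time.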