Track: Tiny paper track (up to 4 pages)
Abstract: Deep neural networks (DNNs) have achieved remarkable success in predicting transcription factor (TF) binding from high-throughput genome profiling data. Since TF binding is primarily driven by sequence motifs, understanding how DNNs make accurate predictions could help identify these motifs and their logical syntax. However, the black-box nature of DNNs complicates interpretation. Most post-hoc methods evaluate the importance of each base pair in isolation, often producing noisy attributions because they overlook the fact that motifs are contiguous regions. Additionally, these methods fail to capture the complex interactions between different motifs. To address these challenges, we propose Motif Explainer Models (MEMs), a novel explanation method that uses sufficiency and necessity to identify important motifs and their syntax. MEMs excel at identifying multiple disjoint motifs across DNA sequences, overcoming limitations of existing methods. Moreover, by accurately pinpointing sufficient and necessary motifs, MEMs can reveal the logical syntax that governs genomic regulation.
Submission Number: 65