Abstract: The expressivity of Graph Neural Networks (GNNs) can be described via appropriate fragments of first-order logic. Any query of the two-variable fragment of graded modal logic (GC2), interpreted over labeled graphs, can be expressed by a Rectified Linear Unit (ReLU) GNN whose size does not grow with the size of the input graph [Barcel\'o et al., 2020]. Conversely, for any choice of activation function, the queries expressible by a GNN are contained in GC2. In this article, we prove that some GC2 queries of depth $3$ cannot be expressed by GNNs with any rational activation function. This shows that not all non-polynomial activation functions confer maximal expressivity on GNNs, answering an open question formulated by [Grohe, 2021]. This result also contrasts with the efficient universal approximation properties of rational feedforward neural networks investigated by [Boull\'e et al., 2020]. We also present a rational subfragment of first-order logic (RGC2), and prove that rational GNNs can express RGC2 queries uniformly over all graphs.
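For illustration (an assumed example, not the particular depth-$3$ query constructed in the paper), a GC2 query of quantifier depth $3$ over graphs with a unary label $\mathrm{Red}$ can be written by reusing only the two variables $x$ and $y$:
\[
\varphi(x) \;=\; \exists^{\geq 1} y \,\Big( E(x,y) \wedge \exists^{\geq 2} x \,\big( E(y,x) \wedge \exists^{\geq 1} y \,( E(x,y) \wedge \mathrm{Red}(y) ) \big) \Big),
\]
which selects the vertices having a neighbor with at least two neighbors that are themselves adjacent to a $\mathrm{Red}$ vertex; each counting quantifier is guarded by the edge relation $E$, as GC2 requires.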