Abstract: Symbolic regression is the task of finding an analytical expression that fits experimental data using the fewest operator, variable, and constant symbols. Given the huge combinatorial space of possible expressions, evolutionary algorithms struggle to find expressions that meet these criteria in a reasonable amount of time. To reduce the search space efficiently, neural symbolic regression algorithms have recently been proposed; they identify patterns in the data and output analytical expressions in a single forward pass. However, these new approaches to symbolic regression do not allow for the direct encoding of user-defined prior knowledge, a common scenario in the natural sciences and engineering. In this work, we propose the first neural symbolic regression method that allows users to explicitly bias predictions toward expressions satisfying a set of assumptions on the expected structure of the ground-truth expression. Our experiments show that our conditioned deep learning model outperforms its unconditioned counterparts in accuracy while providing control over the structure of the predicted expression.