We are a research lab investigating probabilistic models and programs that are reliable and efficient. We are based at the School of Informatics, University of Edinburgh, within the Institute for Adaptive and Neural Computation (ANC).

selected works
We propose a systematic framework in which tractable inference routines are broken down into smaller, composable primitives operating on circuit representations. NeurIPS 2021 oral (top 0.6%)
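As a toy illustration of this compositional view (a hypothetical numpy sketch, not the framework's actual code), summing out one variable of the product p(x) · q(x) of two fully factorized circuits can be answered by chaining a product primitive with a marginalization primitive:

```python
import numpy as np

# A toy "circuit": a fully factorized non-negative function over discrete
# variables, stored as one value table per variable.  This is only an
# illustrative sketch, not the representation used in the paper.

def multiply(c1, c2):
    # product primitive: the factor-wise product stays factorized (tractable)
    return [t1 * t2 for t1, t2 in zip(c1, c2)]

def marginalize(c, variables):
    # marginalization primitive: sum out the chosen variables in closed form
    return [np.array([t.sum()]) if i in variables else t
            for i, t in enumerate(c)]

def evaluate(c, assignment):
    # evaluate the factorized circuit at a complete assignment
    return np.prod([t[x] for t, x in zip(c, assignment)])

# example: two circuits over three binary variables
p = [np.array([0.4, 0.6]), np.array([0.7, 0.3]), np.array([0.5, 0.5])]
q = [np.array([0.9, 0.1]), np.array([0.2, 0.8]), np.array([0.6, 0.4])]

# a composed query, sum over x2 of p(x) * q(x), built from the two primitives
pq = multiply(p, q)
print(evaluate(marginalize(pq, {1}), [0, 0, 0]))
```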
We design a differentiable layer that can be plugged into any neural network to guarantee that predictions are always consistent with a set of predefined symbolic constraints, while remaining trainable end-to-end. NeurIPS 2022
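A brute-force caricature of the idea (hypothetical code that enumerates label configurations, so it only scales to tiny label spaces; the layer in the paper achieves the same effect tractably with circuits):

```python
import torch

def constrained_layer(logits, valid_mask):
    # Zero out the probability of configurations violating the constraint by
    # masking their logits, then renormalize with a softmax; the result is
    # differentiable w.r.t. the logits, so it can be trained end-to-end.
    masked = logits.masked_fill(valid_mask == 0, float("-inf"))
    return torch.softmax(masked, dim=-1)

# Example: 2 binary labels -> 4 configurations; constraint: y1 OR y2,
# so configuration (0, 0) is invalid.
logits = torch.randn(4, requires_grad=True)
valid = torch.tensor([0, 1, 1, 1])
probs = constrained_layer(logits, valid)
loss = -torch.log(probs[2])   # NLL of an observed (valid) configuration
loss.backward()               # gradients flow through the layer
print(probs)                  # probs[0] is exactly 0 by construction
```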
We propose to build (hierarchical) negative mixture models by squaring circuits. We theoretically prove their expressiveness by deriving an exponential lower bound on the size of circuits restricted to positive parameters. TPM 2023
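For intuition, the non-hierarchical case already shows why negative weights become admissible after squaring (a worked sketch in generic notation, not the paper's full construction):

```latex
% A mixture with possibly negative weights w_i is not a valid density on its
% own, but its square is non-negative by construction:
c(x) \;=\; \sum_{i=1}^{k} w_i\, f_i(x),
\qquad
p(x) \;=\; \frac{c(x)^2}{Z}
     \;=\; \frac{1}{Z}\sum_{i=1}^{k}\sum_{j=1}^{k} w_i w_j\, f_i(x)\, f_j(x),
\qquad
Z \;=\; \int c(x)^2 \,\mathrm{d}x .
% The expansion has O(k^2) product terms, and Z remains tractable whenever
% each product f_i f_j can be integrated in closed form.
```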
Knowledge graph embedding (KGE) models such as CP, RESCAL, TuckER, and ComplEx can be re-interpreted as circuits, unlocking their generative capabilities, scaling up inference and learning, and guaranteeing the satisfaction of logical constraints by design. NeurIPS 2023 oral
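For example, the CP scoring function is already a one-layer sum-of-products circuit, and marginalizing over entities adds just one more sum unit (a toy numpy sketch of this reading, with hypothetical names):

```python
import numpy as np

# CP-style knowledge graph embedding score written as a tiny circuit:
#   score(s, p, o) = sum_r  S[s, r] * P[p, r] * O[o, r]
# i.e. a single sum unit over `rank` product units.
rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 3, 4
S = rng.random((n_entities, rank))    # subject embeddings (kept non-negative
P = rng.random((n_relations, rank))   # here so the circuit is monotone and
O = rng.random((n_entities, rank))    # acts as an unnormalized distribution)

def score(s, p, o):
    return np.sum(S[s] * P[p] * O[o])

def marginalize_objects(s, p):
    # Summing the circuit over all objects is a single extra sum unit:
    # the kind of generative query the circuit view makes tractable.
    return np.sum(S[s] * P[p] * O.sum(axis=0))

assert np.isclose(marginalize_objects(0, 1),
                  sum(score(0, 1, o) for o in range(n_entities)))
```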