We are a research lab investigating probabilistic models and programs that are reliable and efficient. We are based at the School of Informatics, University of Edinburgh, within the Institute for Adaptive and Neural Computation (ANC).

#probabilistic-modeling #neuro-symbolic-AI #constraints #tractable-inference #circuits
latest news
selected works
We investigate the connections between tensor factorizations and circuits: how the tensor factorization literature can benefit from circuit theory, and how circuits can be scaled up with tensor factorization techniques. arXiv 2024
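As a concrete instance of the connection (a sketch with assumed notation, not taken from the paper): a rank-$R$ CP factorization of a 3-way tensor is already a shallow circuit, a single sum unit over $R$ products of univariate input units,

$$\mathcal{T}_{ijk} = \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr} \quad\Longleftrightarrow\quad c(X_1, X_2, X_3) = \sum_{r=1}^{R} f_r(X_1)\, g_r(X_2)\, h_r(X_3),$$

so results about low-rank factorizations can be phrased in circuit language, and circuit layers can be implemented and scaled with factorized tensor operations.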
We theoretically prove an expressiveness limitation of deep subtractive mixture models learned by squaring circuits. To overcome this limitation, we propose sum of squares circuits and build an expressiveness hierarchy around them, allowing us to unify and separate many tractable probabilistic models. arXiv 2024
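A minimal sketch of the sum-of-squares form (notation assumed here, not the paper's):

$$c(\mathbf{x}) = \sum_{i=1}^{k} c_i(\mathbf{x})^2 \geq 0,$$

where each $c_i$ is a circuit that may use negative parameters; $k = 1$ recovers a single squared circuit, and allowing $k > 1$ is what lets these models escape the expressiveness limitation proved for squared circuits.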
We propose to build (deep) subtractive mixture models by squaring circuits. We theoretically prove their expressiveness advantage by deriving an exponential lower bound on the size of circuits with positive parameters only. ICLR 2024 spotlight (top 5%)
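In the simplest (shallow) case, assuming a mixture with real-valued weights $w_i$:

$$c(\mathbf{x}) = \sum_i w_i f_i(\mathbf{x}), \qquad c(\mathbf{x})^2 = \sum_{i,j} w_i w_j\, f_i(\mathbf{x})\, f_j(\mathbf{x}) \geq 0,$$

so negative cross terms subtract probability mass in some regions while the squared circuit remains a valid unnormalized density.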
We highlight a weakness of low-rank linear multi-label classifiers: some meaningful outputs can never be predicted, no matter the input. We design a classifier that guarantees sparse outputs can be predicted while using fewer trainable parameters. AAAI 2024
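A toy illustration of the weakness (hypothetical code, not the paper's model): with a rank-1 weight matrix over 3 labels, only two of the eight possible label patterns are ever reachable, whatever the input.

```python
import numpy as np

# Hypothetical rank-1 linear multi-label classifier W = u v^T over 3 labels.
rng = np.random.default_rng(0)
d, n_labels = 8, 3
u = rng.standard_normal((n_labels, 1))   # factor over labels
v = rng.standard_normal((1, d))          # factor over features
W = u @ v                                # rank(W) = 1 < n_labels

def predict(x):
    # A label is "on" when its score is positive.
    return tuple((W @ x > 0).astype(int))

# Enumerate the label patterns reachable over many random inputs.
patterns = {predict(rng.standard_normal(d)) for _ in range(10_000)}
print(sorted(patterns))
# Scores W @ x are always a scalar multiple of u, so only the pattern
# sign(u) > 0 and its complement ever appear: most label combinations,
# including some meaningful sparse ones, can never be predicted.
```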
Knowledge graph embedding (KGE) models such as CP, RESCAL, TuckER, and ComplEx can be re-interpreted as circuits, unlocking their generative capabilities, scaling up inference and learning, and guaranteeing the satisfaction of logical constraints by design. NeurIPS 2023 oral (top 0.6%)
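For example (a sketch with assumed notation), the CP scoring function for a triple $(s, r, o)$ is already a shallow circuit, a sum unit over products of embedding entries:

$$\phi(s, r, o) = \sum_{k=1}^{K} \mathbf{e}_s[k]\, \mathbf{w}_r[k]\, \mathbf{e}_o[k].$$

Once its output is made non-negative, the normalizing constant $\sum_{s', o'} \phi(s', r, o')$ factorizes over the embedding dimensions and can be computed in time linear in the number of entities, which is what makes the generative reading practical.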
We design a differentiable layer that can be plugged into any neural network and trained end-to-end, guaranteeing that predictions are always consistent with a set of predefined symbolic constraints. NeurIPS 2022
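As a toy illustration of the idea (a brute-force sketch, not the layer proposed in the paper, which avoids enumerating assignments): renormalize the network's output distribution over only the assignments that satisfy the constraint, so any prediction read off this distribution is consistent by construction.

```python
import torch

def constrained_marginals(logits: torch.Tensor, valid: torch.Tensor) -> torch.Tensor:
    """Map per-label logits to marginals of a distribution supported only on
    constraint-satisfying assignments.

    logits: (k,) unnormalized scores for k binary labels (from any network).
    valid:  (m, k) 0/1 matrix enumerating the m assignments allowed by the
            symbolic constraint (a brute-force stand-in, hypothetical API).
    """
    # Log-probability of each valid assignment under an independent-Bernoulli
    # reading of the logits: sum_i [y_i * logit_i - softplus(logit_i)].
    log_scores = valid @ logits - torch.nn.functional.softplus(logits).sum()
    weights = torch.softmax(log_scores, dim=0)   # renormalize over valid rows only
    return weights @ valid.float()               # marginal P(y_i = 1), differentiable

# Example constraint: "exactly one of the 3 labels is on".
valid = torch.eye(3)
logits = torch.tensor([2.0, -1.0, 0.5])
print(constrained_marginals(logits, valid))      # marginals sum to 1 by construction
```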
A systematic framework in which tractable inference routines are broken down into smaller, composable primitives operating on circuit representations. NeurIPS 2021 oral (top 0.6%)
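To give a flavour of such a composition (a toy case, assuming fully factorized circuits): the query $\sum_{\mathbf{x}} p(\mathbf{x})\, q(\mathbf{x})$ decomposes into a product primitive followed by a marginalization primitive, each of which is tractable here,

$$\sum_{\mathbf{x}} p(\mathbf{x})\, q(\mathbf{x}) = \sum_{\mathbf{x}} \prod_i p_i(x_i)\, q_i(x_i) = \prod_i \sum_{x_i} p_i(x_i)\, q_i(x_i),$$

and the framework characterizes which structural properties of the circuits keep each primitive tractable beyond this fully factorized case.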