We are a research lab investigating probabilistic models and programs that are reliable and efficient. We are based at the School of Informatics, University of Edinburgh, within the Institute for Adaptive and Neural Computation (ANC).

#probabilistic-modeling #neuro-symbolic-AI #constraints #tractable-inference #circuits
latest news
selected works
A novel task in remote sensing for mood disorders that is better aligned with real-world clinical practice, going beyond a reductionist yes/no classification of acute illness: old and new machine learning challenges. Nature Translational Psychiatry
We propose building (deep) subtractive mixture models by squaring circuits (see the first sketch below). We prove their increased expressiveness by deriving an exponential lower bound on the size of equivalent circuits restricted to positive parameters. ICLR 2024 spotlight (top 5%)
We highlight a weakness of low-rank linear multi-label classifiers: there can be meaningful outputs they can never predict. We design a classifier that guarantees sparse outputs remain predictable while using fewer trainable parameters. AAAI 2024
Knowledge graph embedding (KGE) models such as CP, RESCAL, TuckER, and ComplEx can be re-interpreted as circuits, unlocking their generative capabilities, scaling up inference and learning, and guaranteeing the satisfaction of logical constraints by design (see the second sketch below). NeurIPS 2023 oral (top 0.6%)
We design a differentiable layer that can be plugged into any neural network and trained end-to-end, guaranteeing that predictions are always consistent with a set of predefined symbolic constraints (see the last sketch below). NeurIPS 2022
A systematic framework in which tractable inference routines are broken down into smaller, composable primitives operating on circuit representations. NeurIPS 2021 oral (top 0.6%)
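
To give a flavour of the ICLR 2024 work on squaring circuits, here is a minimal one-dimensional sketch (our toy example with made-up weights, not the paper's code): a mixture whose components can be subtracted is kept non-negative by squaring, and its normalising constant stays available in closed form because products of Gaussian densities integrate analytically.

```python
# A minimal 1-D sketch (not the paper's code; weights are made up) of a subtractive
# mixture obtained by squaring: p(x) = (sum_i w_i N(x; mu_i, s_i^2))^2 / Z.
# A negative weight carves a region of low density; squaring keeps p non-negative,
# and Z has a closed form because products of Gaussian pdfs integrate analytically.
import numpy as np

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

w  = np.array([1.0, -0.2])   # mixture weights, one of them negative
mu = np.array([0.0, 0.0])
s  = np.array([2.0, 0.5])

# Z = sum_{i,j} w_i w_j \int N(x; mu_i, s_i^2) N(x; mu_j, s_j^2) dx,
# where each cross term equals N(mu_i; mu_j, s_i^2 + s_j^2).
Z = sum(w[i] * w[j] * gauss(mu[i], mu[j], np.sqrt(s[i] ** 2 + s[j] ** 2))
        for i in range(len(w)) for j in range(len(w)))

def p(x):
    return (w * gauss(x[:, None], mu, s)).sum(axis=1) ** 2 / Z

xs = np.linspace(-6.0, 6.0, 2001)
print(p(xs).sum() * (xs[1] - xs[0]))  # ~1.0: a valid density with a dip carved at x = 0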
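
For the NeurIPS 2023 result, the CP scoring function score(s, r, o) = sum_k e_s[k] * w_r[k] * e_o[k] can be read as a shallow sum-of-products circuit. The toy sketch below (hypothetical sizes and random embeddings, a simplification rather than the paper's construction) shows how normalising the scores over all candidate objects yields a conditional distribution p(o | s, r).

```python
# A toy sketch (hypothetical sizes and random embeddings; a simplification, not the
# paper's construction) of reading the CP knowledge graph embedding score
#   score(s, r, o) = sum_k e_s[k] * w_r[k] * e_o[k]
# as a shallow sum-of-products circuit, and normalising it over all candidate
# objects to obtain a conditional distribution p(o | s, r).
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, k = 100, 10, 16
E_subj = rng.normal(size=(n_entities, k))   # subject embeddings
E_obj  = rng.normal(size=(n_entities, k))   # object embeddings
W_rel  = rng.normal(size=(n_relations, k))  # relation embeddings

def cp_scores(s, r):
    # One product unit per latent dimension k, then a sum unit over k,
    # evaluated in parallel for every candidate object.
    return E_obj @ (E_subj[s] * W_rel[r])   # shape: (n_entities,)

def p_object_given(s, r):
    # A softmax over objects: exponentiate the scores and normalise by their sum.
    logits = cp_scores(s, r)
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

print(p_object_given(s=3, r=1).sum())       # ~1.0
```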
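
The idea behind the NeurIPS 2022 layer can be illustrated with a deliberately small example (an assumed constraint and brute-force enumeration, not the paper's circuit-based implementation): put zero mass on every label assignment that violates the constraint and renormalise, so consistency holds by construction while the operation stays differentiable.

```python
# A deliberately small illustration (assumed labels/constraint, brute-force enumeration;
# not the paper's circuit-based layer) of predictions that are consistent by construction:
# zero out every label assignment violating the constraint, then renormalise.
# Masking and renormalising are differentiable, so such a layer trains end-to-end.
import itertools
import numpy as np

labels = ["cat", "animal", "car"]

def satisfies(y):
    # Example constraint: "cat" implies "animal".
    return (not y[0]) or bool(y[1])

def constrained_predict(logits):
    assignments = list(itertools.product([0, 1], repeat=len(labels)))
    # Unnormalised score of each joint assignment under independent label logits.
    scores = np.array([np.exp(sum(l for l, b in zip(logits, y) if b))
                       for y in assignments])
    mask = np.array([satisfies(y) for y in assignments], dtype=float)
    probs = scores * mask
    probs /= probs.sum()
    best = assignments[int(np.argmax(probs))]
    return dict(zip(labels, best))

print(constrained_predict(np.array([2.0, -3.0, 0.5])))
# {'cat': 0, 'animal': 0, 'car': 1}: the unconstrained winner (cat=1, animal=0, car=1)
# violates the constraint, so its mass is reallocated to consistent assignments.
```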