
On Projectivity in Markov Logic Networks

EasyChair Preprint no. 8668

20 pages
Date: August 11, 2022

Abstract

Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference across domain sizes, i.e., marginal probabilities depend on the size of the domain. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. Recent works have drawn connections between domain-size dependence, lifted inference, and learning from a sub-sampled domain. The central idea behind these works is the notion of projectivity. Under a projective model, the marginal probability of any sub-structure is independent of the domain cardinality; hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains.
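To make the notion precise, the standard projectivity condition from the statistical relational learning literature (the notation here is ours, not spelled out in this abstract) can be written as:

\[
  P^{(n)}\!\left(\omega \downarrow [k] = \omega_k\right) \;=\; P^{(k)}(\omega_k)
  \qquad \text{for all } k \le n,
\]

where P^{(n)} is the distribution the model induces on relational structures over a domain of size n, and ω↓[k] denotes the restriction of a structure ω to a fixed k-element subdomain. A projective family thus makes the marginal probability of any sub-structure independent of n.

As a deliberately tiny illustration of the non-projectivity the abstract describes, the sketch below brute-forces the marginal of one ground atom in an assumed toy MLN: a single unary predicate S with one formula S(x) ∧ S(y) of weight 0.5. Both the model and the weight are our own illustrative choices, not taken from the paper.

from itertools import product
from math import exp

def marginal_S_a(n, w):
    """Brute-force marginal P(S(a)) in a toy MLN with a single unary
    predicate S and one formula S(x) /\ S(y) of weight w, over a
    domain of size n.  A world is a tuple of n Booleans (one per
    ground atom S(d)); its unnormalised weight is exp(w * t^2),
    since a world with t true atoms satisfies t^2 groundings of
    S(x) /\ S(y) over ordered pairs (x, y)."""
    Z = 0.0    # partition function
    num = 0.0  # unnormalised mass of worlds where S(a) holds
    for world in product([False, True], repeat=n):
        t = sum(world)
        weight = exp(w * t * t)
        Z += weight
        if world[0]:  # atom S(a) is the first coordinate
            num += weight
    return num / Z

# The marginal of the same query atom drifts with the domain size,
# which is exactly the domain-size dependence that projectivity rules out.
for n in (2, 3, 5, 8):
    print(n, marginal_S_a(n, w=0.5))

Running this prints a different value of P(S(a)) for each domain size, so this toy MLN is not projective; a projective model would print the same marginal for every n.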

Keyphrases: Lifted Inference, Markov Logic Networks, Projectivity, Statistical Relational Learning, Weighted First-Order Model Counting

BibTeX entry
BibTeX does not have an entry type for preprints. The following is a workaround that produces the correct reference:
@booklet{EasyChair:8668,
  author = {Sagar Malhotra and Luciano Serafini},
  title = {On Projectivity in Markov Logic Networks},
  howpublished = {EasyChair Preprint no. 8668},
  year = {EasyChair, 2022}}