Abstract
Algorithms for exact and approximate inference in stochastic logic programs (SLPs) are presented, based respectively on variable elimination and importance sampling. We then show how SLPs can be used to represent prior distributions for machine learning, using (i) logic programs and (ii) Bayes net structures as examples. Drawing on existing work in statistics, we apply the Metropolis-Hastings algorithm to construct a Markov chain which samples from the posterior distribution. A Prolog implementation for this is described. We also discuss the possibility of constructing explicit representations of the posterior.
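As a rough illustration of the sampling approach the abstract describes, the sketch below runs a generic Metropolis-Hastings chain over a discrete hypothesis space. It is not the paper's Prolog implementation; the prior, likelihood, and proposal used here are hypothetical stand-ins, chosen only to show how the acceptance ratio lets one sample from a posterior known only up to a normalising constant.

```python
import random

def metropolis_hastings(init, prior, likelihood, propose, data, n_steps):
    """Generic Metropolis-Hastings sampler over a discrete hypothesis space.

    `propose(h)` must return (h_new, q_fwd, q_bwd), where q_fwd and q_bwd
    are the proposal probabilities q(h_new | h) and q(h | h_new).
    """
    current = init
    samples = []
    for _ in range(n_steps):
        candidate, q_fwd, q_bwd = propose(current)
        # Posterior is proportional to prior * likelihood, so the unknown
        # normalising constant cancels in the acceptance ratio.
        num = prior(candidate) * likelihood(data, candidate) * q_bwd
        den = prior(current) * likelihood(data, current) * q_fwd
        if den == 0 or random.random() < min(1.0, num / den):
            current = candidate
        samples.append(current)
    return samples

# Toy usage (hypothetical): infer a coin's bias from 10 flips, 7 heads.
grid = [i / 10 for i in range(1, 10)]      # candidate biases 0.1 .. 0.9

def prior(h):
    return 1 / len(grid)                   # uniform prior over the grid

def likelihood(data, h):
    heads, flips = data
    return h ** heads * (1 - h) ** (flips - heads)

def propose(h):
    # Random walk to a neighbouring grid point; endpoints have one
    # neighbour instead of two, so the q's are not symmetric there.
    i = grid.index(h)
    fwd = [k for k in (i - 1, i + 1) if 0 <= k < len(grid)]
    j = random.choice(fwd)
    bwd = [k for k in (j - 1, j + 1) if 0 <= k < len(grid)]
    return grid[j], 1 / len(fwd), 1 / len(bwd)

chain = metropolis_hastings(0.5, prior, likelihood, propose, (7, 10), 5000)
print(sum(chain) / len(chain))             # posterior mean estimate
```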
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (UAI-00) |
| Publisher | Morgan Kaufmann |
| Pages | 115-122 |
| DOIs | |
| Publication status | Published - 30 Jun 2000 |