Abstract
We consider the problem of learning Bayesian networks (BNs) from complete discrete data. This problem of discrete optimisation is formulated as an integer program (IP). We describe the various steps we have taken to allow efficient solving of this IP. These are (i) efficient search for cutting planes, (ii) a fast greedy algorithm to find high-scoring (perhaps not optimal) BNs and (iii) tightening the linear relaxation of the IP. After relating this BN learning problem to set covering and the multidimensional 0-1 knapsack problem, we present our empirical results. These show improvements, sometimes dramatic, over earlier results.
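The abstract describes casting BN structure learning as an integer program solved with cutting planes. Below is a minimal sketch of one standard way such an IP can be written down, assuming the PuLP modelling library: binary "family" variables select a parent set for each node, each node gets exactly one parent set, and cluster constraints forbid directed cycles. The toy scores, variable names, and the choice to enumerate every cluster constraint up front (rather than separating them lazily as cutting planes, as the paper does) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an IP for BN structure learning (assumes the PuLP library).
from itertools import chain, combinations
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, value

# Hypothetical local scores for a 3-node problem: scores[node][parent set].
scores = {
    "A": {frozenset(): -10.0, frozenset("B"): -8.0, frozenset("BC"): -7.5},
    "B": {frozenset(): -9.0, frozenset("C"): -8.5},
    "C": {frozenset(): -11.0, frozenset("A"): -9.5},
}

prob = LpProblem("bn_learning", LpMaximize)

# Binary family variable x[v, W] = 1 iff node v gets parent set W.
x = {
    (v, W): LpVariable(f"x_{v}_{''.join(sorted(W))}", cat=LpBinary)
    for v, ps in scores.items()
    for W in ps
}

# Objective: total (decomposable) score of the chosen parent sets.
prob += lpSum(scores[v][W] * x[v, W] for (v, W) in x)

# Convexity constraints: every node gets exactly one parent set.
for v in scores:
    prob += lpSum(x[v, W] for W in scores[v]) == 1

# Cluster constraints: in every cluster C of nodes, at least one member
# must draw all of its parents from outside C, which rules out cycles.
# This tiny example enumerates all clusters; the paper instead separates
# violated cluster constraints as cutting planes during solving.
nodes = list(scores)
clusters = chain.from_iterable(
    combinations(nodes, r) for r in range(2, len(nodes) + 1)
)
for C in clusters:
    Cset = set(C)
    prob += lpSum(
        x[v, W] for v in C for W in scores[v] if not (W & Cset)
    ) >= 1

prob.solve()
for (v, W), var in x.items():
    if value(var) > 0.5:
        print(f"{v} <- {set(W) or '{}'}")
```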
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI2013) |
| Publication status | Published - 12 Jul 2013 |