Abstract
Learning Bayesian network (BN) structure is the NP-hard task of finding a directed acyclic graph that best fits real data. Two integer vector encodings exist that model the solution space of BN structures: the family-variable encoding and the characteristic imset encoding. Each encoding yields a polytope, the family-variable and characteristic imset polytopes respectively. It has been shown that learning BN structure using a decomposable and score-equivalent scoring criterion (such as BIC) is equivalent to optimizing a linear function over either the family-variable or the characteristic imset polytope. This monograph is primarily intended for readers already familiar with BNs but not with polyhedral approaches to learning BN structure. Thus, this monograph focuses on the family-variable and characteristic imset polytopes, their known faces and facets, and, more importantly, the deep connections between those faces and facets; in particular, many of the faces of the family-variable polytope are superfluous when learning BN structure. Sufficient background on Bayesian networks, graphs, and polytopes is provided. The currently known faces and facets of each polytope are described. Deep connections between many of the faces and facets of the family-variable and characteristic imset polytopes are then summarized from recent results. Lastly, a brief history of and background on practical approaches to learning BN structure using integer linear programming over both polytopes is provided.
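As a rough illustration of the kind of linear formulation the abstract refers to, the standard family-variable encoding (as used, for example, in integer-programming approaches to BN structure learning) can be sketched as follows; the symbols below (the node set \(N\), binary variables \(x_{i,\mathrm{Pa}}\), and local scores \(c_{i,\mathrm{Pa}}\)) are generic notation chosen here for exposition, not notation taken from the chapter itself:
\[
\max_{x} \;\; \sum_{i \in N} \;\; \sum_{\mathrm{Pa} \subseteq N \setminus \{i\}} c_{i,\mathrm{Pa}} \, x_{i,\mathrm{Pa}}
\qquad \text{subject to} \qquad
\sum_{\mathrm{Pa} \subseteq N \setminus \{i\}} x_{i,\mathrm{Pa}} = 1 \;\; \text{for each } i \in N,
\]
where \(x_{i,\mathrm{Pa}} \in \{0,1\}\) indicates that node \(i\) has parent set \(\mathrm{Pa}\), \(c_{i,\mathrm{Pa}}\) is the local score of that parent set under a decomposable criterion such as BIC, and additional linear (e.g., cluster-type) constraints rule out directed cycles. Optimizing this objective over the convex hull of the feasible 0/1 vectors is exactly optimization over the family-variable polytope described above.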
Original language | English |
---|---|
Title of host publication | Algebraic and Geometric Methods in Discrete Mathematics |
Place of Publication | Providence, RI |
Publisher | American Mathematical Society |
Pages | 155-188 |
Number of pages | 34 |
Volume | 685 |
DOIs | |
Publication status | Published - 2017 |