Markov Logic: An Interface Layer for Artificial Intelligence
Most subfields of computer science have an interface layer via which applications communicate with the infrastructure, and this is key to their success (e.g., the Internet in networking, the relational model in databases). So far this interface layer has been missing in AI. First-order logic and probabilistic graphical models each have some of the necessary features, but a viable interface layer requires combining both. Markov logic is a powerful new language that accomplishes this by attaching weights to first-order formulas and treating them as templates for features of Markov random fields. Most statistical models in wide use are special cases of Markov logic, and first-order logic is its infinite-weight limit. Inference algorithms for Markov logic combine ideas from satisfiability, Markov chain Monte Carlo, belief propagation, and resolution. Learning algorithms make use of conditional likelihood, convex optimization, and inductive logic programming. Markov logic has been successfully applied to problems in information extraction and integration, natural language processing, robot mapping, social networks, computational biology, and others, and is the basis of the open-source Alchemy system.

Table of Contents: Introduction / Markov Logic / Inference / Learning / Extensions / Applications / Conclusion
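The semantics sketched above can be made concrete with a tiny example: a Markov logic network defines the probability of a possible world as proportional to exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) is the number of true groundings of formula i in world x. The following is a minimal, self-contained sketch of that idea; the "friends and smokers" domain, the specific weights, and the formula encodings are illustrative assumptions, not code from the Alchemy system.

```python
import itertools
import math

# Toy domain of constants; a "world" assigns True/False to every ground atom.
people = ["Anna", "Bob"]

# Ground atoms: Smokes(x), Cancer(x), Friends(x, y).
atoms = ([("Smokes", p) for p in people] +
         [("Cancer", p) for p in people] +
         [("Friends", x, y) for x in people for y in people])

# Encodings of two weighted first-order formulas (weights are made up):
#   1.5  Smokes(x) => Cancer(x)
#   1.1  Friends(x, y) => (Smokes(x) <=> Smokes(y))
def smokes_implies_cancer(world, x):
    return (not world[("Smokes", x)]) or world[("Cancer", x)]

def friends_smoke_alike(world, x, y):
    return (not world[("Friends", x, y)]) or \
           (world[("Smokes", x)] == world[("Smokes", y)])

# Each formula contributes weight * (number of true groundings).
formulas = [
    (1.5, lambda w: sum(smokes_implies_cancer(w, x) for x in people)),
    (1.1, lambda w: sum(friends_smoke_alike(w, x, y)
                        for x in people for y in people)),
]

def world_weight(world):
    """Unnormalized weight exp(sum_i w_i * n_i(world))."""
    return math.exp(sum(w * n(world) for w, n in formulas))

# Brute-force the partition function Z over all 2^8 possible worlds.
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(world_weight(w) for w in worlds)

# Probability of one particular world.
example = {a: False for a in atoms}
example[("Smokes", "Anna")] = True
p = world_weight(example) / Z
print(f"P(example world) = {p:.6f}")
```

Raising a formula's weight makes worlds violating it less probable; in the infinite-weight limit such worlds get probability zero, recovering first-order logic as the abstract states. Real systems avoid this brute-force enumeration via the lifted and sampling-based inference algorithms the book covers.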