# Markov Logic Networks for Natural Language Question Answering

```bibtex
@article{Khot2015MarkovLN,
  title   = {Markov Logic Networks for Natural Language Question Answering},
  author  = {Tushar Khot and Niranjan Balasubramanian and Eric Gribkoff and Ashish Sabharwal and Peter E. Clark and Oren Etzioni},
  journal = {ArXiv},
  year    = {2015},
  volume  = {abs/1507.03045}
}
```

Our goal is to answer elementary-level science questions using knowledge extracted automatically from science textbooks, expressed in a subset of first-order logic. Given the incomplete and noisy nature of these automatically extracted rules, Markov Logic Networks (MLNs) seem a natural model to use, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. In the first, we simply use the extracted science rules directly as MLN clauses…
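To make the MLN setting concrete, here is a minimal, self-contained sketch of exact MLN inference by world enumeration. It is illustrative only, not the paper's system: the predicate names (`Smokes`, `Cancer`), the single constant `A`, and the clause weight are all assumptions, and real MLN engines use approximate inference rather than enumerating worlds.

```python
import itertools
import math

# Toy MLN: one soft clause "Smokes(x) => Cancer(x)" with weight 1.5,
# grounded over a single constant A. The ground network therefore has
# two atoms. (Names and weight are illustrative, not from the paper.)
ATOMS = ["Smokes(A)", "Cancer(A)"]
WEIGHT = 1.5

def clause_satisfied(world):
    # world maps atom name -> bool; the ground clause is
    # !Smokes(A) v Cancer(A), i.e. the material implication.
    return (not world["Smokes(A)"]) or world["Cancer(A)"]

def query_probability(query_atom):
    # Exact inference by enumerating all 2^n possible worlds:
    # P(q) = sum of exp(weight * n_satisfied) over worlds where q is
    # true, divided by the partition function Z over all worlds.
    z = 0.0
    q_mass = 0.0
    for values in itertools.product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        w = math.exp(WEIGHT * clause_satisfied(world))
        z += w
        if world[query_atom]:
            q_mass += w
    return q_mass / z

print(round(query_probability("Cancer(A)"), 4))  # ≈ 0.6205
```

The exponential weighting is the key idea: worlds that violate the soft rule are not impossible, only exponentially less likely, which is what makes MLNs tolerant of the incomplete and noisy extracted rules the abstract describes.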

#### 18 Citations

Reading and Reasoning with Knowledge Graphs

- Computer Science
- 2015

This thesis presents methods for reasoning over very large knowledge bases, and shows how to apply these methods to models of machine reading, which can successfully incorporate knowledge base information into machine learning models of natural language.

A Semantic Question Answering Framework for Large Data Sets

- Computer Science
- Open J. Semantic Web
- 2016

This article describes a purely semantic question answering (QA) framework for large document collections that transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore.

Automatic Construction of Inference-Supporting Knowledge Bases

- Computer Science
- 2014

This paper describes work on automatically constructing an inferential knowledge base and applying it to a question-answering task; it identifies several challenges this approach poses, along with innovative, partial solutions that have been developed.

Leveraging Graph Neighborhoods for Efficient Inference

- Computer Science
- CIKM
- 2019

This study uses a probabilistic extension of OWL RL as a modeling language and exploits graph neighborhoods (of undirected graphical models) for efficient approximate probabilistic inference, showing that subgraph-extraction-based inference is much faster than, and comparably accurate to, full-graph inference.

Improving Retrieval-Based Question Answering with Deep Inference Models

- Computer Science
- 2019 International Joint Conference on Neural Networks (IJCNN)
- 2019

The proposed two-step model outperforms the best retrieval-based solver by over 3% in absolute accuracy and can answer both simple factoid questions and more complex questions that require reasoning or inference.

Scaling Probabilistic Temporal Query Evaluation

- Computer Science
- CIKM
- 2017

This work proposes the PRATiQUE (PRobAbilistic Temporal QUery Evaluation) framework for scalable temporal query evaluation, which harnesses the structure of temporal inference rules for efficient in-database grounding and uses partitions to store structurally equivalent rules.

KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings

- Computer Science, Mathematics
- ArXiv
- 2018

This paper proposes a novel framework for answering science exam questions that mimics the human solving process in an open-book exam and outperforms the previous state-of-the-art QA systems.

Multi-hop Path Queries over Knowledge Graphs with Neural Memory Networks

- Computer Science
- DASFAA
- 2019

A novel model is designed based on the recently proposed neural memory networks, which have large external memories and flexible writing/reading schemes, and a flexible memory-updating method is developed to facilitate writing intermediate entity information into memory during multi-hop reasoning.

Rule Based Temporal Inference

- Computer Science
- ICLP
- 2017

This work studies the problems of temporal information extraction and temporal scoping of facts in knowledge graphs using probabilistic programming, and reports experimental results comparing the efficiency of several state-of-the-art systems.

Semantic question answering on big data

- Computer Science
- SBD '16
- 2016

Improvements in performance over a regular free-text search index-based question answering engine show that semantic question answering (SQA) can benefit greatly from the addition and consumption of deep semantic information.

#### References

Showing 1–10 of 44 references

Efficient Markov Logic Inference for Natural Language Semantics

- Computer Science
- AAAI Workshop: Statistical Relational Artificial Intelligence
- 2014

A new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms and introduces a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible.

Speeding Up Inference in Markov Logic Networks by Preprocessing to Reduce the Size of the Resulting Grounded Network

- Mathematics, Computer Science
- IJCAI
- 2009

A preprocessing algorithm is proposed that can substantially reduce the effective size of Markov Logic Networks (MLNs) by rapidly counting how often the evidence satisfies each formula, regardless of the truth values of the query literals.

Markov logic networks

- Computer Science, Mathematics
- Machine Learning
- 2006

Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach to combining first-order logic and probabilistic graphical models in a single representation.

Memory-Efficient Inference in Relational Domains

- Computer Science
- AAAI
- 2006

LazySAT, a variation of the WalkSAT solver, is proposed; it avoids the memory blowup of full grounding by taking advantage of the extreme sparseness that is typical of relational domains, reducing memory usage by orders of magnitude.

Constraint Propagation for Efficient Inference in Markov Logic

- Mathematics, Computer Science
- CP
- 2011

This work proposes a generalized arc consistency algorithm that prunes the domains of predicates by propagating hard constraints, avoiding the need to explicitly ground the hard constraints during the pre-processing phase and yielding a potentially exponential savings in space and time.

Tuffy: Scaling up Statistical Inference in Markov Logic Networks using an RDBMS

- Computer Science
- Proc. VLDB Endow.
- 2011

This work presents Tuffy, a scalable Markov Logic Networks framework that achieves scalability via three novel contributions: a bottom-up approach to grounding, a novel hybrid architecture that allows AI-style local search to be performed efficiently inside an RDBMS, and a theoretical insight that shows when the efficiency of stochastic local search can be improved.

Extracting Semantic Networks from Text Via Relational Clustering

- Computer Science
- ECML/PKDD
- 2008

This paper uses the TextRunner system to extract tuples from text and then induces general concepts and relations from them by jointly clustering the objects and relational strings in the tuples using Markov logic.

A Tractable First-Order Probabilistic Logic

- Computer Science
- AAAI
- 2012

It is shown that TML knowledge bases allow for efficient inference even when the corresponding graphical models have very high treewidth, which opens up the prospect of efficient large-scale first-order probabilistic inference.

Sound and Efficient Inference with Probabilistic and Deterministic Dependencies

- Computer Science
- AAAI
- 2006

MC-SAT is an inference algorithm that combines ideas from MCMC and satisfiability; built on Markov logic, which defines Markov networks using weighted clauses in first-order logic, it greatly outperforms Gibbs sampling and simulated tempering over a broad range of problem sizes and degrees of determinism.

Unsupervised semantic parsing

- Computer Science
- MLSLP
- 2012

This work presents the first unsupervised approach to the problem of learning a semantic parser, using Markov logic, and substantially outperforms TextRunner, DIRT, and an informed baseline on both precision and recall on this task.