OREGON STATE UNIVERSITY

Learning Rules from Incomplete Examples via a Probabilistic Mention Model

Title: Learning Rules from Incomplete Examples via a Probabilistic Mention Model
Publication Type: Conference Paper
Year of Publication: 2011
Authors: Sorower, M. S., T. G. Dietterich, J. R. Doppa, P. Tadepalli, and X. Z. Fern
Conference Name: IJCAI 2011 Workshop on Learning by Reading and its Applications in Intelligent Question-Answering
Date Published: 07/2011
Conference Location: Barcelona, Catalonia, Spain
Abstract

We consider the problem of learning rules from natural language text sources. These sources, such as news articles, journal articles, and web texts, are created by a writer to communicate information to a reader, where the writer and reader share substantial domain knowledge. Consequently, the texts tend to be concise and mention the minimum information necessary for the reader to draw the correct conclusions. We study the problem of learning domain knowledge from such concise texts, which is an instance of the general problem of learning in the presence of missing data. However, unlike standard approaches to missing data, in this setting we know that facts are more likely to be missing from the text in cases where the reader can infer them from the facts that are mentioned combined with the domain knowledge. Hence, we can explicitly model this “missingness” process and invert it via probabilistic inference to learn the underlying domain knowledge. This paper introduces an explicit probabilistic mention model that models the probability of facts being mentioned in the text based on what other facts have already been mentioned and domain knowledge in the form of Horn clause rules. Learning must simultaneously search the space of rules and learn the parameters of the mention model. We accomplish this via an application of Expectation Maximization within a Markov Logic framework. An experimental evaluation on synthetic and natural text data shows that the method can successfully learn accurate rules and apply them to new texts to make correct inferences.
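The core idea of the mention model can be illustrated with a toy Bayesian calculation. The sketch below is not the authors' Markov Logic implementation; it assumes a single hypothetical Horn rule, BornIn(x, c) => CitizenOf(x, c), and invented mention probabilities, to show how inverting the missingness process makes the absence of an inferable fact only weak evidence against it:

```python
# Toy sketch of the "mention model" idea (illustrative assumptions only,
# not the paper's Markov Logic / EM implementation).
# Hypothetical rule: BornIn(x, c) => CitizenOf(x, c), strength RULE_PROB.
# True facts that the reader can infer from mentioned facts plus the rule
# are mentioned with low probability; other true facts with high probability.

RULE_PROB = 0.9               # assumed P(CitizenOf | BornIn)
P_MENTION_INFERABLE = 0.2     # writers rarely state what readers can infer
P_MENTION_NOT_INFERABLE = 0.95

def posterior_true_given_unmentioned(inferable: bool) -> float:
    """P(fact is true | fact is not mentioned in the text).

    By Bayes' rule: only true facts can be mentioned, so 'not mentioned'
    is explained either by the fact being false or by the writer omitting
    a true (typically inferable) fact.
    """
    prior_true = RULE_PROB if inferable else 0.5
    p_mention = P_MENTION_INFERABLE if inferable else P_MENTION_NOT_INFERABLE
    numerator = prior_true * (1.0 - p_mention)   # true but omitted
    denominator = numerator + (1.0 - prior_true) # ... or simply false
    return numerator / denominator

# If BornIn(x, c) was mentioned, CitizenOf(x, c) is inferable, so its
# absence from the text barely lowers our belief in it:
print(round(posterior_true_given_unmentioned(inferable=True), 3))   # ~0.878
# Without a supporting rule, absence is strong evidence against the fact:
print(round(posterior_true_given_unmentioned(inferable=False), 3))  # ~0.048
```

Learning, as the abstract notes, runs this reasoning in the other direction: given which facts are and are not mentioned across many texts, EM estimates both the rule weights and the mention probabilities simultaneously.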