Monday, October 5, 2015 - 4:00pm to 4:50pm
KEC 1003

Speaker Information

Liang Huang
Assistant Professor
School of EECS
Oregon State University


Why are computers so bad at processing human languages while so good at processing programming languages? What’s the key difference between English and C++ that makes the former so much harder? Can computers ever translate English into Chinese as fast as they compile Java into bytecode? In this talk I'll present a linear-time dynamic programming model for incremental parsing inspired by both human sentence processing (psycholinguistics) and compiler theory (LR parsing). This model, being linear-time, is much faster than, but also as accurate as, the dominant cubic-time algorithms. It overcomes the ambiguity explosion problem by approximate dynamic programming, which corresponds to local ambiguity packing in psycholinguistics. Given the parse tree of the input sentence, we can use another linear-time dynamic programming algorithm to translate it into another language, again inspired by syntax-directed translation in compiler theory. To alleviate the effect of parsing errors, where the 1-best parse tree might not be accurate, we simultaneously consider millions of alternative parse trees using space and time equivalent to only about 30 trees, thanks to a compact representation called a “packed forest,” which encodes exponentially many trees.
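To give a flavor of the packed-forest idea (this is a minimal illustrative sketch, not the speaker's implementation): a chart of span items can implicitly encode exponentially many parse trees while storing only a quadratic number of items, because all ways of splitting a span are "packed" into a single entry.

```python
# Illustrative sketch of ambiguity packing: a CKY-style chart stores one
# item per span, yet those items jointly encode an exponential number of
# binary parse trees (the Catalan numbers).

def count_binary_trees(n):
    """Count binary trees over n words via span-based dynamic programming."""
    # chart[(i, j)] = number of binary trees spanning words i..j-1
    chart = {}
    for i in range(n):
        chart[(i, i + 1)] = 1  # a single word is a leaf
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            # "pack" every split point k of span (i, j) into one chart item
            chart[(i, j)] = sum(chart[(i, k)] * chart[(k, j)]
                                for k in range(i + 1, j))
    return chart[(0, n)], len(chart)

trees, items = count_binary_trees(10)
print(trees, items)  # 4862 trees encoded in just 55 chart items
```

For a 10-word sentence, the chart holds only 55 span items but represents 4,862 distinct binary trees; the gap grows exponentially with sentence length, which is what makes reasoning over millions of alternative parses affordable.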

Speaker Bio

Liang Huang is a new faculty member in EECS. Before coming here he spent three years as an assistant professor at the City University of New York (CUNY) and a part-time research scientist at IBM T. J. Watson Research Center. He graduated in 2008 from the University of Pennsylvania and, before CUNY, worked as a research scientist at Google and as a research assistant professor at USC/ISI. Most of his work develops fast algorithms and provable theory to speed up large-scale natural language processing and structured machine learning. He has received a Best Paper Award at ACL 2008, several best paper nominations (ACL 2007, EMNLP 2008, and ACL 2010), two Google Faculty Research Awards (2010 and 2013), a Yahoo Faculty Research Award (2015), and a University Teaching Prize at Penn (2005). His research has been supported by DARPA, NSF, Google, and Yahoo.