Once every week while school is in session, EECS invites a distinguished researcher or practitioner in a field related to computer science or electrical and computer engineering to present their ideas and/or work. Talks are generally targeted to electrical engineering and computer science graduate students. This colloquium series is free and open to everyone.

Upcoming Colloquia

Temporal Logic Robustness for General Signal Classes in Multi-Agent Systems

Monday, January 28, 2019 - 4:00pm to 4:50pm
Weniger Hall 151

Speaker Information

Houssam Abbas
Assistant Professor
Oregon State University

Abstract

In multi-agent systems, robots transmit their planned trajectories to each other or to a central controller, and each receiver plans its own actions by maximizing a measure of mission satisfaction. For missions expressed in temporal logic, the robustness function serves as that satisfaction measure. It is not clear how the signal representation used to compress and transmit a trajectory affects the robustness computation error at the receiver, or the efficiency of computing it. An incorrect robustness value, or a delayed computation, can mean the difference between successful control and a crash. Current practice reconstructs the transmitted signal with simple piecewise-linear interpolation, which has little compressive ability; when communication capacity is at a premium, this is a serious bottleneck.

In this talk, we study these questions through two case studies drawn from quadrotor flight and cardiac signal monitoring. We first show that the robustness computation is significantly affected by how the continuous-time signal is reconstructed from the received samples. We show that monitoring spline-based reconstructions yields a smaller robustness error, and that it can be done with the same time complexity as monitoring the simpler piecewise-linear reconstructions. We provide a tight bound on the robustness computation error, and leverage it to design a reconstruction scheme with an even lower computation error than the spline-based schemes. Thus classical signal processing techniques come to the rescue of fragile controller synthesis.
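As a rough illustration of the kind of comparison the abstract describes (not the speaker's algorithm or error bound), the following Python sketch reconstructs a made-up sampled trajectory with piecewise-linear and cubic-spline interpolation and evaluates the robustness of a simple "always x(t) > c" requirement on each reconstruction. The signal, sample counts, and threshold are all hypothetical.

```python
# Illustrative sketch only: compare robustness of "always x(t) > c" computed on
# piecewise-linear vs. spline reconstructions of a sparsely sampled signal.
import numpy as np
from scipy.interpolate import CubicSpline

c = 0.2                                    # hypothetical safety threshold
t_dense = np.linspace(0.0, 2 * np.pi, 2000)
x_true = np.sin(3 * t_dense) + 0.5         # made-up "ground truth" trajectory

t_samp = np.linspace(0.0, 2 * np.pi, 15)   # sparse samples actually transmitted
x_samp = np.sin(3 * t_samp) + 0.5

def robustness_always_gt(x, c):
    """Robustness of 'always x > c' is the worst-case margin min_t (x(t) - c)."""
    return float(np.min(x - c))

x_pwl = np.interp(t_dense, t_samp, x_samp)          # piecewise-linear reconstruction
x_spline = CubicSpline(t_samp, x_samp)(t_dense)     # cubic-spline reconstruction

rho_true = robustness_always_gt(x_true, c)
rho_pwl = robustness_always_gt(x_pwl, c)
rho_spline = robustness_always_gt(x_spline, c)

print(f"true robustness      : {rho_true:+.4f}")
print(f"PWL reconstruction   : {rho_pwl:+.4f}  (error {abs(rho_pwl - rho_true):.4f})")
print(f"spline reconstruction: {rho_spline:+.4f}  (error {abs(rho_spline - rho_true):.4f})")
```

The point of the sketch is simply that the robustness value, and hence its error relative to the true trajectory, depends on the reconstruction chosen at the receiver, which is the effect the talk quantifies.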

Speaker Bio

Houssam is an Assistant Professor in EECS at Oregon State University. Prior to that, he was a postdoctoral fellow at the University of Pennsylvania, and an SoC Verification engineer at Intel. His research interests are in the verification, control and programming of autonomous Cyber-Physical Systems. Current research includes the verification of life-supporting medical devices, the verification and control of autonomous vehicles with a view towards certifying such systems, and anytime computation and control. Houssam holds a PhD in Electrical Engineering from Arizona State University.

Breakthrough in Simultaneous Translation

Monday, February 4, 2019 - 4:00pm to 4:50pm
Weniger Hall 151

Speaker Information

Liang Huang
Assistant Professor
Oregon State University

Abstract

Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional sequence-to-sequence framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that seamlessly integrates anticipation and translation. Within this framework, we present a very simple yet surprisingly effective “wait-k” model, trained to generate the target sentence concurrently with the source sentence but always k words behind, for any given k. We also formulate a new latency metric that addresses the deficiencies of previous ones. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) in four translation directions: Chinese↔English and German↔English.

This technique has been successfully deployed to simultaneously translate Chinese speeches into English subtitles at the 2018 Baidu World Conference. It has also been covered by numerous media reports: https://simultrans-demo.github.io/.
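As a rough sketch of the wait-k schedule described in the abstract (my illustration, not the speaker's implementation), the following Python outlines the read/write policy: read k source words, then alternate emitting one target word and reading one more source word until the source ends, after which the remaining target words are emitted. Here `translate_step` is a hypothetical stand-in for the underlying prefix-to-prefix translation model.

```python
# Minimal sketch of the wait-k read/write schedule (illustrative only).
from typing import Callable, Iterator, List

def wait_k_decode(source_stream: Iterator[str],
                  translate_step: Callable[[List[str], List[str]], str],
                  k: int,
                  eos: str = "</s>") -> List[str]:
    """Emit target words while reading the source, staying k words behind."""
    src_prefix: List[str] = []
    tgt_prefix: List[str] = []
    source_done = False

    while True:
        # READ until we are k source words ahead of the target (or source ends).
        while not source_done and len(src_prefix) < len(tgt_prefix) + k:
            try:
                src_prefix.append(next(source_stream))
            except StopIteration:
                source_done = True

        # WRITE one target word conditioned on the current prefixes.
        word = translate_step(src_prefix, tgt_prefix)
        if word == eos:
            break
        tgt_prefix.append(word)

    return tgt_prefix

if __name__ == "__main__":
    # Toy usage with a dummy "model" that simply echoes source words.
    def echo_model(src: List[str], tgt: List[str]) -> str:
        return src[len(tgt)] if len(tgt) < len(src) else "</s>"

    stream = iter("ich bin gestern nach hause gegangen".split())
    print(wait_k_decode(stream, echo_model, k=3))
```

Keeping the target exactly k words behind the source is what forces the model to anticipate upcoming source content, which is the integration of anticipation and translation the framework refers to.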

Speaker Bio

Liang Huang is an Assistant Professor at Oregon State University (currently on leave) and a Principal Scientist at Baidu Research USA. He received his PhD from the University of Pennsylvania in 2008 (under the late Aravind Joshi) and BS from Shanghai Jiao Tong University in 2003. He has been a research scientist at Google, a research assistant professor at USC/ISI, an assistant professor at CUNY, a part-time research scientist at IBM, and an assistant professor at Oregon State University. He is a leading expert in natural language processing (NLP), where he is known for his work on fast algorithms and provable theory in parsing, machine translation, and structured prediction. Dr. Huang also works on applying the same linear-time algorithms he developed for parsing to computational structural biology. He received a Best Paper Award at ACL 2008, a Best Paper Honorable Mention at EMNLP 2016, several best paper nominations (ACL 2007, EMNLP 2008, ACL 2010, and SIGMOD 2018), two Google Faculty Research Awards (2010 and 2013), a Yahoo! Faculty Research Award (2015), and a University Teaching Prize at Penn (2005). The NLP group he runs at Oregon State University ranks 15th on csrankings.org. He also enjoys teaching algorithms and co-authored a best-selling textbook in China on algorithms for programming contests. He is most proud of the four (4) PhD students he graduated.

Past Colloquia

John F. Wager
Monday, May 22, 2017 - 4:00pm to 4:50pm
Tuesday, May 16, 2017 - 11:30am to 12:30pm
Shreekant ‘Ticky’ Thakkar
Monday, April 17, 2017 - 4:00pm to 4:50pm
Tawfiq Musah
Monday, April 10, 2017 - 4:00pm to 4:50pm
Marouane Kesentini
Monday, March 13, 2017 - 4:00pm to 4:50pm
Don Heer, MEE, and Karl Mundorff, MBA
Monday, March 6, 2017 - 4:00pm to 4:50pm
Amin Alipour
Monday, February 20, 2017 - 4:00pm to 4:50pm
Bruce E. Hofer
Monday, February 13, 2017 - 4:00pm to 4:50pm