Once every week while school is in session, EECS invites a distinguished researcher or practitioner in a field related to computer science or electrical and computer engineering to present their ideas and work. Talks are generally targeted to electrical engineering and computer science graduate students, but the colloquium series is free and open to everyone.

Upcoming Colloquia

Semantic Foundations for Live Programming Environments

Wednesday, April 3, 2019 - 10:00am to 11:00am
Johnson 226

Speaker Information

Cyrus Omar
Postdoctoral Scholar
The University of Chicago

Abstract

When programming, it is common to spend minutes, hours, or even days at a time working with program text where there are missing pieces, type errors, or merge conflicts. Programming environments have trouble generating meaningful feedback in these situations because programming languages typically assign formal meaning only to fully-formed, well-typed programs. Lacking timely and meaningful feedback, programmers--particularly novice programmers--struggle to maintain an accurate mental model of the behavior of the code that they are reading and writing, resulting in confusion and costly mistakes.

In this talk, I will introduce Hazel, a live functional programming environment that addresses the problem of working with incomplete programs from type-theoretic first principles. Hazel can synthesize incomplete types for incomplete programs, run incomplete programs to produce incomplete results, and perform various semantic transformations on incomplete programs. These transformations are exposed to the programmer as keyboard-driven semantic edit actions. We have formally specified each of these mechanisms using the Agda proof assistant, resulting in the first end-to-end specification for a live programming environment.

This rich semantic framework lays the foundation for a new generation of intelligent programming environments that automate away boilerplate software development tasks, leaving users to focus only on those tasks that truly require human creativity.

Speaker Bio

Cyrus Omar leads the Hazel project as a postdoc at The University of Chicago. He received his PhD from Carnegie Mellon University in May 2017. Cyrus started his research career as a neurobiologist before deciding to focus on augmenting human cognition with better programming tools.

Defending Memory Vulnerabilities Latent in Production Software

Friday, April 5, 2019 - 2:00pm to 3:00pm
KEC 1005

Speaker Information

Tongping Liu
Assistant Professor
University of Texas at San Antonio

Abstract

Memory vulnerabilities can be exploited for security attacks such as data corruption, control-flow hijacking, and information leakage. Recurring reports of such attacks indicate both how widespread memory vulnerabilities are and the lack of effective systems for defending against them in practice. This talk will present two of our research efforts aimed at defending against memory vulnerabilities latent in production software.

First, I will present Guarder, a novel heap allocator that makes heap-based security attacks harder to carry out. Randomization is the conventional approach, but existing secure allocators face two serious issues that prevent their wide adoption: significant performance overhead, and unstable randomization entropy that varies across execution phases, allowing attackers to breach the system at its weakest point. Guarder ensures reliable randomization entropy and provides all of the security features of existing secure allocators without compromising performance, incurring less than 3% overhead on average compared to performance-oriented allocators. This project was supported by Mozilla.
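
To make the intuition concrete, here is a minimal sketch, not Guarder's actual design, of how randomized allocation frustrates heap attacks: the allocator picks uniformly at random among a bag of free slots, so an attacker cannot predict which address a given allocation will return. The bag size, object size, and use of rand() are assumptions for the demo; a real secure allocator would use a cryptographic random source and per-size-class freelists.

```c
/*
 * Illustrative sketch only: not Guarder's actual design.
 * Idea: instead of handing out the "next" free object, the allocator
 * picks uniformly at random among a bag of free slots, so an attacker
 * cannot predict which heap address a given allocation will return.
 */
#include <stdio.h>
#include <stdlib.h>

#define BAG_SIZE 16   /* free objects kept per size class (illustrative) */
#define OBJ_SIZE 64   /* a single 64-byte size class */

static void  *bag[BAG_SIZE];   /* bag of currently free objects */
static size_t bag_count = 0;

/* Refill the empty bag from a fresh chunk of memory. */
static void refill_bag(void) {
    char *chunk = malloc((size_t)BAG_SIZE * OBJ_SIZE);
    for (size_t i = 0; i < BAG_SIZE; i++)
        bag[bag_count++] = chunk + i * OBJ_SIZE;
}

/* Allocate one object by removing a random entry from the bag. */
static void *random_alloc(void) {
    if (bag_count == 0)
        refill_bag();
    size_t idx = (size_t)rand() % bag_count;   /* use a CSPRNG in practice */
    void *obj  = bag[idx];
    bag[idx]   = bag[--bag_count];             /* swap-remove */
    return obj;
}

int main(void) {
    srand(12345);
    for (int i = 0; i < 4; i++)
        printf("allocation %d -> %p\n", i, random_alloc());
    return 0;
}
```

In a scheme like this, the attacker's uncertainty, and hence the randomization entropy, grows with the number of free slots the allocator chooses among, which is why keeping that entropy stable across execution phases matters.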

Second, I will present iReplayer, an efficient tool that reports memory vulnerabilities precisely. The key insight is that evidence of a memory vulnerability can be preserved so that detection can happen later. Instead of detecting memory vulnerabilities during the original execution, which may impose prohibitive performance overhead, the approach invokes detection only when evidence of a vulnerability is found. Because detection is driven entirely by that evidence, the common case with no vulnerabilities avoids significant overhead, making the approach practical for production environments. iReplayer further unlocks numerous possibilities in security forensics, failure diagnosis, and online error remediation.
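
The sketch below illustrates the general idea of evidence-based detection, not iReplayer's actual implementation: each heap object carries cheap canary words, the normal execution path only checks them, and the expensive analysis (in iReplayer's setting, re-executing the recorded interval with detection enabled) is triggered only when a canary is found corrupted. The canary value and helper names are hypothetical.

```c
/*
 * Illustrative sketch of evidence-based detection; not iReplayer's
 * actual implementation. Cheap canary checks run on the normal path,
 * and heavyweight analysis runs only when evidence of corruption appears.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0x7D5A7D5A7D5A7D5AULL

typedef struct {
    size_t   size;    /* user-visible object size */
    uint64_t canary;  /* guards the front of the object */
} header_t;

/* Allocate an object wrapped by a header canary and a trailing canary. */
static void *guarded_alloc(size_t size) {
    char *raw = malloc(sizeof(header_t) + size + sizeof(uint64_t));
    header_t *h = (header_t *)raw;
    h->size   = size;
    h->canary = CANARY;
    uint64_t tail = CANARY;
    memcpy(raw + sizeof(header_t) + size, &tail, sizeof tail);
    return raw + sizeof(header_t);
}

/* Cheap check on the normal path, e.g. at free() or at interval ends. */
static int canary_intact(const void *p) {
    const char *raw = (const char *)p - sizeof(header_t);
    const header_t *h = (const header_t *)raw;
    uint64_t tail;
    memcpy(&tail, raw + sizeof(header_t) + h->size, sizeof tail);
    return h->canary == CANARY && tail == CANARY;
}

int main(void) {
    char *buf = guarded_alloc(16);
    memset(buf, 'A', 17);            /* one-byte heap overflow */
    if (!canary_intact(buf)) {
        /* Evidence found: this is where the heavyweight, replay-based
         * detection would be invoked instead of running all the time. */
        fprintf(stderr, "overflow evidence found; invoking deep analysis\n");
    }
    return 0;
}
```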

Speaker Bio

Tongping Liu is an Assistant Professor at the University of Texas at San Antonio. He received his Ph.D. from the University of Massachusetts Amherst in 2014. His primary research goal is to practically improve the security and reliability of software. His work has appeared in the most prestigious systems and security conferences, such as SOSP, OSDI, USENIX Security, CCS, and PLDI. He received the 2015 Google Faculty Research Award and multiple grants from NSF. More information is available at http://www.cs.utsa.edu/~tongpingliu/.

Run-time computation for enhanced integrated circuits and systems

Monday, April 8, 2019 - 4:00pm to 4:50pm
Weniger Hall 151

Speaker Information

Visvesh Sathe
Assistant Professor
University of Washington

Abstract

For over half a century, integrated circuits have been designed and developed (rather successfully) toward the goal of enhancing computing performance and efficiency. During this time, the relationship between circuit design and computing has remained largely one-directional: careful, detailed circuit design is performed in the service of building computing systems. Notwithstanding a post-Moore and post-Dennard reality, the impressive strides made by digital computing thus far spur an important question that re-examines the traditional circuit-computing relationship: can runtime computing itself be used to enhance circuit and system capabilities? If so, under which conditions and to what extent?

In this talk, I will present an overview of recent efforts in my group as we examine the promising role of computing in augmenting circuit capabilities to (1) overcome limitations inherent in circuit design; (2) enable rapid, time-optimal control of integrated control systems; and (3) perform low-complexity runtime system optimization. Each of these goals is examined through a representative test-chip design. Applications include robust true-random number generators (TRNGs) demonstrating the lowest measured energy per bit (2.58 pJ/bit), all-digital PLLs (ADPLLs) for system clocking applications with the fastest demonstrated cold-start and re-lock times (16 Refclk cycles, mean), and an autonomous all-digital minimum-energy-point (MEP) tracking system for a sub-threshold microprocessor.
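
As a loose software-side analogy, and not a technique from the talk, the sketch below shows a textbook example of runtime computation compensating for an imperfect circuit: a von Neumann corrector that post-processes a biased raw bit stream from an entropy source into unbiased output bits. The read_raw_bit() helper and the 70% bias are stand-ins for a real hardware source.

```c
/*
 * Loose analogy only: classic von Neumann debiasing, i.e. runtime
 * computation that turns a biased (but independent) raw bit stream
 * into unbiased output bits.
 */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for sampling one raw (biased but independent) bit. */
static int read_raw_bit(void) {
    return (rand() % 100) < 70;   /* roughly 70% ones */
}

/* Emit one unbiased bit: sample raw bits in pairs, keep 01 -> 0, 10 -> 1. */
static int debiased_bit(void) {
    for (;;) {
        int a = read_raw_bit();
        int b = read_raw_bit();
        if (a != b)
            return a;   /* P(0,1) equals P(1,0) when raw bits are independent */
    }
}

int main(void) {
    srand(42);
    int ones = 0;
    const int total = 10000;
    for (int i = 0; i < total; i++)
        ones += debiased_bit();
    printf("fraction of ones after debiasing: %.3f\n", (double)ones / total);
    return 0;
}
```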

Speaker Bio

Visvesh Sathe is an assistant professor at the University of Washington, where his group conducts research into digital, mixed-signal, and power circuits and architectures. Prior to joining the University of Washington, he served as a Member of Technical Staff in the Low-Power Advanced Development Group at AMD, where his research focused on inventing and developing new technologies in clocking, voltage-noise mitigation, and circuit design for energy-efficient and high-performance computing. Dr. Sathe led the research and development effort at AMD that resulted in the first-ever resonant-clocked commercial microprocessor. He received the B.Tech. degree from the Indian Institute of Technology Bombay in 2001, and the M.S. and Ph.D. degrees from the University of Michigan, Ann Arbor, in 2004 and 2007, respectively. He has served as a chapter officer for the Denver Solid-State Circuits Society, as a technical program committee member for the Custom Integrated Circuits Conference, and as a guest editor for JSSC.

Human-Centered AI through Scalable Visual Data Analytics

Wednesday, April 10, 2019 - 10:00am to 11:00am
KEC 1007

Speaker Information

Minsuk Kahng
Ph.D. Candidate
Computer Science
Georgia Institute of Technology

Abstract

While artificial intelligence (AI) has led to major breakthroughs in many domains, understanding machine learning models remains a fundamental challenge. They are often used as "black boxes," which could be detrimental. How can we help people understand complex machine learning models, so that they can learn them more easily and use them more effectively?

In this talk, I present my research on making AI more accessible and interpretable through a novel human-centered approach: creating data visualization tools that are scalable, interactive, and easy to learn and use. I present my work on two interrelated topics. (1) Visualization for Industry-scale Models: I show how to scale up interactive visualization tools for industry-scale deep learning models that use large datasets, and describe how the ActiVis system helps Facebook data scientists interpret deep neural network models by visually exploring activation flows. ActiVis is patent-pending and has been deployed on Facebook's ML platform. (2) Interactive Understanding of Complex Models: I show how visualization helps novices interactively learn the complex concepts behind deep learning models, and describe GAN Lab, a visual education system I developed for Generative Adversarial Networks (GANs), one of the most popular but hard-to-understand classes of models. GAN Lab has been open-sourced in collaboration with Google Brain and used by over 30,000 people from 140 countries. I conclude with my vision to make AI more human-centered, to promote actionability for AI, stimulate a stronger ethical AI workforce, and foster healthy impacts of AI on broader society.

Speaker Bio

Minsuk Kahng is a Ph.D. Candidate in Computer Science at Georgia Tech. His research focuses on building visual analytics tools for exploring, interpreting, and interacting with complex machine learning systems and large datasets. He publishes at premier venues spanning data visualization, data mining, databases, machine learning, and human-computer interaction. His research has led to deployed and patent-pending technologies at Facebook (ActiVis, MLCube) and an open-source education tool for deep learning developed with Google Brain (GAN Lab). He received his Master's and Bachelor's degrees from Seoul National University in South Korea. He was named Graduate Teaching Assistant of the Year in Computer Science at Georgia Tech. He has been supported by a Google PhD Fellowship and an NSF Graduate Research Fellowship.
Website: https://minsuk.com

Sense and sense-making research in the Ambient Computing domain

Monday, May 20, 2019 - 4:00pm to 4:50pm
Weniger Hall 151

Speaker Information

Giuseppe Raffa
Senior Research Scientist
Intel Labs

Abstract

We are entering a new era of computing – Ambient Computing – where sensing, inference and actuation technologies will disappear into our daily lives and the spaces we live in. Spaces will be able to sense multiple signals, understand their meaning and infer high-level contextual information about people, spaces and situations.

This vision requires novel algorithms and systems, as well as new user-centric approaches, to ensure these technologies provide real value to the inhabitants of these spaces. This talk will introduce the research activities under way at Intel Labs related to Ambient Computing and Smart Spaces technologies.

I will present our research approach and the various horizontal technologies we are researching related to machine learning and artificial intelligence, with examples including multimodal person identification, activity recognition, emotion recognition, and mixed reality. Finally, I will discuss some of the vertical domains the team is exploring: Smart Home (kids' learning support and elderly care), Autonomous Vehicles (in-cabin experiences), and Smart Manufacturing (task support).


Speaker Bio

Giuseppe "Beppe" Raffa is a Senior Research Scientist and Research Manager whose research interests are in the field of smart spaces, sensor-based context-aware systems, and sensor-based human-computer interaction. He leads a team working on "Smart Spaces and Interactions" in home, office, and industrial environments. He has worked on gesture recognition, low-power systems, on-body multimodal interfaces, and context-aware applications, systems, and architectures. Beppe received his Ph.D. in Computer Engineering from the University of Bologna with a dissertation on "Context-aware Computing in Smart Environments".

He has co-authored more than 20 papers in international conferences and journals, served as co-editor of two workshops (UbiComp 2005, ACII 2017), filed more than 100 patents, and served as a committee member for several ACM and IEEE conferences. During his Ph.D. at the University of Bologna (Italy), he researched, developed, and deployed location- and context-aware mobile systems in the field across Europe in the cultural heritage research domain. He can be contacted at giuseppe.raffa@intel.com and via https://www.linkedin.com/in/giuseppe-beppe-raffa/

Past Colloquia

Sheng Chen, Jesse Hostetler and Michael Hilton
Monday, February 2, 2015 - 4:00pm to 4:50pm
Li Hao, Anh Pham and Sherif Abdelwahab
Monday, January 26, 2015 - 4:00pm to 4:50pm
William H. Sanders
Monday, January 12, 2015 - 4:00pm to 4:50pm
Michel Kinsy
Monday, November 24, 2014 - 4:00pm to 4:50pm
Ron Goldman
Monday, November 17, 2014 - 4:00pm to 4:50pm
Steve Brown
Monday, November 10, 2014 - 4:00pm to 4:50pm
Kyoung-Youm Kim
Monday, November 3, 2014 - 4:00pm to 4:50pm
Pieter Abbeel
Thursday, October 30, 2014 - 11:00am to 12:00pm
Tillman Rendel
Monday, October 27, 2014 - 4:00pm to 4:50pm
