Wednesday, May 25, 2022 - 1:00pm to 2:00pm
Zoom: https://oregonstate.zoom.us/j/93591935144?pwd=YjZaSjBYS0NmNUtjQzBEdzhPeDZ5UT09

Speaker Information

Shusen Liu
Research Scientist
Machine Intelligence Group
Lawrence Livermore National Laboratory

Abstract

Although the influence of deep learning in scientific domains is unmistakable, fundamental barriers still prevent us from using these complex models for scientific discovery, due in part to our inability to directly translate their predictive capabilities into scientific understanding. The root of the problem is twofold: 1) the challenge of evaluating the model in the context of the application; and 2) the difficulty of reasoning about such models in terms that domain scientists can easily understand for knowledge extraction. This talk will take a closer look at these unique challenges associated with applying deep learning to scientific applications, and cover some of our work addressing the evaluation and explanation challenges in scientific ML. Specifically, we will show how topological data analysis plays a crucial role in evaluating deep surrogate models for fusion science, and how deep generative models allow us to explore hypothetical materials and obtain actionable explanations that lead to improved material performance.

Speaker Bio

Shusen Liu is a research scientist with the Machine Intelligence Group at Lawrence Livermore National Laboratory (LLNL). His interests include fundamental research in explainable AI and high-dimensional data visualization, as well as their impact on scientific applications for advancing domain understanding. He received a Ph.D. in computing from the University of Utah in 2017.