Presentation videos are currently available in Whova during ISCA, and will be made available here afterwards.
CSE and ECE Departments
Bio: Nael Abu-Ghazaleh is a Professor with a joint appointment in the CSE and ECE departments at the University of California, Riverside, and the director of the Computer Engineering program. His research interests include architecture support for security, high performance computing architectures, and networking and distributed systems. His group's research has led to the discovery of a number of vulnerabilities in modern architectures and operating systems, which have been reported to companies and have impacted commercial products. He has published over 200 papers, several of which have been nominated for or recognized with best paper awards.
Title: Security Challenges and Opportunities at the Intersection of Architecture and ML/AI
Abstract: Machine learning is an increasingly important computational workload, as data-driven deep learning models are being deployed across a wide range of application spaces. Computer systems, from the architecture up, have been impacted by ML in two primary directions: (1) ML as a computing workload, with new accelerators and systems targeted to support both training and inference at scale; and (2) ML supporting architecture decisions, with new machine-learning-based algorithms controlling systems to optimize their performance, reliability, and robustness. In this talk, I will explore the intersection of security, ML, and architecture, identifying both security challenges and opportunities. Machine learning systems are vulnerable to new attacks, including adversarial attacks crafted to fool a classifier to the attacker's advantage, membership inference attacks attempting to compromise the privacy of the training data, and model extraction attacks seeking to recover the hyperparameters of a (secret) model. Architecture can be a target of these attacks when supporting ML, but it also provides an opportunity to develop defenses against them, which I will illustrate with three examples from our recent work. First, I will show how ML-based hardware malware detectors can be attacked with adversarial perturbations to the malware, and how we can develop detectors that resist these attacks. Second, I will show an example of a microarchitectural side channel attack that can be used to extract the secret parameters of a neural network, along with potential defenses against it. Finally, I will discuss how architecture can be used to make ML more robust against adversarial and membership inference attacks using the idea of approximate computing. I will conclude by describing some other potential open problems.
Pacific Northwest National Laboratory
Bio: Dr. Antonino Tumeo received the M.S. degree in Informatics Engineering in 2005 and the Ph.D. degree in Computer Engineering in 2009, both from Politecnico di Milano in Italy. Since February 2011, he has been a research scientist in PNNL's High Performance Computing group. He joined PNNL in 2009 as a post-doctoral research associate. Previously, he was a post-doctoral researcher at Politecnico di Milano. His research interests are modeling and simulation of high performance architectures, hardware-software codesign, FPGA prototyping, and GPGPU computing.
Title: Intelligent Design Space Exploration for High-Level and System Synthesis
Abstract: In this talk, I will discuss some of the approaches that we have introduced in an existing open source HLS and design automation tool to enable the generation of accelerators optimized for irregular data analytics workflows. I will also discuss some of the (heuristic and bio-inspired) design exploration algorithms that we previously integrated in this framework, showing how they can facilitate the synthesis of optimized accelerators and systems. I will conclude by providing a brief overview of the Software Defined Accelerators From Learning Tools Environment (SODALITE), an automated open source high-level ML-framework-to-Verilog compiler targeting ML Application-Specific Integrated Circuit (ASIC) chiplets, and by highlighting opportunities for ML-aided design space exploration.
Senior Research Scientist
Baidu Research Institute, Silicon Valley AI Lab
Bio: Newsha Ardalani is a Senior Research Scientist at Baidu Research Institute, Silicon Valley AI Lab, where she works on co-designing systems and hardware accelerators for large-scale deep learning applications and studies the limits of scalability both theoretically and empirically. She received her Ph.D. in computer architecture from the University of Wisconsin-Madison in 2016. Her research interests include using machine learning to tackle systems problems and designing systems to accelerate machine learning problems.
Title: The Horizon of AI Hardware Landscape
Abstract: The number of AI hardware solutions is growing, but it is not immediately obvious how to choose between them or design something better for any given application. Part of the challenge is that upcoming models are very large in terms of compute and memory requirements and must be scaled across many accelerator nodes. This makes the choice of the best AI hardware complex, as it is intertwined with the choice of the best parallelism strategy.
In this talk, we investigate whether current hardware designs are on the right track to exploit upcoming parallelism trends. To answer that question, I introduce MechaFlow, our hardware/software/technology co-design space exploration framework, which we are using to answer many of these questions in a principled and efficient way.
STAR Lab, Oregon State University
Bio: Drew Penney received his Bachelor's degree in Electrical and Computer Engineering with highest honors from Oregon State University. Drew is currently a Ph.D. student at Oregon State and a member of the System Technology and Architecture Research (STAR) Lab under Dr. Lizhong Chen. At the STAR Lab, he explores novel machine learning applications to diverse architectural designs. He has published several papers on the topic and recently received the Best Paper Runner-up award at HPCA 2020 for his work on deep reinforcement learning in network-on-chip design.
Title: A Survey of Machine Learning Applied to Computer Architecture Design
Abstract: Machine learning has enabled significant benefits in diverse fields but, with a few exceptions, has had limited impact on computer architecture. Recent work, however, has explored broader applicability for design, optimization, and simulation. Notably, machine-learning-based strategies often surpass prior state-of-the-art analytical, heuristic, and human-expert approaches. This talk reviews machine learning applied system-wide to simulation and run-time optimization, as well as to many individual components, including memory systems, branch predictors, networks-on-chip, and GPUs. The talk further analyzes current practice to highlight useful design strategies and to identify areas for future work, based on optimized implementation strategies, opportune extensions to existing work, and ambitious long-term possibilities. Taken together, these strategies and techniques present a promising future for increasingly automated architectural design.