Alan Fern

Professor and Associate Head of Research
School of Electrical Engineering and Computer Science
Oregon State University

Office Location: 2071 Kelley Engineering Center
(541) 737-9202 (office) (I never check phone messages; please send an email instead)
(541) 737-1300 (fax)
E-mail: alan.fern@oregonstate.edu
Postal Address: Kelley Engineering Center, Corvallis, OR 97330-5501, U.S.A.

Quick Links: Teaching | Publications (Google Scholar) | Students


Education

B.S. Electrical Engineering, University of Maine, 1997

M.S. Computer Engineering, Purdue University, 2000

Ph.D. Computer Engineering, Purdue University, 2004 (advised by Robert Givan)


Teaching

Video Lectures

Lab Exercises: Distributed AI using Ray
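
For context on the lab exercises, here is a minimal sketch of the task-parallel Ray pattern they build on (an illustrative example, not taken from the exercises themselves):

    import ray

    ray.init()  # start a local Ray runtime

    @ray.remote
    def square(x):
        # runs as an asynchronous task, possibly on another worker process
        return x * x

    # launch eight tasks in parallel, then block until all results are ready
    futures = [square.remote(i) for i in range(8)]
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]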

Research

My primary research interests are in the field of artificial intelligence, where I focus on the sub-areas of machine learning and automated planning. I am particularly interested in the intersection of these areas. Some example projects include:

AgAID AI Institute – AI for Agricultural Applications: I am very excited to be part of the AgAID AI Institute, funded by USDA-NIFA and led by Washington State University. The project kickoff is October 2021, and I am leading a team of 13 OSU researchers focused on AI, robotics, and human-computer/robot interaction.

Learning and Planning for Bipedal Robot Locomotion: I co-direct the Dynamic Robot Lab (DRL) with Jonathan Hurst. We are studying techniques for training the bipedal robot Cassie to exhibit agile bipedal locomotion, with learning and planning for both low-level control and high-level planned behaviors. See our YouTube channel for examples of Cassie in the real world.

Machine Common Sense: This DARPA-sponsored, OSU-led project is a collaboration with behavioral psychologist Karen Adolph at NYU and roboticist Tucker Hermans at the University of Utah. We are studying and developing learning and reasoning techniques to enable AI systems to exhibit common-sense reasoning and planning capabilities on par with those of an 18-month-old infant. A key aspect of our approach is studying how to effectively combine the representation-learning capabilities of deep neural networks with the powerful reasoning capabilities of state-of-the-art AI planning and reasoning engines.

Explainable Artificial Intelligence: It is becoming increasingly common for autonomous and semi-autonomous systems, such as UAVs, robots, and virtual agents, to be developed via a combination of traditional programming and machine learning. Currently, acceptance testing of these systems is problematic due to the black-box nature of machine-learned components, which prevents testers from understanding the rationale behind learned decisions. Our research develops the new paradigm of explanation-informed acceptance testing (xACT), which allows testers not only to observe and evaluate the behavior of machine-learned systems, but also to evaluate explanations of the decisions leading to that behavior. As a result, the xACT paradigm allows testers to determine whether machine-learned systems are making decisions "for the right reasons", which provides stronger justification for trusting the system in (semi-)autonomous operation. The public will benefit from this technology via the availability of more understandable and, in turn, more trustworthy (semi-)autonomous systems for complex applications in defense, industry, and everyday life.

Anomaly Detection and Explanation: We study how to best detect and explain anomalies with a particular focus on security applications and interaction with end-user analysts.
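
As a purely illustrative sketch of the detect-then-explain loop (a common baseline, not our specific method), an Isolation Forest from scikit-learn can flag anomalous points, and a simple per-feature deviation ranking gives an analyst a first-cut explanation:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))   # mostly "normal" activity
    X[:5] += 6.0                    # inject a few obvious anomalies

    model = IsolationForest(random_state=0).fit(X)
    scores = model.score_samples(X)   # lower score = more anomalous
    flagged = np.argsort(scores)[:5]  # five most anomalous points

    # First-cut "explanation": rank features by robust deviation from the median
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-9
    for i in flagged:
        dev = np.abs(X[i] - med) / mad
        print(f"point {i}: features by anomalousness -> {np.argsort(dev)[::-1]}")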


Current Graduate Students


Former Students