Alex Groce

Assistant Professor
Computer Science

Ph.D., Computer Science, Carnegie Mellon University, March 2005
B.S., Computer Science, North Carolina State University, May 1999 (summa cum laude)
B.S., Multidisciplinary Studies, North Carolina State University, May 1999 (summa cum laude)


Alex Groce received his Ph.D. in Computer Science from Carnegie Mellon University in 2005, and B.S. degrees in Computer Science and Multidisciplinary Studies (with a focus on English literature) from North Carolina State University in 1999. Before joining the Oregon State University faculty in 2009, he was a core member of the Laboratory for Reliable Software at NASA's Jet Propulsion Laboratory, and taught classes on Software Testing at the California Institute of Technology. His activities at JPL included a role as lead developer and designer for test automation for the Mars Science Laboratory mission's internal flight software test team, and lead roles in testing file systems for space missions.

His research interests are in software engineering, particularly testing, model checking, code analysis, debugging, and error explanation. He focuses on software engineering from an "investigative" viewpoint, with an emphasis on the execution traces that programs produce — software engineering as the art and science of building programs with a desired set of executions.

Research Interests: 

My general research interest is in software engineering: designing, specifying, implementing, testing, analyzing, verifying, understanding, and debugging software written in widely used programming languages, such as ANSI C and Java. My focus is on approaches and methods that naturally give rise to (semi-)automated tools for programmers, testers, and other software engineers — including end users when they must act as software engineers! The power of computer science is in automation — from design to testing, we need better tools to help us write better programs.

My work to date has usually involved some variety of model checking, but the specifics have varied from full exploration of highly abstracted systems (with MAGIC) to methods more akin to an unusual approach to testing (heuristics with JPF, model-driven verification of C programs with SPIN). Years of frustration when attempting to understand counterexamples produced by model checkers have convinced me that error explanation and fault localization are important (and interesting) topics, and my thesis work was an investigation of how a model checker can be used not only to find an error, but to explain and localize that error. I made use of distance metrics on program executions to find causes for errors, in a manner inspired by the counterfactual theory of causality proposed by David Lewis.
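The distance-metric idea can be illustrated with a toy sketch. Here each execution is encoded, hypothetically, as a sequence of branch decisions; the "explanation" is the set of points where the failing run diverges from the nearest passing run. This is an illustration of the general approach, not the thesis implementation:

```python
# Toy illustration of distance-metric error explanation: represent each
# execution as a sequence of branch decisions, find the passing run
# closest to the failing run, and report where they diverge as
# candidate causes. (Hypothetical encoding; not the actual tool.)

def distance(run_a, run_b):
    """Number of positions where two executions differ."""
    diffs = sum(1 for a, b in zip(run_a, run_b) if a != b)
    return diffs + abs(len(run_a) - len(run_b))

def explain(failing, passing_runs):
    """Divergences from the nearest passing run, as (step, passing, failing)."""
    nearest = min(passing_runs, key=lambda run: distance(failing, run))
    return [(i, p, f)
            for i, (p, f) in enumerate(zip(nearest, failing)) if p != f]

passing = [["b1:T", "b2:F", "b3:T"],   # two successful executions
           ["b1:F", "b2:F", "b3:F"]]
failing = ["b1:T", "b2:T", "b3:T"]     # the execution that violated a property
print(explain(failing, passing))       # → [(1, 'b2:F', 'b2:T')]
```

The second branch decision is the only difference from the closest passing run, so it is reported as the likely cause, in the counterfactual spirit: the smallest change that would have avoided the error.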

More recently, at JPL, I became interested in the connections between model checking and (randomized) testing — including using the same models and tools for both, reusing simulation infrastructure, and hybrid approaches such as randomized testing with the complete nondeterminism of model checking. More "frivolously" I have also become fascinated by the taxonomy and nature of actual bugs in programs, particularly file systems.

I see model checking and testing primarily as useful tools for program understanding — the investigative side of software engineering, if you will. Model checking traditionally answers the question: does my (program, protocol, design) have a trace like this? Testing, in a sense, answers the same question. Even at the earliest stages of software design, thinking about how to make answering that question easier not only eventually aids verification, but helps us think about how a program will actually execute. One interesting possibility to consider is the use of model checking and other state-space exploration methods to help visualize the executions of a program. Recent work at IBM has shown the value of visualizing memory configurations of large systems; it seems that program trace-spaces would also be useful in software engineering and performance debugging.
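One way to make the connection concrete is a single nondeterministic test harness whose choices can be resolved either randomly (testing) or by enumerating every alternative (bounded model checking). The sketch below assumes a toy `choose(n)` interface and a bounded-stack system under test; it illustrates the idea, not any particular tool:

```python
# One harness, two back ends: every nondeterministic decision goes through
# choose(n). A random tester resolves each choice by coin flip; a bounded
# model checker resolves them all by replay-based enumeration.
# Hypothetical interface and toy system under test.

import random

def harness(choose):
    """Exercise a bounded stack; the assertion is the property under check."""
    stack = []
    for _ in range(3):
        if choose(2) == 0 and len(stack) < 2:
            stack.append(choose(3))        # push one of three values
        elif stack:
            stack.pop()
        assert len(stack) <= 2
    return stack

def random_backend(seed=None):
    """Random testing: resolve each choice uniformly at random."""
    rng = random.Random(seed)
    return harness(lambda n: rng.randrange(n))

def exhaustive_backend():
    """Model-checking flavor: replay every resolution of the choices."""
    results, frontier = [], [[]]           # frontier holds choice prefixes
    while frontier:
        prefix = frontier.pop()
        taken, arities = [], []
        def choose(n):
            v = prefix[len(taken)] if len(taken) < len(prefix) else 0
            taken.append(v)
            arities.append(n)
            return v
        results.append(harness(choose))
        # branch on every choice made beyond the current prefix
        for j in range(len(prefix), len(taken)):
            frontier.extend(taken[:j] + [k] for k in range(1, arities[j]))
    return results
```

`random_backend()` samples one execution; `exhaustive_backend()` visits every execution the harness can produce, checking the same assertion on each, so one model answers the "does my program have a trace like this?" question both ways.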

More specific current topics of interest:

  • A “grand unified approach” to efficient generation of full-coverage test suites, combining random testing or model checking with constraint-based directed testing
  • Effective testing for machine-learned software, where subject experts (including YOU, the expert on classifying your email and your movie preferences) rather than software engineers or machine learning experts must test the code
  • New approaches to test case generation, based on machine learning techniques
  • Fundamental empirical research into properties of test coverage metrics on very large test suites produced by systematic methods
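A minimal sketch of the random-testing side of the first topic, using a toy system under test and a hypothetical branch-coverage measure: a generated test is kept only when it adds coverage, a crude form of suite reduction.

```python
# Minimal sketch of coverage-driven random test generation: keep a
# randomly generated input only if it covers a new branch. The system
# under test and its branch labels are hypothetical.

import random

def sut(a, b):
    """Toy system under test with four branches to cover."""
    if a > b:
        return "gt" if a - b > 10 else "gt_small"
    return "eq" if a == b else "le"

def generate_suite(budget=1000, seed=0):
    """Random generation with a coverage filter."""
    rng = random.Random(seed)
    covered, suite = set(), []
    for _ in range(budget):
        test = (rng.randrange(-20, 20), rng.randrange(-20, 20))
        branch = sut(*test)
        if branch not in covered:      # keep only coverage-adding tests
            covered.add(branch)
            suite.append(test)
    return suite, covered

suite, covered = generate_suite()
print(len(suite), "tests retained; branches covered:", sorted(covered))
```

Directed (constraint-based) techniques come in exactly where random generation stalls: when a branch's coverage condition is too narrow to hit by chance.
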
Publications:

Gligoric, M., A. Groce, C. Zhang, R. Sharma, M. A. Alipour, and D. Marinov, "Comparing non-adequate test suites using coverage criteria", Proceedings of the 2013 International Symposium on Software Testing and Analysis - ISSTA 2013, Lugano, Switzerland, ACM Press, pp. 302-313, 07/2013.
Chen, Y., A. Groce, C. Zhang, W.-K. Wong, X. Z. Fern, E. Eide, and J. Regehr, "Taming compiler fuzzers", ACM SIGPLAN Conference on Programming Language Design and Implementation, Seattle, Washington, pp. 197-208, 06/2013.
Groce, A., A. Fern, M. Erwig, J. Pinto, T. Bauer, and A. Alipour, "Learning-Based Test Programming for Programmers", Leveraging Applications of Formal Methods, Verification and Validation. Technologies for Mastering Change, vol. 7609, Springer Berlin Heidelberg, pp. 572-586, 2012.
Groce, A., A. Fern, J. Pinto, T. Bauer, A. Alipour, M. Erwig, and C. Lopez, "Lightweight Automated Testing with Adaptation-Based Programming", IEEE 23rd International Symposium on Software Reliability Engineering (ISSRE), Dallas, TX, IEEE, pp. 161-170, 11/2012.
Alipour, M. A., and A. Groce, "Extended program invariants: applications in testing and fault localization", International Workshop on Dynamic Analysis, Minneapolis, MN, ACM Press, pp. 7-11, 07/2012.
Groce, A., and M. Erwig, "Finding common ground: choose, assert, and assume", Proceedings of the 2012 Workshop on Dynamic Analysis - WODA 2012, Minneapolis, MN, ACM Press, pp. 12-17, 07/2012.
Groce, A., C. Zhang, E. Eide, Y. Chen, and J. Regehr, "Swarm testing", ACM International Symposium on Software Testing and Analysis, Minneapolis, MN, ACM Press, pp. 78-88, 07/2012.
Groce, A., and M. Musuvathi, Model Checking Software: Proceedings of the 18th International SPIN Workshop, vol. 6823, Berlin, Heidelberg, Springer Berlin Heidelberg, 2011.
Alipour, A., and A. Groce, "Bounded Model Checking and Feature Omission Diversity", Seventh International Workshop on Constraints in Formal Verification, San Jose, California, 11/2011.
Groce, A., "Coverage rewarded: Test input generation via adaptation-based programming", IEEE/ACM International Conference on Automated Software Engineering, Lawrence, KS, IEEE, pp. 380-383, 11/2011.
Holzmann, G. J., R. Joshi, and A. Groce, "Swarm Verification Techniques", IEEE Transactions on Software Engineering, vol. 37, issue 6, pp. 845-857, 11/2011.
Shinsel, A., T. Kulesza, M. M. Burnett, W. Curran, A. Groce, S. Stumpf, and W.-K. Wong, "Mini-crowdsourcing end-user assessment of intelligent assistants: A cost-benefit study", 2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Pittsburgh, PA, IEEE, pp. 47-54, 09/2011.
Kulesza, T., M. M. Burnett, S. Stumpf, W.-K. Wong, S. Das, A. Groce, A. Shinsel, F. Bice, and K. McIntosh, "Where are my intelligent assistant's mistakes? a systematic testing approach", Proceedings of the Third international conference on End-user development, Berlin, Heidelberg, Springer-Verlag, pp. 171-186, 06/2011.
Barringer, H., A. Groce, K. Havelund, and M. Smith, "Formal Analysis of Log Files", Journal of Aerospace Computing, Information, and Communication, vol. 7, issue 11, pp. 365-390, 11/2010.
Barringer, H., A. Groce, K. Havelund, and M. Smith, "An Entry Point for Formal Methods: Specification and Analysis of Event Logs", 1st Workshop on Formal Methods in Aerospace, Electronic Proceedings of Theoretical Computer Science (EPTCS), vol. 20, Eindhoven, The Netherlands, pp. 16-21, 11/2009, 2010.
Groce, A., K. Havelund, and M. Smith, "From Scripts to Specifications: the Evolution of a Flight Software Testing Effort", ACM/IEEE International Conference on Software Engineering, Cape Town, South Africa, pp. 129-138, 05/2010.
Groce, A., "(Quickly) Testing the Tester via Path Coverage", Workshop on Dynamic Analysis, Chicago, Illinois, 07/2009.
Barringer, H., A. Groce, K. Havelund, and M. Smith, "Formal Analysis of Log Files", SMC-IT Workshop on Software Reliability for Space Missions, Pasadena, CA, 07/2009.
Barringer, H., K. Havelund, D. E. Rydeheard, and A. Groce, "Rule Systems for Runtime Verification: A Short Tutorial", International Workshop on Runtime Verification, Grenoble, France, pp. 1-24, 06/2009.
Holzmann, G. J., R. Joshi, and A. Groce, "Model driven code checking", Automated Software Engineering, Special Issue on Trends in Automated Software Engineering, vol. 15, issue 3-4, pp. 283-297, 12/2008.
Andrews, J., A. Groce, M. Weston, and R.-G. Xu, "Random Test Run Length and Effectiveness", IEEE/ACM International Conference on Automated Software Engineering, L'Aquila, Italy, 09/2008.
Holzmann, G. J., R. Joshi, and A. Groce, "Swarm Verification", Conference on Automated Software Engineering (ASE), L'Aquila, Italy, pp. 1-6, 09/2008.
Holzmann, G. J., R. Joshi, and A. Groce, "Tackling Large Verification Problems with the Swarm Tool", SPIN Workshop on Model Checking of Software (SPIN), vol. 5156, Los Angeles, CA, Springer Berlin Heidelberg, pp. 134-143, 08/2008.