Left: A Songmeter audio recording device in H.J. Andrews experimental research forest.

Middle: A spectrogram representation of audio recorded in H.J.A., after applying noise reduction algorithms.

Right: Stream networks and data collection sites in H.J.A.

Oregon State University Bioacoustics Group

About Us

The OSU bioacoustics group is an interdisciplinary collaboration between researchers in ecology and machine learning.
Our goal is to develop a computational framework for intelligent bird species monitoring from in-situ audio recordings.

Since the summer of 2009, we have been collecting audio data in the H.J. Andrews forest using unattended digital recording devices known as Songmeters. We have collected over 20 terabytes of audio data at sites spanning a range of altitudes and vegetation types.

We study a variety of interesting and challenging problems in bioacoustics, including the automatic extraction of bird syllables, the recognition of the vocalizing species, and the estimation of population density. Our goal is to build maps of bird activity at an unprecedented temporal resolution via audio monitoring. Audio data from H.J.A. poses many challenges for machine learning, such as noise from streams, wind, and vehicles, and multiple species of birds vocalizing simultaneously.
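To make the challenge concrete, the sketch below builds a spectrogram of a synthetic noisy recording and applies a simple spectral-subtraction-style noise floor removal before locating the loudest time frame. This is a minimal illustrative baseline on made-up data, not the group's published noise reduction or syllable extraction algorithm; all signal parameters are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 1-second recording: a 4 kHz tone burst (a stand-in for a bird
# syllable) buried in broadband noise (a stand-in for stream/wind noise).
fs = 16000
t = np.arange(fs) / fs
syllable = np.sin(2 * np.pi * 4000 * t) * (t > 0.4) * (t < 0.6)
noisy = syllable + 0.3 * np.random.default_rng(0).standard_normal(fs)

# Short-time spectrogram of the noisy signal.
freqs, times, Sxx = spectrogram(noisy, fs=fs, nperseg=256)

# Spectral-subtraction-style denoising: estimate a per-frequency noise floor
# (here, the 20th percentile of each frequency bin over time), subtract it,
# and clip negative values to zero.
noise_floor = np.percentile(Sxx, 20, axis=1, keepdims=True)
denoised = np.clip(Sxx - noise_floor, 0.0, None)

# After denoising, energy concentrates in frames containing the syllable.
frame_energy = denoised.sum(axis=0)
loudest = times[np.argmax(frame_energy)]
print(f"loudest frame at {loudest:.2f} s")  # inside the 0.4-0.6 s burst
```

In real H.J.A. recordings the noise floor is nonstationary (streams and wind vary over time), which is one reason naive baselines like this fall short and motivates the learning-based approaches described in the publications below.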

  • Faculty

    • Raviv Raich, Associate Professor, School of Electrical Engineering and Computer Science, OSU
    • Xiaoli Fern, Associate Professor, School of Electrical Engineering and Computer Science, OSU
    • Matthew Betts, Associate Professor, Forest Wildlife Landscape Ecology, OSU
  • Current Students

    • Yuanli Pei, Ph.D. student in CS, OSU
    • Anh Pham, Ph.D. student in EE, OSU
    • Zeyu You, Ph.D. student in EE, OSU
    • Teresa Vania Tjahja, M.S. student in CS, OSU
    • Revathy Narasimhan, M.S. student in EECS, OSU

  • Alumni
  • Collaborators

    • Dave Mellinger, Associate Professor (Senior Research), Cooperative Institute for Marine Resources Studies, Hatfield Marine Science Center, OSU
    • Jed Irvine, Faculty Research Assistant, OSU
    • Jesus Perez, Associate Professor of Electrical Engineering, University of Cantabria

Journal Articles

  • Anh T. Pham, Raviv Raich, and Xiaoli Z. Fern, Dynamic Programming for Instance Annotation in Multi-instance Multi-label Learning, in review. (arXiv, pdf)
  • Forrest Briggs, Xiaoli Fern, and Raviv Raich, Context-Aware MIML Instance Annotation: Exploiting Label Correlations With Classifier Chains, Knowledge and Information Systems, 2014
  • Forrest Briggs, Xiaoli Fern, and Raviv Raich, Instance Annotation for Multi-instance Multi-Label Learning, ACM Transactions on Knowledge Discovery from Data (TKDD), 7(3), 2013 (Preprint)
  • Forrest Briggs, Balaji Lakshminarayanan, Lawrence Neal, Xiaoli Fern, Raviv Raich, Matthew G. Betts, Sarah Frey, and Adam Hadley, Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach, Journal of the Acoustical Society of America, 2012 (Preprint)

Conference Papers

  • Teresa V. Tjahja, Xiaoli Z. Fern, Raviv Raich, and Anh T. Pham, Supervised hierarchical segmentation for bird song recording, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015
  • Anh T. Pham and Raviv Raich, Kernel-based instance annotation in multi-instance multi-label learning, Proc. IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2014 (pdf)
  • Anh T. Pham, Raviv Raich, and Xiaoli Z. Fern, Efficient instance annotation in multi-instance learning, Proc. IEEE Statistical Signal Processing Workshop (SSP), 2014 (pdf)
  • Forrest Briggs, Xiaoli Fern, and Raviv Raich, Context-Aware MIML Instance Annotation, Proc. IEEE International Conference on Data Mining (ICDM), 2013 (pdf)
  • Qi Lou, Raviv Raich, Forrest Briggs, and Xiaoli Fern, Novelty Detection Under Multi-Label Multi-Instance Framework, Proc. IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2013 (pdf)

Technical Reports


This material is based upon work supported by the National Science Foundation under Grants No. 1055113, 1254218, and 1356792. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).