Vision has proven to be a particularly effective sensor for robots operating on and above the surface of the Earth. In this domain, vision has been used to track features, build environmental representations, solve localization tasks, avoid obstacles, and provide a conduit for human-robot communication. But how well does this sensing modality work underwater?
Using the AQUA2 underwater platform, I have been involved in a long-term research project developing solutions to these and other problems associated with underwater vehicles capable of operating with six degrees of freedom (6DOF). Results for environmental reconstruction, localization, and gait planning will be presented, along with highlights of ongoing work with a new vehicle (Milton), which underwent its first sea trials last summer and will be used extensively in trials in early 2017.
Michael Jenkin is a Professor of Electrical Engineering and Computer Science and a member of the Centre for Vision Research at York University, Canada. Working in the fields of visually guided autonomous robots and virtual reality, he has published over 150 research papers, including co-authoring Computational Principles of Mobile Robotics with Gregory Dudek and a series of co-edited books on human and machine vision with Laurence Harris.
Michael Jenkin's current research interests include sensing strategies for AQUA, an amphibious autonomous robot being developed in collaboration among Dalhousie University, McGill University, and York University; the development of tools and techniques to support crime scene investigation; and the understanding of the perception of self-motion and orientation in unusual environments, including microgravity.
This guest lecture is partially supported by DFG GRK 1564 "Imaging New Modalities".