Prof. John J. Leonard

Samuel C. Collins Professor of Mechanical and Ocean Engineering

Areas of Interest and Expertise

Sensor Data Fusion in Marine Robotic Systems
Navigation and Control of Autonomous Underwater Vehicles
Acoustic Scene Reconstruction
Sensor Based Control
Information Processing
Applied Ocean Science and Engineering
Uncertainty Management
Data Association
Concurrent Mapping and Localization
Adaptive Echolocation

Research Summary

Professor Leonard's recent research has addressed the problem of simultaneous localization and mapping (SLAM) for autonomous mobile robots. The SLAM problem is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the robot to process the sensor data to produce an estimate of its position while concurrently building a map of the environment. While SLAM is deceptively easy to state, it presents many theoretical challenges. The problem is also of great practical importance; if a robust, general-purpose solution to SLAM can be found, then many new applications of mobile robotics will become possible.
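To make the formulation concrete, the sketch below shows one classical approach to this problem: a minimal 2D EKF-SLAM filter in Python with range-bearing measurements and known data association. It is an illustrative toy rather than the group's actual algorithm; the state layout, noise values, and landmark initialization scheme are assumptions made for the example.

    # Minimal 2D EKF-SLAM sketch (illustrative only; noise values and state
    # layout are assumptions, not the algorithm used in the group's work).
    import numpy as np

    def wrap(a):
        """Wrap an angle to [-pi, pi)."""
        return (a + np.pi) % (2 * np.pi) - np.pi

    class EKFSlam:
        def __init__(self, n_landmarks, motion_noise=0.02, meas_noise=0.05):
            dim = 3 + 2 * n_landmarks              # robot pose + landmark positions
            self.x = np.zeros(dim)                 # state mean
            self.P = np.eye(dim) * 1e6             # large prior on unseen landmarks
            self.P[:3, :3] = 0.0                   # robot starts at a known pose
            self.Q = np.eye(3) * motion_noise      # additive motion noise (pose block)
            self.R = np.eye(2) * meas_noise        # range-bearing measurement noise
            self.seen = [False] * n_landmarks

        def predict(self, v, w, dt):
            """Propagate the robot pose with a unicycle model, control u = (v, w)."""
            x, y, th = self.x[:3]
            self.x[:3] = [x + v * np.cos(th) * dt,
                          y + v * np.sin(th) * dt,
                          wrap(th + w * dt)]
            F = np.eye(len(self.x))
            F[0, 2] = -v * np.sin(th) * dt
            F[1, 2] =  v * np.cos(th) * dt
            self.P = F @ self.P @ F.T
            self.P[:3, :3] += self.Q

        def update(self, j, r, b):
            """Fuse a range-bearing measurement (r, b) of landmark j."""
            i = 3 + 2 * j
            if not self.seen[j]:                   # initialize on first sighting
                th = self.x[2]
                self.x[i]     = self.x[0] + r * np.cos(th + b)
                self.x[i + 1] = self.x[1] + r * np.sin(th + b)
                self.seen[j] = True
            dx = self.x[i] - self.x[0]
            dy = self.x[i + 1] - self.x[1]
            q = dx * dx + dy * dy
            z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - self.x[2])])
            H = np.zeros((2, len(self.x)))
            H[:, :3] = [[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                        [ dy / q,          -dx / q,          -1.0]]
            H[:, i:i + 2] = [[dx / np.sqrt(q), dy / np.sqrt(q)],
                             [-dy / q,         dx / q        ]]
            S = H @ self.P @ H.T + self.R
            K = self.P @ H.T @ np.linalg.inv(S)
            innovation = np.array([r - z_hat[0], wrap(b - z_hat[1])])
            self.x = self.x + K @ innovation
            self.x[2] = wrap(self.x[2])
            self.P = (np.eye(len(self.x)) - K @ H) @ self.P

A typical driving loop alternates predict() with the vehicle's odometry and update() for each landmark detection in the current scan; the filter maintains the joint covariance that couples pose error and map error, which is the essential structure of the SLAM problem.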

Under funding from the Sea Grant College Program and the Office of Naval Research, Leonard's research group is developing new SLAM algorithms for AUVs using sonar. In 2002, 2004, and 2005, we participated in a series of AUV experiments in Italy performed in collaboration with the NATO SACLANT Undersea Research Centre. The work makes use of two new Odyssey III AUVs from Bluefin Robotics. (This work is being performed in collaboration with Prof. Henrik Schmidt of the MIT acoustics group and Prof. Chrys Chryssostomidis of the MIT Sea Grant College Program.)

The SLAM research is applicable to a wide range of robots operating in diverse environments, making use of laser, sonar, and/or visual sensing. One of the goals has been to enable a robot to autonomously navigate a large-scale environment, such as the buildings of the MIT campus.

The primary goal of this ongoing research is to pursue the challenge of persistent autonomy -- the capability for one or more robots to operate robustly for days, weeks, and months at a time with minimal human supervision, in complex, dynamic environments. Taking the limit as time goes to infinity poses difficult challenges for our algorithms, but this endurance is imperative for many applications of autonomous mobile robots. For example, security missions require robots to build and maintain maps of large areas, detecting changes and correcting their internal representations to stay current with the world. These capabilities are beyond the reach of today's robots.
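As one illustration of what such map maintenance could involve, the sketch below keeps a log-odds occupancy grid, flags cells where fresh evidence strongly contradicts a confident stored belief as candidate changes, and slowly decays old evidence so the map tracks the current state of the world. The grid size, thresholds, and decay factor are illustrative assumptions, not parameters of the group's systems.

    # Toy long-term map maintenance: log-odds occupancy grid with change
    # detection and gradual forgetting (all parameters are assumptions).
    import numpy as np

    class PersistentGrid:
        def __init__(self, shape=(100, 100), change_thresh=2.0, decay=0.999):
            self.logodds = np.zeros(shape)      # 0 = unknown, >0 occupied, <0 free
            self.change_thresh = change_thresh  # belief strength needed to flag a change
            self.decay = decay                  # slowly forget stale evidence

        def integrate(self, cells, occupied, p_hit=0.85, p_miss=0.3):
            """Fuse one scan. `cells` is an (N, 2) array of grid indices and
            `occupied` a length-N boolean array (hit or pass-through)."""
            delta = np.where(occupied,
                             np.log(p_hit / (1.0 - p_hit)),
                             np.log(p_miss / (1.0 - p_miss)))
            changed = []
            for (r, c), d in zip(cells, delta):
                old = self.logodds[r, c]
                # New evidence that strongly contradicts a confident stored
                # belief is flagged as a candidate change in the environment.
                if old * d < 0 and abs(old) > self.change_thresh:
                    changed.append((r, c))
                self.logodds[r, c] = old + d    # standard log-odds Bayes update
            self.logodds *= self.decay          # gradual forgetting keeps the map current
            return changed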

We believe that the critical challenges for future research in this area are twofold: (1) coping with complex 3D scenes, and (2) achieving persistent autonomy. These two challenges are highly coupled, and to address them, our research group is working to create a new set of tools for coping with the tremendous amounts of data that a mobile robot's sensors can provide. Some of the questions that we wish to pose are: Can we provide robots with a long-term autonomous existence, enabling them to deal with changes in the environment, to recover from mistakes, and to achieve life-long learning? Can we create a robot (or team of robots) that can actively and repeatedly explore a portion of the world, building and maintaining a database that can be efficiently indexed and rapidly queried, yet easily modified as the world changes? Can we develop a system in which these robots mingle effortlessly with people, merging human-acquired and human-annotated data with robot-acquired databases from physical sensors?

Recent Work

  • Video

    2020 Autonomy Day 1 - John Leonard

    April 8, 2020 | Conference Video | Duration: 28:32

    This talk will describe some of the challenges and opportunities in autonomy research today, with a focus on trends and lessons in self-driving research. We will discuss some of the major challenges and research opportunities in self-driving, including building and maintaining high-resolution maps, interacting with humans both inside and outside of vehicles, dealing with adverse weather, and achieving sufficiently high detection probabilities with low false-alarm rates in challenging settings. We will review the different approaches to automated driving, including SAE Level 2 and SAE Level 4 systems, as well as the Toyota Guardian approach, which flips the conventional mindset from having the human guard the AI (as in SAE Level 2 systems) to instead using AI to guard the human driver. We will discuss research opportunities in mapping, localization, perception, prediction, and planning and control to realize improved safety through advanced automation in the future.

    John Leonard - 2016-ICT-Conference

    April 27, 2016 | Conference Video | Duration: 39:40

    Keynote: Mapping, Localization, and Self-Driving Vehicles

    This talk will discuss the critical role of mapping and localization in the development of self-driving cars and autonomous underwater vehicles (AUVs). After a discussion of some of the recent amazing progress and open technical challenges in the development of self-driving vehicles, we will discuss the past, present, and future of Simultaneous Localization and Mapping (SLAM) in robotics. We will review the history of SLAM research and will discuss some of the major challenges in SLAM, including choosing a map representation, developing algorithms for efficient state estimation, and solving for data association and loop closure. We will describe some of the challenges of using SLAM for AUVs, and we will also present recent results on object-based mapping in dynamic environments and real-time dense mapping using RGB-D cameras.

    Joint work with Sudeep Pillai, Tom Whelan, Michael Kaess, John McDonald, Hordur Johannsson, Maurice Fallon, David Rosen, Ross Finman, Paul Huang, Liam Paull, Nick Wang, and Dehann Fourie.

    2016 MIT Information and Communication Technologies Conference