Intelligent Robot Learning Laboratory (IRL Lab)
James M. Irwin

CONTACT INFORMATION:
James M. Irwin
MS Student, Computer Science
Email: james.irwin@wsu.edu
Office: Dana Hall 3
Links: Personal Website


My Story

I graduated from Washington State University in 2014 with a Bachelor’s Degree in Electrical Engineering and a minor in Computer Engineering. I am now pursuing a Master’s Degree in Computer Science.

Current Projects

By: Gabriel V. de la Cruz Jr., James M. Irwin, and Matthew E. Taylor

Undergraduates: Brandon Kallaher (WSU)

This is a joint project of WSU, the University of Pennsylvania, and Olin College that develops transfer learning methods enabling teams of heterogeneous agents to rapidly adapt control and coordination policies to new scenarios. Our approach combines lifelong transfer learning with autonomous instruction to support continual transfer among heterogeneous agents and across diverse tasks. The resulting multi-agent system will accumulate transferable knowledge over consecutive tasks, enabling the transfer learning process to improve over time and the system to become increasingly versatile. We will apply these methods to sequential decision making (SDM) tasks in dynamic environments with aerial and ground robots [1, 2]. A toy sketch of the shared-basis transfer idea appears after the references below.

[1] [pdf] David Isele, José Marcio Luna, Eric Eaton, Gabriel V. de la Cruz Jr., James Irwin, Brandon Kallaher, and Matthew E. Taylor. Lifelong Learning for Disturbance Rejection on Mobile Robots. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016. 48% acceptance rate
[Bibtex]
@inproceedings{2016IROS-Isele,
author={Isele, David and Luna, Jos\'e Marcio and Eaton, Eric and de la Cruz, Jr., Gabriel V. and Irwin, James and Kallaher, Brandon and Taylor, Matthew E.},
title={{Lifelong Learning for Disturbance Rejection on Mobile Robots}},
booktitle={{Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems ({IROS})}},
month={October},
year={2016},
note={48% acceptance rate},
video={https://youtu.be/u7pkhLx0FQ0},
bib2html_pubtype={Refereed Conference},
abstract={No two robots are exactly the same—even for a given model of robot, different units will require slightly different controllers. Furthermore, because robots change and degrade over time, a controller will need to change over time to remain optimal. This paper leverages lifelong learning in order to learn controllers for different robots. In particular, we show that by learning a set of control policies over robots with different (unknown) motion models, we can quickly adapt to changes in the robot, or learn a controller for a new robot with a unique set of disturbances. Furthermore, the approach is completely model-free, allowing us to apply this method to robots that have not, or cannot, be fully modeled.}
}
[2] [pdf] David Isele, José Marcio Luna, Eric Eaton, Gabriel V. de la Cruz Jr., James Irwin, Brandon Kallaher, and Matthew E. Taylor. Work in Progress: Lifelong Learning for Disturbance Rejection on Mobile Robots. In Proceedings of the Adaptive Learning Agents (ALA) workshop (at AAMAS), Singapore, May 2016.
[Bibtex]
@inproceedings{2016ALA-Isele,
author={Isele, David and Luna, Jos\'e Marcio and Eaton, Eric and de la Cruz, Jr., Gabriel V. and Irwin, James and Kallaher, Brandon and Taylor, Matthew E.},
title={{Work in Progress: Lifelong Learning for Disturbance Rejection on Mobile Robots}},
booktitle={{Proceedings of the Adaptive Learning Agents ({ALA}) workshop (at {AAMAS})}},
year={2016},
address={Singapore},
month={May},
abstract = {No two robots are exactly the same — even for a given model of robot, different units will require slightly different controllers. Furthermore, because robots change and degrade over time, a controller will need to change over time to remain optimal. This paper leverages lifelong learning in order to learn controllers for different robots. In particular, we show that by learning a set of control policies over robots with different (unknown) motion models, we can quickly adapt to changes in the robot, or learn a controller for a new robot with a unique set of disturbances. Further, the approach is completely model-free, allowing us to apply this method to robots that have not, or cannot, be fully modeled. These preliminary results are an initial step towards learning robust fault-tolerant control for arbitrary robots.}
}
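
To make the transfer idea concrete, here is a minimal numpy sketch that factors each robot's policy parameters through a shared latent basis, in the spirit of the shared-basis lifelong learning used in [1, 2]. The dimensions, the ridge-regression coding step, and the randomly generated "learned" policies are illustrative assumptions, not the algorithm from the papers.

import numpy as np

rng = np.random.default_rng(0)

d, k, n_tasks = 8, 3, 5            # policy size, basis size, number of robots (illustrative)
L = rng.normal(size=(d, k))        # shared latent basis accumulated across tasks

def encode_policy(theta, L, lam=0.1):
    """Code one robot's policy vector against the shared basis.

    Solves s = argmin ||theta - L s||^2 + lam ||s||^2 (ridge regression);
    the published method uses a sparse-coding, policy-gradient update instead.
    """
    k = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(k), L.T @ theta)

for t in range(n_tasks):
    theta_t = rng.normal(size=d)   # stand-in for a policy learned on robot t
    s_t = encode_policy(theta_t, L)
    theta_hat = L @ s_t            # policy reconstructed from the shared knowledge
    print(f"robot {t}: reconstruction error {np.linalg.norm(theta_t - theta_hat):.3f}")

A new or degraded robot would be handled the same way: code its partially learned policy against L and use the reconstruction as a warm start instead of learning from scratch.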

Publications

  • David Isele, José Marcio Luna, Eric Eaton, Gabriel V. de la Cruz Jr., James Irwin, Brandon Kallaher, and Matthew E. Taylor. Lifelong Learning for Disturbance Rejection on Mobile Robots. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016. 48% acceptance rate
    [BibTeX] [Abstract] [Download PDF] [Video]

    No two robots are exactly the same—even for a given model of robot, different units will require slightly different controllers. Furthermore, because robots change and degrade over time, a controller will need to change over time to remain optimal. This paper leverages lifelong learning in order to learn controllers for different robots. In particular, we show that by learning a set of control policies over robots with different (unknown) motion models, we can quickly adapt to changes in the robot, or learn a controller for a new robot with a unique set of disturbances. Furthermore, the approach is completely model-free, allowing us to apply this method to robots that have not, or cannot, be fully modeled.

    @inproceedings{2016IROS-Isele,
    author={Isele, David and Luna, Jos\'e Marcio and Eaton, Eric and de la Cruz, Jr., Gabriel V. and Irwin, James and Kallaher, Brandon and Taylor, Matthew E.},
    title={{Lifelong Learning for Disturbance Rejection on Mobile Robots}},
    booktitle={{Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems ({IROS})}},
    month={October},
    year={2016},
    note={48% acceptance rate},
    video={https://youtu.be/u7pkhLx0FQ0},
    bib2html_pubtype={Refereed Conference},
    abstract={No two robots are exactly the same—even for a given model of robot, different units will require slightly different controllers. Furthermore, because robots change and degrade over time, a controller will need to change over time to remain optimal. This paper leverages lifelong learning in order to learn controllers for different robots. In particular, we show that by learning a set of control policies over robots with different (unknown) motion models, we can quickly adapt to changes in the robot, or learn a controller for a new robot with a unique set of disturbances. Furthermore, the approach is completely model-free, allowing us to apply this method to robots that have not, or cannot, be fully modeled.}
    }

  • Ruofei Xu, Robin Hartshorn, Ryan Huard, James Irwin, Kaitlyn Johnson, Gregory Nelson, Jon Campbell, Sakire Arslan Ay, and Matthew E. Taylor. Towards a Semi-Autonomous Wheelchair for Users with ALS. In Proceedings of Workshop on Autonomous Mobile Service Robots (at IJCAI), New York City, NY, USA, July 2016.
    [BibTeX] [Abstract] [Download PDF]

    This paper discusses a prototype system built over two years by teams of undergraduate students with the goal of assisting users with Amyotrophic Lateral Sclerosis (ALS). The current prototype powered wheelchair uses both onboard and offboard sensors to navigate within and between rooms, avoiding obstacles. The wheelchair can be directly controlled via multiple input devices, including gaze tracking — in this case, the wheelchair can augment the user’s control to avoid obstacles. In its fully autonomous mode, the user can select a position on a pre-built map and the wheelchair will navigate to the desired location. This paper introduces the design and implementation of our system, as well as performs three sets of experiments to characterize its performance. The long-term goal of this work is to significantly improve the lives of users with mobility impairments, with a particular focus on those that have limited motor abilities.

    @inproceedings{2016IJCAI-Xu,
    author={Ruofei Xu and Robin Hartshorn and Ryan Huard and James Irwin and Kaitlyn Johnson and Gregory Nelson and Jon Campbell and Sakire Arslan Ay and Matthew E. Taylor},
    title={{Towards a Semi-Autonomous Wheelchair for Users with {ALS}}},
    booktitle={{Proceedings of Workshop on Autonomous Mobile Service Robots (at {IJCAI})}},
    year={2016},
    address={New York City, NY, USA},
    month={July},
    bib2html_pubtype={Refereed Workshop or Symposium},
    abstract={This paper discusses a prototype system built over two years by teams of undergraduate students with the goal of assisting users with Amyotrophic Lateral Sclerosis (ALS). The current prototype powered wheelchair uses both onboard and offboard sensors to navigate within and between rooms, avoiding obstacles. The wheelchair can be directly controlled via multiple input devices, including gaze tracking --- in this case, the wheelchair can augment the user's control to avoid obstacles. In its fully autonomous mode, the user can select a position on a pre-built map and the wheelchair will navigate to the desired location. This paper introduces the design and implementation of our system, as well as performs three sets of experiments to characterize its performance. The long-term goal of this work is to significantly improve the lives of users with mobility impairments, with a particular focus on those that have limited motor abilities.}
    }

  • Yunshu Du, Gabriel V. de la Cruz Jr., James Irwin, and Matthew E. Taylor. Initial Progress in Transfer for Deep Reinforcement Learning Algorithms. In Proceedings of Deep Reinforcement Learning: Frontiers and Challenges workshop (at IJCAI), New York City, NY, USA, July 2016.
    [BibTeX] [Abstract] [Download PDF]

    As one of the first successful models that combines reinforcement learning technique with deep neural networks, the Deep Q-network (DQN) algorithm has gained attention as it bridges the gap between high-dimensional sensor inputs and autonomous agent learning. However, one main drawback of DQN is the long training time required to train a single task. This work aims to leverage transfer learning (TL) techniques to speed up learning in DQN. We applied this technique in two domains, Atari games and cart-pole, and show that TL can improve DQN’s performance on both tasks without altering the network structure.

    @inproceedings{2016DeepRL-Du,
    author={Du, Yunshu and de la Cruz, Jr., Gabriel V. and Irwin, James and Taylor, Matthew E.},
    title={{Initial Progress in Transfer for Deep Reinforcement Learning Algorithms}},
    booktitle={{Proceedings of Deep Reinforcement Learning: Frontiers and Challenges workshop (at {IJCAI})}},
    year={2016},
    address={New York City, NY, USA},
    month={July},
    bib2html_pubtype={Refereed Workshop or Symposium},
    abstract={As one of the first successful models that combines reinforcement learning technique with deep neural networks, the Deep Q-network (DQN) algorithm has gained attention as it bridges the gap between high-dimensional sensor inputs and autonomous agent learning. However, one main drawback of DQN is the long training time required to train a single task. This work aims to leverage transfer learning (TL) techniques to speed up learning in DQN. We applied this technique in two domains, Atari games and cart-pole, and show that TL can improve DQN’s performance on both tasks without altering the network structure.}
    }

  • David Isele, José Marcio Luna, Eric Eaton, Gabriel V. de la Cruz Jr., James Irwin, Brandon Kallaher, and Matthew E. Taylor. Work in Progress: Lifelong Learning for Disturbance Rejection on Mobile Robots. In Proceedings of the Adaptive Learning Agents (ALA) workshop (at AAMAS), Singapore, May 2016.
    [BibTeX] [Abstract] [Download PDF]

    No two robots are exactly the same — even for a given model of robot, different units will require slightly different controllers. Furthermore, because robots change and degrade over time, a controller will need to change over time to remain optimal. This paper leverages lifelong learning in order to learn controllers for different robots. In particular, we show that by learning a set of control policies over robots with different (unknown) motion models, we can quickly adapt to changes in the robot, or learn a controller for a new robot with a unique set of disturbances. Further, the approach is completely model-free, allowing us to apply this method to robots that have not, or cannot, be fully modeled. These preliminary results are an initial step towards learning robust fault-tolerant control for arbitrary robots.

    @inproceedings{2016ALA-Isele,
    author={Isele, David and Luna, Jos\'e Marcio and Eaton, Eric and de la Cruz, Jr., Gabriel V. and Irwin, James and Kallaher, Brandon and Taylor, Matthew E.},
    title={{Work in Progress: Lifelong Learning for Disturbance Rejection on Mobile Robots}},
    booktitle={{Proceedings of the Adaptive Learning Agents ({ALA}) workshop (at {AAMAS})}},
    year={2016},
    address={Singapore},
    month={May},
    abstract = {No two robots are exactly the same — even for a given model of robot, different units will require slightly different controllers. Furthermore, because robots change and degrade over time, a controller will need to change over time to remain optimal. This paper leverages lifelong learning in order to learn controllers for different robots. In particular, we show that by learning a set of control policies over robots with different (unknown) motion models, we can quickly adapt to changes in the robot, or learn a controller for a new robot with a unique set of disturbances. Further, the approach is completely model-free, allowing us to apply this method to robots that have not, or cannot, be fully modeled. These preliminary results are an initial step towards learning robust fault-tolerant control for arbitrary robots.}
    }
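
Shared-control sketch for the semi-autonomous wheelchair paper above: when the user drives via gaze tracking, the chair can attenuate the commanded forward speed as obstacles get close. The scaling rule, the 2 m slow-down horizon, and the function name below are illustrative assumptions, not the controller described in the paper.

import numpy as np

def blend_command(user_cmd, obstacle_dists, min_clear=0.5, slow_horizon=2.0):
    """Attenuate the user's forward velocity based on the nearest obstacle.

    user_cmd:        (linear, angular) velocity requested by the user (m/s, rad/s).
    obstacle_dists:  range readings in front of the chair, in metres.
    min_clear:       distance at which forward motion is stopped entirely.
    slow_horizon:    distance at which slowing begins.
    """
    linear, angular = user_cmd
    nearest = float(np.min(obstacle_dists))
    scale = np.clip((nearest - min_clear) / (slow_horizon - min_clear), 0.0, 1.0)
    return (linear * scale, angular)   # steering is left to the user

# Example: the user pushes forward at 0.6 m/s, but the closest reading is 0.8 m.
print(blend_command((0.6, 0.0), np.array([2.5, 0.8, 1.9])))   # roughly (0.12, 0.0)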
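
Weight-transfer sketch for the deep RL transfer paper above (Du et al.): a pre-trained source network's parameters are copied into a freshly constructed network of the same architecture, which is then fine-tuned on the target task. The small fully connected architecture and layer sizes are placeholders; the paper's actual networks and tasks are not reproduced here.

import torch.nn as nn

def make_q_network(n_inputs, n_actions):
    # Placeholder architecture; any architecture works as long as source and
    # target share it, since the transfer changes weights, not structure.
    return nn.Sequential(
        nn.Linear(n_inputs, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_actions),
    )

source_net = make_q_network(4, 2)   # assume this was trained with DQN on the source task
target_net = make_q_network(4, 2)   # new task with the same input/action dimensions

# Transfer: initialise the target network from the source weights, leaving the
# network structure unchanged, then continue DQN training on the target task.
target_net.load_state_dict(source_net.state_dict())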