Intelligent Robot Learning Laboratory (IRL Lab) Current Projects

Autonomous UAV for Bird Deterrence/Avoidance in Cherry Orchards

By: Shivam Goel and Matthew E. Taylor

Cherry, grape, honeycrisp apple, and blueberry growers lose $80 million annually to bird damage in the state of Washington (WA) alone. Over 50% of sweet cherry growers in Washington consider bird damage one of the significant factors affecting profit. Netting, auditory scare devices, visual scare devices, chemical applications, and active methods such ...
Read More

Effective Transfer Learning

By: Zhaodong Wang and Matthew E. Taylor

Many learning methods, such as reinforcement learning, suffer from slow initial performance, especially in complicated domains. The motivation of transfer learning is to use limited prior knowledge to help learning agents bootstrap at the start and thus achieve overall improvements in learning performance. Due to the limited quantity or quality of prior knowledge, how ...
Read More

Topic INdependent Gamification Learning Environment (TINGLE)

By: Chris Cain and Matthew E. Taylor

A system designed to tie a student’s individual progress in class to progress in a game selected for them and played outside the classroom. Rather than the game being a learning medium, it’s a positive reinforcer for effort applied in class. The plan is to create an initial model for which games motivate which students ...
Read More

Computer-Aided Design Intelligent Tutoring System Teaching Strategic Flexibility

By: Yang Hu and Matthew E. Taylor

Taking a Computer-Aided Design (CAD) class is a prerequisite for Mechanical Engineering freshmen at many universities, including ours. To make the learning process easier and more interesting, we designed and implemented a tutorial for an open source CAD program, FreeCAD, to teach students how to use Boolean operations to construct complex objects from multiple simple ...
Read More

Baton Vehicle Project

By: Viresh Duvvuri, Heon Joo, Matthew E. Taylor, Konstantin Matveev, and John P Swensen Undergraduates: Jessie Bryant

Research and development of UAS (Unmanned Aerial Systems) has increased significantly over the past few years. UAS come in different types, for example, fixed wing ...
Read More

Single Operator Multiple Aerial Vehicle (SOMAV)

By: Marshall Wingerson, Matthew Foreman, and Matthew E. Taylor

Current commercial applications for small Unmanned Aerial Systems (sUAS) can require several operators to ensure the safe flight of one sUAS. Our project is to create hardware and companion software that allow one or two operators to control many sUAS with a ...
Read More

Data Mining the CougarCard for Student Fitness

By: Yunshu Du and Matthew E. Taylor

WSU's Recreation Center (the Rec) is among the most frequently visited campus facilities. However, students may prefer to avoid the Rec when it is most crowded. Our work aims to solve this problem by predicting how crowded the Rec will be at different times by leveraging the university's CougCard system. Read More
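A minimal sketch of the kind of prediction this enables, assuming access to historical card-swipe timestamps (the data format, function names, and dates here are illustrative, not the project's actual pipeline): average the swipe count in each weekday/hour slot and report it as the expected crowding.

```python
from collections import defaultdict
from datetime import datetime

def crowding_by_slot(swipe_times):
    """Average entry count per (weekday, hour) slot from historical
    card-swipe timestamps (hypothetical data format)."""
    counts = defaultdict(int)            # (weekday, hour, date) -> swipes that day
    for t in swipe_times:
        counts[(t.weekday(), t.hour, t.date())] += 1
    per_slot = defaultdict(list)         # (weekday, hour) -> daily counts
    for (wd, hr, _), n in counts.items():
        per_slot[(wd, hr)].append(n)
    return {slot: sum(v) / len(v) for slot, v in per_slot.items()}

# Example: three swipes in the 6pm hour on each of three Mondays
swipes = [datetime(2016, 9, d, 18, m) for d in (5, 12, 19) for m in (0, 10, 20)]
avg = crowding_by_slot(swipes)
# avg[(0, 18)] -> 3.0, the average Monday-6pm swipe count
```

A real predictor would also account for semester schedules and holidays, but the slot-average baseline illustrates how swipe logs turn into a crowding forecast.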

Transfer in Deep Reinforcement Learning

By: Yunshu Du, Gabriel V. de la Cruz Jr., James Irwin, and Matthew E. Taylor

As one of the first successful models to combine reinforcement learning techniques with deep neural networks, the Deep Q-Network (DQN) algorithm has gained attention because it bridges the gap between high-dimensional sensor inputs and autonomous agent learning. However, one ...
Read More
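One common form of transfer in deep RL is warm-starting a student network from a teacher's weights; the sketch below copies the early feature layers and re-initializes the task-specific head. The layer names, shapes, and function are illustrative assumptions, not the project's architecture.

```python
import numpy as np

def transfer_weights(teacher, layers_to_copy, seed=0):
    """Warm-start a student network: copy the teacher's feature layers,
    re-initialize the task-specific output layer with small random values."""
    rng = np.random.default_rng(seed)
    student = {}
    for name, w in teacher.items():
        if name in layers_to_copy:
            student[name] = w.copy()                      # reuse learned features
        else:
            student[name] = rng.normal(0, 0.01, w.shape)  # fresh output head
    return student

# Hypothetical teacher weights keyed by layer name
teacher = {"conv1": np.ones((8, 8)), "conv2": np.ones((4, 4)), "out": np.ones((4, 3))}
student = transfer_weights(teacher, layers_to_copy={"conv1", "conv2"})
# conv layers match the teacher exactly; "out" is re-initialized
```

The intuition is that low-level visual features learned on a source game often transfer, while the action-value head must be relearned for the target task.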

Lifelong Learning for Heterogeneous Robot Teams

By: Gabriel V. de la Cruz Jr., James M. Irwin, and Matthew E. Taylor Undergraduates: Brandon Kallaher (WSU)

This is a joint project of WSU, the University of Pennsylvania, and Olin College, developing transfer learning methods that enable teams of heterogeneous agents to ...
Read More

Agent Corrections to Pac-Man from the Crowd

By: Gabriel V. de la Cruz Jr., Bei Peng, and Matthew E. Taylor

Reinforcement learning suffers from poor initial performance. Our approach uses crowdsourcing to provide non-expert suggestions that speed up the learning of an RL agent. Currently, we are using Ms. Pac-Man as our application domain because of its popularity as a game. From ...
Read More

Agent Learning from Discrete Human Feedback

By: Bei Peng and Matthew E. Taylor

In this project, we consider the problem of a human trainer teaching an agent via providing positive or negative feedback. Most existing work has treated human feedback as a numerical value that the agent seeks to maximize, and has assumed that all trainers will give feedback in the same way when teaching the ...
Read More
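The "numerical feedback" baseline that this project contrasts with can be sketched as a simple bandit-style learner that treats the trainer's +1/-1 signals as rewards to maximize. All names and parameters below are illustrative, not the project's implementation.

```python
import random

def train_from_feedback(feedback_fn, n_actions=3, episodes=200, seed=0):
    """Learn action values from discrete human feedback treated as reward:
    feedback_fn(action) returns +1 (reward) or -1 (punishment)."""
    rng = random.Random(seed)
    est = [0.0] * n_actions      # estimated feedback value per action
    n = [0] * n_actions
    for _ in range(episodes):
        if rng.random() < 0.1:   # epsilon-greedy exploration
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: est[i])
        f = feedback_fn(a)
        n[a] += 1
        est[a] += (f - est[a]) / n[a]   # incremental mean update
    return est

# Simulated trainer who approves only action 2
est = train_from_feedback(lambda a: 1 if a == 2 else -1)
```

The project's point is that real trainers do not all give feedback this uniformly, which is exactly where a fixed numerical interpretation like this one breaks down.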

Training an Agent to Ground Commands with Reward and Punishment

By: Bei Peng and Matthew E. Taylor

As the need grows for humans to convey complex tasks to robots without any technical expertise, natural language provides an intuitive interface. However, this requires the agent to learn a grounding of natural language commands. In this work, we developed a simple simulated home environment in which the robot needs to ...
Read More

Bin-dog: An Intelligent In-Orchard Bin-Managing System

By: Zhaodong Wang and Matthew E. Taylor

The purpose of this project is to build an intelligent multi-robot system to manage the use of bins for harvest work in orchards. It involves autonomous navigation of robots in the orchard environment and cooperation with human pickers. The value of this multi-robot bin-managing system is in realizing the ...
Read More

Agents Teaching Humans and Agents

By: Yusen Zhan and Matthew E. Taylor

We developed an advice model framework that provides theoretical and practical analysis of agents teaching humans and agents in sequential reinforcement learning tasks. The teacher agents assist the students (humans or agents) with action advice when the teachers observe the students reaching critical states. Assuming the teachers are optimal, the students ...
Read More
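One common way to operationalize "critical states" is to advise when the teacher's Q-values for a state spread widely, i.e., when the choice of action matters most. The sketch below assumes that setup; the threshold, state names, and Q-values are illustrative, not the framework's actual parameters.

```python
def should_advise(q_values, threshold=1.0):
    """State importance = spread of the teacher's Q-values; a large spread
    means picking the wrong action is costly, so advice is worth spending."""
    return max(q_values) - min(q_values) >= threshold

def advise(teacher_q, state, budget, threshold=1.0):
    """Return the teacher's greedy action if the state is critical and
    advice budget remains; otherwise None (student acts on its own)."""
    q = teacher_q[state]
    if budget > 0 and should_advise(q, threshold):
        return max(range(len(q)), key=lambda a: q[a])
    return None

# Hypothetical teacher Q-values over three actions per state
teacher_q = {"junction": [5.0, -2.0, 0.5], "corridor": [1.0, 1.1, 0.9]}
advise(teacher_q, "junction", budget=10)   # critical state: advise action 0
advise(teacher_q, "corridor", budget=10)   # low importance: no advice
```

Gating advice by importance and a finite budget is what lets a teacher help most where it counts instead of advising at every step.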

Human-Robot Interaction

By: Bei Peng, Tanay Nigam, Francis Pascual, and Matthew E. Taylor Undergraduates: Mitchell Scott (WSU) and Madeline Chili (Elon University)

In this project, we developed an exploratory study where participants piloted a commercial UAS (unmanned aerial system) through an obstacle course. We examined the effect of varying instructions on the participant's performance ...
Read More