Idan Shenfeld

I am a first-year Ph.D. student in EECS at MIT CSAIL, advised by Professor Pulkit Agrawal. I'm currently interested in making it easier and faster to train RL agents, especially in challenging settings such as history-dependent policies and partial observability.

Before MIT, I worked as an applied researcher in autonomous vehicles on GM's Ultra Cruise project. My main research there was on 3D segmentation and detection algorithms using an array of RGB cameras. During my time there I had the pleasure of working with Dr. Netalee Efrat Sela and Dr. Shaul Oron.

Prior to that, I completed my bachelor's degree in EECS at the Technion, where I worked with Professor Aviv Tamar. During my bachelor's degree I was supported by the Rothschild Fellowship.

Email  /  Resume  /  LinkedIn  /  Scholar

Research
Offline Meta Reinforcement Learning - Identifiability Challenges and Effective Data Collection Strategies
Ron Dorfman, Idan Shenfeld, Aviv Tamar
NeurIPS, 2021
openreview / bibtex

Consider the following instance of the Offline Meta Reinforcement Learning (OMRL) problem: given the complete training logs of N conventional RL agents, trained on N different tasks, design a meta-agent that can quickly maximize reward in a new, unseen task from the same task distribution. In particular, while each conventional RL agent explored and exploited its own different task, the meta-agent must identify regularities in the data that lead to effective exploration/exploitation in the unseen task. Here, we take a Bayesian RL (BRL) view, and seek to learn a Bayes-optimal policy from the offline data. Building on the recent VariBAD BRL approach, we develop an off-policy BRL method that learns to plan an exploration strategy based on an adaptive neural belief estimate. However, learning to infer such a belief from offline data brings a new identifiability issue we term MDP ambiguity. We characterize the problem, and suggest resolutions via data collection and modification procedures.
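To make the idea of an "adaptive neural belief estimate" concrete, here is a minimal sketch (PyTorch assumed, all names hypothetical) of a recurrent belief encoder that summarizes the observed trajectory into a latent belief over the unknown task, with a policy conditioned on both the state and that belief. This illustrates the general VariBAD-style structure described above, not the paper's actual implementation.

# Minimal sketch: belief-conditioned policy for Bayesian RL (names hypothetical).
import torch
import torch.nn as nn

class BeliefEncoder(nn.Module):
    """Maps a history of (state, action, reward) tuples to a belief vector."""
    def __init__(self, state_dim, action_dim, belief_dim=32):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim + 1, belief_dim, batch_first=True)

    def forward(self, states, actions, rewards):
        # states: (B, T, state_dim), actions: (B, T, action_dim), rewards: (B, T, 1)
        history = torch.cat([states, actions, rewards], dim=-1)
        _, h = self.rnn(history)   # h: (1, B, belief_dim)
        return h.squeeze(0)        # current belief about the unknown task

class BeliefConditionedPolicy(nn.Module):
    """Policy that acts on the current state together with the belief estimate."""
    def __init__(self, state_dim, belief_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + belief_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, belief):
        return self.net(torch.cat([state, belief], dim=-1))

if __name__ == "__main__":
    B, T, s_dim, a_dim = 4, 10, 8, 2
    enc = BeliefEncoder(s_dim, a_dim)
    pi = BeliefConditionedPolicy(s_dim, belief_dim=32, action_dim=a_dim)
    belief = enc(torch.randn(B, T, s_dim), torch.randn(B, T, a_dim), torch.randn(B, T, 1))
    action = pi(torch.randn(B, s_dim), belief)
    print(action.shape)  # torch.Size([4, 2])

The MDP-ambiguity issue discussed in the paper arises because, when such a belief is trained only on offline logs, different task hypotheses can explain the same data equally well; the paper's proposed data collection and modification procedures are aimed at resolving that ambiguity.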

