Dinesh Jayaraman
Reinforcement Learning
An Exploration of Embodied Visual Exploration
Santhosh K Ramakrishnan*, Dinesh Jayaraman, Kristen Grauman
SMIRL: Surprise Minimizing RL in Dynamic Environments
We formulate homeostasis as an intrinsic motivation objective and show interesting emergent behavior from minimizing Bayesian surprise with RL across many environments.
Glen Berseth, Daniel Geng, Coline Devin, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
Fighting Copycat Agents in Behavioral Cloning from Multiple Observations
Chuan Wen, Jierui Lin, Trevor Darrell, Dinesh Jayaraman, Yang Gao
Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings
How can RL agents be trained safely? We propose pretraining a model-based agent in a mix of sandbox environments, then planning pessimistically when fine-tuning in the target environment.
Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman
Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors
To plan toward long-term goals through visual prediction, we propose a model based on two key ideas: (i) predict in a goal-conditioned way, restricting planning to useful sequences, and (ii) recursively decompose the goal-conditioned prediction task into an increasingly fine series of subgoals.
Karl Pertsch, Oleg Rybkin*, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
Causal Confusion in Imitation Learning
“Causal confusion”, in which spurious correlates are mistaken for causes of expert actions, is prevalent in imitation learning, leading to the counterintuitive result that additional information can worsen task performance. How might one address this?
Pim de Haan, Dinesh Jayaraman, Sergey Levine
Emergence of Exploratory Look-Around Behaviors Through Active Observation Completion
Santhosh K Ramakrishnan*, Dinesh Jayaraman, Kristen Grauman
REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning
We propose a low-cost, compact, easily replicable hardware stack for manipulation tasks that can be assembled within a few hours. We also provide implementations of robot learning algorithms for grasping (supervised learning) and reaching (reinforcement learning). Contributions are invited!
Brian Yang, Jesse Zhang, Vitchyr Pong, Sergey Levine, Dinesh Jayaraman
End-to-End Policy Learning For Active Visual Categorization
Active visual perception with realistic, complex imagery can be formulated as an end-to-end reinforcement learning problem, whose solution benefits from additionally exploiting the auxiliary task of action-conditioned future prediction.
Dinesh Jayaraman, Kristen Grauman