Dinesh Jayaraman
Prediction
Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning
Yecheng Jason Ma, Andrew Shen, Osbert Bastani, Dinesh Jayaraman
Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings
How can RL agents be trained safely? We propose pretraining a model-based agent in a mix of sandbox environments, then planning pessimistically when fine-tuning in the target environment.
Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman
Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors
To plan toward long-term goals via visual prediction, we propose a model built on two key ideas: (i) predict in a goal-conditioned way, restricting planning to useful sequences, and (ii) recursively decompose the goal-conditioned prediction task into an increasingly fine series of subgoals.
Karl Pertsch*, Oleg Rybkin*, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor with Application to In-Hand Manipulation
We design and demonstrate a new tactile sensor for in-hand tactile manipulation in a robotic hand.
Mike Lambeta, Po-Wei Chou, Stephen Tian, Brian Yang, Benjamin Maloon, Victoria Rose Most, Dave Stroud, Raymond Santos, Ahmad Byagowi, Gregg Kammerer, Dinesh Jayaraman, Roberto Calandra
Manipulation by Feel: Touch-Based Control with Deep Predictive Models
High-resolution tactile sensing, combined with deep-network approaches to visual prediction and planning, enables high-precision tactile servoing tasks.
Stephen Tian, Frederik Ebert, Dinesh Jayaraman, Mayur Mudigonda, Chelsea Finn, Roberto Calandra, Sergey Levine
Time-Agnostic Prediction: Predicting Predictable Video Frames
In visual prediction tasks, letting your predictive model choose which times to predict does two things: (i) improves prediction quality, and (ii) leads to semantically coherent “bottleneck state” predictions, which are useful for planning.
Dinesh Jayaraman, Frederik Ebert, Alexei A. Efros, Sergey Levine
End-to-End Policy Learning For Active Visual Categorization
Active visual perception with realistic and complex imagery can be formulated as an end-to-end reinforcement learning problem, the solution to which benefits from additionally exploiting the auxiliary task of action-conditioned future prediction.
Dinesh Jayaraman, Kristen Grauman
Embodied Learning for Visual Recognition
Dinesh Jayaraman
Learning Image Representations Tied to Egomotion from Unlabeled Video
An agent’s continuous visual observations carry information about how the world responds to its actions, which can serve as an effective source of self-supervision for learning visual representations.
Dinesh Jayaraman, Kristen Grauman
Look-Ahead Before You Leap: End-to-End Active Recognition By Forecasting the Effect of Motion
Active visual perception with realistic and complex imagery can be formulated as an end-to-end reinforcement learning problem, the solution to which benefits from additionally exploiting the auxiliary task of action-conditioned future prediction.
Dinesh Jayaraman, Kristen Grauman