Dinesh Jayaraman
Active Perception
Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning
Kun Huang, Edward Hu, Dinesh Jayaraman
An Exploration of Embodied Visual Exploration
Santhosh K Ramakrishnan*, Dinesh Jayaraman, Kristen Grauman
DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor with Application to In-Hand Manipulation
We design and demonstrate a new tactile sensor for in-hand manipulation with a robotic hand.
Mike Lambeta, Po-Wei Chou, Stephen Tian, Brian Yang, Benjamin Maloon, Victoria Rose Most, Dave Stroud, Raymond Santos, Ahmad Byagowi, Gregg Kammerer, Dinesh Jayaraman, Roberto Calandra
Emergence of Exploratory Look-Around Behaviors Through Active Observation Completion
Santhosh K Ramakrishnan*, Dinesh Jayaraman, Kristen Grauman
Manipulation by Feel: Touch-Based Control with Deep Predictive Models
High-resolution tactile sensing, combined with deep neural network approaches to prediction and planning originally developed for vision, enables high-precision tactile servoing tasks.
Stephen Tian, Frederik Ebert, Dinesh Jayaraman, Mayur Mudigonda, Chelsea Finn, Roberto Calandra, Sergey Levine
End-to-End Policy Learning For Active Visual Categorization
Active visual perception with realistic and complex imagery can be formulated as an end-to-end reinforcement learning problem, whose solution benefits from additionally exploiting the auxiliary task of action-conditioned future prediction.
Dinesh Jayaraman, Kristen Grauman
Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks
Task-agnostic visual exploration policies may be trained through a proxy “observation completion” task that requires an agent to “paint” unobserved views given a small set of observed views.
Dinesh Jayaraman, Kristen Grauman
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
By combining high-precision tactile sensing with deep learning, robots can iteratively adjust their grasp configurations, boosting grasping success from 65% to 94%.
Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H Adelson, Sergey Levine
Embodied Learning for Visual Recognition
Dinesh Jayaraman
Learning Image Representations Tied to Egomotion from Unlabeled Video
An agent’s continuous visual observations include information about how the world responds to its actions. This can provide an effective source of self-supervision for learning visual representations.
Dinesh Jayaraman, Kristen Grauman