Self-Supervised Learning

Discovering Deformable Keypoint Pyramids

Jul 25, 2022

An Exploration of Embodied Visual Exploration

Mar 1, 2021

Model-Based Inverse Reinforcement Learning from Visual Demonstrations

We learn reward functions in an unsupervised object-keypoint space, enabling a robot to follow third-person demonstrations with model-based RL; a toy sketch follows below.

Oct 15, 2020
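A minimal sketch of the keypoint-space reward idea, assuming a keypoint detector and a learned dynamics model already exist; the arrays here are stand-in data and `keypoint_reward` is a hypothetical helper, not the paper's code:

```python
import numpy as np

def keypoint_reward(pred_keypoints, demo_keypoints):
    """Reward = negative mean distance between model-predicted and
    demonstrated object keypoints, both [K, 2] arrays in image coordinates."""
    return -np.linalg.norm(pred_keypoints - demo_keypoints, axis=-1).mean()

# Toy usage: score candidate rollouts from a learned dynamics model by how
# closely their predicted keypoints track the demonstration frame.
rng = np.random.default_rng(0)
demo = rng.random((4, 2))          # stand-in keypoints from a demo frame
rollouts = rng.random((10, 4, 2))  # stand-in keypoints from 10 model rollouts
best = max(range(10), key=lambda i: keypoint_reward(rollouts[i], demo))
```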

MAVRIC: Morphology-Agnostic Visual Robotic Control

We demonstrate visual control of a robot with unknown morphology within 20 seconds, from a single uncalibrated RGB-D camera; a toy Jacobian-fitting sketch follows below.

May 17, 2020
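One way to read "morphology-agnostic visual control" is fitting a visuomotor map from scratch by motor babbling, then servoing with it. The sketch below illustrates that generic idea under stand-in data; it is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Babbling phase: send random commands and record how a tracked point on the
# robot moves in the image (both are simulated stand-ins here).
commands = rng.uniform(-1, 1, size=(50, 4))       # random motor commands
true_map = rng.normal(size=(4, 2))                # unknown command->pixel map
pixel_deltas = commands @ true_map + 0.01 * rng.normal(size=(50, 2))

# Fit a linear visuomotor Jacobian by least squares: pixel_delta ~ command @ J.
J, *_ = np.linalg.lstsq(commands, pixel_deltas, rcond=None)

# Servoing phase: choose the command that moves the tracked point toward a
# target pixel, using the pseudoinverse of the fitted Jacobian.
error = np.array([12.0, -5.0])         # target_pixel - current_pixel
command = error @ np.linalg.pinv(J)    # approximately solves command @ J = error
```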

DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor with Application to In-Hand Manipulation

We design and demonstrate a new low-cost, compact, high-resolution tactile sensor for in-hand manipulation with a robotic hand.

May 17, 2020

Manipulation by Feel: Touch-Based Control with Deep Predictive Models

High-resolution tactile sensing, combined with deep-network prediction and planning, enables high-precision tactile servoing tasks; a minimal planning sketch follows below.

Jan 1, 2019
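A toy sketch of planning with a tactile predictive model: `predict_tactile` stands in for the learned deep prediction model, and the random-shooting planner is a simplification of model-predictive control, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_tactile(reading, action):
    """Stand-in for a learned deep tactile prediction model: here, a fake
    linear response of the tactile image to the action."""
    return reading + 0.1 * action.sum() * np.ones_like(reading)

def plan_action(reading, goal, n_samples=256):
    """Random-shooting MPC: sample candidate actions and pick the one whose
    predicted next tactile reading is closest to the goal reading."""
    actions = rng.uniform(-1, 1, size=(n_samples, 3))
    costs = [np.linalg.norm(predict_tactile(reading, a) - goal) for a in actions]
    return actions[int(np.argmin(costs))]

current = rng.random((32, 32))   # stand-in current tactile image
goal = rng.random((32, 32))      # tactile reading we want to reach
action = plan_action(current, goal)
```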

Emergence of Exploratory Look-Around Behaviors Through Active Observation Completion

Jan 1, 2019

ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids

Viewgrids, which lay out all views of an object or scene, provide a useful framework for self-supervised feature learning: a network trained to reconstruct the full viewgrid from a single view must implicitly recover 3D shape. A toy sketch appears below.

Jan 1, 2018
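A minimal encoder-decoder sketch of the lifting idea, assuming tiny 32x32 views and a 12-view viewgrid; the architecture and data are illustrative stand-ins, not the ShapeCodes model:

```python
import torch
import torch.nn as nn

class ViewgridLifter(nn.Module):
    """Toy encoder-decoder: embed a single grayscale view into a code,
    then decode all n_views views of the viewgrid from that code."""
    def __init__(self, n_views=12, code_dim=128):
        super().__init__()
        self.n_views = n_views
        self.encode = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU(),
            nn.Linear(256, code_dim))
        self.decode = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, n_views * 32 * 32))

    def forward(self, view):                 # view: [B, 1, 32, 32]
        code = self.encode(view)             # the learned representation
        grid = self.decode(code)
        return grid.view(-1, self.n_views, 32, 32), code

model = ViewgridLifter()
views = torch.rand(8, 1, 32, 32)
target_grids = torch.rand(8, 12, 32, 32)   # stand-in ground-truth viewgrids
pred, code = model(views)
loss = nn.functional.mse_loss(pred, target_grids)  # self-supervised objective
loss.backward()
```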

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

By combining high-resolution tactile sensing with deep learning, robots can iteratively adjust their grasp configurations, boosting grasp success from 65% to 94%; a toy regrasping sketch follows below.

Jan 1, 2018
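A toy sketch of the regrasp loop: `grasp_success_prob` stands in for a learned vision+touch outcome predictor, and the sample-and-score loop and 0.9 threshold are illustrative assumptions, not the paper's policy:

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_success_prob(tactile, visual, adjustment):
    """Stand-in for the learned vision+touch outcome predictor: estimates
    the probability that a grasp succeeds after applying `adjustment`."""
    score = adjustment.sum() + tactile.mean() - visual.mean()
    return 1.0 / (1.0 + np.exp(-score))

def choose_adjustment(tactile, visual, n_candidates=64):
    """Sample candidate grasp adjustments and return the highest-scoring
    one along with its predicted success probability."""
    candidates = rng.uniform(-0.2, 0.2, size=(n_candidates, 4))
    probs = [grasp_success_prob(tactile, visual, a) for a in candidates]
    best = int(np.argmax(probs))
    return candidates[best], probs[best]

tactile, visual = rng.random((16, 16)), rng.random((16, 16))
adjustment, prob = choose_adjustment(tactile, visual)
decision = "lift" if prob > 0.9 else "regrasp"  # regrasp, then re-evaluate
```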

Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks

Task-agnostic visual exploration policies can be trained through a proxy "observation completion" task that requires an agent to "paint" unobserved views given a small set of observed views; a toy sketch follows below.

Jan 1, 2018
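A minimal sketch of the observation-completion objective, assuming flattened 16x16 views and 2-D (elevation, azimuth) poses; the `ObservationCompleter` module and all tensors are stand-ins meant to show the supervision signal, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ObservationCompleter(nn.Module):
    """Toy 'observation completion' model: aggregate a few (view, pose)
    observations into a scene code, then paint the view at a query pose."""
    def __init__(self, view_dim=16 * 16, pose_dim=2, code_dim=64):
        super().__init__()
        self.embed = nn.Linear(view_dim + pose_dim, code_dim)
        self.paint = nn.Sequential(
            nn.Linear(code_dim + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, view_dim))

    def forward(self, views, poses, query_pose):
        # views: [B, N, view_dim], poses: [B, N, pose_dim]
        code = self.embed(torch.cat([views, poses], dim=-1)).mean(dim=1)
        return self.paint(torch.cat([code, query_pose], dim=-1))

model = ObservationCompleter()
views = torch.rand(4, 3, 256)   # 3 observed views per scene, flattened
poses = torch.rand(4, 3, 2)     # their (elevation, azimuth) poses
query = torch.rand(4, 2)        # pose of a held-out view
target = torch.rand(4, 256)     # pixels of that held-out view
loss = nn.functional.mse_loss(model(views, poses, query), target)
loss.backward()
```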

Learning Image Representations Tied to Egomotion

An agent's continuous visual observations carry information about how the world responds to its actions, which can serve as an effective source of self-supervision for learning visual representations; a toy equivariance sketch follows below.

Jan 1, 2015
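A toy sketch of an egomotion-equivariance objective: features of consecutive frames should change predictably given the agent's own motion. The tiny encoder, the action-conditioned linear map, and all tensors are illustrative stand-ins, not the paper's model:

```python
import torch
import torch.nn as nn

# `transform` maps the features of frame t, together with the egomotion that
# followed (e.g., yaw and forward translation), to the features of frame t+1.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))
transform = nn.Linear(64 + 2, 64)     # feature + 2-D egomotion -> feature

frame_t = torch.rand(8, 1, 32, 32)
frame_t1 = torch.rand(8, 1, 32, 32)   # frame observed after the motion
egomotion = torch.rand(8, 2)          # stand-in odometry for that motion

z_t, z_t1 = encoder(frame_t), encoder(frame_t1)
pred_z_t1 = transform(torch.cat([z_t, egomotion], dim=-1))
loss = nn.functional.mse_loss(pred_z_t1, z_t1)  # self-supervision signal
loss.backward()
```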