Active Perception

Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning

An Exploration of Embodied Visual Exploration

DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor with Application to In-Hand Manipulation

We design and demonstrate a new tactile sensor for in-hand manipulation with a robotic hand.

Manipulation by Feel: Touch-Based Control with Deep Predictive Models

High-resolution tactile sensing, combined with visual prediction and planning approaches based on deep neural networks, enables high-precision tactile servoing tasks.
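
A minimal sketch of what touch-based control with a predictive model can look like: random-shooting MPC over a learned action-conditioned tactile dynamics model. The `dynamics` callable and the tensor shapes below are hypothetical placeholders, not the paper's exact architecture.

```python
import torch

def plan_action(dynamics, tactile_now, tactile_goal,
                n_samples=256, horizon=5, action_dim=3):
    """Random-shooting MPC: sample action sequences, roll them through the
    learned tactile predictor, and execute the first action of the best one."""
    actions = torch.randn(n_samples, horizon, action_dim) * 0.1   # small motions
    obs = tactile_now.expand(n_samples, *tactile_now.shape)       # (N, C, H, W)
    for t in range(horizon):
        obs = dynamics(obs, actions[:, t])                        # predicted next touch reading
    cost = ((obs - tactile_goal) ** 2).flatten(1).mean(dim=1)     # distance to goal reading
    return actions[cost.argmin(), 0]                              # best sequence, first step
```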

Emergence of Exploratory Look-Around Behaviors Through Active Observation Completion

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

By combining high-precision tactile sensing with deep learning, robots can iteratively adjust their grasp configurations, boosting grasping success from 65% to 94%.
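
A minimal sketch of such a regrasping loop, assuming a learned success predictor over vision and touch; `robot`, `predict_success`, and `sample_adjustments` are hypothetical stand-ins for the paper's setup.

```python
import torch

def grasp_with_regrasping(robot, predict_success, sample_adjustments,
                          max_attempts=5, threshold=0.9):
    """Iteratively adjust the grasp until the predictor is confident enough."""
    grasp = robot.initial_grasp()
    for _ in range(max_attempts):
        image, touch = robot.sense(grasp)            # visual + tactile readings
        if predict_success(image, touch, grasp) > threshold:
            break                                    # confident enough to lift
        # Score a batch of nearby candidate adjustments; keep the best one.
        candidates = sample_adjustments(grasp, n=64)
        scores = torch.stack([predict_success(image, touch, g) for g in candidates])
        grasp = candidates[scores.argmax().item()]
    return robot.lift(grasp)
```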

Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks

Task-agnostic visual exploration policies may be trained through a proxy "observation completion" task that requires an agent to "paint" unobserved views given a small set of observed views.
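
A minimal sketch of that proxy objective: the agent selects a few views, and a decoder must "paint" every view of the scene from them. The `policy`, `encoder`, and `decoder` networks here are hypothetical stand-ins.

```python
import torch

def observation_completion_loss(panorama, policy, encoder, decoder, n_glimpses=4):
    """panorama: (V, C, H, W) tensor holding all V discrete views of one scene."""
    state, seen = None, []
    for _ in range(n_glimpses):
        view_idx, state = policy(seen, state)       # choose the next view to observe
        seen.append(encoder(panorama[view_idx]))    # encode the observed glimpse
    painted = decoder(torch.stack(seen))            # reconstruct all V views
    return ((painted - panorama) ** 2).mean()       # pixelwise completion error
```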

End-to-End Policy Learning For Active Visual Categorization

Active visual perception with realistic and complex imagery can be formulated as an end-to-end reinforcement learning problem, whose solution benefits from the auxiliary task of action-conditioned future prediction.
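
As a rough illustration, the combined objective can be sketched as a score-function policy-gradient term plus an auxiliary prediction term; the inputs and weighting below are hypothetical, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(log_probs, rewards, pred_next_feat, true_next_feat, aux_weight=0.1):
    """log_probs, rewards: (T,) per-step values for the chosen camera motions;
    pred/true_next_feat: (T, D) predicted vs. actual next-view features."""
    policy_loss = -(log_probs * rewards).mean()             # REINFORCE-style term
    aux_loss = F.mse_loss(pred_next_feat, true_next_feat)   # look-ahead auxiliary task
    return policy_loss + aux_weight * aux_loss
```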

Learning Image Representations Tied to Egomotion from Unlabeled Video

An agent's continuous visual observations include information about how the world responds to its actions. This can provide an effective source of self-supervision for learning visual representations.
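
One way to turn that signal into a training objective, sketched minimally: require a motion-conditioned map to carry the features of one frame onto the features of the next. The encoder and the linear map below are hypothetical stand-ins for the paper's equivariance formulation.

```python
import torch
import torch.nn as nn

class MotionMap(nn.Module):
    """Motion-conditioned transform acting on frame features."""
    def __init__(self, feat_dim=128, motion_dim=6):
        super().__init__()
        self.map = nn.Linear(feat_dim + motion_dim, feat_dim)

    def forward(self, feat, motion):
        return self.map(torch.cat([feat, motion], dim=-1))

def egomotion_loss(encoder, motion_map, frame_t, frame_t1, motion):
    z_t, z_t1 = encoder(frame_t), encoder(frame_t1)
    return ((motion_map(z_t, motion) - z_t1) ** 2).mean()  # features must track egomotion
```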

Embodied Learning for Visual Recognition

Pano2Vid: Automatic Cinematography For Watching 360-degree Videos

By exploiting human-uploaded web videos as weak supervision, we may train a system that learns what good videos look like, then automatically directs a virtual camera through pre-captured 360-degree videos to produce human-like footage.
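
A minimal sketch of the camera-direction step: score candidate viewing directions per frame with a classifier trained on human-captured video, then pick a smooth, high-scoring trajectory. The `scorer` and `project_glimpse` helpers are hypothetical, and this greedy selection is a simplification rather than the original work's method.

```python
import torch

def direct_camera(frames360, scorer, project_glimpse, directions, smooth=1.0):
    """directions: (K, 2) candidate (pan, tilt) angles per frame."""
    traj, prev = [], None
    for frame in frames360:
        glimpses = torch.stack([project_glimpse(frame, d) for d in directions])
        scores = scorer(glimpses)                    # "looks human-captured" score, (K,)
        if prev is not None:                         # penalize jerky camera motion
            scores = scores - smooth * (directions - prev).norm(dim=-1)
        prev = directions[scores.argmax()]
        traj.append(prev)
    return torch.stack(traj)
```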

Look-Ahead Before You Leap: End-to-End Active Recognition By Forecasting the Effect of Motion

Active visual perception with realistic and complex imagery can be formulated as an end-to-end reinforcement learning problem, whose solution benefits from the auxiliary task of action-conditioned future prediction.

Learning Image Representations Tied to Egomotion

An agent's continuous visual observations include information about how the world responds to its actions. This can provide an effective source of self-supervision for learning visual representations.
