
Fighting Copycat Agents in Behavioral Cloning from Multiple Observations.

Oct 15, 2020 · Chuan Wen, Jierui Lin, Trevor Darrell, Dinesh Jayaraman, Yang Gao
PDF · Cite · arXiv
Type: Conference paper
Publication: In NeurIPS
Last updated on Oct 15, 2020
Imitation Learning · Causality · Reinforcement Learning · Distributional Shift

