Aniket Didolkar

I am a Ph.D. student at Mila and the University of Montreal, advised by Prof. Yoshua Bengio and Dr. Anirudh Goyal. Prior to my Ph.D., I was a master's student at Mila.

I received my bachelor's degree in computer science from Manipal Institute of Technology, India. During my bachelor's, I worked as a researcher at the MIDAS Lab at IIIT Delhi, advised by Prof. Rajiv Ratn Shah. I also interned at IISc Bangalore, advised by Prof. Aditya Gopalan and Prof. Himanshu Tyagi, for my bachelor's thesis on air quality monitoring and prediction.

I have also spent time in industry at Microsoft Research - NYC (Fall 2022), Ubisoft (Summer 2019), and Symbl.ai (Summer 2018). I also spent the summer of 2019 building ChainerX, an open-source deep learning library, as part of Google Summer of Code.

CV  /  Email  /  Google Scholar  /  Twitter  /  Github

Research

I am interested in building models inspired by human inductive biases that can build on prior knowledge to generalize to out-of-distribution scenarios.

Cycle Consistency Driven Object Discovery
Aniket Didolkar, Anirudh Goyal, Yoshua Bengio
ICLR 2024
arXiv

Introduced auxiliary objectives based on cycle consistency for unsupervised object discovery.

Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information
Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm Van Seijen, Remi Tachet Des Combes, John Langford
ICML 2023
arXiv

Utilized the multi-step inverse dynamics objective for learning robust RL policies in noisy environments.

Representation Learning in Deep RL with Discrete Information Bottleneck
Riashat Islam, Hongyu Zang, Manan Tomar, Aniket Didolkar, Md Mofijul Islam, Anirudh Goyal, Samin Yeasar Arnob, Xin Li, Tariq Iqbal, Nicolas Heess, Alex Lamb
AISTATS 2023
arXiv

Utilized discrete bottlenecks such as VQ-VAE to learn robust RL policies in noisy environments.

Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
Aniket Didolkar, Kshitij Gupta, Anirudh Goyal, Alex Lamb, Nan Rosemary Ke, Yoshua Bengio
NeurIPS, 2022
arXiv / slides

Introducing a recurrent stream into the Transformer architecture.

Guaranteed Discovery of Controllable Latent States with Multi-Step Inverse Models
Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, John Langford
TMLR 2023
arXiv

An algorithm for discovering the minimal controllable latent state that contains all the information needed to control the agent, while learning to discard all other irrelevant information.

Coordination Among Neural Modules Through a Shared Global Workspace
Anirudh Goyal, Aniket Didolkar, Alex Lamb, Kartikeya Badola, Nan Rosemary Ke, Nasim Rahaman, Jonathan Binas, Charles Blundell, Michael Mozer, Yoshua Bengio
ICLR, 2022 (Oral Presentation - top 5% of accepted papers)
arXiv

Facilitating communication between modules using a limited-capacity bottleneck.

Neural Production Systems
Aniket Didolkar*, Anirudh Goyal*, Nan Rosemary Ke, Charles Blundell, Philippe Beaudoin, Nicolas Heess, Michael Mozer, Yoshua Bengio
NeurIPS, 2021
arXiv / open review

Dynamically constructing sparse graphs for modulating communication between objects.

Systematic Evaluation of Causal Discovery for Visual Model Based Reinforcement Learning
Nan Rosemary Ke*, Aniket Didolkar*, Sarthak Mittal, Anirudh Goyal, Guillaume Lajoie, Stefan Bauer, Danilo Rezende, Yoshua Bengio, Michael Mozer, Christopher Pal
NeurIPS Datasets and Benchmarks Track, 2021
arXiv / NeurIPS Datasets and Benchmarks Track Proceedings / Code

A new and highly flexible benchmark for evaluating causal discovery in model-based RL.

Augmenting NLP models using Latent Feature Interpolations
Amit Jindal, Aniket Didolkar, Arijit Ghosh Chowdhury, Ramit Sawhney, Rajiv Ratn Shah, Di Jin
COLING, 2020
PDF

Proposed a new formulation of mixup for NLP.

SpeechMix - Augmenting Deep Sound Recognition using Hidden Space Interpolations
Amit Jindal, Narayanan Elavathur Ranganatha, Aniket Didolkar, Arijit Ghosh Chowdhury, Ramit Sawhney, Rajiv Ratn Shah, Di Jin
Interspeech, 2020
PDF

Applied mixup in the domain of speech processing.

ARHNet - Leveraging Community Interaction for Detection of Religious Hate Speech in Arabic
Aniket Didolkar*, Arijit Ghosh Chowdhury*, Ramit Sawhney, Rajiv Ratn Shah
ACL - Student Research Workshop, 2019
PDF

Leveraging social media connectivity through a graph for profiling hate speech on Twitter.

[Re] h-detach: Modifying the LSTM gradient towards better optimization
Aniket Didolkar
ReScience, volume 5, issue 2
PDF

Reproduced the paper "h-detach: Modifying the LSTM gradient towards better optimization" as part of the ICLR 2019 Reproducibility Challenge.

Experience
June 2023 - Present
Machine Learning Intern
Advisor - Dr. Jason Hartford
Aug 2022 - Present
Research Intern
Organisation - Microsoft Research, NYC
Advisor - Dr. Alex Lamb
Aug 2021 - Present
Graduate Student Researcher
Advisors - Prof. Yoshua Bengio and Dr. Anirudh Goyal

Aug 2020 - Aug 2021
Research Intern
Advisors - Dr. Anirudh Goyal and Prof. Yoshua Bengio
Jan 2020 - July 2020
Research Intern
Organisation - IISc Bangalore
Advisors - Prof. Aditya Gopalan and Prof. Himanshu Tyagi
April 2019 - Aug 2020
Research Intern
Organisation - MIDAS Lab, IIIT Delhi
Advisor - Prof. Rajiv Ratn Shah
May 2019 - Aug 2019
Student Developer
Organisation - NumFOCUS (Suborganisation - Chainer)
May 2019 - July 2019
Automation Intern
Organisation - Ubisoft
June 2018 - July 2018
Data Science Intern
Organisation - Symbl.ai
Feb 2018 - Feb 2019
Undergraduate Researcher