
Caglar Gulcehre

Professor and Lead of CLAIRE lab @ EPFL

Previously: Staff Research Scientist, DeepMind

Google Scholar: Click here!

Twitter: @caglarml

GitHub: github.com/caglar (not up to date!)

Email: ca9lar At Gmail

Location: Lausanne, Switzerland 

Important note: I receive hundreds of emails every day from students all around the world, and at the moment it is very difficult for me to deal with them all. Unfortunately, if you email me about working with my lab, I will likely not be able to answer. We are planning to put a process in place to remedy this, and I will post it here once it is ready. I am sorry about that. For now, instead of cold-emailing me directly, please check this page.


Bio

I am a professor at EPFL, where I lead the CLAIRE lab, working at the intersection of Reinforcement Learning, Deep Learning, Representation Learning, and Natural Language Understanding. Previously, I was a staff research scientist at DeepMind.

I am interested in building agents that can learn from feedback signals (often weak, sparse, and noisy in the real world) while utilizing the unlabeled data available in the environment. I also work on improving our understanding of existing algorithms and on developing new ones to enable real-world applications with positive social impact. I am particularly fascinated by scientific applications of machine learning. I enjoy working in multi- and cross-disciplinary teams and am often inspired by neuroscience, biology, and the cognitive sciences when working on algorithmic solutions.

I finished my Ph.D. under the supervision of Yoshua Bengio at MILA.

I defended my thesis, "Learning and time: on using memory and curricula for language understanding," in 2018, with Christopher Manning as my external examiner. The research topics I currently work on include, but are not limited to, reinforcement learning, offline RL, large-scale deep architectures (or foundation models, as they are called these days), and representation learning (including self-supervised learning, new architectures, and causal representations). I have served as an area chair and reviewer for major machine learning conferences such as ICML, NeurIPS, and ICLR, and for journals such as Nature and JMLR. I have published at influential venues including Nature, JMLR, NeurIPS, ICML, ICLR, ACL, and EMNLP. My work received the best paper award at the NeurIPS Nonconvex Optimization workshop and an honorable mention for best paper at ICML 2019.

I have co-organized the Science and Engineering of Deep Learning workshops at NeurIPS and ICLR.

Research themes:
  • Neural Networks
  • Language
  • Computation
  • Brain & Cognition

Selected Publication

Regularized Behavior Value Estimation

Authors

Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas

Abstract

Offline reinforcement learning restricts the learning process to rely only on logged data, without access to an environment. While this enables real-world applications, it also poses unique challenges. One important challenge is dealing with errors caused by the overestimation of values for state-action pairs not well covered by the training data. Due to bootstrapping, these errors get amplified during training and can lead to divergence, thereby crippling learning. To overcome this challenge, we introduce Regularized Behavior Value Estimation (R-BVE). Unlike most approaches, which use policy improvement during training, R-BVE estimates the value of the behavior policy during training and only performs policy improvement at deployment time. Further, R-BVE uses a ranking regularization term that favors actions in the dataset that lead to successful outcomes. We provide ample empirical evidence of R-BVE's effectiveness, including state-of-the-art performance on the RL Unplugged Atari dataset. We also test R-BVE on new datasets, from bsuite and a challenging DeepMind Lab task, and show that R-BVE outperforms other state-of-the-art discrete control offline RL methods.
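
To make the two ingredients concrete, here is a minimal tabular sketch in Python. It is my own illustration under simplifying assumptions, not the paper's implementation: the paper uses deep Q-networks, and the names rbve_update, reg_weight, margin, and the per-transition success flag are all hypothetical.

import numpy as np

# Hypothetical sketch of the R-BVE idea on a tabular Q-function of shape
# [num_states, num_actions]; treat this as an illustration only.

def rbve_update(Q, batch, lr=0.1, gamma=0.99, reg_weight=0.1, margin=1.0):
    """One R-BVE-style update over logged transitions.

    Each transition is (s, a, r, s_next, a_next, done, success), where
    (s_next, a_next) is the *logged* next state-action pair and `success`
    flags transitions from successful episodes (an assumed annotation).
    """
    for s, a, r, s_next, a_next, done, success in batch:
        # Behavior value estimation: a SARSA-style target that bootstraps
        # from the logged next action, never from out-of-distribution
        # actions (the source of overestimation in offline RL).
        target = r if done else r + gamma * Q[s_next, a_next]
        Q[s, a] += lr * (target - Q[s, a])

        # Ranking regularization: on successful data, push the logged action
        # above every other action by at least `margin` (hinge-loss gradient).
        if success:
            for b in range(Q.shape[1]):
                if b != a and Q[s, a] - Q[s, b] < margin:
                    Q[s, a] += lr * reg_weight
                    Q[s, b] -= lr * reg_weight
    return Q

def deployment_policy(Q, s):
    # Policy improvement happens only at deployment: act greedily w.r.t. Q.
    return int(np.argmax(Q[s]))

Note that the training loop never performs policy improvement; acting greedily with respect to Q happens only in deployment_policy, which is what shields the bootstrapped targets from poorly covered state-action pairs.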

Work Experience

DeepMind (2017-)
Research Scientist

MSR (2016) 
Part-time Researcher

IBM Research (2015-2016)
Research Intern

DeepMind (2014)
Research Intern

Maluuba (2015)
Part-time Researcher

Tubitak (2008-2011)
Researcher

MILA (2012-2017)
PhD Student and Research Assistant

METU (2008-2010)
Developer


Caglar Gulcehre

Lausanne, Switzerland

  • Mastodon
  • Twitter
  • LinkedIn

