
Welcome to Caglar Gulcehre's academic page. 

Professor and PI of CLAIRE lab @ EPFL,

Research Consultant at Google DeepMind

Ex: Staff Research Scientist @ Google DeepMind

Google Scholar: Click here!

Twitter: @caglarml

Github: github.com/caglar 

Email: ca9lar At Gmail

Location: Lausanne, Switzerland 

Important note for students: I receive a lot of emails from students, and it is impossible for me to reply to all of them. For now, instead of cold-emailing me directly about PhD or MSc positions, please check the Contact Me page for more information.


[Photo of Caglar Gulcehre]

BIO

I am currently a professor at EPFL, where I lead the CLAIRE research lab. Previously, I was a staff research scientist at Google DeepMind, working at the intersection of reinforcement learning, foundation models, novel architectures and training paradigms, safety and alignment, and natural language understanding. During my time at DeepMind, I led or co-led several projects, ranging from next-generation sequence modeling architectures to alignment, safety, and offline RL.


I am interested in building agents that can learn from a feedback signal (often weak, sparse, and noisy in the real world) while utilizing the unlabeled data available in the environment. I am also interested in improving our understanding of existing algorithms and developing new ones to enable real-world applications with positive social impact. I am particularly fascinated by the scientific applications of machine learning algorithms. I enjoy working in multi- and cross-disciplinary teams, and I am often inspired by neuroscience, biology, and the cognitive sciences when working on algorithmic solutions.


I completed my Ph.D. under the supervision of Yoshua Bengio at Mila.


I defended my thesis, "Learning and time: on using memory and curricula for language understanding," in 2018, with Christopher Manning as my external examiner. The research topics I currently work on include, but are not limited to, reinforcement learning, offline RL, large-scale deep architectures (or foundation models, as they are called these days), and representation learning (including self-supervised learning, new architectures, causal representations, etc.). I have served as an area chair and reviewer for major machine learning conferences such as ICML, NeurIPS, and ICLR, and for journals such as Nature and JMLR. I have published at numerous influential venues, including Nature, JMLR, NeurIPS, ICML, ICLR, ACL, and EMNLP. I gave the Memory-Augmented Neural Networks tutorial at EMNLP 2018. My work has received the best paper award at the Nonconvex Optimization workshop at NeurIPS and an honorable mention for best paper at ICML 2019. I have co-organized seven workshops at top machine learning conferences such as NeurIPS, ICML, and ICLR.


Research Areas: Machine Learning, Reinforcement Learning, Deep Learning, Reasoning, Alignment and Safety, among others.

Updates

- Gave a keynote talk at Deep Learning Indaba in Dakar, Senegal, on the shortcomings of transformers and large language models, alongside other incredible speakers such as Samy Bengio.
- Was a panelist at the Long Context Foundation Models (LCFM) panel at ICML 2024.
- Co-organizing the Next Generation of Sequence Models workshop at ICML 2024.
- Our paper "Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers" is on arXiv!
- Gave a lecture on transformers and foundation models at EEML 2024.
- We published our work on Griffin, an efficient, high-performance state-space-model architecture for foundation models.
- Published our Reinforced Self-Training work, an efficient approach to the alignment of LLMs.
- Gave a talk at the EPFL IC department on the evolution of LLM architectures.
- Gave a talk at the ICRC in Geneva on open-source state-of-the-art foundation models (November 2023).
- Gave a talk at a UN workshop on foundation models in Geneva, on the risks and potential of foundation models for humanitarian operations (October 2023).
- Gave a talk at EEML 2023 in Albania on the history of large language models.
- Our paper ReST, on reinforcement learning from human feedback, is on arXiv!
- We developed a new sequence modeling paradigm called LRU (Linear Recurrent Units); our paper was published at ICML 2023.
- Our paper "On integrating a language model into neural machine translation" received the best research paper award at Interspeech 2022.
- Our paper "An Empirical Study of Implicit Regularization in Deep Offline RL" is on arXiv.
- We are organizing the ML Evaluation Standards workshop at ICLR 2022.
- We presented our paper "StarCraft II Unplugged: Large Scale Offline Reinforcement Learning" at the Deep RL workshop at NeurIPS 2021.
- Our paper Active Offline Policy Selection was accepted to NeurIPS 2021.
- Presented the Intro to RL (part 1 slides) and Offline RL (part 2 slides) lectures at the DeepLearn 2021 Summer School.
- We released the DeepMind Lab and Bsuite datasets for offline RL under RL Unplugged.
- Our paper On Instrumental Variable Regression for Deep Offline Policy Evaluation is on arXiv.
- Our paper on Regularized Behavior Value Estimation, a single-step policy improvement method, is on arXiv.
- Our paper Addressing Extrapolation Error in Deep Offline Reinforcement Learning was an oral at the Offline RL Workshop at NeurIPS 2020.
- We released the hard-eight task suite used in the "Making Efficient Use of Demonstrations" paper.

[Photo: inaugural lecture]

Advising and contributing to academic and research initiatives is my passion. I specialize in foundation models, reinforcement learning, natural language processing, machine learning, and deep learning applications.

This space is dedicated to sharing insights into my academic background, research contributions, and the impact of my work. Feel free to get in touch to discuss potential collaborations or research opportunities.

Research Collaborations

Explore some of the academic institutions and organizations I have worked at:

[Institution and organization logos]
Image by Yanuka Deneth

My Group at EPFL

I am fortunate to work with incredible individuals at CLAIRE (Caglar Gulcehre Laboratory of Artificial Intelligence Research) on developing efficient, trustworthy, and reliable algorithms with system-2-level reasoning abilities.

Recent Blogposts

Coming soon... I will be posting blog posts here.

Keep an eye on this space.

© 2024 by Caglar Gulcehre
