Hi! I’m a researcher working on Natural Language Processing and Machine Learning. I recently completed a PhD at the University of Edinburgh, advised by Ivan Titov and Alexander Koller.
My PhD thesis was on compositional generalization in semantic parsing and on introducing inductive biases for the kinds of structures that are relevant for language (e.g. syntax) into deep learning models. I’m also interested in making neural models of language more data-efficient, better understanding how they work internally, and making them more modular, efficient, and easier to explain and debug. Before coming to Edinburgh, I received my Bachelor’s and Master’s degrees in computational linguistics from Saarland University.
News
- Our paper with John Gkountouras, Language Agents Meet Causality - Bridging LLMs and Causal World Models, has been accepted to ICLR 2025!
- Student researcher at Google DeepMind with Miloš Stanojević from July to December 2024.
- Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations has been accepted to EMNLP 2024!
- Invited Talk at McGill/Mila NLP reading group.
- Guest Lecture at the University of Amsterdam.
- SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation has been accepted to ACL 2024 and Cache & Distil: Optimising API Calls to Large Language Models has been accepted to ACL Findings!
- Talk on Structural Inductive Biases for Seq2Seq Models at the CL Seminar at the University of Amsterdam.
- Received an Outstanding Paper Award at ACL 2023 for Compositional Generalization without Trees using Multiset Tagging and Latent Permutations 🏅
- Compositional Generalisation with Structured Reordering and Fertility Layers has been accepted to EACL 2023.