We are dedicated to advancing the field of deep representation learning, particularly for sequentially ordered data such as biomedical time series and natural texts. Through these efforts, we aim to improve the diagnosis of various diseases and to enable large language models to perform new tasks based on natural language instructions. Leveraging data-driven modeling with deep neural networks, we focus on self-supervised and semi-supervised learning techniques to address the challenges posed by limited labeled examples. Our work also explores the potential of transfer learning to build effective predictive models when training data are scarce. A key aspect of our research is ensuring that these models generalize to unseen data, which is particularly critical in medical applications. We are fortunate to collaborate with a diverse network of partners, including teams at Utrecht University and the University of Dresden.

Recent topics

Deep learning for analyzing sleep with expert-level performance.

Improving our understanding of the many roles of sleep can help us gain insight into various diseases and ways to improve mental and physical well-being. We are investigating and developing predictive models that analyze sleep (electroencephalographic recordings, EEG) with expert-level performance. This line of research has the potential to facilitate large-scale studies by automating tedious and error-prone tasks, and to explore novel features that may predict prodromal stages of disease. Our models detect sleep stages in humans and mice. One of our recent models identifies sleep spindles, an important element of the sleep microarchitecture that is difficult to detect manually, with expert-level performance. With SomnoBot, we are enabling medical researchers to use state-of-the-art neural networks to analyze sleep, bridging the gap between research and application.
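To make the spindle-detection task concrete, here is a minimal sketch using a classical amplitude-threshold approach: band-pass the EEG to the sigma band (roughly 11 to 16 Hz), compute a smoothed envelope, and keep supra-threshold runs of spindle-like duration. This is an illustrative baseline, not the group's published neural-network model, and all parameter values are assumptions.

```python
import numpy as np

def detect_spindles(eeg, fs=256.0, band=(11.0, 16.0),
                    threshold=3.0, min_dur=0.5, max_dur=2.0):
    """Detect sleep-spindle candidates in a single-channel EEG trace.

    Classical amplitude-threshold sketch: band-pass to the sigma band
    via FFT masking, compute a smoothed envelope, and keep runs above
    threshold whose duration is spindle-like (~0.5-2 s).
    Returns a list of (start_s, end_s) tuples in seconds.
    """
    n = len(eeg)
    # Band-pass filter in the frequency domain (sigma band only).
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    sigma = np.fft.irfft(spec * mask, n)

    # Envelope: rectify and smooth with a 0.1 s moving average.
    win = max(1, int(0.1 * fs))
    env = np.convolve(np.abs(sigma), np.ones(win) / win, mode="same")

    # Threshold relative to the median envelope amplitude.
    above = env > threshold * np.median(env)

    # Collect supra-threshold runs with plausible spindle duration.
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            dur = (i - start) / fs
            if min_dur <= dur <= max_dur:
                events.append((start / fs, i / fs))
            start = None
    return events
```

On synthetic data (background noise plus a one-second 13 Hz burst), the detector returns a single event around the burst; real recordings are far noisier, which is one reason learned models can outperform such hand-tuned rules.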

Large Language Models (LLMs) for tasks in low-resource languages.

In recent years, large-scale language models have revolutionized natural language processing (NLP). These models perform better with increasing model complexity, training time, and training-set size, and their training data are largely composed of English texts. We are investigating fine-tuning and prompting strategies to apply LLMs to tasks in non-English settings, especially in German. Application areas include hate speech detection in German Facebook posts, automated assessment of the text complexity of German Wikipedia pages, and speaker attribution in parliamentary debates of the German Bundestag. With our contributions, we have twice been ranked the best academic team (GermEval 2021, GermEval 2022) and once the second-best team (GermEval 2023) in international NLP competitions.
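As an illustration of the prompting strategies mentioned above, a few-shot classification prompt for German hate speech detection might be assembled as follows. The example comments, labels, and instruction wording are hypothetical (not taken from the GermEval data), and the call to an actual LLM is omitted.

```python
# Sketch of a few-shot prompt for German hate speech detection.
# The examples and labels below are illustrative placeholders.

FEW_SHOT_EXAMPLES = [
    ("Der Artikel ist gut recherchiert, danke!", "KEIN HASS"),
    ("Leute wie du gehoeren weggesperrt!", "HASS"),
]

def build_prompt(post: str) -> str:
    """Assemble a German few-shot classification prompt for one post."""
    lines = [
        # Instruction: decide whether the comment contains hate speech,
        # answer only with HASS or KEIN HASS.
        "Entscheide, ob der folgende Facebook-Kommentar Hassrede "
        "enthaelt. Antworte nur mit HASS oder KEIN HASS.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Kommentar: {text}")
        lines.append(f"Antwort: {label}")
        lines.append("")
    # The model is expected to complete the final "Antwort:" line.
    lines.append(f"Kommentar: {post}")
    lines.append("Antwort:")
    return "\n".join(lines)
```

In practice, such a prompt would be sent to the model, and the completion after the final "Antwort:" would be parsed into a binary label; fine-tuning replaces the in-context examples with gradient updates on labeled German data.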


Code

We highly regard the principles of open-source software and believe that making code publicly available to replicate scientific results is crucial. Below is a compilation of code repositories accompanying our scholarly work.

Recent publications

For a full list of publications, visit Google Scholar.