Selected Publications

Please see my Google Scholar for an up-to-date list of all publications.

Why do LLaVA Vision-Language Models Reply to Images in English?

Musashi Hinck, Carolin Holtermann, Matthew Lyle Olson, Florian Schneider, Sungduk Yu, Anahita Bhiwandiwalla, Anne Lauscher, Shaoyen Tseng, Vasudev Lal

We uncover a surprising multilingual bias occurring in a popular class of multimodal vision-language models (VLMs). Including an image in the query to a LLaVA-style VLM significantly increases the likelihood of the model returning an English response, regardless of the language of the query.

Evaluating the Elementary Multilingual Capabilities of Large Language Models with MultiQ

Carolin Holtermann, Paul Röttger, Timm Dill and Anne Lauscher

We investigate the basic multilingual capabilities of state-of-the-art open LLMs beyond their intended use. Specifically, we introduce a new silver-standard benchmark, which we use to assess the models' multilingual language fidelity and question-answering accuracy.
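To make the language-fidelity part of the evaluation concrete, here is a minimal sketch of how such a metric could be computed. It uses the langdetect package as a stand-in language identifier and a hypothetical list of prompt/response pairs; it is not the paper's actual evaluation code.

    # Minimal sketch (my assumptions, not the paper's evaluation code):
    # language fidelity as the share of responses written in the same
    # language as the prompt; langdetect is a stand-in identifier.
    from langdetect import detect

    def language_fidelity(prompt_langs, responses):
        """prompt_langs: ISO 639-1 codes of the prompts; responses: model outputs."""
        hits = sum(1 for lang, resp in zip(prompt_langs, responses) if detect(resp) == lang)
        return hits / len(responses)

    # Hypothetical usage: two German prompts, one answered in English.
    print(language_fidelity(["de", "de"],
                            ["Die Hauptstadt ist Berlin.", "The capital is Berlin."]))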

What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition

Carolin Holtermann, Markus Frohmann, Navid Rekabsaz and Anne Lauscher

We propose a novel framework for zero-shot module composition, which encompasses existing and some novel variations for selecting, weighting, and combining parameter modules under a single unified notion. Focusing on the scenario of domain knowledge and adapter layers, our framework provides a systematic unification of concepts, allowing us to conduct the first comprehensive benchmarking study of various zero-shot knowledge composition strategies.
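As one concrete instance of what "weighting and combining parameter modules" can mean, the sketch below merges several domain adapters by weighted parameter averaging, with no additional training. The file names and uniform weights are illustrative assumptions, not the paper's configuration.

    # Illustrative sketch (assumed file names and weights, not the paper's code):
    # zero-shot composition of domain adapters by weighted parameter averaging.
    import torch

    def compose_adapters(adapter_state_dicts, weights):
        """Combine several adapter parameter sets into one without any training."""
        return {
            name: sum(w * sd[name] for w, sd in zip(weights, adapter_state_dicts))
            for name in adapter_state_dicts[0]
        }

    # Hypothetical usage: average three domain adapters with uniform weights.
    adapters = [torch.load(f"adapter_domain_{i}.pt") for i in range(3)]
    merged = compose_adapters(adapters, weights=[1 / 3] * 3)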

ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale

Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher and Navid Rekabsaz

Multi-task learning (MTL) has shown considerable practical benefits, particularly when using pre-trained language models (PLMs). On the flip side, current two-stage MTL methods come at the cost of introducing a substantial number of additional parameters. In this work, we address this issue by learning to linearly scale the output representations of source adapters for transfer learning.
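A rough sketch of the underlying idea, under my own assumptions rather than the released ScaLearn code: for a target task, only a handful of scalar coefficients are trained, and they rescale and sum the output representations produced by frozen source-task adapters.

    # Rough sketch (my assumptions, not the released ScaLearn implementation):
    # a target task learns only scalar coefficients that rescale and sum the
    # output representations produced by frozen source-task adapters.
    import torch
    import torch.nn as nn

    class ScalingComposition(nn.Module):
        def __init__(self, num_sources: int):
            super().__init__()
            # One learnable scaling coefficient per source adapter.
            self.scales = nn.Parameter(torch.ones(num_sources) / num_sources)

        def forward(self, source_outputs):
            # source_outputs: list of (batch, seq_len, hidden) tensors from
            # frozen source adapters at the same transformer layer.
            stacked = torch.stack(source_outputs, dim=0)
            return (self.scales.view(-1, 1, 1, 1) * stacked).sum(dim=0)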

Fair and Argumentative Language Modeling for Computational Argumentation

Carolin Holtermann, Anne Lauscher and Simone Ponzetto

Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models.