Carolin Holtermann (she/her)

I’m Carolin Holtermann, a second-year PhD student at the University of Hamburg, Germany.

Why do LLaVA Vision-Language Models Reply to Images in English?

We uncover a surprising multilingual bias in a popular class of multimodal vision-language models (VLMs): including an image in a query to a LLaVA-style VLM significantly increases the likelihood that the model responds in English, regardless of the language of the query.

What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition

We propose a novel framework for zero-shot module composition that brings existing and new variations of selecting, weighting, and combining parameter modules under a single unified notion. Focusing on the scenario of domain knowledge and adapter layers, the framework systematically unifies these concepts, allowing us to conduct the first comprehensive benchmarking study of zero-shot knowledge composition strategies.
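To give a flavor of what one such composition strategy can look like, here is a minimal sketch of one of the simplest options, uniform averaging of adapter parameters. The function name and the tensor-dict representation of an adapter are illustrative assumptions on my part, not the paper's code; the framework covers many more selection and weighting choices.

```python
import torch

def compose_uniform(adapters: list[dict[str, torch.Tensor]]) -> dict[str, torch.Tensor]:
    """Combine several adapters zero-shot by averaging each
    parameter tensor across all source adapters (one of the
    simplest composition strategies; illustrative names)."""
    n = len(adapters)
    return {name: sum(a[name] for a in adapters) / n for name in adapters[0]}
```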

ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale

Multi-task learning (MTL) has shown considerable practical benefits, particularly when using pre-trained language models (PLMs). However, current two-stage MTL methods introduce a substantial number of additional parameters. In this work, we address this issue by linearly scaling the output representations of source adapters for transfer learning.
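As a rough illustration of the idea (a sketch under my own naming, not ScaLearn's reference implementation): the output representations of several frozen source adapters are combined through a small set of learned scaling coefficients, so only a handful of parameters are trained for the target task.

```python
import torch
import torch.nn as nn

class ScaleCombine(nn.Module):
    """Combine frozen source-adapter outputs via learned scalars."""

    def __init__(self, num_sources: int):
        super().__init__()
        # One learnable scaling coefficient per source adapter,
        # initialized to a uniform combination.
        self.omega = nn.Parameter(torch.full((num_sources,), 1.0 / num_sources))

    def forward(self, source_outputs: torch.Tensor) -> torch.Tensor:
        # source_outputs: (num_sources, batch, seq_len, hidden)
        # Weighted sum over the source dimension.
        return torch.einsum("s,sbld->bld", self.omega, source_outputs)
```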

Work

  1. University of Hamburg, PhD Candidate
  2. Blue Yonder, Data Science Consultant
  3. SAP, Cloud Consultant
  4. SAP, Cooperative Student
Download CV