Abstract

Bias in artificial intelligence is prevalent, especially among generative models. One such model, Contrastive Language-Image Pretraining (CLIP), is used to classify images in one-shot tasks and for pre-training the image generation model DALL-E. Biases reflected in these models are harmful to individuals of protected classes (e.g., race, gender, age, and sexuality). This thesis proposes two debiased versions of CLIP: CLIP-Race and Intersectional-CLIP, debiased with respect to race and to intersectional ethnicity and gender, respectively. Both models follow a proposed debiasing protocol that prepends learnable prompt tokens to the input and trains them against an adversarial classifier to debias CLIP. Results show reduced bias in both cases, as measured across six metrics.
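
The abstract describes the protocol only at a high level. Purely as an illustration of the general idea it names (a frozen backbone, learnable prepended prompt tokens, and an adversarial classifier), a minimal PyTorch sketch might look like the following; the FrozenEncoder stand-in, tensor shapes, hyperparameters, and training loop are assumptions made for this sketch, not code or details from the thesis.

import torch
import torch.nn as nn

# Hypothetical stand-in for CLIP's frozen text encoder; the thesis debiases
# the actual CLIP model, which is not reproduced here.
class FrozenEncoder(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        self.proj = nn.Linear(embed_dim, embed_dim)
        for p in self.parameters():
            p.requires_grad = False  # backbone stays frozen; only prompts train

    def forward(self, token_embeddings):
        # Mean-pool the (prompt + input) token sequence into one embedding.
        return self.proj(token_embeddings.mean(dim=1))

embed_dim, n_prompt_tokens, n_protected = 512, 8, 4

encoder = FrozenEncoder(embed_dim)
# Learnable prompt tokens that get prepended to every input sequence.
prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)
# Adversary tries to recover the protected attribute from the embedding.
adversary = nn.Sequential(
    nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, n_protected)
)

opt_prompt = torch.optim.Adam([prompt], lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def adversarial_step(token_embeddings, protected_labels):
    # Prepend the learnable prompt tokens to the input tokens.
    batch = token_embeddings.size(0)
    prepended = torch.cat(
        [prompt.unsqueeze(0).expand(batch, -1, -1), token_embeddings], dim=1
    )
    emb = encoder(prepended)

    # 1) Update the adversary so it predicts the protected attribute.
    adv_loss = ce(adversary(emb.detach()), protected_labels)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the prompt tokens to maximize the adversary's loss,
    #    i.e. remove protected-attribute information from the embedding.
    fool_loss = -ce(adversary(emb), protected_labels)
    opt_prompt.zero_grad()
    fool_loss.backward()
    opt_prompt.step()
    return adv_loss.item(), fool_loss.item()

# Toy usage with random data standing in for CLIP token embeddings.
tokens = torch.randn(16, 20, embed_dim)
labels = torch.randint(0, n_protected, (16,))
print(adversarial_step(tokens, labels))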
