Abstract
Generative modelling (GM) has advanced significantly in recent years, especially in computer vision, where it serves purposes ranging from augmenting limited datasets to creating art. As GMs become more integrated into our daily activities, discussions about their trustworthiness are becoming more prevalent, largely because of the biases they may contain, which can influence downstream tasks and propagate through society. In this dissertation, we make important contributions to improving fairness in generative models by identifying and addressing constraints that may limit their broader adoption.
First, we investigate existing fairness enforcement methods in GM and find that current state-of-the-art (SOTA) methods perform poorly under limited data constraints and are computationally expensive to implement. To address this, we propose FAIRTL/FAIRTL++, a new fairness enforcement methodology based on a fair transfer learning process that retains the existing setup.
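To make this concrete, the following is a minimal sketch of one fair transfer learning step, assuming the common setup of adapting a pretrained GAN against a small, attribute-balanced reference dataset; the model interfaces and loss are generic GAN components, and the function is a hypothetical illustration rather than the FAIRTL/FAIRTL++ algorithm itself.

```python
import torch
import torch.nn.functional as F

def fair_transfer_step(generator, discriminator, opt_g, opt_d, fair_batch, z_dim=128):
    """One fair-adaptation step: the discriminator is trained against a small,
    attribute-balanced reference batch, steering the pretrained generator
    toward a fairer output distribution."""
    z = torch.randn(fair_batch.size(0), z_dim)
    fake = generator(z)

    # Discriminator update: real = balanced reference data, fake = generated samples.
    d_real = discriminator(fair_batch)
    d_fake = discriminator(fake.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: push generated samples toward the fair reference distribution.
    d_fake = discriminator(fake)
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```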
Second, we analyse the existing fairness measurement framework and identify errors that arise from its failure to account for inaccuracies in the sensitive attribute classifiers, leading to unreliable performance measurements. To rectify this, we propose the Classifier Error-Aware Measurement (CLEAM) framework, a statistical approach that accounts for classifier errors and minimises measurement inaccuracies.
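To illustrate the kind of correction such a framework performs, consider estimating the proportion of a binary sensitive attribute from an imperfect classifier's outputs. The sketch below assumes a symmetric per-class classifier accuracy alpha estimated on a held-out validation set; the function name and this simplified point-estimate form are illustrative assumptions, not the full CLEAM framework.

```python
import numpy as np

def corrected_proportion(preds, alpha):
    """Correct a measured sensitive-attribute proportion for classifier error.

    preds: binary predictions (0/1) from a sensitive-attribute classifier
        over generated samples.
    alpha: per-class classifier accuracy, estimated on a held-out validation
        set (assumed symmetric across classes here for simplicity).
    """
    p_obs = np.mean(preds)  # naive estimate, biased by classifier error
    # The observed proportion mixes correct and incorrect classifications:
    #     p_obs = alpha * p + (1 - alpha) * (1 - p)
    # Solving for the true proportion p gives the corrected point estimate:
    p_hat = (p_obs - (1 - alpha)) / (2 * alpha - 1)
    return float(np.clip(p_hat, 0.0, 1.0))

# Example: a 95%-accurate classifier labels 60% of samples as class 1;
# the corrected estimate (~0.61) is closer to the generator's true bias.
preds = np.random.binomial(1, 0.6, size=10_000)
print(corrected_proportion(preds, alpha=0.95))
```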
Finally, we extend our study to fairness in text-to-image generative models. We show that while the current SOTA enforces fairness effectively, distortions derived from the input prompt compromise the global structure of the output sample early in the diffusion process, degrading overall sample quality. To mitigate this, we introduce FairQueue, an algorithm that ensures the global structure forms correctly before fairness adaptations are applied. Overall, our research introduces several improvements to fair generative modelling, making it more reliable and accessible.
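As an illustration of the prompt-queuing idea behind FairQueue, the sketch below runs a simplified denoising loop in which the base prompt embedding drives the early steps, where global structure forms, before the fairness-adapted embedding takes over. The denoiser interface, toy update rule, step count, and switch point are all hypothetical placeholders, not the actual FairQueue implementation.

```python
import torch

def fair_queue_sample(denoiser, base_emb, fair_emb,
                      num_steps=50, switch_step=10, shape=(1, 4, 64, 64)):
    """Sketch of prompt queuing inside a diffusion sampling loop.

    denoiser: a noise-prediction model taking (x_t, t, prompt_embedding);
        stands in for a real text-to-image denoising network.
    base_emb: embedding of the original (unadapted) prompt.
    fair_emb: embedding of the fairness-adapted prompt.
    switch_step: step at which the fair embedding takes over, i.e. after
        the sample's global structure has been laid down.
    """
    x = torch.randn(shape)  # start from pure noise
    for step, t in enumerate(reversed(range(num_steps))):
        # Early steps use the base prompt so the global structure forms
        # without distortions from the adapted prompt; later steps apply
        # the fairness-adapted embedding.
        emb = base_emb if step < switch_step else fair_emb
        eps = denoiser(x, t, emb)   # predicted noise
        x = x - eps / num_steps     # toy update rule, not a real scheduler
    return x

# Usage with a stand-in denoiser and placeholder embeddings:
dummy_denoiser = lambda x, t, emb: torch.zeros_like(x)
emb = torch.zeros(1, 77, 768)  # placeholder for a text-encoder embedding
sample = fair_queue_sample(dummy_denoiser, base_emb=emb, fair_emb=emb)
```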
Speaker’s Profile
Teo Tzu Hsuan Christopher received his B.Sc. degree in Engineering Product Development (EPD) from the Singapore University of Technology and Design (SUTD) in 2018. He is currently pursuing a Ph.D. in Information Systems Technology and Design (ISTD) at SUTD. His research focuses on fairness in generative models, exploring ways to improve sensitive attribute representation through advanced generative modelling techniques. His broader research interests include AI ethics and bias mitigation in AI systems.