You are invited to attend Changjiang Li's public PhD thesis proposal.
Everyone is welcome!
Who: Changjiang Li
When: Wednesday, November 13, 2024, 2:00–3:00 PM
Where: Zoom https://stonybrook.zoom.
Title: Mitigating Risks in Self-Supervised Representation Learning: Safeguarding Against Backdoor Attacks
Abstract: Self-supervised representation learning (SRL) has emerged as a pivotal advancement in machine learning, offering high-quality data representations without the need for labeled datasets. While SRL has demonstrated enhanced adversarial robustness compared to supervised learning, its resilience against other attack types, particularly backdoor attacks, remains an open question. Recent studies have revealed potential vulnerabilities in SRL, underscoring the necessity for a comprehensive security analysis. However, existing research often extrapolates attacks from supervised learning paradigms, neglecting the unique challenges and opportunities inherent to self-supervised mechanisms.
This thesis proposal addresses three critical objectives in self-supervised learning: (1) exploring novel attack vectors, (2) implementing and evaluating practical attacks, and (3) developing robust countermeasures. We focus on two key SRL paradigms: contrastive learning and diffusion models. For contrastive learning, we synthesize known security vulnerabilities and introduce new attack vectors, such as CTRL, to uncover risks distinctive to the self-supervised setting. We compare how contrastive and supervised learning withstand these threats, exploring potential safeguards and highlighting the limitations of current protective measures in self-supervised contexts. For diffusion models, we demonstrate inherent vulnerabilities in their application to adversarial purification.
Our research aims to illuminate the unique challenges posed by emerging attack vectors in self-supervised learning, fostering technical advancements to address underlying security risks in real-world applications. By contributing to the development of more resilient and secure self-supervised representation learning systems, we seek to enhance their reliability and trustworthiness in practical scenarios. This comprehensive examination of SRL's security landscape will provide valuable insights for the broader machine-learning community and pave the way for more robust AI systems.
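For attendees unfamiliar with the threat model discussed in the abstract, the following is a minimal, hypothetical sketch (in PyTorch) of data poisoning against a self-supervised pipeline: an attacker blends a fixed trigger pattern into a small fraction of unlabeled training images so that a contrastively trained encoder can come to associate the trigger with a target concept. All names and parameters here (add_trigger, poison_batch, alpha, poison_rate) are illustrative assumptions for exposition; this is not the CTRL implementation presented in the talk.

```python
# Illustrative sketch only: a generic trigger-blending poisoner for
# unlabeled image batches. No labels are required, which is part of
# what makes self-supervised pipelines an attractive attack surface.
import torch

def add_trigger(images: torch.Tensor, trigger: torch.Tensor,
                alpha: float = 0.05) -> torch.Tensor:
    """Blend a fixed trigger into a batch of images (B, C, H, W)."""
    return (1 - alpha) * images + alpha * trigger

def poison_batch(images: torch.Tensor, trigger: torch.Tensor,
                 poison_rate: float = 0.01) -> torch.Tensor:
    """Poison a random subset of an unlabeled batch."""
    n_poison = max(1, int(poison_rate * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    poisoned = images.clone()
    poisoned[idx] = add_trigger(poisoned[idx], trigger)
    return poisoned

# Example: a 1% poisoning rate on a batch of 256 CIFAR-sized images.
batch = torch.rand(256, 3, 32, 32)
trigger = torch.rand(3, 32, 32)  # fixed pattern chosen by the attacker
poisoned = poison_batch(batch, trigger)
```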