Research Interests
Our team designs and develops secure and efficient computer systems for intelligent and automated applications. We investigate security and privacy issues, and their solutions, in a variety of systems, and we build large-scale distributed systems for emerging applications. We have been working on the following topics:
Deep learning security.
Deep learning has been widely commercialized in our daily lives. However, it also introduces new security and privacy threats, which could have disastrous consequences in critical scenarios. Our goal is to investigate security vulnerabilities, as well as defense solutions, for various deep learning systems, including computer vision, natural language processing, reinforcement learning, and distributed federated learning. We are interested in the following security problems.
- Large language model and multi-modal model security, privacy, and safety [S&P25, NDSS25, CCS24-c, Usenix Security24-c, NDSS24-a, NDSS24-b, S&P24, NeurIPS24-a, ACL24, EMNLP24-a, EMNLP24-b, ECCV24, ICML24-b, MM24-a]
- Adversarial examples [ECCV22, MM22-b, TBD22-b, AAAI20]
- DNN backdoor and bit flip attacks [MM24-b, ICML24-a, ICLR24-c, Usenix Security23, ICCV23-a, ICCV23-b, CVPR23, ICLR23-b, AAAI23, ACL23-a, TDSC22-b, TDSC23-d, NAACL22, MM22-a, ICLR22-b, AsiaCCS21-b]
- Model and data privacy (model extraction, model inversion, membership inference) [ICLR24-a, TDSC23-a, ICLR23-a, ICLR22-a, AsiaCCS21-a, ACSAC19]
- DNN IP protection (watermarking, fingerprinting) [TCSVT22-a, IJCAI21, AAMAS21, CVPR19]
- Federated and decentralized learning [TPAMI23, TIFS23-b, TCSVT23, TBD22-a, TCSVT22-b, TCSVT22-c, CVPR21]
- Privacy-preserving machine learning with cryptography (homomorphic encryption, secure multi-party computation) [Usenix Security24-a, Usenix Security24-b, ICML23, TDSC23-b, TDSC23-c, TIFS23-a, TDSC22-a, TIFS22-c, NeurIPS22, ACSAC21]
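As a concrete illustration of the adversarial-example threat above, the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression classifier. The weights, input, and epsilon are hypothetical, chosen only so that a small perturbation flips the prediction; this is a minimal sketch, not one of our published attacks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    return 1 if w @ x > 0 else -1

def fgsm(w, x, y, eps):
    # Gradient of the logistic loss log(1 + exp(-y * w.x)) w.r.t. the input x
    z = y * (w @ x)
    grad = -y * sigmoid(-z) * w
    # FGSM: take a small step in the direction that increases the loss
    return x + eps * np.sign(grad)

w = np.array([1.0, -1.0])   # hypothetical trained weights
x = np.array([0.3, 0.1])    # a correctly classified input (label +1)
y = 1

x_adv = fgsm(w, x, y, eps=0.15)
print(predict(w, x), predict(w, x_adv))  # prints "1 -1": the prediction flips
```

The perturbation is bounded by eps in the L-infinity norm, yet it suffices to cross the decision boundary; scaling this principle to deep networks is what makes adversarial examples a practical threat.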
Robotics and autonomous driving security.
Benefiting from advances in mechanics, sensors, artificial intelligence, and networked systems, a variety of powerful robots have been designed to perform tasks autonomously, ranging from space exploration to household chores to manufacturing. However, the complexity of robotic systems inevitably enlarges the attack surface and brings new security challenges to the design and development of robot applications. Our group aims to explore security problems at different layers of robotic and autonomous driving systems, and possible mitigation solutions.
- Attacks against perception systems (sensor spoofing, backdoor) [NeurIPS24-b, CCS24-a, CCS24-b, MM22-a, TITS23-a]
- Robot Operating System (ROS) security [CCS22-a, RAID22-a, TDSC24-a]
- Cloud robotics and multi-robot system security [RAID22-b, InfS21, IPDPS21]
- Access control in autonomous and robotic systems [TITS23-b, TITS23-c, TIFS22-a]
Machine learning system optimization and acceleration.
Modern deep learning models are becoming increasingly complex, with enormous training and deployment costs. Consequently, IT corporations, research institutes, and cloud providers build large-scale GPU clusters to support the development of DL training and inference jobs. It is important to schedule these jobs and allocate the valuable resources in an efficient and scalable way. We are interested in designing new systems and frameworks to manage, optimize, and accelerate deep learning workloads in large-scale datacenters.
- Survey of deep learning workload scheduling [CSUR24]
- Optimization of large model training [NSDI24]
- Deep learning training workload scheduling [ICS24-a, ICS24-b, OSDI23, ICCD22, SoCC21, TPDS22, SC21]
- Interpretable machine learning systems [ASPLOS23, ATC22]
- Efficient machine learning at edge [WWW24, ICLR24-b]
- Graph Neural Network optimization [SC24, ICDE24]
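To make the scheduling problem above concrete, the toy sketch below implements shortest-job-first placement on a fixed pool of GPUs. The job list, runtimes, and policy are hypothetical illustrations, far simpler than the schedulers in the papers cited above.

```python
import heapq

def schedule_sjf(jobs, num_gpus):
    """Toy shortest-job-first scheduler.

    jobs: list of (name, gpus_needed, runtime) tuples.
    Returns (name, start_time) pairs in scheduling order.
    """
    # free[i] is the time at which GPU i next becomes available
    free = [0.0] * num_gpus
    heapq.heapify(free)
    order = []
    for name, gpus, runtime in sorted(jobs, key=lambda j: j[2]):
        # A job starts once its required number of GPUs are all free;
        # grab the GPUs that free up earliest.
        held = [heapq.heappop(free) for _ in range(gpus)]
        start = max(held)
        order.append((name, start))
        for _ in held:
            heapq.heappush(free, start + runtime)
    return order

order = schedule_sjf([("A", 2, 4.0), ("B", 1, 1.0), ("C", 1, 2.0)], num_gpus=2)
print(order)  # prints [('B', 0.0), ('C', 0.0), ('A', 2.0)]
```

Real cluster schedulers must additionally handle preemption, locality, elasticity, and fairness, which is precisely what makes the research problem interesting.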
Computer architecture and cloud computing security.
Infrastructure-as-a-Service (IaaS) clouds provide computation and storage services to enterprises and individuals with high elasticity and low cost. Cloud customers rent resources in the form of virtual machines (VMs); however, these VMs face various security threats. We build security-aware computer architectures to attest to and protect the security of cloud applications. We also design new methodologies to assess and mitigate micro-architectural side-channel attacks.
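Our work targets micro-architectural channels (e.g., cache timing), but the root cause of any side channel is secret-dependent behavior, which can be sketched at the software level. The hypothetical example below, an illustration rather than one of our assessment tools, contrasts a byte-string comparison whose early exit leaks the length of the matching prefix through timing with a constant-time version:

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    # The early return leaks, via execution time, how many
    # leading bytes of the guess match the secret.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of
    # where the first mismatch occurs, removing the timing signal.
    return hmac.compare_digest(a, b)
```

Micro-architectural attacks exploit the same principle at the hardware level, where the "timing" is carried by caches, branch predictors, and other shared structures.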
Projects
- Ongoing Grants:
- [2024-2027] Co-PI, CRPO: Secure, Private, and Verified Data Sharing for Large Model Training and Deployment
- [2024-2027] Co-PI, DTC: Combatting Prejudice in AI: A Responsible AI Framework for Continual Fairness Testing, Repair, and Transfer
- [2024-2026] PI, NRF NCR DeSCEmT: Computation-efficient and Unified Defenses against Side-channel Attacks and Integrity Attacks
- [2024-2025] PI, NTU S-Lab: Towards Performance Optimization and Improvement for Big Models
- [2023-2027] Co-PI, CSA: Trustworthy AI Centre NTU (TAICeN)
- [2023-2025] Co-PI, AISG Grand Challenge: Towards Building Unified AV Scene Representation for Physical AV Adversarial Attacks and Visual Robustness Enhancement
- [2023-2025] PI, AISG: Resource-Efficient AI “Human mesh reconstruction, Learning from small datasets, Self-supervised learning”
- [2023-2024] PI, CSA: Securing Open-source Packages in the Software Supply Chain Through Visibility and Verification
- [2022-2025] PI, MoE AcRF Tier2: A Framework for Intellectual Property Protection of Deep Learning Applications
- [2021-2024] Co-PI, MoE AcRF Tier2: Securing Faceprint: An Imaging Polarimetry Approach to Face Anti-spoofing
- [2020-2025] PI, NTU S-Lab: Efficient GPU Cluster Scheduler for Distributed Deep Learning
- [2019-2025] PI, Continental NTU Corp Lab: Smart Infrastructure for Data Driven Societies
- Completed Grants:
- [2022-2024] PI, NTU Desay: A Systematic Study about the Integrity Threats and Protection of Sensory Data in Autonomous Vehicles
- [2021-2024] PI, AISG: Trustworthy and Explainable AI “Safe, Fair and Robust AI System Development, Transparent or Explainable AI System Development, Explainability and Trust (Safe, Fair, Robust) Assessment”
- [2021-2024] Co-PI, MoE AcRF Tier2: Smart Safe and Robust Motion Control for Multi-Robot Systems
- [2020-2024] PI, NRF NCR CHFA: Building Security Tools for Investigating and Introspecting Applications in Trusted Execution Environment
- [2020-2024] PI, MoE AcRF Tier1 Seed: Design and Evaluation of Cyber Attacks against AI Chips
- [2020-2024] PI, MoE AcRF Tier1: Detecting and Preventing Robotic Attacks
- [2020-2023] PI, NTU Desay: A Cloud-based Framework for Protecting Autonomous Vehicles
- [2019-2023] PI, NTU SUG: General Frameworks for Quantifying and Defeating Side-channel Attacks