Research
Research Projects
Secure Algorithms for Vertically Federated Multi-Task Representation Learning
Topics: Byzantine robustness, Accelerated Pruning for Edge Inference, Multi-Task Representation Learning | Relevant papers: C5, C6
Imagine you want to predict lung disease using chest X-rays collected from different hospitals. To protect patient privacy, each hospital trains a model locally and only shares updates. However, some hospitals might use unreliable or compromised systems that send incorrect updates. These issues, known as Byzantine failures, can harm the overall learning process.
We developed a method called Byzantine-Resilient Federated Alternating Gradient Descent that trains accurate models even when several participants are Byzantine. It uses robust statistics and exploits the low-rank structure of the data to learn in a sample-efficient way.
It applies to many low-rank matrix recovery problems. One useful example is a web-based recommender system that suggests movies to users based on their ratings and reviews. Our method also applies to compressed sensing, used in Magnetic Resonance Imaging to speed up scans, and it enables large language models to be trained faster by distributing the work across multiple GPUs or servers.
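The robust-aggregation idea above can be sketched in a few lines: instead of averaging node updates, which a single Byzantine node can corrupt arbitrarily, the server combines them with a geometric median, which a minority of Byzantine nodes cannot move far. This is a minimal illustration under made-up names and constants, not the actual implementation.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of row vectors."""
    z = points.mean(axis=0)            # start from the plain average
    for _ in range(iters):
        d = np.linalg.norm(points - z, axis=1)
        d = np.maximum(d, eps)         # guard against division by zero
        w = 1.0 / d
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

# 7 honest nodes send gradients near [1, 1]; 3 Byzantine nodes send garbage.
rng = np.random.default_rng(0)
honest = np.ones((7, 2)) + 0.01 * rng.standard_normal((7, 2))
byzantine = 100.0 * np.ones((3, 2))
updates = np.vstack([honest, byzantine])

mean_agg = updates.mean(axis=0)        # badly skewed by the attackers
gm_agg = geometric_median(updates)     # stays close to the honest cluster
```

Because fewer than half of the nodes are Byzantine, the geometric median lands inside the honest cluster while the plain mean is dragged far away.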
Fast Federated Few-Shot Learning
Tools: PyTorch, Docker, AWS | Source code | Relevant papers: C4, J1
A Byzantine-resilient federated algorithm, AltGDmin, for low-dimensional representation learning, a.k.a. few-shot learning. It is communication-efficient, robust to adversarial attacks, and comes with convergence guarantees. Deployed on AWS using Docker Swarm, the model achieves high accuracy with a sample size of only about 5% of the problem dimension.
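A minimal sketch of the AltGDmin structure, simplified to fully observed, centralized data (no compressive measurements, no federation, no Byzantine nodes): the per-column coefficients are solved in closed form while the shared low-rank representation is updated by projected gradient descent. Names, step size, and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, r = 50, 40, 3                    # ambient dim, number of columns, rank
U_star = np.linalg.qr(rng.standard_normal((n, r)))[0]
B_star = rng.standard_normal((r, q))
Y = U_star @ B_star                    # rank-r data matrix to recover

# Random orthonormal init (a spectral initialization is typically used in practice).
U = np.linalg.qr(rng.standard_normal((n, r)))[0]

eta = 0.5
for _ in range(100):
    # "min" step: per-column least squares, closed form since U is fixed.
    B = np.linalg.lstsq(U, Y, rcond=None)[0]
    # "GD" step: gradient of 0.5 * ||Y - U B||_F^2 with respect to U.
    grad = (U @ B - Y) @ B.T
    U = U - eta * grad / np.linalg.norm(B @ B.T, 2)
    U = np.linalg.qr(U)[0]             # re-orthonormalize (projected GD)

B = np.linalg.lstsq(U, Y, rcond=None)[0]
err = np.linalg.norm(Y - U @ B) / np.linalg.norm(Y)
```

The "min" step is cheap because it decouples across columns, which is also what makes the federated version communication-efficient: only the shared factor U needs to be exchanged.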
Resilient Federated Principal Subspace Estimation
Tools: PyTorch | Source code | Relevant paper: C3
A novel technique, Subspace Median, along with a Python package compatible with PyTorch, enabling Principal Component Analysis (PCA) on distributed or federated datasets across multiple devices, even when up to 50% of the devices are corrupt or erroneous.
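The idea behind a subspace median can be illustrated as follows (a simplified stand-in, not the paper's exact algorithm): each node sends an orthonormal basis of its locally estimated principal subspace, and the server keeps the candidate whose total subspace distance to all the others is smallest, so junk subspaces from a minority of corrupt nodes are never selected.

```python
import numpy as np

def subspace_dist(U, V):
    """Frobenius distance between projection matrices (a chordal metric)."""
    return np.linalg.norm(U @ U.T - V @ V.T)

def subspace_median(bases):
    totals = [sum(subspace_dist(U, V) for V in bases) for U in bases]
    return bases[int(np.argmin(totals))]

rng = np.random.default_rng(2)
n, r = 20, 2
true_basis = np.linalg.qr(rng.standard_normal((n, r)))[0]

bases = []
for i in range(10):
    if i < 4:                          # 4 of 10 nodes are corrupted
        M = rng.standard_normal((n, r))                  # junk subspace
    else:                              # honest nodes: true subspace + noise
        M = true_basis + 0.01 * rng.standard_normal((n, r))
    bases.append(np.linalg.qr(M)[0])

est = subspace_median(bases)
```

Comparing projection matrices rather than the bases themselves makes the distance invariant to the arbitrary choice of basis within each subspace.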
Federated Reinforcement Learning (RL)
Tools: PyTorch, OpenAI Gym | Source code | Relevant paper: C1
A federated learning framework for efficiently managing data samples in dynamic systems such as OpenAI Gym's CartPole-v1 environment, using policy gradient methods. The framework was built with PyTorch and OpenAI's Gym library, and convergence guarantees were established by combining principles from algorithm optimization and control theory.
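The GM-FedREINFORCE idea can be sketched on a toy problem: several workers estimate a REINFORCE policy gradient and the server combines them with a median-style aggregator instead of a mean, so a Byzantine worker cannot hijack the update. A two-armed bandit stands in for CartPole here, and a coordinate-wise median stands in for the geometric-median aggregator; everything in this sketch is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.zeros(2)                    # softmax logits over two arms
REWARDS = np.array([0.0, 1.0])         # arm 1 is the good arm

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def worker_grad(theta, byzantine=False, episodes=32):
    if byzantine:
        return np.array([1000.0, -1000.0])   # push hard toward the bad arm
    g = np.zeros(2)
    for _ in range(episodes):
        p = softmax(theta)
        a = rng.choice(2, p=p)
        grad_logp = -p
        grad_logp[a] += 1.0            # gradient of log softmax(theta)[a]
        g += REWARDS[a] * grad_logp    # REINFORCE: reward-weighted score
    return g / episodes

for _ in range(200):
    grads = [worker_grad(theta) for _ in range(7)]      # honest workers
    grads += [worker_grad(theta, byzantine=True)] * 3   # attackers
    agg = np.median(np.stack(grads), axis=0)  # robust aggregation step
    theta += 0.5 * agg                 # gradient ascent on expected reward

p_good = softmax(theta)[1]             # probability of pulling the good arm
```

With 7 honest workers out of 10, the median ignores the three extreme updates in every coordinate, and the policy still converges to the rewarding arm.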
Publications
Journal papers
- [J1] AP Singh and N Vaswani, Byzantine-Resilient Federated PCA and Low Rank Column-wise Sensing
IEEE Transactions on Information Theory, August 2024.
Conference papers
- [C6] AP Singh, AA Abbasi, and N Vaswani, Byzantine-Resilient Federated Alternating Gradient Descent and Minimization for Partly-Decoupled Low Rank Matrix Learning
International Conference on Machine Learning (ICML 2025), July 2025. Presentation | Quick View
- [C5] AP Singh and N Vaswani, Secure Algorithms for Vertically Federated Multi-Task Representation Learning
IEEE International Symposium on Information Theory (ISIT 2025), June 2025. Presentation
- [C4] AP Singh and N Vaswani, Byzantine Resilient and Fast Federated Few-Shot Learning
International Conference on Machine Learning (ICML 2024), July 2024. Presentation | Quick View
- [C3] AP Singh and N Vaswani, Byzantine-Resilient Federated Principal Subspace Estimation
IEEE International Symposium on Information Theory (ISIT 2024), July 2024. Presentation
- [C2] AP Singh and N Vaswani, Byzantine-resilient Federated Low-Rank Column-wise Compressive Sensing
IEEE Annual Allerton Conference on Communication, Control, and Computing, September 2023. Presentation
- [C1] AP Singh and R Tali, Byzantine Resilient Federated REINFORCE (GM-FedREINFORCE)
IEEE International Conference on Machine Learning and Applications (ICMLA 2023), December 2023. Quick View
Reviewing service
- International Conference on Artificial Intelligence and Statistics (AISTATS) (2024).
- IEEE International Symposium on Information Theory (ISIT) (2024).