Hello! I am Ajinkya Mulay, a sixth-year Ph.D. student in the Electrical and Computer Engineering Department at Purdue University, advised by Prof. Xiaojun Lin. My Ph.D. research focuses on developing theoretical guarantees for exact basis recovery under Differential Privacy and Federated Learning for sparse models in the under-determined regime. My other research spans Computational Social Sciences, Adversarial Robustness, and their relation to privacy. I have also gained research and industry experience at Meta (Facebook) and the University of Tokyo.

You can find my publications on the research page and my CV on the CV page. If you’re curious about my other software projects, check out the curious page (inactive until early next year).

Updates:

  • March 2024: We released a preprint on sparse basis recovery guarantees in the Differential Privacy-Federated Learning (DP-FL) setting. We show that even with few samples (n) and a high data dimension (p), i.e., p ≫ n, we can recover the exact sparse model with high probability. To our knowledge, this is the first theoretical work to prove such guarantees with limited samples in the DP-FL setting. The preprint includes both theory and experiments.

  • February 2024: Joined Meta (Facebook) as a Research Scientist.

  • November 2023: Passed my PhD preliminary exam! Onto the final stretch!

  • May 2023: Started working as a Graduate Research Assistant for Prof. Xiaojun Lin. We continue to focus on developing theoretical results that guarantee exact basis recovery under privacy for sparse models in the under-determined regime.

  • January 2023: Accepted into the OpenMined Padawan Open Source program! Working with Ishan Mishra (Engineering Tech Lead) and Ionesio (Team Lead) to integrate into the core open-source team of PySyft, a data science library that enables machine learning without transferring data from the client.

  • December 2022: Our recent preprint on optimal client sampling for differentially private federated networks (LOCKS: User Differentially Private and Federated Optimal Client Sampling) is now available on arXiv.

  • December 2022: Updated the blog with my most recommended workflow tools.

  • November 2022: Time for some open source. Started contributing to Hugging Face’s Gradio and Diffusers libraries. Working with The Anvil (at Purdue) to build team-matching algorithms using Natural Language Processing.

  • August 2022: Completed my internship at Meta working on Federated Semi-Supervised Learning (vision) algorithms.

  • June 2022: Excited to be a part of the Cohere for AI Initiative led by Sara Hooker!

  • June 2022: Our papers ‘Private Hypothesis Testing for Social Sciences’ and ‘PowerGraph: Using neural networks and principal components to determine multivariate statistical power trade-offs’ have been accepted (as a poster and a talk) to the Theory of Differential Privacy and AI for Science workshops held alongside ICML 2022 in Baltimore, USA.

  • May 2022: Excited to be back at Meta for a summer internship!

  • May 2022: Our paper ‘Private Hypothesis Testing for Social Sciences’ is now available on arXiv.

  • Mar 2022: I launched 101 Days of NLP, a popular format for entering a new field. Over the next few months I will be studying NLP in every form I can. You can follow along on the curious page or on my new Twitter account, 101 Days of NLP. All released code will be 100% open-source and reusable.

  • Mar 2022: We presented our preliminary work, ‘How to promote open science under privacy,’ an article at the confluence of Social Sciences and Privacy, to the Psychological Sciences Department at Purdue University. We are working on an R/Python tool to enable easy sharing of datasets under privacy guarantees.

  • Mar 2022: Our article ‘PowerGraph: Using neural networks and principal components to determine multivariate statistical power trade-offs’ has been accepted for an Individual Oral Presentation at the International Meeting of the Psychometric Society (IMPS) 2022.

  • Jan 2022: We released a preprint of our recent work on efficiently graphing multivariate statistical power manifolds with supervised learning techniques. The manuscript is now available as PowerGraph: Using neural networks and principal components to determine multivariate statistical power trade-offs.

  • Nov 2021: We discussed recent progress in graphing multivariate statistical power manifolds with novel Machine Learning techniques at the MCP Colloquium at Purdue University. This work was done with the SuperPower group. Slides and paper coming soon.

  • June 2021: Our article Towards Quantifying the Carbon Emissions of Differentially Private Machine Learning was accepted at the ICML Workshop on Socially Responsible Machine Learning.

  • May 2021: Started an internship as a PhD Software Engineer at Facebook, Menlo Park.

  • Feb 2021: Started a new blog covering differential privacy, federated learning, and my PhD journey.

  • Feb 2021: Added a new blog post on workflows for Machine Learning systems, ML Workflows for Research Scientists. Published in Editor’s Picks on the Towards Data Science publication (at link).

  • Jan 2021: Promoted to Machine Learning Team Lead in the SuperPower research group.

  • Nov 2020: Our article FedPerf: A Practitioners’ Guide to Performance of Federated Learning Algorithms was accepted to the NeurIPS 2020 Preregistration Workshop. We were among the 8 papers invited to give a contributed talk at the workshop. Further open-source updates coming soon!

  • Oct 2020: Submitted our research article FedPerf: A Practitioners’ Guide to Performance of Federated Learning Algorithms to the NeurIPS 2020 Preregistration Workshop.

  • Aug 2020: Joined the SuperPower team (PIs: Dr. Erin Hennes and Dr. Sean Lane) as a Graduate Research Assistant, designing automated and intelligent algorithms for parameter-space exploration using Machine Learning techniques.

  • April 2020: Joined OpenMined as a Research Scientist.

  • Feb 2020: Published my first blog post.

  • Aug 2019: Started the website. My research paper ‘DFC: Dynamic UL-DL Frame Configuration Mechanism for Improving Channel Access in eLAA’ (work done at the NeWS Lab, IIT Hyderabad) was published in IEEE Networking Letters.