Anant Raj

Marie-Curie Fellow
SIERRA Project Team (Inria),
École Normale Supérieure, PSL Research University.
Coordinated Science Laboratory (CSL),
University of Illinois at Urbana-Champaign (UIUC).

Email: araj at illinois and anant dot raj at inria
Contact: Room 121, CSL, Urbana.

Short Bio

I am currently a Marie-Curie Fellow, jointly hosted by Prof. Francis Bach at the SIERRA Project Team (Inria) and Prof. Maxim Raginsky at the Coordinated Science Laboratory (CSL), UIUC. Before that, I completed my PhD in Machine Learning at the Max Planck Institute for Intelligent Systems, in the Empirical Inference Department, under the supervision of Prof. Bernhard Schoelkopf. I received my B.Tech-M.Tech dual degree in Electrical Engineering from IIT Kanpur, where my research was advised by Prof. Rajesh M Hegde, Prof. Vinay Namboodiri, Prof. Amitabha Mukerjee, and Prof. Tinne Tuytelaars (external master's thesis advisor). My research focuses on problems in general machine learning theory and its applications. More specifically, I am interested in optimization theory, kernel methods, and the theoretical foundations of machine learning. I also have a keen interest in resource-efficient learning, such as active learning, coresets, and distributed inference. On the application side, I am interested in applying machine learning methods in the healthcare domain.

Recent News

  • Mar 2023: New preprint on Variational Principle of Mirror Descent and Mirror-Langevin is available now. Check it out.

  • Feb 2023: New preprint on stochastic gradient-based sampling is available now. Check it out.

  • Jan 2023: New preprint on algorithmic stability for heavy-tailed SGD for general loss functions is available now. Check it out.

  • Jan 2023: Our paper on explicit regularization via noise injection has been accepted to AISTATS 2023.

  • Dec 2022: Our paper on algorithmic stability for heavy-tailed SGD has been accepted to ALT 2023.

  • Aug 2022: Our paper on causal feature selection has been accepted to TMLR (2022).

  • July 2022: Presented our work on active learning at ICML 2022.

  • July 2022: Visited Google Research, India and the Indian Institute of Science, Bengaluru.

  • Jun 2022: New preprint on explicit regularization via noise injection is available now. Check it out.

  • Jun 2022: New preprint on algorithmic stability for heavy-tailed continuous-time SGD is available now. Check it out.