Current: Designing better proteins at EvolutionaryScale. Postdoc @MIT. PhD @Cornell University (CS).

Ithaca, NY
Joined November 2015
Very excited to share our recent work on structural watermarking of Large Language Models; joint work with the incredible Adam Block (at Microsoft Research) and Sasha Rakhlin (MIT). Our method, named Gaussmarks, offers a straightforward approach to watermarking language models. It's easy to implement, provides statistically rigorous p-values, and maintains high model quality without significant distortion. Paper link: arxiv.org/pdf/2501.13941
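The detection side of a Gaussian watermark of this flavor can be sketched as a one-sided z-test: correlate a secret Gaussian key against a score vector derived from the text, and read off a p-value under a standard-normal null. This is an illustrative reconstruction, not the paper's implementation; the function and argument names are made up, and the N(0, 1) null is an assumption.

```python
import math
import numpy as np

def gaussian_watermark_pvalue(key_direction, score_vector):
    """One-sided p-value for a Gaussian watermark test (illustrative sketch).

    Under the null (no watermark), the normalized correlation between the
    secret Gaussian key and a score vector derived from the observed text
    is assumed to be approximately N(0, 1); a large positive value is
    evidence that the watermark is present.
    """
    key = np.asarray(key_direction, dtype=float)
    score = np.asarray(score_vector, dtype=float)
    # Cosine similarity between key and score, scaled by sqrt(dim) so that
    # for independent high-dimensional vectors it is ~ N(0, 1).
    z = (key @ score) / (np.linalg.norm(key) * np.linalg.norm(score))
    z *= math.sqrt(len(key))
    # P[N(0,1) >= z], the one-sided p-value.
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

An orthogonal key and score give z = 0 and hence p = 0.5, while an aligned pair drives the p-value toward 0; the appeal of this style of test is that the p-value is exact in the statistic's null distribution rather than calibrated empirically.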
This is very exciting! It also gives me a lot of joy to see TIFR in the list!
Together with @Googleorg, we’re introducing the AI for Math initiative, bringing together five prestigious research institutions pioneering the use of AI in mathematics. ⤵️
Ayush Sekhari retweeted
We are only at the beginning of understanding everything AI can do. By empowering world-leading mathematicians, we could open up new pathways of research, advance human knowledge and move towards more breakthroughs across the scientific disciplines. → goo.gle/4qz2aqS
Ayush Sekhari retweeted
Relearning from finetuning on unrelated data happens easily and quickly. For MACE, only 50 images were required to cause a 16% resurgence in unlearned concepts. Similarly, only 20 finetuning steps were required to cause a 12% resurgence in unlearned concepts. (7/9)
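The resurgence percentages above imply an evaluation loop of roughly the following shape: after finetuning the unlearned model on unrelated data, sample generations for concept prompts and measure how often the supposedly removed concept reappears. This is a schematic sketch only; `generate` and `contains_concept` are placeholders standing in for a diffusion sampler and a concept classifier, not the authors' code.

```python
def resurgence_rate(generate, contains_concept, prompts):
    """Fraction of prompts whose generations exhibit the unlearned concept.

    generate(prompt) -> a generated sample (e.g. an image);
    contains_concept(sample) -> bool, from some concept detector.
    Both are placeholders for illustration.
    """
    hits = sum(bool(contains_concept(generate(p))) for p in prompts)
    return hits / len(prompts)
```

With such a metric, "a 16% resurgence" means the rate rose by 0.16 relative to the freshly unlearned model, despite the finetuning data never mentioning the concept.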
Ayush Sekhari retweeted
We tested seven different state-of-the-art concept unlearning algorithms on both SD 1.4 and 2.1. Relearning occurs on all algorithms and models across a variety of tasks regarding celebrities, objects, copyright, and unsafe content. (6/9)
Ayush Sekhari retweeted
Even more surprising and concerning, the unlearned model would produce unlearned concepts when prompted with unrelated concepts! Simply prompting for the unlearned concept turns out to be insufficient for evaluating robustness to relearning. (5/9)
Ayush Sekhari retweeted
We started with unlearning celebrities and very quickly found that it was pretty easy to relearn celebrities when finetuning on completely unrelated people. (4/9)
Ayush Sekhari retweeted
Concept unlearning has been suggested as a safeguard for open-weight diffusion models to prevent harmful generations. It is known that concepts can be easily relearned by finetuning on related data. However, we find that certain concepts can be relearned even when finetuning on unrelated concepts! (3/9)
Ayush Sekhari retweeted
Diffusion models don’t have similar safety finetuning/alignment safeguards as LLMs. Most harmful generations, even in open-weight models (e.g. Stable Diffusion), are mitigated through filtering. For open-weight models, these filters can be turned off quite easily. (2/9)
Ayush Sekhari retweeted
🚨 Your diffusion model might relearn what you made it forget, even without trying! 🚨 In new work, we show concept unlearning can be reversed when finetuning on unrelated data. Even worse, unlearned concepts can resurface when prompting for an unrelated concept! (1/9)
Ayush Sekhari retweeted
Microsoft Research New York City is seeking applicants for multiple Postdoctoral Researcher positions in ML/AI! These are positions for up to 2 years, starting in July 2026. Application deadline: October 22, 2025
Ayush Sekhari retweeted
Wanna chat with an experienced learning theory researcher? We have open office hours online with four awesome folks, Surbhi Goel (@SurbhiGoel_), Gautam Kamath (@thegautamkamath), Ayush Sekhari (@ayush_sekhari), Lydia Zakynthinou (@zakynthinou)! Book a slot and ask them anything!
An incredible list of keynote speakers at an incredible conference!
RLC Keynotes are now live! Covering: Wanting vs liking, Agent factories, Theoretical limit of LLMs, Pluralist value, RL teachers, Knowledge flywheels... piped.video/playlist?list=PL…
Ayush Sekhari retweeted
Announcing the first workshop on Foundations of Language Model Reasoning (FoRLM) at NeurIPS 2025! 📝Soliciting abstracts that advance foundational understanding of reasoning in language models, from theoretical analyses to rigorous empirical studies. 📆 Deadline: Sept 3, 2025
Ayush Sekhari retweeted
Two Research Assistant / Pre-Doctoral Fellow positions are open at IISc Bangalore, jointly with the CSA & ECE depts. Work on cutting-edge research in Machine Learning, Statistics, & Applied Probability under Prof. Anant Raj & Prof. Shubhada Agrawal. Apply: forms.cloud.microsoft/r/49Np…
Ayush Sekhari retweeted
Delighted to announce that the 2nd edition of our workshop has been accepted to #NeurIPS2025! We have an amazing lineup of speakers: @WenSun1, @ajwagenmaker, @yayitsamyzhang, @MengdiWang10, @nanjiang_cs, Alessandro Lazaric, and a special guest!
Ayush Sekhari retweeted
🧵 Academic job market season is almost here! There's so much rarely discussed—nutrition, mental and physical health, uncertainty, and more. I'm sharing my statements, essential blogs, and personal lessons here, with more to come in the upcoming weeks! ⬇️ (1/N)
Ayush Sekhari retweeted
Nice summary of some of the basic statistical problems in online RL that I would still like to see resolved. Problem 1 in particular I have spent a lot of time thinking about (and pushing others to think about) over the last few years.
like everyone else i am hopping on the blog post trend gene.ttic.edu/blog/incomplet…