Hello!
I'm a researcher at OpenAI, where I lead the Post-training Frontiers team. We train the agentic models shipped across Codex, the API, and ChatGPT Thinking/Pro (e.g., o3, GPT-5 Thinking, GPT-5.3 Codex, GPT-5.5). My goal is to train and deploy models that provide maximal value to users.
Previously, I was a PhD student at Stanford University, co-advised by Percy Liang and Tatsu Hashimoto and funded by a Knight-Hennessy Scholarship. My research focused on instruction following (Alpaca, AlpacaEval). Before that, I traveled around doing research in representation learning and worked at startups including Grab, where I built NLP systems for under-researched languages (Thai, Khmer, Burmese, …) before LLMs were a thing.
Note: please do not contact me about quant jobs; I'm not interested regardless of compensation.
Featured talks
Stanford lecture on LLMs
GPT-5 release
Agentic AI MOOC
