profile photo

Hanqing Zhu 「朱汉卿」

📈 Experience | ⭐️ Publications | 🤟 Honors | 🐱 My Cats

I work on scaling up intelligence with new methods at xAI 🤔.

I obtained my Ph.D. from the ECE Department of UT Austin, advised by Prof. David Z. Pan (Fellow of ACM, IEEE, SPIE) and Prof. Ray T. Chen (Fellow of NAI, IEEE, OSA, SPIE). I also had the privilege of working closely with Prof. Zhangyang "Atlas" Wang at UT Austin on efficient AI.

Previously, I received my bachelor's degree from Shanghai Jiao Tong University with highest honors in 2020.

I was recognized as an ML and Systems Rising Star in 2025, and received the MLSys'25 Outstanding Paper Award (Honorable Mention), the CVPR'25 AI4CC Workshop Best Paper Award, the Texas ECE Graduate Achievement Award, and more.

👉 Email | 👉 CV | 👉 Github | 👉 Google Scholar | 👉 Twitter

I was raised, and completed all my schooling before college, in a small town in southwest China. I’m grateful for the opportunities that followed; the path I’m on now was beyond anything I imagined growing up. 🙂

My research aims to develop efficient and scalable AI algorithms and systems. I am particularly excited about redesigning AI algorithms to be both theoretically grounded and practically efficient.

  • 🤗 Scalable and Theory-grounded Optimization for Foundation Models: Pre-Training & Post-training (RL)
  • 🤗 Hardware-software Co-design for Efficient AI Deployment

  • ♾️ Meta AI | May '25 – Dec '25 | Research Scientist Intern
    Topic: Theory-driven Efficient Learning for RLVR   |   Advisor: Dr. Yuandong Tian, Dr. Zechun Liu, Dr. Kai Sheng Tai
  • ♾️ Meta AI | May '24 – Oct '24 | Research Scientist Intern
    Topic: Efficient Large-scale Pre-Training   |   Advisor: Dr. Jinwon Lee
  • 💡 Lightelligence Inc. | May '23 – Sept '23 | Software Research Intern
    Topic: Low-bit Chip-aware Training   |   Advisor: Dr. Weifeng Zhang
  • 🧠 Google Brain | Jul '22 – Nov '22 | Student Researcher
    Topic: RL-based Chip Placement   |   Advisor: Dr. Joe Jiang

I have published papers in top conferences in machine learning, systems, computer architecture, and design automation, including MLSys, HPCA, NeurIPS, ICCV, COLM, DAC, ICCAD, and TCAD.

The Path Not Taken: RLVR Provably Learns Off the Principals (arXiv)
Hanqing Zhu, Zhenyu Zhang, Hanxian Huang, DiJia Su, Zechun Liu, Jiawei Zhao, Igor Fedorov, Hamed Pirsiavash, Zhizhou Sha, Jinwon Lee, David Z. Pan, Zhangyang Wang*†, Yuandong Tian*†, Kai Sheng Tai*†
arXiv preprint, 2025. NeurIPS 2025 Workshop on Efficient Reasoning (Spotlight)

[ Paper / Blog / X post / 量子位 / 新智元 ]
First theory-driven RLVR study and guidance for geometry-aligned RL optimization

Can Test-Time Scaling Improve World Foundation Model? (arXiv)
Wenyan Cong*, Hanqing Zhu*, Peihao Wang, Bangya Liu, Dejia Xu, Kevin Wang, David Z. Pan, Yan Wang, Zhiwen Fan, Zhangyang Wang
Conference on Language Modeling (COLM), 2025

[ Paper / Code ]
First efficient test-time scaling for world foundation models

APOLLO: SGD-like Memory, AdamW-level Performance (arXiv)
Hanqing Zhu*, Zhenyu Zhang*, Wenyan Cong, Xi Liu, Sem Park, Vikas Chandra, Bo Long, David Z. Pan, Zhangyang Wang, Jinwon Lee
Conference on Machine Learning and Systems (MLSys), 2025

[ 🏆 Outstanding Paper Honorable Mention / Paper / Code / Hacker News / HuggingFace / LLaMA-Factory / FluxML / axolotl / 机器之心 ]
Theory-driven, scalable, memory-efficient training with record-setting memory efficiency

PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices (arXiv)
Hanqing Zhu, Wenyan Cong, Guojin Chen, Shupeng Ning, Ray Chen, Jiaqi Gu, David Z. Pan
Conference on Neural Information Processing Systems (NeurIPS), 2024

[ Paper / Code ]
Theory-grounded, efficient, and fast operator model for scientific simulation

Lightening-Transformer: A Dynamically-operated Optically-interconnected Photonic Transformer Accelerator (arXiv)
Hanqing Zhu, Jiaqi Gu, Hanrui Wang, Zixuan Jiang, Zhekai Zhang, Rongxin Tang, Chenghao Feng, Song Han, Ray T. Chen, David Z. Pan
IEEE International Symposium on High Performance Computer Architecture (HPCA), 2024
(Acceptance Rate: 18.3%)

[ Paper / Code ]
Hardware-software co-design; first photonic transformer accelerator

  • Best Paper Award, CVPR AI for Content Creation Workshop, 2025
  • Outstanding Paper Award (Honorable Mention), MLSys, 2025
  • ML and Systems Rising Stars (38 out of 150+), MLCommons, 2025
  • DAC Ph.D. Forum, DAC 2025
  • ICLR Notable Reviewer, ICLR 2025
  • MLSys Student Travel Award, MLSys 2025
  • Texas ECE Graduate Achievement Award, UT Austin 2024
  • UT Graduate School Continuing Fellowship Nomination (1 of 2 nominees in ECE), UT Austin 2024
  • 1st Place in IEEE/ACM MLCAD FPGA Macro-Placement Contest, MLCAD, 2023
  • MLSys Student Travel Award, MLSys 2023
  • Winner of Robert S. Hilbert Memorial Optical Design Competition, Synopsys, 2022
  • DAC Young Fellow, 2021
  • Shanghai Outstanding Graduate, 2020
  • Hongyi Scholarship, 2019
  • Outstanding Undergraduate Scholarship, 2019
  • Samsung Scholarship, 2018
  • Zhiyuan College Honors Scholarship, 2018
  • 1st Prize, National Mathematical Contest in Modeling, Shanghai Division, 2018
  • Academic Excellence Scholarship, 2017-2019

I have two lovely cats :)

Fubao
Meimei


This template is a modification of Jon Barron's website and Rishab Khincha's website.