• Haojie WANG
• Assistant Researcher
• Department of Computer Science and Technology
• Joined Department: 2023
• Email: wanghaojie@tsinghua.edu.cn
• Phone: 010-62795471
Education background
Bachelor's degree from the School of Aerospace, Tsinghua University, Beijing, China, 2015;
Ph.D. in the Department of Computer Science and Technology, Tsinghua University, Beijing, China, 2021.
Areas of Research Interest
AI Compiler, High Performance Computing
Research Status
Haojie Wang is an Assistant Researcher in the Department of Computer Science and Technology at Tsinghua University, specializing in artificial intelligence compilers and high-performance computing. His research has been published in top conferences and journals including OSDI, ATC, PPoPP, SC, PLDI, TC, and TPDS, and has received the Best Student Paper Award at ICS 2021 and the Best Paper Runner-up Award from IEEE TPDS in 2022.
Honors And Awards
The Tsinghua University Outstanding Doctoral Dissertation Award, 2021.
Outstanding Graduate of Beijing, 2021.
Shuimu Tsinghua Scholar Program, 2021.
The ACM SIGHPC China Outstanding Doctoral Dissertation Award, 2022.
The Young Scientists Fund of the National Natural Science Foundation of China, 2022.
The Young Elite Scientist Sponsorship Program by BAST, 2023.
The Tsinghua University Outstanding Postdoctoral Award, 2023.
Academic Achievement
[1] Wang, Haojie, Jidong Zhai, Mingyu Gao, Feng Zhang, Tuowei Wang, Zixuan Ma, Shizhi Tang et al. "Optimizing DNNs with Partially Equivalent Transformations and Automated Corrections." IEEE Transactions on Computers (2023).
[2] Wang, Haojie, Zixuan Ma, Liyan Zheng, Yuanwei Wang, Fei Wang, and Jidong Zhai. "Efficient memory allocator for the New Generation Sunway supercomputer." Journal of Tsinghua University (Science and Technology) 62, no. 5 (2022): 943-951.
[3] Wang, Haojie, Jidong Zhai, Mingyu Gao, Zixuan Ma, Shizhi Tang, Liyan Zheng, Yuanzhi Li, Kaiyuan Rong, Yuanyong Chen, and Zhihao Jia. "PET: Optimizing tensor programs with partially equivalent transformations and automated corrections." In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21), pp. 37-54. 2021.
[4] Wang, Haojie, Jidong Zhai, Xiongchao Tang, Bowen Yu, Xiaosong Ma, and Wenguang Chen. "Spindle: Informed memory access monitoring." In 2018 USENIX Annual Technical Conference (USENIX ATC 18), pp. 561-574. 2018.
[5] Zheng, Liyan, Haojie Wang, Jidong Zhai, Muyan Hu, Zixuan Ma, Tuowei Wang, Shuhong Huang et al. "EINNET: Optimizing Tensor Programs with Derivation-Based Transformations." In 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23), pp. 739-755. 2023.
[6] Shi, Tianhui, Jidong Zhai, Haojie Wang, Qiqian Chen, Mingshu Zhai, Zixu Hao, Haoyu Yang, and Wenguang Chen. "GraphSet: High Performance Graph Mining through Equivalent Set Transformations." In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-14. 2023.
[7] Zhang, Chen, Haojie Wang, Zixuan Ma, Lei Xie, Zeyu Song, and Jidong Zhai. "UniQ: a unified programming model for efficient quantum circuit simulation." In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-16. IEEE, 2022.
[8] Tang, Shizhi, Jidong Zhai, Haojie Wang, Lin Jiang, Liyan Zheng, Zhenhao Yuan, and Chen Zhang. "FreeTensor: a free-form DSL with holistic optimizations for irregular tensor programs." In Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation, pp. 872-887. 2022.
[9] Jin, Yuyang, Haojie Wang, Runxin Zhong, Chen Zhang, and Jidong Zhai. "PerFlow: A domain specific framework for automatic performance analysis of parallel applications." In Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 177-191. 2022.
[10] Ma, Zixuan, Jiaao He, Jiezhong Qiu, Huanqi Cao, Yuanwei Wang, Zhenbo Sun, Liyan Zheng et al. "BaGuaLu: targeting brain scale pretrained models with over 37 million cores." In Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 192-204. 2022.
[11] Ma, Zixuan, Haojie Wang, Guanyu Feng, Chen Zhang, Lei Xie, Jiaao He, Shengqi Chen, and Jidong Zhai. "Efficiently emulating high-bitwidth computation with low-bitwidth hardware." In Proceedings of the 36th ACM International Conference on Supercomputing, pp. 1-12. 2022.
[12] Zhai, Jidong, Liyan Zheng, Feng Zhang, Xiongchao Tang, Haojie Wang, Teng Yu, Yuyang Jin, Shuaiwen Leon Song, and Wenguang Chen. "Detecting Performance Variance for Parallel Applications Without Source Code." IEEE Transactions on Parallel and Distributed Systems 33, no. 12 (2022): 4239-4255. Best Paper Runner-up.
[13] Zhang, Chen, Zeyu Song, Haojie Wang, Kaiyuan Rong, and Jidong Zhai. "HyQuas: hybrid partitioner based quantum circuit simulation system on GPU." In Proceedings of the ACM International Conference on Supercomputing, pp. 443-454. 2021. Best Student Paper Award.
[14] Jin, Yuyang, Haojie Wang, Teng Yu, Xiongchao Tang, Torsten Hoefler, Xu Liu, and Jidong Zhai. "ScalAna: Automating scaling loss detection with graph analysis." In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-14. IEEE, 2020.
[15] Tang, Xiongchao, Haojie Wang, Xiaosong Ma, Nosayba El-Sayed, Jidong Zhai, Wenguang Chen, and Ashraf Aboulnaga. "Spread-n-share: improving application performance and cluster throughput with resource-aware job placement." In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-15. 2019.