I received my PhD from Stanford University (advised by Prof. Matthias Ihme). I was also a Graduate Fellow (‘23) at the Stanford Institute for Human-Centered AI, and have affiliations with the Center for Open and REproducible Science, the Predictive Science Academic Alliance Program, and the Stanford Flow Physics and Computational Engineering Group.
My thesis focused on using Machine Learning, AI for Science, and High-Performance Computing techniques to improve our understanding of the science behind rocket propulsion and novel energy systems.
I’ve also curated terabytes of high-fidelity 3D fluid data, available at https://blastnet.github.io/.
Check out my CV for more detailed info.
My research at Stanford explored the application of AI/ML to complex problems in turbulent reacting flows. These phenomena have significant implications for critical applications such as rocket propulsion, wildfires, and energy systems. This field of work not only improves our understanding of these important phenomena (tied to space exploration and climate change), but also presents exciting technical challenges within the computational sciences.
In particular, my research has focused on two core themes:
1. Building accurate and cost-efficient predictive models for/with large-scale computing systems:
Solving turbulence remains one of the grand challenges in classical physics and mechanics [info], and research typically involves creating precise simulations on peta-/exa-scale high-performance computing systems [info]. In my research, I use AI/ML to discover models that can fill gaps in our understanding of turbulent phenomena [info]. Scaling becomes especially difficult when thousands of chemical species are considered within the turbulent flows, driving up the computational cost of simulating real-world systems (rockets, wildfires, etc.). To address this, part of my research involves developing AI/ML techniques that can accelerate the costliest parts of these computations [info] (a minimal sketch of this idea appears after this list).
2. Addressing gaps in data availability in new areas of AI/ML research:
From an AI/ML perspective, new domains (such as the sciences) can pose interesting challenges that arise from their relative lack of accessible data, especially when compared to data-rich fields such as computer vision and NLP. Hence, much of my research attempts to address the sparsity of data in new areas of AI/ML research. This can involve hybrid approaches, such as coupling classification models with interpretable symbolic models (instead of relying solely on pure regression), which limits AI/ML errors [info] (see the second sketch after this list). It can also involve brute-force approaches, such as curating terabytes of 3D data to directly address these gaps [info].
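To give a flavor of the first theme, here is a minimal, hypothetical sketch (not my actual research code) of how a learned surrogate can stand in for an expensive sub-computation inside a solver: a small PyTorch MLP is trained to map a thermochemical state vector to chemical source terms, so the costly kinetics evaluation can be replaced at inference time. All dimensions and data below are synthetic placeholders.

```python
# Minimal sketch (synthetic data, hypothetical dimensions): a small MLP surrogate
# that maps a thermochemical state vector to chemical source terms, standing in
# for an expensive kinetics call inside a flow solver.
import torch
import torch.nn as nn

n_species = 8                # hypothetical number of species in a reduced state
state_dim = n_species + 2    # species mass fractions + temperature + pressure

surrogate = nn.Sequential(
    nn.Linear(state_dim, 128), nn.GELU(),
    nn.Linear(128, 128), nn.GELU(),
    nn.Linear(128, n_species),   # predicted source term for each species
)

# Synthetic stand-in for (state, source-term) pairs sampled from a detailed solver.
states = torch.randn(4096, state_dim)
sources = torch.randn(4096, n_species)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    perm = torch.randperm(states.shape[0])
    for i in range(0, states.shape[0], 256):
        idx = perm[i:i + 256]
        loss = loss_fn(surrogate(states[idx]), sources[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")

# At inference time, the trained surrogate replaces the costly kinetics evaluation.
with torch.no_grad():
    fast_sources = surrogate(states[:16])
```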
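For the second theme, the sketch below illustrates the hybrid idea with made-up submodels and synthetic data (it is not the published method): the ML component only classifies which interpretable, closed-form submodel applies, and the selected formula produces the prediction, which bounds how far the output can drift compared to unconstrained regression.

```python
# Minimal sketch (hypothetical submodels, synthetic data): a classifier selects
# among interpretable symbolic submodels, and the chosen formula makes the prediction.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Two hypothetical closed-form submodels for some quantity of interest y(x).
submodels = [
    lambda x: 0.5 * x[:, 0] ** 2,        # e.g., a "regime A" closure
    lambda x: 1.2 * x[:, 0] * x[:, 1],   # e.g., a "regime B" closure
]

# Synthetic data: the true regime depends on the second feature.
X = rng.normal(size=(2000, 2))
regime = (X[:, 1] > 0).astype(int)
y = np.where(regime == 0, submodels[0](X), submodels[1](X))

# The ML component only classifies the regime...
clf = DecisionTreeClassifier(max_depth=3).fit(X, regime)

# ...and the prediction always comes from an interpretable formula.
def predict(x):
    choice = clf.predict(x)
    preds = np.stack([m(x) for m in submodels], axis=0)
    return preds[choice, np.arange(x.shape[0])]

X_test = rng.normal(size=(5, 2))
print(predict(X_test))
```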
W.T. Chung, B. Akoush, P. Sharma, A. Tamkin, K.S. Jung, J.H. Chen, J. Guo, D. Brouzet, M. Talei, B. Savard, A.Y. Poludnenko, M. Ihme. Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data. Adv. Neural Inf. Process. Syst. (NeurIPS) 36, 2023. [Press]
*See my Google Scholar or my CV for a complete list.
I am grateful to my research collaborators at Stanford, SLAC, TUM, NASA, LLNL, Sandia, UniMelb, UConn, and PolyMontréal.
Thank you to the following funding sources for making my research possible: