I am a graduate student at the Stanford Institute for Computational and Mathematical Engineering. My research asks how language models represent and reuse features, using mechanistic interpretability tools such as sparse, interpretable representations to study hierarchy, absorption, and related structure. In Fall 2026, I will join Jump Trading as a Quantitative Researcher in New York.
Previously, I received an Sc.B. with Honors in Applied Mathematics–Computer Science and an A.B. in Mathematics from Brown University.
Transformers as Neural Operators for Solutions of Differential Equations with Finite Regularity
B. Shih, A. Peyvan, Z. Zhang, and G. E. Karniadakis
Temporal Learning Capacity of Transformers in Non-Markovian Dynamical Systems
B. Shih
At Stanford, I work in the DASH Lab with Eric Darve on mechanistic interpretability, with a focus on feature organization in language models. Previously, at Brown, I worked in the CRUNCH group with Zhongqiang Zhang and George Em Karniadakis on neural operators for differential equations.
Complete CV available upon request. Please contact me at benjamin.shih@stanford.edu.