Benjamin Shih

I am a graduate student at the Stanford Institute for Computational and Mathematical Engineering. My current research asks how language models represent and reuse features, using mechanistic interpretability tools, such as sparse, interpretable representations, to study feature hierarchy, absorption, and related structure. In Fall 2026, I will join Jump Trading as a Quantitative Researcher in New York.

Previously, I received an Sc.B. with Honors in Applied Mathematics–Computer Science and an A.B. in Mathematics from Brown University.

Research interests

  • Scientific machine learning
  • Mechanistic interpretability
  • Theoretical machine learning

Selected research

Transformers as Neural Operators for Solutions of Differential Equations with Finite Regularity

B. Shih, A. Peyvan, Z. Zhang, and G. E. Karniadakis

Computer Methods in Applied Mechanics and Engineering, Vol. 434, Article 117560, 2025.

Temporal Learning Capacity of Transformers in Non-Markovian Dynamical Systems

B. Shih

Senior Honors Thesis, Brown University, 2024.

Research overview

At Stanford, I work in the DASH Lab with Eric Darve on mechanistic interpretability, particularly around feature organization in language models. Previously at Brown, I worked in the CRUNCH group with Zhongqiang Zhang and George Em Karniadakis on neural operators for differential equations.

CV

Complete CV available upon request. Please contact me at benjamin.shih@stanford.edu.