Research

My research spans mechanistic interpretability as well as theoretical and scientific machine learning.

Research overview

In the DASH Lab at Stanford, advised by Eric Darve, I investigate feature organization in language models: how features form hierarchies, absorb one another, and trade off interpretability with model fidelity.

Previously at Brown, I worked in the CRUNCH group with Zhongqiang Zhang and George Em Karniadakis on neural operators for differential equations, including transformer-based operator learning in finite-regularity settings.

Publications

Transformers as Neural Operators for Solutions of Differential Equations with Finite Regularity

B. Shih, A. Peyvan, Z. Zhang, and G. E. Karniadakis

Computer Methods in Applied Mechanics and Engineering, Vol. 434, Article 117560, 2025.

Bachelor's thesis

Temporal Learning Capacity of Transformers in Non-Markovian Dynamical Systems

B. Shih

Senior Honors Thesis, Brown University, 2024.

Research experience

Mechanistic interpretability

Current research in the DASH Lab at Stanford, advised by Eric Darve. I study feature organization in language models, including hierarchy, absorption, and the interpretability tradeoffs of sparse representations.

Neural operators and scientific machine learning

Previous work at Brown on neural operators for differential equations with the CRUNCH group, advised by Zhongqiang Zhang and George Em Karniadakis.

GWAS of neurodegenerative diseases

Earlier research on genome-wide association studies (GWAS) of neurodegenerative diseases in the Wang Lab at the University of Pennsylvania, advised by Dr. Li-San Wang.

Talks