I am the CEO of a stealth startup in the data-centric AI space. If you are an experienced research scientist and/or engineer who wants to build tools to improve deep learning models by intervening on training data, please get in touch.
Previously, I was a senior staff research scientist at Meta AI Research (FAIR Team) in Menlo Park, working on understanding the mechanisms underlying neural network computation and function, and using these insights to build machine learning systems more intelligently. Most recently, my work focused on understanding properties of data and how these properties lead to desirable and useful representations. I have worked on a variety of topics, including self-supervised learning, the lottery ticket hypothesis, the mechanisms underlying common regularizers, the properties predictive of generalization, methods to compare representations across networks, the role of single units in computation, and strategies to induce and measure abstraction in neural network representations.
My work has been honored with Outstanding Paper awards at NeurIPS and ICLR, two of the top machine learning conferences. Before my time at FAIR, I worked at DeepMind in London.
I earned my PhD at Harvard University, working with Chris Harvey. For my thesis, I developed methods to understand how neuronal circuits perform the computations necessary for complex behavior; in particular, my research focused on how parietal cortex contributes to evidence accumulation for decision-making. As an undergraduate at UCSD, I worked with Rusty Gage to investigate the role of REST/NRSF in adult neurogenesis.
When I'm not working, I like to go on adventures with my wife, Julia, and our awesome dogs, Maui and Loki. I'm also a history buff and love learning about the history of science, the two World Wars, and the Cold War.
I have had the privilege of working with, learning from, and mentoring many talented students, engineers, interns, and residents: