I’m Sam 👋, a Postdoctoral Research Scientist at FAIR, Meta’s fundamental AI research group. I currently split my time between Paris, France and Cambridge, UK.
While machine learning is often thought of as a pipeline from data to decision, my research investigates the orthogonal pipeline from researcher decisions to impacts, both positive and negative, in broader society. The choices we make as researchers—how to collect data, which model architecture to use, how we evaluate performance or even how we define the question—result in different conclusions, different systems, and ultimately different outcomes for those our systems affect.
For example, my recent NeurIPS paper applied an approximated multiverse analysis to understand the effect of researcher choices on the robustness of scientific claims, and my recent FAccT paper, connecting contemporary models’ inductive bias towards simplicity to performance disparities, showed that the choice of model architecture can have real, yet unpredictable, impact. In broad strokes, my work aims to highlight the active and concrete role that researchers and model developers must play in creating fair and equitable machine learning systems, no matter how abstract or fundamental our research may seem.
In 2022 I completed my PhD in machine learning as part of the ML@CL group at the University of Cambridge, supervised by Prof. Neil Lawrence. Previously, I studied at The Alan Turing Institute, obtained a master’s in natural language processing at the Cambridge Computer Laboratory, and completed my bachelor’s in computer science at the University of Manchester.
I’m also the Founder and Chair of The Preptrack Foundation, a registered charity building technology for HIV prevention. Our first app, Preptrack, helps people who use PrEP, the medication that dramatically reduces the risk of HIV infection.
If you’re interested in learning more about what I do, or discussing research collaborations or opportunities, please do drop me a line. My email is samueljamesbell [at] gmail [dot] com. I’d love to hear from you.
Confusing Large Models by Confusing Small Models. In Out of Distribution Generalization in Computer Vision Workshop, ICCV, 2023.
Simplicity Bias Leads to Amplified Performance Disparities. In ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023.
Modeling the Machine Learning Multiverse. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
The Effect of Task Ordering in Continual Learning. 2022.
Behavioral Experiments for Understanding Catastrophic Forgetting. In AI Evaluation Beyond Metrics (EBeM) Workshop, IJCAI, 2022.
Perspectives on Machine Learning from Psychology’s Reproducibility Crisis. In Science and Engineering of Deep Learning (SEDL) Workshop, ICLR, 2021.