Sewon Min is an Assistant Professor in EECS at UC Berkeley, affiliated with Berkeley AI Research (BAIR), and a Research Scientist at the Allen Institute for AI. Her research lies at the intersection of natural language processing and machine learning, with a focus on large language models (LLMs). She studies the science of LLMs and develops new models and training methods, such as retrieval-based LMs, mixture-of-experts architectures, and modular systems, aimed at better performance, flexibility, and adaptability. She also studies LLMs for information-seeking, factuality, privacy, and mathematical reasoning. She has organized tutorials and workshops at major conferences (ACL, EMNLP, NAACL, NeurIPS, ICLR), served as a Senior Area Chair, and received honors including best paper awards, dissertation awards (among them the ACM Doctoral Dissertation Award Runner-up), a J.P. Morgan Fellowship, and selection as an EECS Rising Star. She earned her Ph.D. from the University of Washington and has held research roles at Meta AI, Google, and Salesforce.