Tatsunori Hashimoto

Assistant Professor, Stanford
Verified email at stanford.edu
Cited by 13,969

Fairness without demographics in repeated loss minimization

T Hashimoto, M Srivastava… - International …, 2018 - proceedings.mlr.press
Machine learning models (e.g., speech recognizers) trained on average loss suffer
from representation disparity: minority groups (e.g., non-native speakers) carry less weight in …

Emergent abilities of large language models

…, D Zhou, D Metzler, EH Chi, T Hashimoto… - arXiv preprint arXiv …, 2022 - arxiv.org
Scaling up language models has been shown to predictably improve performance and
sample efficiency on a wide range of downstream tasks. This paper instead discusses an …

Diffusion-LM improves controllable text generation

…, I Gulrajani, PS Liang, TB Hashimoto - Advances in Neural …, 2022 - proceedings.neurips.cc
Controlling the behavior of language models (LMs) without re-training is a major open
problem in natural language generation. While recent works have demonstrated successes on …

Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization

S Sagawa, PW Koh, TB Hashimoto, P Liang - arXiv preprint arXiv …, 2019 - arxiv.org
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet
consistently fail on atypical groups of the data (e.g., by learning spurious correlations that …

Learning population-level diffusions with generative RNNs

T Hashimoto, D Gifford… - … Conference on Machine …, 2016 - proceedings.mlr.press
We estimate stochastic processes that govern the dynamics of evolving populations such as
cell differentiation. The problem is challenging since longitudinal trajectory measurements …

On the opportunities and risks of foundation models

…, S Grossman, N Guha, T Hashimoto… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Holistic evaluation of language models

…, SM Xie, S Santurkar, S Ganguli, T Hashimoto… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …

AlpacaFarm: A simulation framework for methods that learn from human feedback

…, C Guestrin, PS Liang, TB Hashimoto - Advances in …, 2024 - proceedings.neurips.cc
Large language models (LLMs) such as ChatGPT have seen widespread adoption due to
their ability to follow user instructions well. Developing these LLMs involves a complex yet …

Benchmarking large language models for news summarization

…, P Liang, K McKeown, TB Hashimoto - Transactions of the …, 2024 - direct.mit.edu
Large language models (LLMs) have shown promise for automatic summarization but the
reasons behind their successes are poorly understood. By conducting a human evaluation on …

Whose opinions do language models reflect?

…, C Lee, P Liang, T Hashimoto - International …, 2023 - proceedings.mlr.press
Language models (LMs) are increasingly being used in open-ended contexts, where
the opinions they reflect in response to subjective queries can have a profound impact, …