Tatsunori Hashimoto, Assistant Professor, Stanford. Verified email at stanford.edu. Cited by 13,969.
Fairness without demographics in repeated loss minimization
T Hashimoto, M Srivastava… - International …, 2018 - proceedings.mlr.press
Machine learning models (e.g., speech recognizers) trained on average loss suffer
from representation disparity: minority groups (e.g., non-native speakers) carry less weight in …
Emergent abilities of large language models
Scaling up language models has been shown to predictably improve performance and
sample efficiency on a wide range of downstream tasks. This paper instead discusses an …
Diffusion-lm improves controllable text generation
Controlling the behavior of language models (LMs) without re-training is a major open
problem in natural language generation. While recent works have demonstrated successes on …
Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet
consistently fail on atypical groups of the data (e.g., by learning spurious correlations that …
Learning population-level diffusions with generative RNNs
T Hashimoto, D Gifford… - … Conference on Machine …, 2016 - proceedings.mlr.press
We estimate stochastic processes that govern the dynamics of evolving populations such as
cell differentiation. The problem is challenging since longitudinal trajectory measurements …
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Holistic evaluation of language models
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
Alpacafarm: A simulation framework for methods that learn from human feedback
Large language models (LLMs) such as ChatGPT have seen widespread adoption due to
their ability to follow user instructions well. Developing these LLMs involves a complex yet …
Benchmarking large language models for news summarization
Large language models (LLMs) have shown promise for automatic summarization but the
reasons behind their successes are poorly understood. By conducting a human evaluation on …
Whose opinions do language models reflect?
…, C Lee, P Liang, T Hashimoto - International …, 2023 - proceedings.mlr.press
Language models (LMs) are increasingly being used in open-ended contexts, where
the opinions they reflect in response to subjective queries can have a profound impact, …