Evaluating LLMs is a minefield
Annotated slides from a recent talk
We have released annotated slides for a talk titled Evaluating LLMs is a minefield. We show that current ways of evaluating chatbots and large language models don't work well, especially for questions about their societal impact. There are no quick fixes, and research is needed to improve evaluation methods.
The challenges we highlight are somewhat distinct from those faced by builders of LLMs, or by developers comparing LLMs when deciding which to adopt. Those challenges are better understood and are tackled by evaluation frameworks such as HELM.
You can view the annotated slides here.
The slides were originally presented at a launch event for Princeton Language and Intelligence, a new initiative to strengthen LLM access and expertise in academia.
The talk draws on the following previous posts from our newsletter: