Model alignment protects against accidental harms, not intentional ones
What the executive order means for openness in AI
How Transparent Are Foundation Model Developers?
Evaluating LLMs is a minefield
Is the future of AI open or closed? Watch today’s Princeton-Stanford workshop
One year update: book submitted; TIME 100; Sep 21 online workshop
Does ChatGPT have a liberal bias?
Introducing the REFORMS checklist for ML-based science
ML is useful for many things, but not for predicting scientific replicability
Is GPT-4 getting worse over time?
Generative AI companies must publish transparency reports
Three Ideas for Regulating Generative AI
Is AI-generated disinformation a threat to democracy?
Licensing is neither feasible nor effective for addressing AI risks
Is Avoiding Extinction from AI Really an Urgent Priority?
Quantifying ChatGPT’s gender bias
I set up a ChatGPT voice interface for my 3-year-old. Here’s how it went.
A misleading open letter about sci-fi AI dangers ignores the real risks
OpenAI’s policies hinder reproducible research on language models
GPT-4 and professional benchmarks: the wrong answer to the wrong question
What is algorithmic amplification and why should we care?
Artists can now opt out of generative AI. It’s not enough.
The LLaMA is out of the bag. Should we expect a tidal wave of disinformation?
AI cannot predict the future. But companies keep trying (and failing).
People keep anthropomorphizing AI. Here’s why