Is the future of AI open or closed? Watch today’s Princeton-Stanford workshop
By Sayash Kapoor, Rishi Bommasani, Percy Liang, Arvind Narayanan
Perhaps the biggest tech policy debate today is about the future of AI, especially foundation models and generative AI. Will AI be open or closed? Will we be able to download and modify these models, or will a few companies control them? The stakes couldn’t be higher. A closed path could lead to a concentration of power never before seen in the history of capitalism, especially if AI turns out to have a transformative effect on the economy. On the other hand, open models might be easier for bad actors to misuse.
Today, we’re hosting a workshop on the principles, practices, and policies for the responsible development and release of open foundation models (8 AM – 2:30 PM PT / 11 AM – 5:30 PM ET). You can watch the livestream at this link or via the embedded video below:
Of course, open vs. closed isn’t a binary. There are many possible ways to release models, and different developers will choose different release strategies. We organized this workshop because we believe that open models (models whose weights are downloadable) have a place in this mix, for many reasons that we will explore during the event. At the same time, we organized the workshop out of a belief that open doesn’t mean a free-for-all, and is in no way incompatible with responsible development and release.
When we started planning the workshop, we didn’t anticipate how timely it would be. We have an amazing set of speakers, including experts from different fields who have worked on responsible release strategies, risk mitigation, and policy interventions that can help. Join us and tell us what you think.