8 Comments
Aug 18, 2023 · edited Aug 18, 2023 · Liked by Arvind Narayanan

Getting models to avoid expressing opinions feels like the wrong strategy to me. The models still have the same weights internally, so their discussions of those topics will still have the same slant in their underlying analysis.

The problem becomes analogous to the challenges we're seeing with media organizations. Organizations don't explicitly state their point of view, but their articles still reflect it. Seeing a particular point of view in pieces that claim to be objective is part of what is eroding public confidence in these sources. And any point on the spectrum will seem "biased" to many people, so there's no way to avoid the issue other than to disclose the basis for a particular output.

On the flip side, I do not necessarily want a hamstrung, "both sides" presentation from a model; I want the actual conclusion. Given how many mundane topics have become political contests in our society, ruling out political opinions, if taken too far, results in a crippled model. Should a model be able to express an opinion on whether Covid vaccines are effective, and to what extent? That's now a political question. So is whether nuclear power helps mitigate climate change. So is whether we should build market-rate housing. To the extent any company succeeds in making its model avoid opining on topics like these, other models become more useful by comparison, which could have the unintended consequence of more people using the more opinionated models than otherwise would.


Clearest analysis of Gen AI bias to date. Love the three-level breakdown; it should help disambiguate general claims of bias. I love your overall mission to de-escalate polarizing rhetoric. Part of me wonders whether sound research will really have an impact, knowing the way things work in this country and our media. But I sure am rooting for you!

Aug 18, 2023 · Liked by Arvind Narayanan

Great text. I share the suspicion that many of these papers attempting to measure different types of bias are pretty dubious. I recently wrote a blog post about a similar topic; you might want to check it out: http://opensamizdat.com/posts/self_report/


My area of (limited) expertise is contemporary China.

ChatGPT is completely useless for such a contested subject. It's trained on State Department and NYT handouts, all of which have proven to be wrong. GIGO, I say, GIGO.


Hey Arvind, fascinating read. Your insights into ChatGPT's political leanings and its interaction with users shed light on an intricate topic. It's evident that the complexity of bias in AI models demands a multi-dimensional analysis, where prompts and user interactions play a pivotal role. Looking ahead, I believe research like yours will guide us toward a clearer understanding of AI's role in shaping discourse. Kudos.


Great essay. I explored the layers of bias in AI and took a slightly different but related approach that breaks them down into three layers:

1. Ethical Bias - The systems and structures that we think about as problematic

2. Data Bias - How we collect, what we collect, and how we curate data

3. Mathematical Bias - Simply put, AI is weighted math that reduces large data to outputs. It is, by definition, a form of bias (a concrete sketch follows below).

The key to eliminating bias is that we might have to apply bias to bias bias.

https://www.polymathicbeing.com/p/eliminating-bias-in-aiml
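
To make the third layer concrete, here is a minimal, hypothetical sketch in Python (the feature names and weights are purely illustrative, not taken from any real model): reducing many inputs to a single output requires choosing weights, and that choice is itself an emphasis.

def weighted_score(features, weights):
    # Reduce many input features to one output via a weighted sum.
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Two weightings of the same hypothetical data yield different outputs:
article = {"economic_framing": 0.8, "social_framing": 0.3}
emphasize_economics = {"economic_framing": 1.0, "social_framing": 0.1}
emphasize_social = {"economic_framing": 0.1, "social_framing": 1.0}

print(weighted_score(article, emphasize_economics))  # 0.83
print(weighted_score(article, emphasize_social))     # 0.38

Neither weighting is neutral, which is the sense in which the math itself carries bias.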


The reality is Republicans have been wrong about a lot of issues this century… I mean, Bush "won" in 2000 by advocating for Elián González to remain with his American kidnappers… and he won in 2004 by invading Iraq and making the election about foreign policy.


Why do the authors aspire to be the LK-99 of LLMs? I suppose any publicity is good publicity...
