Four more things we worked on in 2022
We’re grateful to you for reading this blog/newsletter. It’s made our book project much more rewarding.
We had a busy 2022. Here are links to things we worked on but didn’t cover here.
1. The reproducibility crisis in ML-based science. AI hype isn’t limited to commercial products. Researchers hype their results just as much. This has led to overoptimism about ML in many scientific fields, including medicine and political science. Over the summer, we organized an online workshop on the topic; more than a thousand people registered, and the YouTube livestream has been watched over 5,000 times. The talk videos, slides, and an annotated reading list are available on the workshop website. The event was covered by Nature News. Sayash gave an overview of our work on this topic in a talk at the Lawrence Livermore National Lab.
We have been leading an effort to create a set of guidelines and a checklist to help researchers make their ML-based research reproducible. Please reach out if you’re interested in a draft version.
2. The dangers of flawed AI. One type of AI raises particular ethical concern: systems that make decisions about people based on predictions of what they might do in the future. Examples include criminal risk prediction and some hiring algorithms. In a new working paper titled Against Predictive Optimization, we (along with Angelina Wang and Solon Barocas) challenge the legitimacy of these algorithms. Please reach out if you’re interested in a copy of the paper.
Arvind coauthored a book on fairness and machine learning. It is available online and nearing publication: all four peer reviewers strongly recommended publication, and we have sent the final version to our publisher, MIT Press. Building on some of the points in the book, Arvind presented a lecture/paper on the limits of the quantitative approach to discrimination.
3. Recommendation algorithms. Arvind is visiting the Knight First Amendment Institute at Columbia, where he is writing about recommender systems on social media — specifically, how they amplify some types of speech and suppress others. He is co-organizing a symposium on algorithmic amplification on April 27/28.
Arvind and Sayash are both on Mastodon. Although Mastodon’s lack of a recommendation algorithm appeals to many of its users, Arvind argues that algorithms aren’t the enemy and should be redesigned instead of abandoned. In another blog post, he explains why TikTok’s seemingly magical recommendation algorithm is actually nothing special, and its real secret sauce is something else.
4. AI hype: podcasts, radio, press quotes. Arvind talked about AI hype on a podcast with Ethan Zuckerman and in a CBC radio interview. Sayash talked about what AI can and can’t do in a KGNU radio interview. We were quoted on ChatGPT in various places, including the Washington Post, Nature, and Bloomberg. In our previous post, we explained why ChatGPT can be amazingly useful despite being a bullshit generator.
Happy new year!