Mar 3, 2023 · Liked by Arvind Narayanan

I appreciate your article, given my personal experience in an ongoing horror story; one that seems driven by an AI company's control over its corporate customers. This third-party company does pre-screening of potential tenants for hundreds of apartment complexes across the U.S. It is baffling to me that these large landlords allow a non-fiduciary to make decisions that affect their bottom line, all without oversight.

My spotless rental history, lack of criminal history, and lack of credit history (I'm sorry, I don't borrow money, nor do I owe anyone) have led the AI to deny several of my rental applications.

There is no human to give me answers, only that they "are required to go by what the AI tells them, without question." What if the AI is crazy? I've never once experienced the slightest hesitation in getting a rental before. This is as baffling as it is futile.

Disputing the AI's data and its decision to deny me, calls to property managers, and an inability to reach human managers who purposely hide from consumers have gotten me nowhere. It has now been six weeks of living out of a suitcase, jumping from hotel to Airbnb in a futile attempt to find a place to rent and call home. I'm going broke for no reason whatsoever. Attorneys will not take my case, and I'm left frustrated and baffled, without answers. It's heinous! It's infuriating and stupid. It is ludicrous to let AI ruin lives like this. Sad to be living in these awful times...


Thanks for starting this newsletter; I'm looking forward to the book.

What is missing for me (at least for now) in this sneak peek is the connection with the social and behavioural sciences, and some historical background. I know Dr. Narayanan and others are working with social science experts on this, so that is definitely the right approach. I hope the book project will reflect that.

The narrative used by the media is one of the components. I did a short piece on this in my newsletter (https://sadnewsletter.substack.com/p/from-snake-oil-to-theranos) a while back, thinking about snake oil and myth from a socio-religious-historical perspective. Yes, the power of the media helps perpetuate the myth, and we are dazzled by the hype, but there is some structural and historical basis for it. I agree that, as with the bias issue, "a similar shift needs to happen about AI's accuracy," and of course "companies deploying AI need to improve their standards for public reporting." But we still see considerable investor attention to such AI hype. The media is amplifying it, no doubt, but that is not happening in a vacuum.

Great work!


Can you please share the table of contents of the book?


Excellent read, and thank you, Arvind and Sayash, for making the effort to reach out to people and educate them about AI.

Carl Sagan put it rightly when he said, "We are born curious."

I cannot put my finger on this inexplicable change: as we grow older, for most of us, curiosity turns into a sense of dazzlement, and we let go of the cynicism and skepticism that help our minds filter out the myths peddled around us. We accept things and notions too easily!

AI, at this point in time, is in its nascent stages, and as such it must be tended to carefully and developed with the right goals, in a landscape with the proper parameters and checks in place, so that it becomes not just another piece of technology jargon that corporations can peddle to sell their products, but something that helps solve humankind's problems and enhances the overall experience.

And each one of us must contribute.

By the way, where do I read the rest of the book?


This is all great stuff, but I want to point out that, at least according to one study, the COMPAS model is equally well calibrated for Black and white defendants. There can be a trade-off between equality of calibration and equality of false positive rates: https://arxiv.org/abs/2007.02890

Which of the two matters more depends on the specific application.
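The trade-off above can be seen on toy data: a model that is perfectly calibrated for two groups with different base rates will generally produce different false positive rates at the same decision threshold. Here is a minimal sketch in Python, using synthetic data (not COMPAS); the group base rates, score noise, and threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    # Each person's true risk is drawn around the group's base rate; the
    # "model score" equals that true risk, so scores are calibrated by
    # construction: P(outcome | score = s) = s.
    risk = np.clip(rng.normal(base_rate, 0.15, n), 0.01, 0.99)
    outcome = rng.random(n) < risk
    return risk, outcome

def fpr(scores, outcome, threshold=0.5):
    # False positive rate: fraction of true negatives flagged as high-risk.
    negatives = ~outcome
    return np.mean(scores[negatives] >= threshold)

# Two hypothetical groups with different base rates (0.3 vs. 0.5).
risk_a, y_a = simulate_group(100_000, 0.3)
risk_b, y_b = simulate_group(100_000, 0.5)

# Calibration check: mean predicted risk matches the observed rate in
# both groups, so the model is equally well calibrated for each.
print("calibration gap A:", abs(risk_a.mean() - y_a.mean()))
print("calibration gap B:", abs(risk_b.mean() - y_b.mean()))

# Yet at the same threshold, the higher-base-rate group has a markedly
# higher false positive rate.
print("FPR A:", fpr(risk_a, y_a))
print("FPR B:", fpr(risk_b, y_b))
```

Both calibration gaps come out near zero while the false positive rates differ substantially, illustrating why one cannot generally equalize both metrics at once when base rates differ.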


Good essay. One issue is that unmeasurable qualitative factors are often more important than quantitative, measurable ones, but we rely on those we can measure because they are all the data we have. It's the fallacy of the drunk looking for his keys under the lamppost (not where he dropped them). If we could really measure grit, determination, conscience, integrity, and other human qualities, they might predict GPA better than less important things like parental income. If people noticed that failing to measure qualitative factors is itself a source of bias (not just racial bias), they would ease up on this fallacious thinking and probe deeper into qualitative measures.


We work at evaluating AI companies to determine whether they can stay the course, with brand security, value to their market segment, and defensible technology. I look forward to more information on this topic. We are at www.thevineyardgroup.net
