3 Comments

Unfortunately, I think the last decade proved to business folks that transparency is a trap: Facebook got raked over the coals for being transparent, while Google skated by despite YouTube being a cesspool.

"ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so."

This is exactly why any AI transparency report will fall short.

Quibbles about the perfect being the enemy of the good will be answered with fines that are simply a cost of doing business. The enshittification of everything will continue unimpeded.

No, no, no… Users of AI must learn that the answers given are not intelligent. They are the equivalent of asking your family members for advice and averaging their answers, or of running a vox pop.

AI depends on training data, and high-quality training data does not appear out of thin air. Whoever trains the machine learning model must know what good data is. With the current LLMs, that seems unrealistic: the number of areas the model tries to cover is huge, and there will always be a risk of very crappy answers. Users have to be aware of this and not trust the answer any more than an answer from someone they just met.
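
A toy sketch of that "ask the family and average the answers" analogy (not how any real LLM works; the question, the opinion pool, and the vox_pop function here are made up purely for illustration): a majority vote over a pool of canned opinions returns a confident answer that is only as good as the pool it came from.

```python
import random
from collections import Counter

# The "family" is a pool of canned opinions (a stand-in for training data).
# The "model" just samples from that pool and reports the most common answer.
# If the pool is mostly wrong, the averaged answer is confidently wrong too.
family_opinions = {
    "Is the Great Wall visible from space?": ["yes", "yes", "no", "yes"],  # mostly a myth
}

def vox_pop(question: str, pool: dict[str, list[str]], samples: int = 100) -> str:
    """Sample opinions from the pool and return the majority answer."""
    votes = Counter(random.choice(pool[question]) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(vox_pop("Is the Great Wall visible from space?", family_opinions))
# Likely prints "yes" -- confident, averaged, and wrong, because the pool was wrong.
```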
