Discussion about this post

Michael House:

What will slow research isn’t AI. It’s the flood of preprints being treated like peer-reviewed work across AI and computer science. Right now, an undergrad with a Canva poster and a faculty sponsor can push out ten preprints in a semester and get them cited like they’ve reshaped the field. OSF allows researchers to delete preregistrations, which sounds harmless until it’s used to quietly erase bad or fraudulent work. If something gets flagged, it’s gone. No history, no accountability. That’s a perfect setup for bad actors.

And we still haven’t dealt with the reproducibility crisis. We didn’t fix it. We just buried it under buzzwords, hype, and career incentives. Simultaneously, we are using completely broken scientific metaphors to justify AI architectures. We’re still pretending spiking neurons are equivalent to RNNs. That synaptic noise is optimization. That the behavior of starving mice tells us how humans think. These comparisons aren’t science. They’re branding.

Research architectures are more expensive, more power-hungry, and more opaque than ever. Despite the lack of a clear path to profitability, AI continues to consume billions of dollars in funding. The hype keeps growing, and the work that gets amplified often prioritizes speed, clout, and marketability over real understanding.

AI isn't a threat to science. The hype is. The culture around it is. The people enabling it are.

Leif Hancox-Li:

See this paper, which claims that the results from Park et al. are due to a plotting bug: https://arxiv.org/abs/2402.14583
