Ethan Marcotte, in his beautiful essay “Against the protection of stocking frames”:
[D]escribing “AI” as a failure doesn’t mean it doesn’t have useful, individual applications; it’s possible you’re already thinking of some that matter to you. But I think it’s important to see those as exceptions to the technology’s overwhelming bias toward failure. In fact, I think describing the technology as a thing that has failed can be helpful in elevating what does actually work about it. Heck, maybe it’ll even help us build a better alternative to it.
So much of language is being used to dismantle what’s good, to twist words into their opposites. I love the idea of doing the same to diminish and dismantle what’s deranged, and ultimately to use language to reshape it: sift away the crap and the crud, and hopefully retain whatever can be salvaged, if anything.
[T]he ubiquity of LLMs is another sign of the technology’s failure. It is not succeeding on its own merits. Rather, it’s being propped up by terrifying amounts of investment capital, not to mention a recent glut of government contracts. Without that fiscal support, I very much doubt LLMs would even exist at the scale they currently do.
[…]
[T]he most consistent application of LLMs at work has been through top-down corporate mandate: a company’s leadership will suggest, urge, or outright require employees to incorporate “AI” in their work.
What irritates me even more is that work environments that aren’t, or shouldn’t be, inherently corporate (higher education, for one) are blindly following the same patterns as the corporate world, jumping on this trend without doing a shred of research first. Worse: the research they do conduct seems plagued by confirmation bias, and exists only to justify the choice of putting “AI” everywhere. FOMO is an ugly beast.
I’ve heard repeatedly about a kind of stifling social pressure: an implicit, unstated expectation that “AI” has to be seen as good and useful; pointing out limitations or raising questions feels difficult, if not dangerous.
My tactics to fight this include avoidance and varying degrees of explicit skepticism, which only scratch the surface of my true feelings about all this.
But here’s the thing: this is a success only if tech workers allow it to be. I’m convinced we can turn this into a failure, too. And we do that by getting organized.
That’s the hard part for me, for many reasons. One day I’ll get into those.