the sophistry of sore winners

2025-06-04

this post was previously published under another title, but i have expanded it substantially. so, it’s been republished with a fresh timestamp.


have you read neil postman’s “five things we need to know about technological change”? i first encountered this piece in a society-and-technology course i took in undergrad (i am very interested in the study of sociotechnical systems; but also, i had to).

it’s a good read, and a short one; i encourage you to read through all of it. i wanted to zoom in on his second idea and quote a bit:

This leads to the second idea, which is that the advantages and disadvantages of new technologies are never distributed evenly among the population. This means that every new technology benefits some and harms others.

[…]

And now, of course, the winners speak constantly of the Age of Information, always implying that the more information we have, the better we will be in solving significant problems–not only personal ones but large-scale social problems, as well. But how true is this? If there are children starving in the world–and there are–it is not because of insufficient information. We have known for a long time how to produce enough food to feed every child on the planet. How is it that we let so many of them starve?

[…]

That is why it is always necessary for us to ask of those who speak enthusiastically of computer technology, why do you do this? What interests do you represent? To whom are you hoping to give power? From whom will you be withholding power?

neil postman gave this talk in 1998. i don’t want to, umm, glaze him too much. i quite like recipe websites and youtube tutorials and online banking1. they are genuine quality of life improvements! plus, neil postman was clearly a little too obsessed with television.

and yet, i can’t stop thinking about this second idea of his, because it pertains to how people talk about technological change.


i don’t recall much about the first computer revolution, but i do recall the second. i was just a kid. i know there was a lot of worry about social media and facebook among adults, and i know i was very eager (if anxious) to sign up for a facebook account. i liked a lot of early “web 2.0” (i still keep a last.fm account around). i didn’t get the point of twitter until it was on the verge of disappearing2.

“technology” (networked computer technology is perhaps a more precise term) just wasn’t as central to our lives back then. but now it is essential.

let’s imagine 2009, which is when i reckon social media really took off. it was marketed as this fun thing, and back then people did have concerns. here’s an article from the bbc published in 2007 with a quote from a critic of social media:

[…] Om Malik, an influential blogger, wrote: “This is yet another small step in the overall erosion of personal privacy.”

“We are slowly leaving digital litter all over the web, and some day it is going to cause problems.” source

hmm. i wonder how his prediction panned out.

anyway, facebook was being marketed as this harmless fun thing that would help you connect with others. now imagine if the people who ran social media were saying instead, “get with the program and get on social media, losers, or you basically won’t matter”.

if you believed you had more to lose than win, how would you react?


i’ve been reading a lot of discussion from influencers3 in the tech industry who believe they will win from generative AI, particularly with regard to software engineering. these discussions follow a similar pattern:

those aligned with the winners4 seem unwilling to address critique. their arguments are logically unconnected5, and it seems to me that they acknowledge the concerns mostly in order to say that it’s time to move on.

sometimes these posts are clothed in reasonable tones; sometimes they come off as unfathomably arrogant. (when pushed, some might even claim that no one ever actually cared about technological ethics.)


hostility is an unproductive and irrational emotion, i know. it’s a lose-lose situation; neither party feels good after the fact. i myself have learned to not let it affect me. most people have to, to some degree, some more than others.

but i find it especially worrying that proponents of this technology acknowledge that substantive critiques of generative AI exist and yet continue to equate them with random drive-by internet harassment.

incidentally, we now know that hostility is actively facilitated by social media, whose algorithms are designed to provoke exactly these emotions because doing so increases engagement.6


keep in mind: we have been talking about winners and losers, but this is a rigged game. technological change is constantly pushed and bandied about by prominent CEOs and politicians as the next great thing, as inevitable as water evaporating when it’s hot outside, even as they claim that this technology will cause massive societal shift and mass job loss.

i know that there is a frenetic pace of development as i write this, especially pertaining to software tooling. but you would have to possess the short-term memory of a goldfish cracker to think that the introduction of a shiny new tool negates concerns about the underlying technology’s longitudinal impact. that is what substantive critiques of generative AI are worried about, and by definition we will not see a clear picture of those long-term effects for many years—yes, even about efficacy!

i do not know what the future will bring. but i’m not going to pretend to be a neutral player here; i believe the long-term consequences of generative AI are bad for us. i can attempt to predict from past results.7 so here’s neil postman again, talking about his critiques of the computer age:

But to what extent has computer technology been an advantage to the masses of people? […] These people have had their private matters made more accessible to powerful institutions. They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them. They are more than ever reduced to mere numerical objects. They are being buried by junk mail. They are easy targets for advertising agencies and political institutions.

is it not true that our private lives (much of which is spent online) are easily tracked? that we are subject to the profiling and targeting of algorithms whose inner workings are still unknown even to machine learning researchers? that we have become more reduced to numbers? that we encounter an unfathomable amount of junk content online? god, we don’t even know how much we lost.8

so why not extrapolate from the damage that has already been done by LLM-based technology? that is not the point of this post, but i feel obligated to elaborate. here is what we know has happened: LLM has been used to automate unethical influence operations. it has induced psychosis among its users. it has been used to dehumanize and demean9. and every chatbot query funnels money into the hands of billionaires publicly aligned with fascists. i choose to focus on these events for their tangible impact, and i assume that a future with LLM will generate further events like the ones we have already seen. on top of the deleterious effects i have seen from past technological adoption, this has convinced me to resist and oppose LLM, regardless of its apparent utility10, especially as i feel it is being coerced onto me.11 furthermore, it would be daft for me to see how terrible this technology has been for virtually every knowledge and creative domain and then assume that any one of them is somehow exempt.


i understand why some people are freaking out when they hear the message that by refusing this technology, you will lose your livelihood.12 when avid users of this technology respond by talking past this fear, by making yet another sales pitch, and, when that fails, by deeming the entire discussion tainted and hostile—i’m sorry. you can’t just prompt your critics into agreeing with you.

in the meantime, i know where i stand in all this. i have decided i will count myself among the losers, because i have seen the thought process that goes into convincing yourself you’re the winner.


  1. i remember as a kid just going along with my parents as they queued to update their bank balance in their paper passbook, which they would do by inserting the passbook into a machine that printed account updates onto its pages. it was annoying to wait around, for sure. but! i loved seeing that machine go. it felt like magic to me. i don’t think any such machine exists anymore.↩︎

  2. the best statistic i can find claims that in february of 2025, X—the social media website that shares the same source code as twitter but with a dramatically divergent vision and mission—had 586 million active users. that’s 7% of the world. that number pales in comparison to tiktok (1.5 billion, or 18% of the world) and instagram (2 billion, 25% of the world). facebook has an astonishing 3 billion people on it per month—37.5% of the world!↩︎

  3. it’s not just one person. really, it isn’t. this is a pattern.↩︎

  4. recall again neil postman: the real winners of any technological change will go to great lengths to convince the losers they are actually winners.↩︎

  5. it would be as if you were to outline your reasoning, arrive at a conclusion completely unconnected from that reasoning, and then become so embarrassed that you hide the reasoning away so no one can see it.↩︎

  6. i will give you a source for this: here. or, talk to any out trans person online. or, see how Facebook facilitated a genocide. or, just think about it. has a post you encountered on social media made you angry recently? anyway, it doesn’t really matter, right? just touch grass! social media is just a tool.↩︎

  7. some proponents of generative ai would say that that’s all intelligence is.↩︎

  8. this is neil postman’s fifth point, that technology, once integrated, becomes mythic and difficult to critique. it seems that some are eager to accelerate the validation of his thesis.↩︎

  9. image and video generators, including ChatGPT’s image generator, use LLM to facilitate their generation. see a post by ethan mollick, who explains the sophistication of contemporary, LLM-guided image generation.↩︎

  10. my friends, whom i respect deeply, have a variety of opinions on the actual utility of LLM tech. some think it is quite useful; others think it is okay; others think it is detrimental. i do not, in fact, condescendingly wave off the opinions of my friends when they merely disagree with me.↩︎

  11. i believe that AI itself should be read as a political system that reifies the ability of political powers to inflict harm. whether or not you agree with this reading, i like the article “defining ai” by ali alkhatib, which elaborates further.↩︎

  12. by the way, critical views of any technological change are in the vast minority. from what i can tell, most people are more checked out than freaked out.↩︎