AI as the enshittification of human knowledge? Think again.
Lately my LinkedIn and X feeds have become increasingly populated by people who argue that large language models like ChatGPT merely regurgitate the "average" of human knowledge.
The criticism often comes bundled with concerns about "enshittification" — the idea that as AI systems scale and dominate, they homogenise and degrade content, reducing the rich complexity of human creativity into bland mediocrity.
But is this perception accurate?
And does it underestimate what these models truly do?
I'd argue it isn't accurate, and that it grossly underestimates what LLMs actually do.
To tackle the misconception, I think we first need to dissect what it means for AI to generate insights, and examine how these systems handle data...
The "enshitification" argument is shallow one
Critics of generative AI often suggest that LLMs operate as sophisticated parrots, reflecting the most common, mediocre elements of human knowledge.
This is what some call the "enshittification" phenomenon, where algorithms prioritise popular (or easily monetised) content, eroding diversity and quality.
At first glance, the criticism seems plausible.
LLMs are in fact trained on vast datasets of human language and tasked with predicting the next word in a sequence.
So I can see how people could jump to the conclusion that LLMs therefore default to regurgitating the “average” of what they’ve seen.
It's not the case though.
The "enshittification" argument confuses how LLMs are trained with how they perform.
Training on diverse data doesn’t constrain these systems to mediocrity.
In fact, their design enables them to surface patterns of thought, synthesise high-quality insights, and provide tailored responses.
To understand why, we next need some statistical thinking. (I promise to make it as non-technical and interesting as possible)...
Mode vs. average & where AI truly operates
In statistics, the "average" that people mean in everyday vocabulary is usually the mean.
The average smooths out peaks and troughs, blending everything into one undistinguished midpoint.
On the other hand, the "mode" reflects the most frequent value within a dataset.
As mentioned above, generative AI critics often suggest that LLMs just parrot back the "average" of human knowledge.
LLMs, in fact, operate much closer to the "mode" than the "average".
(If you're keeping score at home, score a point for me here, although I concede this is largely semantics, so it's a minor point. But a point nonetheless.)
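To make the difference concrete, here's a minimal Python sketch. The scores are invented purely for illustration:

```python
from statistics import mean, mode

# Invented "quality scores" for answers to a question in a training set.
# Most answers cluster at a high score, with a few poor outliers.
scores = [9, 9, 9, 8, 9, 2, 3, 9, 8, 1]

print(mean(scores))  # 6.7 -> dragged down towards the outliers
print(mode(scores))  # 9   -> the most frequent (and here, best) value
```

An "averaging" system would land on the mediocre 6.7; a mode-seeking one surfaces the 9s.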
When you make a nuanced (nuanced being the most important word here) request about a complex topic — say, strategies for scaling an SME or ethical AI deployment — the system doesn't simply "average out" responses from its training data.
Instead, it identifies the dominant patterns of high-quality insights, synthesising a response based on what aligns most closely with your nuanced intent.
Why do I place importance on giving your LLM a "nuanced" instruction or enquiry?
Because if you give an LLM a bland request that lacks detail, it will respond with the mode of the information it has encountered in its training data for that bland topic definition.
Whereas if you give it a nuanced, detailed request, it will respond with the mode of the information it has encountered for that nuanced and detailed topic definition.
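This is only a cartoon of what happens inside a model, but a short Python sketch (with made-up response themes and frequencies) captures the idea: the same mode-seeking behaviour gives very different answers depending on which slice of the training data your prompt points at:

```python
from collections import Counter

# Hypothetical frequencies of response themes the model has "seen",
# conditioned on two different prompts about the same topic.
bland_prompt = Counter({
    "generic listicle advice": 50,
    "marketing fluff": 30,
    "deep operational insight": 20,
})
nuanced_prompt = Counter({
    "deep operational insight": 45,
    "industry-specific case study": 40,
    "generic listicle advice": 15,
})

# most_common(1) returns the mode of each conditional distribution.
print(bland_prompt.most_common(1))    # [('generic listicle advice', 50)]
print(nuanced_prompt.most_common(1))  # [('deep operational insight', 45)]
```

Same mechanism both times; the detail in your prompt is what changes which mode gets surfaced.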
It’s like Spotify’s curated playlists...
Spotify doesn’t just recommend the "average" of the songs everyone in the world listens to. It analyses the mode of your specific preferences and blends them with relevant tracks favoured by algorithmically similar listeners, to craft a playlist that feels personal and meaningful.
In this case, by analysing your specific preferences, Spotify gathers its own nuance and detail about what you want.
The TL;DR of my Spotify analogy...
It would be unreasonable to expect Spotify to give you high-quality recommendations three seconds after you open your account for the first time.
It simply wouldn't have any nuanced or contextual knowledge about what represents "great" content for you as an individual.
It would also be unfair in that scenario to say: "Therefore, Spotify represents the enshittification of music."
Similarly, without giving your LLM chatbot enough nuance and contextual knowledge in your prompt, it doesn't have enough pointers towards what represents a "great" reply for you as an individual, relating to your specific task.
It is similarly unfair in that scenario to blame a poor LLM response on AI "enshittification" of human knowledge.
Contextual relevance is the secret ingredient
To double down on my point above, and to bring the Spotify analogy into a business context, I'll repeat that one of the greatest strengths of LLMs is their ability to provide contextually relevant responses.
Unlike static encyclopaedias or keyword-based search engines, these models synthesise insights dynamically, often combining information across disciplines to create a response that’s tailored to the user’s needs.
Let's consider a practical example...
A warehouse manager asks an AI model:
“How could I potentially integrate AI into my logistics operations? Here is our company website [inserts URL] so you can read about the specifics of our operation, customers, and company mission. Also, my top 3 challenges for my team this coming financial year are [insert challenges]”
Now, far from just echoing the average of generic advice, the AI will likely combine data from operations research, real-world case studies, and emerging technologies to deliver a precise and actionable plan.
This level of synthesis requires far more than an "average of existing knowledge" — the LLM will construct a response that is both personalised and contextually aligned.
This capacity stems from the AI’s ability to process vast quantities of information while discerning patterns that matter.
By tailoring outputs to specific needs, these LLM systems operate at a level far beyond mediocrity.
"Sh#t, did I add 3 or 4 pinches of 'context', the secret ingredient???"
AI as a reflection of high-level thinking
Don't forget that frontier LLMs are trained on some of the most profound insights humanity has produced, from academic papers to thought leadership articles.
When prompted effectively, they don’t just summarise information; they can extract and amplify the most impactful threads of human thought.
For example, asking an LLM about the ethical implications of AI deployment in hiring practices would yield a synthesis of philosophical perspectives, legal considerations, and real-world examples.
The response may cite principles from ethical AI frameworks while offering actionable advice for balancing fairness with efficiency.
This isn’t a reflection of the “average” but of the highest calibre of available information: far better thought assistance than the "average" person could hope to have at their fingertips within an "average" network of humans around them.
Human interpretation as the missing link
Of course, LLMs are not infallible. They are not even close to it.
While they excel at surfacing and synthesising high-quality insights, the application of those insights relies on human discernment.
A Spotify playlist might surface hidden gems you’d never have found on your own, but it’s up to you to decide which songs resonate most with you. And which to ignore.
Similarly, LLMs provide cognitive guidance and excellent first drafts, but they shouldn't be considered final answers.
I recommend that business leaders in New Zealand who are using AI approach it as a collaborator, not a replacement for critical thinking.
This means verifying outputs, applying domain expertise, and adapting insights to fit the unique nuances of their circumstances.
Challenging the narrative
The idea that LLMs reflect the “average” of human knowledge does these tools a great disservice.
While they’re trained on broad datasets, their outputs are anything but generic.
By leveraging statistical insights, contextual relevance, and high-level thinking, LLMs can operate more like curators of knowledge than blind imitators.
Just as Spotify’s playlists feel personal and bespoke, the outputs of an LLM — when used effectively — feel tailored and insightful.
The important part is understanding their strengths (what is often called "AI literacy"), applying human discernment, and moving past misconceptions about their capabilities.
Far from "enshittifying" the world, I see LLMs as incredible tools to surface and amplify the best of what humanity has to offer.
So, next time someone dismisses AI as a generator of "averageness", I encourage you to challenge them.
Just like a well crafted playlist that hits the spot exactly when needed, these systems might surprise them with just how nuanced and high-quality the outputs can be.
Got something to add? Chime in below...