AI applications are becoming increasingly important in our daily lives, but even as they help humans become more productive, they carry a dangerous potential downside – the possibility of a “Knowledge Collapse.”
Highlights:
- Knowledge Collapse refers to the dangerous narrowing of available information over time.
- AI’s tendency to amplify mainstream ideas risks neglecting valuable fringe knowledge.
- Over-reliance on AI curation could degrade the diversity of knowledge and ideas in society.
What is a Knowledge Collapse?
Knowledge collapse refers to the risk that as humans become overly reliant on AI systems, the breadth of information and perspectives we are exposed to could progressively narrow over time.
Over time, this would degrade the diversity of knowledge and ideas in society.
A paper by Andrew J. Peterson at the University of Poitiers introduces the concept of knowledge collapse as the “progressive narrowing over time of the set of information available to humans, along with a concomitant narrowing in the perceived availability and utility of different sets of information.”
How can a Knowledge Collapse occur?
AI models are trained on massive datasets and aim to approximate the “center,” or most common patterns, in that data. Because of this, an AI system tasked with summarizing information on a topic will tend to present the most widely represented mainstream views or ideas, while neglecting niche perspectives, contrarian ideas, and “long tail” knowledge outside that center.
This means we risk losing touch with the wild, unorthodox, and unique ideas on the fringes of knowledge that often spark novel discoveries and inventions.
If we increasingly get our information and thoughts filtered through AI systems, over time these systems could reinforce a narrow cluster of already-popular ideas from their training data.
Along with this, the long tails of fringe but potentially important knowledge get cut off and forgotten. This leads to a loss of creativity, critical thinking, and knowledge that could have significant implications in the coming years.
This occurs because a process akin to “majority rule” is taken to the extreme: mainstream views get amplified through AI curation, while unconventional and dissenting perspectives are quietly excluded, even when some of those ideas could be of great value.
Thus, AI could homogenize our collective knowledge and constrain the boundaries of what we consider possible to know.
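The narrowing dynamic described above can be illustrated with a toy simulation. This is a simplified sketch for intuition, not the model from Peterson’s paper: a pool of “ideas” is repeatedly curated by keeping only the values closest to the center, and the next generation is resampled from that curated pool. The function names and parameters (`curate`, `keep_fraction`) are illustrative assumptions.

```python
import random
import statistics

def curate(ideas, keep_fraction=0.8):
    """Keep only the most 'mainstream' ideas: those closest to the mean.
    This mimics an AI curator that favors the center of its training data."""
    center = statistics.mean(ideas)
    ranked = sorted(ideas, key=lambda x: abs(x - center))
    return ranked[: int(len(ideas) * keep_fraction)]

def next_generation(ideas, n):
    """Resample the curated pool, mimicking the next model being
    trained on AI-filtered content."""
    return [random.choice(ideas) for _ in range(n)]

random.seed(0)
n = 10_000
# Start with a diverse pool of ideas (standard deviation ~1.0).
ideas = [random.gauss(0, 1) for _ in range(n)]

for generation in range(10):
    ideas = next_generation(curate(ideas), n)

# After repeated curation cycles, the spread of surviving ideas shrinks
# sharply and extreme 'fringe' values disappear from the pool entirely.
print(round(statistics.stdev(ideas), 3))
```

Each round trims only a modest 20% of the tails, yet the compounding effect over a few generations collapses almost all of the original diversity – which is exactly the “progressive narrowing over time” the paper warns about.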
What is at Stake?
One significant risk is diminished innovation and scientific progress. Truly groundbreaking ideas often seem fringe or even crazy at first before being validated.
If we neglect exploring the fringes of knowledge, we may miss the next great breakthrough. As the paper puts it, “the resulting curtailment of the tails of human knowledge would have significant effects on…lost gains in innovation.”
Additionally, knowledge collapse could contribute to reduced cultural diversity. Much of the richness of human culture lies in the diversity of beliefs, traditions, languages, and artistic practices across societies. If AI systems provide only the most popular mainstream views, they could accelerate the extinction of indigenous knowledge systems.
If only mainstream ideas dominate what we know, those ideas could become separated from the rich cultural backgrounds and historical contexts that originally gave rise to them. This would weaken our complete understanding of what those ideas truly mean.
Another major concern is a lack of resilience. With knowledge becoming increasingly centralized around a core of mainstream ideas, humans would be ill-equipped to respond to unexpected or “black swan” events that fall outside of our narrowing knowledge or “epistemic horizon.”
Unconsidered perspectives or ideas may prove vital when an unknown situation is encountered. Rather than creating harmony, knowledge collapse could also have political implications: homogenized information flows could amplify political polarization by pushing everyone toward simplistic populist views and away from a deep understanding of complex issues.
Conclusion
Although AI plays a crucial role in today’s world, its usage presents risks that we must proactively guard ourselves against. Avoiding a “knowledge collapse” is central to ensuring that AI remains a resource for expanding rather than contracting the horizons of human understanding.