Ignorance Part 1
Jun 21, 2025

Throughout human history, we ourselves have been the most complex systems in the known universe, even as our knowledge of that universe expands rapidly. Yet we remain largely ignorant of how exactly it is that we function. Of course, we’ve managed to glean some manner of insight over twenty thousand years of ‘progress’, which today affords us myriad medical luxuries unthinkable to the denizens of yesteryear. Yet experts attest that we’ve only scratched the surface. Despite our immense ignorance of the biological systems that underpin all of our lives as human beings, we are able to continue functioning.
This gap in understanding might sound familiar. Today, we seem to be fashioning systems which are quickly approaching our own immense complexity, or which are, at the very least, approaching a boundary where they become unintelligible to the vast majority of individuals. For example, very, very few people can claim to understand the global financial system in some abstraction of its entirety. That will not be the case for much longer, by which I mean that there will soon be none. Perhaps this gives you the impression that things are getting out of hand? Maybe, but remember, we’ve been living in the shadow of ignorance for millennia, and with what can only be called evolutionary success, given that our collective existence soldiers on to this day. Seeing as we are on the cusp of building a new generation of unintelligible systems which will inevitably underpin our livelihoods, it follows that ignorance should be treated as a design principle, just as nature seems to have done in crafting us.
This thought itself is not revolutionary; every day, we interact with systems beyond our individual comprehension, and the simple fact that these interactions can occur indicates that ignorance is actively being used as a design principle, distilling complexity into a usable form. However, where these concepts find some novel light is in the development of autonomous systems. Ignorance as a design principle currently rests on two core assumptions: first, that the user has no knowledge of the underlying system, and second, that an individual or group exists which does have complete knowledge of it. As long as the developers understand what they’re doing, the system is maintainable and can be reduced to a form the lay individual can interact with. But what happens when both the users and the architects are ignorant? How can we build systems under the assumption that everyone is ignorant of them?
In many ways, this is what’s beginning to happen in the face of everyone’s favourite rapidly expanding industry, Artificial Intelligence. I don’t claim that the developers behind the models have no clue what they’re doing; that would be incorrect. However, AI engineers themselves will admit that they’re not quite sure why it works. Before the launch of ChatGPT, the received wisdom was that model training followed a U-shaped curve: train too little and the model won’t accurately separate the datapoints; train too much and it will overfit the training data and be unable to make accurate predictions on anything new (this is a rather simplistic explanation, but there are plenty of great books which discuss the arc of machine learning research and which I am, alas, unable to improve upon). The Fleming-like moment that led to our darling ChatGPT was when a researcher at OpenAI left a model training over their holiday (a mistake which should not be repeated today, as it would cost the company hundreds of millions of dollars), training far past the point where overfitting should have set in, to a point where the model seemed to unexpectedly develop a deeper understanding of the training data on a fundamental level, which, for unknown reasons, allowed it to make superior predictions. Something about this understanding remains unintelligible to us, and while we’ve made some inroads into the truth behind this brute-force scaling solution, we continue to build more capable systems at a much faster pace than our knowledge can develop.
So, in the light of our ignorance, how can we make like nature, building robust and fault-tolerant systems beyond our own explicit understanding? The myriad challenges and contradictions present in this question will be the object of my next few blog posts.
(Photo credit @DALL-E 2025)