The Principle of Incomplete Knowledge
“All models are wrong, but some are useful”
— George Box
If you think of the complicated world we live in, you quickly realize that we need to sort the inessential from the essential and reduce complexity into something simpler. Just as the map is not the territory, knowledge is only a subset of what it represents.
In “Why knowledge is incomplete”, the authors elaborate on these ideas:
This principle can be deduced from a lot of other, more specific principles:

- Heisenberg’s uncertainty principle, implying that the information a control system can get is necessarily incomplete;
- the relativistic principle of the finiteness of the speed of light, implying that the moment information arrives, it is already obsolete to some extent;
- the principle of bounded rationality, stating that a decision-maker in a real-world situation will never have all information necessary for making an optimal decision;
- the principle of the partiality of self-reference, a generalization of Gödel’s incompleteness theorem, implying that a system cannot represent itself completely, and hence cannot have complete knowledge of how its own actions may feed back into the perturbations.

As a more general argument, one might note that models must be simpler than the phenomena they are supposed to model. Otherwise, variation and selection processes would take as much time in the model as in the real world, and no anticipation would be possible, precluding any control. Finally, models are constructed by blind variation processes, and, hence, cannot be expected to reach any form of complete representation of an infinitely complex environment.
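The closing argument, that a model useful for anticipation must be cheaper to run than the process it describes, can be sketched with a toy example. The biased random walk and its drift-only model below are my own illustration, not drawn from the quoted text:

```python
import random

def world_step(position: int) -> int:
    # The "real" process: a biased random walk, one unit of time per step.
    return position + (1 if random.random() < 0.7 else -1)

def run_world(steps: int) -> int:
    # Observing where the world ends up costs as much time as the world itself.
    pos = 0
    for _ in range(steps):
        pos = world_step(pos)
    return pos

def model_predict(steps: int) -> float:
    # A simpler model: discard individual steps, keep only the drift.
    # Expected displacement per step = (+1) * 0.7 + (-1) * 0.3 = 0.4
    return 0.4 * steps

# The model anticipates in one multiplication what the world
# only reveals after 10,000 steps (approximately, since it
# deliberately ignores the walk's fluctuations).
prediction = model_predict(10_000)  # 4000.0
```

The model is wrong in Box’s sense: it throws away the variance of the walk. But precisely because it discards detail, it can answer before the world does, which is what makes it useful for control.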