Occam's Razor
"one should not increase, beyond what is necessary, the number of entities required to explain anything"
Though the principle may seem rather trivial, it is essential for model building because of what is known as the "underdetermination of theories by data": for any given set of observations, there are always infinitely many possible models that explain those same data. This is because a model normally represents an infinite number of possible cases, of which the observed cases are only a finite subset. The non-observed cases are inferred by postulating general rules that cover both actual and potential observations.
For example, through two data points in a diagram you can always draw a straight line and induce that all further observations will lie on that line. However, you could also draw an infinite variety of more complicated curves that pass through those same two points, and these curves would fit the empirical data just as well. Only Occam's razor would, in this case, guide you in choosing the "straight" (i.e. linear) relation as the best candidate model. A similar argument can be made for n data points lying in any kind of distribution.
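As a minimal sketch of this underdetermination (in Python with NumPy; the particular data points and polynomial degrees are illustrative assumptions, not part of the original argument), one can pass polynomials of several different degrees exactly through the same two points:

```python
import numpy as np

# Two observed data points: both a straight line and arbitrarily
# complicated curves pass through them exactly.
x = np.array([1.0, 2.0])
y = np.array([3.0, 5.0])

for degree in (1, 3, 5, 7):
    # Build the polynomial design matrix for this degree.
    # Degrees above 1 leave the fit underdetermined, so least
    # squares returns the minimum-norm solution among the many
    # coefficient vectors that fit the data perfectly.
    A = np.vander(x, degree + 1)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = np.abs(np.polyval(coeffs, x) - y).max()
    print(f"degree {degree}: max error at the data points = {residual:.2e}")
```

Every degree fits the two observations equally well (up to rounding error), so the data alone cannot choose between the models; Occam's razor breaks the tie in favour of the linear one.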
Occam's razor is especially important for universal models such as those developed in General Systems Theory, mathematics or philosophy, because there the subject domain is of unlimited complexity. If one starts with too complicated foundations for a theory that potentially encompasses the universe, the chances of arriving at any manageable model are very slim indeed. Moreover, the principle is sometimes the only remaining guideline when entering domains of such a high level of abstraction that no concrete tests or observations can decide between rival models. In the mathematical modelling of systems, the principle can be made more concrete in the form of the principle of uncertainty maximization: from your data, induce that model which minimizes the number of additional assumptions.
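To make this concrete, here is a hedged sketch of uncertainty (entropy) maximization, using the classic loaded-die example (the example, the measured mean of 4.5, and the use of SciPy's SLSQP optimizer are assumptions of the sketch, not prescribed by the text): among all distributions that reproduce the measured mean, we select the one with maximum entropy, i.e. the one that adds no further assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Uncertainty (entropy) maximization: among all distributions that
# reproduce the measured data, pick the one making the fewest extra
# assumptions, i.e. the one with maximum entropy.
values = np.arange(1, 7)        # faces of a die (illustrative example)
measured_mean = 4.5             # the only thing the "data" tells us

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)  # avoid log(0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},            # normalization
    {"type": "eq", "fun": lambda p: p @ values - measured_mean}, # observed mean
]

result = minimize(
    neg_entropy,
    x0=np.full(6, 1 / 6),       # start from the uniform distribution
    bounds=[(0.0, 1.0)] * 6,
    constraints=constraints,
    method="SLSQP",
)
print(np.round(result.x, 4))    # skewed toward high faces, but no more than needed
```

The distribution that comes out is the exponential-family (Gibbs) solution over the faces; any other distribution with the same mean would encode structure that the data do not support.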
This principle is part of epistemology and can be motivated by the requirement of maximal simplicity for cognitive models. However, its significance might be extended to metaphysics if it is interpreted as saying that simpler models are more likely to be correct than complex ones, in other words, that "nature" prefers simplicity.