Algorithmic Complexity theory:
Moreover, a similar situation occurs in general-to-specific ILP systems: because the inference rules are deductive, they specialize, so at some stage a hypothesis may become so specialized that it fails to explain all the positive examples. In this case, a similar pruning operation can be imposed, because further specialization will not rectify the situation. Remember that, in practice, more flexibility is built into the systems to compensate for noisy data. In particular, the posterior conditions that specify the problem can be relaxed, and hypotheses that explain small numbers of negative examples may not be immediately pruned.
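A minimal sketch of this pruning test is given below, assuming a hypothetical covers(hypothesis, example) check and a small noise tolerance; both names are illustrative and not taken from any particular ILP system.

```python
# Illustrative pruning test for a general-to-specific ILP search.
# covers(h, e) is an assumed (hypothetical) test for whether hypothesis h explains example e.

def should_prune(hypothesis, positives, covers, max_missed_positives=0):
    """Prune a hypothesis that has become too specialized.

    Once a hypothesis misses more positive examples than we are willing to
    tolerate (max_missed_positives allows some slack for noisy data),
    further specialization can only lose more positives, so the search
    branch can be cut.
    """
    missed = sum(1 for e in positives if not covers(hypothesis, e))
    return missed > max_missed_positives
```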
We can also see how the examples could be used to choose between two non-pruned hypotheses: if performing a specific-to-general search, the number of positive examples explained by a hypothesis can be taken as a value to sort the hypotheses with, more positive examples explained being better. Correspondingly, if performing a general-to-specific search, the number of negative examples still explained by a hypothesis can be taken as a value to sort the hypotheses with, fewer negatives being better.
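These two sorting criteria can be written down directly. The sketch below is only illustrative and assumes the same hypothetical covers test as above.

```python
def score_specific_to_general(hypothesis, positives, covers):
    """Higher is better: the number of positive examples the hypothesis explains."""
    return sum(1 for e in positives if covers(hypothesis, e))

def score_general_to_specific(hypothesis, negatives, covers):
    """Higher is better: the negated count of negative examples still explained,
    so that hypotheses covering fewer negatives sort first."""
    return -sum(1 for e in negatives if covers(hypothesis, e))

# Example: ranking candidates in a specific-to-general search.
# ranked = sorted(candidates,
#                 key=lambda h: score_specific_to_general(h, positives, covers),
#                 reverse=True)
```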
This may, however, be a very crude measure: many hypotheses might score the same, especially if there is only a small number of examples. When all else is equal, an ILP system may employ a sophisticated version of Occam's razor and choose between two equal-scoring hypotheses according to some function derived from Algorithmic Complexity theory or some similar theory.
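As a hedged illustration of such a tie-break, the sketch below uses plain length as a crude stand-in for a complexity-derived preference for simpler hypotheses; an actual system would use a more principled measure.

```python
def choose_simplest(equal_scoring_hypotheses, complexity=len):
    """Break ties between equally scoring hypotheses with an Occam's razor:
    prefer the hypothesis with the smallest complexity value.

    complexity is a stand-in for a function derived from Algorithmic
    Complexity theory; here it defaults to plain length, e.g. the number
    of literals if a hypothesis is represented as a list of literals.
    """
    return min(equal_scoring_hypotheses, key=complexity)
```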