In his book, ‘Economics Rules: Why Economics Works, When It Fails, and How to Tell the Difference’, Dani Rodrik describes models as fables – short stories that revolve around a few principal characters who live in an unnamed, generic place, and whose behaviour and interactions produce an outcome that serves as a lesson of sorts. This seems to me a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters.

However, in engineering, and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems. Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research. This work originated in aerospace engineering, where it is possible, though expensive, to acquire comprehensive, information-rich data from experiments and then to validate models by comparing their predictions to measurements. We have since progressed to nuclear power engineering, where the extreme conditions and time-scales lead to sparse or incomplete data, making it more challenging to assess the reliability of computational models. Now, we are just starting to consider models in computational biology, where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.