Monthly Archives: April 2016

Opal offers validation opportunity for climate models

Many of us will be familiar with the concept of the carbon cycle, but what about the silicon cycle?  Silicon is the second most abundant element in the Earth’s crust.  As a consequence of erosion, it is carried by rivers into the sea where organisms, such as sponges and diatoms (photosynthetic algae), convert the silicon in seawater into opal that ends up in ocean sediment when these organisms die.  This marine silicon cycle can be incorporated into climate models, since each step is influenced by climatic conditions, and the opal sediment distribution from deep sea sediment cores can be used for model validation.

This approach can provide additional confidence in climate models, which are notoriously difficult to validate, and was described by Katharine Hendry, a Royal Society University Research Fellow at the University of Bristol, at a recent conference at the Royal Society.  It struck me as an out-of-the-box, or lateral, way of seeking to increase confidence in climate models.

There are many examples in engineering where we tend to shy away from comprehensive validation of computational models because the acquisition of measured data seems too difficult or too expensive.  We should take inspiration from the sponges and look for data that is not necessarily the objective of the modelling but that nevertheless characterises the model’s behaviour.

Source:

Thumbnail: http://www.aquariumcreationsonline.net/sponge.html


Credibility is in the eye of the beholder

Last month I described how computational models are used as more than fables in many areas of applied science, including engineering and precision medicine [‘Models as fables’ on March 16th, 2016].  When people need to make decisions with socioeconomic and/or personal costs based on the predictions from these models, then the models need to be credible.  Credibility, like beauty, is in the eye of the beholder.  It is a challenging problem to convince decision-makers, who are often not expert in the technology or the modelling techniques, that the predictions are reliable and accurate.  After all, a model that is reliable and accurate but in which decision-makers have no confidence is almost useless.

In my research we are interested in the credibility of computational mechanics models that are used to optimise the design of load-bearing structures, whether it is the frame of a building, the wing of an aircraft or a hip prosthesis.  We have techniques that allow us to characterise maps of strain using feature vectors [see my post entitled ‘Recognising strain‘ on October 28th, 2015] and then to compare the ‘distances’ between the vectors representing the predictions and the measurements.  If the predicted map of strain is a perfect representation of the map measured in a physical prototype, then this ‘distance’ will be zero.  Of course, this never happens, because there is noise in the measured data and our models are never perfect: they contain simplifying assumptions that make the modelling viable.

The difficult question is how much difference is acceptable between the predictions and the measurements.  The public expect certainty with respect to the performance of an engineering structure, whereas engineers know that there is always some uncertainty.  We can reduce it, but that costs money: money for more sophisticated models, for more computational resources to execute the models, and for more and better quality measurements.
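To make the idea of comparing feature vectors concrete, here is a minimal sketch in Python rather than our actual implementation: it assumes each strain map is reduced to a short feature vector of low-order two-dimensional discrete-cosine-transform coefficients (an illustrative choice of decomposition, not the one used in our research), and that prediction and measurement are compared via the Euclidean distance between those vectors.  The function names and the order parameter are hypothetical.

```python
# Sketch only: feature vectors from low-order 2-D DCT coefficients of a strain
# map, and a Euclidean 'distance' between predicted and measured maps.
import numpy as np
from scipy.fft import dctn


def feature_vector(strain_map, order=5):
    """Return the low-order 2-D DCT coefficients of a strain map as a vector."""
    coeffs = dctn(strain_map, norm="ortho")   # 2-D discrete cosine transform
    return coeffs[:order, :order].ravel()     # keep only the low-order terms


def map_distance(predicted, measured, order=5):
    """Euclidean distance between the feature vectors of two strain maps."""
    fp = feature_vector(predicted, order)
    fm = feature_vector(measured, order)
    return np.linalg.norm(fp - fm)


# Example: a perfect prediction gives zero distance; measurement noise does not.
truth = np.fromfunction(lambda i, j: np.sin(i / 10) * np.cos(j / 10), (64, 64))
noisy = truth + np.random.default_rng(0).normal(scale=0.01, size=truth.shape)
print(map_distance(truth, truth))   # 0.0
print(map_distance(truth, noisy))   # small, but never exactly zero
```

In practice the choice of decomposition, the number of retained coefficients and the distance metric would all be matched to the strain fields and the measurement technique; the point of the sketch is simply that a perfect prediction yields a zero distance, while noise and simplifying assumptions guarantee the distance is never exactly zero.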