Slow-moving nanoparticles

Random track of a nanoparticle superimposed on its image generated in the microscope using a pin-hole and narrowband filter.

A couple of weeks ago I bragged about research from my group being included in a press release from the Royal Society [see post entitled ‘Press Release!‘ on November 15th, 2017].  I hate to be boring but it’s happened again.  Some research that we have been performing with the European Union’s Joint Research Centre in Ispra [see my post entitled ‘Toxic nanoparticles‘ on November 13th, 2013] has been published this morning in Royal Society Open Science.

Our experimental measurements of the free motion of small nanoparticles in a fluid have shown that they move slower than expected.  At low concentrations, unexpectedly large groups of molecules in the form of nanoparticles up to 150-300 nm in diameter behave more like an individual molecule than a particle.  Our experiments support predictions from computer simulations by other researchers, which suggest that at low concentrations the motion of small nanoparticles in a fluid might be dominated by van der Waals forces rather than the thermal motion of the surrounding molecules.  At the nanoscale there is still much that we do not understand, so these findings have potential implications for predicting nanoparticle transport, for instance in drug delivery [e.g., via the nasal passage to the central nervous system], and for understanding enhanced heat transfer in nanofluids, which is important in designing systems such as cooling for electronics, solar collectors and nuclear reactors.

Our article’s title is ‘Transition from fractional to classical Stokes-Einstein behaviour in simple fluids‘, which does not reveal much unless you are familiar with the behaviour of particles and molecules.  So, here’s a quick explanation: Robert Brown gave his name to the motion of particles suspended in a fluid after reporting the random motion, or diffusion, of pollen particles in water in 1828.  In 1906, Einstein postulated that the motion of a suspended particle is generated by the thermal motion of the surrounding fluid molecules, while Stokes’ law relates the drag force on the particle to its size and the fluid viscosity.  Hence, the Brownian motion of a particle can be described by the combined Stokes-Einstein relationship.  However, at the molecular scale, the motion of individual molecules in a fluid is dominated by van der Waals forces, which means that the size of the molecule is unimportant and the diffusion of the molecule is inversely proportional to a fractional power of the fluid viscosity; hence the term fractional Stokes-Einstein behaviour.  Nanoparticles that approach the size of large molecules are not visible in an optical microscope and so we have tracked them using a special technique based on imaging their shadow [see my post ‘Seeing the invisible‘ on October 29th, 2014].
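To make the distinction more concrete, here is a minimal sketch in Python of the two forms of the relationship; the temperature, viscosity, particle radius, prefactor and fractional exponent below are illustrative assumptions and are not values from our paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def classical_stokes_einstein(T, eta, r):
    """Classical Stokes-Einstein diffusion coefficient [m^2/s] for a
    sphere of radius r [m] in a fluid of viscosity eta [Pa s] at T [K]."""
    return K_B * T / (6 * math.pi * eta * r)

def fractional_stokes_einstein(T, eta, t, A=1.0e-15):
    """Fractional form: diffusion varies with an inverse *fractional*
    power t < 1 of viscosity and the particle size drops out; the
    prefactor A and exponent t here are purely illustrative."""
    return A * T / eta**t

# Illustrative case: a 100 nm diameter particle in water at room temperature
T, eta, r = 293.0, 1.0e-3, 50.0e-9
print(classical_stokes_einstein(T, eta, r))      # ~4.3e-12 m^2/s
print(fractional_stokes_einstein(T, eta, t=0.7))
```

In the classical form the diffusion coefficient falls in direct proportion to particle size and viscosity, whereas in the fractional form the size is unimportant and only a fractional power of the viscosity remains.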

Source:

Coglitore D, Edwardson SP, Macko P, Patterson EA, Whelan MP, Transition from fractional to classical Stokes-Einstein behaviour in simple fluids, Royal Society Open Science, 4:170507, 2017.


Why playing the piano might enhance our intelligence

By National Institutes of Health [Public domain], via Wikimedia Commons

Students and lecturers leave all sorts of things in lecture theatres, including lecture notes, pens and water bottles, that accumulate around the edges like flotsam on the beach because no one wants to throw away something for which the owner might return.  A few weeks ago, I found the front page of a letter published in Nature which roused my curiosity. Its title was ‘Verbal and non-verbal intelligence changes in the teenage brain’.  My memories of my teenage years are almost uniformly bad; in part because I was unable to reproduce the academic promise that I had shown when I was younger and the pressure to do so was unrelenting.  I suspect that my experience is not uncommon and the research described in this letter offers a potential explanation for my inability to ace examinations regardless of how hard I tried.

The conventional understanding of human intellectual capacity is that it is constant during our life.  However, the authors of this article have shown that the statistics upon which this understanding is based hide a variation in our teenage years: some teenagers experience a reduction and some an increase in intellectual capacity, which leaves the population’s average unchanged.

In addition, using structural and functional imaging, they were able to correlate changes in verbal IQ with changes in grey matter density in a region of the brain activated by speech (the left motor cortex), and changes in non-verbal IQ with changes in grey matter density in regions activated by finger movements (the anterior cerebellum).

The timeline of the reported research does not extend far enough to establish whether or not the changes seen in teenagers are temporary; however, my anecdotal evidence suggests that might be the case.  I would conclude that the effort used to apply psychological pressure on teenagers to ace examinations might be better expended on piano lessons and piano practice to enhance sensorimotor skills, which are strongly correlated with cognitive intelligence – but I suspect many parents have already worked that one out!

Source:

Ramsden S, Richardson FM, Josse G, Thomas MSC, Ellis C, Shakeshaft C, Seghier ML & Price CJ, Verbal and non-verbal intelligence changes in the teenage brain, Nature, 479:113-116, 2011.

Entropy on the brain

‘It was the worst of times, it was the worst of times.  Again.  That’s the thing about things.  They fall apart, always have, always will, it’s in their nature.’  These are the opening lines of Ali Smith’s novel ‘Autumn’.  Ali Smith doesn’t mention entropy but that’s what she is describing.

My first-year lecture course has progressed from the first law of thermodynamics to the second law; and so, I have been stretching the students’ brains by talking about entropy.  It’s a favourite topic of mine but many people find it difficult.  Entropy can be described as the level of disorder present in a system or the environment.  Ludwig Boltzmann derived his famous equation, S = k ln W, which can be found on his gravestone – he died in 1906.  S is entropy, k is a constant of proportionality named after Boltzmann, and W is the number of ways in which a system can be arranged without changing its energy content (ln means natural logarithm).  So, the more arrangements that are possible, the larger the entropy.
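As a quick numerical illustration of Boltzmann’s equation (the numbers of arrangements below are invented for the example, not taken from the lecture course):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant [J/K]

def boltzmann_entropy(W):
    """Entropy S = k ln W for a system with W possible arrangements."""
    return K_B * math.log(W)

# More possible arrangements means higher entropy; doubling W adds k ln 2.
print(boltzmann_entropy(1e6))                            # ~1.9e-22 J/K
print(boltzmann_entropy(2e6) - boltzmann_entropy(1e6))   # ~9.6e-24 J/K = k ln 2
```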

By now the neurons in your brain should be firing away nicely with a good level of synchronicity (see my posts entitled ‘Digital hive mind‘ on November 30th, 2016 and ‘Is the world comprehensible?‘ on March 15th, 2017).  In other words, groups of neurons should be showing electrical activity that is in phase with other groups to form large networks.  Some scientists believe that the size of the network is indicative of the level of your consciousness.  However, scientists in Toronto, led by Jose Luis Perez-Velazquez, have suggested that it is not the size of the network that is linked to consciousness but the number of ways that a particular degree of connectivity can be achieved.  This begins to sound like the entropy of your neurons.

In 1948 Claude Shannon, an American electrical engineer, stated that ‘information must be considered as a negative term in the entropy of the system; in short, information is negentropy‘. We can extend this idea to the concept that the entropy associated with information becomes lower as it is arranged, or ordered, into knowledge frameworks, e.g. laws and principles, that allow us to explain phenomena or behaviour.
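For a sense of how information can be treated quantitatively, here is a minimal sketch of Shannon’s entropy for a discrete distribution; the probabilities are invented purely for illustration:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p log2 p), in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution (maximum disorder) has the highest entropy;
# ordering the information into a near-certain outcome lowers it.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```

The more ordered the distribution, the lower its entropy – which is the sense in which arranging information into knowledge frameworks reduces its entropy.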

Perhaps these ideas about entropy of information and neurons are connected; because when you have mastered a knowledge framework for a topic, such as the laws of thermodynamics, you need to deploy a small number of neurons to understand new information associated with that topic.  However, when you are presented with unfamiliar situations then you need to fire multiple networks of neurons and try out millions of ways of connecting them, in order to understand the unfamiliar data being supplied by your senses.

For diverse posts on entropy see: ‘Entropy in poetry‘ on June 1st, 2016; ‘Entropy management for bees and flights‘ on November 5th, 2014; and ‘More on white dwarfs and existentialism‘ on November 16th, 2016.

Sources:

Ali Smith, Autumn, Penguin Books, 2017

Consciousness is tied to ‘entropy’, say researchers, Physics World, October 16th, 2016.

Handscombe RD & Patterson EA, The Entropy Vector: Connecting Science and Business, Singapore: World Scientific Publishing, 2004.

How many repeats do we need?

This is a question that both my undergraduate students and a group of taught post-graduates have struggled with this month.  In thermodynamics, my undergraduate students were estimating absolute zero in degrees Celsius using a simple manometer and a digital thermometer (this is an experiment from my MOOC: Energy – Thermodynamics in Everyday Life).  They needed to know how many times to repeat the experiment in order to determine whether their result was significantly different to the theoretical value: -273 degrees Celsius [see my post entitled ‘Arbitrary zero‘ on February 13th, 2013 and ‘Beyond zero‘ the following week].  Meanwhile, the post-graduate students were measuring the strain distribution in a metal plate with a central hole that was loaded in tension.  They needed to know how many times to repeat the experiment to obtain meaningful results that would allow a decision to be made about the validity of their computer simulation of the experiment [see my post entitled ‘Getting smarter‘ on June 21st, 2017].

The simple answer is that six repeats are needed if you want 98% confidence in the conclusion and you are happy to accept that the margin of error and the standard deviation of your sample are equal.  The latter implies that error bars of the mean plus and minus one standard deviation are also 98% confidence limits, which is often convenient.  Not surprisingly, only a few undergraduate students figured that out and repeated their experiment six times; the post-graduates pooled their data to give them a large enough sample size.

The justification for this answer lies in an equation that relates the number in a sample, n, to the margin of error, MOE, the standard deviation of the sample, σ, and the shape of the normal distribution described by the z-score or z-statistic, z*: n ≥ (z*σ/MOE)².  The margin of error, MOE, is the maximum expected difference between the true value of a parameter and the sample estimate of the parameter, which is usually the mean of the sample; while the standard deviation, σ, describes the difference between the data values in the sample and the mean value of the sample, μ.  If we don’t know one of these quantities then we can simplify the equation by assuming that they are equal, and then n ≥ (z*)².

The z-statistic is the number of standard deviations from the mean at which a data value lies, i.e., its distance from the mean in a Normal distribution, as shown in the graphic [for more on the Normal distribution, see my post entitled ‘Uncertainty about Bayesian methods‘ on June 7th, 2017].  We can specify its value so that the interval defined by its positive and negative values contains 98% of the distribution.  The values of z* for 90%, 95%, 98% and 99% are shown in the table in the graphic with the corresponding values of (z*)², which are equivalent to the minimum values of the sample size, n (the number of repeats).
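If you want to reproduce the values in that table yourself, here is a minimal sketch (assuming scipy is available; the confidence levels are simply those quoted above):

```python
from math import ceil
from scipy.stats import norm

for confidence in (0.90, 0.95, 0.98, 0.99):
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided z-statistic
    n = ceil(z**2)                           # minimum repeats when MOE = sigma
    print(f"{confidence:.0%}: z* = {z:.3f}, (z*)^2 = {z**2:.2f}, n >= {n}")

# For 98% confidence this gives z* ~ 2.33 and (z*)^2 ~ 5.4, i.e. n = 6 repeats.
```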

Confidence limits are defined as μ ± z*σ/√n, but when n = (z*)², this simplifies to μ ± σ.  So, with a sample size of six (n = 6 for 98% confidence) we can state with 98% confidence that there is no significant difference between our mean estimate and the theoretical value of absolute zero when that difference is less than the standard deviation of our six estimates.

BTW – the apparatus for the thermodynamics experiments costs less than £10.  The instruction sheet is available here – it is not quite an Everyday Engineering Example but the experiment is designed to be performed in your kitchen rather than a laboratory.

Press release!

A jumbo jet has about six million parts of which roughly half are fasteners – that’s a lot of holes.

It is very rare for one of my research papers to be included in a press release on its publication.  But that’s what has happened this month as a consequence of a paper being included in the latest series of press releases published by the Royal Society.  The contents of the paper are not earth-shattering in terms of their consequences for humanity; however, we have resolved a long-standing controversy about why cracks grow from small holes in structures [see post entitled ‘Alan Arnold Griffith‘ on April 26th, 2017] that are meant to be protected from such events by beneficial residual stresses around the hole.  This is important for aircraft structures since a civilian airliner can have millions of holes that contain rivets and bolts which hold the structure together.

We have used mechanical tests to assess fatigue life, thermoelastic stress analysis to measure stress distributions [see post entitled ‘Counting photons to measure stress‘ on November 18th, 2015], synchrotron x-ray diffraction to evaluate residual stress inside the metal and microscopy to examine failure surfaces [see post entitled ‘Forensic engineering‘ on July 22nd, 2015].  The data from this diverse set of experiments is integrated in the paper to provide a mechanistic explanation of how cracks exploit imperfections in the beneficial residual stress field introduced by the manufacturing process and can be aided in their growth by occasional but modest overloads, which might occur during a difficult landing or take-off.

The success of this research is particularly satisfying because at its heart is a PhD student supported by a dual PhD programme between the University of Liverpool and National Tsing Hua University in Taiwan.  This programme, which is supported by the two partner universities, is in its sixth year of operation with a steady state of about two dozen PhD students enrolled, who divide their time between Liverpool, England and Hsinchu, Taiwan.  The synchrotron diffraction measurements were performed, with a colleague from Sheffield Hallam University, at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, thus making this a truly international collaboration.

Source:

Amjad K, Asquith D, Patterson EA, Sebastian CM & Wang WC, The interaction of fatigue cracks with a residual stress field using thermoelastic stress analysis and synchrotron x-ray diffraction experiments, Royal Society Open Science, 4:171100, 2017.