In digital detox

I am on vacation so I am re-posting something I wrote around this time last year which I still think is relevant.

It’s official – half of us are addicted to our internet-connected devices and a third of us have attempted to kick the addiction. A recent study by the UK’s communications regulator, OFCOM, found that 59% of internet users considered themselves ‘hooked’, spending the equivalent of more than a day a week on-line. They also reported that one in three internet users have attempted a ‘digital detox’: a third said they felt more productive afterwards, slightly more than a quarter found it liberating, and another quarter said they enjoyed life more. So, switch off all of your devices, take a deep vacation, do some off-line reading (see my post entitled ‘Reading offline‘ on March 19th, 2014), slow down and breathe your own air (see my post entitled ‘Slow down, breathe your own air‘ on December 23rd, 2015). Now, you won’t find many blogs advising you to stop reading them!

Health warning: OFCOM also found that 16% of ‘digital detoxers’ experienced FOMO (Fear Of Missing Out), 15% felt lost and 14% felt ‘cut-off’.

Technology causes deflation

Technology enables us to do more in a given period of time. A classic example is the washing machine, which requires you to do little more than load your dirty clothes and switch it on, rather than laboriously wash, scrub and rinse each item repeatedly. It costs less time to do the same thing and so we experience time-deflation. It’s the same with money: if you can buy two hamburgers today for the price of one yesterday, then there has been some deflation. In these circumstances, it becomes less important to have a large income because the necessities of life have fallen in price, and so you could work less hard, start saving more (but for what?) or buy some of life’s luxuries. However, the analogy between time and money breaks down at this point, because you can’t reduce your supply of time or save it; you have to spend it. But advancing technology means nearly everything costs less time, and so it gets harder and harder to spend your allotted time. Many of us react by trying to do more, and more diverse, activities, often simultaneously, with the result that we over-compensate for time-deflation and become bankrupt, burnt-out wrecks.
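The analogy can be made concrete with a little arithmetic. Here is a toy sketch in Python, with invented numbers, showing that the same deflation calculation applies to the price of a hamburger and to the time cost of the laundry:

```python
# Toy illustration of the time-deflation analogy; all numbers are invented.

# Money deflation: two hamburgers today for yesterday's price of one
# means each hamburger now costs half as much.
old_price = 4.00            # assumed price of one hamburger yesterday
new_price = old_price / 2   # two for the old price of one
price_deflation = (old_price - new_price) / old_price
print(f"Price deflation: {price_deflation:.0%}")   # 50%

# Time deflation: the washing machine cuts the time cost of the same chore.
hand_wash_minutes = 90      # assumed time to wash a load by hand
machine_minutes = 5         # assumed time to load the machine and switch it on
time_deflation = (hand_wash_minutes - machine_minutes) / hand_wash_minutes
print(f"Time deflation: {time_deflation:.0%}")     # ~94%
```

The catch, as noted above, is that the 85 minutes released by the machine cannot be banked; they must be spent on something.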

We can cheat technology’s deflating effect by pursuing activities that involve no time-saving technology such as walking, reading, thinking and spending time with our loved ones.  In the last case, the clue is in the phraseology!

BTW – I will be on deep vacation by the time you read this post. Amongst other things, I will be curing my tsundoko by reading the books I bought in Camden Lock Books earlier in the summer (see my post entitled ‘Tsundoko‘ on May 24th, 2017).

Wanted: user experience designers

A few weeks ago, I listened to a brilliant talk by Professor Rick Miller, President of Olin College. He was speaking at a conference on ‘New Approaches to Higher Education’. He told us that the most common job description for recent Olin graduates was ‘user experience designer’ rather than a particular branch of engineering. Aren’t all engineers user experience designers? We design, manufacture and maintain structures, machines, goods and services for society. Whatever an engineer’s role in supplying society with the engineered environment around us, the ultimate deliverable is, in the modern vernacular, a user experience.

Rick Miller’s point was that society is changing faster than our education system.  He highlighted that the relevance of the knowledge economy had been destroyed by internet search engines.  There is no longer much advantage to be gained by having an enormous store of knowledge in your head, because much more is available on-demand via search engines, whose recall is faster than mine.  What matters is not what you know but what you can do with the knowledge.  And in the future, it will be all about what you can conceive or create with knowledge.  So, knowledge-intensive education should become a thing of the past and instead we need to focus on creative thinking and produce problem-solvers capable of dealing with complexity and uncertainty.

Feedback on feedback

Feedback on students’ assignments is a challenge for many in higher education. Students appear to be increasingly dissatisfied with it and academics are frustrated by its apparent ineffectiveness, especially when set against the effort required to provide it. In the UK, the National Student Survey results show that satisfaction with assessment and feedback is increasing, but it remains the lowest-ranked category in the survey [1]. My own recent experience has been of students’ insatiable hunger for feedback on a continuing professional development (CPD) programme, despite their receiving detailed written feedback and one-to-one oral discussion of their assignments.

So, what is going wrong?  I am aware that many of my academic colleagues in engineering do not invest much time in reading the education research literature; perhaps because, like the engineering research literature, much of it is written in a language that is readily appreciated only by those immersed in the subject.  So, here is an accessible digest of research on effective feedback that meets students’ expectations and realises the potential improvement in their performance.

It is widely accepted that feedback is an essential component of the learning cycle [2] and there is evidence that feedback is the single most powerful influence on student achievement [3, 4]. However, we often fail to realise this potential because our feedback is too generic or vague, not sufficiently timely [5], and transmission-focussed rather than student-centred or participatory [6]. In addition, our students tend not to be ‘assessment literate’, meaning they are unfamiliar with assessment and feedback approaches and do not interpret assessment expectations in the same way as their tutors [5, 7]. Student reaction to feedback is strongly related to emotional maturity, self-efficacy and motivation [1]; for a student with low self-esteem, negative feedback can be annihilating [8]. Emotional immaturity and assessment illiteracy, such as are typically found amongst first-year students, form a toxic mix that, in the absence of a supportive tutorial system, leads to student dissatisfaction with the feedback process [1].

So, how should we provide feedback? I provide copious detailed comments on students’ written work, following the example of my own university tutor, who I suspect was following the example of his tutor, and so on. I found these comments helpful but at times overwhelming. I also remember a college tutor who made what seemed to me devastatingly negative comments about my writing skills, which destroyed my confidence in my writing ability for decades. It was only restored by a Professor of English who recently complimented me on my writing; although I still harbour a suspicion that she was just being kind to me. So, neither of my tutors got it right, although one was clearly worse than the other. Students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded [8].

Students like clear, unambiguous, instructional and directional feedback [8]. Feedback should provide a statement of student performance and suggestions for improvement [9], i.e. identify the gap between actual and expected performance and give instructive advice on closing the gap. This implies that specific assessment criteria are required that explicitly define the expectation [2]. The literature [1, 2] identifies both positive and negative attributes of feedback. However, deploying the appropriate attributes does not guarantee that students will engage with feedback; sometimes students fail to recognise that feedback is being provided, for example in informal discussion and dialogic teaching; hence, it is important to identify the nature and purpose of feedback every time it is provided. We should reduce our over-emphasis on written feedback and make more use of oral feedback and one-to-one, or small-group, discussion. We need to take care that the receipt of grades or marks does not obscure the feedback, perhaps by delaying the release of marks. You could ask students what mark they would expect in the light of the feedback; and you could require students to show in future work how they have used the feedback – both of these actions are likely to improve the effectiveness of feedback [5].

In summary, feedback that is content-driven rather than process-driven is unlikely to engage students [10]. We need to strike a better balance between positive and negative comments, with a focus on appropriate guidance and motivation rather than on justifying marks and diagnosing shortcomings [2]. For most of us, this means learning a new way of providing feedback, which is difficult and potentially arduous; however, the likely rewards are more engaged, higher-achieving students who might appreciate their tutors more.

References

[1] Pitt E & Norton L, ‘Now that’s the feedback that I want!’ Students’ reactions to feedback on graded work and what they do with it. Assessment & Evaluation in HE, 42(4):499-516, 2017.

[2] Weaver MR, Do students value feedback? Student perceptions of tutors’ written responses.  Assessment & Evaluation in HE, 31(3):379-394, 2006.

[3] Hattie JA, Identifying the salient facets of a model of student learning: a synthesis of meta-analyses.  IJ Educational Research, 11(2):187-212, 1987.

[4] Black P & Wiliam D, Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1):7-74, 1998.

[5] O’Donovan B, Rust C & Price M, A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in HE, 41(6):938-949, 2016.

[6] Nicol D & Macfarlane-Dick D, Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in HE, 31(2):199-218, 2006.

[7] Price M, Rust C, O’Donovan B, Handley K & Bryant R, Assessment literacy: the foundation for improving student learning. Oxford: Oxford Centre for Staff and Learning Development, 2012.

[8] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations.  Assessment & Evaluation in HE, DOI: 10.1080/02602938.2017.1310801, 2017.

[9] Sadler DR, Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in HE, 35(5):535-550, 2010.

[10] Hounsell D, Essay writing and the quality of feedback. In J Richardson, M Eysenck & D Piper (eds), Student learning: research in education and cognitive psychology. Milton Keynes: Open University Press, 1987.

Getting smarter

A350 XWB passes Maximum Wing Bending test [from: http://www.airbus.com/galleries/photo-gallery]

Garbage in, garbage out (GIGO) is a perennial problem in computational simulations of engineering structures. If the description of the structure’s geometry, the material behaviour, the loading conditions or the boundary conditions is incorrect (garbage in), then the simulation generates predictions that are wrong (garbage out), or at least an unreliable representation of reality. It is not easy to describe precisely the geometry, material, loading and environment of a complex structure, such as an aircraft or a power station, because the complete description is either unavailable or too complicated. Hence, modellers make assumptions to fill in the unknown information and/or to simplify the description. This means the predictions from the simulation have to be tested against reality in order to establish confidence in them – a process known as model validation [see my post entitled ‘Model validation‘ on September 18th, 2012].
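To make the validation step concrete, here is a minimal sketch in Python of the comparison at its heart: line up predictions and measurements at the same locations and quantify the discrepancy. Everything here is invented for illustration (the strain values, the uncertainty and the simple acceptance test); real procedures, such as those in the bibliography below, are far more rigorous.

```python
import numpy as np

# Minimal, illustrative model-validation check; all numbers are invented.
predicted = np.array([102.0, 208.5, 310.2, 395.8])  # simulated strains (microstrain)
measured = np.array([98.0, 215.0, 301.0, 410.0])    # measured strains (microstrain)
uncertainty = 15.0                                   # assumed measurement uncertainty

# Quantify the discrepancy between simulation and experiment.
relative_error = np.abs(predicted - measured) / np.abs(measured)
print(f"Maximum relative error: {relative_error.max():.1%}")

# One naive acceptance criterion: every prediction lies within the
# measurement uncertainty of the corresponding measurement.
validated = bool(np.all(np.abs(predicted - measured) <= uncertainty))
print("Predictions within measurement uncertainty:", validated)
```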

It is good practice to design experiments specifically to generate data for model validation, but it is expensive, especially when your structure is a huge passenger aircraft. So, naturally, you would like to extract as much information from each experiment as possible and to perform as few experiments as possible, whilst ensuring predictions are reliable and providing confidence in them. In other words, you have to be very smart about designing and conducting the experiments, as well as about performing the validation process.
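One classical way to be smart about which measurements to take is optimal experiment design. The sketch below is a generic, textbook-style illustration in Python, and emphatically not the MOTIVATE methodology: for a linear model y = Xθ, it picks, by brute force, the subset of candidate measurements that maximises det(XᵀX), a so-called D-optimal design. All the numbers are invented.

```python
import itertools
import numpy as np

# Generic D-optimal design sketch (illustrative; not the MOTIVATE approach).
# Each row of `candidates` holds the sensitivities of one candidate
# measurement to the three unknown model parameters; values are invented.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(10, 3))  # 10 candidate measurements, 3 parameters

budget = 4  # suppose the test campaign can afford only four measurements
best_det, best_subset = -np.inf, None
for subset in itertools.combinations(range(len(candidates)), budget):
    X = candidates[list(subset)]
    d = np.linalg.det(X.T @ X)  # 'information' carried by this subset
    if d > best_det:
        best_det, best_subset = d, subset

print("Most informative measurement set:", best_subset)
```

Brute force is fine for ten candidates; for the thousands of candidate gauge locations on a real airframe, smarter search strategies are needed, which is part of what makes the problem interesting.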

Together with researchers at Empa in Zurich, the Industrial Systems Institute of the Athena Research Centre in Athens and Dantec Dynamics in Ulm, I am embarking on a new EU Horizon 2020 project to try to make us smarter about experiments and validation. The project, known as MOTIVATE [Matrix Optimization for Testing by Interaction of Virtual and Test Environments (Grant Nr. 754660)], is funded through the Clean Sky 2 Joint Undertaking, with Airbus acting as our topic manager to guide us towards an outcome that will be applicable in industry. We held our kick-off meeting in Liverpool last week, which is why it is uppermost in my mind at the moment. We have 36 months to get smarter on an industrial scale and demonstrate it in a full-scale test on an aircraft structure. So, some sleepless nights ahead…

Bibliography:

ASME V&V 10-2006, Guide for verification & validation in computational solid mechanics, American Society of Mechanical Engineers, New York, 2006.

European Committee for Standardisation (CEN), Validation of computational solid mechanics models, CEN Workshop Agreement, CWA 16799:2014 E.

Hack E & Lampeas G (Guest Editors) & Patterson EA (Editor), Special issue on advances in validation of computational mechanics models, J. Strain Analysis, 51 (1), 2016.

http://www.engineeringvalidation.org/