
The red crane

The red crane by Weimen He

One of my favourite institutions to visit is the Bodleian’s Weston Library in Oxford.  I have written before about their rotating exhibition in the Treasury of unique books and manuscripts from their collection [see my post entitled ‘Pope and Austen‘ on September 9th, 2015].  A recent visit did not disappoint and included one of John Le Carré’s manuscripts showing his extensive editing as well as early texts written on birch bark.  However, it was in the shop that something really caught my eye: the fusion of art and engineering in a postcard depicting a painting called ‘The red crane’ by the artist in residence, Dr Weimen He, capturing moments in time during the refurbishment of the library.  This level of fusion is rare in my experience and perhaps the ethos that created it is one of the reasons the Weston Library is such a pleasure to visit.

The Roman architect, Vitruvius identified the three principles of good structural design to be ‘firmitas, utilitas, venustas’ or durability, utility and beauty.  Too often utility, including value for money, trumps beauty and shortens horizons for durability; so that little is contributed to our culture and nothing worthwhile will be left for future generations.

BTW there is a very large bookshop next door to the Weston Library and I couldn’t resist buying ‘The Story of a Brief Marriage‘ by Anuk Arudpragasam.  It’s a beautiful novel of consciousness about love and war.

Wanted: user experience designers

A few weeks ago, I listened to a brilliant talk by Professor Rick Miller, President of Olin College.  He was talking at a conference on ‘New Approaches to Higher Education’.  He told us that the most common job description for recent Olin graduates was ‘user experience designer’ rather than a particular branch of engineering.  Aren’t all engineers user experience designers?  We design, manufacture and maintain structures, machines, goods and services for society.  Whatever an engineer’s role in supplying society with the engineered environment around us, the ultimate deliverable is a user experience in the modern vernacular.

Rick Miller’s point was that society is changing faster than our education system.  He highlighted that the relevance of the knowledge economy had been destroyed by internet search engines.  There is no longer much advantage to be gained by having an enormous store of knowledge in your head, because much more is available on-demand via search engines, whose recall is faster than mine.  What matters is not what you know but what you can do with the knowledge.  And in the future, it will be all about what you can conceive or create with knowledge.  So, knowledge-intensive education should become a thing of the past and instead we need to focus on creative thinking and produce problem-solvers capable of dealing with complexity and uncertainty.

Feedback on feedback

Feedback on students’ assignments is a challenge for many in higher education.  Students appear to be increasingly dissatisfied with it and academics are frustrated by its apparent ineffectiveness, especially when set against the effort required for its provision.  In the UK, the National Student Survey results show that satisfaction with assessment and feedback is increasing but it remains the lowest ranked category in the survey [1].  My own recent experience has been of the students’ insatiable hunger for feedback on a continuing professional development (CPD) programme, despite receiving detailed written feedback and one-to-one oral discussion of their assignments.

So, what is going wrong?  I am aware that many of my academic colleagues in engineering do not invest much time in reading the education research literature; perhaps because, like the engineering research literature, much of it is written in a language that is readily appreciated only by those immersed in the subject.  So, here is an accessible digest of research on effective feedback that meets students’ expectations and realises the potential improvement in their performance.

It is widely accepted that feedback is an essential component [2] in the learning cycle and there is evidence that feedback is the single most powerful influence on student achievement [3, 4].  However, we often fail to realise this potential because our feedback is too generic or vague, not sufficiently timely [5], and transmission-focussed rather than student-centred or participatory [6].  In addition, our students tend not to be ‘assessment literate’, meaning they are unfamiliar with assessment and feedback approaches and they do not interpret assessment expectations in the same way as their tutors [5, 7].  Student reaction to feedback is strongly related to their emotional maturity, self-efficacy and motivation [1]; so that for a student with low self-esteem, negative feedback can be annihilating [8].  Emotional immaturity and assessment illiteracy, such as is typically found amongst first-year students, is a toxic mix that, in the absence of a supportive tutorial system, leads to student dissatisfaction with the feedback process [1].

So, how should we provide feedback?  I provide copious detailed comments on students’ written work following the example of my own university tutor, who I suspect was following the example of his tutor, and so on.  I found these comments helpful but at times overwhelming.  I also remember a college tutor who made, what seemed to me, devastatingly negative comments about my writing skills, which destroyed my confidence in my writing ability for decades.  It was only restored by a Professor of English who recently complimented me on my writing; although I still harbour a suspicion that she was just being kind to me.  So, neither of my tutors got it right; although one was clearly worse than the other.  Students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded [8].

Students like clear, unambiguous, instructional and directional feedback [8].  Feedback should provide a statement of student performance and suggestions for improvement [9], i.e. identify the gap between actual and expected performance and provide instructive advice on closing the gap.  This implies that specific assessment criteria are required that explicitly define the expectation [2].  The table below lists some of the positive and negative attributes of feedback based on the literature [1, 2].  However, deploying the appropriate attributes does not guarantee that students will engage with feedback; sometimes students fail to recognise that feedback is being provided, for example in informal discussion and dialogic teaching; hence, it is important to identify the nature and purpose of feedback every time it is provided.  We should reduce our over-emphasis on written feedback and make more use of oral feedback and one-to-one, or small group, discussion.  We need to take care that the receipt of grades or marks does not obscure the feedback, perhaps by delaying the release of marks.  You could ask students about the mark they would expect in the light of the feedback; and you could require students to show in future work how they have used the feedback – both of these actions are likely to improve the effectiveness of feedback [5].

In summary, feedback that is content rather than process-driven is unlikely to engage students [10].  We need to strike a better balance between positive and negative comments, which includes a focus on appropriate guidance and motivation rather than justifying marks and diagnosing short-comings [2].  For most of us, this means learning a new way of providing feedback, which is difficult and potentially arduous; however, the likely rewards are more engaged, higher achieving students who might appreciate their tutors more.

References

[1] Pitt E & Norton L, ‘Now that’s the feedback that I want!’ Students’ reactions to feedback on graded work and what they do with it. Assessment & Evaluation in HE, 42(4):499-516, 2017.

[2] Weaver MR, Do students value feedback? Student perceptions of tutors’ written responses.  Assessment & Evaluation in HE, 31(3):379-394, 2006.

[3] Hattie JA, Identifying the salient facets of a model of student learning: a synthesis of meta-analyses.  IJ Educational Research, 11(2):187-212, 1987.

[4] Black P & Wiliam D, Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1):7-74, 1998.

[5] O’Donovan B, Rust C & Price M, A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in HE, 41(6):938-949, 2016.

[6] Nicol D & MacFarlane-Dick D, Formative assessment and self-regulatory learning: a model and seven principles of good feedback practice. Studies in HE, 31(2):199-218, 2006.

[7] Price M, Rust C, O’Donovan B, Handley K & Bryant R, Assessment literacy: the foundation for improving student learning. Oxford: Oxford Centre for Staff and Learning Development, 2012.

[8] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations.  Assessment & Evaluation in HE, DOI: 10.1080/02602938.2017.1310801, 2017.

[9] Sadler DR, Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in HE, 35(5):535-550, 2010.

[10] Hounsell D, Essay writing and the quality of feedback. In J Richardson, M. Eysenck & D. Piper (eds) Student learning: research in education and cognitive psychology. Milton Keynes: Open University Press, 1987.

Getting smarter

A350 XWB passes Maximum Wing Bending test [from: http://www.airbus.com/galleries/photo-gallery]

Garbage in, garbage out (GIGO) is a perennial problem in computational simulations of engineering structures.  If the description of the geometry of the structure, the material behaviour, the loading conditions or the boundary conditions is incorrect (garbage in), then the simulation generates predictions that are wrong (garbage out), or at least an unreliable representation of reality.  It is not easy to describe precisely the geometry, material, loading and environment of a complex structure, such as an aircraft or a power station, because the complete description is either unavailable or too complicated.  Hence, modellers make assumptions to fill in the unknown information or to simplify the description.  This means the predictions from the simulation have to be tested against reality in order to establish confidence in them – a process known as model validation [see my post entitled ‘Model validation‘ on September 18th, 2012].
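The essence of that comparison can be sketched in a few lines of code: take the model’s predictions and the corresponding measurements, quantify the discrepancy, and accept the model only if the discrepancy lies within an agreed tolerance.  This is a deliberately minimal illustration with invented numbers; the formal procedures in standards such as ASME V&V 10 are far more sophisticated, and the function names and 5% tolerance here are my own assumptions.

```python
# Minimal sketch of a model-validation check: compare predicted and
# measured deflections at a set of gauge points and accept the model
# only if every discrepancy is within an agreed tolerance.
# All numbers are illustrative, not taken from any real test.

def relative_errors(predicted, measured):
    """Point-wise relative error of prediction against measurement."""
    return [abs(p - m) / abs(m) for p, m in zip(predicted, measured)]

def validate(predicted, measured, tolerance=0.05):
    """Return (accepted, worst error) for a simple pass/fail check."""
    errors = relative_errors(predicted, measured)
    return all(e <= tolerance for e in errors), max(errors)

# Illustrative data: deflections (mm) at four gauge locations.
simulation = [1.02, 2.11, 3.05, 4.15]   # model predictions
experiment = [1.00, 2.05, 3.10, 4.00]   # measurements

accepted, worst = validate(simulation, experiment)
print(f"model accepted: {accepted}, worst relative error: {worst:.3f}")
```

In practice the hard part is not the comparison itself but deciding what to measure, where, and what level of agreement constitutes an acceptable model – which is exactly where well-designed validation experiments earn their keep.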

It is good practice to design experiments specifically to generate data for model validation but it is expensive, especially when your structure is a huge passenger aircraft.  So naturally, you would like to extract as much information from each experiment as possible and to perform as few experiments as possible, whilst both ensuring predictions are reliable and providing confidence in them.  In other words, you have to be very smart about designing and conducting the experiments as well as performing the validation process.

Together with researchers at Empa in Zurich, the Industrial Systems Institute of the Athena Research Centre in Athens and Dantec Dynamics in Ulm, I am embarking on a new EU Horizon 2020 project to try and make us smarter about experiments and validation.  The project, known as MOTIVATE [Matrix Optimization for Testing by Interaction of Virtual and Test Environments (Grant Nr. 754660)], is funded through the Clean Sky 2 Joint Undertaking with Airbus acting as our topic manager to guide us towards an outcome that will be applicable in industry.  We held our kick-off meeting in Liverpool last week, which is why it is uppermost in my mind at the moment.  We have 36 months to get smarter on an industrial scale and demonstrate it in a full-scale test on an aircraft structure.  So, some sleepless nights ahead…

Bibliography:

ASME V&V 10-2006, Guide for verification & validation in computational solid mechanics, American Society of Mech. Engineers, New York, 2006.

European Committee for Standardisation (CEN), Validation of computational solid mechanics models, CEN Workshop Agreement, CWA 16799:2014 E.

Hack E & Lampeas G (Guest Editors) & Patterson EA (Editor), Special issue on advances in validation of computational mechanics models, J. Strain Analysis, 51 (1), 2016.

http://www.engineeringvalidation.org/

Listening with your eyes shut

I am in the London Underground onboard a train on my way to a conference on ‘New Approaches to Higher Education’ organised by the Institution of Engineering and Technology and the Engineering Professors’ Council.  The lady opposite has her eyes closed but she is not asleep because she opens them periodically as we come into stations to check whether it’s her stop.  I wonder if she is trying to reproduce John Hull’s experience of the depth of sounds as a blind person [see my post entitled ‘Rain brings out the contours in everything‘ on February 22, 2017].  For the second time in recent weeks, I close my eyes and try it for myself.  It is surprising how, in a crowded train, I can’t hear anyone, just the noise made by the train.  It’s like a wobble board that’s joined by a whole percussion section of an orchestra when we go around a bend or over points.  The first time I closed my eyes was at a concert at the Philharmonic Hall in Liverpool.  My view of the orchestra was obstructed by the person in front of me so, rather than stare at the back of their head, I closed my eyes and allowed the music to dominate my mind.  Switching off the stream of images seemed to release more of my brain cells to register the depth and richness of Bach’s Harpsichord Concerto No. 5.  I was classified as tone deaf at school when I was kicked out of the choir and I learned no musical instruments, so the additional texture and dimensionality in the music were a revelation to me.

Back to the London Underground – many of my fellow passengers were plugged into their phones or tablets via their ears and eyes.  I wondered if any were following the MOOC on Understanding Super Structures that we launched recently.  Unlikely I know, but it’s a bit different, because it is mainly audio clips and not videos.  We’re trying to tap into some of the time many people spend with earbuds plugged into their ears, and also to make the MOOC more accessible in countries where internet access is mainly via mobile phones.  My recent experiences of listening with my eyes closed make me realise that perhaps we should ask people to close their eyes when listening to our audio clips so that they can fully appreciate them.  If they are sitting on the train then that’s fine, but it’s not recommended if you are walking across campus or in town!