‘Engineers are slow, error-prone, biased, limited in experience and conditioned by education; and so we want to automate to increase reliability.’ This is my paraphrasing of Professor Kristina Shea speaking at a workshop in Munich last year. At first glance it appears insulting to my profession, but actually it just classifies us with the rest of the human race. Everybody has these attributes, at least when compared to computers. And they are major impediments to engineers trying to design and manufacture systems with the high reliability and low cost expected by the general public.
Professor Shea is Head of the Engineering Design and Computing Laboratory at ETH Zurich. Her research focuses on developing computational tools that enable the design of complex engineered systems and products. An underlying theme of her work, which she was talking about at the workshop, is automating design and fabrication processes to eliminate the limitations caused by engineers.
Actually, I quite like these limitations, and perhaps they are essential because they represent the entropy or chaos that the second law of thermodynamics tells us must be created in every process. Many people have expressed concern about the development of Artificial Intelligence (AI) capable of designing machines smarter than humans, which would quickly design even smarter machines that we could neither understand nor control. Chaos would follow, possibly with apocalyptic consequences for human society. To quote the British mathematician I.J. Good (1916-2009): “There would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.” Stephen Cave, in his essay ‘Rise of the machines’ in the Financial Times on 21/22 March 2015, citing James Barrat, suggested that “artificial intelligence could become super-intelligence in a matter of days, as it fixes its own bugs, rewriting its software and drawing on the wealth of information now available online”.
The decisions that we make are influenced, or even constrained, by a set of core values, unstated assumptions and what we call common sense, all of which are very difficult to express in prose, never mind computer code. So it seems likely that an ultra-intelligent machine would lack some or all of these boundary conditions, with consequences captured by Paul R. Ehrlich’s observation: ‘To err is human, but to really foul things up you need a computer.’
Hence, I would like to think that there is still room for engineers to provide the creativity. Perhaps Professor Shea is simply proposing a more sophisticated version of the out-of-skull thinking that I wrote about in my post on March 18th, 2015.
Follow the link to Kristina Shea’s slides from the International Workshop on Validation of Computational Mechanics Models.
Stephen Cave, ‘Rise of the machines’, essay in the Financial Times, 21/22 March 2015.
James Barrat, ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’, St. Martin’s Griffin, 2015.