Wishful Thinking Is Not A Strategy And Other Clichés
Wouldn’t it be great for those who employ professional engineers if we could be replaced with something cheaper? Wouldn’t it be great if the only engineer you needed was a cheap green graduate driving some simulation or modelling software, or an LLM like ChatGPT?
Of course it would, but anyone who thinks that a green graduate plus a software license is equal to or better than an experienced engineer knows nothing about the nature of engineering. They are elevating “wouldn’t it be great if” into a business strategy, but it’s really more wishful thinking (a logical fallacy known to philosophers as the “ought-is” fallacy) than a valid strategy.
Engineering simulation and modelling software has maths, physics, chemistry and engineering science directly programmed into its code. In the hands of an expert engineer, it can be a useful tool. In the hands of a green graduate, it is often worse than useless, as they lack the experience to sense-check its outputs, and they in any case tend to trust these outputs far too much.
LLM stands for large language model. ChatGPT and its competitors are sophisticated chatbots. They process language. They give the appearance of “knowing” about other things by virtue of having assimilated and plagiarised things written by humans.
It’s a bit like an actor in a film impersonating a technical expert convincingly, despite having no idea what the words they use mean. To them, the physics of Star Trek and those of the Manhattan Project are equally plausible, and equally meaningless.
Even if these programmes worked perfectly, there is more to engineering than maths, physics, chemistry and engineering science, and there is definitely more to engineering than chatting, or writing good prose.
In fact, many excellent engineers are not great at chatting, and don’t write particularly well. Nor are they usually paid to write things which are published in the public domain, for software developers to then plagiarise.
Professional engineering outputs are more usually sets of calculations (which are not published), drawings (again, almost never in the public domain), and technical reports (usually kept confidential). None of these are available to train LLMs, or for modelling and simulation code monkeys to build into their models.
Professional engineering institutions such as the IChemE and Professional Engineers Ontario have published explicit guidance on the use of engineering software which requires due diligence to be applied to software outputs. If this is done properly, it may well take as long as carrying out the design without the software in the first place.
I am not aware of any professional engineering institution having relaxed this requirement, and I hope I never see it happen, as it is a direct consequence of a fundamental principle of competence. Professional engineers are responsible for the quality and correctness of the work we do. Our work is consequential. Getting it wrong can cost lives.
Anyone who understands engineering, the duties of engineers, and the nature of engineering software would treat these products with the greatest of care, yet I am seeing LLMs being rapidly integrated into engineering practice.
This is especially true in software design. I am told by software developers that using ChatGPT as a trusted source of correct code has already become standard practice. That it is far faster than having your developers write every line themselves is undeniable.
However, even the best code ever written contains errors, at a rate of around 1%. ChatGPT plagiarises its code suggestions from code in its training data, applying rules which no one understands to fit them to what you are asking of it.
Tony Hoare, a famous software expert, once said: “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”
Code generated by ChatGPT clearly falls into the second category, but Tony Hoare’s words seem to have been forgotten, perhaps because it would be great for managers if he were wrong. However, his statement is not just true of software design, it is true of all design – something I first discussed nearly ten years ago in my first book, “An Applied Guide To Process And Plant Design”.
Models have a place in engineering, but we must know their place. “All models are wrong, but some are useful”, as George Box said (paraphrasing other, earlier similar insights). If you don’t understand the limitations of your model, you really shouldn’t be using it.
Design is the proper province of professional engineers, not model jockeys. Modelling and simulation programs can tell you how a team of scientists with zero plant design experience might try to design a plant, and LLMs can tell you how someone impersonating an engineer would do it, but following either of these approaches blindly is not going to go well.