Artificial Intelligence – Is it another ‘you’ve got to be in it to win it’?
Artificial Intelligence (AI) and Machine Learning (ML) have been buzzwords for quite some time now. I recollect hearing about Artificial Intelligence for the first time in 2016, but I was unaware of its current impact and its potential for future transformation until I attended the CogX conference in the summer of 2018. Listening to the various talks, I started to see that AI is not just another field of study, but a coming revolution for many industries.
In a recent BBC interview, Sundar Pichai, the avuncular leader of Google, one of the most popular, complex and rich tech companies in the world, argued that AI is more profound than fire, electricity or the internet. In his view, it is the most influential technology that humanity will ever develop and work on. Moreover, business leaders and investors widely agree that AI and ML will transform their businesses by reducing costs, managing risks, streamlining operations, accelerating growth, and fuelling innovation.
Little wonder, then, that AI technologies attracted around $37bn (£26.7bn) of investment worldwide in 2019. One such example of investment in the UK is The Alan Turing Institute, the national institute for data science, created in 2015 with an investment of just £42 million. These investments are driven by an expectation that AI could add £630bn to the UK economy by 2035.
So what is AI? It is a technology that leverages computers to mimic the problem-solving and decision-making capabilities of the human brain. Here, I use AI and ML interchangeably, but they are not quite the same: ML is an application of AI, based on the idea that if we feed machines data, or let them access it, they can learn for themselves.
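To make the idea of machines learning from data concrete, here is a minimal sketch, assuming a Python environment with the scikit-learn library installed; the data and variable names are invented purely for illustration and are not taken from any real system.

```python
# A minimal sketch of "learning from data": instead of hand-coding a rule,
# we show the machine examples and let it infer the pattern for itself.
# The numbers below are invented purely for illustration.
from sklearn.linear_model import LinearRegression

# Example data: hours of machine operation vs. measured energy use (kWh)
hours_operated = [[1], [2], [3], [4], [5]]
energy_used = [2.1, 4.0, 6.2, 7.9, 10.1]

model = LinearRegression()
model.fit(hours_operated, energy_used)   # the "learning" step

# The model has inferred the roughly 2 kWh-per-hour trend on its own
print(model.predict([[6]]))              # predicted energy use for 6 hours
```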
The basis of this idea is the question, “Can machines think?”. This was exactly the question the British computer scientist Alan Turing was grappling with in 1950, when the first general-purpose computers had just been built. A few years later, John McCarthy coined the term ‘Artificial Intelligence’, and the idea was cemented in 1959 when Arthur Samuel showed that it is possible to teach computers to learn for themselves.
Despite these early origins, however, the buzz is fairly recent. It has been driven by the ever-increasing reach of the internet and growth in computing power, which have made it possible to accumulate and store colossal amounts of data for analysis. With so much data available, it became clear that it is far more efficient to teach machines to think like humans than to tell them how to do everything.
As many of us now know, modern AI techniques power search engines, voice assistants, the sentence suggestions offered while writing this document, facial recognition, and the recommendations for products and binge-worthy series on Netflix and Amazon; the list goes on and on. The success of the technology in these cases has placed huge expectations on AI to transform sectors such as supply chains, healthcare, automotive, energy, manufacturing and the process industries.
Some examples from the manufacturing and process industries are worth mentioning here to stir some excitement. For instance, Nokia has introduced a video application that uses ML to alert an assembly operator to inconsistencies in the production process. Likewise, Schneider Electric scientists use data from oil fields to build models that predict when and where maintenance is needed.
Similarly, Siemens uses ML for predictive analytics, detecting anomalies during operation; valves, pumps and heat exchangers are also monitored for process optimisation.
Finally, General Motors adopted ML to optimise a product against cost and design constraints in order to succeed with additive manufacturing techniques. The resulting seatbelt bracket prototype was tested and found to be 40% lighter and 20% stronger than the original design.
The project that I am currently involved in applies AI to analyse and optimise fluid flow patterns in applications ranging from energy and manufacturing systems to the human body. Once trained, these smart algorithms can deliver efficient solutions that reduce emissions and improve the accuracy and speed of patient diagnostics. A case in point is optimising the mixing in a chemical reactor.
Here, ML methods can be applied to learn the relationship between the reactor's operating conditions and geometry on the one hand and its mixing response on the other. Once that relationship is learned, the conditions that give the best mixing can be identified, enhancing the efficiency of the reactor design.
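As a rough illustration of this idea (a sketch only, not the project's actual code), one could train a surrogate model on example reactor data and then search its predictions for the most promising operating point. Everything below, from the operating variables to the data-generating formula, is a hypothetical stand-in for real simulations or experiments.

```python
# Hypothetical sketch: learn the mapping from reactor operating conditions
# to a mixing-quality score, then search the learned model for good settings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend training data: (stirrer speed, inlet flow rate) -> mixing score.
# In practice these rows would come from simulations or experiments.
X = rng.uniform([50, 0.1], [500, 2.0], size=(200, 2))
y = -((X[:, 0] - 300) ** 2) / 1e4 - (X[:, 1] - 1.2) ** 2 + rng.normal(0, 0.05, 200)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X, y)                      # learn the conditions-to-mixing relationship

# Simple grid search over the learned model to find promising conditions
speeds = np.linspace(50, 500, 100)
flows = np.linspace(0.1, 2.0, 100)
grid = np.array([[s, f] for s in speeds for f in flows])
best = grid[np.argmax(surrogate.predict(grid))]
print(f"Predicted best settings: speed={best[0]:.0f} rpm, flow={best[1]:.2f} L/s")
```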
Nonetheless, some barriers make adoption tricky (Figure 1). Beyond the commonly cited barriers of unclear strategy, lack of talent and functional silos, there are technical ones. A technique that learns from data needs lots and lots of it; data is the lifeblood of modern AI.
For example, training an ML system for image classification requires a large number of carefully labelled examples, and these labels have to be applied by humans. If the images are medical scans, the human labellers need relevant domain expertise to identify things like fractures or tumours.
This is very important because ML systems blindly correlate scans with conditions, without understanding the broader context as humans do. The training data also needs to be diverse if the system is to function in various environments. For instance, research at Mount Sinai found that an AI system trained to identify pneumonia on chest X-rays became less reliable when used in hospitals other than the ones it had been trained in.
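To illustrate how central labelled examples are, here is a minimal sketch of supervised image classification, assuming scikit-learn and using its small bundled digits dataset as a stand-in for expertly labelled medical scans.

```python
# Minimal sketch of supervised image classification: every training image
# comes with a human-supplied label, and the model learns only from those pairs.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

images, labels = load_digits(return_X_y=True)    # 8x8 images with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0
)

classifier = LogisticRegression(max_iter=2000)
classifier.fit(X_train, y_train)                 # learns from labelled pairs only

# High accuracy on this test set says nothing about images that differ from
# the training data, which is exactly the Mount Sinai issue described above.
print(f"Held-out accuracy: {classifier.score(X_test, y_test):.2f}")
```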
Nevertheless, this is not very different from other technologies we have adopted over time. As with all technologies, the barriers must be overcome to harvest the fruit, but this time it has to be done quickly! Otherwise, we risk missing out on current and future AI opportunities.
Figure 1. Most significant barriers organisations face in adopting AI (data based on 1,646 respondents).