Artificial Intelligence and its Future
By Prof Dr Detlef D Nauck, Chief Research Scientist for Data Science, BT Research & Innovation
At BT, we have a long tradition of AI research and we have deployed AI in several areas of operations. An AI system plans and schedules our mobile workforce, and we use AI to optimise network layouts. We use AI to enhance the capabilities of our security operations centres in identifying and tackling cyber-threats. Similarly, we have applied analytics and machine learning in marketing functions, predicting and identifying customer behaviour and improving customer experience. In research, we are testing Deep Learning and looking at new trends like explainable AI, which investigates how future AI systems can explain their outputs and so increase trust in their decision making.
Trends and opportunities
Many of the new opportunities for applying AI or machine learning coincide with the growing digitisation of platforms and infrastructure around us, from data tracking parcels in supply chains to readings from pollution sensors on our roads. Each new source of data provides opportunities to discover patterns and correlations that yield valuable insights and can transform society. We collaborate with academic partners from universities across the globe who have a strong interest in AI. We are working with them on ways of getting more value out of data and driving automation to support human operators. For example, AI-based automation of analytics can improve the quality and reproducibility of data analysis, and AI-based automation of operational processes can increase consistency, improve customer experience and free up human operators to concentrate on solving difficult customer service issues faster.
Challenges to overcome
Traditional AI has focused on techniques like optimisation, search, planning and reasoning based on rule-based systems.
Machine learning drives most of the functions in analytics and data science, and researchers should focus on following best practice to achieve reproducible results.
Machine learning has been successful in predictive analytics and data science, leading, for example, to solutions for churn prevention or next-best-action recommendations based on propensity modelling. Recent advances in AI can be attributed to the success of artificial neural networks in the form of deep networks trained through deep learning. Driven by big data and advances in computing, deep learning has achieved remarkable results in image recognition, speech recognition and language translation.
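To make the idea of propensity modelling concrete, here is a minimal sketch: a logistic regression trained by gradient descent to score each customer's probability of churning. The features and data are entirely hypothetical (complaints raised and years of tenure, invented for illustration); a production system would use far richer data and an established library rather than this hand-rolled loop.

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

def train_propensity_model(X, y, lr=0.1, epochs=500):
    """Fit a logistic regression by stochastic gradient descent.

    Returns weights (one per feature) and a bias; the model's output
    is the predicted propensity (probability) of churn."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear output
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def churn_propensity(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical training data: [monthly_complaints, years_as_customer]
X = [[3, 1], [0, 5], [4, 2], [0, 8], [5, 1], [1, 6]]
y = [1, 0, 1, 0, 1, 0]  # 1 = customer churned

w, b = train_propensity_model(X, y)
high_risk = churn_propensity(w, b, [4, 1])  # many complaints, short tenure
low_risk = churn_propensity(w, b, [0, 7])   # no complaints, long tenure
```

A "next best action" system would then rank retention offers by the predicted propensity, targeting customers like the first example before they leave.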
Deep learning relies on the availability of millions of labelled examples, which are relatively easy to come by in the data collected by search engines or social networks. Finding that amount of data in other application areas is still a challenge, as is the fact that deep networks are black boxes that cannot explain what they have learned.
Bringing AI into operation typically involves challenges around making the infrastructure AI-ready and having the right skills available. In the infrastructure space the challenge is not only compute power but also the availability of high-quality data. Having the right people with the right skills means not only training your own employees because of the skills shortage in that space, but also moving data science and machine learning up from a “craft” level to an engineering level, with best-practice tools and techniques to productionise the new technology. Once AI systems have been deployed we want them to be self-managing and self-learning, which creates a whole new set of challenges around really understanding what these systems are doing and how they can explain their actions. Finally, we also need to look at the ethics of AI systems and how their decisions impact all of us. This involves looking at bias in data and models, for example, but also again at the transparency and explainability of AI systems.
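One simple, model-agnostic way to probe what a deployed black-box system is doing is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below, with an invented toy "black box" standing in for a real model, illustrates the idea; it is not how any particular BT system works.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Estimate how much a black-box model relies on each feature.

    Shuffles one feature column at a time and returns the average
    accuracy drop per feature; a large drop means the model depends
    heavily on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the target
            shuffled = [row[:j] + [col[i]] + row[j + 1:]
                        for i, row in enumerate(X)]
            drops.append(baseline - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "black box": flags a fault whenever feature 0 exceeds a threshold,
# and ignores feature 1 entirely.
black_box = lambda row: 1 if row[0] > 5 else 0
X = [[8, 2], [1, 9], [7, 3], [2, 2], [9, 5], [0, 7]]
y = [black_box(r) for r in X]

imp = permutation_importance(black_box, X, y)
# imp[0] should be positive, imp[1] should be zero: the probe reveals
# which feature the black box actually uses.
```

Techniques like this do not open the black box, but they give operators and auditors a first, testable account of which inputs drive a model's decisions.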
Let’s look at the telecom space, for example. While AI and machine learning have already started to infiltrate various aspects of network planning, forecasting, operations and maintenance, this is still only the beginning and there is huge potential to do more. This will only accelerate with the advent of 5G, which will drive virtualisation, programmability and automation, and convergence – between fixed and mobile networks, but also between networks and IT. One of the key objectives of BT’s research is to ensure that new AI and machine learning techniques are fed into the architecture of a future networks platform in a structured, timely and effective way, so that this potential is maximised. This will be a transformational journey, and the main obstacle is that network data today is often more aptly described as “dirty data” than “big data” – i.e. the data is often unusable by advanced AI algorithms due to inconsistencies in data entry and data labelling. The starting point to address this issue is training: giving the thousands of internal and external staff responsible for data entry visibility of how the data can be used by AI algorithms – and how erroneous labels can block this.
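The kind of data-entry inconsistency described above can often be surfaced automatically. A minimal sketch, using entirely hypothetical network-inventory records: normalise common label variants (stray whitespace, inconsistent case, empty placeholders), then flag records whose labels still disagree after normalisation so a human can review them.

```python
from collections import Counter, defaultdict

def canonicalise(label):
    """Normalise common data-entry variants: stray whitespace,
    inconsistent case, and empty placeholder values."""
    cleaned = label.strip().lower()
    return None if cleaned in {"", "n/a", "unknown", "-"} else cleaned

def find_label_conflicts(records):
    """Group records by key and report keys whose labels still
    disagree after normalisation -- candidates for manual review."""
    labels_by_key = defaultdict(Counter)
    for key, label in records:
        canon = canonicalise(label)
        if canon is not None:
            labels_by_key[key][canon] += 1
    return {k: dict(c) for k, c in labels_by_key.items() if len(c) > 1}

# Hypothetical inventory entries: (equipment_id, recorded_type)
records = [
    ("node-1", "Fibre"), ("node-1", "fibre  "),  # variants, consistent
    ("node-2", "Copper"), ("node-2", "fibre"),   # genuine conflict
    ("node-3", "N/A"), ("node-3", "copper"),     # placeholder dropped
]
conflicts = find_label_conflicts(records)
# → {"node-2": {"copper": 1, "fibre": 1}}
```

Cosmetic variants ("Fibre" vs "fibre  ") are resolved silently; only genuine disagreements reach a reviewer, which is where trained data-entry staff make the difference.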
What makes a researcher a better leader?
What drives me at work is the passion and curiosity of finding solutions to business problems that are hidden in data. In my area it is essential to achieve reliable and reproducible results that can be explained to decision makers. I think that especially in fast-moving hi-tech areas like AI and machine learning it is important to identify and follow best practice, and to have a sense for rigour and quality, in order to establish trust in the technology and drive its adoption in the business.
I think as a researcher in AI or Data Science you need a thorough understanding of mathematics, statistics and algorithms. If you have the basics you can develop in any direction you are interested in.