20/12/2018 14:20:29 (GMT)
Whatever your view on whether AI will result in the loss of jobs, it is almost certain you will agree that at least some tasks in every industry sector will eventually be taken over by AI systems. This is great news for those who have long dreamt of such a future. The problem comes when you consider the impact this will have on a business beyond productivity and other positive effects. Previously, when a decision that affected an outcome had been made, the person who made it could be asked for their reasoning. If an AI system made that decision, however, it is often very difficult, and sometimes impossible, to pinpoint exactly why.
It is important that our AI systems are able to explain their actions. This matters not only to satisfy regulations like GDPR's Recital 71, which concerns the right to obtain an explanation, but also because some AI applications directly concern people's safety: in a factory with heavy automated machinery, in a warehouse with robot workers, or in self-driving cars. The ability to explain decisions will likely become necessary for a business to obtain insurance when deploying these technologies. It is also useful for AI researchers and engineers to see why their models make the decisions they do, as this makes it easier to construct and train an efficient system.
Explainability is not a new area for the AI community, but with the increasing adoption of AI systems into the mainstream it has become more of a priority over the last few years, and 2019 is set to see huge leaps forward. There has already been much work on Explainable AI; one of the latest approaches, published earlier this year by researchers at Carnegie Mellon University and entitled "Understanding Convolutional Networks with APPLE: Automatic Patch Pattern Labelling for Explanation", shows the stage the research community is currently at.
Concerning non-personal data, 2019 will see a huge increase in the amount of data that is openly available, and with it an increase in previously unimagined AI applications. For personal data, though, 2018 saw the introduction of GDPR and what seemed like a new data breach almost every couple of weeks. This trend is likely to continue into 2019, meaning even more emphasis will be placed on data privacy in the media.
However, when used responsibly, personal data can be a hugely powerful element of Machine Learning models. Although 2019 will probably see a more cautious approach to the use of this data, it is likely that standards will be introduced and followed that allow us to make the most of this treasure trove of information. These standards will likely build on what GDPR has already set out, come together after much public debate, and possibly be embedded into each country's own laws.
Anonymising data is one approach that will receive attention, as this trade-off preserves enough of the data to extract meaningful insights and build powerful Machine Learning models. The downside is that anonymisation is often seen as a trivial task, and so it is often done in a very lazy way. One famous example of this being exposed is the data released by the New York City Taxi and Limousine Commission. When it released its data on taxicab rides and fares, it believed it had done enough to protect passengers' privacy. However, it was possible to reverse engineer the data to identify celebrity passengers and find out the places they frequented, where they lived and even how much they tipped. 2019 will see a more in-depth discussion of how we can protect people's identities while still making data open and powering AI with it.
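The weakness exposed in the taxi data was that vehicle identifiers were "anonymised" with an unsalted MD5 hash, and the space of possible medallion numbers is small enough to hash exhaustively. A minimal sketch of that attack, assuming a simplified digit-letter-two-digit medallion format for illustration (the real formats differ slightly):

```python
import hashlib

def build_rainbow_table():
    """Hash every possible medallion-style ID (here a hypothetical
    digit + letter + two-digit format, ~26,000 values) so any
    published hash can be looked up instantly."""
    table = {}
    for digit in "0123456789":
        for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            for suffix in range(100):
                medallion = f"{digit}{letter}{suffix:02d}"
                table[hashlib.md5(medallion.encode()).hexdigest()] = medallion
    return table

table = build_rainbow_table()

# An "anonymised" value as it might appear in a released dataset:
anonymised = hashlib.md5(b"7K42").hexdigest()
print(table[anonymised])  # recovers the original identifier: 7K42
```

Because the whole table takes well under a second to build, hashing a small, structured identifier space offers essentially no protection; proper anonymisation needs techniques such as random surrogate keys, aggregation, or differential privacy.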
In 2019, we can expect an increase in the tools available to those who create AI solutions. These tools will make it easier not only to develop, but also to deploy, AI and Machine Learning products and services.
One area in particular that will likely see a huge overhaul is the process by which those creating new Machine Learning models through Supervised Learning obtain their training data. For example, when working with image or video data, each object of relevance needs a bounding box drawn around it and labelled. This was previously a manual process that was very tedious and time-consuming. There are now AI-assisted techniques for annotating this data, but they still require a human to edit, accept or reject the suggested annotations. We expect 2019 to see a lot of growth in research and development around automated data annotation, initially in the academic community but probably in the commercial sector as well.
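To make the accept-or-reject step concrete, here is a minimal, hypothetical sketch of how an assisted-annotation pipeline might triage a model-suggested bounding box: compare it against a previously confirmed box (for instance, the prior frame of a video) using intersection-over-union (IoU), and queue it for human review only when agreement is poor. The function names and the threshold are illustrative assumptions, not any particular tool's API.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def needs_human_review(suggested, confirmed, threshold=0.5):
    """Auto-accept the suggestion only when it closely agrees with a
    previously confirmed annotation; otherwise send it to a human."""
    return iou(suggested, confirmed) < threshold

suggested = (48, 30, 110, 95)   # model-proposed box
confirmed = (50, 30, 112, 98)   # human-confirmed box from the prior frame
print(needs_human_review(suggested, confirmed))  # False: close enough
```

As automated annotation improves, one would expect the review threshold to loosen, with humans auditing a shrinking sample of suggestions rather than every one.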