
It’s time to AI-xplain

The Information Commissioner’s Office (ICO), in collaboration with The Alan Turing Institute (The Turing), has created Project ExplAIn, which aims to produce practical guidance to assist organisations with explaining artificial intelligence (AI) decisions to the individuals affected. The ICO and The Turing conducted public research to gather information about the views held on AI. The ICO has said that it is working on the project because it believes AI presents ‘some of the biggest risks related to the use of personal data’, and it wants to provide ‘effective guidance’ on how to address data protection risks arising from new technology.

The current law

The GDPR makes no provisions specific to technology or AI, but several of its provisions are relevant to the use of AI:

• The first data protection principle (Article 5(1)(a)) requires lawful, fair and transparent processing of personal data;
• Articles 13-15 give individuals the right to be informed of the existence of solely automated decision-making and its consequences;
• Article 22 gives individuals the right not to be subject to a solely automated decision producing legal or similarly significant effects, and obliges organisations to adopt measures to safeguard individuals when such decisions are used; and
• Article 35 requires organisations to carry out Data Protection Impact Assessments where their processing of personal data, particularly using new technologies, is likely to result in a high risk to individuals.

Project ExplAIn plans to advise and assist organisations with meeting these requirements when using AI, and also intends to promote ‘best practice’.

The Report

The interim report published by the ICO sets out the findings of its research into the current understanding of AI. This research will inform the guidance.

Education

One of the key findings was a need to improve education and awareness surrounding AI, so that individuals are better placed to understand the implications the technology has for their data. The hope is that improving education will improve public confidence in AI decisions. This is particularly important in the wake of recent discussions on the use of AI in decision-making and in Online Courts. The research suggests that a lack of understanding leads to a lack of faith in the decision. The report also posed the alternative view that over-normalising the use of AI decisions could leave individuals less likely to question their use or to expect explanations, although it wants to avoid campaigns that emphasise the risks and negative impacts of AI. It was decided that it is important to be aware of this point and to include diverse voices in the work. The report also identified the need to translate complex decision-making rationale into language appropriate for a lay audience.

Context

Another key point from the report was that the content of an AI explanation will depend on the context, including: timing and urgency, the impact of the decision, the ability to change influencing factors, the scope for bias and interpretation, the type of data, and the recipient. The greater an individual’s ability to challenge or respond to a decision (for example, in criminal justice decisions), the greater the need for an explanation; where individuals are more focused on receiving a quick decision, the explanation may be less relevant. The level of expertise of the individual, alongside the technicality of the decision, will also be relevant. The ‘appropriate explanation’ is therefore likely to differ from case to case, and this will be factored into the guidance.

Cost

The report also concludes that cost will be a major challenge in providing explanations and will affect how they are pitched. Industry is also concerned about revealing commercially sensitive information, both in relation to third-party details and in relation to competitors.

Next steps

The report will be out for public consultation over the summer, and the guidance is due to be published this autumn. The ICO’s AI auditing framework is due to be finalised in 2020, and it is likely that these findings will influence the framework. The guidance may serve to legitimise the use of AI and improve public confidence in its use. However, if the best practice it sets out is too onerous, it may hinder the development of AI in smaller businesses.
