It’s time to AI-xplain
The Information Commissioner’s Office (ICO), in collaboration with The Alan Turing Institute (The Turing), has created Project ExplAIn, which aims to produce practical guidance to assist organisations with explaining artificial intelligence (AI) decisions to the individuals affected. The ICO and The Turing conducted public research to gather views on AI. The ICO has said that it is working on the project because it believes AI presents ‘some of the biggest risks related to the use of personal data’, and it wants to provide ‘effective guidance’ on how to address data protection risks arising from new technology.

The current law
The GDPR makes no specific provision for technology or AI, but several of its provisions are relevant to the use of AI:
- Article 5(1)(a) (the first principle) requires fair, lawful and transparent processing of data.
- Articles 13-15 give individuals the right to be informed of the existence of solely automated decision-making and its consequences.
- Article 22 gives individuals the right not to be subject to a solely automated decision producing legal or similarly significant effects, and obliges organisations to adopt measures to safeguard individuals when solely automated decisions are used.
- Article 35 requires organisations to carry out Data Protection Impact Assessments when their processing of personal data, particularly using new technologies, is likely to pose high risks to individuals.

Project ExplAIn plans to advise and assist organisations with meeting these requirements when using AI, and also intends to promote ‘best practice’.

The report
The interim report published by the ICO sets out its findings from research into the current understanding of AI. This research will inform the guidance.

Education
One of the key findings was a need to improve education and awareness surrounding AI, so that individuals are better informed about the implications the technology has for their data. The hope is that improving education will improve public confidence in AI decisions. This is particularly important in the wake of recent discussions on the use of AI in decision-making and in Online Courts. The research suggests that a lack of understanding leads to a lack of faith in the decision. The report also posed the alternative view that over-normalising the use of AI decisions could leave individuals less likely to question their use or expect explanations, although the ICO wants to avoid campaigns that emphasise the risks and negative impacts of AI. It was decided that it is important to be aware of this point and to include diverse voices in the work. The report identified the need to translate complex decision-making rationale into language appropriate for a lay audience.

Context
Another key point from the report is that the content of AI explanations will depend on the context, including: timing and urgency, the impact of the decision, the ability to change influencing factors, the scope for bias and interpretation, the type of data, and the recipient. The individual’s ability to challenge or respond to the decision increases the need for an explanation, for example in criminal justice decisions, whereas in situations where individuals are more focused on a quick decision, the explanation may be less relevant. The level of expertise of the individual, alongside the technicality of the decision, will also be relevant. The ‘appropriate explanation’ is therefore likely to differ from case to case, and this will be factored into the guidance.

Cost
The report also concludes that cost will be a major challenge in providing explanations and will affect how they are pitched. Industry is also concerned about revealing commercially sensitive information, both in relation to third-party details and to competitors.

Next steps
The report will be out for public consultation over the summer, and the guidance is due to be published this autumn. The ICO’s AI auditing framework is due to be finalised in 2020, and these findings are likely to influence it. The guidance may serve to legitimise the use of AI and improve public confidence in its use. However, if the best practice it sets out is too onerous, it may hinder the development of AI in smaller businesses.
Posted in Shorter Reads