Q&A series with Tracey Groves, founder and director of Intelligent Ethics, on ethical leadership:
I don’t know how to fly an aeroplane, but I still travel by air when I need to, with the plane flown primarily by a computer programme and with minimal human intervention. I trust the systems, processes and procedures in place, and the people who designed them. I know that there are safeguards, controls, checks and balances to assess and monitor safety and security, with governance and regulation to enforce and maintain standards.
So, the question is: don’t we need a similar framework for AI? I believe we will have to learn how to trust AI enough. I don’t need to know how the system works in detail, but I do need enough confidence, and a belief, that the decisions being made by the system are intelligible, fair and subject to appropriate monitoring and checks.
I recognise that there is concern about the ‘black box’ nature of the decisions AI is increasingly making, and that there is an inherent level of mistrust in a system that we don’t understand. Therefore, my question is: how can we learn to trust it enough? What do we need to do to ensure that there is sufficient intelligibility and transparency? When critical decisions are made, for example when a mortgage offer is accepted or declined, or a medical diagnosis is delivered using advanced technologies, how can I be confident of the fairness, accuracy and integrity of the outcome, even though I do not get to see the inner workings? These ethical principles, supported by a robust governance framework, must be hard-wired into the design of AI systems from the start.
