Primum non nocere. First, do no harm. So goes the modern rendering of the Hippocratic Oath, taken by doctors who know that, more than likely, they will at some point be involved in a patient’s death. They may inadvertently contribute to a death through a mistaken diagnosis, exhaustion, or a host of other reasons, which raises a natural concern about how many of these mistakes could be avoided. AI shows promise as a tool for supporting and strengthening doctors’ decision-making, but just as with doctors, if we give AI some power in decision-making (along with the power of analysis), it will more than likely be implicated in a patient’s death. When it is, is the responsibility the doctor’s? The hospital’s? The engineer’s? The firm’s?
Answers to such questions depend on how governance is arranged - whether or not a doctor stands at the end of each AI-provided analysis, the relative weight given to AI-driven insights or predictions, or, in short, the buffers between the AI and the final decision. It is paramount to remember that all attempts to automate and reproduce intelligence are not deterministic but probabilistic, and are therefore subject to the flaws and experiential biases that shape every other kind of intelligence.
The same issues arise with self-driving cars, autonomous drones, and the host of intentional and incidental ways AI will be involved in life-or-death scenarios, as well as in the more day-to-day risks people face.
We can begin to see the larger picture, but governance is in the details. The risks of inaccuracy, even when the error rate falls below 5%, will differ in a hospital, in image recognition for food safety, and in text analysis for legal documents. As such, policy makers will need more nuanced accounts of decision-making processes and of what those specific risks entail. And while the burden of clarity falls heavily on developers, so long as AI is part of a larger chain of decisions, the ethical question cannot be resolved by any one party alone.
We propose that while emerging ethics councils, task forces, and impact statements provide means of oversight, the challenge facing ethical AI is designing a clear and concise roadmap for those who deploy it, those who use it, and those affected by it alike. The purpose of the roadmap is not only to make clear when and how AI is used, but to improve literacy about AI’s personal, psychological, and social consequences.
In designing this roadmap, governments should consider the following.
Holding organisations accountable in practice
When a decision is made using AI, we may not know whether the underlying data was faulty; regardless, there will come a time when someone appeals a decision made by, or influenced by, AI-driven insights. People have the right to be informed that a significant decision concerning their lives was made with the help of an AI. To enforce such a policy, governments will need a better record of which companies and institutions use AI to make significant decisions.
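As a purely illustrative sketch of what such a record might capture, the snippet below defines a hypothetical registry entry for an AI-assisted decision. The field names and structure are our own assumptions, not an existing standard or any regulator’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """Hypothetical registry entry for a significant AI-assisted decision."""
    organisation: str               # who deployed the system
    system_name: str                # which model or service produced the insight
    decision_context: str           # e.g. "loan approval", "triage priority"
    human_reviewer: Optional[str]   # who, if anyone, signed off on the output
    subject_notified: bool          # was the affected person told AI was involved?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example entry that could later support an appeal
record = AIDecisionRecord(
    organisation="Example Hospital Trust",
    system_name="triage-model-v2",
    decision_context="emergency triage priority",
    human_reviewer="Dr A. Example",
    subject_notified=True,
)
```

Even a registry as thin as this would give an appeals process something concrete to start from: who used which system, for what, and with what human oversight.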
When assessing a specific decision-making process of concern, the first step should be to determine whether the data set represents what the organisation wanted the AI to understand and make decisions about.
However, data sets, particularly easily available data sets, cover a limited range of situations, and inevitably most AI systems will be confronted with situations they have not encountered before – the ethical issue lies in the framework by which decisions occur, and good data cannot secure that kind of ethical behaviour by itself.
New proposals are already emerging that show how institutions can train AI on synthetic data sets which demonstrate the kinds of correlations we want systems to make or avoid in order to produce better decisions, as with GoodAI – but the issue lies not simply in the algorithm, but in the choice of data sets, the design of the algorithms, and the intended impact on decision-making; in other words, in its ecology of use. Even at 99% accuracy, we will need a system of governance to structure appeals – in fact, under such conditions, we will need it even more.
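To make the idea of synthetic training data concrete, here is a minimal, purely illustrative Python sketch in which the outcome deliberately depends only on the features we want a model to learn from, and is independent of a sensitive attribute. It is a toy of our own construction, not a description of GoodAI’s actual approach, and all names are hypothetical.

```python
import random

def make_synthetic_applicant(rng: random.Random) -> dict:
    """Generate one synthetic record in which the outcome depends only on the
    features we want the model to learn from (income and debt ratio), and is
    independent of a sensitive attribute (postcode)."""
    income = rng.gauss(35_000, 10_000)
    debt_ratio = rng.uniform(0.0, 1.0)
    postcode = rng.choice(["A", "B", "C"])          # deliberately uncorrelated
    approved = (income > 30_000) and (debt_ratio < 0.5)
    return {"income": income, "debt_ratio": debt_ratio,
            "postcode": postcode, "approved": approved}

rng = random.Random(42)
synthetic_training_set = [make_synthetic_applicant(rng) for _ in range(10_000)]
```

The point of the sketch is that the choices live outside the algorithm: someone has to decide which correlations the data should and should not contain, and that choice is itself a governance question.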
Clarifying and enforcing the right to an explanation
The General Data Protection Regulation (GDPR) gives us a right to an explanation, a means of being informed, but effective governance has to go a step further. To make progress, there may need to be a common right to appeal an unintelligible decision, which could entail questioning whether the company itself understands its own decision-making process.
Different kinds of practices will require different means of making AI decisions understandable - one method already proposed is counterfactual assessment, a running account of different scenarios and how they would play out, which can be followed after the fact. However, while this enables oversight, it does not amount to disclosing the reasoning by which a decision was made.

It is essential to note that the specifics of decision-making by AI, whether a black box or otherwise, do not always need to be exposed. In fact, in the majority of cases there would be no need to open the box, and a decision audit would suffice. The point remains, however, that the inability to expose the specifics limits the viability of effective governance; it is a completeness principle.
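As a rough illustration of the counterfactual idea, the sketch below (our own toy example, not a method proposed by any of the sources above) searches for the smallest change to one input that would flip a black-box decision – the kind of account a decision audit could record without opening the box itself.

```python
def counterfactual(decide, applicant: dict, feature: str, step: float, max_steps: int = 100):
    """Search for the smallest change to one feature that flips the decision.
    `decide` is any black-box function mapping an applicant dict to True/False."""
    original = decide(applicant)
    candidate = dict(applicant)
    for i in range(1, max_steps + 1):
        candidate[feature] = applicant[feature] + i * step
        if decide(candidate) != original:
            return {feature: candidate[feature]}   # e.g. "approved if income were 30,500"
    return None  # no counterfactual found within the search range

# A toy decision rule standing in for the black box
decide = lambda a: a["income"] > 30_000 and a["debt_ratio"] < 0.5
print(counterfactual(decide, {"income": 28_000, "debt_ratio": 0.4}, "income", step=500))
```

An answer of this form ("the decision would have gone the other way had X been Y") gives the affected person something to contest, even when the reasoning inside the model stays hidden.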
While the public sector may be receptive to the idea of opening the black box to understand why one decision was made instead of another, corporates may be less so. The most common defence corporates are likely to mount rests on intellectual property, and we may expect them to argue that specialised AI is a trade secret. When AI-driven systems are protected in this way, we may wonder whether incentives are appropriately aligned for firms to maintain an adequate understanding of their own decision-making systems.
Negotiating the trade-off between expediency and legitimacy
The decisions corporations and governments will need to come to terms with follow from the previous issues: governing data is essential but insufficient; so is keeping people informed, and so is providing a right to an explanation.
GDPR in the EU is a step in the right direction; however, the future of effective appeals and governance will need to be worked out case by case.
Moreover, the unfortunate truth is that no privacy ruling will by itself be sufficient to establish or maintain effective governance, and the pace at which such regulation would have to be updated to keep up with emerging innovation brings governments to the next dilemma – choosing between expediency and legitimacy.
Synthetic data provides an expedient way forward, but it also demands new arguments and agreements about how to create that data and what sort of oversight is required.
Task forces provide a legitimate way forward, but they also require bridging the technical literacy gap between the organisations making decisions and the users who must understand the consequences of those decisions, before the technology, or its use, changes again.
We are in a new age of informal, experimental governance. Suffice it to say, politicians, coders, and philosophers have their work cut out for them. Technology is a problem-solving tool as well as an enabler of new problems – AI is no different, for now.
This article is also featured on The RSA and is written by Danny Goh, Josh Entsminger, Mark Esposito, and Terence Tse.
Danny Goh
Serial entrepreneur and early-stage investor; co-founder of Nexus FrontierTech, where he has served as CEO, with investments in more than 20 early-stage start-ups. He currently serves as an entrepreneurship expert with the Entrepreneurship Centre at the Said Business School, University of Oxford.
Josh Entsminger
Serves as a senior fellow at Ecole des Ponts Business School’s Center for Policy and Competitiveness, a research associate at IE Business School’s social innovation initiative, and a research contributor to the World Economic Forum’s Future of Production initiative.
Mark Esposito
Professor of business and economics at Hult International Business School and at Thunderbird Global School of Management at Arizona State University; a faculty member at Harvard University since 2011; a socio-economic strategist researching the Fourth Industrial Revolution and global shifts.
Terence Tse
Professor at ESCP Business School and a co-founder and executive director of Nexus FrontierTech, an AI company. He has worked with more than thirty corporate clients and intergovernmental organisations in advisory and training capacities.