In 2008, the UK published its first National Risk Register, giving the public a sense of the country's national security priorities for civil emergencies. The register measured risks along two dimensions, relative impact and relative likelihood, with the possibility of a pandemic influenza topping the list. A year later the world saw the swine flu pandemic, which caused hundreds of thousands of deaths; the UK recorded hundreds of thousands of cases but relatively few deaths. From the perspective of governments, pandemics are not black swans. The issue is not whether a country foresees a pandemic and prepares accordingly; it is whether the systems in place can function under the stress a pandemic creates. To that end, COVID-19 has served as a stress test not simply of health-care systems around the world, but of globalization itself.
Undoubtedly, the modern era of mass connectivity driven by globalization has increased the danger from pandemics, which spread more easily given our increased mobility. But this does not mean we should find fault with globalization alone; rather, we need to use the inherent connectivity of globally distributed value chains to improve the way countries predict, respond to, and coordinate around pandemics and catastrophes. That connectivity must become a vector for capacity building, fostering better solutions to the kinds of problems we are likely to face with increasing frequency.
In the context of the current pandemic, capacity refers not only to the number of beds and ventilators, or the number of staff and masks, but also to the organizational models behind how hospitals and suppliers plan for continuous supply while preserving safety. Capacity is a systemic issue. Despite heroic efforts on the front lines, the COVID-19 pandemic has exposed ineffective planning, and there are rising concerns that severely under-invested infrastructure could lead to even graver endemic social problems. These problems may compound old and new inequalities, leading to higher rates of infection and death from suboptimal and under-coordinated health-care facilities.
To augment the capacity needed by national health-care and safety systems, while attempting to address the socioeconomic consequences, governments and organizations are turning to data-intensive solutions to enable faster, more scalable responses. Examples range from track-and-trace initiatives based on geolocation data in South Korea, Vietnam, China, Israel, and Singapore, to Alibaba's widely touted COVID-19 AI detection solution, the US's machine-readable COVID-19 data set approach, the NHS's use of big-tech vendors to dashboard the national health response, and dozens of other initiatives.
The wider public now needs to ask whether this data-intensive approach to government inherently conflicts with privacy. But does it?
South Korea’s use of granular location-based information illustrates the fine line here, and it will undoubtedly become not only a subject of future investigation but also a reference point for a new wave of high-precision public-health approaches. Such initiatives raise worries because these expansions of data collection may become the new normal. Yet as the privacy problem grows with data-intensive pursuits, translating into increased surveillance and expanded scopes of data collection by governments around the world, there is an equally pressing need to understand how the data will be assessed and measured, and how it will be used to set public-sector capacity for the crises ahead. Addressing these two questions together can help administrations springboard to a new generation of effective digital-government tools, capable of learning and responding adequately to uncertainty, beyond conventional models of prediction.
So, is the answer to this need for novel capacity building to be found in AI interfaces?
The concern is not simply whether AI works, but whether it works when we need it most; whether relying on such solutions might introduce unintended biases and privacy violations; and whether the processed data can handle situations of such uncertainty. In crises, errors tend to multiply: from inconsistent administrative procedures, to rapid violations of privacy, to disingenuous behaviours that unwittingly lead to further violations, all of which add to the risk of unintentionally furthering contagion.
When dealing with complex and nonlinear phenomena, the reality is that investments in improving the precision of information have a functional limit to their value. That limit is set by the reliability of institutional capacity to respond not only to the known variables, but especially to the variations that are contingent on the nature of crises and large-scale events. Indeed, the name of the game is not the race for the best information; it is the race for the best system, one that can function independently of that information because it has the capacity to adjust and learn.
The governments that make the best use of AI-enabled services and organizations will not be those with the best data analytics per se, but those that use such analytics to improve responsiveness to outlying factors and unexpected externalities. This demands a different paradigm for the use and deployment of AI, because its greatest promise lies beyond the optimization of existing assets.
The world needs a new conversation about what expansive privacy and consent frameworks that enable broad capacity enhancement look like, how to build them quickly or set them up ahead of the next inevitable crisis, and how to erect the public-health emergency response we so sorely lacked in the current pandemic. Ultimately, this is not just about how we respond now; it is about how the infrastructure we institutionalize through the systematic use of AI services can build antifragility from tomorrow onward.
This article is also featured on the World Economic Forum and was written by Joshua Entsminger, Mark Esposito, and Terence Tse.
Mark Esposito
Professor of business and economics at Hult International Business School and at the Thunderbird School of Global Management at Arizona State University; a faculty member at Harvard University since 2011; a socio-economic strategist researching the Fourth Industrial Revolution and global shifts.
Terence Tse
Professor at ESCP Business School and a co-founder and executive director of Nexus FrontierTech, an AI company. He has worked with more than thirty corporate clients and intergovernmental organisations in advisory and training capacities.