Op-ed | The double-edged sword of pandemic-driven technological acceleration

The COVID-19 pandemic has, without a doubt, changed the world – and one of its most notable effects has been the acceleration of technology initiatives as the public and private sectors seek to digitize more operations and communications. Many artificial intelligence (AI) products have emerged from these advances, and state and local governments are putting them to use with input from private tech companies.

Government is adopting AI at an accelerating pace. New York City and state agencies, for example, have seen a broad expansion of AI applications, such as chatbots. And with last year's widespread introduction of generative AI tools that can create new content such as text and images, further changes appear to be on the way.

As technological advancement continues, it is imperative that the private sector and government institutions meet the moment by adopting comprehensive risk mitigation strategies and effective AI governance frameworks that prioritize transparency, accuracy and fairness.

Unfortunately, the ability to understand the risks involved with some AI products, and the strategies to reduce or eliminate those risks, has not kept up with the pace at which AI is being put to use. Numerous studies have shown a significant rise in AI adoption and investment, with a majority of respondents foreseeing a further boost in AI investment in the coming years. Alarmingly, organizations have made little progress in addressing well-known AI-related risks, such as bias, lack of transparency and safety concerns.

This concerning trend is also evident in government institutions. My recent report on AI governance in New York City found that the city lacks an effective AI governance framework. City agencies have been left to develop their own divergent approaches to AI governance, resulting in ad hoc and incomplete measures that fail to ensure transparency, accuracy and fairness in AI systems.

This is concerning because while AI promises vast opportunities, it also carries inherent risks. Several incidents, even before the pandemic, illustrated the unintentional harm that government AI systems can cause when they are designed or implemented irresponsibly. For instance, a faulty automated fraud detection system in Michigan erroneously accused thousands of unemployment insurance recipients of fraud, causing financial ruin for many. Similar issues have plagued other systems related to Medicaid eligibility determinations, facial recognition, criminal justice, health care, teacher evaluations and job recruitment applications.

New York City has been a forerunner in examining the use of AI. It was among the first cities to establish a task force dedicated to examining the responsible use of automated decision-making systems, including AI systems. However, the city's efforts are no longer keeping pace with this rapidly advancing technology. Despite the task force's recommendations and the pandemic-era expansion of AI applications, the city still has no effective AI governance framework.

As we continue to embrace the technological leaps brought forth by the pandemic, we must ensure that we do so responsibly. Audits, such as the one my office conducted in New York City, can help drive change by raising awareness of where risks lie. Understanding these risks and identifying blind spots is a first step in the right direction, but the city must also take further action, such as implementing a robust governance framework to ensure that its use of AI is transparent, accurate and unbiased, and minimizes the potential for disparate impacts. I encourage my colleagues in government to join me in ensuring that AI systems work to further the greater good for all New Yorkers.