This may be the year when artificial intelligence (AI) transforms daily life. So said Brad Smith, president and vice chairman of Microsoft, at a Vatican-organised event on AI last week. But Smith’s statement was less a prediction than a call to action: the event, attended by industry leaders and representatives of the three Abrahamic religions, sought to promote an ethical, human-centred approach to the development of AI.
There is no doubt that AI poses a daunting set of operational, ethical and regulatory challenges. And addressing them will be far from straightforward. Although AI development dates back to the 1950s, the technology’s contours and likely impact remain hazy.
Of course, recent breakthroughs, from the almost chillingly human-like text produced by OpenAI’s ChatGPT to applications that may shave years off the drug-discovery process, shed light on some dimensions of AI’s immense potential. But it remains impossible to predict all the ways AI will reshape human lives and civilisation.
This uncertainty is nothing new. Even when we recognise a technology’s transformative potential, the shape of the transformation tends to surprise us. Social media, for example, was initially touted as an innovation that would strengthen democracy, but has done far more to destabilise it by facilitating the spread of disinformation. It is safe to assume that AI will be exploited in similar ways.
We do not even fully understand how AI works. Consider the so-called black box problem: with most AI-based tools, we know what goes in and what comes out, but not what happens in between. If AI is making (at times irrevocable) decisions, this opacity poses a serious risk, which is compounded by issues like the transmission of implicit bias through machine learning.
The misuse of personal data and the destruction of jobs are two additional risks. And, according to former US secretary of state Henry A. Kissinger, AI technology may undermine human creativity and vision as information comes to “overwhelm” wisdom. Some worry that AI will lead to human extinction.
With stakes this high, the future of the technology cannot be left to AI researchers, let alone tech CEOs. While heavy-handed regulation is not the answer, the current regulatory vacuum must be filled. That process demands the kind of broad-based global engagement that is increasingly shaping efforts to combat climate change.
In fact, climate change offers a far more useful analogy for AI than the oft-made nuclear comparison. The existence of nuclear weapons may affect people indirectly, through geopolitical developments, but the technology is not a fixture of our personal and professional lives; nor is it shared globally. Climate change, like AI, affects everyone, and unilateral action to limit it could put a country at a disadvantage.
Already, the race to dominate AI is a key feature of the US-China rivalry. If either country imposes limits on its AI industry, it risks allowing the other to pull ahead. That is why, as with emissions reduction, a cooperative approach is vital. Governments and other relevant public actors must work together to design and install guardrails for private-sector innovation.
Of course, that is easier said than done. Limited consensus on how to approach AI has resulted in a hodgepodge of regulations. And efforts to devise a common approach within international forums have been stymied by power struggles among major players and the lack of enforcement authority.
But there is some promising news. The European Union is working to forge an ambitious principles-based instrument establishing harmonised AI rules. The AI Act, expected to be finalised this year, aims to facilitate the “development and uptake” of AI in the EU, while ensuring that the technology “works for people and is a force for good in society”. From adapting civil-liability rules to revising the EU’s product-safety framework, the act takes the kind of comprehensive approach to AI regulation that has so far been missing.
It should not be surprising that the EU has emerged as a frontrunner in AI regulation. The bloc has a history of leading the way in developing regulatory frameworks in critical areas. The EU’s legislation on data protection arguably inspired similar action elsewhere, from the Consumer Privacy Act in California to the Personal Information Protection Law in China.
But progress on global AI regulation will be impossible without the United States. And, despite its shared commitment with the EU to developing and implementing “trustworthy AI”, the US is committed to AI supremacy above all. To this end, it is seeking not only to bolster its own leading-edge industries, including by keeping red tape to a minimum, but also to impede progress in China.
As the National Security Commission on Artificial Intelligence noted in a 2021 report, the US should be targeting “choke points that impose significant trickle-down strategic costs on competitors but minimal economic costs on US industry”. The export controls that the US imposed last October, which target China’s advanced-computing and semiconductor sectors, exemplify this approach. For its part, China is unlikely to be deterred from its quest to achieve technological self-sufficiency and, ultimately, supremacy.
Beyond opening the way for AI-related risks to materialise, this technological rivalry has obvious geopolitical implications. For example, Taiwan’s outsize role in the global semiconductor industry gives it leverage, but may also put yet another target on its back.
It took more than three decades for awareness of climate change to crystallise into real action, and we are still not doing enough. Given the pace of technological innovation, we cannot afford to follow a similar path on AI. Unless we act now to ensure that the technology’s development is guided by human-centric principles, we will almost certainly regret it. And, as with climate change, we will most likely lament our inaction much sooner than we think.
Ana Palacio, a former minister of foreign affairs of Spain and former senior vice president and general counsel of the World Bank Group, is a visiting lecturer at Georgetown University.