BY JOSH ENTSMINGER, MARK ESPOSITO, and TERENCE TSE
The “metaverse” isn’t here yet, and when it arrives it will not be a single domain controlled by any one company. Facebook wanted to create that impression when it changed its name to Meta, but its rebranding coincided with major investments by Microsoft and Roblox. All are angling to shape how virtual reality and digital identities will be used to organize more of our daily lives – from work and health care to shopping, gaming, and other forms of entertainment.
The metaverse is not a new concept. The term was coined by sci-fi novelist Neal Stephenson in his 1992 book Snow Crash, which depicts a hyper-capitalist dystopia in which humanity has collectively opted into life in virtual environments. So far, the experience has been no less dystopian here in the real world. Most experiments with immersive digital environments have been marred immediately by bullying, harassment, digital sexual assault, and all the other abuses that we have come to associate with platforms that “move fast and break things.”
None of this should come as a surprise. The ethics of new technologies have always lagged behind the innovations themselves. That is why independent parties should provide governance models sooner rather than later – before self-interested corporations do it with their own profit margins in mind.
The evolution of ethics in artificial intelligence is instructive here. Following a major breakthrough in AI image recognition in 2012, corporate and government interest in the field exploded, attracting important contributions from ethicists and activists who published (and republished) research into the dangers of training AIs on biased data sets. A new vocabulary emerged for building the values we want to uphold into the design of new AI applications.
Owing to this work, we now know that AI is effectively “automating inequality,” as Virginia Eubanks of the University at Albany, SUNY, puts it, as well as perpetuating racial biases in law enforcement. To call attention to this problem, computer scientist Joy Buolamwini of the MIT Media Lab launched the Algorithmic Justice League in 2016.
This first-wave response aimed a public spotlight at the ethical issues associated with AI. But it was soon eclipsed by a renewed push within the industry for self-regulation. AI developers introduced technical toolkits for conducting internal and third-party evaluations, hoping that this would alleviate public fears. It didn’t, because most firms pursuing AI development have business models that are in open conflict with the ethical standards that the public wants them to uphold.
To take the most common example, Twitter and Facebook will not deploy AI effectively against the full range of abuses on their platforms because doing so would undermine “engagement” (outrage) and thus profits. Similarly, these and other tech firms have leveraged value extraction and economies of scale to achieve near-monopolies in their respective markets. They will not now willingly give up the power they have gained.
More recently, corporate consultants and various programs have professionalized AI ethics to address the reputational and practical risks of ethical failures. Those working on AI within Big Tech companies are now pressed to consider questions such as whether a function should default to opt-in or opt-out, whether it is appropriate to delegate a task to AI, and whether the data being used to train AI applications can be trusted. To that end, many tech corporations established supposedly independent ethics boards. However, the reliability of this form of governance has since been called into question following high-profile ousters of internal researchers who raised concerns about the ethical and social implications of certain AI models.
Establishing a sound ethical foundation for the metaverse requires that we get ahead of industry self-regulation before it becomes the norm. We also must be mindful of how the metaverse is already diverging from AI. While AI has been largely centered on internal corporate operations, the metaverse is decidedly consumer-centric, which means that it will come with all kinds of behavioral risks that most people will not have considered.
Just as telecom regulation (specifically Section 230 of the US Communications Decency Act of 1996) provided the governance model for social media, regulation of social media will become the default governance model for the metaverse. That should worry us all. Though we can easily foresee many of the abuses that will occur in immersive digital environments, our experience with social media suggests that we might underestimate the sheer scale that they will reach and the knock-on effects they will have.
It would be better to overestimate the risks than to repeat the mistakes of the past 15 years. A wholly digital environment creates the potential for even more exhaustive data collection, including of personal biometric data. And since no one really knows exactly how people will respond to these environments, there is a strong case for using regulatory sandboxes before allowing a wider rollout.
Anticipating the metaverse’s ethical challenges is still possible, but the clock is ticking. Without effective independent oversight, this new digital domain will almost certainly go rogue, recreating all the abuses and injustices of both AI and social media – and adding more that we have not even foreseen. A Metaverse Justice League may be our best hope.
*project-syndicate