On January 29, 2018, the prominent Berlin-based Azerbaijani news site Meydan TV had its Facebook page hacked for the first time. The attackers removed all admin accounts, deleted all content, and stripped the page of nearly 100,000 followers.
The next hack took place on May 10, 2019. This time, all of the content on Meydan TV’s Russian-language Facebook page was removed, along with two weeks’ worth of content on the site’s Azerbaijani Facebook page. The third hack, which occurred on June 18, 2020, resulted in Meydan TV losing all of its Azerbaijani-language Facebook content going back to 2018.
Following these attacks, Meydan TV tried in vain to restore the removed content, but its repeated attempts to communicate with Facebook were met only with automated responses. Eventually, thanks to the intervention of Access Now’s Digital Security Helpline, executives connected with a Facebook representative, who could not provide clear answers about the hacks or any details regarding the perpetrators’ identity. “These attacks prevented us from doing our job,” Meydan TV head Matt Kasper told me.
Meydan TV’s travails illustrate the vital role digital platforms play in the news ecosystems of authoritarian countries, and how carelessly those platforms treat the responsibility that comes with it. Journalists in Azerbaijan already face numerous offline and online threats, including intimidation and violence, unlawful detentions and arrests, frozen bank accounts, travel bans, legislative bottlenecks, government surveillance, and harassment. Tech platforms’ lack of transparency and their ignorance of national narratives and cultural nuances aggravate the risks facing small newsrooms working under repressive regimes.
Over the past ten years, an unprecedented government crackdown on civil society has caused news producers and consumers in Azerbaijan to rely on digital platforms, particularly Facebook, for news, information sharing, and critical views. The government has blocked access to at least ten news websites since 2017, among them several leading outlets, effectively making social media the primary source of independent reporting.
At the same time, the Azerbaijani government has intensified online repression. Exploiting its monopoly over the country’s information-technology infrastructure, it has disrupted internet access, placed temporary bans on social media services like TikTok, launched DDoS attacks, and used various digital-surveillance tools, including the Israeli spyware Pegasus, to target and censor activists and journalists. The democracy watchdog Freedom House now considers the internet in Azerbaijan “not free”.
In February, Azerbaijan’s government enacted a restrictive media law that makes blocking news sites much easier, thus forcing more outlets like Meydan TV, one of the first websites to be banned in 2017, to rely on social media platforms to reach audiences. But while these platforms have become de facto extensions of independent newsrooms, the considerations that drive their decision-making remain a mystery. Given that journalists are already being silenced by the government, “We do not want to be silenced by the platforms, too,” says Kasper.
Often, tech platforms’ content-moderation decisions seem opaque and arbitrary. When Meydan TV asked Facebook to remove a fake page that used its logo to target and harass current and former staffers in 2020, the platform refused to intervene because the hoax did not violate its community standards. Meanwhile, the fake page shared the names and pictures of current and former Meydan employees, falsely claiming that the outlet’s “real goal” was to tarnish Azerbaijan’s global reputation on behalf of Armenia. Once again, it took an intervention from a third party to convince Facebook to respond. But while it removed the fake page, Facebook refused to provide details on the hoaxers’ identity, maintaining in an email only that it had taken unspecified “appropriate action”.
This behaviour stands in stark contrast to Facebook founder Mark Zuckerberg’s pledges to make his company, now known as Meta, more transparent and more mindful of how bad actors could abuse its platforms. Following a 2017 manifesto in which Zuckerberg highlighted Facebook’s “positive impact” on the world, company executives began to hold monthly meetings with the platform’s “most engaged” user groups to support local communities. But Meta has not shown the same commitment toward countries where authoritarian regimes are restricting civil liberties.
If Facebook is serious about being a positive force, there is no shortage of guidance it can use. Numerous international organisations have suggested steps to increase tech platforms’ accountability and transparency. In 2019, an Oxford-Stanford report proposed that Facebook hire more contextually competent content reviewers, clarify the platform’s decision-making criteria, and establish an external appeals body.
Will Facebook implement these changes? The company’s response to a recent report that examined its content moderation during the 2021 conflict between Israel, Hamas and Islamic Jihad in Gaza provides an instructive example. The report, which Meta commissioned from consulting firm BSR, found that Facebook harmed Palestinians’ human rights and freedom of expression, owing to policy errors stemming from a “lack of oversight” and insufficient understanding of local Arabic dialect and broader political dynamics. BSR recommended several steps to improve the platform’s moderation practices, such as linguistically compatible algorithms, moderators familiar with local dialects and cultural nuances, and increased oversight of outsourced moderators.
But rather than announce it would reform its policies, Meta responded to the report by asserting that its response “should not be construed as an admission, agreement with, or acceptance” of BSR’s findings or conclusions. Similarly, while the company referred to steps it has taken or plans to take, it also clarified that its response “is not intended to imply that Meta would, or will, take steps regarding” other Meta-owned platforms such as WhatsApp.
That does not bode well for organisations like Meydan TV. By engaging with local news producers and soliciting their feedback on the company’s enforcement policies, Facebook could help protect independent journalism and promote internet freedom. Sadly, it looks like the company has other goals in mind.
Arzu Geybulla is an Azerbaijani columnist and writer focusing on digital authoritarianism and its implications on human rights and press freedom in Azerbaijan. Copyright: Project Syndicate, 2022.